WorldWideScience

Sample records for cddis computer facility

  1. Improvements in Space Geodesy Data Discovery at the CDDIS

    Science.gov (United States)

    Noll, C.; Pollack, N.; Michael, P.

    2011-01-01

    The Crustal Dynamics Data Information System (CDDIS) supports data archiving and distribution activities for the space geodesy and geodynamics community. The main objectives of the system are to store space geodesy and geodynamics related data products in a central data bank, to maintain information about the archival of these data, and to disseminate these data and information in a timely manner to a global scientific research community. The archive consists of GNSS, laser ranging, VLBI, and DORIS data sets and products derived from these data. The CDDIS is one of NASA's Earth Observing System Data and Information System (EOSDIS) distributed data centers; EOSDIS data centers serve a diverse user community and are tasked to provide facilities to search and access science data and products. Several activities are currently under development at the CDDIS to aid users in data discovery, both within the current community and beyond. The CDDIS is cooperating in the development of Geodetic Seamless Archive Centers (GSAC) with colleagues at UNAVCO and SIO. This activity will provide web services to facilitate data discovery within and across participating archives. In addition, the CDDIS is currently implementing modifications to the metadata extracted from incoming data and product files pushed to its archive. These enhancements will permit information about CDDIS archive holdings to be made available through other data portals, such as the Earth Observing System (EOS) Clearinghouse (ECHO), and integration into the Global Geodetic Observing System (GGOS) portal.
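    A GSAC-style federated search works by having each participating archive answer the same simple HTTP query. As a rough sketch of how a client might compose such a request (the base URL, endpoint path, and parameter names below are illustrative assumptions, not documented CDDIS or UNAVCO endpoints):

```python
from urllib.parse import urlencode

# Hypothetical GSAC-style site-search endpoint; the host and path here
# are placeholders, not a real archive URL.
BASE = "https://example-archive.example/gsacws/gsacapi/site/search"

def build_site_query(site_code, output="site.csv"):
    """Compose a repository-neutral site-search URL.

    Because GSAC services take plain key=value parameters, a federated
    client can send the identical query string to every participating
    archive and merge the results.
    """
    params = {"site.code": site_code, "output": output}
    return BASE + "?" + urlencode(params)

# Query for a hypothetical four-character site code.
print(build_site_query("GODE"))
```

The point of the sketch is the design choice, not the specific names: a shared, minimal query vocabulary is what lets one search span several independent archives.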

  2. Archiving Space Geodesy Data for 20+ Years at the CDDIS

    Science.gov (United States)

    Noll, Carey E.; Dube, M. P.

    2004-01-01

    Since 1982, the Crustal Dynamics Data Information System (CDDIS) has supported the archive and distribution of geodetic data products acquired by NASA programs. These data include GPS (Global Positioning System), GLONASS (GLObal NAvigation Satellite System), SLR (Satellite Laser Ranging), VLBI (Very Long Baseline Interferometry), and DORIS (Doppler Orbitography and Radiolocation Integrated by Satellite). The data archive supports NASA's space geodesy activities through the Solid Earth and Natural Hazards (SENH) program. The CDDIS data system and its archive have become increasingly important to many national and international programs, particularly several of the operational services within the International Association of Geodesy (IAG), including the International GPS Service (IGS), the International Laser Ranging Service (ILRS), the International VLBI Service for Geodesy and Astrometry (IVS), the International DORIS Service (IDS), and the International Earth Rotation Service (IERS). The CDDIS provides easy and ready access to a variety of data sets, products, and information about these data. The specialized nature of the CDDIS lends itself well to enhancement and thus can accommodate diverse data sets and user requirements. All data sets and metadata extracted from these data sets are accessible to scientists through ftp and the web; general information about each data set is accessible via the web. This paper discusses the CDDIS, including background information about the system and its user communities, the computer architecture, archive contents, available metadata, and future plans.

  3. CDDIS: NASA's Archive of Space Geodesy Data and Products Supporting GGOS

    Science.gov (United States)

    Noll, Carey; Michael, Patrick

    2016-01-01

    The Crustal Dynamics Data Information System (CDDIS) supports data archiving and distribution activities for the space geodesy and geodynamics community. The main objectives of the system are to store space geodesy and geodynamics related data and products in a central archive, to maintain information about the archival of these data, to disseminate these data and information in a timely manner to a global scientific research community, and to provide user-based tools for the exploration and use of the archive. The CDDIS data system and its archive are a key component in several of the geometric services within the International Association of Geodesy (IAG) and its observing system, the Global Geodetic Observing System (GGOS), including the IGS, the International DORIS Service (IDS), the International Laser Ranging Service (ILRS), the International VLBI Service for Geodesy and Astrometry (IVS), and the International Earth Rotation and Reference Systems Service (IERS). The CDDIS provides on-line access to over 17 TB of data and derived products in support of the IAG services and GGOS. The system's archive continues to grow and improve as new activities are supported and enhancements are implemented. Recently, the CDDIS has established a real-time streaming capability for GNSS data and products. Furthermore, enhancements to the metadata describing the contents of the archive have been developed to facilitate data discovery. This poster provides a review of the improvements in the system infrastructure that CDDIS has made over the past year for the geodetic community and describes future plans for the system.

  4. Global Navigation Satellite System (GNSS) Rapid Clock Product Summary from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This derived product set consists of Global Navigation Satellite System Rapid Clock Product Summary from the NASA Crustal Dynamics Data Information System (CDDIS)....

  5. Ground-Based Global Navigation Satellite System Data (30-second sampling, 1 hour files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Navigation Satellite System (GNSS) daily 30-second sampled data available from the Crustal Dynamics Data Information System (CDDIS). Global Navigation...

  6. Ground-Based Global Navigation Satellite System Data (30-second sampling, 24 hour files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — GNSS provide autonomous geo-spatial positioning with global coverage. GNSS data sets from ground receivers at the CDDIS consist primarily of the data from the U.S....

  7. Joint Computing Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Raised Floor Computer Space for High Performance Computing. The ERDC Information Technology Laboratory (ITL) provides a robust system of IT facilities to develop and...

  8. Computer-Aided Facilities Management Systems (CAFM).

    Science.gov (United States)

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  9. Computational Science at the Argonne Leadership Computing Facility

    Science.gov (United States)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  10. Conducting Computer Security Assessments at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    Computer security is increasingly recognized as a key component in nuclear security. As technology advances, it is anticipated that computer and computing systems will be used to an even greater degree in all aspects of plant operations, including safety and security systems. A rigorous and comprehensive assessment process can assist in strengthening the effectiveness of the computer security programme. This publication outlines a methodology for conducting computer security assessments at nuclear facilities. The methodology can likewise be easily adapted to provide assessments at facilities with other radioactive materials.

  11. Computing facility at SSC for detectors

    International Nuclear Information System (INIS)

    Leibold, P.; Scipiono, B.

    1990-01-01

    A description is given of the RISC-based distributed computing facility for detector simulation being developed at the SSC Laboratory. The first phase of this facility is scheduled for completion in early 1991. Included are the status of the project, an overview of the concepts used to model and define the system architecture, networking capabilities for user access, plans for support of physics codes, and related topics concerning the implementation of this facility.

  12. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    Computer system technology and numerical modeling have advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in the modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility and make it available to NASA, DoD, other Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  13. Oak Ridge Leadership Computing Facility (OLCF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of standing up a supercomputer 100 times...

  14. 2016 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, Jim [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  15. Computer Security at Nuclear Facilities

    International Nuclear Information System (INIS)

    Cavina, A.

    2013-01-01

    This series of slides presents the IAEA policy concerning the development of recommendations and guidelines for computer security at nuclear facilities. A document of the Nuclear Security Series dedicated to this issue is in the final stage prior to publication. It will be the first IAEA document specifically addressing computer security. This document was necessary for three main reasons: first, not all national infrastructures have recognized and standardized computer security; second, existing international guidance is not industry specific and fails to capture some of the key issues; and third, more or less connected digital systems are increasingly present in the design of nuclear power plants. The security of computer systems must be based on a graded approach: the assignment of computer systems to different levels and zones should be based on their relevance to safety and security, and the risk assessment process should be allowed to feed back into and influence the graded approach.
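    The graded approach described above can be pictured as a simple mapping from each computer system to a security level, with controls that tighten as relevance to safety and security increases. The level scheme, system names, and control names below are illustrative assumptions for the sketch, not taken from the IAEA guidance:

```python
# Illustrative graded approach: each system is assigned a level based on
# its relevance to safety and security. Level 1 is the most stringent.
# All names here are hypothetical examples.
SYSTEM_LEVELS = {
    "reactor-protection": 1,     # safety-critical I&C
    "plant-process-control": 2,
    "operator-workstation": 3,
    "office-network": 4,
}

def required_controls(system):
    """Return a coarse control set that grows stricter with the level."""
    level = SYSTEM_LEVELS[system]
    controls = ["security-policy", "access-control"]  # baseline for all zones
    if level <= 3:
        controls.append("network-zoning")
    if level <= 2:
        controls.append("one-way-data-flow")
    if level == 1:
        controls.append("isolation")
    return controls

print(required_controls("reactor-protection"))
# ['security-policy', 'access-control', 'network-zoning', 'one-way-data-flow', 'isolation']
```

The design point is that levels, not individual systems, carry the requirements; the risk assessment then feeds back by moving a system between levels rather than by renegotiating controls system by system.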

  16. Academic Computing Facilities and Services in Higher Education--A Survey.

    Science.gov (United States)

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  17. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  18. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  19. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by producing the highest-resolution CyberShake map for Southern

  20. Operational Circular nr 5 - October 2000 USE OF CERN COMPUTING FACILITIES

    CERN Multimedia

    Division HR

    2000-01-01

    New rules covering the use of CERN computing facilities have been drawn up. All users of CERN’s computing facilities are subject to these rules, as well as to the subsidiary rules of use. The Computing Rules explicitly address your responsibility for taking reasonable precautions to protect computing equipment and accounts. In particular, passwords must not be easily guessed or obtained by others. Given the difficulty of completely separating work and personal use of computing facilities, the rules define the conditions under which limited personal use is tolerated. For example, limited personal use of e-mail, news groups or web browsing is tolerated in your private time, provided CERN resources and your official duties are not adversely affected. The full conditions governing use of CERN’s computing facilities are contained in Operational Circular N° 5, which you are requested to read. Full details are available at : http://www.cern.ch/ComputingRules Copies of the circular are also available in the Divis...

  1. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    Directory of Open Access Journals (Sweden)

    Olga M. Naumenko

    2010-09-01

    Full Text Available. The article considers the history of views on the tasks of education and assessments of its effectiveness from the standpoint of forming basic, vitally important competences. Views on the problem in different countries and international organizations, as well as the corresponding experience of the Ukrainian education system, are described. The need to form the informational competence of future teachers is substantiated for conditions in which computer-oriented teaching tools are applied to the study of natural-science subjects in pedagogical colleges. Prognostic estimates concerning the development of methods for applying computer-oriented teaching tools are presented.

  2. Modern computer hardware and the role of central computing facilities in particle physics

    International Nuclear Information System (INIS)

    Zacharov, V.

    1981-01-01

    Important recent changes in the hardware technology of computer system components are reviewed, and the impact of these changes on the present and future pattern of computing in particle physics is assessed. The place of central computing facilities is examined in particular, to answer the important question of what, if anything, their future role should be. Parallelism in computing system components is considered to be an important property that can be exploited with advantage. The paper includes a short discussion of the position of communications and network technology in modern computer systems. (orig.)

  3. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  4. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  5. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to perform billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  6. Computer security at ukrainian nuclear facilities: interface between nuclear safety and security

    International Nuclear Information System (INIS)

    Chumak, D.; Klevtsov, O.

    2015-01-01

    Active introduction of information technology and computer instrumentation and control systems (I and C systems) in the nuclear field leads to greater efficiency and better management of technological processes at nuclear facilities. However, this trend brings a number of challenges related to cyber-attacks on the above elements, which violate computer security as well as the nuclear safety and security of a facility. This paper considers regulatory support for computer security at nuclear facilities in Ukraine. The issue of computer and information security is considered in the context of physical protection, of which it is an integral component. The paper focuses on the computer security of I and C systems important to nuclear safety. These systems are potentially vulnerable to cyber threats, and, in the case of cyber-attacks, the potential negative impact on normal operational processes can lead to a breach of nuclear facility security. Because ensuring the security of I and C systems interacts with nuclear safety, the paper considers an example of an integrated approach to the requirements of nuclear safety and security.

  7. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful to develop an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful for the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  8. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  9. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  10. Integration of small computers in the low budget facility

    International Nuclear Information System (INIS)

    Miller, G.E.; Crofoot, T.A.

    1988-01-01

    Inexpensive computers (PCs) are well within the reach of low-budget reactor facilities. Many uses can be envisaged that will both improve the capabilities of existing instrumentation and assist operators and staff with certain routine tasks. Both of these opportunities are important for survival at facilities with severe budget and staffing limitations. (author)

  11. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  12. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    Science.gov (United States)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  13. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Jayatilaka, B. [Fermilab; Levshina, T. [Fermilab; Sehgal, C. [Fermilab; Gardner, R. [Chicago U.; Rynge, M. [USC - ISI, Marina del Rey; Würthwein, F. [UC, San Diego

    2017-11-22

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  14. Computer-Assisted School Facility Planning with ONPASS.

    Science.gov (United States)

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  15. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

    The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, it faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark, announced in IAEA-TECDOC-1694 in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance', was solved in support of the Generation IV computer code validation effort using the MCNPX ver. 2.6e computational code. IAEA-TECDOC-1694 summarized a set of four calculational benchmark problems performed at the ASTRA critical facility, including criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA critical facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)

  16. The Crustal Dynamics Data Information System: A Resource to Support Scientific Analysis Using Space Geodesy

    Science.gov (United States)

    Noll, Carey E.

    2010-01-01

    Since 1982, the Crustal Dynamics Data Information System (CDDIS) has supported the archive and distribution of geodetic data products acquired by the National Aeronautics and Space Administration (NASA) as well as national and international programs. The CDDIS provides easy, timely, and reliable access to a variety of data sets, products, and information about these data. These measurements, obtained from a global network of nearly 650 instruments at more than 400 distinct sites, include DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite), GNSS (Global Navigation Satellite System), SLR and LLR (Satellite and Lunar Laser Ranging), and VLBI (Very Long Baseline Interferometry). The CDDIS data system and its archive have become increasingly important to many national and international science communities, particularly several of the operational services within the International Association of Geodesy (IAG) and its observing system, the Global Geodetic Observing System (GGOS), including the International DORIS Service (IDS), the International GNSS Service (IGS), the International Laser Ranging Service (ILRS), the International VLBI Service for Geodesy and Astrometry (IVS), and the International Earth rotation and Reference frame Service (IERS). Investigations resulting from the data and products available through the CDDIS support research in many aspects of Earth system science and global change. Each month, the CDDIS archives more than one million data and derived product files totaling over 90 Gbytes in volume. In turn, the global user community downloads nearly 1.2 TBytes (over 10.5 million files) of data and products from the CDDIS each month. The requirements of analysts have evolved since the start of the CDDIS; the specialized nature of the system accommodates the enhancements required to support diverse data sets and user needs. This paper discusses the CDDIS, including background information about the system and its user communities

  17. Development of computer model for radionuclide released from shallow-land disposal facility

    International Nuclear Information System (INIS)

    Suganda, D.; Sucipta; Sastrowardoyo, P.B.; Eriendi

    1998-01-01

    A one-dimensional computer model for radionuclide release from a shallow land disposal facility (SLDF) has been developed. The model is applied to the SLDF facility at PPTA Serpong, which lies 1.8 metres above the groundwater and 150 metres from the Cisalak river. A numerical solution by the implicit finite-difference method is chosen to predict the migration of radionuclide at any concentration. The migration proceeds vertically from the bottom of the SLDF to the groundwater layer, then horizontally in the groundwater to the critical population group. The radionuclide Cs-137 is chosen as a sample to study its migration. The result of the assessment shows that the SLDF facility at PPTA Serpong meets high safety criteria. (author)
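    The implicit finite-difference approach named in this abstract can be sketched as a minimal 1-D advection-diffusion-decay solver. All parameters below are illustrative assumptions, not values from the Serpong study; the scheme is backward Euler with central differences, solved by the Thomas tridiagonal algorithm.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = dp[:]
    for i in range(n - 2, -1, -1):
        x[i] -= cp[i] * x[i + 1]
    return x

def step_implicit(C, D, v, lam, dx, dt):
    """One backward-Euler step of dC/dt = D*C'' - v*C' - lam*C (zero boundaries)."""
    r, s = D * dt / dx**2, v * dt / (2 * dx)
    n = len(C)
    a = [-(r + s)] * n            # sub-diagonal (upstream neighbour)
    b = [1 + 2 * r + lam * dt] * n  # main diagonal
    c = [-(r - s)] * n            # super-diagonal (downstream neighbour)
    a[0] = c[-1] = 0.0
    return thomas(a, b, c, C[:])

# Hypothetical Cs-137 migration: unit concentration at the facility bottom
lam = math.log(2) / (30.1 * 365.25 * 86400)   # Cs-137 decay constant, 1/s
C = [0.0] * 50
C[0] = 1.0
for _ in range(100):                          # 100 daily time steps
    C = step_implicit(C, D=1e-9, v=1e-8, lam=lam, dx=0.05, dt=86400.0)
```

    Backward Euler is unconditionally stable, so the daily time step can be chosen for accuracy rather than stability, which suits the long (multi-year) migration times involved.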

  18. Shieldings for X-ray radiotherapy facilities calculated by computer

    International Nuclear Information System (INIS)

    Pedrosa, Paulo S.; Farias, Marcos S.; Gavazza, Sergio

    2005-01-01

    This work presents a methodology for computer-aided calculation of X-ray shielding in radiotherapy facilities. Even today, in Brazil, the calculation of shielding for X-ray radiotherapy is based on the NCRP-49 recommendation, which establishes the methodology required for the elaboration of a shielding project. With regard to high energies, where the construction of a labyrinth is necessary, NCRP-49 is not very clear, and studies in this field resulted in an article that proposes a solution to the problem. A user-friendly program was developed in the Delphi programming language that, through manual entry of a basic architectural design and some parameters, interprets the geometry and calculates the shielding of the walls, ceiling and floor of an X-ray radiotherapy facility. As its final product, the program provides a graphical screen with all the input data, the calculated shielding, and the calculation memory. The program can be applied in the practical implementation of shielding projects for radiotherapy facilities and can also be used didactically alongside NCRP-49.
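    The core NCRP-49 primary-barrier relation behind such a program can be sketched as follows. The required transmission is B = P d² / (W U T), and the barrier thickness is the number of tenth-value layers times the material's TVL. The numeric inputs are hypothetical illustrations, not values from the paper.

```python
import math

def barrier_thickness(P, d, W, U, T, tvl):
    """Primary-barrier thickness from the NCRP-49 transmission relation.

    P   : design dose limit behind the barrier (Sv/week)
    d   : source-to-barrier distance (m)
    W   : workload (Gy * m^2 / week)
    U,T : use and occupancy factors (dimensionless)
    tvl : tenth-value layer of the barrier material (cm)
    """
    B = P * d**2 / (W * U * T)        # required transmission factor
    n_tvl = math.log10(1.0 / B)       # number of tenth-value layers needed
    return max(n_tvl, 0.0) * tvl      # no barrier needed if B >= 1

# Illustrative case: controlled area behind a concrete primary barrier
t = barrier_thickness(P=1e-4, d=3.0, W=1000.0, U=0.25, T=1.0, tvl=37.0)
```

    A full shielding program adds secondary barriers (leakage and scatter), labyrinth scatter for high energies, and material-specific TVLs, but each wall reduces to repeated applications of this relation.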

  19. Computer codes for ventilation in nuclear facilities

    International Nuclear Information System (INIS)

    Mulcey, P.

    1987-01-01

    In this paper the authors present some computer codes, developed in recent years, for ventilation and radiation protection. These codes are used for safety analysis in the design, operation and decommissioning of nuclear facilities. The authors present in particular: the DACC1 code, used for aerosol deposition in the sampling circuits of radiation monitors; the PIAF code, used for modelling complex ventilation systems; and the CLIMAT 6 code, used for optimization of air conditioning systems [fr

  20. Computer control and data acquisition system for the R.F. Test Facility

    International Nuclear Information System (INIS)

    Stewart, K.A.; Burris, R.D.; Mankin, J.B.; Thompson, D.H.

    1986-01-01

    The Radio Frequency Test Facility (RFTF) at Oak Ridge National Laboratory, used to test and evaluate high-power ion cyclotron resonance heating (ICRH) systems and components, is monitored and controlled by a multicomponent computer system. This data acquisition and control system consists of three major hardware elements: (1) an Allen-Bradley PLC-3 programmable controller; (2) a VAX 11/780 computer; and (3) a CAMAC serial highway interface. Operating in LOCAL as well as REMOTE mode, the programmable logic controller (PLC) performs all the control functions of the test facility. The VAX computer acts as the operator's interface to the test facility by providing color mimic panel displays and allowing input via a trackball device. The VAX also provides archiving of trend data acquired by the PLC. Communications between the PLC and the VAX are via the CAMAC serial highway. Details of the hardware, software, and the operation of the system are presented in this paper

  1. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    Krammen, J.

    1985-01-01

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  2. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  3. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  4. On-line satellite/central computer facility of the Multiparticle Argo Spectrometer System

    International Nuclear Information System (INIS)

    Anderson, E.W.; Fisher, G.P.; Hien, N.C.; Larson, G.P.; Thorndike, A.M.; Turkot, F.; von Lindern, L.; Clifford, T.S.; Ficenec, J.R.; Trower, W.P.

    1974-09-01

    An on-line satellite/central computer facility has been developed at Brookhaven National Laboratory as part of the Multiparticle Argo Spectrometer System (MASS). This facility, consisting of a PDP-9 and a CDC-6600, has been successfully used in the study of proton-proton interactions at 28.5 GeV/c. (U.S.)

  5. Oak Ridge Leadership Computing Facility Position Paper

    Energy Technology Data Exchange (ETDEWEB)

    Oral, H Sarp [ORNL; Hill, Jason J [ORNL; Thach, Kevin G [ORNL; Podhorszki, Norbert [ORNL; Klasky, Scott A [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL

    2011-01-01

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally as these systems are architected, deployed, and expanded over time reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  6. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  7. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and Refrigeration system performance models in these simulations tools model equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-state and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented

  8. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    Uram, Thomas D; LeCompte, Thomas J; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  9. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  10. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations, a requirement that can be met by providing intermediate storage facilities and reserve capacities. In this report a model based on the theory of Markov processes is described which allows computation of the reliability characteristics of waste management facilities containing intermediate storages. The application of the model is demonstrated by an example. (orig.) [de
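    A minimal sketch of a Markov-process availability calculation in the spirit of such a model: a birth-death continuous-time Markov chain whose states count failed process lines, with the stationary distribution obtained from detailed balance. The facility layout and rates are hypothetical, chosen only to illustrate the method.

```python
def birth_death_stationary(up_rates, down_rates):
    """Stationary distribution of a birth-death CTMC via detailed balance.

    up_rates[i]   : transition rate from state i to state i+1
    down_rates[i] : transition rate from state i+1 back to state i
    """
    pi = [1.0]
    for lam, mu in zip(up_rates, down_rates):
        pi.append(pi[-1] * lam / mu)   # pi_{i+1} = pi_i * lam_i / mu_i
    z = sum(pi)
    return [p / z for p in pi]

# Hypothetical plant: two parallel process lines, one repair crew.
# State = number of failed lines; the facility is down only in state 2.
lam, mu = 0.01, 0.5                    # failure / repair rates (per hour)
pi = birth_death_stationary([2 * lam, lam], [mu, mu])
availability = 1.0 - pi[-1]
```

    An intermediate storage buffer extends the same machinery: the state space becomes (failed lines, buffer fill level), and the stationary equations are solved numerically rather than in closed form.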

  11. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    OpenAIRE

    Olga M. Naumenko

    2010-01-01

    The article reviews the history of views on the tasks of education and estimates of its effectiveness from the point of view of forming basic, vitally important competences. Approaches to the problem in different countries and international organizations, and the corresponding experience of the Ukrainian system of education, are described. The necessity of forming the informative competence of future teachers is substantiated in the conditions of application of the computer oriented facilities of t...

  12. Maintenance of reactor safety and control computers at a large government facility

    International Nuclear Information System (INIS)

    Brady, H.G.

    1985-01-01

    In 1950 the US Government contracted the Du Pont Company to design, build, and operate the Savannah River Plant (SRP). At the time, it was the largest construction project ever undertaken by man. It is still the largest of the Department of Energy facilities. In the nearly 35 years that have elapsed, Du Pont has met its commitments to the US Government and set world safety records in the construction and operation of nuclear facilities. Contributing factors in achieving production goals and setting the safety records are a staff of highly qualified personnel, a well maintained plant, and sound maintenance programs. There have been many ''first ever'' achievements at SRP. These ''firsts'' include: (1) computer control of a nuclear reactor, and (2) use of computer systems as safety circuits. This presentation discusses the maintenance program provided for these computer systems and all digital systems at SRP. An in-house computer maintenance program that was started in 1966 with five persons has grown to a staff of 40, with investments in computer hardware increasing from $4 million in 1970 to more than $60 million in this decade. 4 figs

  13. The Argonne Leadership Computing Facility 2010 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Drugan, C. (LCF)

    2011-05-09

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers

  14. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, along with some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations; the system monitors waste generator flows in this system as well as in the Phase Two Effluent Collection System.

  15. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    International Nuclear Information System (INIS)

    Graf, F.A. Jr.

    1995-01-01

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, along with some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations; the system monitors waste generator flows in this system as well as in the Phase Two Effluent Collection System

  16. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    Science.gov (United States)

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be lack of information technology infrastructure, restricted access to computers and deficits in regard to technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  17. Shielding Calculations for Positron Emission Tomography - Computed Tomography Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Baasandorj, Khashbayar [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yang, Jeongseon [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-10-15

    Integrated PET-CT has been shown to be more accurate for lesion localization and characterization than PET or CT alone, or than results obtained from PET and CT separately and interpreted side by side or following software-based fusion of the PET and CT datasets. At the same time, PET-CT scans can result in high patient and staff doses; therefore, careful site planning and shielding of this imaging modality have become challenging issues in the field. In Mongolia, the introduction of PET-CT facilities is currently being considered in many hospitals. Thus, additional regulatory legislation for nuclear and radiation applications is necessary, for example, in regulating licensee processes and ensuring radiation safety during operations. This paper aims to determine appropriate PET-CT shielding designs using numerical formulas and computer code. Since there are presently no PET-CT facilities in Mongolia, contact was made with radiological staff at the Nuclear Medicine Center of the National Cancer Center of Mongolia (NCCM) to obtain information about facilities where the introduction of PET-CT is being considered. Well-designed facilities do not require additional shielding, which should help cut down overall costs related to PET-CT installation. According to the results of this study, the barrier thicknesses of the NCCM building are not sufficient to keep radiation doses within the limits.
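The type of barrier calculation described in this abstract can be sketched as a required-transmission formula combined with a broad-beam (Archer) transmission model. This is an illustrative sketch only: the function names are hypothetical, and the coefficient values used below are assumptions rather than values from the paper; real designs should take 511 keV transmission parameters from published shielding guidance such as AAPM TG-108.

```python
import math

def required_transmission(P, d, T, D0):
    """Required barrier transmission B = P * d^2 / (T * D0), where P is the
    weekly dose limit behind the barrier, d the source-to-person distance (m),
    T the occupancy factor, and D0 the unshielded weekly dose at 1 m
    (P and D0 in the same units)."""
    return P * d ** 2 / (T * D0)

def archer_thickness(B, alpha, beta, gamma):
    """Invert the Archer broad-beam model
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)
    to obtain the barrier thickness x that achieves transmission B.
    alpha, beta, gamma are material- and energy-specific fit coefficients."""
    return math.log((B ** -gamma + beta / alpha) / (1.0 + beta / alpha)) / (alpha * gamma)
```

For example, a weekly limit of 100 µSv at 3 m with full occupancy against an unshielded 50,000 µSv/week at 1 m requires B = 0.018, which the Archer inversion then converts to a thickness for whatever coefficient set is supplied.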

  18. Evolution of facility layout requirements and CAD [computer-aided design] system development

    International Nuclear Information System (INIS)

    Jones, M.

    1990-06-01

    The overall configuration of the Superconducting Super Collider (SSC) including the infrastructure and land boundary requirements were developed using a computer-aided design (CAD) system. The evolution of the facility layout requirements and the use of the CAD system are discussed. The emphasis has been on minimizing the amount of input required and maximizing the speed by which the output may be obtained. The computer system used to store the data is also described

  19. Computer mapping and visualization of facilities for planning of D and D operations

    International Nuclear Information System (INIS)

    Wuller, C.E.; Gelb, G.H.; Cramond, R.; Cracraft, J.S.

    1995-01-01

    The lack of as-built drawings for many old nuclear facilities impedes planning for decontamination and decommissioning. Traditional manual walkdowns subject workers to lengthy exposure to radiological and other hazards. The authors have applied close-range photogrammetry, 3D solid modeling, computer graphics, database management, and virtual reality technologies to create geometrically accurate 3D computer models of the interiors of facilities. The required input to the process is a set of photographs that can be acquired in a brief time. They fit 3D primitive shapes to objects of interest in the photos and, at the same time, record attributes such as material type and link patches of texture from the source photos to facets of modeled objects. When they render the model as either static images or at video rates for a walk-through simulation, the phototextures are warped onto the objects, giving a photo-realistic impression. The authors have exported the data to commercial CAD, cost estimating, robotic simulation, and plant design applications. Results from several projects at old nuclear facilities are discussed

  20. Opportunities for artificial intelligence application in computer-aided management of mixed waste incinerator facilities

    International Nuclear Information System (INIS)

    Rivera, A.L.; Ferrada, J.J.; Singh, S.P.N.

    1992-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site. It is designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). This facility, known as the TSCA Incinerator, services seven DOE/OR installations. This incinerator was recently authorized for production operation in the United States for the processing of mixed (radioactively contaminated, chemically hazardous) wastes regulated under TSCA and RCRA. Operation of the TSCA Incinerator is highly constrained as a result of regulatory, institutional, technical, and resource availability requirements. These requirements impact the characteristics and disposition of incinerator residues, limit the quality of liquid and gaseous effluents, limit the characteristics and rates of waste feeds and operating conditions, and restrict the handling of the waste feed inventories. This incinerator facility presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation, to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. Demonstrated computer-aided management systems could be transferred to future mixed waste incinerator facilities

  1. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Science.gov (United States)

    du Plessis, Anton; le Roux, Stephan Gerhard; Guelpa, Anina

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, this facility offers open access to the general user community, including local researchers, companies, and remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers and data analysis software packages, which are at the disposal of facility users, along with expert supervision if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as a means. This paper summarises the laboratory's first four years by way of selected examples, from both published and unpublished projects. In the process, a detailed description of the capabilities and facilities available to users is presented.

  2. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Plessis, Anton du, E-mail: anton2@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Physics Department, Stellenbosch University, Stellenbosch (South Africa); Roux, Stephan Gerhard le, E-mail: lerouxsg@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Guelpa, Anina, E-mail: aninag@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa)

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, this facility offers open access to the general user community, including local researchers, companies, and remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers and data analysis software packages, which are at the disposal of facility users, along with expert supervision if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as a means. This paper summarises the laboratory’s first four years by way of selected examples, from both published and unpublished projects. In the process, a detailed description of the capabilities and facilities available to users is presented.

  3. Automation of a cryogenic facility by commercial process-control computer

    International Nuclear Information System (INIS)

    Sondericker, J.H.; Campbell, D.; Zantopp, D.

    1983-01-01

    To ensure that Brookhaven's superconducting magnets are reliable and their field quality meets accelerator requirements, each magnet is pre-tested at operating conditions after construction. MAGCOOL, the production magnet test facility, was designed to perform these tests, having the capacity to test ten magnets per five-day week. This paper describes the control aspects of MAGCOOL and the advantages afforded the designers by the implementation of a commercial process control computer system

  4. Development of the computer code to monitor gamma radiation in the nuclear facility environment

    International Nuclear Information System (INIS)

    Akhmad, Y. R.; Pudjiyanto, M.S.

    1998-01-01

    Computer codes for gamma radiation monitoring in the vicinity of nuclear facilities have been developed and can be used with a commercial portable gamma analyzer. The crucial stage of the first-year activity was completed successfully; that is, the codes have been tested to transfer data files (pulse-height distributions) from the Micro NOMAD gamma spectrometer (an ORTEC product) and then convert them into dosimetry and physics quantities. These computer codes are called GABATAN (Gamma Analyzer of Batan) and NAGABAT (Natural Gamma Analyzer of Batan). The GABATAN code can be used at various nuclear facilities for analyzing gamma fields up to 9 MeV, while NAGABAT can be used for analyzing the contribution of natural gamma rays to the exposure rate at a given location
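Converting a pulse-height distribution into a dose quantity, as these codes do, amounts to folding the channel count rates with energy-dependent conversion coefficients. The sketch below is hypothetical and is not the GABATAN algorithm; the function name and coefficient handling are assumptions for illustration.

```python
def exposure_rate(counts, energies_mev, g_factors, live_time_s):
    """Fold a pulse-height distribution into a dose-rate estimate:
    sum over channels of counts * channel energy * (energy-dependent
    conversion coefficient), divided by the spectrum live time.
    The g_factors are placeholder coefficients, not real dosimetric data."""
    if not (len(counts) == len(energies_mev) == len(g_factors)):
        raise ValueError("channel arrays must have equal length")
    total = sum(c * e * g for c, e, g in zip(counts, energies_mev, g_factors))
    return total / live_time_s
```

In practice the conversion coefficients would come from tabulated flux-to-dose data interpolated at each channel energy.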

  5. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and ongoing replacement projects, with emphasis upon the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations of distributed systems design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer and plant process instrumentation and control are evident from variations of design features

  6. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system

  7. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  8. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    Science.gov (United States)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores, a 38 percent increase, in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing local AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption.
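The cost-containment idea described here, filling demand from local resources first and then from the cheapest acceptable spot-market offers, can be sketched as a greedy allocation. This is a toy illustration of the general approach, not the actual HEPCloud Decision Engine; all names and the offer format are assumptions.

```python
def provision(demand_cores, local_free, spot_offers, max_price):
    """Greedy cost-aware provisioning sketch: take local cores first, then
    fill the remainder from spot offers sorted by price, skipping anything
    above the price cap. spot_offers is a list of (name, $/core-hr, cores).
    Returns the allocation plan and any unmet demand."""
    plan = {"local": min(demand_cores, local_free)}
    remaining = demand_cores - plan["local"]
    for name, price, cores in sorted(spot_offers, key=lambda o: o[1]):
        if remaining <= 0 or price > max_price:
            break  # demand satisfied, or all remaining offers are too expensive
        take = min(cores, remaining)
        plan[name] = take
        remaining -= take
    return plan, remaining
```

A real decision engine would also weigh preemption risk and data locality, but the price-capped greedy fill captures the spot-market cost containment mentioned in the abstract.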

  9. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamics problems that will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  10. Computer program for storage of historical and routine safety data related to radiologically controlled facilities

    International Nuclear Information System (INIS)

    Marsh, D.A.; Hall, C.J.

    1984-01-01

    A method for tracking and quickly retrieving the radiological status of radiation and industrial safety systems in an active or inactive facility has been developed. The system uses a minicomputer, a graphics plotter, and mass storage devices. Software has been developed that allows input and storage of architectural details, radiological conditions such as exposure rates, current locations of safety systems, and routine and historical information on exposure and contamination levels. A blueprint-size digitizer is used for input. The computer program retains facility floor plans in three-dimensional arrays. The software accesses an eight-pen color plotter for output. The plotter generates color plots of the floor plans and safety systems on 8 1/2 x 11 or 20 x 30 inch paper or on overhead transparencies for reports and presentations

  11. Atmospheric dispersion calculations for postulated accidents at nuclear facilities and the computer code PANDA

    International Nuclear Information System (INIS)

    Kitahara, Yoshihisa; Kishimoto, Yoichiro; Narita, Osamu; Shinohara, Kunihiko

    1979-01-01

    Several calculation methods for the relative concentration (X/Q) and relative cloud-gamma dose (D/Q) of radioactive materials released from nuclear facilities in postulated accidents are presented. The procedure has been formulated as the computer program PANDA, and its usage is explained. (author)
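A minimal sketch of one common X/Q formulation is the ground-level Gaussian plume expression with total ground reflection; this is the textbook formula, not necessarily the one implemented in PANDA, and the function name is an assumption.

```python
import math

def chi_over_q(u, sigma_y, sigma_z, y=0.0, H=0.0):
    """Ground-level relative concentration X/Q [s/m^3] for a Gaussian plume
    with total ground reflection: u is wind speed (m/s), sigma_y and sigma_z
    the horizontal/vertical dispersion parameters (m) at the downwind
    distance of interest, y the crosswind offset (m), and H the effective
    release height (m)."""
    return (1.0 / (math.pi * sigma_y * sigma_z * u)) \
        * math.exp(-y ** 2 / (2.0 * sigma_y ** 2)) \
        * math.exp(-H ** 2 / (2.0 * sigma_z ** 2))
```

On the plume centerline from a ground-level release (y = 0, H = 0) this reduces to 1/(π σy σz u), and it falls off as the release height or crosswind offset grows.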

  12. Taking the classical large audience university lecture online using tablet computer and webconferencing facilities

    DEFF Research Database (Denmark)

    Brockhoff, Per B.

    2011-01-01

    During four offerings (September 2008 – May 2011) of the course 02402 Introduction to Statistics for Engineering students at DTU, with an average of 256 students, the lecturing was carried out 100% through a tablet computer combined with the web conferencing facility Adobe Connect (version 7...

  13. Animal facilities

    International Nuclear Information System (INIS)

    Fritz, T.E.; Angerman, J.M.; Keenan, W.G.; Linsley, J.G.; Poole, C.M.; Sallese, A.; Simkins, R.C.; Tolle, D.

    1981-01-01

    The animal facilities in the Division are described. They consist of kennels, animal rooms, service areas, and technical areas (examining rooms, operating rooms, pathology labs, x-ray rooms, and 60Co exposure facilities). The computer support facility is also described. The advent of the Conversational Monitor System at Argonne has launched a new effort to set up conversational computing and graphics software for users. The existing LS-11 data acquisition systems have been further enhanced and expanded. The divisional radiation facilities include a number of gamma, neutron, and x-ray radiation sources with accompanying areas for related equipment. There are five 60Co irradiation facilities; a research reactor, Janus, is a source of fission-spectrum neutrons; two other neutron sources in the Chicago area are also available to the staff for cell biology studies. The electron microscope facilities are also described

  14. Teaching ergonomics to nursing facility managers using computer-based instruction.

    Science.gov (United States)

    Harrington, Susan S; Walker, Bonnie L

    2006-01-01

    This study offers evidence that computer-based training is an effective tool for teaching nursing facility managers about ergonomics and increasing their awareness of potential problems. Study participants (N = 45) were randomly assigned into a treatment or control group. The treatment group completed the ergonomics training and a pre- and posttest. The control group completed the pre- and posttests without training. Treatment group participants improved significantly from 67% on the pretest to 91% on the posttest, a gain of 24%. Differences between mean scores for the control group were not significant for the total score or for any of the subtests.

  15. Computational Science Facility (CSF)

    Data.gov (United States)

    Federal Laboratory Consortium — PNNL Institutional Computing (PIC) is focused on meeting DOE's mission needs and is part of PNNL's overarching research computing strategy. PIC supports large-scale...

  16. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
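The scaling behavior noted at the end of this abstract, total read throughput versus number of client processes, can be probed with a small harness that times several threads re-reading the same file. This is a toy stand-in for the IOZone-style tests described, not the benchmark actually used in the study.

```python
import os
import tempfile
import threading
import time

def read_throughput(path, n_clients, block=1 << 20):
    """Aggregate read throughput (MB/s) with n_clients threads each reading
    the whole file once; vary n_clients to watch for the throughput
    degradation with client count mentioned above."""
    size = os.path.getsize(path)

    def worker():
        with open(path, "rb") as f:
            while f.read(block):
                pass  # discard data; we only care about read timing

    threads = [threading.Thread(target=worker) for _ in range(n_clients)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - t0
    return (size * n_clients / 1e6) / elapsed
```

Against a networked filesystem (and with caches dropped between runs), plotting this value for increasing n_clients would reproduce the kind of comparison the paper draws among storage backends.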

  17. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Norman, A. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab

    2017-03-15

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores, a 38 percent increase, in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing local AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption.

  18. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-05-01

    A proposal has been made at LBL to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multilevel, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data through typical transformations and correlations in under 30 s. The throughput for such a facility, for five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600. 3 figures

  19. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

    A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600

  20. Application of personal computer to development of entrance management system for radiating facilities

    International Nuclear Information System (INIS)

    Suzuki, Shogo; Hirai, Shouji

    1989-01-01

    The report describes a system for managing the entrance and exit of personnel at radiation facilities, developed on a personal computer. Major features of the system are outlined first. The computer is connected to the gate and to two magnetic card readers provided at the gate. The gate, which is installed at the entrance to a controlled room, opens only for those who have a valid card. The entrance-exit management program developed is described next. The following three files, stored on floppy disks, are used: an ID master file (a random-access file of each carrier's magnetic card number, name, qualification, etc.), an entrance-exit management file (a random-access file of entrance/exit times, etc., updated daily), and an entrance-exit record file (a sequential file of card number, name, date, etc.). A display is provided to show various lists, including a list of workers currently in the room and a list of workers who left the room earlier in the day. This system is useful for entrance management of a relatively small facility. Though low in cost, it allows a few operators to perform effective personnel management. (N.K.)
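The gate logic and the three-file layout described here can be sketched as a small controller that checks a card against the ID master data, toggles in-room status, and appends to the entrance-exit log. The class and field names below are hypothetical illustrations, not taken from the original program.

```python
import datetime

class GateController:
    """Toy version of the card-gate logic: an ID master table, an in-room
    set, and an entrance/exit log standing in for the three files described
    in the abstract."""

    def __init__(self, id_master):
        self.id_master = id_master  # card number -> (name, qualified?)
        self.in_room = set()        # cards currently inside the room
        self.log = []               # (card, name, event, timestamp) records

    def swipe(self, card, now=None):
        """Return True and open the gate for a valid, qualified card;
        otherwise keep the gate closed. Each valid swipe toggles the
        carrier between 'inside' and 'outside' and is logged."""
        now = now or datetime.datetime.now()
        entry = self.id_master.get(card)
        if entry is None or not entry[1]:
            return False  # unknown or unqualified card: gate stays closed
        name, _ = entry
        event = "exit" if card in self.in_room else "entry"
        self.in_room.symmetric_difference_update({card})  # toggle membership
        self.log.append((card, name, event, now))
        return True
```

The "workers currently in the room" display then reduces to listing `in_room`, and the daily record file to dumping `log`.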

  1. Computer programs for capital cost estimation, lifetime economic performance simulation, and computation of cost indexes for laser fusion and other advanced technology facilities

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1978-01-01

    Three FORTRAN programs, CAPITAL, VENTURE, and INDEXER, have been developed to automate computations used in assessing the economic viability of proposed or conceptual laser fusion and other advanced-technology facilities, as well as conventional projects. The types of calculations performed by these programs are, respectively, capital cost estimation, lifetime economic performance simulation, and computation of cost indexes. The codes permit these three topics to be addressed with considerable sophistication commensurate with user requirements and available data

  2. Computer Security at Nuclear Facilities (French Edition)

    International Nuclear Information System (INIS)

    2013-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  3. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code

  4. Potential applications of artificial intelligence in computer-based management systems for mixed waste incinerator facility operation

    International Nuclear Information System (INIS)

    Rivera, A.L.; Singh, S.P.N.; Ferrada, J.J.

    1991-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site, designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). Operation of the TSCA Incinerator is highly constrained as a result of the regulatory, institutional, technical, and resource availability requirements. This presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation, to help promote and sustain a continuous performance improvement process while demonstrating compliance. This paper describes mixed waste incinerator facility performance-oriented tasks that could be assisted by Artificial Intelligence (AI) and the requirements for AI tools that would implement these algorithms in a computer-based system. 4 figs., 1 tab

  5. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX

    International Nuclear Information System (INIS)

    Gohar, Y.; Zhong, Z.; Talamo, A.

    2009-01-01

    Argonne National Laboratory (ANL) of USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ∼375 kW, including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which lowers the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the electrons and the
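    The burnup bookkeeping described above can be illustrated at its simplest by one-group analytic depletion of a single fissile nuclide. The cross section and flux below are illustrative round numbers, not KIPT design values, and real burnup codes solve coupled Bateman equations for many nuclides:

```python
import math

SIGMA_F = 585e-24   # U-235 thermal fission cross section, cm^2 (illustrative)
PHI = 1.0e13        # neutron flux, n/cm^2/s (illustrative)

def remaining_fraction(days):
    """One-group analytic depletion: N(t)/N0 = exp(-sigma_f * phi * t)."""
    return math.exp(-SIGMA_F * PHI * days * 86400.0)
```

Under these assumptions roughly 17% of the fissile inventory is consumed in a year of continuous irradiation, which is the reactivity loss that fresh fuel additions must compensate.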

  6. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials
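    The claim that resolution broadening from multiple scattering diminishes with momentum can be sketched with the standard PDG Highland parameterization of the RMS scattering angle (an approximation, not the simulation used in the paper):

```python
import math

def highland_theta0(p_mev, x_over_x0, beta=1.0, charge=1):
    """RMS multiple-scattering angle (rad) from the PDG Highland formula,
    for momentum p in MeV/c and material thickness x/X0 in radiation lengths."""
    return (13.6 / (beta * p_mev)) * charge * math.sqrt(x_over_x0) * (
        1.0 + 0.038 * math.log(x_over_x0))
```

For one radiation length of material, a 10 GeV/c muon scatters with a tenth of the RMS angle of a 1 GeV/c muon, at the cost of the reduced image contrast and tighter spectrometer resolution discussed above.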

  7. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1991-01-01

    In the process of review and evaluation of licensing issues related to nuclear power plants, it is essential to understand the behavior of seismic loading, foundation and structural properties, and their impact on the overall structural response. In most cases, such knowledge can be obtained by using simplified engineering models which, when properly implemented, capture the essential parameters describing the physics of the problem. Such models do not require execution on large computer systems and can be implemented through a personal computer (PC) based capability. Recognizing the need for a PC software package that can perform the structural response computations required for typical licensing reviews, the US Nuclear Regulatory Commission sponsored the development of the PC-operated computer software package CARES (Computer Analysis for Rapid Evaluation of Structures). This development was undertaken by Brookhaven National Laboratory (BNL) during FY's 1988 and 1989. A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES was designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, to provide a user-friendly input/output interface, and to offer quick turnaround. This paper describes the various features which have been implemented into the seismic module of CARES version 1.0
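    The simplified engineering models described here often reduce to response-history analysis of a few degrees of freedom. As a generic textbook sketch (not CARES code), the peak response of a damped single-degree-of-freedom oscillator to a base-acceleration record can be obtained by explicit time stepping:

```python
import math

def sdof_peak_displacement(accel_history, dt, freq_hz, damping=0.05):
    """Peak relative displacement of a damped single-degree-of-freedom
    oscillator under a base-acceleration history, via an explicit
    time-stepping scheme for u'' + 2*z*wn*u' + wn^2*u = -a_g."""
    wn = 2.0 * math.pi * freq_hz
    u_prev = u = peak = 0.0
    for ag in accel_history:
        vel = (u - u_prev) / dt
        acc = -ag - 2.0 * damping * wn * vel - wn * wn * u
        u_prev, u = u, 2.0 * u - u_prev + acc * dt * dt
        peak = max(peak, abs(u))
    return peak
```

Repeating this over a range of oscillator frequencies yields a response spectrum, the standard input to the kind of seismic evaluations the paper discusses.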

  8. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1990-01-01

    A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES was designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, to provide a user-friendly input/output interface, and to offer quick turnaround. The CARES program is structured in a modular format. Each module performs a specific type of analysis. The basic modules of the system are associated with capabilities for static, seismic and nonlinear analyses. This paper describes the various features which have been implemented into the Seismic Module of CARES version 1.0. In Section 2 a description of the Seismic Module is provided. The methodologies and computational procedures thus far implemented into the Seismic Module are described in Section 3. Finally, a complete demonstration of the computational capability of CARES in a typical soil-structure interaction analysis is given in Section 4 and conclusions are presented in Section 5. 5 refs., 4 figs

  9. Research Facilities | Wind | NREL

    Science.gov (United States)

    Overview of NREL's state-of-the-art wind research facilities. [Photo captions from the original web page: five men in hard hats observing the end of a turbine blade while it is being tested (Structural Research Facilities); two people silhouetted against a computer simulation.]

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  11. TUNL computer facilities

    International Nuclear Information System (INIS)

    Boyd, M.; Edwards, S.E.; Gould, C.R.; Roberson, N.R.; Westerfeldt, C.R.

    1985-01-01

    The XSYS system has been relatively stable during the last year, and most of our efforts have involved routine software maintenance and enhancement of existing XSYS capabilities. Modifications were made in the MBD program GDAP to increase the execution speed in key GDAP routines. A package of routines has been developed to allow communication between the XSYS and the new Wien filter microprocessor. Recently the authors have upgraded their operating system from VMS V3.7 to V4.1. This required numerous modifications to XSYS, mostly in the command procedures. A new reorganized edition of the XSYS manual will be issued shortly. The TUNL High Resolution Laboratory's VAX 11/750 computer has been in operation for its first full year as a replacement for the PRIME 300 computer, which was purchased in 1974 and retired nine months ago. The data acquisition system on the VAX has been in use for the past twelve months performing a number of experiments

  12. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    International Nuclear Information System (INIS)

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data

  13. A Computer Simulation to Assess the Nuclear Material Accountancy System of a MOX Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Portaix, C.G.; Binner, R.; John, H.

    2015-01-01

    SimMOX is a computer programme that simulates container histories as they pass through a MOX facility. It performs two parallel calculations: · the first quantifies the actual movements of material that might be expected to occur, given certain assumptions about, for instance, the accumulation of material and waste, and of their subsequent treatment; · the second quantifies the same movements on the basis of the operator's perception of the quantities involved; that is, they are based on assumptions about quantities contained in the containers. Separate skeletal Excel computer programmes are provided, which can be configured to generate further accountancy results based on these two parallel calculations. SimMOX is flexible in that it makes few assumptions about the order and operational performance of individual activities that might take place at each stage of the process. It is able to do this because its focus is on material flows, and not on the performance of individual processes. Similarly there are no pre-conceptions about the different types of containers that might be involved. At the macroscopic level, the simulation takes steady operation as its base case, i.e., the same quantity of material is deemed to enter and leave the simulated area, over any given period. Transient situations can then be superimposed onto this base scene, by simulating them as operational incidents. A general facility has been incorporated into SimMOX to enable the user to create an 'act of a play' based on a number of operational incidents that have been built into the programme. By doing this a simulation can be constructed that predicts the way the facility would respond to any number of transient activities. This computer programme can help assess the nuclear material accountancy system of a MOX fuel fabrication facility; for instance the implications of applying NRTA (near real time accountancy). (author)
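    The two parallel calculations at the core of SimMOX, actual material movements versus the operator's perceived quantities, can be sketched as a pair of ledgers over the same container transfers. The bias model, units, and numbers below are purely illustrative, not SimMOX internals:

```python
def run_campaign(transfers_kg, bias=0.01):
    """Run the same container movements through two ledgers: the actual mass
    transferred and the operator's declared (perceived) mass, then return the
    resulting material-unaccounted-for (MUF)."""
    actual = sum(transfers_kg)
    declared = sum(m * (1.0 + bias) for m in transfers_kg)  # hypothetical measurement bias
    return declared - actual
```

A near-real-time accountancy scheme would evaluate such a balance after every accounting period rather than at the end of a campaign, which is the kind of question the simulation is built to explore.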

  14. Health workers' knowledge of and attitudes towards computer applications in rural African health facilities.

    Science.gov (United States)

    Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E; Blank, Antje

    2014-01-01

    The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. To report an assessment of health providers' computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA describe the association between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. A total of 108 providers responded, 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge, and providers generally expressed positive attitudes towards the use of computers in the workplace. Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes to computers found in this study underscore that rural care providers, too, are ready to use such technology.
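    The chi-squared association tests mentioned in this abstract can be reproduced with a few lines of standard-library code. The contingency-table values below are invented for illustration and are not study data:

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table
    (rows = groups, columns = outcome counts)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n       # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat
```

The statistic is then compared against a chi-squared distribution with (r-1)(c-1) degrees of freedom to judge significance.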

  15. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    Energy Technology Data Exchange (ETDEWEB)

    Zynovyev, Mykhaylo

    2012-06-29

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented, along with its effects on user workload performance. A proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  16. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    International Nuclear Information System (INIS)

    Zynovyev, Mykhaylo

    2012-01-01

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented, along with its effects on user workload performance. A proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  17. Computer Profile of School Facilities Energy Consumption.

    Science.gov (United States)

    Oswalt, Felix E.

    This document outlines a computerized management tool designed to enable building managers to identify energy consumption as related to types and uses of school facilities for the purpose of evaluating and managing the operation, maintenance, modification, and planning of new facilities. Specifically, it is expected that the statistics generated…

  18. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  19. Future Computer Requirements for Computational Aerodynamics

    Science.gov (United States)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  20. Surface Water Modeling Using an EPA Computer Code for Tritiated Waste Water Discharge from the heavy Water Facility

    International Nuclear Information System (INIS)

    Chen, K.F.

    1998-06-01

    Tritium releases from the D-Area Heavy Water Facilities to the Savannah River have been analyzed. The U.S. EPA WASP5 computer code was used to simulate surface water transport for tritium releases from the D-Area Drum Wash, Rework, and DW facilities. The WASP5 model was qualified with the 1993 tritium measurements at U.S. Highway 301. At the maximum tritiated waste water concentrations, the calculated tritium concentration in the Savannah River at U.S. Highway 301 due to concurrent releases from the D-Area Heavy Water Facilities varies from 5.9 to 18.0 pCi/ml, depending on the operating conditions of these facilities. The calculated concentration is lowest when the batch-release method for the Drum Wash waste tanks is adopted
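    A zeroth-order check on such surface-water results is a fully-mixed dilution estimate: the downstream concentration is the release rate divided by the river volumetric flow. The unit conversions below are standard, but the function and its arguments are an illustrative sketch, not part of WASP5, which additionally models transport, dispersion, and decay:

```python
def river_concentration_pci_per_ml(release_ci_per_day, river_flow_cfs):
    """Fully-mixed downstream concentration in pCi/mL for a tritium release
    (Ci/day) into a river flow (ft^3/s); assumes complete mixing and no decay
    over the reach (conservative, given tritium's 12.3-year half-life)."""
    CI_TO_PCI = 1.0e12
    CFS_TO_ML_PER_DAY = 28316.8 * 86400.0   # ft^3 -> mL, per-second -> per-day
    return release_ci_per_day * CI_TO_PCI / (river_flow_cfs * CFS_TO_ML_PER_DAY)
```

Batch releases reduce the peak concentration by spreading the same activity over periods of higher dilution, which is consistent with the batch-release result quoted above.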

  1. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport code it is used with, and is connected to the code by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA, and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
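    A checkpoint facility of the kind described can be sketched in a few lines: periodically serialize the run state so an interrupted calculation resumes where it stopped. This sketch uses Python and pickle purely for illustration (the actual framework is a C++/MPI module), and the harmonic-sum tally is a stand-in for real particle histories:

```python
import pickle

def run_histories(total, state=None, checkpoint="run.ckpt", every=1000):
    """Accumulate a tally over `total` histories, writing a checkpoint every
    `every` histories so an interrupted run can be resumed."""
    start, tally = (0, 0.0) if state is None else state
    for i in range(start, total):
        tally += 1.0 / (i + 1)              # stand-in for one particle history
        if (i + 1) % every == 0:
            with open(checkpoint, "wb") as f:
                pickle.dump((i + 1, tally), f)
    return tally

def resume(total, checkpoint="run.ckpt"):
    """Restart a calculation from the last saved checkpoint file."""
    with open(checkpoint, "rb") as f:
        return run_histories(total, pickle.load(f), checkpoint)
```

Because the resumed loop replays the histories in the original order, a run interrupted mid-way and resumed from its checkpoint reproduces the uninterrupted result.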

  2. Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility

    Science.gov (United States)

    Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.

    2017-01-01

    The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparing the CFD results and test data indicates an effective catalytic recombination efficiency of about 10% at the copper surface of the heat flux probe and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section prior to the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment, test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.
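    The Pitot-probe comparison mentioned above rests on the classical Rayleigh Pitot formula, which relates the post-shock stagnation pressure a probe measures in supersonic flow to the freestream static pressure. A direct transcription for a calorically perfect gas (a textbook relation, not the facility's CFD model):

```python
def rayleigh_pitot_ratio(mach, gamma=1.4):
    """Ratio of Pitot (post-normal-shock stagnation) pressure to freestream
    static pressure for supersonic flow of a calorically perfect gas."""
    g = gamma
    a = ((g + 1) ** 2 * mach ** 2 / (4 * g * mach ** 2 - 2 * (g - 1))) ** (g / (g - 1))
    b = (2 * g * mach ** 2 - (g - 1)) / (g + 1)
    return a * b
```

At Mach 1 the ratio reduces to the isentropic stagnation value (about 1.893 for air), and it grows rapidly with Mach number, which is why Pitot pressure is such a sensitive calibration quantity in arc-jet nozzle flows.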

  3. The Overview of the National Ignition Facility Distributed Computer Control System

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Carey, R.A.; Estes, C.M.; Fisher, J.M.; Krammen, J.E.; Reed, R.K.; VanArsdall, P.J.; Woodruff, J.P.

    2001-01-01

    The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer is divided into another segment comprised of an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented with asynchronous transfer mode (ATM) links that deliver video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding using a mixed language environment of Ada95 and Java is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008

  4. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  5. Computer software design description for the Treated Effluent Disposal Facility (TEDF), Project L-045H, Operator Training Station (OTS)

    International Nuclear Information System (INIS)

    Carter, R.L. Jr.

    1994-01-01

    The Treated Effluent Disposal Facility (TEDF) Operator Training Station (OTS) is a computer-based training tool designed to aid plant operations and engineering staff in familiarizing themselves with the TEDF Central Control System (CCS)

  6. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal hydraulic system computer codes were established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs). A separate activity was for Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWER). Firstly, the main physical phenomena that occur during considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices will also be provided. The matrices will be a guide for code validation, will be a basis for comparisons of code predictions performed with different system codes, and will contribute to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information which has been generated around the world over the last years, so that it is more accessible to present and future workers in that field than would otherwise be the case.

  7. Computer facilities for ISABELLE data handling

    International Nuclear Information System (INIS)

    Kramer, M.A.; Love, W.A.; Miller, R.J.; Zeller, M.

    1977-01-01

    The analysis of data produced by ISABELLE experiments will need a large system of computers. An official group of prospective users and operators of that system should begin planning now. Included in the array will be a substantial computer system at each ISABELLE intersection in use. These systems must include enough computer power to keep experimenters aware of the health of the experiment. This will require at least one very fast sophisticated processor in the system, the size depending on the experiment. Other features of the intersection systems must be a good, high speed graphic display, ability to record data on magnetic tape at 500 to 1000 KB, and a high speed link to a central computer. The operating system software must support multiple interactive users. A substantially larger capacity computer system, shared by the six intersection region experiments, must be available with good turnaround for experimenters while ISABELLE is running. A computer support group will be required to maintain the computer system and to provide and maintain software common to all experiments. Special superfast computing hardware or special function processors constructed with microprocessor circuitry may be necessary both in the data gathering and data processing work. Thus both the local and central processors should be chosen with the possibility of interfacing such devices in mind

  8. On a new method to compute photon skyshine doses around radiotherapy facilities

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, R.; Facure, A. [Comissao Nacional de Energia Nuclear, Rio de Janeiro (Brazil); Xavier, A. [PEN/Coppe -UFRJ, Rio de Janeiro (Brazil)

    2006-07-01

    Full text of publication follows: Nowadays, in a great number of situations, constructions are raised around radiotherapy facilities. In cases where the constructions would not be in the primary x-ray beam, 'skyshine' radiation is normally accounted for. The skyshine method is commonly used to calculate the dose contribution from scattered radiation in such circumstances, when the roof shielding is designed assuming there will be no occupancy upstairs. In these cases, there is no need for the usual 1.5-2.0 m thick ceiling, and construction costs can be considerably reduced. The existing expressions used to compute these doses fail to explain mathematically the existence of a shadow area just outside the outer room walls, and its growth with distance from these walls. In this paper we propose a new method to compute photon skyshine doses, using geometrical considerations to find the maximum dose point. An empirical equation is derived, and its validity is tested against MCNP5 Monte Carlo simulations of radiotherapy room configurations. (authors)

  9. Operating procedures: Fusion Experiments Analysis Facility

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, R.A.; Carey, R.W.

    1984-03-20

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. This manual also provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility.

  10. Operating procedures: Fusion Experiments Analysis Facility

    International Nuclear Information System (INIS)

    Lerche, R.A.; Carey, R.W.

    1984-01-01

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. This manual also provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility

  11. Recommended practice for the design of a computer driven Alarm Display Facility for central control rooms of nuclear power generating stations

    International Nuclear Information System (INIS)

    Ben-Yaacov, G.

    1984-01-01

    This paper's objective is to explain the process by which design can prevent human errors in nuclear plant operation. Human factors engineering principles, data, and methods used in the design of computer-driven alarm display facilities are discussed. A 'generic', advanced Alarm Display Facility is described. It considers operator capabilities and limitations in decision-making processes, response dynamics, and human memory limitations. Considerations of human factors criteria in the design and layout of alarm displays are highlighted. Alarm data sources are described, and their use within the Alarm Display Facility is illustrated

  12. The grand challenge of managing the petascale facility.

    Energy Technology Data Exchange (ETDEWEB)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected

  13. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  14. Designing Facilities for Collaborative Operations

    Science.gov (United States)

    Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana

    2003-01-01

    A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs (for example, see figure) and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of 4, while a small conference room that contains a projection screen has an effective capacity of around 10.
Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: At best, the operations staff would be underutilized
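The effective-capacity metric defined above can be sketched as a simple count over the three engagement criteria. The data structure and field names below are invented for this illustration; the paper defines the criteria but not any particular encoding.

```python
# Count the people who satisfy all three "meaningful engagement" criteria.
def effective_capacity(people):
    """people: list of dicts, one per person, with three boolean criteria."""
    return sum(
        1 for p in people
        if p["can_see_hear_all"]      # (1) see, hear, communicate with everyone
        and p["sees_material"]        # (2) see the material under discussion
        and p["can_provide_input"]    # (3) provide input to the group product
    )

staff = [
    {"can_see_hear_all": True,  "sees_material": True,  "can_provide_input": True},
    {"can_see_hear_all": True,  "sees_material": True,  "can_provide_input": False},
    {"can_see_hear_all": True,  "sees_material": False, "can_provide_input": True},
]
print(effective_capacity(staff))  # only the first person is fully engaged -> 1
```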

  15. Computer Security at Nuclear Facilities. Reference Manual (Arabic Edition)

    International Nuclear Information System (INIS)

    2011-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  16. Computer Security at Nuclear Facilities. Reference Manual (Russian Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  17. Computer Security at Nuclear Facilities. Reference Manual (Chinese Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  18. Computing, Environment and Life Sciences | Argonne National Laboratory

    Science.gov (United States)

    Computing, Environment and Life Sciences. Research divisions include Biosciences (BIO), Computational Science (CPS), and Data Science and Learning (DSL). Associated facilities and divisions include the Argonne Leadership Computing Facility, the Biosciences Division, the Environmental Science Division, and the Mathematics and Computer Science Division.

  19. CDDIS_SLR_data

    Data.gov (United States)

    National Aeronautics and Space Administration — In Satellite Laser Ranging (SLR), a short pulse of coherent light generated by a laser (Light Amplification by Stimulated Emission of Radiation) is transmitted in a...

  20. Enhanced computational infrastructure for data analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; McHarg, B.B.; Meyer, W.H.; Parker, C.T.

    2000-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from nine national laboratories, 19 foreign laboratories, 16 universities, and five industrial partnerships. As a result of this work, DIII-D data is available on a 24x7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a web based data and code documentation system has been created to aid the novice and expert user alike

  1. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.; McCharg, B.B.

    1999-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a Web based data and code documentation system has been created to aid the novice and expert user alike

  2. New challenges for HEP computing: RHIC [Relativistic Heavy Ion Collider] and CEBAF [Continuous Electron Beam Accelerator Facility]

    International Nuclear Information System (INIS)

    LeVine, M.J.

    1990-01-01

    We will look at two facilities: RHIC and CEBAF. CEBAF is in the construction phase; RHIC is about to begin construction. For each of them, we examine the kinds of physics measurements that motivated their construction, and the implications of these experiments for computing. Emphasis will be on on-line requirements, driven by the data rates produced by these experiments

  3. Software quality assurance plan for the National Ignition Facility integrated computer control system

    Energy Technology Data Exchange (ETDEWEB)

    Woodruff, J.

    1996-11-01

    Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.

  4. Software quality assurance plan for the National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Woodruff, J.

    1996-11-01

    Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project

  5. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.

  6. AIRDOS-II computer code for estimating radiation dose to man from airborne radionuclides in areas surrounding nuclear facilities

    International Nuclear Information System (INIS)

    Moore, R.E.

    1977-04-01

    The AIRDOS-II computer code estimates individual and population doses resulting from the simultaneous atmospheric release of as many as 36 radionuclides from a nuclear facility. This report describes the meteorological and environmental models used in the code, their computer implementation, and the applicability of the code to assessments of radiological impact. Atmospheric dispersion and surface deposition of released radionuclides are estimated as a function of direction and distance from a nuclear power plant or fuel-cycle facility, and doses to man through inhalation, air immersion, exposure to contaminated ground, food ingestion, and water immersion are estimated in the surrounding area. Annual doses are estimated for total body, GI tract, bone, thyroid, lungs, muscle, kidneys, liver, spleen, testes, and ovaries. Either the annual population doses (man-rems/year) or the highest annual individual doses in the assessment area (rems/year), whichever are applicable, are summarized in output tables in several ways: by nuclide, mode of exposure, and organ. The location of the highest individual dose for each reference organ estimated for the area is specified in the output data
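The dose bookkeeping such a code performs can be caricatured in a few lines. This is a hypothetical sketch, not AIRDOS-II's actual models: the dose factors and exposure levels below are made-up illustrative numbers, not real dosimetry data. The idea is simply to accumulate annual organ doses over nuclides and exposure pathways, then summarize by organ.

```python
from collections import defaultdict

# (nuclide, pathway, organ) -> dose conversion factor [rem/yr per unit level].
# All values are invented for illustration.
dose_factor = {
    ("I-131",  "inhalation", "thyroid"):    3.0e-4,
    ("I-131",  "ingestion",  "thyroid"):    5.0e-4,
    ("Cs-137", "ground",     "total body"): 1.0e-5,
}

# nuclide -> environmental level at the receptor location (illustrative units)
exposure = {"I-131": 10.0, "Cs-137": 100.0}

# Accumulate annual dose per organ over every nuclide/pathway combination.
dose_by_organ = defaultdict(float)
for (nuclide, pathway, organ), factor in dose_factor.items():
    dose_by_organ[organ] += exposure[nuclide] * factor

for organ, dose in sorted(dose_by_organ.items()):
    print(f"{organ}: {dose:.2e} rem/yr")
```

A production code would add the dispersion and deposition models that produce the exposure levels; here they are given directly.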

  7. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases, and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  8. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009); and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems.
One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and

  9. Specific features of organizng the computer-aided design of radio-electronic equipment for electrophysical facilities

    International Nuclear Information System (INIS)

    Mozin, I.V.; Vasil'ev, M.P.

    1985-01-01

    Problems in developing systems for computer-aided design (CAD) of radioelectronic equipment for large electrophysical facilities, such as new-generation charged particle accelerators, are discussed. The PLATA subsystem, a part of the CAD system used for printed-circuit design, is described. The PLATA subsystem is used to design, on average, up to 150 types of circuits a year, 100-120 of which are circuits of increased complexity. As a result, designer productivity in documentation work nearly doubles

  10. Thermal studies of the canister staging pit in a hypothetical Yucca Mountain canister handling facility using computational fluid dynamics

    International Nuclear Information System (INIS)

    Soltani, Mehdi; Barringer, Chris; Bues, Timothy T. de

    2007-01-01

    The proposed Yucca Mountain nuclear waste storage site will contain facilities for preparing the radioactive waste canisters for burial. A previous facility design considered was the Canister Handling Facility Staging Pit. This design is no longer used, but its thermal evaluation is typical of such facilities. Structural concrete can be adversely affected by the heat from radioactive decay. Consequently, facilities must have heating, ventilation, and air conditioning (HVAC) systems for cooling. Concrete temperatures are a function of conductive, convective and radiative heat transfer. The prediction of concrete temperatures under such complex conditions can only be adequately handled by computational fluid dynamics (CFD). The objective of the CFD analysis was to predict concrete temperatures under normal and off-normal conditions. Normal operation assumed steady state conditions with constant HVAC flow and temperatures. However, off-normal operation was an unsteady scenario that assumed a total HVAC failure for a period of 30 days. This scenario was particularly complex in that the concrete temperatures would gradually rise, and air flows would be buoyancy driven. The CFD analysis concluded that concrete wall temperatures would be at or below the maximum temperature limits in both the normal and off-normal scenarios. While this analysis was specific to a facility design that is no longer used, it demonstrates that such facilities are reasonably expected to have satisfactory thermal performance. (author)
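The off-normal heat-up can be caricatured with a single lumped-capacitance node, an assumption far simpler than the CFD model described in the abstract: decay heat Q warms a concrete mass of heat capacity C while losing heat to ambient air through an effective conductance UA. All parameter values below are invented for illustration.

```python
# Lumped-capacitance transient: dT/dt = (Q - UA * (T - T_amb)) / C,
# integrated with explicit Euler. (Illustrative model, not the paper's CFD.)
def wall_temperature(t_end_s, q_w=5000.0, ua=200.0, c_j_per_k=5.0e8,
                     t0=30.0, t_amb=30.0, dt=3600.0):
    t, temp = 0.0, t0
    while t < t_end_s:
        temp += dt * (q_w - ua * (temp - t_amb)) / c_j_per_k
        t += dt
    return temp

# With HVAC lost, the wall warms toward the steady value
# T_amb + Q/UA = 30 + 5000/200 = 55 C, with time constant C/UA (about 29 days).
print(round(wall_temperature(30 * 24 * 3600), 1))
```

The qualitative conclusion mirrors the abstract: whether the 30-day peak stays below the concrete limit depends on the heat load, the thermal mass, and the passive heat-loss paths.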

  11. Radiation safety training for accelerator facilities

    International Nuclear Information System (INIS)

    Trinoskey, P.A.

    1997-02-01

    In November 1992, a working group was formed within the U.S. Department of Energy's (DOE's) accelerator facilities to develop a generic safety training program to meet the basic requirements for individuals working in accelerator facilities. This training, by necessity, includes sections for inserting facility-specific information. The resulting course materials were issued by DOE as a handbook under its technical standards in 1996. Because experimenters may be at a facility for only a short time and often at odd times during the day, the working group felt that computer-based training would be useful. To that end, Lawrence Livermore National Laboratory (LLNL) and Argonne National Laboratory (ANL) have jointly developed a computer-based safety training program for accelerator facilities. This interactive course not only provides trainees with facility-specific information, but also lets them schedule the training at their convenience and tailor it to their level of expertise

  12. Computer-guided facility for the study of single crystals at the gamma diffractometer GADI

    International Nuclear Information System (INIS)

    Heer, H.; Bleichert, H.; Gruhn, W.; Moeller, R.

    1984-10-01

    In the study of solid-state properties it is in many cases necessary to work with single crystals. Growing demand in industry and research, together with the desire for better characterization by means of γ-diffractometry, made it necessary to improve and modernize the existing instrument. The advantages of a computer-guided facility over conventional, semiautomatic operation are manifold. Not only the process control, but also the data acquisition and evaluation are performed by the computer. Using a remote control, the operator is able to quickly find a reflection and to drive the crystal to any desired measuring position. The complete logging of all important measuring parameters, the convenient data storage, and the automatic evaluation are very useful for the user. Finally, the measuring time can be extended to practically 24 hours per day. This places characterization by means of γ-diffractometry on a completely new level. (orig.) [de

  13. Decommissioning Facility Characterization DB System

    International Nuclear Information System (INIS)

    Park, S. K.; Ji, Y. H.; Park, J. H.; Chung, U. S.

    2010-01-01

    Basically, when decommissioning is planned for a nuclear facility, an investigation into the characterization of the nuclear facility is first required. The results of such an investigation are used for calculating the quantities of dismantled waste and estimating the cost of the decommissioning project. This paper presents a computer system for the characterization of nuclear facilities, called DEFACS (DEcommissioning FAcility Characterization DB System). This system consists of four main parts: a management coding system for grouping items, a data input system, a data processing system and a data output system. All data is processed in a simplified and formatted manner in order to provide useful information to the decommissioning planner. For the hardware, PC grade computers running Oracle software on Microsoft Windows OS were selected. The characterization data for the nuclear facility under decommissioning will be utilized by the work-unit productivity calculation system and the decommissioning engineering system as basic sources of information

  14. Decommissioning Facility Characterization DB System

    Energy Technology Data Exchange (ETDEWEB)

    Park, S. K.; Ji, Y. H.; Park, J. H.; Chung, U. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    Basically, when decommissioning is planned for a nuclear facility, an investigation into the characterization of the facility is required first. The results of such an investigation are used for calculating the quantities of dismantled waste and estimating the cost of the decommissioning project. This paper presents a computer system for the characterization of nuclear facilities, called DEFACS (DEcommissioning FAcility Characterization DB System). The system consists of four main parts: a management coding system for grouping items, a data input system, a data processing system and a data output system. All data are processed in a simplified and formatted manner in order to provide useful information to the decommissioning planner. For the hardware, PC-grade computers running Oracle software on the Microsoft Windows OS were selected. The characterization data for the nuclear facility under decommissioning will be utilized by the work-unit productivity calculation system and the decommissioning engineering system as basic sources of information

  15. Facility model for the Los Alamos Plutonium Facility

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.; Sohn, C.L.; Yarbro, T.F.; Hench, K.W.

    1986-01-01

    The Los Alamos Plutonium Facility contains more than sixty unit processes and handles a large variety of nuclear materials, including many forms of plutonium-bearing scrap. The management of the Plutonium Facility is supporting the development of a computer model of the facility as a means of effectively integrating the large amount of information required for material control, process planning, and facility development. The model is designed to provide a flexible, easily maintainable facility description that allows the facility to be represented at any desired level of detail within a single modeling framework, and to do this using a model program and data files that can be read and understood by a technically qualified person without modeling experience. These characteristics were achieved by structuring the model so that all facility data is contained in data files, formulating the model in a simulation language that provides a flexible set of data structures and permits a near-English-language syntax, and using a description for unit processes that can represent either a true unit process or a major subsection of the facility. Use of the model is illustrated by applying it to two configurations of a fictitious nuclear material processing line

  16. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    Science.gov (United States)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches; their wealth of resources allows us to take on geocomputation tasks which exceed the limitations of smaller systems. To harness these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture and offering a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. Interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer and facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v 6.4, 6.5 and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times
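
Deploying a scripted geocomputation task onto an LSF queue can be sketched as below. The queue name, GRASS binary name, and paths are placeholders rather than the actual GFZ configuration, though `GRASS_BATCH_JOB` is the standard GRASS 6.x mechanism for running a script instead of an interactive session:

```python
import shlex

def grass_batch_command(queue, jobname, location, mapset, script,
                        grass_bin="grass64"):
    """Build an LSF bsub command line that runs a GRASS script in batch mode.

    GRASS 6.x executes the file named by the GRASS_BATCH_JOB environment
    variable instead of starting an interactive session.
    """
    grass_cmd = (f"GRASS_BATCH_JOB={shlex.quote(script)} "
                 f"{grass_bin} {location}/{mapset}")
    return ["bsub", "-q", queue, "-J", jobname, grass_cmd]

# Hypothetical tsunami-map rendering job on a queue named "normal".
cmd = grass_batch_command("normal", "tsunami_map_001",
                          "/data/grassdb/med_sea", "tridec", "render_map.sh")
print(" ".join(cmd))
```

Submitting many such jobs, one per map sheet, is what allows hundreds of nodes to render the atlas in parallel.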

  17. Automated Computer-Based Facility for Measurement of Near-Field Structure of Microwave Radiators and Scatterers

    DEFF Research Database (Denmark)

    Mishra, Shantnu R.; Pavlasek, Tomas J. F.; Muresan, Letitia V.

    1980-01-01

    An automatic facility for measuring the three-dimensional structure of the near fields of microwave radiators and scatterers is described. The amplitude and phase for different polarization components can be recorded in analog and digital form using a microprocessor-based system. The stored data ... are transferred to a large high-speed computer for bulk processing and for the production of isophot and equiphase contour maps or profiles. The performance of the system is demonstrated through results for a single conical horn, for interacting rectangular horns, for multiple cylindrical scatterers ...

  18. Safeguards Automated Facility Evaluation (SAFE) methodology

    International Nuclear Information System (INIS)

    Chapman, L.D.; Grady, L.M.; Bennett, H.A.; Sasser, D.W.; Engi, D.

    1978-08-01

    An automated approach to facility safeguards effectiveness evaluation has been developed. This automated process, called Safeguards Automated Facility Evaluation (SAFE), consists of a collection of operational modules for facility characterization, the selection of critical paths, and the evaluation of safeguards effectiveness along these paths. The technique has been implemented on an interactive computer time-sharing system and makes use of computer graphics for the processing and presentation of information. Using this technique, a comprehensive evaluation of a safeguards system can be provided by systematically varying the parameters that characterize the physical protection components of a facility to reflect the perceived adversary attributes and strategy, environmental conditions, and site operational conditions. The SAFE procedure has broad applications in the nuclear facility safeguards field as well as in the security field in general. Any fixed facility containing valuable materials or components to be protected from theft or sabotage could be analyzed using this same automated evaluation technique
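
The abstract does not detail SAFE's path-selection algorithm, so the following is a generic sketch of critical-path evaluation, not SAFE itself: the facility is modeled as a graph whose edges carry detection probabilities, and the adversary path most likely to evade detection becomes a shortest-path problem after a log transform:

```python
import heapq
import math

def most_vulnerable_path(graph, start, target):
    """Return the adversary path most likely to evade detection.

    graph maps a node to a list of (next_node, p_detect) edges.
    Minimizing sum(-log(1 - p_detect)) along a path maximizes the product
    of non-detection probabilities, so Dijkstra's algorithm applies.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        for nxt, p_detect in graph.get(node, []):
            nd = d - math.log(1.0 - p_detect)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    path = [target]
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    p_evade = math.exp(-dist[target])
    return path, 1.0 - p_evade  # path and its overall detection probability

# Toy facility: two routes from the fence to the vault with different sensors.
graph = {
    "fence": [("yard", 0.3), ("dock", 0.1)],
    "yard":  [("vault", 0.5)],
    "dock":  [("vault", 0.4)],
}
path, p_detect = most_vulnerable_path(graph, "fence", "vault")
```

Systematically re-running this with varied edge probabilities mirrors the kind of parameter sweep the abstract describes.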

  19. Wind Energy Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laurie, Carol

    2017-02-01

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  20. Cathare2 V1.3E post-test computations of SPE-1 and SPE-2 experiments at PMK-NVH facility

    International Nuclear Information System (INIS)

    Belliard, M.; Laugier, E.

    1994-01-01

    This paper presents the first CATHARE2 V1.3E simulations of the SPE-2 transients at the PMK-NVH loop. Concerning the SPE-1 and SPE-2 experiments at PMK-NVH, it contains a description of the facility and the transients, as well as the different conditions of use. The paper also includes a presentation of the CATHARE2 model and the different types of computation, such as the steady-state computation and the SPE-1 and SPE-2 transients (TEC). 4 refs., 12 figs., 4 tabs

  1. Computer Operating System Maintenance.

    Science.gov (United States)

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on ... computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  2. Spatial interaction models facility location using game theory

    CERN Document Server

    D'Amato, Egidio; Pardalos, Panos

    2017-01-01

    Facility location theory develops the idea of locating one or more facilities by optimizing suitable criteria such as minimizing transportation cost or capturing the largest market share. The contributions in this book focus on an approach to facility location theory through game-theoretical tools, highlighting situations where a location decision is faced by several decision makers, leading to a game-theoretical framework with non-cooperative and cooperative methods. Models and methods regarding facility location via game theory are explored, and applications are illustrated through economics, engineering, and physics. Mathematicians, engineers, economists and computer scientists working on the theory, applications and computational aspects of facility location problems using game theory will find this book useful.
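
As a minimal illustration of the "minimizing transportation cost" criterion, here is the single-facility 1-median problem on a line, far simpler than the multi-player game-theoretic settings the book treats; the demand data are invented:

```python
def transport_cost(x, demand_points):
    """Total transportation cost of serving weighted demand points
    from a single facility located at position x on a line."""
    return sum(w * abs(x - p) for p, w in demand_points)

def best_location(demand_points):
    """For the 1-median problem on a line, an optimal location is a
    weighted median of the demand points, so checking the demand
    points themselves suffices."""
    return min((p for p, _ in demand_points),
               key=lambda x: transport_cost(x, demand_points))

# (position, demand weight) pairs, e.g. towns along a highway.
demands = [(0.0, 1.0), (4.0, 2.0), (10.0, 1.0)]
x_opt = best_location(demands)
```

In the game-theoretic versions, several such decision makers choose locations simultaneously, and each one's cost depends on the others' choices.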

  3. Collaboration between J-PARC and computing science

    International Nuclear Information System (INIS)

    Nakatani, Takeshi; Inamura, Yasuhiro

    2010-01-01

    Many world-leading experimental apparatuses are under construction at the Materials and Life Science Facility of the Japan Proton Accelerator Research Complex (J-PARC), and new experimental methods supported by the computer facility are under development towards practical use. Many problems, however, remain to be solved for its development as a large open-use facility under the Law for the Promotion of Public Utilization. Some of them require the cooperation of experimental scientists and computer scientists. The present status of the computing capability at the Materials and Life Science Facility of J-PARC, and the research results expected from the collaboration of experimental and computer scientists, are described. (author)

  4. Physics and detector simulation facility Type O workstation specifications

    International Nuclear Information System (INIS)

    Chartrand, G.; Cormell, L.R.; Hahn, R.; Jacobson, D.; Johnstad, H.; Leibold, P.; Marquez, M.; Ramsey, B.; Roberts, L.; Scipioni, B.; Yost, G.P.

    1990-11-01

    This document specifies the requirements for the front-end network of workstations of a distributed computing facility. This facility will be needed to perform the physics and detector simulations for the design of Superconducting Super Collider (SSC) detectors, and other computations in support of physics and detector needs. A detailed description of the computer simulation facility is given in the overall system specification document. This document provides revised subsystem specifications for the network of monitor-less Type 0 workstations; the requirements specified here supersede the requirements given previously. In Section 2 a brief functional description of the facility and its use is provided. The list of detailed specifications (vendor requirements) is given in Section 3, and the qualifying requirements (benchmarks) are described in Section 4

  5. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  6. A stand alone computer system to aid the development of mirror fusion test facility RF heating systems

    International Nuclear Information System (INIS)

    Thomas, R.A.

    1983-01-01

    The Mirror Fusion Test Facility (MFTF-B) control system architecture requires the Supervisory Control and Diagnostic System (SCDS) to communicate with a LSI-11 Local Control Computer (LCC) that in turn communicates via a fiber optic link to CAMAC based control hardware located near the machine. In many cases, the control hardware is very complex and requires a sizable development effort prior to being integrated into the overall MFTF-B system. One such effort was the development of the Electron Cyclotron Resonance Heating (ECRH) system. It became clear that a stand-alone computer system was needed to simulate the functions of SCDS. This paper describes the hardware and software necessary to implement the SCDS Simulation Computer (SSC). It consists of a Digital Equipment Corporation (DEC) LSI-11 computer and a Winchester/floppy disk system operating under the DEC RT-11 operating system. All application software for MFTF-B is programmed in PASCAL, which allowed us to adapt procedures originally written for SCDS to the SSC. This nearly identical software interface means that software written during the equipment development will be useful to the SCDS programmers in the integration phase

  7. Los Alamos Plutonium Facility Waste Management System

    International Nuclear Information System (INIS)

    Smith, K.; Montoya, A.; Wieneke, R.; Wulff, D.; Smith, C.; Gruetzmacher, K.

    1997-01-01

    This paper describes the new computer-based transuranic (TRU) Waste Management System (WMS) being implemented at the Plutonium Facility at Los Alamos National Laboratory (LANL). The Waste Management System is a distributed computer processing system stored in a Sybase database and accessed by a graphical user interface (GUI) written in Omnis7. It resides on the local area network at the Plutonium Facility and is accessible by authorized TRU waste originators, count room personnel, radiation protection technicians (RPTs), quality assurance personnel, and waste management personnel for data input and verification. Future goals include bringing outside groups like the LANL Waste Management Facility on-line to participate in this streamlined system. The WMS is changing the TRU paper trail into a computer trail, saving time and eliminating errors and inconsistencies in the process

  8. Exercise evaluation and simulation facility

    International Nuclear Information System (INIS)

    Meitzler, W.D.; Jaske, R.T.

    1983-12-01

    The Exercise Evaluation and Simulation Facility (EESF) is a minicomputer-based system that will serve as a tool to aid FEMA in the evaluation of radiological emergency plans and preparedness around commercial nuclear power facilities. The EESF integrates the following resources into a single system: a meteorological model, a dose model, an evacuation model, map information, and exercise information. Thus the user may access these various resources concurrently and, on completion, display the results on a color graphic display or hardcopy unit. A unique capability made possible by the integration of these models is the computation of the estimated total dose to the population
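
The integration step described above, combining dose-model output with population data, can be sketched as a sum over map grid cells. This is a minimal illustration of the kind of calculation EESF performs; the field names and values are invented, not from the EESF models:

```python
def total_population_dose(grid):
    """Collective dose (person-Sv) summed over grid cells.

    Each cell carries a projected individual dose (Sv) from the dose
    model and a population count from the map/evacuation data.
    """
    return sum(cell["dose_Sv"] * cell["population"] for cell in grid)

# Two illustrative cells around a hypothetical plant.
grid = [
    {"dose_Sv": 0.002, "population": 1500},   # sector near the plant
    {"dose_Sv": 0.0005, "population": 8000},  # downwind town
]
collective = total_population_dose(grid)
```

An evacuation model would shrink each cell's effective population over time, lowering the projected collective dose.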

  9. The role of micro size computing clusters for small physics groups

    International Nuclear Information System (INIS)

    Shevel, A Y

    2014-01-01

    A small physics group (3-15 persons) might use a number of computing facilities for analysis/simulation, development/testing, and teaching. Different types of computing facilities are discussed: collaboration computing facilities, a group-owned local computing cluster (including colocation), and cloud computing. The author discusses the growing variety of computing options for small groups and emphasizes the role of a group-owned computing cluster of micro size.

  10. Computer Security Incident Response Planning at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    The purpose of this publication is to assist Member States in developing comprehensive contingency plans for computer security incidents with the potential to impact nuclear security and/or nuclear safety. It provides an outline and recommendations for establishing a computer security incident response capability as part of a computer security programme, and considers the roles and responsibilities of the system owner, operator, competent authority, and national technical authority in responding to a computer security incident with possible nuclear security repercussions

  11. Computer control and data acquisition system for the Mirror Fusion Test Facility Ion Cyclotron Resonant Heating System (ICRH)

    International Nuclear Information System (INIS)

    Cheshire, D.L.; Thomas, R.A.

    1985-01-01

    The Lawrence Livermore National Laboratory (LLNL) large Mirror Fusion Test Facility (MFTF-B) will employ an Ion Cyclotron Resonant Heating (ICRH) system for plasma startup. As the MFTF-B Industrial Participant, TRW has responsibility for the ICRH system, including development of the data acquisition and control system. During MFTF-B operation, the ICRH system will be controlled by the Supervisory Control and Diagnostic System (SCDS). For subsystem development and checkout at TRW, and for verification and acceptance testing at LLNL, the system will be run from a stand-alone computer system designed to simulate the functions of SCDS. The ''SCDS Simulator'' was developed originally for the MFTF-B ECRH system; descriptions of the hardware and software are updated in this paper. The computer control and data acquisition functions implemented for ICRH are described, including development status and the test schedule at TRW and at LLNL. The application software is written for the SCDS Simulator, but it is programmed in PASCAL and designed to facilitate conversion for use on the SCDS computers

  12. A guide for the selection of computer assisted mapping (CAM) and facilities information systems

    Energy Technology Data Exchange (ETDEWEB)

    Haslin, S.; Baxter, P.; Jarvis, L.

    1980-12-01

    Many distribution engineers are now aware that computer assisted mapping (CAM) and facilities information systems are probably the most significant breakthrough to date in computer applications for distribution engineering. The Canadian Electrical Association (CEA) recognized this and requested that engineers of B.C. Hydro make a study of the state of the art in Canadian utilities and the progress of CAM systems on an international basis. The purpose was to provide a guide to assist Canadian utility distribution engineers faced with the problem of studying the application of CAM systems as an alternative to present methods, consideration being given to the long-term and other benefits that were perhaps not apparent to those approaching this field for the first time. It soon became apparent that the technology was developing at a high rate and that competition in the market was very strong. Also, a number of publications had been produced by other sources which adequately covered the scope of this study. This report is thus a collection of references to reports, manuals, and other documents, with a few considerations provided for those companies interested in further exploring the use of interactive graphics. 24 refs.

  13. LLL transient-electromagnetics-measurement facility

    International Nuclear Information System (INIS)

    Deadrick, F.J.; Miller, E.K.; Hudson, H.G.

    1975-01-01

    The operation and hardware of the Lawrence Livermore Laboratory's transient-electromagnetics (EM)-measurement facility are described. The transient-EM range is useful for determining the time-domain transient responses of structures to incident EM pulses. To illustrate the accuracy and utility of the EM-measurement facility, actual experimental measurements are compared to numerically computed values

  14. Automating an EXAFS facility: hardware and software considerations

    International Nuclear Information System (INIS)

    Georgopoulos, P.; Sayers, D.E.; Bunker, B.; Elam, T.; Grote, W.A.

    1981-01-01

    The basic design considerations for computer hardware and software, applicable not only to laboratory EXAFS facilities, but also to synchrotron installations, are reviewed. Uniformity and standardization of both hardware configurations and program packages for data collection and analysis are heavily emphasized. Specific recommendations are made with respect to choice of computers, peripherals, and interfaces, and guidelines for the development of software packages are set forth. A description of two working computer-interfaced EXAFS facilities is presented which can serve as prototypes for future developments. 3 figures

  15. CDDIS_DORIS_products_quaternions

    Data.gov (United States)

    National Aeronautics and Space Administration — Satellite attitude information from satellites with Doppler Orbitography by Radiopositioning Integrated on Satellite (DORIS) receivers. Files include attitude...

  16. CDDIS_DORIS_data_rinex

    Data.gov (United States)

    National Aeronautics and Space Administration — The Doppler Orbitography by Radiopositioning Integrated on Satellite (DORIS) was developed by the Centre National d'Etudes Spatiales (CNES) with cooperation from...

  17. CDDIS_DORIS_data_cycle

    Data.gov (United States)

    National Aeronautics and Space Administration — The Doppler Orbitography by Radiopositioning Integrated on Satellite (DORIS) was developed by the Centre National d'Etudes Spatiales (CNES) with cooperation from...

  18. CDDIS_VLBI_products_positions

    Data.gov (United States)

    National Aeronautics and Space Administration — Station positions and velocity solutions in Software INdependent EXchange (SINEX) format derived from analysis of Very Long Baseline Interferometry (VLBI) data....

  19. CDDIS_GNSS_products_erp

    Data.gov (United States)

    National Aeronautics and Space Administration — Earth Rotation Parameters (ERPs) derived from analysis of Global Navigation Satellite System (GNSS) data. These products are the generated by analysis centers in...

  20. Irradiation facilities in JRR-3M

    International Nuclear Information System (INIS)

    Ohtomo, Akitoshi; Sigemoto, Masamitsu; Takahashi, Hidetake

    1992-01-01

    Irradiation facilities have been installed in the upgraded JRR-3 (JRR-3M) at the Japan Atomic Energy Research Institute (JAERI). There are hydraulic rabbit facilities (HR), pneumatic rabbit facilities (PN), a neutron activation analysis facility (PN3), a uniform irradiation facility (SI), a rotating irradiation facility and capsule irradiation facilities for carrying out neutron irradiation in the JRR-3M. These facilities are operated using a process control computer system to centralize the process information. Some of the characteristics of the facilities were satisfactorily measured at the time of the reactor performance tests in 1990. During reactor operation, some tests are continued to confirm the basic characteristics of the facilities; for example, PN3 was confirmed to have sufficient performance for activation analysis. Measurement of the neutron flux at all irradiation positions has been carried out for the equilibrium core. (author)

  1. Guide to user facilities at the Lawrence Berkeley Laboratory

    International Nuclear Information System (INIS)

    1984-04-01

    Lawrence Berkeley Laboratory's user facilities are described. Specific facilities include: the National Center for Electron Microscopy; the Bevalac; the SuperHILAC; the Neutral Beam Engineering Test Facility; the National Tritium Labeling Facility; the 88 inch Cyclotron; the Heavy Charged-Particle Treatment Facility; the 2.5 MeV Van de Graaff; the Sky Simulator; the Center for Computational Seismology; and the Low Background Counting Facility

  2. Dynamic Thermal Loads and Cooling Requirements Calculations for VAC Systems in Nuclear Fuel Processing Facilities Using Computer Aided Energy Conservation Models

    International Nuclear Information System (INIS)

    EL Fawal, M.M.; Gadalla, A.A.; Taher, B.M.

    2010-01-01

    In terms of nuclear safety, the most important function of ventilation air conditioning (VAC) systems is to maintain safe ambient conditions for components and structures important to safety inside the nuclear facility and to maintain appropriate working conditions for the plant's operating and maintenance staff. As part of a study aimed at evaluating the performance of the VAC system of a nuclear fuel cycle facility (NFCF), a computer model was developed and verified to evaluate the thermal loads and cooling requirements for different zones of a fuel processing facility. The program is based on the transfer function method (TFM) and is used to calculate the dynamic heat gain of various multilayer wall constructions and windows, hour by hour, at any orientation of the building. The developed model was verified by comparing the calculated solar heat gain of a given building with the corresponding values calculated using the finite difference method (FDM) and the total equivalent temperature difference method (TETD). As an example, the developed program was used to calculate the cooling loads of the different zones of a typical nuclear fuel facility. The results showed that the cooling capacities of the cooling units in each zone of the facility meet the design requirements according to the safety regulations for nuclear facilities.
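
The hour-by-hour TFM calculation can be sketched with conduction transfer functions of the usual form q[t] = Σ b[j]·T_sa[t-j] − Σ d[j]·q[t-j] − T_i·Σ c[j]. The coefficient and temperature values below are illustrative, not taken from the paper:

```python
def wall_heat_gain(sol_air_temps, b, c, d, t_indoor):
    """Hourly conduction heat gain through a wall via conduction
    transfer functions, as used by the transfer function method:

        q[t] = sum_j b[j]*T_sa[t-j] - sum_j d[j]*q[t-j] - T_i*sum(c)

    where T_sa is the sol-air temperature history and T_i the fixed
    indoor temperature. Coefficients here are toy values, not from a
    real wall construction.
    """
    q = []
    for t in range(len(sol_air_temps)):
        gain = -t_indoor * sum(c)
        for j, bj in enumerate(b):
            if t - j >= 0:
                gain += bj * sol_air_temps[t - j]
        for j, dj in enumerate(d, start=1):  # history of past gains
            if t - j >= 0:
                gain -= dj * q[t - j]
        q.append(gain)
    return q

# Four hours of sol-air temperature (degrees C) with toy coefficients.
temps = [30.0, 35.0, 40.0, 38.0]
q = wall_heat_gain(temps, b=[0.05, 0.03], c=[0.06, 0.02], d=[0.2],
                   t_indoor=24.0)
```

Summing such per-surface gains (plus window, occupant, and equipment loads) per zone gives the cooling requirement the paper computes.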

  3. Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, James S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Leverman, Dustin B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Hanley, Jesse A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Oral, Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences

    2016-08-22

    This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project, funded by OpenSFS, to improve Lustre metadata performance and scalability. The development effort was split into two parts: the first (DNE P1) provided support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addressed split directories over multiple remote MDS nodes and MDT devices. The OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. For these tests, an internal OLCF testbed was used. Results are promising, and OLCF is planning a full DNE deployment in the mid-2016 timeframe on production systems.
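
The split-directory idea behind DNE P2 can be illustrated by hashing entry names across the MDTs a directory is striped over. `crc32` below is a stand-in for Lustre's actual directory hash functions, and the index arithmetic is a simplification of the real layout:

```python
import zlib

def mdt_for_entry(filename, stripe_count):
    """Pick the MDT index holding a directory entry in a striped directory.

    DNE P2 hashes the entry name to choose among the MDTs the directory
    is striped over; crc32 is only an illustrative hash here.
    """
    return zlib.crc32(filename.encode()) % stripe_count

# Distribute 1000 entries of a directory striped over 4 MDTs.
names = [f"file_{i:04d}" for i in range(1000)]
counts = [0, 0, 0, 0]
for n in names:
    counts[mdt_for_entry(n, 4)] += 1
```

On a Lustre release with DNE P2, striping a directory over several MDTs is requested with commands such as `lfs setdirstripe -c 4 <dir>`, after which entry creation and lookup load is spread roughly this way among the MDTs.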

  4. Laser performance operations model (LPOM): a computational system that automates the setup and performance analysis of the national ignition facility

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, M; House, R; Williams, W; Haynam, C; White, R; Orth, C; Sacks, R [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA, 94550 (United States)], E-mail: shaw7@llnl.gov

    2008-05-15

    The National Ignition Facility (NIF) is a stadium-sized facility containing a 192-beam, 1.8 MJ, 500-TW, 351-nm laser system together with a 10-m diameter target chamber with room for many target diagnostics. NIF will be the world's largest laser experimental system, providing a national center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. A computational system, the Laser Performance Operations Model (LPOM), has been developed and deployed that automates the laser setup process and accurately predicts laser energetics. LPOM determines the settings of the injection laser system required to achieve the desired main laser output, provides equipment protection, determines the diagnostic setup, and supplies post-shot data analysis and reporting.

  5. Manual for operation of the multipurpose thermalhydraulic test facility TOPFLOW (Transient Two Phase Flow Test Facility)

    International Nuclear Information System (INIS)

    Beyer, M.; Carl, H.; Schuetz, H.; Pietruske, H.; Lenk, S.

    2004-07-01

    The Forschungszentrum Rossendorf (FZR) e.V. is constructing a new large-scale test facility, TOPFLOW, for thermalhydraulic single-effect tests. The acronym stands for transient two phase flow test facility. It will mainly be used for the investigation of generic and applied steady-state and transient two-phase flow phenomena and for the development and validation of models for computational fluid dynamics (CFD) codes. The manual of the test facility must always be available to the staff in the control room and is binding for personnel during operation and during reconstruction of the facility. (orig./GL)

  6. Computer-aided system for cryogenic research facilities

    International Nuclear Information System (INIS)

    Gerasimov, V.P.; Zhelamsky, M.V.; Mozin, I.V.; Repin, S.S.

    1994-01-01

    A computer-aided system has been developed for a more effective choice and optimization of the design and manufacturing technologies of the superconductor for the magnet system of the International Thermonuclear Experimental Reactor (ITER), with the aim of ensuring superconductor certification. The computer-aided system provides acquisition, processing, storage and display of data describing the ongoing tests, as well as the detection of any parameter deviations and their analysis. In addition, it generates commands to switch off the equipment in emergency situations. (orig.)

  7. In-facility transport code review

    International Nuclear Information System (INIS)

    Spore, J.W.; Boyack, B.E.; Bohl, W.R.

    1996-07-01

    The following computer codes were reviewed by the In-Facility Transport Working Group for application to the in-facility transport of radioactive aerosols, flammable gases, and/or toxic gases: (1) CONTAIN, (2) FIRAC, (3) GASFLOW, (4) KBERT, and (5) MELCOR. Based on the review criteria as described in this report and the versions of each code available at the time of the review, MELCOR is the best code for the analysis of in-facility transport when multidimensional effects are not significant. When multidimensional effects are significant, GASFLOW should be used.

  8. The dynamic analysis facility at the Chalk River Nuclear Laboratories

    International Nuclear Information System (INIS)

    Argue, D.S.; Howatt, W.T.

    1979-10-01

    The Dynamic Analysis Facility at the Chalk River Nuclear Laboratories (CRNL) of Atomic Energy of Canada Limited (AECL) comprises a Hybrid Computer, consisting of two Applied Dynamics International AD/FIVE analog computers and a Digital Equipment Corporation (DEC) PDP-11/55 digital computer, and a Program Development System based on a DEC PDP-11/45 digital computer. This report describes the functions of the various hardware components of the Dynamic Analysis Facility and the interactions between them. A brief description of the software available to the user is also given. (auth)

  9. Performance assessment of the proposed Monitored Retrievable Storage Facility

    International Nuclear Information System (INIS)

    Chockie, A.D.; Hostick, C.J.; Winter, C.

    1986-02-01

    Pacific Northwest Laboratory (PNL) has completed a performance evaluation of the proposed monitored retrievable storage (MRS) facility. This study was undertaken as part of the Department of Energy MRS Program at PNL. The objective of the performance evaluation was to determine whether the conceptual MRS facility would be able to process spent fuel at the specified design rate of 3600 metric tons of uranium (MTU) per year. The performance of the proposed facility was assessed using the computer model COMPACT (Computer Optimization of Processing and Cask Transport) to simulate facility operations. The COMPACT model consisted of three application models, each of which addressed a different aspect of the facility's operation: the MRS/waste transportation interface; cask handling capability; and disassembly/consolidation (hot cell) operations. Our conclusions, based on the assessment of design criteria for the proposed facility, are as follows: Facilities and equipment throughout the facility have capability beyond the 3600 MTU/y design requirement. This added capability provides a reserve to compensate for unexpected perturbations in shipping or handling of the spent fuel. Calculations indicate that the facility's maximum maintainable processing capability is approximately 4800 MTU/y.
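The margin between design rate and maximum capability reported above can be illustrated with a back-of-the-envelope throughput check in the spirit of the COMPACT capacity analysis. Only the 3600 MTU/y design rate and the ~4800 MTU/y result come from the abstract; the cask-handling parameters below are invented assumptions chosen to reproduce the cited figure, not values from the study.

```python
# Hedged sketch: annual throughput as a product of per-cask load, cask rate,
# and operating weeks. All handling parameters are illustrative assumptions.

def annual_throughput(mtu_per_cask: float, casks_per_week: float,
                      operating_weeks: float) -> float:
    """Maximum maintainable processing rate in MTU per year."""
    return mtu_per_cask * casks_per_week * operating_weeks

design_rate = 3600.0  # MTU/y design requirement cited in the abstract
# Hypothetical cask-handling parameters chosen to reproduce the cited margin:
capacity = annual_throughput(mtu_per_cask=12.0, casks_per_week=8.0,
                             operating_weeks=50.0)
print(capacity)                 # 4800.0
print(capacity >= design_rate)  # True: the facility carries reserve capacity
```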

  10. Three-dimensional coupled Monte Carlo-discrete ordinates computational scheme for shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Chen, Y.; Fischer, U.

    2005-01-01

    Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated by their complex geometries and their large dimensions, including bulk shields several meters thick. While the complexity of the geometry can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-discrete ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate particle generation and transport in the target region, with its complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT, and a newly developed coupling interface program for the mapping process. Test calculations were performed and compared with MCNP solutions; satisfactory agreement was obtained between the two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme and program system are a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)

  11. Materials and Life Science Experimental Facility at the Japan Proton Accelerator Research Complex III: Neutron Devices and Computational and Sample Environments

    Directory of Open Access Journals (Sweden)

    Kaoru Sakasai

    2017-08-01

    Neutron devices such as neutron detectors, optical devices including supermirror devices and 3He neutron spin filters, and choppers have been successfully developed and installed at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC), Tokai, Japan. Four software components of the MLF computational environment, instrument control, data acquisition, data analysis, and a database, have been developed and deployed at MLF. MLF also provides a wide variety of sample environment options, including high and low temperatures, high magnetic fields, and high pressures. This paper describes the current status of neutron devices and the computational and sample environments at MLF.

  12. Automated approach to nuclear facility safeguards effectiveness evaluation

    International Nuclear Information System (INIS)

    1977-01-01

    Concern over the security of nuclear facilities has generated a need for a reliable, time-efficient, and easily applied method of evaluating the effectiveness of safeguards systems. Such an evaluation technique could be used (1) by the Nuclear Regulatory Commission to evaluate a licensee's proposal, (2) to assess the security status of a system, or (3) to design and/or upgrade nuclear facilities. The technique should be capable of starting with basic information, such as the facility layout and performance parameters for physical protection components, and analyzing that information so that a reliable overall facility evaluation is obtained. Responding to this expressed need, an automated approach to facility safeguards effectiveness evaluation has been developed. This procedure consists of a collection of functional modules for facility characterization, critical path generation, and path evaluation combined into a continuous stream of operations. The technique has been implemented on an interactive computer time-sharing system and makes use of computer graphics for the handling and presentation of information. Using this technique, a thorough facility evaluation can be made by systematically varying parameters that characterize the physical protection components of a facility according to changes in perceived adversary attributes and strategy, environmental conditions, and site status.

  13. AMRITA -- A computational facility

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, J.E. [California Inst. of Tech., CA (US); Quirk, J.J.

    1998-02-23

    Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  14. Computer applications for the Fast Flux Test Facility

    International Nuclear Information System (INIS)

    Worth, G.A.; Patterson, J.R.

    1976-01-01

    Computer applications for the FFTF reactor include plant surveillance functions and fuel handling and examination control functions. Plant surveillance systems provide the reactor operator with a selection of over forty continuously updated, formatted displays of correlated data. All data are checked for limits and validity, and the operator is advised of any anomaly. Data are also recorded on magnetic tape for historical purposes. The system also provides calculated variables, such as reactor thermal power and anomalous reactivity. Supplementing the basic plant surveillance computer system is a minicomputer system that monitors the reactor cover gas to detect and characterize absorber or fuel pin failures. In addition to plant surveillance functions, computers are used in the FFTF for controlling selected refueling equipment and for post-irradiation fuel pin examination. Four fuel handling or examination systems operate under computer control with manual monitoring and override capability.

  15. Performance analysis of cloud computing services for many-tasks scientific computing

    NARCIS (Netherlands)

    Iosup, A.; Ostermann, S.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.

    2011-01-01

    Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a

  16. Operational facility-integrated computer system for safeguards

    International Nuclear Information System (INIS)

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated, nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarming, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring.

  17. Visitor's Computer Guidelines | CTIO

    Science.gov (United States)

    Guidelines for visiting astronomers on the use of computing facilities and network connections at the Cerro Tololo Inter-American Observatory (CTIO).

  18. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

    The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operation of the computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. For the increased data rate of future LHC operation, together with high-pileup interactions, improvements in the usage of the current computing facilities and new technologies have become necessary. Especially for the challenge of the future HL-LHC, a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in LHC Run 2 and future computing facilities for the HL-LHC runs, using flexible computing technologies such as commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks, providing the capacity for the increasing needs of large-scale scientific computing.

  19. Design of an error-free nondestructive plutonium assay facility

    International Nuclear Information System (INIS)

    Moore, C.B.; Steward, W.E.

    1987-01-01

    An automated, at-line nondestructive assay (NDA) laboratory is installed in facilities recently constructed at the Savannah River Plant. The laboratory will enhance nuclear materials accounting in new plutonium scrap and waste recovery facilities. The advantages of at-line NDA operations will not be realized if results are clouded by errors in analytical procedures, sample identification, record keeping, or techniques for extracting samples from process streams. Minimization of such errors has been a primary design objective for the new facility. Concepts for achieving that objective include mechanizing the administrative tasks of scheduling activities in the laboratory, identifying samples, recording and storing assay data, and transmitting results information to process control and materials accounting functions. These concepts have been implemented in an analytical computer system that is programmed to avoid the obvious sources of error encountered in laboratory operations. The laboratory computer exchanges information with process control and materials accounting computers, transmitting results information and obtaining process data and accounting information as required to guide process operations and maintain current records of materials flow through the new facility.

  20. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  1. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide across more than 50 sites. The operation of the system requires stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with a description of the monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact on improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites to conduct workflows, in order to maximize workflow efficiencies. The performance against these tests seen at the sites during the first years of LHC running is reviewed as well.

  2. PROFEAT Update: A Protein Features Web Server with Added Facility to Compute Network Descriptors for Studying Omics-Derived Networks.

    Science.gov (United States)

    Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z

    2017-02-03

    The studies of biological, disease, and pharmacological networks are facilitated by systems-level investigations using computational tools. In particular, the network descriptors developed in other disciplines have found increasing applications in the study of protein, gene regulatory, metabolic, disease, and drug-targeted networks. Facilities are provided by public web servers for computing network descriptors, but many descriptors are not covered, including those used or useful for biological studies. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi for computing up to 329 network descriptors and protein-protein interaction descriptors. PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by literature-reported studies of the biological networks derived from the genome, interactome, transcriptome, metabolome, and diseasome profiles.
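To make the idea of a topological network descriptor concrete, here is a minimal sketch computing two of the simplest ones for an unweighted, undirected network: node degree and network density. The toy graph and descriptor selection are purely illustrative and do not reflect PROFEAT's actual API or descriptor set.

```python
# Hedged sketch: elementary topological descriptors of an unweighted network,
# represented as an adjacency dict. Node names are hypothetical.

def density(adj: dict) -> float:
    """Edges present divided by edges possible in an undirected graph."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2  # each edge counted twice
    return edges / (n * (n - 1) / 2)

# Toy protein-protein interaction network (illustrative, not real data).
ppi = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}
degrees = {node: len(nbrs) for node, nbrs in ppi.items()}
print(degrees["C"])            # 3: node C interacts with A, B, and D
print(round(density(ppi), 2))  # 0.67: 4 of 6 possible edges are present
```

Real descriptor suites extend this pattern to shortest-path, centrality, and weighted variants, which is where edge- and node-weighted networks diverge from the unweighted case.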

  3. Development of a personal computer based facility-level SSAC component and inspector support system

    International Nuclear Information System (INIS)

    Markov, A.

    1989-08-01

    Research Contract No. 4658/RB was conducted between the IAEA and the Bulgarian Committee on Use of Atomic Energy for Peaceful Purposes. The contract required the Committee to develop and program a personal computer based software package to be used as a facility-level computerized State System of Accounting and Control (SSAC) at an off-load power reactor. The software delivered, called the National Safeguards System (NSS), keeps track of all fuel assembly activity at a power reactor and generates all ledgers, MBA material balances, and any required reports to national or international authorities. The NSS is designed to operate on a PC/AT or compatible equipment with a hard disk of 20 MB, a color graphics monitor or adaptor, and at least one floppy disk drive, 360 Kb. The programs are written in Basic (compiler 2.0). They are executed under MS DOS 3.1 or later.

  4. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
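The additive relation the abstract describes can be sketched in a few lines. The function and variable names below are illustrative, not taken from the patent; only the three-term sum itself comes from the abstract.

```python
# Minimal sketch of the relation described in the abstract:
# future facility condition = time-period-specific maintenance cost
#                           + modernization factor + backlog factor.

def future_facility_conditions(maintenance_cost: float,
                               modernization_factor: float,
                               backlog_factor: float) -> float:
    """Sum the three time-period-specific terms for one planning period."""
    return maintenance_cost + modernization_factor + backlog_factor

# One hypothetical planning period (all dollar figures invented):
print(future_facility_conditions(120_000.0, 35_000.0, 15_000.0))  # 170000.0
```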

  5. Race, wealth, and solid waste facilities in North Carolina.

    Science.gov (United States)

    Norton, Jennifer M; Wing, Steve; Lipscomb, Hester J; Kaufman, Jay S; Marshall, Stephen W; Cravey, Altha J

    2007-09-01

    Concern has been expressed in North Carolina that solid waste facilities may be disproportionately located in poor communities and in communities of color, that this represents an environmental injustice, and that solid waste facilities negatively impact the health of host communities. Our goal in this study was to conduct a statewide analysis of the location of solid waste facilities in relation to community race and wealth. We used census block groups to obtain racial and economic characteristics, and information on solid waste facilities was abstracted from solid waste facility permit records. We used logistic regression to compute prevalence odds ratios for 2003, and Cox regression to compute hazard ratios for facilities issued permits between 1990 and 2003. The adjusted prevalence odds of a solid waste facility were 2.8 times greater in block groups with ≥ 50% people of color than in block groups with lower proportions of people of color, and greater in block groups with low mean house values than in those with house values ≥ $100,000. Among block groups that did not have a previously permitted solid waste facility, the adjusted hazard of a new permitted facility was 2.7 times higher in block groups with ≥ 50% people of color. Solid waste facilities present numerous public health concerns. In North Carolina, solid waste facilities are disproportionately located in communities of color and low wealth. In the absence of action to promote environmental justice, the continued need for new facilities could exacerbate this environmental injustice.
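The prevalence odds ratio the study estimates (with covariate adjustment) via logistic regression has a simple crude analogue computed directly from a 2x2 exposure-by-outcome table. The counts below are invented for illustration and chosen so the crude value lands near the cited 2.8; they are not the study's data.

```python
# Hedged sketch: crude prevalence odds ratio from a 2x2 table.
# Rows: block groups >= 50% people of color vs. the rest;
# columns: permitted solid waste facility present vs. absent.

def odds_ratio(exposed_cases: int, exposed_noncases: int,
               unexposed_cases: int, unexposed_noncases: int) -> float:
    """(a/b) / (c/d) for a 2x2 exposure-by-outcome table."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical counts (a=28, b=72, c=45, d=324):
print(round(odds_ratio(28, 72, 45, 324), 1))  # 2.8
```

Adjusted estimates, as in the study, would instead come from a logistic model with wealth and other covariates included; the crude ratio is only the unadjusted starting point.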

  6. Rancang Bangun STIKI Class Facilities E-Complaint

    OpenAIRE

    Ni Kadek Ariasih; I Made Gede Sri Artha

    2017-01-01

    STMIK STIKOM Indonesia is one of the institutions in the field of computer-based education. In order to support the effectiveness of the teaching and learning activities that take place, a service is needed that ensures the availability of adequate classroom facilities and handles complaints when there are problems with facilities in the classroom or laboratory. So far, the management of complaints against classroom or laboratory facilities, which is handled by the Househo...

  7. A computer control system for a research reactor

    International Nuclear Information System (INIS)

    Crawford, K.C.; Sandquist, G.M.

    1987-01-01

    Most reactor applications until now have not required computer control of core output. Commercial reactors are generally operated at a constant power output to provide baseline power. However, if commercial reactor cores are to become load-following over a wide range, then centralized digital computer control is required to make the entire facility respond as a single unit to continual changes in power demand. Navy and research reactors are much smaller and simpler and are operated at constant power levels as required, without concern for the number of operators required to operate the facility. For navy reactors, centralized digital computer control may provide space savings and reduced personnel requirements. Computer control offers research reactors the versatility to efficiently change a system to develop new ideas. The operation of any reactor facility would be enhanced by a controller that does not panic and continually monitors all facility parameters. Eventually, very sophisticated computer control systems may be developed that will sense operational problems, diagnose the problem, and, depending on its severity, immediately activate safety systems or consult with operators before taking action.

  8. New Mandatory Computer Security Course

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Just like any other organization, CERN is permanently under attack - even right now. Consequently it's important to be vigilant about security risks, protecting CERN's reputation - and your work. The availability, integrity and confidentiality of CERN's computing services and the unhindered operation of its accelerators and experiments come down to the combined efforts of the CERN Security Team and you. In order to keep pace with attack trends, the Security Team regularly reminds CERN users about the computer security risks, and about the rules for using CERN's computing facilities. Since 2007, newcomers have had to follow a dedicated basic computer security course informing them about the "Do's" and "Don'ts" when using CERN's computing facilities. This course has recently been redesigned. It is now mandatory for all CERN members (users and staff) owning a CERN computer account and must be followed once every three years. Members who...

  9. Altitude simulation facility for testing large space motors

    Science.gov (United States)

    Katz, U.; Lustig, J.; Cohen, Y.; Malkin, I.

    1993-02-01

    This work describes the design of an altitude simulation facility for testing the AKM motor installed in the 'Ofeq' satellite launcher. The facility, which is controlled by a computer, consists of a diffuser and a single-stage ejector fed with preheated air. The calculations of performance and dimensions of the gas extraction system were conducted according to a one-dimensional analysis. Tests were carried out on a small-scale model of the facility in order to examine the design concept, then the full-scale facility was constructed and operated. There was good agreement among the results obtained from the small-scale facility, from the full-scale facility, and from calculations.

  10. Control system of test and research facilities for nuclear energy industry

    International Nuclear Information System (INIS)

    1983-01-01

    IHI manufactures several kinds of test and research facilities used for the research and development of new types of power reactors, solidification systems for high-level radioactive liquid waste, and light water reactor safety research. These facilities are usually new types of plants themselves, so their control systems have to be designed individually for each plant from basic principles. They have many operation modes because of their research and development purposes, so operation has to be automated and requires complicated sequence control systems. In addition to these requirements, the detailed design is rarely fixed on schedule and is often modified during the initial start-up period. Therefore, computer control systems were applied to these facilities, with CRT displays for man-machine communication, earlier than to commercial power plants, because in a computer system the control logic is not hard-wired but programmed in software and can be easily modified. In this paper, two typical computer control systems, one for a PWR reflood test facility and another for a mock-up test facility for solidification of liquid waste, are introduced. (author)

  11. Systems management of facilities agreements

    International Nuclear Information System (INIS)

    Blundell, A.

    1998-01-01

    The various types of facilities agreements, the historical obstacles to implementation of agreement management systems and the new opportunities emerging as industry is beginning to make an effort to overcome these obstacles, are reviewed. Barriers to computerized agreement management systems (lack of consistency, lack of standards, scarcity of appropriate computer software) are discussed. Characteristic features of a model facilities agreement management system and the forces driving the changing attitudes towards such systems (e.g. mergers) are also described

  12. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. LLNL

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid-2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role it plays, CORBA was tested to ensure adequate performance.

  13. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    Science.gov (United States)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1999-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  14. LLNL superconducting magnets test facility

    Energy Technology Data Exchange (ETDEWEB)

    Manahan, R; Martovetsky, N; Moller, J; Zbasnik, J

    1999-09-16

    The FENIX facility at Lawrence Livermore National Laboratory was upgraded and refurbished in 1996-1998 for testing CICC superconducting magnets. The FENIX facility was used for high-current, short-sample tests of superconductors for fusion programs in the late 1980s and early 1990s. The new facility includes a 4-m diameter vacuum vessel, two refrigerators, a 40 kA, 42 V computer-controlled power supply, a new switchyard with a dump resistor, a new helium distribution valve box, several sets of power leads, a data acquisition system, and other auxiliary systems, which provide considerable flexibility for testing a wide variety of superconducting magnets over a wide range of parameters. The detailed parameters and capabilities of this test facility and its systems are described in the paper.

  15. Scientific Computing Strategic Plan for the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Whiting, Eric Todd

    2015-01-01

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely heavily on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  16. Implementation of computer security at nuclear facilities in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Lochthofen, Andre; Sommer, Dagmar [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    2013-07-01

    In recent years, electrical and I&C components in nuclear power plants (NPPs) have been replaced by software-based components. With the increasing number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also grown. In order to maintain nuclear security, conventional physical protection measures have to be complemented by protection measures in the field of computer security. Therefore, the existing security management processes of the NPPs have to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples of the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  17. Implementation of computer security at nuclear facilities in Germany

    International Nuclear Information System (INIS)

    Lochthofen, Andre; Sommer, Dagmar

    2013-01-01

    In recent years, electrical and I&C components in nuclear power plants (NPPs) have been replaced by software-based components. With the increasing number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also grown. In order to maintain nuclear security, conventional physical protection measures have to be complemented by protection measures in the field of computer security. Therefore, the existing security management processes of the NPPs have to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples of the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  18. HVAC optimization as facility requirements change with corporate restructuring

    Energy Technology Data Exchange (ETDEWEB)

    Rodak, R.R.; Sankey, M.S.

    1997-06-01

    The hyper-competitive, dynamic 1990s forced many corporations to "right-size," relocating resources and equipment, and even consolidating. These changes led to utility reductions when HVAC optimization was thoroughly addressed and energy conservation opportunities were identified and properly designed. This is particularly true when the facility's heating and cooling systems are matched to the load changes attributed to the reduction of staff and computers. Computers have been downsized and processing power per unit of energy input has increased; thus, the need for large mainframe computer centers, and their associated high-intensity energy usage, has been reduced or eliminated. Cooling loads, therefore, have also been reduced.

  19. Nuclear fuel cycle facility accident analysis handbook

    International Nuclear Information System (INIS)

    Ayer, J.E.; Clark, A.T.; Loysen, P.; Ballinger, M.Y.; Mishima, J.; Owczarski, P.C.; Gregory, W.S.; Nichols, B.D.

    1988-05-01

    The Accident Analysis Handbook (AAH) covers four generic facilities: fuel manufacturing, fuel reprocessing, waste storage/solidification, and spent fuel storage; and six accident types: fire, explosion, tornado, criticality, spill, and equipment failure. These are the accident types considered to make major contributions to the radiological risk from accidents in nuclear fuel cycle facility operations. The AAH will enable the user to calculate source term releases from accident scenarios manually or by computer. A major feature of the AAH is development of accident sample problems to provide input to source term analysis methods and transport computer codes. Sample problems and illustrative examples for different accident types are included in the AAH
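    A common hand-calculation form for the source term releases mentioned above is the five-factor formula, ST = MAR × DR × ARF × RF × LPF (material at risk, damage ratio, airborne release fraction, respirable fraction, leak path factor), familiar from facility accident analysis practice. The sketch below is illustrative only; it is not necessarily the AAH's exact procedure, and the numbers are invented.

```python
def source_term(mar_g, dr, arf, rf, lpf):
    """Respirable release (grams) from an accident scenario via the
    five-factor formula ST = MAR * DR * ARF * RF * LPF."""
    for name, f in (("DR", dr), ("ARF", arf), ("RF", rf), ("LPF", lpf)):
        if not 0.0 <= f <= 1.0:
            raise ValueError(f"{name} must be a fraction in [0, 1]")
    return mar_g * dr * arf * rf * lpf

# Hypothetical fire involving 1 kg of powder, all of it affected,
# with illustrative release fractions and a 10% leak path factor:
st = source_term(mar_g=1000.0, dr=1.0, arf=6e-3, rf=0.01, lpf=0.1)
```

    The same function applies to any of the six accident types once the scenario-specific fractions are chosen.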

  20. Reminder: Mandatory Computer Security Course

    CERN Multimedia

    IT Department

    2011-01-01

    Just like any other organization, CERN is permanently under attack, even right now. Consequently it's important to be vigilant about security risks, protecting CERN's reputation and your work. The availability, integrity and confidentiality of CERN's computing services and the unhindered operation of its accelerators and experiments come down to the combined efforts of the CERN Security Team and you. In order to keep pace with attack trends, the Security Team regularly reminds CERN users about computer security risks and about the rules for using CERN's computing facilities. Therefore, a new dedicated basic computer security course has been designed to inform you about the "Do's" and "Don'ts" when using CERN's computing facilities. This course is mandatory for all persons owning a CERN computer account and must be followed once every three years. Users who have never done the course, or whose course needs to be renewe...

  1. Facility/equipment performance evaluation using microcomputer simulation analysis

    International Nuclear Information System (INIS)

    Chockie, A.D.; Hostick, C.J.

    1985-08-01

    A computer simulation analysis model was developed at the Pacific Northwest Laboratory to assist in assuring the adequacy of the Monitored Retrievable Storage facility design to meet the specified spent nuclear fuel throughput requirements. The microcomputer-based model was applied to the analysis of material flow, equipment capability and facility layout. The simulation analysis evaluated uncertainties concerning both facility throughput requirements and process duration times as part of the development of a comprehensive estimate of facility performance. The evaluations provided feedback into the design review task to identify areas where design modifications should be considered

  2. Physics Detector Simulation Facility (PDSF) architecture/utilization

    International Nuclear Information System (INIS)

    Scipioni, B.

    1993-05-01

    The current systems architecture for the SSCL's Physics Detector Simulation Facility (PDSF) is presented. Systems analysis data are presented and discussed. In particular, these data disclose how effectively the facility is utilized to meet the needs of physics computing, especially with regard to parallel architectures and processing. Detailed design plans for the highly networked, symmetric, parallel, UNIX workstation-based facility are given and discussed in light of the design philosophy. Included are network, CPU, disk, router, concentrator, tape, user and job capacities and throughput

  3. Computing in Research.

    Science.gov (United States)

    Ashenhurst, Robert L.

    The introduction and diffusion of automatic computing facilities during the 1960's is reviewed; it is described as a time when research strategies in a broad variety of disciplines changed to take advantage of the newfound power provided by the computer. Several types of typical problems encountered by researchers who adopted the new technologies,…

  4. Providing a computing environment for a high energy physics workshop

    International Nuclear Information System (INIS)

    Nicholls, J.

    1991-03-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of supporting much more than simple editing and electronic mail over leased lines. This presentation describes the pioneering effort undertaken by the Computing Department/Division at Fermilab in providing a local computing facility with world-wide networking capability for the "Physics at Fermilab in the 1990's" workshop held in Breckenridge, Colorado, in August 1989, as well as the enhanced facilities provided for the 1990 Summer Study on High Energy Physics at Snowmass, Colorado, in June/July 1990. Issues discussed include the type and sizing of the facilities, advance preparations, shipping, and on-site support, as well as an evaluation of the value of the facility to the workshop participants

  5. MONITOR: A computer model for estimating the costs of an integral monitored retrievable storage facility

    International Nuclear Information System (INIS)

    Reimus, P.W.; Sevigny, N.L.; Schutz, M.E.; Heller, R.A.

    1986-12-01

    The MONITOR model is a FORTRAN 77 based computer code that provides parametric life-cycle cost estimates for a monitored retrievable storage (MRS) facility. MONITOR is very flexible in that it can estimate the costs of an MRS facility operating under almost any conceivable nuclear waste logistics scenario. The model can also accommodate input data of varying degrees of complexity and detail, ranging from very simple to more complex, which makes it ideal for use in the MRS program, where new designs and new cost data are frequently offered for consideration. MONITOR can be run as an independent program, or it can be interfaced with the Waste System Transportation and Economic Simulation (WASTES) model, a program that simulates the movement of waste through a complete nuclear waste disposal system. The WASTES model drives the MONITOR model by providing it with the annual quantities of waste that are received, stored, and shipped at the MRS facility. Three runs of MONITOR are documented in this report. Two of the runs are for Version 1 of the MONITOR code: a simulation that uses the costs developed by the Ralph M. Parsons Company in the 2A (backup) version of the MRS cost estimate. In one of these runs MONITOR was run as an independent model, and in the other MONITOR was run using an input file generated by the WASTES model. The two runs correspond to identical cases, and the fact that they gave identical results verified that the code performed the same calculations in both modes of operation. The third run was made for Version 2 of the MONITOR code: a simulation that uses the costs developed by the Ralph M. Parsons Company in the 2B (integral) version of the MRS cost estimate. This run was made with MONITOR run as an independent model. The results of several cases have been verified by hand calculations
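    As a rough illustration of what a parametric life-cycle cost model driven by annual waste flows looks like, the sketch below discounts a fixed capital cost plus throughput-dependent annual costs. MONITOR itself is FORTRAN 77 and far more detailed; every name and unit cost here is an invented assumption.

```python
def lifecycle_cost(annual_received, annual_stored, annual_shipped,
                   capital=500e6, fixed_om=20e6,
                   cost_per_receipt=50e3, cost_per_storage_yr=5e3,
                   cost_per_shipment=40e3, discount_rate=0.05):
    """Discounted life-cycle cost from annual waste-flow quantities,
    the kind of input a driver model such as WASTES would supply."""
    total = capital  # up-front construction cost in year 0
    for year, (r, s, t) in enumerate(
            zip(annual_received, annual_stored, annual_shipped), start=1):
        annual = (fixed_om + r * cost_per_receipt
                  + s * cost_per_storage_yr + t * cost_per_shipment)
        total += annual / (1 + discount_rate) ** year
    return total

# Three years of receipts, inventory, and shipments (canisters):
cost = lifecycle_cost([400, 500, 500], [400, 900, 1400], [0, 0, 100])
```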

  6. Initial operation of the Holifield facility

    International Nuclear Information System (INIS)

    Ball, J.B.

    1982-01-01

    The Holifield Heavy Ion Research Facility (HHIRF) is located at Oak Ridge National Laboratory and operated by the Physics Division as a national user facility for research in heavy-ion science. The facility operates two accelerators: the new Pelletron electrostatic accelerator, designed to accelerate all ions at terminal potentials up to 25 million volts, and the Oak Ridge Isochronous Cyclotron (ORIC), which, in addition to its stand-alone capabilities, has been modified to serve also as a booster accelerator for ion beams from the Pelletron. In addition, a number of state-of-the-art experimental devices, a new data acquisition computer system, and special user accommodations have been implemented as part of the facility. The construction of the facility was officially completed in June of this year. This paper reports on the present status of facility operation, observations from testing and running of the 25 MV Pelletron, experience with coupled operation of the Pelletron with the ORIC booster, and a brief summary of the experimental devices now available at the facility

  7. Initial operation of the Holifield Facility

    International Nuclear Information System (INIS)

    Ball, J.B.

    1983-01-01

    The Holifield Heavy Ion Research Facility (HHIRF) is located at Oak Ridge National Laboratory and operated by the Physics Division as a national user facility for research in heavy-ion science. The facility operates two accelerators: the new Pelletron electrostatic accelerator, designed to accelerate all ions at terminal potentials up to 25 million volts, and the Oak Ridge Isochronous Cyclotron (ORIC), which, in addition to its stand-alone capabilities, has been modified to serve also as a booster accelerator for ion beams from the Pelletron. In addition, a number of state-of-the-art experimental devices, a new data acquisition computer system, and special user accommodations have been implemented as part of the facility. The construction of the facility was officially completed in June of this year. This paper reports on the present status of facility operation, observations from testing and running of the 25 MV Pelletron, experience with coupled operation of the Pelletron with the ORIC booster, and a brief summary of the experimental devices now available at the facility

  8. Annual report to the Laser Facility Committee, 1982

    International Nuclear Information System (INIS)

    1982-03-01

    The report covers the work done at, or in association with, the Central Laser Facility during the year April 1981 to March 1982 under the headings: glass laser facility development, gas laser development, laser plasma interactions, transport and particle emission studies, ablative acceleration and compression studies, spectroscopy and XUV lasers, and theory and computation. Publications based on the work of the facility which have either appeared or been accepted for publication during the year are listed. (U.K.)

  9. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    This report describes computer performance evaluations of the FACOM 230-75 computers at JAERI. The evaluations cover the following items: (1) cost/benefit analysis of timesharing terminals, (2) analysis of the response time of timesharing terminals, (3) analysis of throughput time for batch job processing, (4) estimation of current potential demands for computer time, and (5) determination of the appropriate number of card readers and line printers. These evaluations are made mainly from the standpoint of reducing the cost of the computing facilities. The techniques adopted are very practical ones. This report will be useful for those who are concerned with the management of a computing installation. (author)
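    The response-time analysis of timesharing terminals (item 2) is not spelled out in the abstract; a textbook M/M/1 queueing model conveys the flavor of such an analysis. This is an illustrative assumption, not the method actually used at JAERI.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue, R = 1 / (mu - lambda),
    valid only while utilization rho = lambda / mu stays below 1."""
    if arrival_rate >= service_rate:
        raise ValueError("system is unstable (utilization >= 1)")
    return 1.0 / (service_rate - arrival_rate)

# Terminals submitting 8 commands/s to a CPU completing 10 commands/s:
r = mm1_response_time(8.0, 10.0)   # 0.5 s mean response time
```

    The model makes the cost trade-off visible: adding terminals raises the arrival rate, and response time grows sharply as utilization approaches 1.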

  10. Physics detector simulation facility system software description

    International Nuclear Information System (INIS)

    Allen, J.; Chang, C.; Estep, P.; Huang, J.; Liu, J.; Marquez, M.; Mestad, S.; Pan, J.; Traversat, B.

    1991-12-01

    Large and costly detectors will be constructed during the next few years to study the interactions produced by the SSC. Efficient, cost-effective designs for these detectors will require careful thought and planning. Because it is not possible to fully test a proposed design in a scaled-down version, the adequacy of a proposed design will be determined by a detailed computer model of the detectors. Physics and detector simulations will be performed on the computer model using the high-powered computing systems at the Physics Detector Simulation Facility (PDSF). The SSCL has particular computing requirements for high-energy physics (HEP) Monte Carlo calculations for the simulation of SSCL physics and detectors. The numerical calculations to be performed in each simulation are lengthy and detailed; they could require months per run on a VAX 11/780 computer and may produce several gigabytes of data per run. Consequently, a distributed computing environment of several networked high-speed computing engines is envisioned to meet these needs. These networked computers will form the basis of a centralized facility for SSCL physics and detector simulation work. Our computer planning groups have determined that the most efficient, cost-effective way to provide these high-performance computing resources at this time is with RISC-based UNIX workstations. The modeling and simulation application software that will run on the computing system is usually written by physicists in the FORTRAN language and may need thousands of hours of supercomputing time. The system software is the "glue" that integrates the distributed workstations and allows them to be managed as a single entity. This report will address the computing strategy for the SSC

  11. The Education Value of Cloud Computing

    Science.gov (United States)

    Katzan, Harry, Jr.

    2010-01-01

    Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…

  12. Description of mathematical models and computer programs

    International Nuclear Information System (INIS)

    1977-01-01

    The paper gives a description of mathematical models and computer programs for analysing possible strategies for spent fuel management, with emphasis on economic analysis. The computer programs developed describe the material flows, facility construction schedules, capital investment schedules and operating costs for the facilities used in managing the spent fuel. The computer programs use a combination of simulation and optimization procedures for the economic analyses. Many of the fuel cycle steps (such as spent fuel discharges, storage at the reactor, and transport to the RFCC) are described in physical and economic terms through simulation modeling, while others (such as reprocessing plant size and commissioning schedules, interim storage facility commissioning schedules, etc.) are subjected to economic optimization procedures to determine the approximate lowest-cost plans from among the available feasible alternatives
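    The simulate-then-optimize pattern described above can be sketched in a few lines: simulate the spent-fuel arisings, then enumerate candidate reprocessing plant capacities and keep the lowest-cost plan. All names and figures below are invented for illustration; the actual programs are far more detailed.

```python
def simulate_arisings(years, discharge_per_year=100.0):
    """Simulated annual spent-fuel discharges (tonnes/yr), constant here."""
    return [discharge_per_year] * years

def plan_cost(capacity, arisings, capital_per_t=2.0e6, storage_per_t=5.0e4):
    """Capital cost scales with reprocessing capacity; arisings beyond
    capacity go to interim storage each year."""
    capital = capacity * capital_per_t
    storage = sum(max(0.0, a - capacity) * storage_per_t for a in arisings)
    return capital + storage

arisings = simulate_arisings(10)
# Enumerate candidate capacities (tonnes/yr) and pick the cheapest plan:
best_cost, best_capacity = min(
    (plan_cost(c, arisings), c) for c in (50, 100, 150))
```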

  13. Image processing technology for nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Beom; Kim, Woong Ki; Park, Soon Young

    1993-05-01

    Digital image processing techniques have been actively studied since microprocessors and semiconductor memory devices were developed in the 1960s. Image processing boards for personal computers, as well as image processing systems for workstations, are now available and widely applied in medical science, the military, remote inspection, and the nuclear industry. Image processing technology, which provides computer systems with the ability of vision, not only recognizes non-obvious information but also processes large amounts of information, and is therefore applied to fields such as remote measurement, object recognition and decision-making in adverse environments, and analysis of X-ray penetration images in nuclear facilities. In this report, various applications of image processing to nuclear facilities are examined, and image processing techniques are analysed with a view to proposing ideas for future applications. (Author)

  14. Grid computing in Pakistan and opening to Large Hadron Collider experiments

    International Nuclear Information System (INIS)

    Batool, N.; Osman, A.; Mahmood, A.; Rana, M.A.

    2009-01-01

    A grid computing facility was developed at the sister institutes Pakistan Institute of Nuclear Science and Technology (PINSTECH) and Pakistan Institute of Engineering and Applied Sciences (PIEAS) in collaboration with the Large Hadron Collider (LHC) Computing Grid during the early years of the present decade. The grid facility PAKGRID-LCG2, one of the grid nodes in Pakistan, was developed employing mainly local means and is capable of supporting local and international research and computational tasks within the LHC Computing Grid. The functional status of the facility is presented in terms of the number of jobs performed. The facility provides a forum for local researchers in the field of high energy physics to participate in the LHC experiments and related activities at the European particle physics research laboratory (CERN), one of the best physics laboratories in the world. It also provides a platform for an emerging computing technology (CT). (author)

  15. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    Science.gov (United States)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space-vehicle and space-program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission-critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending toward a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC infrastructure as a service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization, and expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits

  16. Scientific user facilities at Oak Ridge National Laboratory: New research capabilities and opportunities

    Science.gov (United States)

    Roberto, James

    2011-10-01

    Over the past decade, Oak Ridge National Laboratory (ORNL) has transformed its research infrastructure, particularly in the areas of neutron scattering, nanoscale science and technology, and high-performance computing. New facilities, including the Spallation Neutron Source, Center for Nanophase Materials Sciences, and Leadership Computing Facility, have been constructed that provide world-leading capabilities in neutron science, condensed matter and materials physics, and computational physics. In addition, many existing physics-related facilities have been upgraded with new capabilities, including new instruments and a high-intensity cold neutron source at the High Flux Isotope Reactor. These facilities are operated for the scientific community and are available to qualified users based on competitive peer-reviewed proposals. User facilities at ORNL currently welcome more than 2,500 researchers each year, mostly from universities. These facilities, many of which are unique in the world, will be reviewed including current and planned research capabilities, availability and operational performance, access procedures, and recent research results. Particular attention will be given to new neutron scattering capabilities, nanoscale science, and petascale simulation and modeling. In addition, user facilities provide a portal into ORNL that can enhance the development of research collaborations. The spectrum of partnership opportunities with ORNL will be described including collaborations, joint faculty, and graduate research and education.

  17. User's guide for the small-angle neutron scattering facility

    International Nuclear Information System (INIS)

    Vlak, W.A.H.M.; Werkhoven, E.J.

    1989-04-01

    This report serves as a manual for users of the small-angle neutron scattering instrument located at beamport HB3 of the High Flux Reactor in Petten. The main part of the text is devoted to the control of the facility and the data handling by means of a μVAX computer. The various possibilities for accessing the facility across computer networks are also discussed. A collection of menu-driven and command-driven programs, which exploit the flexibility of the VMS operating system without requiring detailed knowledge of the computer environment, enables the user to control the instrument. For the convenience of the experienced user, who may wish to update or extend the software, a technical supplement is included. 15 figs.; 8 refs

  18. Quantum information. Teleportation - cryptography - quantum computer

    International Nuclear Information System (INIS)

    Koenneker, Carsten

    2012-01-01

    The following topics are dealt with: reality in the test facility, quantum teleportation, the reality of quanta, interaction-free quantum measurement, rules for quantum computers, quantum computers with ions, spintronics with diamond, the limits of quantum computers, and a look into the future of quantum optics. (HSI)

  19. [Elderlies in street situation or social vulnerability: facilities and difficulties in the use of computational tools].

    Science.gov (United States)

    Frias, Marcos Antonio da Eira; Peres, Heloisa Helena Ciqueto; Pereira, Valclei Aparecida Gandolpho; Negreiros, Maria Célia de; Paranhos, Wana Yeda; Leite, Maria Madalena Januário

    2014-01-01

    This study aimed to identify the advantages and difficulties encountered by older people living on the streets or in social vulnerability in using computers or the internet. It is an exploratory qualitative study in which five elderly people attended by a non-governmental organization located in the city of São Paulo participated. The discourses were analyzed by the content analysis technique and revealed, among the facilitating factors, the ability to clarify doubts with the monitors, the stimulus for new discoveries coupled with proactivity and curiosity, and the development of new skills. The difficulties mentioned were related to physical or cognitive issues, the absence of an instructor, and lack of knowledge of how to interact with the machine. Studies focusing on the elderly population living on the streets or in social vulnerability may contribute evidence to guide the formulation of public policies for this population.

  20. The UK Human Genome Mapping Project online computing service.

    Science.gov (United States)

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of the online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability can be obtained by approaching the UK HGMP-RC directly.

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  2. Mechanistic facility safety and source term analysis

    International Nuclear Information System (INIS)

    PLYS, M.G.

    1999-01-01

    A PC-based computer program was created for facility safety and source term analysis at Hanford. The program has been successfully applied to the mechanistic prediction of source terms from chemical reactions in underground storage tanks, hydrogen combustion in double-contained receiver tanks, and process evaluation, including the potential for runaway reactions in spent nuclear fuel processing. Model features include user-defined facility rooms, flow path geometry, and heat conductors; user-defined non-ideal vapor and aerosol species; pressure- and density-driven gas flows; aerosol transport and deposition; and structure to accommodate facility-specific source terms. Example applications are presented here

  3. How to Bill Your Computer Services.

    Science.gov (United States)

    Dooskin, Herbert P.

    1981-01-01

    A computer facility billing procedure should be designed so that the full costs of a computer center operation are equitably charged to the users. Design criteria, costing methods, and management's role are discussed. (Author/MLF)
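    One simple way to meet the "full costs equitably charged" criterion is full-cost-recovery chargeback: apportion the center's total annual cost across billable resource pools, derive per-unit rates, and bill each user by measured usage. The pool names, weights, and figures below are hypothetical, not taken from the article.

```python
def billing_rates(total_annual_cost, cpu_hours, disk_gb_months,
                  cpu_weight=0.7, disk_weight=0.3):
    """Apportion total cost across resource pools by weight and derive
    per-unit rates so that budgeted usage recovers full costs."""
    return {
        "cpu_hour": total_annual_cost * cpu_weight / cpu_hours,
        "disk_gb_month": total_annual_cost * disk_weight / disk_gb_months,
    }

def user_bill(rates, cpu_hours_used, disk_gb_months_used):
    """Bill one user at the derived unit rates."""
    return (rates["cpu_hour"] * cpu_hours_used
            + rates["disk_gb_month"] * disk_gb_months_used)

rates = billing_rates(1_200_000, cpu_hours=100_000, disk_gb_months=60_000)
bill = user_bill(rates, cpu_hours_used=250, disk_gb_months_used=40)
```

    By construction, billing the entire budgeted usage at these rates returns exactly the total annual cost, which is the equity property the design criteria call for.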

  4. Bevalac Minibeam Facility

    International Nuclear Information System (INIS)

    Schimmerling, W.; Alonso, J.; Morgado, R.; Tobias, C.A.; Grunder, H.; Upham, F.T.; Windsor, A.; Armer, R.A.; Yang, T.C.H.; Gunn, J.T.

    1977-03-01

    The Minibeam Facility is a biomedical heavy-ion beam area at the Bevalac designed to satisfy the following requirements: (1) provide a beam incident in a vertical plane for experiments where a horizontal apparatus significantly increases the convenience of performing an experiment or even determines its feasibility; (2) provide an area that is well shielded with respect to electronic interference so that microvolt signals can be detected with acceptable signal-to-noise ratios; (3) provide a beam of small diameter, typically a few millimeters or less, for various studies of cellular function; and (4) provide a facility for experiments that require long setup and preparation times and apparatus that must be left relatively undisturbed between experiments and that need short periods of beam time. The design of such a facility and its main components is described. In addition to the above criteria, the design was constrained by the desire to have inexpensive, simple devices that work reliably and can be easily upgraded for interfacing to the Biomedical PDP 11/45 computer

  5. Development of a computer code for shielding calculation in X-ray facilities

    International Nuclear Information System (INIS)

    Borges, Diogo da S.; Lava, Deise D.; Affonso, Renato R.W.; Moreira, Maria de L.; Guimaraes, Antonio C.F.

    2014-01-01

    The construction of an effective barrier against the ionizing radiation present in X-ray rooms requires consideration of many variables. The methodology used to specify the thickness of primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the receptor. With these data it was possible to develop a computer program that identifies and uses these variables in functions obtained through graphical regressions offered by NCRP Report-147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls as well as the wall of the darkroom and adjacent areas. With the methodology built, the program is validated by comparing results with a base case provided by that report. The thickness values obtained comprise various materials such as steel, wood and concrete. After validation, the program is applied to a real case of a radiographic room, whose visual construction is done with the help of software used for modeling interiors and exteriors. The construction of barriers by the calculation program resulted in a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3.01, published in September 2011
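
    The abstract does not give the program's equations, but shielding codes of this kind typically invert a broad-beam transmission model fitted to the NCRP-147 curves. A minimal sketch, assuming the Archer transmission form; the fit parameters, design goal, and distances below are illustrative placeholders, not values from the paper or from NCRP-147:

```python
import math

def required_transmission(P, d, W, U, T):
    """Broad-beam transmission factor B needed so the weekly kerma behind
    the barrier stays below the design goal P.
    d: source-to-barrier distance (m); W: workload (mA*min/week);
    U: use factor; T: occupancy factor."""
    return P * d**2 / (W * U * T)

def barrier_thickness(B, alpha, beta, gamma):
    """Invert the Archer transmission model
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)
    for the barrier thickness x (mm)."""
    r = beta / alpha
    return math.log((B**(-gamma) + r) / (1.0 + r)) / (alpha * gamma)

# Placeholder fit parameters (order-of-magnitude plausible for lead, but
# NOT taken from NCRP-147 -- real values depend on material and kVp).
alpha, beta, gamma = 2.346, 15.9, 0.4982
B = required_transmission(P=0.02, d=2.0, W=100.0, U=0.25, T=1.0)
x = barrier_thickness(B, alpha, beta, gamma)
print(f"required transmission B = {B:.2e}, thickness ~ {x:.2f} mm")
```

    The required transmission B falls as workload or occupancy rises, and the Archer inversion converts B into a material thickness once alpha, beta, gamma have been fitted for the material and beam quality.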

  6. Computers in experimental nuclear power facilities

    International Nuclear Information System (INIS)

    Jukl, M.

    1982-01-01

    The CIS 3000 information system, used for monitoring the operating modes of large technological equipment, is described. The CIS system consists of two ADT computers, an external drum store, an analog input side, a bivalent (binary) input side, 4 control consoles with monitors and acoustic signalling, a print-out area with typewriters and punching machines, and linear recorders. Various applications of the installed CIS configuration are described, as is the general-purpose program for processing measured values into a protocol. The program operates in conversational mode. Different processing variants are shown on the display monitor. (M.D.)

  7. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the Path to Ignition

    International Nuclear Information System (INIS)

    Lagin, L J; Bettenhausen, R C; Bowers, G A; Carey, R W; Edwards, O D; Estes, C M; Demaret, R D; Ferguson, S W; Fisher, J M; Ho, J C; Ludwigsen, A P; Mathisen, D G; Marshall, C D; Matone, J M; McGuigan, D L; Sanchez, R J; Shelton, R T; Stout, E A; Tekle, E; Townsend, S L; Van Arsdall, P J; Wilson, E F

    2007-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of 8 beams each using laser hardware that is modularized into more than 6,000 line replaceable units such as optical assemblies, laser amplifiers, and multifunction sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-Megajoule capability of infrared light. During the next two years, the control system will be expanded to include automation of target area systems including final optics, target positioners and

  8. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the path to ignition

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Bowers, G.A.; Carey, R.W.; Edwards, O.D.; Estes, C.M.; Demaret, R.D.; Ferguson, S.W.; Fisher, J.M.; Ho, J.C.; Ludwigsen, A.P.; Mathisen, D.G.; Marshall, C.D.; Matone, J.T.; McGuigan, D.L.; Sanchez, R.J.; Stout, E.A.; Tekle, E.A.; Townsend, S.L.; Van Arsdall, P.J.

    2008-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-MJ, 500-TW, ultraviolet laser system together with a 10-m diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of eight beams each using laser hardware that is modularized into more than 6000 line replaceable units such as optical assemblies, laser amplifiers, and multi-function sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-MJ capability of infrared light. During the next 2 years, the control system will be expanded in preparation for project completion in 2009 to include automation of target area systems including final optics

  9. The Emergence of Large-Scale Computer Assisted Summative Examination Facilities in Higher Education

    NARCIS (Netherlands)

    Draaijer, S.; Warburton, W. I.

    2014-01-01

    A case study is presented of VU University Amsterdam where a dedicated large-scale CAA examination facility was established. In the facility, 385 students can take an exam concurrently. The case study describes the change factors and processes leading up to the decision by the institution to

  10. Computers in Schools: White Boys Only?

    Science.gov (United States)

    Hammett, Roberta F.

    1997-01-01

    Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)

  11. Neutron generator instrumentation at the Department 2350 Neutron Generator Test Facility

    International Nuclear Information System (INIS)

    Bryant, T.C.; Mowrer, G.R.

    1979-06-01

    The computer and waveform digitizing capability at the test facility has allowed several changes in the techniques used to test neutron generators, including the methods used to calibrate the instrumentation and the operation of the test facility itself. These changes have increased the efficiency of the test facility as well as the timing and amplitude accuracy of neutron generator waveforms

  12. Automation of electromagnetic compatability (EMC) test facilities

    Science.gov (United States)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.
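
    The per-band re-examination described above amounts to comparing a measured emissions scan against a frequency-dependent limit mask and flagging the bands that come close to the limit. A generic sketch, not the MSFC software; the data values are made up:

```python
def flag_exceedances(scan, limit, margin_db=0.0):
    """Compare a measured emissions scan [(freq_MHz, level_dBuV), ...]
    against a limit mask [(freq_MHz, limit_dBuV), ...] on the same
    frequency grid, returning the frequencies within `margin_db` of the
    limit -- i.e. the bands worth re-scanning more closely."""
    return [f for (f, lvl), (_, lim) in zip(scan, limit) if lvl >= lim - margin_db]

scan  = [(30, 40.0), (50, 52.5), (100, 47.0), (200, 61.0)]
limit = [(30, 50.0), (50, 50.0), (100, 54.0), (200, 60.0)]
print(flag_exceedances(scan, limit))               # -> [50, 200]
print(flag_exceedances(scan, limit, margin_db=8))  # -> [50, 100, 200]
```

    A nonzero margin widens the net, which matches the workflow above: automated sweep first, then closer manual examination of any marginal bands.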

  13. Parking Navigation for Alleviating Congestion in Multilevel Parking Facility

    OpenAIRE

    Kenmotsu, Masahiro; Sun, Weihua; Shibata, Naoki; Yasumoto, Keiichi; Ito, Minoru

    2012-01-01

    Finding a vacant parking space in a large crowded parking facility takes long time. In this paper, we propose a navigation method that minimizes the parking time based on collected real-time positional information of cars. In the proposed method, a central server in the parking facility collects the information and estimates the occupancy of each parking zone. Then, the server broadcasts the occupancy data to the cars in the parking facility. Each car then computes a parking route with the sh...
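
    The truncated abstract does not give the exact cost function, but the pipeline it describes (server-side occupancy estimation, broadcast, car-side route computation) can be sketched as follows. Penalizing travel time by destination-zone occupancy is an assumption made here for illustration, not the paper's method:

```python
import heapq

def estimate_occupancy(positions, zone_capacity):
    """Server side: count reported car positions per zone and return
    occupancy ratios (0.0 = empty, 1.0 = full)."""
    counts = {z: 0 for z in zone_capacity}
    for zone in positions:
        counts[zone] += 1
    return {z: counts[z] / zone_capacity[z] for z in zone_capacity}

def best_zone_route(graph, occupancy, start):
    """Car side: Dijkstra over the zone graph, with each edge's travel
    time inflated by the destination zone's occupancy (a crude stand-in
    for expected search time). Returns (cost, path) to the cheapest
    zone that still has spare capacity."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, t in graph[u]:
            nd = d + t * (1.0 + occupancy[v])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    target = min((z for z in dist if occupancy[z] < 1.0), key=dist.get)
    path = [target]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return dist[target], path[::-1]

zones = {"A": 10, "B": 10, "C": 10}
occ = estimate_occupancy(["A"] * 10 + ["B"] * 2, zones)
graph = {"A": [("B", 1.0), ("C", 2.0)], "B": [("C", 1.0)], "C": []}
cost, path = best_zone_route(graph, occ, "A")
print(path)  # -> ['A', 'B']
```

    Because the server broadcasts only per-zone occupancy ratios rather than raw positions, each car can run this computation locally against a small, frequently refreshed payload.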

  14. Simulation of facility operations and materials accounting for a combined reprocessing/MOX fuel fabrication facility

    International Nuclear Information System (INIS)

    Coulter, C.A.; Whiteson, R.; Zardecki, A.

    1991-01-01

    We are developing a computer model of facility operations and nuclear materials accounting for a facility that reprocesses spent fuel and fabricates mixed oxide (MOX) fuel rods and assemblies from the recovered uranium and plutonium. The model will be used to determine the effectiveness of various materials measurement strategies for the facility and, ultimately, of other facility safeguards functions as well. The reprocessing portion of the facility consists of a spent fuel storage pond, fuel shear, dissolver, clarifier, three solvent-extraction stages with uranium-plutonium separation after the first stage, and product concentrators. In the fuel fabrication portion, mixed oxide is formed into pellets, the pellets are loaded into fuel rods, and the fuel rods are fabricated into fuel assemblies. These two facility sections are connected by a MOX conversion line in which the uranium and plutonium solutions from reprocessing are converted to mixed oxide. The intermediate MOX conversion line used in the model is based on a design provided by Mike Ehinger of Oak Ridge National Laboratory (private communication). An initial version of the simulation model has been developed for the entire MOX conversion and fuel fabrication sections of the reprocessing/MOX fuel fabrication facility, and this model has been used to obtain inventory difference variance estimates for those sections of the facility. A significant fraction of the data files for the fuel reprocessing section have been developed, but these data files are not yet complete enough to permit simulation of reprocessing operations in the facility. Accordingly, the discussion in the following sections is restricted to the MOX conversion and fuel fabrication lines. 3 tabs

  15. Journal of EEA, Vol. 30, 2013 COMPUTERIZED FACILITIES ...

    African Journals Online (AJOL)

    dell

    Key words: Computer Aided Layout Design, Construction ... Commonly used software are ... popular improvement-type methods are Computerized Relative Allocation of Facilities ... closeness ratings values are given different numerical ...

  16. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Energy Technology Data Exchange (ETDEWEB)

    Holzman, Burt [Fermilab; Bauerdick, Lothar A.T. [Fermilab; Bockelman, Brian [Nebraska U.; Dykstra, Dave [Fermilab; Fisk, Ian [New York U.; Fuess, Stuart [Fermilab; Garzoglio, Gabriele [Fermilab; Girone, Maria [CERN; Gutsche, Oliver [Fermilab; Hufnagel, Dirk [Fermilab; Kim, Hyunwoo [Fermilab; Kennedy, Robert [Fermilab; Magini, Nicolo [Fermilab; Mason, David [Fermilab; Spentzouris, Panagiotis [Fermilab; Tiradani, Anthony [Fermilab; Timm, Steve [Fermilab; Vaandering, Eric W. [Fermilab

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is a growing interest among the cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  17. Risk evaluation system for facility safeguards and security planning

    International Nuclear Information System (INIS)

    Udell, C.J.; Carlson, R.L.

    1987-01-01

    The Risk Evaluation System (RES) is an integrated approach to determining safeguards and security effectiveness and risk. RES combines the planning and technical analysis into a format that promotes an orderly development of protection strategies, planning assumptions, facility targets, vulnerability and risk determination, enhancement planning, and implementation. In addition, the RES computer database program enhances the capability of the analyst to perform a risk evaluation of the facility. The computer database is menu driven using data input screens and contains an algorithm for determining the probability of adversary defeat and risk. Also, base case and adjusted risk data records can be maintained and accessed easily
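
    The abstract does not publish the RES algorithm itself, but a common textbook formulation of safeguards risk, which a tool of this kind might implement, multiplies the probability of interrupting the adversary by the probability of neutralizing them to get system effectiveness P(E), and takes risk as P(A) * (1 - P(E)) * C. A hedged sketch with illustrative values only:

```python
def probability_of_defeat(p_interruption, p_neutralization):
    """System effectiveness P(E): the protective force must both
    interrupt the adversary in time and then neutralize them."""
    return p_interruption * p_neutralization

def risk(p_attack, p_interruption, p_neutralization, consequence):
    """Conditional-risk form R = P(A) * (1 - P(E)) * C, where C scales
    the consequence of a successful adversary action."""
    p_e = probability_of_defeat(p_interruption, p_neutralization)
    return p_attack * (1.0 - p_e) * consequence

base = risk(p_attack=1.0, p_interruption=0.9, p_neutralization=0.8, consequence=1.0)
print(f"base-case risk: {base:.2f}")  # -> base-case risk: 0.28
```

    Keeping base-case and adjusted parameter sets side by side, as the RES database does, lets an analyst see how a proposed enhancement moves P(E) and therefore risk.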

  19. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  20. Computational Modeling in Support of High Altitude Testing Facilities, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Simulation technology plays an important role in propulsion test facility design and development by assessing risks, identifying failure modes and predicting...

  1. Ice condenser testing facility and plans

    International Nuclear Information System (INIS)

    Kannberg, L.D.; Ross, B.A.; Eschbach, E.J.; Ligotke, M.W.

    1987-01-01

    A facility is being constructed to experimentally validate the ICEDF computer code. The code was developed to estimate the extent of fission product retention in the ice compartments of pressurized water reactor ice condenser containment systems during severe accidents. The design and construction of the facility are based on a test design that addresses the validation needs of the code for conditions typical of those expected to occur during severe pressurized water reactor accidents. Detailed facility design has followed completion of a test design (i.e., assembled test cases each involving a different set of aerosol and thermohydraulic flow conditions). The test design was developed with the aid of statistical test design software and was scrutinized for applicability with the aid of ICEDF simulations. The test facility will incorporate a small section of a prototypic ice condenser (e.g., a cross section comprising the equivalent of four 1-ft-diameter ice baskets to their full prototypic height of 48 ft). The development of the test design, the detailed facility design, and the construction progress are described in this paper

  2. Computational Modeling in Support of High Altitude Testing Facilities, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Simulation technology plays an important role in rocket engine test facility design and development by assessing risks, identifying failure modes and predicting...

  3. Computational investigation of reshock strength in hydrodynamic instability growth at the National Ignition Facility

    Science.gov (United States)

    Bender, Jason; Raman, Kumar; Huntington, Channing; Nagel, Sabrina; Morgan, Brandon; Prisbrey, Shon; MacLaren, Stephan

    2017-10-01

    Experiments at the National Ignition Facility (NIF) are studying Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities in multiply-shocked plasmas. Targets feature two different-density fluids with a multimode initial perturbation at the interface, which is struck by two X-ray-driven shock waves. Here we discuss computational hydrodynamics simulations investigating the effect of second-shock (``reshock'') strength on instability growth, and how these simulations are informing target design for the ongoing experimental campaign. A Reynolds-Averaged Navier Stokes (RANS) model was used to predict motion of the spike and bubble fronts and the mixing-layer width. In addition to reshock strength, the reshock ablator thickness and the total length of the target were varied; all three parameters were found to be important for target design, particularly for ameliorating undesirable reflected shocks. The RANS data are compared to theoretical models that predict multimode instability growth proportional to the shock-induced change in interface velocity, and to currently-available data from the NIF experiments. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. LLNL-ABS-734611.
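
    The "growth proportional to the shock-induced change in interface velocity" mentioned above refers to impulsive models in the Richtmyer tradition, da/dt = k * a0 * A * delta_u for a single mode. A small sketch; the densities, wavelength, and velocity jump below are arbitrary illustrative numbers, not NIF values:

```python
import math

def atwood(rho_heavy, rho_light):
    """Atwood number A = (rho2 - rho1) / (rho2 + rho1)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def richtmyer_growth_rate(wavelength, amplitude, rho_heavy, rho_light, delta_u):
    """Impulsive-model growth rate da/dt = k * a0 * A * delta_u for a
    single mode of the given (post-shock) amplitude and wavelength."""
    k = 2.0 * math.pi / wavelength
    return k * amplitude * atwood(rho_heavy, rho_light) * delta_u

# Doubling the shock-induced velocity jump doubles the predicted growth
# rate -- the proportionality the reshock-strength scan probes.
g1 = richtmyer_growth_rate(100e-6, 1e-6, 8.0, 2.0, 10e3)
g2 = richtmyer_growth_rate(100e-6, 1e-6, 8.0, 2.0, 20e3)
print(g2 / g1)  # -> 2.0
```

    The RANS simulations in the campaign go well beyond this single-mode linear estimate, but the linear scaling is the baseline against which the reshock-strength results are compared.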

  4. Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center

    Science.gov (United States)

    Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet

    1998-01-01

    The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, 10 are currently scheduled for refurbishment and/or replacement as part of a 5-year implementation plan. Expected return on investment includes the reduction in test schedules, improvements in the safety of facility operations, reduction in the complexity of a test and the reduction in personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering and for the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one facility (Facility 225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Insignificant anomalies have occurred and were resolved with minimal impact to testing and operations. The work remaining will be performed over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities and the implementation of a database server to allow efficient test management and data analysis.

  5. Adolescents' physical activity: competition between perceived neighborhood sport facilities and home media resources.

    Science.gov (United States)

    Wong, Bonny Yee-Man; Cerin, Ester; Ho, Sai-Yin; Mak, Kwok-Kei; Lo, Wing-Sze; Lam, Tai-Hing

    2010-04-01

    To examine the independent, competing, and interactive effects of perceived availability of specific types of media in the home and neighborhood sport facilities on adolescents' leisure-time physical activity (PA). Survey data from 34 369 students in 42 Hong Kong secondary schools were collected (2006-07). Respondents reported moderate-to-vigorous leisure-time PA and the presence of sport facilities in the neighborhood and of media equipment in the home. Being sufficiently physically active was defined as engaging in at least 30 minutes of non-school leisure-time PA on a daily basis. Logistic regression and post-estimation linear combinations of regression coefficients were used to examine the independent and competing effects of sport facilities and media equipment on leisure-time PA. Perceived availability of sport facilities was positively (OR(boys) = 1.17; OR(girls) = 1.26), and that of computer/Internet negatively (OR(boys) = 0.48; OR(girls) = 0.41), associated with being sufficiently active. A significant positive association between video game console and being sufficiently active was found in girls (OR(girls) = 1.19) but not in boys. Compared with adolescents without sport facilities and media equipment, those who reported sport facilities only were more likely to be physically active (OR(boys) = 1.26; OR(girls) = 1.34), while those who additionally reported computer/Internet were less likely to be physically active (OR(boys) = 0.60; OR(girls) = 0.54). Perceived availability of sport facilities in the neighborhood may have a positive impact on adolescents' level of physical activity. However, having computer/Internet access may cancel out the effects of active opportunities in the neighborhood. This suggests that physical activity programs for adolescents need to consider limiting access to computer-mediated communication as an important intervention component.
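
    The "post-estimation linear combinations of regression coefficients" in this study work because a sum of logistic-regression coefficients exponentiates to a combined odds ratio. A sketch with hypothetical coefficients, chosen here only so that they reproduce the boys' reported ORs; they are not the fitted model:

```python
import math

# Hypothetical coefficients (illustrative values back-derived from the
# boys' odds ratios reported in the abstract, not the study's fit).
beta = {
    "sport_facilities": math.log(1.26),
    "computer_internet": math.log(0.60 / 1.26),
}

def odds_ratio(*terms):
    """OR for a linear combination of coefficients: exp(sum of betas).
    This is the post-estimation linear-combination trick."""
    return math.exp(sum(beta[t] for t in terms))

print(round(odds_ratio("sport_facilities"), 2))                       # -> 1.26
print(round(odds_ratio("sport_facilities", "computer_internet"), 2))  # -> 0.6
```

    The second call shows the "competing" effect described above: adding the computer/Internet coefficient to the sport-facilities coefficient flips the combined odds ratio below 1.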

  6. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  7. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  8. Development of a computational code for calculations of shielding in dental facilities

    International Nuclear Information System (INIS)

    Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Guimaraes, Antonio C.F.; Moreira, Maria de L.

    2014-01-01

    This paper addresses shielding calculations intended to minimize the exposure of patients and/or personnel to ionizing radiation. The work makes use of the radiation protection report NCRP-145 (Radiation Protection in Dentistry), which establishes calculations and standards to be adopted to ensure the safety of those who may be exposed to ionizing radiation in dental facilities, according to the dose limits established by the CNEN-NN-3.01 standard published in September 2011. The methodology comprises the use of a programming language for processing the data provided by that report, together with a commercial application used for creating residential projects and decoration. The FORTRAN language was adopted as the method for application to a real case. The result is a program capable of returning the thickness of materials such as steel, lead, wood, glass, plaster, acrylic, and leaded glass, which can be used for effective shielding against single-pulse or continuous beams. Several variables are used to calculate the thickness of the shield, such as the number of films used per week, film load, use factor, occupancy factor, distance between the wall and the source, transmission factor, workload, area definition, beam intensity, and intraoral and panoramic exams. Before the methodology is applied, the results are validated against examples provided by NCRP-145; the calculations redone from those examples provide answers consistent with the report

  9. Building control for nuclear materials R and D facility

    International Nuclear Information System (INIS)

    Hart, O.

    1979-01-01

    The new plutonium research and development facility at LASL was the first facility to be completed in the United States under the new environmental requirements. To ensure that these new requirements are met, a redundant computer system is used to monitor and control the building. This paper describes the supervisory control and data acquisition system that was implemented to perform that function

  10. Security and Privacy in Fog Computing: Challenges

    OpenAIRE

    Mukherjee, Mithun; Matam, Rakesh; Shu, Lei; Maglaras, Leandros; Ferrag, Mohamed Amine; Choudhry, Nikumani; Kumar, Vikas

    2017-01-01

    The fog computing paradigm extends the storage, networking, and computing facilities of cloud computing toward the edge of the network while offloading the cloud data centers and reducing service latency for end users. However, the characteristics of fog computing raise new security and privacy challenges. The existing security and privacy measures for cloud computing cannot be directly applied to fog computing due to its features, such as mobility, heteroge...

  11. Rancang Bangun STIKI Class Facilities E-Complaint

    Directory of Open Access Journals (Sweden)

    Ni Kadek Ariasih

    2017-08-01

    Full Text Available STMIK STIKOM Indonesia is one of the institutions in the field of computer-based education. To support the effectiveness of teaching and learning activities, a service is needed that ensures the availability of adequate classroom facilities and handles complaints when there are problems with facilities in the classroom or laboratory. So far, the management of complaints about classroom and laboratory facilities, which is handled by the Household Management Section, has been done manually. To record and handle complaints, an information system called STIKI Class Facilities E-Complaint is required. This system can assist the Household Management Section in monitoring complaints about the condition of existing room facilities when problems occur, and can also improve the quality of service in handling complaints. The software development process model used is the prototype model; the system is Web-based, built with PHP and a MySQL database.

  12. Guidance on the Stand Down, Mothball, and Reactivation of Ground Test Facilities

    Science.gov (United States)

    Volkman, Gregrey T.; Dunn, Steven C.

    2013-01-01

    The development of aerospace and aeronautics products typically requires three distinct types of testing resources across research, development, test, and evaluation: experimental ground testing, computational "testing" and development, and flight testing. Over the last twenty-plus years, computational methods have replaced some physical experiments, and this trend is continuing. The result has been decreased utilization of ground test capabilities, which, along with market forces, industry consolidation, and other factors, has led to the stand down and oftentimes the closure of many ground test facilities. Ground test capabilities are (and very likely will continue to be for many years) required to verify computational results and to provide information for regimes where computational methods remain immature. Ground test capabilities are very costly to build and to maintain, so once constructed and operational it may be desirable to retain access to those capabilities even if they are not currently needed. One means of doing this while reducing ongoing sustainment costs is to stand down the facility into a "mothball" status - keeping it alive to bring it back when needed. Both NASA and the US Department of Defense have policies for mothballing a facility, but with little detail. This paper offers a generic process to follow that can be tailored based on the needs of the owner and the applicable facility.

  13. Shielding design for positron emission tomography facility

    International Nuclear Information System (INIS)

    Abdallah, I.I.

    2007-01-01

    With the recent advent of readily available tracer isotopes, there has been a marked increase in the number of hospital-based and free-standing positron emission tomography (PET) clinics. PET facilities employ relatively large activities of high-energy photon-emitting isotopes, which can be hazardous to the health of humans and animals. This, coupled with the current dose limits for radiation workers and members of the public, can impose substantial shielding requirements. This research contributes to the calculation of appropriate shielding to keep radiation levels within the recommended limits. Two different methods were used: measurements made at selected points of an operating PET facility, and computer simulations using a Monte Carlo transport code. The measurements mainly concerned the radiation exposure at different points around the facility, using survey meter detectors and thermoluminescent dosimeters (TLDs). A set of manual calculation procedures was then used to estimate the shielding requirements for a newly built PET facility. The results from the measurements and the computer simulations were compared with the results obtained from the manual calculation procedure. In general, the estimated weekly dose at the points of interest is lower than the regulatory limits for the Little Company of Mary Hospital. Furthermore, the density and the HVL of normal-strength concrete and clay bricks are almost identical. In conclusion, PET facilities present somewhat different design requirements and are more likely to require additional radiation shielding. The existing shields at the Little Company of Mary Hospital were in general found to be adequate and satisfactory, while additional shielding was found necessary at the new PET facility in the Department of Nuclear Medicine of the Dr. George Mukhari Hospital. 
By use of appropriate design, by applying specific shielding requirements, and by maintaining good operating practices, radiation doses to
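The Monte Carlo transport simulations mentioned above can be illustrated with a toy photon-transmission estimate. The sketch below samples exponential free paths through a slab and compares the transmitted fraction with the narrow-beam Beer-Lambert result; the attenuation coefficient is an assumed illustrative value, and unlike a production code (e.g. MCNP) the sketch ignores scattering and buildup entirely.

```python
import math
import random

def mc_slab_transmission(mu, thickness, n=100_000, seed=1):
    """Toy Monte Carlo estimate of narrow-beam transmission through a slab.
    Each photon travels a free path sampled from an exponential with
    attenuation coefficient mu [1/cm]; it is transmitted if the path
    exceeds the slab thickness [cm]. No scattering is tracked."""
    rng = random.Random(seed)
    transmitted = sum(1 for _ in range(n)
                      if rng.expovariate(mu) > thickness)
    return transmitted / n

# mu = 0.2 /cm is an illustrative order of magnitude for 511 keV
# photons in concrete, not a reference value.
estimate = mc_slab_transmission(mu=0.2, thickness=10.0)
analytic = math.exp(-0.2 * 10.0)  # narrow-beam Beer-Lambert result
```

With 100,000 histories the estimate agrees with the analytic transmission to within a few tenths of a percent, which is why such simulations can be cross-checked against manual calculation procedures.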

  14. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within the Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface, from the conceptual through implementation stages, to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  15. Pilot-scale reactor activation facility at SRL

    International Nuclear Information System (INIS)

    Bowman, W.W.

    1976-01-01

    The Hydrogeochemical and Stream Sediment Reconnaissance portion of the National Uranium Resource Evaluation program requires an analytical technique for uranium and other elements. Based on an automated absolute activation analysis technique using 252Cf, a pilot-scale facility installed in a production reactor has provided analyses for 2800 samples. Key features include: an automated sample transport system, a delayed-neutron detector, two Ge(Li) detectors, a loader, and an unloader, with all components controlled by a microprocessor; a dedicated PDP-9 computer and pulse-height analyzer; and correlation and reduction of acquired data by a series of programs using an IBM 360/195 computer. The facility was calibrated with elemental and isotopic standards. Results of analyses of standard reference materials and operational detection limits for typical sediment samples are presented. Plans to increase sample throughput are discussed briefly

  16. Octopus: LLL's computing utility

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    The Laboratory's Octopus network constitutes one of the greatest concentrations of computing power in the world. This power derives from the network's organization as well as from the size and capability of its computers, storage media, input/output devices, and communication channels. Being in a network enables these facilities to work together to form a unified computing utility that is accessible on demand directly from the users' offices. This computing utility has made a major contribution to the pace of research and development at the Laboratory; an adequate rate of progress in research could not be achieved without it. 4 figures

  17. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing, Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations, and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  18. Flow analysis of HANARO flow simulated test facility

    International Nuclear Information System (INIS)

    Park, Yong-Chul; Cho, Yeong-Garp; Wu, Jong-Sub; Jun, Byung-Jin

    2002-01-01

    The HANARO, a multi-purpose research reactor of the 30 MWth open-tank-in-pool type, has been in normal operation since its initial criticality in February 1995. Many experiments should be safely performed to promote the utilization of the HANARO. A flow simulated test facility is being developed for the endurance testing of reactivity control units over extended lifetimes and for the verification of the structural integrity of experimental facilities prior to loading in the HANARO. This test facility is composed of three major parts: a half-core structure assembly, a flow circulation system, and a support system. The half-core structure assembly is composed of a plenum, a grid plate, core channels with flow tubes, a chimney, and a dummy pool. The flow channels are to be fitted with flow orifices to simulate the core channels. The test facility must reproduce flow characteristics similar to those of the HANARO. This paper therefore describes an analysis of the flow behavior of the test facility. A computational flow analysis has been performed to verify the flow structure and similarity of the test facility, assuming that the flow rates and pressure differences of the core channels are constant. The shapes of the flow orifices were determined by trial and error based on the design requirements of the core channels. A three-dimensional analysis was performed with a computational code using the standard k-ε turbulence model. The simulation results showed flow characteristics similar to those of the HANARO and satisfied the design requirements of the test facility. The orifice shapes used in this numerical simulation can be adopted as manufacturing requirements. The flow rate and pressure difference through the core channel established by this simulation can be used as design requirements for the flow system. The analysis results will be verified against flow test results after construction of the flow system. (author)
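The trial-and-error orifice sizing described above can be seeded from the standard sharp-edged orifice relation Q = Cd·A·√(2Δp/ρ). The sketch below inverts that relation to get a starting flow area for a target pressure drop; the discharge coefficient of 0.62 and the numeric inputs are illustrative assumptions, not HANARO design values.

```python
import math

def orifice_area(q, dp, rho=998.0, cd=0.62):
    """Orifice flow area A [m^2] giving volumetric flow q [m^3/s] at
    pressure drop dp [Pa], from Q = Cd * A * sqrt(2*dp/rho).
    cd = 0.62 is a typical sharp-edged discharge coefficient (assumed)."""
    return q / (cd * math.sqrt(2.0 * dp / rho))

def pressure_drop(q, area, rho=998.0, cd=0.62):
    """Inverse relation: dp produced by flow q through a given area."""
    v = q / (cd * area)
    return 0.5 * rho * v * v

a = orifice_area(q=0.01, dp=50e3)          # 10 L/s of water across 50 kPa
dp_check = pressure_drop(q=0.01, area=a)   # round-trip consistency check
```

In practice such a first estimate would then be refined iteratively against the CFD results, which is the trial-and-error loop the abstract describes.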

  19. I and C security program for nuclear facilities: implementation guide - TAFICS/IG/2

    International Nuclear Information System (INIS)

    2016-04-01

    This is the second in a series of documents being developed by TAFICS for protecting computer-based I and C systems of Indian nuclear facilities from cyber attacks. The document provides guidance to nuclear facility management to establish, implement, and maintain a robust I and C security program, consisting of a security plan and a set of security controls. In order to provide a firm basis for the security program, the document also identifies the fundamental security principles and foundational security requirements related to computer-based I and C systems of nuclear facilities. It is recommended that all applicable Indian nuclear facilities implement the security program, with the required adaptation, so as to provide the necessary assurance that the I and C systems are adequately protected against cyber attacks. (author)

  20. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey, and a collection of problems - cover a broad range of such problems.

  1. Safeguards Automated Facility Evaluation (SAFE) methodology

    International Nuclear Information System (INIS)

    Chapman, L.D.; Grady, L.M.; Bennett, H.A.; Sasser, D.W.; Engi, D.

    1978-01-01

    The SAFE procedure is an efficient method of evaluating the physical protection system of a nuclear facility. Since the algorithms used in SAFE for path generation and evaluation are analytical, many paths can be evaluated with a modest investment in computer time. SAFE is easy to use because the information required is well-defined and the interactive nature of this procedure lends itself to straightforward operation. The modular approach that has been taken allows other functionally equivalent modules to be substituted as they become available. The SAFE procedure has broad applications in the nuclear facility safeguards field as well as in the security field in general. Any fixed facility containing valuable materials or components to be protected from theft or sabotage could be analyzed using this same automated evaluation technique
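To make the idea of analytical path evaluation concrete, the sketch below scores a single adversary path in the spirit of SAFE/EASI-style physical-protection models; it is an illustration, not the actual SAFE algorithm, and the detection probabilities, delays, and response time are invented for the example.

```python
def path_interruption(elements, response_time):
    """Probability that an adversary following a path is interrupted.
    elements: list of (p_detect, delay_s) tuples, ordered from outside
    inward. Detection at an element counts only if the delay remaining
    from that element onward exceeds the guard response time."""
    p_not_detected = 1.0
    p_interrupt = 0.0
    for i, (p_det, _) in enumerate(elements):
        remaining_delay = sum(d for _, d in elements[i:])
        if remaining_delay > response_time:
            p_interrupt += p_not_detected * p_det
        p_not_detected *= 1.0 - p_det
    return p_interrupt

# Hypothetical path: fence, door, vault (detection prob., delay in s).
path = [(0.5, 10.0), (0.7, 60.0), (0.9, 120.0)]
p = path_interruption(path, response_time=90.0)
```

Because the evaluation is a closed-form pass over each path rather than a simulation, many candidate paths can be scored cheaply, which is the property the abstract highlights.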

  2. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    Science.gov (United States)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging, and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary: how to manage today's data complexity and size when these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby giving users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. 
We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve

  3. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    International Nuclear Information System (INIS)

    Miller, Stephen D; Herwig, Kenneth W; Ren, Shelly; Vazhkudai, Sudharshan S; Jemian, Pete R; Luitz, Steffen; Salnikov, Andrei A; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L

    2009-01-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging, and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary: how to manage today's data complexity and size when these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby giving users access to the resources they need. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. 
We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better

  4. Data Management and its Role in Delivering Science at DOE BES User Facilities - Past, Present, and Future

    International Nuclear Information System (INIS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Hagen, Mark E.

    2009-01-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging, and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary: how to manage today's data complexity and size when these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby giving users access to the resources they need. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. 
We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better

  5. Data Management and Its Role in Delivering Science at DOE BES User Facilities Past, Present, and Future

    International Nuclear Information System (INIS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.

    2009-01-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research (1). We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging, and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary: how to manage today's data complexity and size when these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby giving users access to the resources they need (2). Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. 
We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve

  6. Real-time data-intensive computing

    Energy Technology Data Exchange (ETDEWEB)

    Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander; MacDowell, Alastair A.; Padmore, Howard A.; Shapiro, David; Tamura, Nobumichi [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Beattie, Keith; Krishnan, Harinarayan; Patton, Simon J.; Perciano, Talita; Stromsness, Rune; Tull, Craig E.; Ushizima, Daniela [Computational Research Division, Lawrence Berkeley National Laboratory Berkeley CA 94720 (United States); Correa, Joaquin; Deslippe, Jack R. [National Energy Research Scientific Computing Center, Berkeley, CA 94720 (United States); Dart, Eli; Tierney, Brian L. [Energy Sciences Network, Berkeley, CA 94720 (United States); Daurer, Benedikt J.; Maia, Filipe R. N. C. [Uppsala University, Uppsala (Sweden); and others

    2016-07-27

    Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.

  7. An assessment of future computer system needs for large-scale computation

    Science.gov (United States)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  8. Adiabatic quantum computing

    OpenAIRE

    Lobe, Elisabeth; Stollenwerk, Tobias; Tröltzsch, Anke

    2015-01-01

    In the recent years, the field of adiabatic quantum computing has gained importance due to the advances in the realisation of such machines, especially by the company D-Wave Systems. These machines are suited to solve discrete optimisation problems which are typically very hard to solve on a classical computer. Due to the quantum nature of the device it is assumed that there is a substantial speedup compared to classical HPC facilities. We explain the basic principles of adiabatic ...
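The discrete optimisation problems such machines target are commonly cast as QUBO (quadratic unconstrained binary optimisation) instances. The sketch below defines a toy QUBO objective and minimises it by classical brute force purely for illustration; the instance is invented, and a real annealer would instead sample low-energy states of the same objective.

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy of binary vector x under an upper-triangular QUBO matrix Q,
    given as a dict mapping (i, j) -> coefficient."""
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

def brute_force_qubo(Q, n):
    """Exhaustively minimise over all 2^n binary assignments - the
    classical baseline an annealer is hoped to beat on large n."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Toy instance: reward selecting x0 or x1, penalise selecting both.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 3.0}
best = brute_force_qubo(Q, n=2)
```

Brute force scales as 2^n, which is exactly why any genuine quantum speedup on such problems would matter for classical HPC workloads.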

  9. 200 area liquid effluent facility quality assurance program plan. Revision 1

    International Nuclear Information System (INIS)

    Sullivan, N.J.

    1995-01-01

    Direct revision of Supporting Document WHC-SD-LEF-QAPP-001, Rev. 0, 200 Area Liquid Effluent Facilities Quality Assurance Program Plan. Incorporates changes to references in tables. Revises text to incorporate WHC-SD-LEF-CSCM-001, Computer Software Configuration Management Plan for 200 East/West Liquid Effluent Facilities

  10. A facility for training Space Station astronauts

    Science.gov (United States)

    Hajare, Ankur R.; Schmidt, James R.

    1992-01-01

    The Space Station Training Facility (SSTF) will be the primary facility for training the Space Station Freedom astronauts and the Space Station Control Center ground support personnel. Conceptually, the SSTF will consist of two parts: a Student Environment and an Author Environment. The Student Environment will contain trainers, instructor stations, computers and other equipment necessary for training. The Author Environment will contain the systems that will be used to manage, develop, integrate, test and verify, operate and maintain the equipment and software in the Student Environment.

  11. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    International Nuclear Information System (INIS)

    Hules, John A.

    2008-01-01

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics

  12. Data Analysis Facility (DAF)

    Science.gov (United States)

    1991-01-01

    NASA-Dryden's Data Analysis Facility (DAF) provides a variety of support services to the entire Dryden community. It provides state-of-the-art hardware and software systems, available to any Dryden engineer for pre- and post-flight data processing and analysis, and supports all archival and general computer use. The Flight Data Access System (FDAS) is one of the advanced computer systems in the DAF, providing fast engineering unit conversion and archival processing of flight data delivered from the Western Aeronautical Test Range. Engineering unit conversion and archival formatting of flight data are performed by the DRACO program on a Sun 690MP and an E-5000 computer. Time history files produced by DRACO are then moved to a permanent magneto-optical archive, where they are network-accessible 24 hours a day, 7 days a week. Pertinent information about the individual flights is maintained in a relational (Sybase) database. The DAF also houses all general computer services, including: the Compute Server 1 and 2 (CS1 and CS2), the server for the World Wide Web, overall computer operations support, courier service, a CD-ROM writer system, a Technical Support Center, the NASA Dryden Phone System (NDPS), and hardware maintenance.

  13. 29 CFR 541.606 - Board, lodging or other facilities.

    Science.gov (United States)

    2010-07-01

    ... DEFINING AND DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Salary Requirements § 541.606 Board, lodging or other facilities. (a) To qualify for...

  14. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies (ICT), and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  15. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  16. Accountability control system in plutonium fuel facility

    International Nuclear Information System (INIS)

    Naruki, Kaoru; Aoki, Minoru; Mizuno, Ohichi; Mishima, Tsuyoshi

    1979-01-01

    More than 30 tons of plutonium-uranium mixed-oxide fuel have been manufactured at the Plutonium Facility in PNC for JOYO, FUGEN and DCA (Deuterium Critical Assembly) and for the purpose of irradiation tests. This report reviews the nuclear material accountability control system adopted in the Plutonium Facility. Initially, the main objective of the system was the criticality control of fissile materials at various stages of fuel manufacturing. The first part of this report describes the functions and the structure of the control system. A flow chart is provided to show the various stages of material flow and their associated computer files. The system is composed of the following three sub-systems: procedures of nuclear material transfer; PIT (Physical Inventory Taking); data retrieval, report preparation and file maintenance. OMR (Optical Mark Reader) sheets are used to record the nuclear material transfer. The MUF (Materials Unaccounted For) is evaluated by PIT every three months through computer processing based on the OMR sheets. The MUF ratios of Pu handled in the facility each year from 1966 to 1977 are presented as a curve, indicating that the MUF ratio was kept well under 0.5% for every project (JOYO, FUGEN, and DCA). As for the Pu safeguards, the MBA (Material Balance Area) and the KMP (Key Measurement Point) in the facility of PNC are illustrated. The general idea of the projected PINC (Plutonium Inventory Control) system in PNC is also briefly explained. (Aoki, K.)

  17. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.
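    The per-link control that PhEDEx delegates to FTS, limiting how many transfers run concurrently between a given pair of sites, can be sketched as a small scheduling routine. This is an illustrative model only, not CMS or FTS code; the site names in the usage example and the `max_active_per_link` parameter are hypothetical:

    ```python
    from collections import defaultdict

    def schedule_transfers(queued, active, max_active_per_link=10):
        """Pick queued file transfers to submit now, respecting a per-link
        concurrency cap (a toy version of FTS-style link policies).

        queued: list of (source_site, dest_site, filename) tuples
        active: dict mapping (source, dest) -> number of running transfers
        Returns the subset of `queued` that may be submitted.
        """
        running = defaultdict(int, active)
        to_submit = []
        for src, dst, fname in queued:
            link = (src, dst)
            if running[link] < max_active_per_link:
                running[link] += 1
                to_submit.append((src, dst, fname))
        return to_submit
    ```

    Collecting per-link statistics of this kind is what the revision described in the abstract uses to tune topologies and server setup.
    
    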

  18. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    Science.gov (United States)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making an AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
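    The launch-and-register workflow described above (boot a base VM image, then add the node to the dynamic Torque cluster) can be sketched as command generation. The image and flavor names, the node-naming scheme, and the `qmgr` invocation below are illustrative assumptions, not the site's actual scripts:

    ```python
    def build_vm_commands(prefix, count, image="SL-base", flavor="m1.large"):
        """Return (launch_cmds, register_cmds) for spinning up `count`
        worker VMs and registering them with a Torque server.

        All names here are hypothetical placeholders for illustration.
        """
        launch, register = [], []
        for i in range(count):
            name = f"{prefix}-{i:03d}"
            # OpenStackClient-style server creation from a base image
            launch.append(
                f"openstack server create --image {image} "
                f"--flavor {flavor} --wait {name}")
            # Add the new node to the dynamic Torque cluster
            register.append(f'qmgr -c "create node {name}"')
        return launch, register
    ```

    In a real deployment the generated commands would be executed by the custom launch scripts, with Puppet completing per-node configuration after boot.
    
    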

  19. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    International Nuclear Information System (INIS)

    Limosani, Antonio; Boland, Lucien; Crosby, Sean; Huang, Joanna; Sevior, Martin; Coddington, Paul; Zhang, Shunde; Wilson, Ross

    2014-01-01

    The Australian Government is making an AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  20. BombCAD - A new tool for bomb defense in nuclear facilities

    International Nuclear Information System (INIS)

    Massa, D.J.; Howard, J.W.; Sturm, S.R.

    1988-01-01

    This paper describes a new tool for analysis of the specific vulnerability of diverse facilities to bomb attack and for computer-aided-design (CAD) of siting, screening and hardening/softening aspects of comprehensive bomb defense programs. BombCAD combines the extensive architectural and engineering data base and graphics capabilities of modern architectural CAD systems with the bomb effects computational capability of the ''SECUREPLAN'' BOMB UTILITY. BombCAD permits architects/engineers, security professionals and facility managers to analytically estimate and graphically display facility vulnerability and changes (reductions) in vulnerability which result from the adoption of various bomb defense measures.

  1. Annual report to the Laser Facility Committee 1978

    International Nuclear Information System (INIS)

    1978-01-01

    The report is in sections, as follows: the development of the facility (glass laser physics and development, performance and reliability of the glass laser, computer control, target fabrication, target area, optical design, gas laser development); single beam interaction studies (optical and magnetic measurements, X-ray and VUV spectroscopy, optical emission studies, particle emission measurements, gas breakdown observations, related theoretical and computational studies); two beam compression studies (vacuum ultraviolet and X-ray spectroscopy, optical spectroscopy, particle emission studies, optical and magnetic measurements, theory and computational modelling). (U.K.)

  2. Fast Breeder Blanket Facility. Quarterly progress report, July 1, 1978--September 30, 1978

    International Nuclear Information System (INIS)

    Clikeman, F.M.

    1978-09-01

    This quarterly progress report summarizes work done at Purdue University's Fast Breeder Blanket Facility for the Department of Energy during the months July to September 1978. The summary includes reports on the models and methods used to characterize the FBBF facility. Using the reported models, calculational methods, and computer codes, a new cross-section set, self-shielded at 300 K, has been generated for use in all FBBF calculations using the 2DB computer code. The summary includes reports on the reproducibility of foil activation data and measurements of the azimuthal symmetry of the facility. The status of the development of techniques for the experimental measurements and preliminary foil activation measurements are also reviewed.

  3. Writing Apprehension, Computer Anxiety and Telecomputing: A Pilot Study.

    Science.gov (United States)

    Harris, Judith; Grandgenett, Neal

    1992-01-01

    A study measured graduate students' writing apprehension and computer anxiety levels before and after using electronic mail, computer conferencing, and remote database searching facilities during an educational technology course. Results indicated that postcourse computer anxiety levels were significantly related to usage statistics. Precourse writing…

  4. Magnox Electric Littlebrook reactor inspection and repair rehearsal facility

    International Nuclear Information System (INIS)

    Barnes, S.A.; Clayton, R.; Gaydon, B.G.; Ramsey, B.H.

    1996-01-01

    Magnox reactors, although designed to be maintenance free during their operational life, have nevertheless highlighted the need for test rig facilities to train operators in the methods and techniques of reactor inspection and repair. The history of the facility for reactor engineering development (FRED) is described and its present role as a repair rehearsal facility noted. Advances in computer graphics may, in future, mean that such operator training will be virtual reality rather than analog reality based; however the need for such rigs to commission techniques and equipment and to establish performance and reliability is likely to continue. (UK)

  5. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  6. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  7. Second Annual AEC Scientific Computer Information Exhange Meeting. Proceedings of the technical program theme: computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Peskin, A.M.; Shimamoto, Y.

    1974-01-01

    The topic of computer graphics serves well to illustrate that AEC affiliated scientific computing installations are well represented in the forefront of computing science activities. The participant response to the technical program was overwhelming--both in number of contributions and quality of the work described. Session I, entitled Advanced Systems, contains presentations describing systems that contain features not generally found in graphics facilities. These features can be roughly classified as extensions of standard two-dimensional monochromatic imaging to higher dimensions including color and time as well as multidimensional metrics. Session II presents seven diverse applications ranging from high energy physics to medicine. Session III describes a number of important developments in establishing facilities, techniques and enhancements in the computer graphics area. Although an attempt was made to schedule as many of these worthwhile presentations as possible, it appeared impossible to do so given the scheduling constraints of the meeting. A number of prospective presenters 'came to the rescue' by graciously withdrawing from the sessions. Some of their abstracts have been included in the Proceedings.

  8. Nuclear Station Facilities Improvement Planning

    International Nuclear Information System (INIS)

    Hooks, R. W.; Lunardini, A. L.; Zaben, O.

    1991-01-01

    An effective facilities improvement program will include a plan for the temporary relocation of personnel during the construction of an adjoining service building addition. Since the smooth continuation of plant operation is of paramount importance, the phasing plan is established to minimize the disruptions in day-to-day station operation and administration. This plan should consider the final occupancy arrangements and the transition to the new structure; for example, computer hookup and phase-in should be considered. The nuclear industry is placing more emphasis on safety and reliability of nuclear power plants. In order to do this, more emphasis is placed on operations and maintenance. This results in increased size of managerial, technical and maintenance staffs. This in turn requires improved office and service facilities. The facilities that require improvement may include training areas, rad waste processing and storage facilities, and maintenance facilities. This paper discusses an approach for developing an effective program to plan and implement these projects. These improvement projects can range in magnitude from modifying a simple system to building a new structure to allocating space for a future project. This paper addresses the planning required for the new structures with emphasis on site location, space allocation, and internal layout. Since facility planning has recently been completed by Sargent and Leyden at six U. S. nuclear stations, specific examples from some of those plants are presented. Site planning and the establishment of long-range goals are of the utmost importance when undertaking a facilities improvement program for a nuclear station. A plan that considers the total site usage will enhance the value of both the new and existing facilities. Proper planning at the beginning of the program can minimize costs and maximize the benefits of the program

  9. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  10. Modeling bubble condenser containment with computer code COCOSYS: post-test calculations of the main steam line break experiment at ELECTROGORSK BC V-213 test facility

    International Nuclear Information System (INIS)

    Lola, I.; Gromov, G.; Gumenyuk, D.; Pustovit, V.; Sholomitsky, S.; Wolff, H.; Arndt, S.; Blinkov, V.; Osokin, G.; Melikhov, O.; Melikhov, V.; Sokoline, A.

    2005-01-01

    Containment of the WWER-440 Model 213 nuclear power plant features a Bubble Condenser, a complex passive pressure suppression system, intended to limit pressure rise in the containment during accidents. Due to lack of experimental evidence of its successful operation in the original design documentation, the performance of this system under accidents with ruptures of large high-energy pipes of the primary and secondary sides remains a known safety concern for this containment type. Therefore, a number of research and analytical studies have been conducted in recent years by the countries operating WWER-440 reactors and their Western partners to verify Bubble Condenser operation under accident conditions. Comprehensive experimental research studies at the Electrogorsk BC V-213 test facility, commissioned in 1999 at the Electrogorsk Research and Engineering Centre (EREC), constitute an essential part of these efforts. Nowadays this is the only operating large-scale facility enabling integral tests on investigation of the Bubble Condenser performance. Several large international research projects, conducted at this facility in 1999-2003, have covered a spectrum of pipe break accidents. These experiments have substantially improved understanding of the overall system performance and thermal hydraulic phenomena in the Bubble Condenser Containment, and provided valuable information for validating containment codes against experimental results. One of the recent experiments, denoted as SLB-G02, has simulated a steam line break. The results of this experiment are of special value for the engineers working in the area of computer code application for WWER-440 containment analyses, giving an opportunity to verify validity of the code predictions and identify possibilities for model improvement. This paper describes the results of the post-test calculations of the SLB-G02 experiment, conducted as a joint effort of GRS, Germany and Ukrainian technical support organizations for

  11. Virtual laboratories: Collaborative environments and facilities-on-line

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, C.E. Jr. [Oak Ridge National Lab., TN (United States). I and C Div.; Cavallini, J.S.; Seweryniak, G.R.; Kitchens, T.A.; Hitchcock, D.A.; Scott, M.A.; Welch, L.C. [Dept. of Energy, Germantown, MD (United States). Mathematical, Information, and Computational Sciences Div.; Aiken, R.J. [Dept. of Energy, Germantown, MD (United States). Mathematical, Information, and Computational Sciences Div.; Lawrence Livermore National Lab., CA (United States)]; Stevens, R.L. [Argonne National Lab., IL (United States). Mathematics and Computer Sciences Div.

    1995-07-01

    The Department of Energy (DOE) has major research laboratories in a number of locations in the US, typically co-located with large research instruments or research facilities valued at tens of millions to even billions of dollars. Present budget exigencies facing the entire nation are felt very deeply at DOE, just as elsewhere. Advances over the last few years in networking and computing technologies make virtual collaborative environments and conduct of experiments over the internetwork structure a possibility. The authors believe that development of these collaborative environments and facilities-on-line could lead to a "virtual laboratory" with tremendous potential for decreasing the costs of research and increasing the productivity of their capital investment in research facilities. The majority of these cost savings would be due to increased productivity of their research efforts, better utilization of resources and facilities, and avoiding duplication of expensive facilities. A vision of how this might all fit together and a discussion of the infrastructure necessary to enable these developments is presented.

  12. Radiological Risk Assessments for Occupational Exposure at Fuel Fabrication Facility in AlTuwaitha Site Baghdad – Iraq by using RESRAD Computer Code

    Science.gov (United States)

    Ibrahim, Ziadoon H.; Ibrahim, S. A.; Mohammed, M. K.; Shaban, A. H.

    2018-05-01

    The purpose of this study is to evaluate the radiological risks to workers from one year of their activities at the Fuel Fabrication Facility (FFF), so as to provide the protection necessary to prevent or minimize the risks resulting from these activities; this site is now under the Iraqi decommissioning program (40). Surface and subsurface soil samples were collected from different positions in this facility and analyzed by the gamma-ray spectroscopy technique; a High Purity Germanium (HPGe) detector was used. A mixture of radioactive isotopes (232Th, 40K, 238U, 235U, 137Cs) was found. According to the laboratory results, the highest values were (975758) for 238U, (21203) for 235U, (218) for 232Th, (4046) for 40K, and (129) for 137Cs, in Bq/kg. The annual total radiation dose and risks were estimated by using the RESRAD (onsite) 7.0 computer code. The highest total radiation dose was (5617 μSv/year) in the area represented by soil sample (S7), and the radiological risks (morbidity and mortality) were (118E02, 8661E03), respectively, in the same area.
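    The kind of summation a RESRAD-style assessment performs, multiplying each nuclide's soil concentration by a pathway dose factor and totalling over nuclides, can be illustrated with a toy calculation. The dose factors and concentrations below are placeholders for illustration, not RESRAD's actual coefficient library or this study's data:

    ```python
    def annual_dose_usv(conc_bq_per_kg, dose_factor_usv_per_bq_kg):
        """Sum per-nuclide dose contributions (uSv/yr) for one exposure
        pathway: concentration (Bq/kg) times a pathway dose factor
        ((uSv/yr) per (Bq/kg)). Factors here are hypothetical."""
        return sum(conc_bq_per_kg[nuclide] * dose_factor_usv_per_bq_kg[nuclide]
                   for nuclide in conc_bq_per_kg)
    ```

    A full RESRAD run repeats this over all pathways (external, inhalation, ingestion) and over time, with depletion and ingrowth of each nuclide.
    
    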

  13. Startup of the Whiteshell irradiation facility

    International Nuclear Information System (INIS)

    Barnard, J.W.; Stanley, F.W.

    1989-01-01

    Recently, a 10-MeV, 1-kW electron linear accelerator was installed in a specially designed irradiation facility at the Whiteshell Nuclear Research Establishment. The facility was designed for radiation applications research in the development of new radiation processes up to the pilot scale level. The accelerator is of advanced design. Automatic startup via computer control makes it compatible with industrial processing. It has been operated successfully as a fully integrated electron irradiator for a number of applications including curing of plastics and composites, sterilization of medical disposables and animal feed irradiation. We report here on our experience during the first six months of operation. (orig.)

  14. Startup of the whiteshell irradiation facility

    Science.gov (United States)

    Barnard, J. W.; Stanley, F. W.

    1989-04-01

    Recently, a 10-MeV, 1-kW electron linear accelerator was installed in a specially designed irradiation facility at the Whiteshell Nuclear Research Establishment. The facility was designed for radiation applications research in the development of new radiation processes up to the pilot scale level. The accelerator is of advanced design. Automatic startup via computer control makes it compatible with industrial processing. It has been operated successfully as a fully integrated electron irradiator for a number of applications including curing of plastics and composites, sterilization of medical disposables and animal feed irradiation. We report here on our experience during the first six months of operation.

  15. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Science.gov (United States)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  16. User guide to the SRS data logging facility

    International Nuclear Information System (INIS)

    Tyson, B.E.

    1979-02-01

    The state of the SRS is recorded every two minutes, thus providing a detailed History of its parameters. Recording of History is done via the SRS Computer Network. This consists of a Master Computer, an Interdata 7/32, and three Minicomputers, Interdata 7/16s. Each of the Minicomputers controls one of the accelerators, Linac, Booster and Storage Ring. The Master Computer is connected to the Central Computer, an IBM 370/165, for jobs where greater computing power and storage are required. The Master Computer has a total of 20 Megabytes of fixed and movable disc space but only about 5 Megabytes are available for History storage. The Minicomputers have no storage facilities. The user guide is set out as follows: History filing system, History storage on the Master Computer, transfer of the History to the Central Computer, transferring History to tapes, job integrity, the SRS tape catalogue system. (author)
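    The constraint described above, a Master Computer with only about 5 Megabytes of disc space for History, with older records transferred to the Central Computer and tape, can be sketched as a fixed-budget store with eviction. The class and its policy are illustrative, not the SRS implementation:

    ```python
    from collections import deque

    class HistoryStore:
        """Rolling snapshot store under a fixed byte budget. When the
        budget is exceeded, the oldest snapshots are evicted (in the SRS
        scheme, these would be shipped to the central archive)."""

        def __init__(self, budget_bytes):
            self.budget = budget_bytes
            self.used = 0
            self.snapshots = deque()

        def record(self, timestamp, blob):
            """Store one machine-state snapshot; return the timestamps of
            any snapshots evicted to stay within budget."""
            self.snapshots.append((timestamp, blob))
            self.used += len(blob)
            evicted = []
            while self.used > self.budget:
                t, old = self.snapshots.popleft()
                self.used -= len(old)
                evicted.append(t)
            return evicted
    ```

    With a two-minute recording interval, the budget directly sets how many days of History stay resident before transfer.
    
    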

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per RAW event. The central collisions are more complex and...
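    The storage arithmetic implied by unsuppressed 11 MB heavy-ion events is easy to sketch. The event count and link bandwidth below are hypothetical run parameters, not CMS figures:

    ```python
    def raw_volume_tb(event_size_mb, n_events):
        """Raw data volume in TB (decimal units) for a sample of events."""
        return event_size_mb * n_events / 1e6

    def transfer_hours(volume_tb, link_gbps):
        """Hours needed to move a volume (TB) over a link of the given
        bandwidth in Gb/s: TB -> bits, divide by bits/s, then by 3600."""
        return volume_tb * 8e3 / link_gbps / 3600
    ```

    For example, a hypothetical sample of one million such events is 11 TB of RAW, which at a sustained 10 Gb/s takes roughly two and a half hours to move between facilities.
    
    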

  18. Instrumentation of the ESRF medical imaging facility

    CERN Document Server

    Elleaume, H; Berkvens, P; Berruyer, G; Brochard, T; Dabin, Y; Domínguez, M C; Draperi, A; Fiedler, S; Goujon, G; Le Duc, G; Mattenet, M; Nemoz, C; Pérez, M; Renier, M; Schulze, C; Spanne, P; Suortti, P; Thomlinson, W; Estève, F; Bertrand, B; Le Bas, J F

    1999-01-01

    At the European Synchrotron Radiation Facility (ESRF) a beamport has been instrumented for medical research programs. Two facilities have been constructed for alternative operation. The first one is devoted to medical imaging and is focused on intravenous coronary angiography and computed tomography (CT). The second facility is dedicated to pre-clinical microbeam radiotherapy (MRT). This paper describes the instrumentation for the imaging facility. Two monochromators have been designed, both are based on bent silicon crystals in the Laue geometry. A versatile scanning device has been built for pre-alignment and scanning of the patient through the X-ray beam in radiography or CT modes. An intrinsic germanium detector is used together with large dynamic range electronics (16 bits) to acquire the data. The beamline is now at the end of its commissioning phase; intravenous coronary angiography is intended to start in 1999 with patients and the CT pre-clinical program is underway on small animals. The first in viv...

  19. Magnetic-fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  1. Waste encapsulation and storage facility function analysis report

    International Nuclear Information System (INIS)

    Lund, D.P.

    1995-09-01

The document contains the functions, function definitions, function interfaces, function interface definitions, Integrated Computer-Aided Manufacturing (ICAM) Definition (IDEF0) diagrams, and a function hierarchy chart that describe what needs to be performed to deactivate the Waste Encapsulation and Storage Facility (WESF).

  2. A distributed data base management facility for the CAD/CAM environment

    Science.gov (United States)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of, and access to, data distributed across them.

  3. Experimental facilities and simulation means

    International Nuclear Information System (INIS)

    Thomas, J.B.

    2009-01-01

    This paper and its associated series of slides review the experimental facilities and the simulation means used for the development of nuclear reactors in France. These experimental facilities include installations used for the measurement and qualification of nuclear data (mainly cross-sections) like EOLE reactor and Minerve zero power reactor, installations like material testing reactors, installations dedicated to reactor safety experiments like Cabri reactor, and other installations like accelerators (Jannus accelerator, GANIL for instance) that are complementary to neutron irradiations in experimental reactors. The simulation means rely on a series of advanced computer codes: Tripoli-Apollo for neutron transport, Numodis for irradiation impact on materials, Neptune and Cathare for 2-phase fluid dynamics, Europlexus for mechanical structures, and Pleiades (with Alcyone) for nuclear fuels. (A.C.)

  4. X-ray emission from National Ignition Facility indirect drive targets

    International Nuclear Information System (INIS)

    Anderson, A.T.; Managan, R.A.; Tobin, M.T.; Peterson, P.F.

    1996-01-01

    We have performed a series of 1-D numerical simulations of the x-ray emission from National Ignition Facility (NIF) targets. Results are presented in terms of total x-ray energy, pulse length, and spectrum. Scaling of x-ray emissions is presented for variations in both target yield and hohlraum thickness. Experiments conducted on the Nova facility provide some validation of the computational tools and methods

  5. EXPERIMENTAL AND COMPUTATIONAL ACTIVITIES AT THE OREGON STATE UNIVERSITY NEES TSUNAMI RESEARCH FACILITY

    Directory of Open Access Journals (Sweden)

    S.C. Yim

    2009-01-01

A diverse series of research projects have taken place or are underway at the NEES Tsunami Research Facility at Oregon State University. Projects range from simulation of the processes and effects of tsunamis generated by sub-aerial and submarine landslides (NEESR, Georgia Tech.), model comparisons of tsunami wave effects on bottom profiles and scouring (NEESR, Princeton University), model comparisons of wave-induced motions on rigid and free bodies (Shared-Use, Cornell), numerical model simulations and testing of breaking waves and inundation over topography (NEESR, TAMU), structural testing and development of standards for tsunami engineering and design (NEESR, University of Hawaii), and wave loads on coastal bridge structures (non-NEES), to upgrading the two-dimensional wave generator of the Large Wave Flume. A NEESR payload project (Colorado State University) was undertaken that seeks to improve the understanding of the stresses from wave loading and run-up on residential structures. Advanced computational tools for coupling fluid-structure interaction, including turbulence, contact and impact, are being developed to assist with the design of experiments and complement parametric studies. These projects will contribute towards understanding the physical processes that occur during earthquake-generated tsunamis, including structural stress, debris flow and scour, inundation and overland flow, and landslide-generated tsunamis. Analytical and numerical model development and comparisons with the experimental results give engineers additional predictive tools to assist in the development of robust structures as well as identification of hazard zones and formulation of hazard plans.

  6. The role of dedicated data computing centers in the age of cloud computing

    Science.gov (United States)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  7. Computing needs of the superconducting super collider

    International Nuclear Information System (INIS)

    Diebold, R.

    1984-01-01

    Following a brief description of the SSC, the computing needs are discussed for both the accelerator design and the experimentation. The computing power required is considerably beyond that being used at present facilities, and parallel processing is expected to play an important role in supplying these needs

  8. Detecting Development Pattern of Urban Business Facilities Using Reviews Data

    Directory of Open Access Journals (Sweden)

    JIANG Botao

    2015-09-01

This paper reveals and utilizes the growing power of online customer reviews in their spatial and temporal context. The locations of commercial facilities and the online customer reviews offered by Dianping.com provide an important data source for the study of spatial and temporal dynamics of urban commercial facilities. The constraints of the road network are taken into account when computing the density of urban commercial facilities and associated online customer reviews, as well as their spatial distribution, temporal trend, and the coupling relationship between facility number and stratification level. This paper maps the spatial distribution of commercial facilities onto the nearby road network, reflecting the influences of the locations, number and satisfaction levels of other commercial facilities across various street types. Because more and more customers make their final shopping decisions by sorting search results by ratings and feedback, the research conducted in this paper can support quantitative evaluation of urban planning for commercial facility development.
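The network-constrained density computation described above reduces, once each facility has been snapped to its nearest road segment, to a count-per-length calculation. A minimal Python sketch of that idea (all identifiers and toy data are hypothetical, not taken from the paper):

```python
from collections import defaultdict

def segment_density(assignments, segment_lengths_km):
    """Facilities per kilometre for each road segment.

    assignments: (facility_id, segment_id) pairs, i.e. each facility
    snapped to its nearest road segment (hypothetical input format).
    segment_lengths_km: dict mapping segment_id -> length in km.
    """
    counts = defaultdict(int)
    for _facility, segment in assignments:
        counts[segment] += 1
    return {seg: counts[seg] / length
            for seg, length in segment_lengths_km.items()}

# Toy example: three facilities on two segments.
density = segment_density(
    [("f1", "s1"), ("f2", "s1"), ("f3", "s2")],
    {"s1": 0.5, "s2": 2.0},
)
# density == {"s1": 4.0, "s2": 0.5}  (facilities per km)
```

The same aggregation applies unchanged to review counts per segment instead of facility counts.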

  9. Computing Services and Assured Computing

    Science.gov (United States)

    2006-05-01

fighters’ ability to execute the mission.” Computing Services: we run IT systems that provide medical care, pay the warfighters, and manage maintenance. The environment spans ... users, 1,400 applications, 18 facilities, 180 software vendors, 18,000+ copies of executive software products, and virtually every type of mainframe.

  10. A test matrix sequencer for research test facility automation

    Science.gov (United States)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

The hardware and software configuration of a Test Matrix Sequencer, a general-purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor-controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.

  11. Project assembling and commissioning of a rewetting test facility

    International Nuclear Information System (INIS)

    Rezende, H.C.

    1985-08-01

A test facility (ITR - Instalacao de Testes de Remolhamento) has been erected at the Thermal-hydraulics Laboratory of CDTN, dedicated to the investigation of the basic phenomena that can occur during the reflood phase of a Loss of Coolant Accident (LOCA) in a Pressurized Water Reactor (PWR), utilizing tubular and annular test sections. The present work consists of a presentation of the facility design and a report of its commissioning. The mechanical aspects of the facility, its power supply system and its instrumentation are described. The results of the instrument calibrations and two operational tests are presented, and a comparison is made with calculations performed using a computer code. (Author) [pt

  12. Accelerator based research facility as an inter university centre

    International Nuclear Information System (INIS)

    Mehta, G.K.

    1995-01-01

The 15 UD Pelletron has been operating as a user facility since July 1991. It is being utilised by a large number of universities and other institutions for research in basic nuclear physics, materials science, atomic physics, radiobiology and radiation chemistry. There is an on-going programme to augment the accelerator facilities by injecting Pelletron beams into superconducting linear accelerator modules. A superconducting niobium resonator is being developed at Argonne National Laboratory as a joint collaborative effort. All other components, such as cryostats, RF instrumentation, the cryogenic distribution system and computer control, are being developed indigenously. Research facilities, augmentation plans and the research being conducted by the universities in various disciplines are described. (author)

  13. An improved Lagrangian relaxation and dual ascent approach to facility location problems

    DEFF Research Database (Denmark)

    Jörnsten, Kurt; Klose, Andreas

    2016-01-01

    not be reduced to the same extent as in the case of ordinary semi-Lagrangian relaxation. Hence, an effective method for optimizing the Lagrangian dual function is of utmost importance for obtaining a computational advantage from the simplified Lagrangian dual function. In this paper, we suggest a new dual ascent...... method for optimizing both the semi-Lagrangian dual function as well as its simplified form for the case of a generic discrete facility location problem and apply the method to the uncapacitated facility location problem. Our computational results show that the method generally only requires a very few...
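The dual ascent idea the abstract builds on can be illustrated on the ordinary Lagrangian dual of the uncapacitated facility location problem. The sketch below is a simplified, classical Erlenkotter-style ascent on the dual lower bound, not the authors' semi-Lagrangian method; the function name and the toy instance are illustrative:

```python
def dual_ascent_uflp(f, c):
    """Simplified dual ascent lower bound for the uncapacitated
    facility location problem (UFLP).

    f: facility opening costs f[i]
    c: c[i][j] = cost of serving customer j from facility i
    Returns (lower_bound, dual_values_v).
    """
    m, n = len(f), len(c[0])
    # Start each v_j at its cheapest assignment cost (dual feasible).
    v = [min(c[i][j] for i in range(m)) for j in range(n)]

    def slack(i):
        # Dual constraint slack of facility i; must stay >= 0.
        return f[i] - sum(max(0.0, v[j] - c[i][j]) for j in range(n))

    improved = True
    while improved:
        improved = False
        for j in range(n):
            # Next breakpoint: smallest c_ij strictly above v_j.
            above = [c[i][j] for i in range(m) if c[i][j] > v[j]]
            target = min(above) if above else v[j]
            # Cap the raise so every affected slack stays nonnegative.
            room = min(slack(i) for i in range(m) if c[i][j] <= v[j])
            new_v = min(target, v[j] + room)
            if new_v > v[j] + 1e-12:
                v[j] = new_v
                improved = True
    return sum(v), v

# Toy instance: 2 facilities (opening cost 3), 2 customers.
lb, v = dual_ascent_uflp([3, 3], [[1, 2], [2, 1]])
# lb == 4, a valid lower bound on the optimum of 6.
```

Each customer's dual value is pushed up to the next assignment-cost breakpoint while keeping all facility slacks nonnegative; the sum of the dual values is then a lower bound on the optimal cost.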

  14. IAEA puts cyber security in focus for nuclear facilities in 2015

    International Nuclear Information System (INIS)

    Shepherd, John

    2015-01-01

    Later in 2015 the International Atomic Energy Agency (IAEA) will convene a special conference to discuss computer security, in the wake of cyber attacks on global financial institutions and government agencies that were increasingly in the news. According to the IAEA, the prevalence of IT security incidents in recent years involving the Stuxnet malware 'demonstrated that nuclear facilities can be susceptible to cyber attack'. The IAEA said this and other events have significantly raised global concerns over potential vulnerabilities and the possibility of a cyber attack, or a joint cyber-physical attack, that could impact on nuclear security. The IAEA has correctly identified that the use of computers and other digital electronic equipment in physical protection systems at nuclear facilities, as well as in facility safety systems, instrumentation, information processing and communication, 'continues to grow and presents an ever more likely target for cyber attack'. The agency's Vienna conference, to be held in June, will review emerging trends in computer security and areas that may still need to be addressed. The meeting follows a declaration of ministers of IAEA member states in 2013 that called on the agency to help raise awareness of the growing threat of cyber attacks and their potential impact on nuclear security. The conference is being organised 'to foster international cooperation in computer security as an essential element of nuclear security', the IAEA said. Details of the IAEA's 'International Conference on Computer Security in a Nuclear World: Expert Discussion and Exchange' are on the 'meetings' section of the agency's web site.

  15. RI management by personal computer

    International Nuclear Information System (INIS)

    Ono, Isamu; Hiyoshi, Katsunori; Ono, Kazuhiko; Morimitsu, Wataru

    1983-01-01

For RI-handling facilities up to medium scale, it has been studied whether a personal computer is applicable to the practical management of radioisotopes. In the present system, up to 1280 records can be written per mini floppy diskette, which is sufficient capacity to serve as the storage medium for one year across the various record books. Correction for radioactive decay as well as various totalizations can be made easily, so that the state of RI storage and use for the whole RI-handling facility can be grasped. Further, by improving the output formats, transfer to the legally required record books is possible. For these reasons, a personal computer is practically applicable to this management system and also leads to labor saving for RI management personnel. (Mori, K.)
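The decay correction mentioned above follows the standard law A = A0·exp(−ln 2 · t / T½). A minimal sketch (the function name and the nuclide example are illustrative, not from the report):

```python
import math

def decay_corrected_activity(a0_mbq, half_life_days, elapsed_days):
    """Activity remaining after elapsed time:
    A = A0 * exp(-ln2 * t / T_half)."""
    decay_constant = math.log(2.0) / half_life_days
    return a0_mbq * math.exp(-decay_constant * elapsed_days)

# P-32 (half-life ~14.3 d): 100 MBq decays to ~50 MBq in one half-life.
a = decay_corrected_activity(100.0, 14.3, 14.3)
# a ≈ 50.0 MBq
```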

  16. A computational test facility for distributed analysis of gravitational wave signals

    International Nuclear Information System (INIS)

    Amico, P; Bosi, L; Cattuto, C; Gammaitoni, L; Punturo, M; Travasso, F; Vocca, H

    2004-01-01

    In the gravitational wave detector Virgo, the in-time detection of a gravitational wave signal from a coalescing binary stellar system is an intensive computational task. A parallel computing scheme using the message passing interface (MPI) is described. Performance results on a small-scale cluster are reported
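The parallel scheme can be emulated without MPI: partition the template bank into per-rank blocks, score each block independently, then reduce to the global best match. A serial Python sketch of that scatter/compute/reduce structure (the toy signal, template bank and dot-product score are illustrative stand-ins for the real matched filter):

```python
def partition(n_items, n_ranks):
    """Contiguous block decomposition: the slice of the template
    bank that MPI rank r out of n_ranks would search."""
    base, rem = divmod(n_items, n_ranks)
    bounds, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < rem else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

def score(signal, template):
    """Stand-in matched-filter statistic: plain dot product."""
    return sum(a * b for a, b in zip(signal, template))

# Emulate 3 ranks searching a bank of 8 templates, then a reduce
# step that picks the global best match.
signal = [0.0, 1.0, 0.0, -1.0]
bank = [[0.0] * 4 for _ in range(8)]
bank[5] = [0.0, 1.0, 0.0, -1.0]          # the one matching template
local_best = []
for lo, hi in partition(len(bank), 3):   # each iteration = one "rank"
    best = max(range(lo, hi), key=lambda i: score(signal, bank[i]))
    local_best.append((best, score(signal, bank[best])))
winner = max(local_best, key=lambda p: p[1])[0]
# winner == 5
```

In an actual MPI implementation the loop body would run on separate processes and the final `max` would be a reduction such as `MPI_Allreduce`.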

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  18. DYMAC computer system

    International Nuclear Information System (INIS)

    Hagen, J.; Ford, R.F.

    1979-01-01

    The DYnamic Materials ACcountability program (DYMAC) has been monitoring nuclear material at the Los Alamos Scientific Laboratory plutonium processing facility since January 1978. This paper presents DYMAC's features and philosophy, especially as reflected in its computer system design. Early decisions and tradeoffs are evaluated through the benefit of a year's operating experience

  19. Large mass storage facility

    Energy Technology Data Exchange (ETDEWEB)

    Peskin, Arnold M.

    1978-08-01

This is the final report of a study group organized to investigate questions surrounding the acquisition of a large mass storage facility. The programmatic justification for such a system at Brookhaven is reviewed. Several candidate commercial products are identified and discussed. A draft of a procurement specification is developed. Some thoughts on possible new directions for computing at Brookhaven are also offered, although this topic was addressed outside of the context of the group's deliberations. 2 figures, 3 tables.

  20. Data base of reactor physics experimental results in Kyoto University critical assembly experimental facilities

    International Nuclear Information System (INIS)

    Ichihara, Chihiro; Fujine, Shigenori; Hayashi, Masatoshi

    1986-01-01

The Kyoto University critical assembly experimental facilities belong to the Kyoto University Research Reactor Institute and comprise a versatile critical assembly constructed for experimental studies of reactor physics and reactor engineering. The facilities are available for common use by universities throughout Japan. During more than ten years since the initial criticality in 1974, various experiments on reactor physics and reactor engineering have been carried out using many experimental facilities such as two solid-moderated cores, a light water-moderated core and a neutron generator. The experiments carried out were diverse, and finding the required data among them is very troublesome; accordingly, it has become necessary to build a computer-processable data base of the data accumulated during the past more than ten years. The outline of the data base, the data base CAEX using personal computers, the data base supported by a large computer and so on are reported. (Kako, I.)

  1. Use of fire hazard analysis to cost effectively manage facility modifications

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, K., E-mail: kkruger@plcfire.com [PLC Fire Safety Solutions, Fredericton, NB (Canada); Cronk, R., E-mail: rcronk@plcfire.com [PLC Fire Safety Solutions, Mississauga, ON (Canada)

    2014-07-01

In Canada, licensed nuclear power facilities, or facilities that process, handle or store nuclear material, are required by the Canadian Nuclear Safety Commission to have a change control process in place. These processes are in place to avoid facility modifications that could result in an increase in fire hazards or degradation of fire protection systems. Change control processes can have a significant impact on budgets associated with plant modifications. A Fire Hazard Analysis (FHA) is also a regulatory requirement for licensed facilities in Canada. An FHA is an extensive evaluation of a facility's construction, nuclear safety systems, fire hazards, and fire protection features. This paper is presented to outline how computer-based data management software can help organize facilities' fire safety information, manage this information, and reduce the costs associated with preparation of FHAs as well as facilities' change control processes. (author)

  2. Computer systems and nuclear industry

    International Nuclear Information System (INIS)

    Nkaoua, Th.; Poizat, F.; Augueres, M.J.

    1999-01-01

This article deals with computer systems in the nuclear industry. In most nuclear facilities it is necessary to handle a great deal of data and actions in order to help the plant operator run the plant, control physical processes and assure safety. The design of reactors requires reliable computer codes able to simulate neutronic, mechanical or thermo-hydraulic behaviour. Calculations and simulations play an important role in safety analysis. In each of these domains, computer systems have progressively appeared as efficient tools to challenge and master complexity. (A.C.)

  3. Web-Based Requesting and Scheduling Use of Facilities

    Science.gov (United States)

    Yeager, Carolyn M.

    2010-01-01

Automated User's Training Operations Facility Utilization Request (AutoFUR) is prototype software that administers a Web-based system for requesting and allocating facilities and equipment for astronaut-training classes in conjunction with scheduling the classes. AutoFUR also has potential for similar use in such applications as scheduling flight-simulation equipment and instructors in commercial airplane-pilot training, managing preventive-maintenance facilities, and scheduling operating rooms, doctors, nurses, and medical equipment for surgery. Whereas requesting and allocating facilities was previously a manual process that entailed examination of documents (including paper drawings) from different sources, AutoFUR partly automates the process and makes all of the relevant information available via the requester's computer. By use of AutoFUR, an instructor can fill out a facility-utilization request (FUR) form on line, consult the applicable flight manifest(s) to determine what equipment is needed and where it should be placed in the training facility, reserve the corresponding hardware listed in a training-hardware inventory database, search for alternative hardware if necessary, submit the FUR for processing, and cause paper forms to be printed. AutoFUR also maintains a searchable archive of prior FURs.

  4. Facility for protection of technological, especially power assemblies

    International Nuclear Information System (INIS)

    Cichon, S.; Hahn, J.; Malatek, K.; Randak, O.; Vitovec, P.; Zidek, M.

    1987-01-01

    The facility consists of sensors producing analog signals, used as input information for the evaluation of process conditions or equipment failures. The sensors are fitted to partial functional parts of technological assemblies, such as nuclear reactors. The individual sensors are connected via unification converters to the respective protection units. The facility is resistant to breakdowns of the analog sensors and other components including the computer; it features the possibility of in-service failure detection and the capability of immediate regeneration following a failure. This capability prevents, with high probability, the production of non-accident failures of the technological assembly. The block diagram is described of the facility and its operation in the event of an emergency. (J.B.). 1 fig

  5. Science and Engineering Research Council Central Laser Facility

    International Nuclear Information System (INIS)

    1981-03-01

This report covers the work done at, or in association with, the Central Laser Facility during the year April 1980 to March 1981. In the first chapter the major reconstruction and upgrade of the glass laser, which has been undertaken in order to increase the versatility of the facility, is described. The work of the six groups of the Glass Laser Scientific Programme and Scheduling Committee is described in further chapters entitled: glass laser development, laser plasma interactions, transport and particle emission studies, ablative acceleration and compression studies, spectroscopy and XUV lasers, and theory and computation. Publications based on the work of the facility which have either appeared or been accepted for publication during the year are listed. (U.K.)

  6. Support facilities

    International Nuclear Information System (INIS)

    Williamson, F.S.; Blomquist, J.A.; Fox, C.A.

    1977-01-01

    Computer support is centered on the Remote Access Data Station (RADS), which is equipped with a 1000 lpm printer, 1000 cpm reader, and a 300 cps paper tape reader with 500-foot spools. The RADS is located in a data preparation room with four 029 key punches (two of which interpret), a storage vault for archival magnetic tapes, card files, and a 30 cps interactive terminal principally used for job inquiry and routing. An adjacent room provides work space for users, with a documentation library and a consultant's office, plus file storage for programs and their documentations. The facility has approximately 2,600 square feet of working laboratory space, and includes two fully equipped photographic darkrooms, sectioning and autoradiographic facilities, six microscope cubicles, and five transmission electron microscopes and one Cambridge scanning electron microscope equipped with an x-ray energy dispersive analytical system. Ancillary specimen preparative equipment includes vacuum evaporators, freeze-drying and freeze-etching equipment, ultramicrotomes, and assorted photographic and light microscopic equipment. The extensive physical plant of the animal facilities includes provisions for holding all species of laboratory animals under controlled conditions of temperature, humidity, and lighting. More than forty rooms are available for studies of the smaller species. These have a potential capacity of more than 75,000 mice, or smaller numbers of larger species and those requiring special housing arrangements. There are also six dog kennels to accommodate approximately 750 dogs housed in runs that consist of heated indoor compartments and outdoor exercise areas

  7. Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities

    International Nuclear Information System (INIS)

    Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.; Salas, E.; Martin, N.

    2008-01-01

The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that includes information related to the physical inventory of the whole plant and the radiological survey. The radiological inventory of all components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed in order to obtain the radiological inventory in an automatic way. Results: a computer application that is able to estimate the radiological inventory from the radiological measurements or the characterization program has been developed. This computer application includes the statistical functions needed to estimate central tendency and variability, e.g. mean, median, variance, confidence intervals, coefficients of variation, etc. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision-making for future sampling surveys.
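The statistical functions listed above can be sketched with the Python standard library. This uses a normal-approximation confidence interval rather than whatever the application actually implements; all names and sample values are illustrative:

```python
import math
import statistics
from statistics import NormalDist

def summarize(measurements, confidence=0.95):
    """Central tendency and variability of a set of measurements,
    with a normal-approximation confidence interval for the mean."""
    n = len(measurements)
    mean = statistics.fmean(measurements)
    sd = statistics.stdev(measurements)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * sd / math.sqrt(n)
    return {
        "mean": mean,
        "median": statistics.median(measurements),
        "stdev": sd,
        "cv": sd / mean,                 # coefficient of variation
        "ci": (mean - half_width, mean + half_width),
    }

# Hypothetical contamination measurements (Bq/g) from one component.
stats = summarize([12.1, 11.8, 12.4, 12.0, 11.7, 12.2])
```

For the small sample sizes typical of characterization surveys, a Student-t interval would be more appropriate than the normal approximation shown here.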

  8. Digital tape unit test facility software

    Science.gov (United States)

    Jackson, J. T.

    1971-01-01

    Two computer programs are described which are used for the collection and analysis of data from the digital tape unit test facility (DTUTF). The data are the recorded results of skew tests made on magnetic digital tapes which are used on computers as input/output media. The results of each tape test are keypunched onto an 80 column computer card. The format of the card is checked and the card image is stored on a master summary tape via the DTUTF card checking and tape updating system. The master summary tape containing the results of all the tape tests is then used for analysis as input to the DTUTF histogram generating system which produces a histogram of skew vs. date for selected data, followed by some statistical analysis of the data.
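The histogram of skew vs. date amounts to grouping the test records by date and aggregating; a minimal sketch (the record layout and values are hypothetical, not the DTUTF card format):

```python
from collections import defaultdict
from statistics import fmean

def skew_by_date(records):
    """Group skew-test records (date, skew_value) by date and return
    the per-date mean skew - the aggregation behind a skew-vs-date
    histogram."""
    groups = defaultdict(list)
    for date, skew in records:
        groups[date].append(skew)
    return {date: fmean(values) for date, values in sorted(groups.items())}

hist = skew_by_date([
    ("1971-03-01", 2.0), ("1971-03-01", 4.0),
    ("1971-03-02", 3.0),
])
# hist == {"1971-03-01": 3.0, "1971-03-02": 3.0}
```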

  9. XML Based Scientific Data Management Facility

    Science.gov (United States)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself and the transformation functions, and also to apply the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
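The discovery step of such a facility can be illustrated with standard XML parsing. Python's standard library lacks an XSLT engine, so this sketch covers only metadata extraction, not the Xalan-based transformation; the document schema below is made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical dataset descriptor: metadata plus a pointer to the
# transformation that converts the data to another format.
DOC = """<dataset>
  <metadata><name>wind-tunnel-run-42</name><format>csv</format></metadata>
  <transform href="csv-to-hdf5.xsl"/>
</dataset>"""

root = ET.fromstring(DOC)
name = root.findtext("metadata/name")       # discover the data set
fmt = root.findtext("metadata/format")      # discover its format
xsl = root.find("transform").get("href")    # discover its transform
# name == "wind-tunnel-run-42", fmt == "csv", xsl == "csv-to-hdf5.xsl"
```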

  10. Integrated computer aided design simulation and manufacture

    OpenAIRE

    Diko, Faek

    1989-01-01

Computer Aided Design (CAD) and Computer Aided Manufacture (CAM) have been investigated and developed for twenty years as standalone systems. A large number of very powerful but independent packages have been developed for Computer Aided Design, Analysis and Manufacture. However, in most cases these packages have poor facilities for communicating with other packages. Recently attempts have been made to develop integrated CAD/CAM systems and many software companies a...

  11. High Power RF Test Facility at the SNS

    CERN Document Server

    Kang, Yoon W; Campisi, Isidoro E; Champion, Mark; Crofford, Mark; Davis, Kirk; Drury, Michael A; Fuja, Ray E; Gurd, Pamela; Kasemir, Kay-Uwe; McCarthy, Michael P; Powers, Tom; Shajedul Hasan, S M; Stirbet, Mircea; Stout, Daniel; Tang, Johnny Y; Vassioutchenko, Alexandre V; Wezensky, Mark

    2005-01-01

An RF Test Facility has been completed in the SNS project at ORNL to support test and conditioning operation of RF subsystems and components. The system consists of two transmitters for two klystrons powered by a common high voltage pulsed converter modulator that can provide power to two independent RF systems. The waveguides are configured with WR2100 and WR1150 sizes for the presently used frequencies: 402.5 MHz and 805 MHz. Both 402.5 MHz and 805 MHz systems have circulator-protected klystrons that can be powered by the modulator, which is capable of delivering 11 MW peak and 1 MW average power. The facility has been equipped with computer control for various RF processing and complete dual frequency operation. More than forty 805 MHz fundamental power couplers for the SNS superconducting linac (SCL) cavities have been RF conditioned in this facility. The facility provides more than 1000 ft2 floor area for various test setups. The facility also has a shielded cave area that can support high power tests of normal conducti...

  12. 77 FR 61771 - Facility Security Officer Training Requirements

    Science.gov (United States)

    2012-10-11

    ... following: (1) Draft model FSO training course; (2) Computer-based training and distance learning; (3... DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2012-0908] Facility Security Officer... Security Officer training program, with the primary focus on developing the curriculum for such a program...

  13. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.
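The inverse problem sketched in this abstract, inferring facility conditions from collected observations, can be illustrated with a minimal Bayesian update; the facility states, observation labels, and likelihood values below are purely hypothetical, not taken from any of the cited facility models:

```python
def posterior(prior, likelihoods, observation):
    """Update belief over candidate facility states from one observation.

    prior       -- dict mapping state -> prior probability
    likelihoods -- dict mapping state -> {observation: P(obs | state)}
    Returns the normalized posterior over states.
    """
    unnorm = {s: prior[s] * likelihoods[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Hypothetical two-state example: declared operation vs. off-normal use.
prior = {"declared": 0.9, "off-normal": 0.1}
lik = {
    "declared":   {"high_stack_heat": 0.2, "normal": 0.8},
    "off-normal": {"high_stack_heat": 0.7, "normal": 0.3},
}
print(posterior(prior, lik, "high_stack_heat"))  # belief shifts toward off-normal
```

A real integrated framework would chain many such updates across sensor and open-source observation streams; the point here is only the shape of the inference step.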

  14. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Garcia, Humberto; Burr, Tom; Coles, Garill A.; Edmunds, Thomas A.; Garrett, Alfred; Gorensek, Maximilian; Hamm, Luther; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Tzanos, Constantine P.; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  15. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  16. Investigation of analytical and experimental behavior of nuclear facility ventilation systems

    International Nuclear Information System (INIS)

    Smith, P.R.; Ricketts, C.I.; Andrae, R.W.; Bolstad, J.W.; Horak, H.L.; Martin, R.A.; Tang, P.K.; Gregory, W.S.

    1979-01-01

    The behavior of nuclear facility ventilation systems subjected to both natural and man-caused accidents is being investigated. The purpose of this paper is to present a program overview and highlight recent results of the investigations. The program includes both analytical and experimental investigations. Computer codes for predicting accident-induced gas dynamics, and test facilities to obtain supporting experimental data that define the structural integrity and confinement effectiveness of ventilation system components, are described. A unique test facility and recently obtained structural limits for high-efficiency particulate air filters are reported.

  17. Computer based plant display and digital control system of Wolsong NPP Tritium Removal Facility

    International Nuclear Information System (INIS)

    Jung, C.; Smith, B.; Tosello, G.; Grosbois, J. de; Ahn, J.

    2007-01-01

    The Wolsong Tritium Removal Facility (WTRF) is an AECL-designed, first-of-a-kind facility that removes tritium from the heavy water used in systems of the CANDU reactors in operation at the Wolsong Nuclear Power Plant in South Korea. The Plant Display and Control System (PDCS) provides digital plant monitoring and control for the WTRF and offers the advantages of state-of-the-art digital control system technologies for operations and maintenance. The overall features of the PDCS will be described, and some of the specific approaches taken on the project to save construction time and costs, to reduce in-service life-cycle costs, and to improve quality will be presented. The PDCS consists of two separate computer sub-systems: the Digital Control System (DCS) and the Plant Display System (PDS). The PDS provides the computer-based Human-Machine Interface (HMI) for operators and permits efficient supervisory or device-level monitoring and control. A System Maintenance Console (SMC) is included in the PDS for the purpose of software and hardware configuration and on-line maintenance. A Historical Data System (HDS) is also included in the PDS as a data server that continuously captures and logs process data and events for long-term storage and on-demand selective retrieval. The PDCS of the WTRF has been designed and implemented based on an off-the-shelf PDS/DCS product combination, the DeltaV system from Emerson. The design includes fully redundant Ethernet network communications, controllers, and power supplies, with redundancy on selected I/O modules. The DCS provides fieldbus communications to interface with third-party controllers supplied on specialized skids, and supports HART communication with field transmitters. The DCS control logic was configured using a modular and graphical approach. The control strategies are primarily device control modules implemented as autonomous control loops, using the IEC 61131-3 Function Block Diagram (FBD) and Structured Text languages.
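The autonomous device control loops described above can be sketched outside any DCS product. The Python fragment below mimics an IEC 61131-3 function block instance with retained internal state; the setpoint, deadband, and valve logic are illustrative, not the WTRF configuration:

```python
def device_control_module(setpoint, deadband):
    """Sketch of an autonomous on/off device control loop.

    Like a function block instance, the module retains state (the last
    commanded output) between scan cycles, giving hysteresis behaviour.
    """
    output = False  # retained state, as in a function block instance

    def scan(process_value):
        nonlocal output
        if process_value > setpoint + deadband:
            output = True   # e.g. command a valve open
        elif process_value < setpoint - deadband:
            output = False  # command it closed again
        return output       # inside the deadband: hold the last command
    return scan

level_loop = device_control_module(setpoint=50.0, deadband=5.0)
print(level_loop(60.0))  # → True  (above the deadband)
print(level_loop(48.0))  # → True  (inside the deadband: held)
```

In a real DCS the same structure would be drawn graphically in FBD and executed by the controller's scan cycle rather than by explicit function calls.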

  18. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand, as limits and caps on usage are imposed. Our trial workflows allow us t...

  19. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... Children's (Pediatric) CT (Computed Tomography) Sponsored by Please note RadiologyInfo.org is not a medical facility. Please ... is further reviewed by committees from the American College of Radiology (ACR) and the Radiological Society of ...

  20. Concept of scaled test facility for simulating the PWR thermalhydraulic behaviour

    International Nuclear Information System (INIS)

    Silva Filho, E.

    1990-01-01

    This work deals with the design of a scaled test facility representative of a typical pressurized water reactor plant, for simulation of small-break loss-of-coolant accidents. The computer code RELAP5/MOD1 has been used to simulate the accident and to compare the test facility behaviour with that of the reactor plant. The results demonstrate similar thermal-hydraulic behaviour of the two systems. (author)

  1. Quantum information. Teleportation - cryptography - quantum computer; Quanteninformation. Teleportation - Kryptografie - Quantencomputer

    Energy Technology Data Exchange (ETDEWEB)

    Koenneker, Carsten (comp.)

    2012-11-01

    The following topics are dealt with: Reality in the test facility, quantum teleportation, the reality of quanta, interaction-free quantum measurement, rules for quantum computers, quantum computers with ions, spintronics with diamond, the limits of the quantum computers, a view in the future of quantum optics. (HSI)

  2. Turbo Pascal Computer Code for PIXE Analysis

    International Nuclear Information System (INIS)

    Darsono

    2002-01-01

    For optimal utilization of the 150 kV ion accelerator facilities, and to support analysis techniques using the ion accelerator, research and development of low-energy PIXE technology has been undertaken. The R and D for the hardware of the low-energy PIXE installation in P3TM has been carried out since the year 2000. To support the R and D of the PIXE accelerator facilities in step with the R and D of the PIXE hardware, development of PIXE analysis software is also needed. The development of a database of PIXE analysis software using Turbo Pascal computer code is reported in this paper. This computer code computes the ionization cross-section, the fluorescence yield, and the stopping power of elements; it also computes the attenuation coefficient at the X-ray energy. The computer code is named PIXEDASIS and is part of a larger computer code planned for PIXE analysis that will be constructed in the near future. PIXEDASIS is designed to be communicative with the user. It takes input from the keyboard. The output is shown on the PC monitor and can also be printed. Performance tests of PIXEDASIS show that it operates well and provides data in agreement with data from other literature. (author)
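The X-ray attenuation calculation mentioned above is an application of the Beer-Lambert law. A minimal sketch in Python follows; the function name and the mass attenuation coefficient used are illustrative, not values from PIXEDASIS or tabulated data:

```python
import math

def transmitted_fraction(mu_rho, density, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-(mu/rho) * rho * x).

    mu_rho       -- mass attenuation coefficient at the X-ray energy (cm^2/g)
    density      -- absorber density (g/cm^3)
    thickness_cm -- absorber thickness (cm)
    """
    return math.exp(-mu_rho * density * thickness_cm)

# Illustrative (not tabulated) values for a thin aluminium absorber:
frac = transmitted_fraction(mu_rho=0.5, density=2.70, thickness_cm=0.1)
print(f"{frac:.3f}")  # fraction of the incident X-ray intensity transmitted
```

A PIXE analysis code applies this kind of correction, with energy-dependent tabulated coefficients, when converting detected X-ray yields back to elemental concentrations.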

  3. Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute

    International Nuclear Information System (INIS)

    Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.

    2004-01-01

    The complexity of space fission systems and the strict requirements on minimizing their weight and dimensions, along with the wish to decrease development expenditures, demand experimental work whose results can be used in design, safety substantiation, and licensing procedures. The experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, quality control in the production process, and 'iron' substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities, and the unique ORM facility for shielding investigations at the operating OR nuclear research reactor, were created at the Kurchatov Institute to solve these tasks. The range of activities performed at these facilities within the implementation of previous Russian nuclear power system programs is briefly described in the paper. This experience shall be analyzed in terms of the methodological approach to development of future space nuclear systems (this analysis is beyond this paper). Because these facilities remain available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.

  4. Large-coil-test-facility fault-tree analysis

    International Nuclear Information System (INIS)

    1982-01-01

    An operating-safety study is being conducted for the Large Coil Test Facility (LCTF). The purpose of this study is to provide the facility operators and users with added insight into potential problem areas that could affect the safety of personnel or the availability of equipment. This is a preliminary report on Phase I of that study. A central feature of the study is the incorporation of engineering judgements (by LCTF personnel) into an outside, overall view of the facility. The LCTF was analyzed in terms of 32 subsystems, each of which is subject to failure from any of 15 generic failure initiators. The study identified approximately 40 primary areas of concern, which were subjected to a computer analysis as an aid in understanding the complex subsystem interactions that can occur within the facility. The study did not analyze in detail the internal structure of the subsystems at the individual component level. A companion study using traditional fault-tree techniques did analyze approximately 20% of the LCTF at the component level. A comparison between these two analysis techniques is included in Section 7.

  5. A computer code to estimate accidental fire and radioactive airborne releases in nuclear fuel cycle facilities: User's manual for FIRIN

    International Nuclear Information System (INIS)

    Chan, M.K.; Ballinger, M.Y.; Owczarski, P.C.

    1989-02-01

    This manual describes the technical bases and use of the computer code FIRIN. This code was developed to estimate the source term release of smoke and radioactive particles from potential fires in nuclear fuel cycle facilities. FIRIN is a product of a broader study, Fuel Cycle Accident Analysis, which Pacific Northwest Laboratory conducted for the US Nuclear Regulatory Commission. The technical bases of FIRIN consist of a nonradioactive fire source term model, compartment effects modeling, and radioactive source term models. These three elements interact with each other in the code, affecting the course of the fire. This report also serves as a complete FIRIN user's manual. Included are the FIRIN code description with the methods/algorithms of calculation and subroutines, code operating instructions with input requirements, and output descriptions. 40 refs., 5 figs., 31 tabs

  6. Computer-Aided Engineering Education at the K.U. Leuven.

    Science.gov (United States)

    Snoeys, R.; Gobin, R.

    1987-01-01

    Describes some recent initiatives and developments in the computer-aided design program in the engineering faculty of the Katholieke Universiteit Leuven (Belgium). Provides a survey of the engineering curriculum, the computer facilities, and the main software packages available. (TW)

  7. 300 Area fuel supply facilities deactivation function analysis report

    International Nuclear Information System (INIS)

    Lund, D.P.

    1995-09-01

    The document contains the functions, function definitions, function interfaces, function interface definitions, Integrated Computer Aided Manufacturing Definition (IDEF0) diagrams, and a function hierarchy chart that describe what needs to be performed to deactivate the 300 Area Fuel Supply Facilities

  8. Modeling of the YALINA booster facility by the Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Kondev, F.; Kiyavitskaya, H.; Serafimovich, I.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2007-01-01

    The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, an extension of the MCNP code with burnup capability, because of its additional feature for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  9. IAEA puts cyber security in focus for nuclear facilities in 2015

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, John [nuclear 24, Brighton (United Kingdom)

    2015-01-15

    Later in 2015 the International Atomic Energy Agency (IAEA) will convene a special conference to discuss computer security, in the wake of cyber attacks on global financial institutions and government agencies that were increasingly in the news. According to the IAEA, the prevalence of IT security incidents in recent years involving the Stuxnet malware 'demonstrated that nuclear facilities can be susceptible to cyber attack'. The IAEA said this and other events have significantly raised global concerns over potential vulnerabilities and the possibility of a cyber attack, or a joint cyber-physical attack, that could impact on nuclear security. The IAEA has correctly identified that the use of computers and other digital electronic equipment in physical protection systems at nuclear facilities, as well as in facility safety systems, instrumentation, information processing and communication, 'continues to grow and presents an ever more likely target for cyber attack'. The agency's Vienna conference, to be held in June, will review emerging trends in computer security and areas that may still need to be addressed. The meeting follows a declaration of ministers of IAEA member states in 2013 that called on the agency to help raise awareness of the growing threat of cyber attacks and their potential impact on nuclear security. The conference is being organised 'to foster international cooperation in computer security as an essential element of nuclear security', the IAEA said. Details of the IAEA's 'International Conference on Computer Security in a Nuclear World: Expert Discussion and Exchange' are on the 'meetings' section of the agency's web site.

  10. Public computing options for individuals with cognitive impairments: survey outcomes.

    Science.gov (United States)

    Fox, Lynn Elizabeth; Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Prideaux, Jason

    2009-09-01

    To examine availability and accessibility of public computing for individuals with cognitive impairment (CI) who reside in the USA. A telephone survey was administered as a semi-structured interview to 145 informants representing seven types of public facilities across three geographically distinct regions using a snowball sampling technique. An Internet search of wireless (Wi-Fi) hotspots supplemented the survey. Survey results showed the availability of public computer terminals and Internet hotspots was greatest in the urban sample, followed by the mid-sized and rural cities. Across seven facility types surveyed, libraries had the highest percentage of access barriers, including complex queue procedures, login and password requirements, and limited technical support. University assistive technology centres and facilities with a restricted user policy, such as brain injury centres, had the lowest incidence of access barriers. Findings suggest optimal outcomes for people with CI will result from a careful match of technology and the user that takes into account potential barriers and opportunities to computing in an individual's preferred public environments. Trends in public computing, including the emergence of widespread Wi-Fi and limited access to terminals that permit auto-launch applications, should guide development of technology designed for use in public computing environments.

  11. Computing requirements for S.S.C. accelerator design and studies

    International Nuclear Information System (INIS)

    Dragt, A.; Talman, R.; Siemann, R.; Dell, G.F.; Leemann, B.; Leemann, C.; Nauenberg, U.; Peggs, S.; Douglas, D.

    1984-01-01

    We estimate the computational hardware resources that will be required for accelerator physics studies during the design of the Superconducting Super Collider. It is found that both Class IV and Class VI facilities (1) will be necessary. We describe a user environment for these facilities that is desirable within the context of accelerator studies. An acquisition scenario for these facilities is presented

  12. [The Psychomat computer complex for psychophysiologic studies].

    Science.gov (United States)

    Matveev, E V; Nadezhdin, D S; Shemsudov, A I; Kalinin, A V

    1991-01-01

    The authors analyze the design principles of a computerized psychophysiological system for universal use. They show the effectiveness of using computer technology as a combination of the universal computation and control capabilities of a personal computer equipped with problem-oriented, specialized facilities for stimulus presentation and detection of the test subject's reactions. They define the hardware and software configuration of the microcomputer psychophysiological system "Psychomat" and describe its functional possibilities and basic medico-technical characteristics. Organizational issues of maintaining its full-scale production are also reviewed.

  13. Guide to computing at ANL

    Energy Technology Data Exchange (ETDEWEB)

    Peavler, J. (ed.)

    1979-06-01

    This publication gives details about the hardware, software, procedures, and services of the Central Computing Facility, as well as information about how to become an authorized user. The languages, compilers, libraries, and applications packages available are described. 17 tables. (RWR)

  14. Launch Site Computer Simulation and its Application to Processes

    Science.gov (United States)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed-developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon-driven model that uses commercial off-the-shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted, and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation changes can be made and processes perfected before they are implemented.
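The kind of facility-utilization modeling described above can be sketched as a small discrete-event simulation; the stage names, facility counts, and durations below are hypothetical, not figures from the STS Processing Model:

```python
import heapq

def simulate_launches(n_orbiters, stages, horizon_days):
    """Minimal discrete-event sketch of a launch-processing flow.

    stages -- list of (name, n_facilities, duration_days) tuples; each
    orbiter passes through the stages in order, waiting for a free
    facility at each one. Returns launches completed within the horizon.
    """
    # For each stage, a heap of the times at which its facilities free up.
    free_at = [[0.0] * n for _, n, _ in stages]
    for heap in free_at:
        heapq.heapify(heap)
    launches = 0
    for _ in range(n_orbiters):
        t = 0.0
        for i, (_, _, duration) in enumerate(stages):
            start = max(t, heapq.heappop(free_at[i]))  # wait for a free facility
            t = start + duration
            heapq.heappush(free_at[i], t)              # facility busy until t
        if t <= horizon_days:
            launches += 1
    return launches

# Three hypothetical stages: orbiter processing, stacking, and the pad.
stages = [("OPF", 2, 60), ("VAB", 1, 10), ("Pad", 2, 20)]
print(simulate_launches(8, stages, 365))  # → 8
```

Changing the facility counts or durations and re-running shows the impact on launch rate, which is the "what-if" use the paper describes, here reduced to a few lines.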

  15. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  16. Computer-Aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1996-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a Commercial-Off-The-Shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. This system also provides expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center, and provides back-up capabilities for the Plutonium Processing Facility

  17. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  18. Physics Detector Simulation Facility Phase II system software description

    International Nuclear Information System (INIS)

    Scipioni, B.; Allen, J.; Chang, C.; Huang, J.; Liu, J.; Mestad, S.; Pan, J.; Marquez, M.; Estep, P.

    1993-05-01

    This paper presents the Physics Detector Simulation Facility (PDSF) Phase II system software. A key element in the design of a distributed computing environment for the PDSF has been the separation and distribution of the major functions. The facility has been designed to support batch and interactive processing, and to incorporate the file and tape storage systems. By distributing these functions, it is often possible to provide higher throughput and resource availability. Similarly, the design is intended to exploit event-level parallelism in an open distributed environment

  19. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... CT (Computed Tomography) Sponsored by Please note RadiologyInfo.org is not a medical facility. Please contact your ... links: For the convenience of our users, RadiologyInfo .org provides links to relevant websites. RadiologyInfo.org , ACR ...

  20. Advanced physical protection systems for facilities and transportation

    International Nuclear Information System (INIS)

    Jones, O.E.

    1976-01-01

    Sandia Laboratories is developing advanced physical protection safeguards in order to improve the security of special nuclear materials, facilities, and transportation. Computer models are being used to assess the cost-effectiveness of alternative systems for protecting facilities against external attack, which may include internal assistance, and against internal theft or sabotage. Physical protection elements such as admittance controls, portals and detectors, perimeter and interior intrusion alarms, fixed and remotely activated barriers, and secure communications are being evaluated, adapted, and, where required, developed. New facilities safeguards concepts which involve 'control loops' between physical protection and materials control elements are being evolved jointly by Sandia Laboratories and Los Alamos Scientific Laboratory. Special vehicles and digital communications equipment have been developed for the ERDA safe-secure transportation system. The current status and direction of these activities are surveyed

  1. Advanced physical protection systems for facilities and transportation

    International Nuclear Information System (INIS)

    Jones, O.E.

    1976-01-01

    Sandia Laboratories is developing advanced physical protection safeguards in order to improve the security of special nuclear materials, facilities, and transportation. Computer models are being used to assess the cost-effectiveness of alternative systems for protecting facilities against external attack, which may include internal assistance, and against internal theft or sabotage. Physical protection elements such as admittance controls, portals and detectors, perimeter and interior intrusion alarms, fixed and remotely activated barriers, and secure communications are being evaluated, adapted, and, where required, developed. New facilities safeguards concepts which involve 'control loops' between physical protection and materials control elements are being evolved jointly by Sandia Laboratories and Los Alamos Scientific Laboratory. Special vehicles and digital communications equipment have been developed for the ERDA safe-secure transportation system. The current status and direction of these activities are surveyed

  2. Computer programs supporting instruction in acoustics

    OpenAIRE

    Melody, Kevin Andrew

    1998-01-01

    Approved for public release; distribution is unlimited. Traditionally, the study of mechanical vibration and sound wave propagation has been presented through textbooks, classroom discussion, and laboratory experiments. However, in today's academic environment, students have access to high-performance computing facilities which can greatly augment the learning process. This thesis provides computer algorithms for examining selected topics drawn from the text, Fundamentals of Acoustics, Third...

  3. The establishment of computer system for nuclear material accounting

    International Nuclear Information System (INIS)

    Hong, Jong Sook; Lee, Byung Doo; Park, Ho Joon

    1988-01-01

    A computer-based nuclear material accountancy system will not only increase the credibility of the KOREA-IAEA safeguards agreement and bilateral agreements but also decrease the man-power needed to carry out inspection activities at the state level and at the facility level. Computer software for nuclear material accounting and control has been developed for application to both item and bulk facilities, and database software at the state level has also been established to maintain the up-to-date status of the nation-wide nuclear material inventory. Computer recording and reporting have been realized to fulfill the national and international commitments to nuclear material accounting and control. The exchange of information related to nuclear material accounting has become possible by PC diskettes. (Author)

  4. Development of DCC software dynamic test facility: past and future

    International Nuclear Information System (INIS)

    McDonald, A.M.; Thai, N.D.; Buijs, W.J.

    1996-01-01

    This paper describes a test facility for future dynamic testing of DCC software used in the control computers of CANDU nuclear power stations. It is a network of three computers: the DCC emulator, the dynamic CANDU plant simulator and the testing computer. Shared network files are used for input/output data exchange between computers. The DCC emulator runs directly on the binary image of the DCC software. The dynamic CANDU plant simulator accepts control signals from the DCC emulator and returns realistic plant behaviour. The testing computer accepts test scripts written in AECL Test Language. Both dynamic and static tests may be performed on the DCC software to verify control program outputs and dynamic responses. (author)
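The shared-file data exchange described above can be sketched in miniature. The Python fragment below is purely illustrative: the file names, the JSON layout, and the stand-in proportional control law are all assumptions, not actual DCC software or AECL Test Language conventions.

```python
import json
import os
import tempfile

# Shared files standing in for the shared network files of the facility
shared_dir = tempfile.mkdtemp()
plant_file = os.path.join(shared_dir, "plant_to_dcc.json")
dcc_file = os.path.join(shared_dir, "dcc_to_plant.json")

def plant_step(pressure):
    """The plant simulator publishes its current sensor values."""
    with open(plant_file, "w") as f:
        json.dump({"pressure": pressure}, f)

def dcc_step(setpoint=10.0, gain=0.5):
    """The emulated control program reads the shared file, applies a
    (stand-in) proportional control law, and publishes its command."""
    with open(plant_file) as f:
        pressure = json.load(f)["pressure"]
    command = gain * (setpoint - pressure)
    with open(dcc_file, "w") as f:
        json.dump({"valve_command": command}, f)
    return command

# One exchange cycle: a testing computer would script many of these and
# compare the recorded commands against expected responses.
plant_step(9.2)
command = dcc_step()  # proportional response to the 0.8 pressure error
```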

  5. All Solid State Optical Pulse Shaper for the OMEGA Laser Fusion Facility

    International Nuclear Information System (INIS)

    Okishev, A.V.; Skeldon, M.D.; Keck, R.L.; Seka, W.

    2000-01-01

    OAK-B135. The authors have developed an all-solid-state, compact, computer-controlled, flexible optical pulse shaper for the OMEGA laser facility. This pulse shaper produces high-bandwidth, temporally shaped laser pulses that meet OMEGA requirements. The design is a significant simplification over existing technology with improved performance capabilities

  6. Planning Tools For Estimating Radiation Exposure At The National Ignition Facility

    International Nuclear Information System (INIS)

    Verbeke, J.; Young, M.; Brereton, S.; Dauffy, L.; Hall, J.; Hansen, L.; Khater, H.; Kim, S.; Pohl, B.; Sitaraman, S.

    2010-01-01

    A set of computational tools was developed to help estimate and minimize potential radiation exposure to workers from material activation in the National Ignition Facility (NIF). AAMI (Automated ALARA-MCNP Interface) provides an efficient, automated mechanism to perform the series of calculations required to create dose rate maps for the entire facility with minimal manual user input. NEET (NIF Exposure Estimation Tool) is a web application that combines the information computed by AAMI with a given shot schedule to compute and display the dose rate maps as a function of time. AAMI and NEET are currently used as work planning tools to determine stay-out times for workers following a given shot or set of shots, and to help in estimating integrated doses associated with performing various maintenance activities inside the target bay. Dose rate maps of the target bay were generated following a low-yield 10^16 D-T shot and will be presented in this paper.
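The core of combining a dose-rate map with a shot schedule is a sum of exponentially decaying activation contributions. The sketch below is a hypothetical illustration: the nuclides, initial dose rates, and half-lives are made-up values (not NIF data), and the simple time-stepping search for a stay-out time is an assumption about how such a planning tool might work.

```python
# Hypothetical per-nuclide dose-rate contributions (uSv/h) at one map
# location just after a shot, paired with half-lives in hours.
contributions = {
    "Na-24": (12.0, 15.0),    # (initial dose rate, half-life)
    "Mn-56": (30.0, 2.58),
    "Co-58": (0.5, 1700.0),
}

def dose_rate(t_hours):
    """Total dose rate t hours after the shot: each nuclide's
    contribution decays exponentially with its own half-life."""
    return sum(d0 * 2.0 ** (-t_hours / t_half)
               for d0, t_half in contributions.values())

def stay_out_time(limit_usv_per_h, step_hours=0.1):
    """First time at which the total dose rate drops below the limit."""
    t = 0.0
    while dose_rate(t) > limit_usv_per_h:
        t += step_hours
    return t
```

A planning tool would evaluate this at every map location and report the longest stay-out time along a worker's planned route.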

  7. Advanced accounting techniques in automated fuel fabrication facilities

    International Nuclear Information System (INIS)

    Carlson, R.L.; DeMerschman, A.W.; Engel, D.W.

    1977-01-01

    The accountability system being designed for automated fuel fabrication facilities will provide real-time information on all Special Nuclear Material (SNM) located in the facility. It will utilize a distributed network of microprocessors and minicomputers to monitor material movement and obtain nuclear materials measurements directly from remote, in-line Nondestructive Assay instrumentation. As SNM crosses an accounting boundary, the accountability computer will update the master files and generate audit trail records. Mass balance accounting techniques will be used around each unit process step, while item control will be used to account for encapsulated material, and SNM in transit
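Mass-balance accounting around a unit process step reduces to the standard materials-balance identity, commonly called MUF (material unaccounted for). The sketch below uses illustrative figures; the fixed alarm threshold is a simplified stand-in for the statistical tests a real accountability computer would apply against measurement uncertainty.

```python
def material_unaccounted_for(beginning_inventory, receipts,
                             removals, ending_inventory):
    """Mass-balance closure around one unit process step:
    MUF = (beginning inventory + receipts) - (removals + ending inventory).
    A MUF larger than measurement uncertainty allows flags an anomaly."""
    return (beginning_inventory + receipts) - (removals + ending_inventory)

# Illustrative figures (kg SNM) for one balance period -- not real data
muf = material_unaccounted_for(
    beginning_inventory=10.00, receipts=2.50,
    removals=2.45, ending_inventory=10.02)

ALARM_THRESHOLD = 0.05  # stand-in for an uncertainty-derived limit
alarm = abs(muf) > ALARM_THRESHOLD
```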

  8. CAMAC based computer--computer communications via microprocessor data links

    International Nuclear Information System (INIS)

    Potter, J.M.; Machen, D.R.; Naivar, F.J.; Elkins, E.P.; Simmonds, D.D.

    1976-01-01

    Communication between the central control computer and remote, satellite data acquisition/control stations at the Clinton P. Anderson Meson Physics Facility (LAMPF) is presently accomplished through the use of CAMAC-based Data Link Modules. With the advent of the microprocessor, a new philosophy for digital data communications has evolved. Data Link Modules containing microprocessor controllers provide link management and communication network protocol through algorithms executed in the Data Link microprocessor

  9. Scaling analysis for the OSU AP600 test facility (APEX)

    International Nuclear Information System (INIS)

    Reyes, J.N.

    1998-01-01

    In this paper, the authors summarize the key aspects of a state-of-the-art scaling analysis (Reyes et al. (1995)) performed to establish the facility design and test conditions for the advanced plant experiment (APEX) at Oregon State University (OSU). This scaling analysis represents the first, and most comprehensive, application of the hierarchical two-tiered scaling (H2TS) methodology (Zuber (1991)) in the design of an integral system test facility. The APEX test facility, designed and constructed on the basis of this scaling analysis, is the most accurate geometric representation of a Westinghouse AP600 nuclear steam supply system. The OSU APEX test facility has served to develop an essential component of the integral system database used to assess the AP600 thermal hydraulic safety analysis computer codes. (orig.)

  10. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  12. Identification and categorisation of critical digital assets of I and C systems at nuclear facilities: implementation guide - TAFICS/IG/1

    International Nuclear Information System (INIS)

    2015-06-01

    This document is the first in a series of documents being developed by TAFICS for protecting computer-based I and C systems of Indian nuclear facilities from cyber attacks. This document identifies the Indian nuclear facilities and the types of computer systems within facilities - called Critical Digital Assets (CDA) - that are to be covered by the security program. It also describes the process for identification and categorisation of CDA. The document covers operational facilities - such as reactors - as well as development facilities - such as I and C design organisations. The CDA identification and categorisation would help to implement a robust security program in a graded manner - as stipulated by international standards such as those of the IAEA. It is recommended that all applicable Indian nuclear facilities implement the process described in this document to generate a list of CDAs for the respective facility. (author)

  13. Monte Carlo simulations and dosimetric studies of an irradiation facility

    Energy Technology Data Exchange (ETDEWEB)

    Belchior, A. [Instituto Tecnologico e Nuclear, Estrada nacional no. 10, Apartado 21, 2686-953 Sacavem (Portugal)], E-mail: anabelchior@itn.pt; Botelho, M.L; Vaz, P. [Instituto Tecnologico e Nuclear, Estrada nacional no. 10, Apartado 21, 2686-953 Sacavem (Portugal)

    2007-09-21

    There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool, MCNPX, in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
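For reference, Fricke dosimetry converts a measured change in optical absorbance into absorbed dose via the Beer-Lambert law and the radiation chemical yield G(Fe3+). The constants below are commonly tabulated textbook values (molar extinction ~2196 L/(mol cm) at 304 nm, G ~1.61e-6 mol/J, solution density ~1.024 kg/L); they are illustrative defaults, not values taken from this study.

```python
def fricke_dose_gy(delta_absorbance, epsilon=2196.0, path_cm=1.0,
                   density_kg_per_l=1.024, g_value_mol_per_j=1.61e-6):
    """Absorbed dose (Gy) from a Fricke dosimeter reading.

    delta_absorbance  : change in optical absorbance at 304 nm
    epsilon           : molar extinction coefficient of Fe3+ (L mol^-1 cm^-1)
    path_cm           : optical path length of the cuvette (cm)
    density_kg_per_l  : density of the Fricke solution
    g_value_mol_per_j : radiation chemical yield G(Fe3+)

    Beer-Lambert gives the Fe3+ concentration; dividing by density and
    G-value converts it to absorbed dose in gray (J/kg).
    """
    conc_mol_per_l = delta_absorbance / (epsilon * path_cm)
    return conc_mol_per_l / (density_kg_per_l * g_value_mol_per_j)
```

With these defaults a full absorbance unit corresponds to roughly 280 Gy, which is why Fricke dosimeters suit the dose range of industrial irradiators.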

  14. INFN Tier-1 Testbed Facility

    International Nuclear Information System (INIS)

    Gregori, Daniele; Cavalli, Alessandro; Dell'Agnello, Luca; Dal Pra, Stefano; Prosperini, Andrea; Ricci, Pierpaolo; Ronchieri, Elisabetta; Sapunenko, Vladimir

    2012-01-01

    INFN-CNAF, located in Bologna, is the Information Technology Center of the National Institute of Nuclear Physics (INFN). In the framework of the Worldwide LHC Computing Grid, INFN-CNAF is one of the eleven worldwide Tier-1 centers that store and reprocess Large Hadron Collider (LHC) data. The Italian Tier-1 provides the storage resources (i.e., disk space for short-term needs and tapes for long-term needs) and computing power that are needed for data processing and analysis by the LHC scientific community. Furthermore, the INFN Tier-1 houses computing resources for other particle physics experiments, like CDF at Fermilab and SuperB at Frascati, as well as for astroparticle and space physics experiments. The computing center is a very complex infrastructure: the hardware layer includes the network, storage and farming areas, while the software layer includes open source and proprietary software. Software updates and new hardware additions can unexpectedly degrade the production activity of the center; therefore a testbed facility has been set up in order to reproduce and certify the various layers of the Tier-1. In this article we describe the testbed and the checks performed.

  15. Analysis of Student Satisfaction Toward Quality of Service Facility

    Science.gov (United States)

    Napitupulu, D.; Rahim, R.; Abdullah, D.; Setiawan, MI; Abdillah, LA; Ahmar, AS; Simarmata, J.; Hidayat, R.; Nurdiyanto, H.; Pranolo, A.

    2018-01-01

    The rapid development of higher education has given rise to tight competition between public universities and private colleges. XYZ University realized that, to win this competition, continuous quality improvement is required, including in the quality of its existing service facilities. The quality of service facilities is believed to support the success of learning activities and improve user satisfaction. This study aims to determine the extent to which the quality of service facilities affects user satisfaction. The research method used is a survey-based questionnaire that measures perception and expectation. The results showed a gap between the perceptions and expectations of the respondents, with a negative value for each item. This means the service facilities at XYZ University do not currently meet the expectations of the academic community. The three service facilities with the lowest perception-based indices are the laboratory (2.56), computer and multimedia (2.63), and the wifi network (2.99). The correlation between satisfaction and the quality of service facilities is 0.725, indicating a strong and positive relationship. The quality of service facilities explains 52.5% of the variance in satisfaction. The study provides recommendations for improving the quality of service facilities at XYZ University.
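Two of the reported figures can be reproduced with elementary statistics: the gap scores are perception minus expectation, and the "explains 52.5%" figure is simply the squared correlation coefficient. In the sketch below, only the perception indices come from the abstract; the expectation scores are assumed purely to illustrate the gap computation.

```python
# Perception indices from the abstract; expectation scores are assumed.
perceptions = {"laboratory": 2.56, "computer_multimedia": 2.63, "wifi": 2.99}
expectations = {"laboratory": 4.1, "computer_multimedia": 4.2, "wifi": 4.3}

# Service-quality gap per item: perception minus expectation.
# Negative gaps mean the facility falls short of expectations.
gaps = {k: round(perceptions[k] - expectations[k], 2) for k in perceptions}

# The reported explained variance is the coefficient of determination,
# i.e. the squared correlation coefficient.
r = 0.725
r_squared = r ** 2   # 0.525625, i.e. ~52.5% of the satisfaction variance
```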

  16. International physical protection self-assessment tool for chemical facilities.

    Energy Technology Data Exchange (ETDEWEB)

    Tewell, Craig R.; Burdick, Brent A.; Stiles, Linda L.; Lindgren, Eric Richard

    2010-09-01

    This report is the final report for Laboratory Directed Research and Development (LDRD) Project No.130746, International Physical Protection Self-Assessment Tool for Chemical Facilities. The goal of the project was to develop an exportable, low-cost, computer-based risk assessment tool for small to medium size chemical facilities. The tool would assist facilities in improving their physical protection posture, while protecting their proprietary information. In FY2009, the project team proposed a comprehensive evaluation of safety and security regulations in the target geographical area, Southeast Asia. This approach was later modified and the team worked instead on developing a methodology for identifying potential targets at chemical facilities. Milestones proposed for FY2010 included characterizing the international/regional regulatory framework, finalizing the target identification and consequence analysis methodology, and developing, reviewing, and piloting the software tool. The project team accomplished the initial goal of developing potential target categories for chemical facilities; however, the additional milestones proposed for FY2010 were not pursued and the LDRD funding therefore was redirected.

  17. Technical Cybersecurity Controls for Nuclear Facilities

    International Nuclear Information System (INIS)

    Oh, Jinseok; Ryou, Jaecheol; Kim, Youngmi; Jeong, Choonghei

    2014-01-01

    To strengthen cybersecurity at nuclear facilities, many countries take a regulatory approach. For example, the US Government issued several regulations: Title 10 of the Code of Federal Regulations, Section 73.54, 'Protection of Digital Computer and Communication Systems and Networks' (10 CFR 73.54), for cybersecurity requirements, and Regulatory Guide 5.71 (RG 5.71) for cybersecurity guidance. In the case of Korea, the Korean Government issued '8.22 Cybersecurity of I and C Systems' (KINS/RG-NO8.22). In particular, RG 5.71 provides a list of security controls to address the potential cyber risks to nuclear facilities. By implementing and adopting security controls, we can improve the level of cybersecurity at nuclear facilities. RG 5.71 follows the recommendations of NIST SP 800-53, a standard that provides security controls for IT systems; NRC staff tailored the controls in the NIST standard to the unique environments of nuclear facilities. In this paper, we analyze and compare NRC RG 5.71 and NIST SP 800-53, focusing on technical security controls. Where RG 5.71 omits a security control that is included in SP 800-53, we review whether the omission is adequate; where RG 5.71 includes a control that is not in SP 800-53, we review the rationale. We also propose some security controls to strengthen the cybersecurity of nuclear facilities. Our comparison and analysis of the two regulations' technical security controls shows that RG 5.71, which is based on the NIST standard, provides well-understood security controls for nuclear facilities, but that some omissions from the NIST standard can weaken the security posture of a nuclear facility

  18. Technical Cybersecurity Controls for Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jinseok; Ryou, Jaecheol [Chungnam National Univ., Daejeon (Korea, Republic of); Kim, Youngmi; Jeong, Choonghei [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2014-05-15

    To strengthen cybersecurity at nuclear facilities, many countries take a regulatory approach. For example, the US Government issued several regulations: Title 10 of the Code of Federal Regulations, Section 73.54, 'Protection of Digital Computer and Communication Systems and Networks' (10 CFR 73.54), for cybersecurity requirements, and Regulatory Guide 5.71 (RG 5.71) for cybersecurity guidance. In the case of Korea, the Korean Government issued '8.22 Cybersecurity of I and C Systems' (KINS/RG-NO8.22). In particular, RG 5.71 provides a list of security controls to address the potential cyber risks to nuclear facilities. By implementing and adopting security controls, we can improve the level of cybersecurity at nuclear facilities. RG 5.71 follows the recommendations of NIST SP 800-53, a standard that provides security controls for IT systems; NRC staff tailored the controls in the NIST standard to the unique environments of nuclear facilities. In this paper, we analyze and compare NRC RG 5.71 and NIST SP 800-53, focusing on technical security controls. Where RG 5.71 omits a security control that is included in SP 800-53, we review whether the omission is adequate; where RG 5.71 includes a control that is not in SP 800-53, we review the rationale. We also propose some security controls to strengthen the cybersecurity of nuclear facilities. Our comparison and analysis of the two regulations' technical security controls shows that RG 5.71, which is based on the NIST standard, provides well-understood security controls for nuclear facilities, but that some omissions from the NIST standard can weaken the security posture of a nuclear facility.

  19. Congestion Service Facilities Location Problem with Promise of Response Time

    Directory of Open Access Journals (Sweden)

    Dandan Hu

    2013-01-01

    In many services, a promise of a specific response time is advertised as a commitment by service providers for customer satisfaction. Congestion at service facilities can delay the delivery of services and hurt overall satisfaction. In this paper, the congestion service facilities location problem with a promise of response time is studied, and a mixed integer nonlinear programming model with a budget constraint is presented. The facilities are modeled as M/M/c queues. The decision variables of the model are the locations of the service facilities and the number of servers at each facility. The objective function is to maximize the demand served within the specific response time promised by the service provider. To solve this problem, we propose an algorithm that combines greedy and genetic algorithms. To verify the proposed algorithm, extensive computational experiments were performed, and the results demonstrate that response time has a significant impact on location decisions.
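For an M/M/c facility serving first-come-first-served demand, the fraction of customers served within a promised response time tau has the standard closed form P(W <= tau) = 1 - C(c, a) * exp(-(c*mu - lambda)*tau), where C(c, a) is the Erlang-C waiting probability and a = lambda/mu is the offered load. A minimal sketch (the traffic figures are illustrative, not taken from the paper):

```python
import math

def erlang_c(c, a):
    """Erlang-C probability that an arriving customer must wait in an
    M/M/c queue with offered load a = lambda/mu (requires a < c)."""
    base = sum(a ** k / math.factorial(k) for k in range(c))
    tail = a ** c / math.factorial(c) * c / (c - a)
    return tail / (base + tail)

def served_within(tau, lam, mu, c):
    """Fraction of demand served within response time tau:
    P(W <= tau) = 1 - ErlangC(c, a) * exp(-(c*mu - lam) * tau)."""
    a = lam / mu
    if a >= c:
        return 0.0  # unstable queue: the promise cannot be met at all
    return 1.0 - erlang_c(c, a) * math.exp(-(c * mu - lam) * tau)

# Illustrative figures: 8 arrivals/hour, service rate 3/hour per server,
# 4 servers, response promised within 15 minutes (0.25 h).
level = served_within(0.25, lam=8.0, mu=3.0, c=4)
```

A location model of the kind described would evaluate `served_within` for each candidate site and server count, keeping the configuration that maximizes demand covered within the promise, subject to the budget.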

  20. 33 CFR 105.305 - Facility Security Assessment (FSA) requirements.

    Science.gov (United States)

    2010-07-01

    ... evacuation routes and assembly stations; and (viii) Existing security and safety equipment for protection of... protection systems; (iv) Procedural policies; (v) Radio and telecommunication systems, including computer... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Facility Security Assessment (FSA...

  1. Fire simulation in nuclear facilities: the FIRAC code and supporting experiments

    International Nuclear Information System (INIS)

    Burkett, M.W.; Martin, R.A.; Fenton, D.L.; Gunaji, M.V.

    1984-01-01

    The fire accident analysis computer code FIRAC was designed to estimate radioactive and nonradioactive source terms and predict fire-induced flows and thermal and material transport within the ventilation systems of nuclear fuel cycle facilities. FIRAC maintains its basic structure and features and has been expanded and modified to include the capabilities of the zone-type compartment fire model computer code FIRIN developed by Battelle Pacific Northwest Laboratory. The two codes have been coupled to provide an improved simulation of a fire-induced transient within a facility. The basic material transport capability of FIRAC has been retained and includes estimates of entrainment, convection, deposition, and filtration of material. The interrelated effects of filter plugging, heat transfer, gas dynamics, material transport, and fire and radioactive source terms also can be simulated. Also, a sample calculation has been performed to illustrate some of the capabilities of the code and how a typical facility is modeled with FIRAC. In addition to the analytical work being performed at Los Alamos, experiments are being conducted at the New Mexico State University to support the FIRAC computer code development and verification. This paper summarizes two areas of the experimental work that support the material transport capabilities of the code: the plugging of high-efficiency particulate air (HEPA) filters by combustion aerosols and the transport and deposition of smoke in ventilation system ductwork

  2. Fire simulation in nuclear facilities--the FIRAC code and supporting experiments

    International Nuclear Information System (INIS)

    Burkett, M.W.; Martin, R.A.; Fenton, D.L.; Gunaji, M.V.

    1985-01-01

    The fire accident analysis computer code FIRAC was designed to estimate radioactive and nonradioactive source terms and predict fire-induced flows and thermal and material transport within the ventilation systems of nuclear fuel cycle facilities. FIRAC maintains its basic structure and features and has been expanded and modified to include the capabilities of the zone-type compartment fire model computer code FIRIN developed by Battelle Pacific Northwest Laboratory. The two codes have been coupled to provide an improved simulation of a fire-induced transient within a facility. The basic material transport capability of FIRAC has been retained and includes estimates of entrainment, convection, deposition, and filtration of material. The interrelated effects of filter plugging, heat transfer, gas dynamics, material transport, and fire and radioactive source terms also can be simulated. Also, a sample calculation has been performed to illustrate some of the capabilities of the code and how a typical facility is modeled with FIRAC. In addition to the analytical work being performed at Los Alamos, experiments are being conducted at the New Mexico State University to support the FIRAC computer code development and verification. This paper summarizes two areas of the experimental work that support the material transport capabilities of the code: the plugging of high-efficiency particulate air (HEPA) filters by combustion aerosols and the transport and deposition of smoke in ventilation system ductwork

  3. A personal computer code for seismic evaluations of nuclear power plants facilities

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulos, A.J.; Graves, H.

    1990-01-01

    The program CARES (Computer Analysis for Rapid Evaluation of Structures) is an integrated computational system being developed by Brookhaven National Laboratory (BNL) for the U.S. Nuclear Regulatory Commission. It is specifically designed to be a personal computer (PC) operated package which may be used to determine the validity and accuracy of analysis methodologies used for structural safety evaluations of nuclear power plants. CARES is structured in a modular format. Each module performs a specific type of analysis i.e., static or dynamic, linear or nonlinear, etc. This paper describes the various features which have been implemented into the Seismic Module of CARES

  4. State-of-the-art technology for an extended computing centre

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    On 7 May, CERN’s Director-General, Rolf Heuer, the Director for Research and Computing, Sergio Bertolucci, the EN Department Head, Roberto Saban, and several guests joined the IT Department Head, Frédéric Hemmer, for the inauguration of the new facilities at the CERN Computing Centre.   One of the new ventilation units and a big duct, installed as part of the Computing Centre consolidation project. After nearly two years of work, the IT Department now boasts a new computer room, equipped with its own cooling system to house the Computing Centre’s critical IT systems, which can, from now on, be decoupled from the other systems in the building. New electrical facilities have been added too, boosting the Centre’s computing power from 2.9 to 3.5 MW. Finally, an additional 40 cubic-metre water tank has been installed to allow continued cooling of the IT systems in the event of a major incident. But the star attraction of the extension project has ...

  5. Computational modelling in fluid mechanics

    International Nuclear Information System (INIS)

    Hauguel, A.

    1985-01-01

    The modelling of most environmental or industrial flow problems gives very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented is now a tool complementary to experimental facilities for carrying out studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others [fr

  6. SAFE users manual. Volume 4. Computer programs

    International Nuclear Information System (INIS)

    Grady, L.M.

    1983-06-01

    Documentation for the Safeguards Automated Facility Evaluation (SAFE) computer programs is presented. The documentation is in the form of subprogram trees, program abstracts, flowcharts, and listings. Listings are provided on microfiche

  7. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution has been developed in Visual Basic according to the arrangement and activities of Co-60 sources. The program provides the dose distribution in treated products depending on the product density and desired dose, and is useful for optimizing source distribution during the loading process. There is good agreement between the program's calculated data and experimental data. (Author)
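The simplest version of such a dose-distribution calculation is a point-kernel sum: each source contributes Gamma * A / r^2 at the evaluation point, ignoring attenuation, scatter, and self-shielding. The sketch below assumes a commonly tabulated Co-60 gamma dose-rate constant and uses made-up source positions and activities; it is an illustration of the technique, not the program described in the abstract.

```python
# Gamma dose-rate constant for Co-60, ~0.351 uSv/h per MBq at 1 m --
# a commonly tabulated value.  Attenuation and scatter are ignored.
GAMMA_CO60 = 0.351

def dose_rate_at(point, sources):
    """Point-kernel estimate: sum Gamma * A / r^2 over all sources.
    `sources` is a list of (x, y, z, activity_MBq) tuples; metres."""
    total = 0.0
    for sx, sy, sz, activity in sources:
        r2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2 + (point[2] - sz) ** 2
        total += GAMMA_CO60 * activity / r2
    return total  # uSv/h

# Two illustrative 100-MBq sources flanking a product box at the origin
sources = [(-0.5, 0.0, 0.0, 100.0), (0.5, 0.0, 0.0, 100.0)]
centre = dose_rate_at((0.0, 0.0, 0.0), sources)
```

Evaluating this over a grid of points in the product volume gives the kind of dose map a loading-optimization program iterates on.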

  8. The DIII-D Computing Environment: Characteristics and Recent Changes

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1999-01-01

    The DIII-D tokamak national fusion research facility along with its predecessor Doublet III has been operating for over 21 years. The DIII-D computing environment consists of real-time systems controlling the tokamak, heating systems, and diagnostics, and systems acquiring experimental data from instrumentation; major data analysis server nodes performing short term and long term data access and data analysis; and systems providing mechanisms for remote collaboration and the dissemination of information over the world wide web. Computer systems for the facility have undergone incredible changes over the course of time as the computer industry has changed dramatically. Yet there are certain valuable characteristics of the DIII-D computing environment that have been developed over time and have been maintained to this day. Some of these characteristics include: continuous computer infrastructure improvements, distributed data and data access, computing platform integration, and remote collaborations. These characteristics are being carried forward as well as new characteristics resulting from recent changes which have included: a dedicated storage system and a hierarchical storage management system for raw shot data, various further infrastructure improvements including deployment of Fast Ethernet, the introduction of MDSplus, LSF and common IDL based tools, and improvements to remote collaboration capabilities. This paper will describe this computing environment, important characteristics that over the years have contributed to the success of DIII-D computing systems, and recent changes to computer systems

  9. Performance indicator program for U.S. Department of Energy reactors and facilities

    International Nuclear Information System (INIS)

    Sastry, R.; Fielding, J.R.; Snyder, B.J.; Usher, J.; Boccio, J.

    1990-01-01

    The U.S. Department of Energy (DOE) is developing a Performance Indicator (PI) Program for all facilities. The objective is to periodically collect, statistically analyze, and present performance-related information in a concise and consistent format for DOE oversight of the safety of facility operations. A set of 14 DOE-Hq.-defined PIs has been established after review of programs used by other organizations. Since July 1989, these PIs have been used in a trial program for eight diverse DOE facilities. Electronic reporting is made directly to the DOE Safety Performance Measurement System computer. This paper reports on results demonstrating the feasibility and usefulness of a DOE-wide PI Program and the steps being taken to include all DOE facilities.

  10. Draft of diagnostic techniques for primary coolant circuit facilities using control computer

    International Nuclear Information System (INIS)

    Suchy, R.; Procka, V.; Murin, V.; Rybarova, D.

    A method is proposed for in-service on-line diagnostics of selected primary circuit components by means of a control computer. Computer processing will involve measurements of neutron flux, pressure differences across the pumps and the core, and vibrations of primary circuit mechanical parts. (H.S.)

  11. Cloud Computing Databases: Latest Trends and Architectural Concepts

    OpenAIRE

    Tarandeep Singh; Parvinder S. Sandhu

    2011-01-01

    Economic factors are leading to the rise of infrastructures that provide software and computing facilities as a service, known as cloud services or cloud computing. Cloud services can provide efficiencies for application providers, both by limiting up-front capital expenses and by reducing the cost of ownership over time. Such services are made available in a data center, using shared commodity hardware for computation and storage. There is a varied set of cloud services...

  12. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, the Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  13. Annual report to the Laser Facility Committee 1979

    International Nuclear Information System (INIS)

    1979-03-01

    The report covers the work done at the Central Laser Facility, Rutherford Laboratory, during the year preceding 31 March 1979. Preliminary work already undertaken on the upgrade of the glass laser and target areas, consisting of the relocation of the two-beam target chamber and tests on phosphate glass, is described, along with the completion of the electron beam generator for use by researchers on high-power gas laser systems. The work of the groups using the glass laser facility is considered under the headings: glass laser development, gas laser development, laser plasma interactions, transport and particle emission, ablative compression studies, atomic and radiation physics, XUV lasers, and theory and computation. (U.K.)

  14. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy need a more accurate computation of the dose delivered by the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a complete MC simulation to compute dose distributions on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm) both in PDD (percentage depth dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.
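The ±2% agreement check described in this record can be sketched as a small comparison routine; the function names and dose values below are illustrative, not taken from the paper:

```python
# Hypothetical sketch: normalize depth-dose curves to percentage depth dose
# (PDD) and check pointwise agreement between measured and simulated values.
def to_pdd(doses):
    """Normalize absorbed doses to percent of the maximum dose (PDD)."""
    d_max = max(doses)
    return [100.0 * d / d_max for d in doses]

def within_tolerance(measured, simulated, tol=2.0):
    """True if every simulated PDD point is within ±tol percentage points."""
    return all(abs(m - s) <= tol
               for m, s in zip(to_pdd(measured), to_pdd(simulated)))

measured  = [85.2, 100.0, 96.4, 87.1, 78.3]   # illustrative dose readings
simulated = [84.9, 100.0, 95.8, 87.6, 77.9]
print(within_tolerance(measured, simulated))  # -> True
```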

  15. User perspectives on computer applications

    International Nuclear Information System (INIS)

    Trammell, H.E.

    1979-04-01

    Experiences of a technical group that uses the services of computer centers are recounted. An orientation on the ORNL Engineering Technology Division and its missions is given to provide background on the diversified efforts undertaken by the Division and its opportunities to benefit from computer technology. Specific ways in which computers are used within the Division are described; these include facility control, data acquisition, data analysis, theory applications, code development, information processing, cost control, management of purchase requisitions, maintenance of personnel information, and control of technical publications. Problem areas found to need improvement are the overloading of computers during normal working hours, lack of code transportability, delay in obtaining routine programming, delay in key punching services, bewilderment in the use of large computer centers, complexity of job control language, and uncertain quality of software. 20 figures

  16. Cloud@Home: A New Enhanced Computing Paradigm

    Science.gov (United States)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

    Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)), and Green computing (a new frontier of ethical computing starting from the assumption that in the near future energy costs will be related to environmental pollution).

  17. Development of a spent fuel management technology research and test facility

    Energy Technology Data Exchange (ETDEWEB)

    Park, S. W.; Noh, S. K.; Lee, J. S.; and others

    1997-12-01

    This study was intended to develop a concept for a pilot-scale remote operation facility for longer-term management of spent fuel and thereby to provide technical requirements for the later basic design of the facility. The main scope of work was to revise the earlier (1990) conceptual design in functions, scale, hot cell layout, etc., based on user requirements. Technical reference was made to the PKA facility in Germany, through collaboration with an appropriate partner, to elaborate the design and requirements. A simulator of the conceptual design was also developed using virtual reality techniques, with 3-D computer graphics of the equipment and building. (author). 18 tabs., 39 figs

  18. Emission Facilities - Erosion & Sediment Control Facilities

    Data.gov (United States)

    NSGIC Education | GIS Inventory — An Erosion and Sediment Control Facility is a DEP primary facility type related to the Water Pollution Control program. The following sub-facility types related to...

  19. Current personnel dosimetry practices at DOE facilities

    International Nuclear Information System (INIS)

    Fix, J.J.

    1981-05-01

    Only three parameters were included in the personnel occupational exposure records by all facilities. These are employee name, social security number, and whole body dose. Approximate percentages of some other parameters included in the record systems are sex (50%), birthdate (90%), occupation (26%), previous employer radiation exposure (74%), etc. Statistical analysis of the data for such parameters as sex versus dose distribution, age versus dose distribution, cumulative lifetime dose, etc. was apparently seldom done. Less than 50% of the facilities reported having formal documentation for either the dosimeter, records system, or reader. Slightly greater than 50% of facilities reported having routine procedures in place. These are considered maximum percentages because some respondents considered computer codes as formal documentation. The repository receives data from DOE facilities regarding the (a) distribution of annual whole body doses, (b) significant internal depositions, and (c) individual doses upon termination. It is expected that numerous differences exist in the dose data submitted by the different facilities. Areas of significant differences would likely include the determination of non-measurable doses, the methods used to determine previous employer radiation dose, the methods of determining cumulative radiation dose, and assessment of internal doses. Undoubtedly, the accuracy of the different dosimetry systems, especially at low doses, is very important to the credibility of data summaries (e.g., man-rem) provided by the repository

  20. Applicability of base-isolation R and D in non-reactor facilities to a nuclear reactor plant

    International Nuclear Information System (INIS)

    Seidensticker, R.W.

    1989-01-01

    Seismic isolation is gaining increased attention worldwide for use in a wide spectrum of critical facilities, ranging from hospitals and computing centers to nuclear power plants. While the fundamental principles and technology are applicable to all of these facilities, the degree of assurance that the actual behavior of the isolation systems is as specified varies with the nature of the facility involved. Obviously, the level of effort to provide such assurance for a nuclear power plant will be much greater than that required for, say, a critical computer facility. This paper reviews the research and development (R and D) programs ongoing for seismic isolation in non-nuclear facilities and related experience and makes a preliminary assessment of the extent to which such R and D and experience can be used for nuclear power plant application. Ways are suggested to improve the usefulness of such non-nuclear R and D in providing the high level of confidence required for the use of seismic isolation in a nuclear reactor plant

  1. Major Cyber threat on Nuclear Facility and Key Entry Points of Malicious Codes

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Ickhyun; Kwon, Kookheui [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2013-05-15

    A cyber security incident explicitly showed that a domestic intranet system not connected to the Internet can be compromised by USB-borne malware developed by a state-sponsored group. It also shows that the actors behind cyber attacks have changed from script kiddies to state governments, and the targets have shifted to nations' main infrastructures such as electricity, transportation, etc. Since cyber sabotage of a nuclear facility has been proven possible and can be replicated with the same method, cyber security at nuclear facilities must be strengthened. This paper explains why malicious code is one of the biggest cyber threats to a nuclear facility's digital I and C (Instrumentation and Control) systems, by analyzing recent cyber attacks and well-known malicious codes. A feasible cyber attack scenario on a nuclear facility's digital I and C system is suggested, along with security measures for the prevention of malicious code. As experienced in the cyber sabotage of an Iranian nuclear facility in 2010, a cyber attack on a nuclear facility can be replicated by infecting the computer network with malicious code. One cyber attack scenario on a nuclear digital I and C computer network using malicious code is suggested to help security managers establish cyber security plans for the prevention of malicious code. Some security measures for the prevention of malicious code are also provided for reference.

  2. Major Cyber threat on Nuclear Facility and Key Entry Points of Malicious Codes

    International Nuclear Information System (INIS)

    Shin, Ickhyun; Kwon, Kookheui

    2013-01-01

    A cyber security incident explicitly showed that a domestic intranet system not connected to the Internet can be compromised by USB-borne malware developed by a state-sponsored group. It also shows that the actors behind cyber attacks have changed from script kiddies to state governments, and the targets have shifted to nations' main infrastructures such as electricity, transportation, etc. Since cyber sabotage of a nuclear facility has been proven possible and can be replicated with the same method, cyber security at nuclear facilities must be strengthened. This paper explains why malicious code is one of the biggest cyber threats to a nuclear facility's digital I and C (Instrumentation and Control) systems, by analyzing recent cyber attacks and well-known malicious codes. A feasible cyber attack scenario on a nuclear facility's digital I and C system is suggested, along with security measures for the prevention of malicious code. As experienced in the cyber sabotage of an Iranian nuclear facility in 2010, a cyber attack on a nuclear facility can be replicated by infecting the computer network with malicious code. One cyber attack scenario on a nuclear digital I and C computer network using malicious code is suggested to help security managers establish cyber security plans for the prevention of malicious code. Some security measures for the prevention of malicious code are also provided for reference

  3. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    Science.gov (United States)

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.
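A toy version of the approach this patent record describes — repeatedly executing a facility model to estimate how often a simulated adversary reaches a target — might look like the following sketch. The areas, pathways, and traversal probabilities are invented for illustration and are not the patented method:

```python
import random

# Invented illustrative model: areas joined by pathways, each with a
# probability that an adversary traverses it; the model is executed many
# times to estimate how often the target is reached.
PATHWAYS = {                       # area -> [(next_area, p_traverse)]
    "perimeter":   [("server_room", 0.3), ("office_lan", 0.6)],
    "office_lan":  [("server_room", 0.4)],
    "server_room": [],             # contains the target
}

def attack_succeeds(start="perimeter", target="server_room", rng=random):
    area = start
    for _ in range(10):            # bound the random walk
        if area == target:
            return True
        open_paths = [a for a, p in PATHWAYS[area] if rng.random() < p]
        if not open_paths:
            return False
        area = open_paths[0]       # take the first pathway that opened
    return False

def risk_estimate(trials=10000, seed=1):
    """Fraction of simulated attacks that reach the target."""
    rng = random.Random(seed)
    return sum(attack_succeeds(rng=rng) for _ in range(trials)) / trials

print(risk_estimate())
```

Seeding the generator makes a given risk estimate reproducible, which matters when comparing security configurations.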

  4. Tools for remote collaboration on the DIII-D national fusion facility

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.; Greenwood, D.

    1999-01-01

    The DIII-D national fusion facility, a tokamak experiment funded by the US Department of Energy and operated by General Atomics (GA), is an international resource for plasma physics and fusion energy science research. This facility has a long history of collaborations with scientists from a wide variety of laboratories and universities around the world. That collaboration has mostly been conducted by traveling to and participating at the DIII-D site. Many new developments in computing and technology are now facilitating collaboration from remote sites, reducing some of the need to travel to the experiment. These developments include higher-speed wide area networks, powerful workstations connected within a distributed computing environment, network-based audio/video capabilities, and the use of the World Wide Web. As the number of collaborators increases, remote tools become important options for efficiently utilizing the DIII-D facility. In the last two years, a joint study by GA, Princeton Plasma Physics Laboratory (PPPL), Lawrence Livermore National Laboratory (LLNL), and Oak Ridge National Laboratory (ORNL) has introduced remote collaboration tools into the DIII-D environment and studied their effectiveness. These tools have included the use of audio/video for communication from the DIII-D control room, the broadcast of meetings, inter-process communication software to post events to the network during a tokamak shot, the creation of a DCE (distributed computing environment) cell for a common collaboratory environment, distributed use of computer cycles, remote data access, and remote display of results. The study also included sociological studies of how scientists in this environment work together as well as apart. (orig.)

  5. Radiation management computer system for Monju

    International Nuclear Information System (INIS)

    Aoyama, Kei; Yasutomo, Katsumi; Sudou, Takayuki; Yamashita, Masahiro; Hayata, Kenichi; Ueda, Hajime; Hosokawa, Hideo

    2002-01-01

    Radiation management at nuclear power research institutes, nuclear power stations, and other such facilities is strictly controlled under Japanese laws and management policies. Recently, the important issues of more accurate radiation dose management and increased work efficiency have been discussed. Up to now, Fuji Electric Company has supplied a large number of radiation management systems to nuclear power stations and related nuclear facilities. We introduce a new radiation management computer system adopting WWW techniques for the Japan Nuclear Cycle Development Institute's MONJU fast breeder reactor (MONJU). (author)

  6. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Wide application of Internet of Things (IoT) systems has been demanding ever more hardware facilities for processing various resources, including data, information, and knowledge. With the rapid growth in the quantity of generated resources, it is difficult to adapt to this situation using traditional cloud computing models. Fog computing extends cloud computing by enabling storage and computing services to be performed at the edge of the network. However, Fog computing applications face problems such as restricted computation, limited storage, and expensive network bandwidth, and it is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism for typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of a Data Graph, an Information Graph, and a Knowledge Graph. The proposed mechanism aims to minimize the cost of processing over the network, computation, and storage while maximizing processing performance in a business-value-driven manner. Simulation results show that the proposed approach improves the ratio of performance to user investment. Meanwhile, conversions between resource types support dynamic allocation of network resources.
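The edge-versus-cloud trade-off described in this record can be illustrated with a minimal cost comparison per resource type. The cost figures and the simple additive cost model below are assumptions for illustration, not the paper's actual optimization mechanism:

```python
# Illustrative placement of typed resources: process at the edge or in the
# cloud, whichever minimizes total cost = computation + storage + transfer.
COSTS = {
    # type: (edge_compute, edge_store, cloud_compute, cloud_store, transfer)
    "data":        (4.0, 1.0, 1.0, 0.5, 5.0),   # bulky: transfer dominates
    "information": (2.0, 0.8, 0.8, 0.4, 1.5),
    "knowledge":   (1.5, 0.5, 0.3, 0.2, 0.2),   # compact: cloud is cheap
}

def place(resource_type):
    ec, es, cc, cs, tr = COSTS[resource_type]
    edge_cost  = ec + es            # processed where it is generated
    cloud_cost = cc + cs + tr       # pay bandwidth to ship it upstream
    return "edge" if edge_cost <= cloud_cost else "cloud"

for t in COSTS:
    print(t, "->", place(t))       # data -> edge; the rest -> cloud
```

The point of the sketch is the qualitative behavior the abstract implies: bulky raw data stays at the edge because transfer is expensive, while compact, already-refined resources migrate to cheaper cloud processing.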

  7. A free-piston Stirling engine/linear alternator controls and load interaction test facility

    Science.gov (United States)

    Rauch, Jeffrey S.; Kankam, M. David; Santiago, Walter; Madi, Frank J.

    1992-01-01

    A test facility at LeRC was assembled for evaluating free-piston Stirling engine/linear alternator control options, and interaction with various electrical loads. This facility is based on a 'SPIKE' engine/alternator. The engine/alternator, a multi-purpose load system, a digital computer based load and facility control, and a data acquisition system with both steady-periodic and transient capability are described. Preliminary steady-periodic results are included for several operating modes of a digital AC parasitic load control. Preliminary results on the transient response to switching a resistive AC user load are discussed.

  8. Computerized radionuclidic analysis in production facilities

    International Nuclear Information System (INIS)

    Gibbs, A.

    1978-03-01

    The Savannah River Plant Laboratories Department has been using a dual-computer system to control all radionuclidic pulse height analyses since 1971. This computerized system analyzes 7000 to 8000 samples per month and has allowed the counting room staff to be reduced from three persons to one. More reliable process information is being returned to the production facilities and for environmental evaluations, and returned faster, even though the sample load has more than tripled. This information is now more easily retrievable for other evaluations. The computer is also used for mass spectrometer data reduction and for quality control data analysis. The basic system is being expanded by interfacing microcomputers that provide data input from all of the laboratory modules for quality assurance programs

  9. Combustion Dynamics Facility: April 1990 workshop working group reports

    Energy Technology Data Exchange (ETDEWEB)

    Kung, A.H.; Lee, Y.T.

    1990-04-01

    This document summarizes results from a workshop held April 5--7, 1990, on the proposed Combustion Dynamics Facility (CDF). The workshop was hosted by the Lawrence Berkeley Laboratory (LBL) and Sandia National Laboratories (SNL) to provide an opportunity for potential users to learn about the proposed experimental and computational facilities, to discuss the science that could be conducted with such facilities, and to offer suggestions as to how the specifications and design of the proposed facilities might be further refined to address the most visionary scientific opportunities. Some 130 chemical physicists, combustion chemists, and specialists in UV synchrotron radiation sources and free-electron lasers (more than half of whom were from institutions other than LBL and SNL) attended the five plenary sessions and participated in one or more of the nine parallel working group sessions. Seven of these sessions were devoted to broadening and strengthening the scope of CDF scientific opportunities and to detail the experimental facilities required to realize these opportunities. Two technical working group sessions addressed the design and proposed performance of two of the major CDF experimental facilities. These working groups and their chairpersons are listed below. A full listing of the attendees of the workshop is given in Appendix A. 1 tab.

  10. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques for investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were employed in these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted mathematically analyzing the entire vessel with only 1/12 of the vessel geometry, allowing the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  11. Nondestructive assay system development for a plutonium scrap recovery facility

    International Nuclear Information System (INIS)

    Hsue, S.T.; Baker, M.P.

    1984-01-01

    A plutonium scrap recovery facility is being constructed at the Savannah River Plant (SRP). The safeguards groups of the Los Alamos National Laboratory have been working since the early design stage of the facility with SRP and other national laboratories to develop a state-of-the-art assay system for this new facility. Not only will the most current assay techniques be incorporated into the system, but also the various nondestructive assay (NDA) instruments are to be integrated with an Instrument Control Computer (ICC). This undertaking is both challenging and ambitious; an entire assay system of this type has never been done before in a working facility. This paper will describe, in particular, the effort of the Los Alamos Safeguards Assay Group in this endeavor. Our effort in this project can be roughly divided into three phases: NDA development, system integration, and integral testing. 6 references

  12. A safety decision analysis for Saudi Arabian nuclear research facility

    International Nuclear Information System (INIS)

    Abulfaraj, W.H.; Abdul-Fattah, A.F.

    1985-01-01

    Establishment of a nuclear research facility should be the first step in planning the introduction of nuclear energy to Saudi Arabia. Fuzzy set decision theory is selected among different decision theories for this analysis. Four research reactors from the USA are selected for the present study. The IFDA computer code, based on fuzzy set theory, is applied. Results reveal that the FNR reactor is the best alternative for a Saudi Arabian nuclear research facility, with MITR the second best. 17 refs
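A minimal fuzzy-set ranking in the spirit of this analysis might score each alternative by the minimum of its criterion memberships (a conservative fuzzy "and"). The membership values and the third alternative "RR-3" are invented for illustration, and the IFDA code's actual method is not reproduced here:

```python
# Hypothetical fuzzy-set ranking sketch. Criterion memberships
# (safety, cost, training value) lie in [0, 1] and are invented.
ALTERNATIVES = {
    "FNR":  (0.9, 0.8, 0.85),
    "MITR": (0.85, 0.7, 0.9),
    "RR-3": (0.7, 0.9, 0.6),
}

def rank(alts):
    """Score each alternative by the min of its memberships (fuzzy 'and'),
    then order alternatives from best to worst."""
    scored = {name: min(ms) for name, ms in alts.items()}
    return sorted(scored, key=scored.get, reverse=True)

print(rank(ALTERNATIVES))  # -> ['FNR', 'MITR', 'RR-3']
```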

  13. Current internal-dosimetry practices at US Department of Energy facilities

    International Nuclear Information System (INIS)

    Traub, R.J.; Murphy, B.L.; Selby, J.M.; Vallario, E.J.

    1985-04-01

    The internal dosimetry practices at DOE facilities were characterized. The purpose was to determine the size of the facilities' internal dosimetry programs, the uniformity of the programs among the facilities, and the areas of greatest concern to health physicists in providing and reporting accurate estimates of internal radiation dose and in meeting proposed changes in internal dosimetry. The differences among the internal dosimetry programs are related to the radioelements in use at each facility and, to some extent, to the number of workers at each facility. The differences include different frequencies of quality control samples, different minimum detection levels, different methods of recording radionuclides, different amounts of data recorded in the permanent record, and apparent differences in modeling the metabolism of radionuclides within the body. Recommendations for improving internal dosimetry practices include studying the relationship between air-monitoring/survey readings and bioassay data, establishing uniform methods for recording bioassay results, developing more sensitive direct-bioassay procedures, establishing a mechanism for sharing information on internal dosimetry procedures among DOE facilities, and developing mathematical models and interactive computer codes that can help quantify the uptake of radioactive materials and predict their distribution in the body. 19 refs., 8 tabs

  14. National Ignition Facility system design requirements NIF integrated computer controls SDR004

    International Nuclear Information System (INIS)

    Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the NIF Integrated Computer Control System. The Integrated Computer Control System (ICCS) is covered in NIF WBS element 1.5. This document responds directly to the requirements detailed in the NIF Functional Requirements/Primary Criteria, and is supported by subsystem design requirements documents for each major ICCS Subsystem

  15. A new AMS facility in Mexico

    International Nuclear Information System (INIS)

    Solís, C.; Chávez-Lomelí, E.; Ortiz, M.E.; Huerta, A.; Andrade, E.; Barrios, E.

    2014-01-01

    A new Accelerator Mass Spectrometry system has been installed at the Institute of Physics of the National Autonomous University of Mexico (UNAM). A sample preparation chemistry laboratory equipped with computer controlled graphitization equipment (AGEIII) has also been established. Together both facilities constitute the LEMA (Laboratorio de Espectrometría de Masas con Aceleradores), first of its kind in Mexico. High sensitivity characterization of the concentration in a sample of 14C as well as 10Be, 26Al, 129I and Pu is now possible. Since the demand for 14C dating is far more abundant, a data analysis program was developed in the cross-platform programming language Python in order to calculate radiocarbon age. Results from installation, acceptance tests and the first results of 14C analyses of reference materials prepared in our own facility are presented

  16. A new AMS facility in Mexico

    Science.gov (United States)

    Solís, C.; Chávez-Lomelí, E.; Ortiz, M. E.; Huerta, A.; Andrade, E.; Barrios, E.

    2014-07-01

    A new Accelerator Mass Spectrometry system has been installed at the Institute of Physics of the National Autonomous University of Mexico (UNAM). A sample preparation chemistry laboratory equipped with computer controlled graphitization equipment (AGEIII) has also been established. Together both facilities constitute the LEMA (Laboratorio de Espectrometría de Masas con Aceleradores) first of its kind in Mexico. High sensitivity characterization of the concentration in a sample of 14C as well as 10Be, 26Al, 129I and Pu are now possible. Since the demand for 14C dating is far more abundant, a data analysis program was developed in the cross-platform programming language Python in order to calculate radiocarbon age. Results from installation, acceptance tests and the first results of 14C analyses of reference materials prepared in our own facility are presented.
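The radiocarbon age computation mentioned in both records conventionally follows the Stuiver-Polach formula t = -8033 · ln(F14C), using the Libby mean life. A minimal sketch of that standard calculation is shown below; the LEMA program's actual implementation is not reproduced, and the measured fraction-modern value is illustrative:

```python
import math

# Conventional radiocarbon age: t = -8033 * ln(F14C), where F14C is the
# measured fraction of modern carbon and 8033 a is the Libby mean life
# (corresponding to the Libby half-life of 5568 a).
LIBBY_MEAN_LIFE = 8033.0  # years

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age in 14C years BP."""
    if fraction_modern <= 0:
        raise ValueError("fraction modern must be positive")
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

print(round(radiocarbon_age(0.5)))  # one half-life of decay -> 5568
```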

  17. Interactive simulation of nuclear power systems using a dedicated minicomputer - computer graphics facility

    International Nuclear Information System (INIS)

    Tye, C.; Sezgen, A.O.

    1980-01-01

    The design of control systems and operational procedures for large-scale nuclear power plant poses a difficult optimization problem requiring considerable computational effort. Plant dynamic simulation using digital minicomputers offers the prospect of relatively low-cost computing and, when combined with graphical input/output, provides a powerful tool for studying such problems. The paper discusses the results obtained from a simulation study carried out at the Computer Graphics Unit of the University of Manchester using a typical station control model for an Advanced Gas Cooled reactor. Particular emphasis is placed on the use of computer graphics for information display, parameter and control system optimization, and techniques for using graphical input to define and/or modify the control system topology. Experience gained from this study has shown that a relatively modest minicomputer system can be used to simulate large-scale dynamic systems and that highly interactive computer graphics can be used to advantage to relieve the designer of many of the tedious aspects of simulation, leaving him free to concentrate on the more creative aspects of his work. (author)

  18. Huff-type competitive facility location model with foresight in a discrete space

    Directory of Open Access Journals (Sweden)

    Milad Gorji Ashtiani

    2011-01-01

    Full Text Available Consider a chain, the leader, that wants to open p new facilities in a linear market, such as a metro line. In this market there is a competitor, called the follower. The leader and the follower have each established some facilities in advance. When the leader opens p new facilities, the follower reacts to the leader's action and opens r new facilities. The optimal locations for the leader and the follower are chosen among predefined potential locations. Demand is represented by demand points and is assumed inelastic. Under the Huff model, demand points are probabilistically attracted to all facilities. The leader's objective is to maximize its market share after the follower's new facilities open. To solve the leader's problem, the follower's problem is first solved for each of the leader's potential locations and the best location for the leader is obtained; a heuristic is then proposed for the case in which the leader and the follower each want to open one new facility. Computational results show that the proposed method is efficient for large-scale problems.
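    The Huff attraction mechanism underlying the model above can be sketched directly: each facility's utility at a demand point is its attractiveness divided by a power of distance, and capture probabilities are the normalized utilities. A generic illustration (attractiveness values, distances, and the distance-decay exponent are hypothetical, not taken from the paper):

    ```python
    def huff_probabilities(attractiveness, distances, decay=2.0):
        """Probability that a demand point patronizes each facility (Huff model).

        P_j = (A_j / d_j**decay) / sum_k (A_k / d_k**decay)
        """
        utilities = [a / d**decay for a, d in zip(attractiveness, distances)]
        total = sum(utilities)
        return [u / total for u in utilities]

    # Two equally attractive facilities, one twice as far from the demand point:
    probs = huff_probabilities([1.0, 1.0], [1.0, 2.0])
    print(probs)  # nearer facility captures 0.8, farther one 0.2
    ```

    A facility's market share is then the sum of these probabilities, weighted by demand, over all demand points, which is the quantity the leader maximizes.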

  19. How You Can Protect Public Access Computers "and" Their Users

    Science.gov (United States)

    Huang, Phil

    2007-01-01

    By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

  20. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  1. Distributed computer controls for accelerator systems

    Science.gov (United States)

    Moore, T. L.

    1989-04-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed.

  2. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1989-01-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. (orig.)

  3. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1988-09-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multi-user Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100K. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the efficient implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. 3 refs

  4. Cold moderator test facilities working group

    International Nuclear Information System (INIS)

    Bauer, Guenter S.; Lucas, A. T.

    1997-09-01

    The working group meeting was chaired by Bauer and Lucas. Testing is a vital part of any cold source development project. This applies to specific physics concept verification, benchmarking in conjunction with computer modeling, and engineering testing to confirm the functional viability of a proposed system. Irradiation testing of materials will always be needed to continuously extend a comprehensive and reliable information database. An ever-increasing worldwide effort to enhance the performance of reactor- and accelerator-based neutron sources, coupled with the complexity and rising cost of building new-generation facilities, gives a new dimension to cold source development and testing programs. A stronger focus is now being placed on the fine-tuning of cold source design to maximize its effectiveness in fully exploiting the facility. In this context, pulsed spallation neutron sources pose an extra challenge due to requirements regarding pulse width and shape which result from a large variety of different instrument concepts. The working group reviewed these requirements in terms of their consequences for the testing equipment needed, and compiled a list of existing and proposed facilities suitable for carrying out the necessary development work.

  5. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    Science.gov (United States)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. Its purpose is to establish and discuss Laboratory objectives for computing and networking in support of science, and to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  6. NDA [nondestructive assay] for a facility at SRP

    International Nuclear Information System (INIS)

    Studley, R.V.

    1987-01-01

    A near-real-time accountability system with associated high-accuracy assay measurements has recently been placed in service at a Savannah River Plant (SRP) facility. A computer cluster provides facility-wide communication between personnel and the accountability, process control, and laboratory data systems. The cluster also communicates with process, accountability, and laboratory instrumentation and process controls, plus an item-tracking bar code printer/reader system. Eight high-performance microprocessor-based nondestructive assay (NDA) systems, developed at the Los Alamos National Laboratory (LANL) for this process, are also connected to this cluster. With standards developed for them, these instruments are achieving the highest currently known NDA measurement accuracies

  7. Development of safeguards information treatment system at facility level in Korea

    International Nuclear Information System (INIS)

    So, D.S.; Lee, B.D.; Song, D.Y.

    2001-01-01

    The Safeguards Information Treatment System (SITS) at the facility level was developed to implement efficiently the obligations under the IAEA comprehensive Safeguards Agreement, bilateral nuclear cooperation agreements with other countries and domestic law, and to manage efficiently the information related to safeguards implementation at the facility level in Korea. Nuclear facilities in Korea are categorized based on their accounting characteristics as follows: (1) Item counting facility or bulk handling facility; (2) Batch follow-up facility or not; (3) MUF (Material Unaccounted For) occurrence or not; (4) Nuclear production facility or not; (5) Operation status of facility; (6) Information management of nuclear material transfer status between KMPs or not; (7) Indication of inventory KMP on the inventory change of nuclear material required or not. The hardware and software for SITS can be loaded on a personal computer running Windows 2000 or Windows NT. MS SQL Server 7 and MS Internet Information Server were adopted as the database management system and Web server, respectively. The network environment of SITS was designed to include the nuclear research institute, nuclear power plants of PWR and CANDU types, nuclear fuel fabrication facilities and other facilities. SITS can be operated standalone or under a client-server system if an intranet exists. More detailed contents of SITS are described elsewhere. Each module of SITS will be tested during incorporation of existing data into SITS, and SITS will then be distributed to nuclear facilities in Korea

  8. Computational upgrading. Quarterly report, January--March 1971

    Energy Technology Data Exchange (ETDEWEB)

    Van Velkinburgh, J.H.

    1997-09-01

    Additions to the data acquisition and reduction facilities and other non-programming endeavors are discussed. Computer codes designed to aid in the analysis and interpretation of quantitative data which were written or modified this period are discussed.

  9. How to maintain hundreds of computers offering different functionalities with only 2 system administrators

    International Nuclear Information System (INIS)

    Krempaska, R.; Bertrand, A.; Higgs, C.; Kapeller, R.; Lutz, H.; Provenzano, M.

    2012-01-01

    At the Paul Scherrer Institute, the control systems of our large research facilities are maintained by the Controls section. These facilities include two proton accelerators (HIPA and PROSCAN) and two electron accelerators (SLS and the Injector Test Facility of the future SwissFEL), as well as the control systems of all their related beamlines and test facilities. The control system configuration and applications for each facility are stored on independent NFS file servers. The total number of Linux computers and servers is about 500. Since only two system administrators are responsible for their installation, configuration and maintenance, we have adopted a well-defined solution that relies on three ideas: virtualization; a unified operating system installation and update mechanism; and automatic configuration by a common tool (puppet). This paper describes the methods and tools used to develop and maintain the challenging computing infrastructure deployed by the Controls section

  10. Computers and Play in Early Childhood: Affordances and Limitations

    Science.gov (United States)

    Verenikina, Irina; Herrington, Jan; Peterson, Rob; Mantei, Jessica

    2010-01-01

    The widespread proliferation of computer games for children as young as six months of age, merits a reexamination of their manner of use and a review of their facility to provide opportunities for developmental play. This article describes a research study conducted to explore the use of computer games by young children, specifically to…

  11. Computing facilities available to final-year students at 3 UK dental schools in 1997/8: their use, and students' attitudes to information technology.

    Science.gov (United States)

    Grigg, P; Macfarlane, T V; Shearer, A C; Jepson, N J; Stephens, C D

    2001-08-01

    To identify the computer facilities available in 3 dental schools where 3 different approaches to the use of technology-based learning material have been adopted, and to assess dental students' perception of their own computer skills and their attitudes towards information technology. Multicentre cross-sectional study by questionnaire. All 181 dental students in their final year of study (1997-8). The overall participation rate was 80%. There were no differences between schools in the students' self-assessment of their IT skills, but only 1/3 regarded themselves as competent in basic skills and nearly 50% of students in all 3 schools felt that insufficient IT training had been provided to enable them to follow their course without difficulty. There were significant differences between schools in most of the other areas examined, which reflect the different ways in which IT can be used to support the dental course. 1. Students value IT as an educational tool. 2. Their awareness of the relevance of a knowledge of information technology for their future careers remains generally low. 3. There is a need to provide effective instruction in IT skills for those dental students who do not acquire these during secondary education.

  12. Availability of Supportive Facilities for Effective Teaching

    Directory of Open Access Journals (Sweden)

    Eugene Okyere-Kwakye

    2013-10-01

    Full Text Available The work environment of teachers has been identified by many researchers as one of the key determinants of quality teaching. Unlike the private schools, most government Junior High Schools in Ghana have persistently been perceived as performing unsatisfactorily in the Basic Education Certificate Examination (B.E.C.E.). As the majority of Ghanaian pupils school in this sector of education, this argument is worthy of investigation. The purpose of this study is therefore to identify the availability and adequacy of certain necessary school facilities within the environment of Junior High Schools in the New Juaben Municipality, Eastern Region of Ghana. A questionnaire was used to collect data from two hundred (200) teachers selected from twenty (20) Junior High Schools in the New Juaben Municipality. The results reveal that facilities such as furniture for pupils, urinal and toilet facilities, and classroom blocks were available but not adequate, whereas computer laboratories, library books, staff common rooms and teachers' accommodation were unavailable. Practical implications of these results are discussed.

  13. ANS main control complex three-dimensional computer model development

    International Nuclear Information System (INIS)

    Cleaves, J.E.; Fletcher, W.M.

    1993-01-01

    A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use

  14. Lightning and surge protection of large ground facilities

    Science.gov (United States)

    Stringfellow, Michael F.

    1988-04-01

    The vulnerability of large ground facilities to direct lightning strikes and to lightning-induced overvoltages on the power distribution, telephone and data communication lines is discussed. Advanced electrogeometric modeling is used for the calculation of direct strikes to overhead power lines, buildings, vehicles and objects within the facility. Possible modes of damage, injury and loss are discussed. Some appropriate protection methods for overhead power lines, structures, vehicles and aircraft are suggested. Methods to mitigate the effects of transients on overhead and underground power systems as well as within buildings and other structures are recommended. The specification and location of low-voltage surge suppressors for the protection of vulnerable hardware such as computers, telecommunication equipment and radar installations are considered. The advantages and disadvantages of commonly used grounding techniques, such as single-point, multiple and isolated grounds, are compared. An example is given of the expected distribution of lightning flashes to a large airport, its buildings, structures and facilities, as well as to vehicles on the ground.
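    Electrogeometric models of the kind mentioned above rest on a striking-distance relation between return-stroke peak current and the distance at which the stepped leader attaches to an object. A widely cited form is r = 10·I^0.65, with I in kA and r in metres; the sketch below assumes that relation (the paper's exact model is not specified in the abstract):

    ```python
    def striking_distance(peak_current_kA: float) -> float:
        """Striking distance in metres for a given return-stroke peak current
        in kA, using the common electrogeometric relation r = 10 * I**0.65."""
        return 10.0 * peak_current_kA ** 0.65

    # Larger currents strike from farther away, which is why tall structures
    # preferentially intercept the stronger flashes in a facility:
    r = striking_distance(10.0)  # roughly 45 m for a 10 kA stroke
    ```

    Given the striking distance for each current, the model sweeps the facility geometry to decide which strokes terminate on lines, structures, or the ground, yielding the strike distributions the abstract describes.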

  15. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dirndorfer, Stefan

    2017-01-17

    Condensation induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation induced water hammer have predominantly addressed horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art technology, the phenomenology of vertical condensation induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation induced water hammer. Selected experiments are validated by the modified ATHLET system code. A sensitivity analysis in ATHLET, together with the experimental data, makes it possible to assess the performance of ATHLET in computing condensation induced water hammer in a vertical up-fill configuration.

  16. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    International Nuclear Information System (INIS)

    Dirndorfer, Stefan

    2017-01-01

    Condensation induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation induced water hammer have predominantly addressed horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art technology, the phenomenology of vertical condensation induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation induced water hammer. Selected experiments are validated by the modified ATHLET system code. A sensitivity analysis in ATHLET, together with the experimental data, makes it possible to assess the performance of ATHLET in computing condensation induced water hammer in a vertical up-fill configuration.

  17. YALINA facility a sub-critical Accelerator- Driven System (ADS) for nuclear energy research facility description and an overview of the research program (1997-2008).

    Energy Technology Data Exchange (ETDEWEB)

    Gohar, Y.; Smith, D. L.; Nuclear Engineering Division

    2010-04-28

    The YALINA facility is a zero-power, sub-critical assembly driven by a conventional neutron generator. It was conceived, constructed, and put into operation at the Radiation Physics and Chemistry Problems Institute of the National Academy of Sciences of Belarus located in Minsk-Sosny, Belarus. This facility was conceived for the purpose of investigating the static and dynamic neutronics properties of accelerator driven sub-critical systems, and to serve as a neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinide nuclei. This report provides a detailed description of this facility and documents the progress of research carried out there during a period of approximately a decade since the facility was conceived and built until the end of 2008. During its history of development and operation to date (1997-2008), the YALINA facility has hosted several foreign groups that worked with the resident staff as collaborators. The participation of Argonne National Laboratory in the YALINA research programs commenced in 2005. For obvious reasons, special emphasis is placed in this report on the work at YALINA facility that has involved Argonne's participation. Attention is given here to the experimental program at YALINA facility as well as to analytical investigations aimed at validating codes and computational procedures and at providing a better understanding of the physics and operational behavior of the YALINA facility in particular, and ADS systems in general, during the period 1997-2008.

  18. Utilization of Relap 5 computer code for analyzing thermohydraulic projects

    International Nuclear Information System (INIS)

    Silva Filho, E.

    1987-01-01

    This work deals with the design of a scaled test facility of a typical pressurized water reactor plant of the 1300 MW (electric) class. A station blackout was chosen to investigate the thermohydraulic behaviour of the test facility in comparison to the reactor plant. The computer code RELAP5/MOD1 was used to simulate the blackout and to compare the behaviour of the test facility with that of the reactor plant. The results demonstrate similar thermohydraulic behaviour of the two systems. (author) [pt

  19. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and we conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
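    The base-line-versus-bursting conclusion reduces to a break-even comparison between the annualized cost of an owned node and the metered cost of the hours actually used. A toy sketch of that reasoning; every price and utilization figure below is hypothetical and not taken from the study:

    ```python
    def cheaper_option(hours_per_year: float,
                       dedicated_cost_per_year: float,
                       cloud_cost_per_hour: float) -> str:
        """Compare the annualized cost of an owned worker node against
        renting the same hours on demand; returns the cheaper choice."""
        cloud_cost = hours_per_year * cloud_cost_per_hour
        return "dedicated" if dedicated_cost_per_year < cloud_cost else "cloud"

    # Steady, near-continuous utilization favours owned hardware:
    print(cheaper_option(8000, 1500.0, 0.50))  # dedicated (8000 h would cost $4000)
    # Occasional bursts favour renting by the hour:
    print(cheaper_option(500, 1500.0, 0.50))   # cloud (500 h costs only $250)
    ```

    The crossover point, dedicated cost divided by hourly rate, is the utilization above which dedicated hardware wins, which matches the abstract's advice to own the base load and rent the spikes.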

  20. Computer applications in controlled fusion research

    International Nuclear Information System (INIS)

    Killeen, J.

    1975-01-01

    The application of computers to controlled thermonuclear research (CTR) is essential. In the near future the use of computers in the numerical modeling of fusion systems should increase substantially. A recent panel has identified five categories of computational models to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies is called for. The development and application of computer codes to implement these models is a vital step in reaching the goal of fusion power. To meet the needs of the fusion program the National CTR Computer Center has been established at the Lawrence Livermore Laboratory. A large central computing facility is linked to smaller computing centers at each of the major CTR Laboratories by a communication network. The crucial element needed for success is trained personnel. The number of people with knowledge of plasma science and engineering trained in numerical methods and computer science must be increased substantially in the next few years. Nuclear engineering departments should encourage students to enter this field and provide the necessary courses and research programs in fusion computing

  1. LEGS data acquisition facility

    International Nuclear Information System (INIS)

    LeVine, M.J.

    1985-01-01

    The data acquisition facility for the LEGS medium energy photonuclear beam line is composed of an auxiliary crate controller (ACC) acting as a front-end processor, loosely coupled to a time-sharing host computer based on a UNIX-like environment. The ACC services all real-time demands in the CAMAC crate: it responds to LAMs generated by data acquisition modules, to keyboard commands, and it refreshes the graphics display at frequent intervals. The host processor is needed only for printing histograms and recording event buffers on magnetic tape. The host also provides the environment for software development. The CAMAC crate is interfaced by a VERSAbus CAMAC branch driver

  2. Procedures for economic distribution of radionuclides in research facilities

    International Nuclear Information System (INIS)

    Perry, N.A.

    1979-01-01

    A radionuclide accountability system for use in a research facility is described. It can be operated manually or adapted for computer use. All radionuclides are ordered, received, distributed and paid for by the Radiological Control Office, who keep complete records of the date of order, receipt, calibration, use, transfer and/or disposal. Wipe leak tests, specific activity and lot number are also recorded. The procedure provides centralized total accountability records, including financial records, of all radionuclide orders, and the economic advantages of combined purchasing. The use of this system in two medical facilities has resulted in considerable financial savings in the first year of operation. (author)

  3. A dynamic simulation of the Hanford site grout facility

    International Nuclear Information System (INIS)

    Zimmerman, B.D.; Klimper, S.C.; Williamson, G.F.

    1992-01-01

    Computer-based dynamic simulation can be a powerful, low-cost tool for investigating questions concerning timing, throughput capability, and ability of engineering facilities and systems to meet established milestones. The simulation project described herein was undertaken to develop a dynamic simulation model of the Hanford site grout facility and its associated systems at the US Department of Energy's (DOE's) Hanford site in Washington State. The model allows assessment of the effects of engineering design and operation trade-offs and of variable programmatic constraints, such as regulatory review, on the ability of the grout system to meet milestones established by DOE for low-level waste disposal

  4. The data acquisition and control system for Thomson Scattering on ATF [Advanced Toroidal Facility

    International Nuclear Information System (INIS)

    Stewart, K.A.; Kindsfather, R.R.; Rasmussen, D.A.

    1989-01-01

    The 2-dimensional Thomson Scattering System measuring electron temperatures and densities in the Advanced Toroidal Facility (ATF) is interfaced to a VAX-8700 computer system running in a clustered configuration. Calibration, alignment, and operation of this diagnostic is under computer control. Extensive CAMAC instrumentation is used for timing control, data acquisition, and laser alignment. This paper will discuss the computer hardware and software, system operations, and data storage and retrieval. 3 refs

  5. The ATF [Advanced Toroidal Facility] Status and Control System

    International Nuclear Information System (INIS)

    Baylor, L.R.; Devan, W.R.; Sumner, J.N.; Alban, A.M.

    1987-01-01

    The Advanced Toroidal Facility (ATF) Status and Control System (SCS) is a programmable controller-based state monitoring and supervisory control system. This paper describes the SCS implementation and its use of a host computer to run a commercially available software package that provides color graphic interactive displays, alarm logging, and archiving of state data

  6. Electromagnetic Induction: A Computer-Assisted Experiment

    Science.gov (United States)

    Fredrickson, J. E.; Moreland, L.

    1972-01-01

    By using minimal equipment it is possible to demonstrate Faraday's Law. An electronic desk calculator enables sophomore students to solve a difficult mathematical expression for the induced EMF. Polaroid pictures of the plot of induced EMF, together with the computer facility, enable students to make comparisons. (PS)
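    The induced-EMF computation behind such an experiment follows directly from Faraday's law, EMF = -N dΦ/dt. A minimal numerical sketch, assuming a sinusoidal flux whose amplitude and frequency are purely illustrative (not values from the experiment):

    ```python
    import math

    def induced_emf(n_turns, flux_fn, t, dt=1e-6):
        """Faraday's law: EMF = -N * dPhi/dt, with the time derivative
        estimated by a central difference."""
        dphi_dt = (flux_fn(t + dt) - flux_fn(t - dt)) / (2 * dt)
        return -n_turns * dphi_dt

    # Illustrative sinusoidal flux Phi(t) = Phi0 * sin(2*pi*f*t):
    phi0, f = 1e-3, 50.0  # 1 mWb amplitude, 50 Hz
    flux = lambda t: phi0 * math.sin(2 * math.pi * f * t)

    # At t = 0 the flux changes fastest, dPhi/dt = 2*pi*f*Phi0,
    # so a 100-turn coil sees EMF = -100 * 2*pi*50*1e-3, about -31.4 V:
    emf = induced_emf(100, flux, 0.0)
    ```

    Plotting this EMF against time reproduces the kind of curve the students photographed, shifted a quarter period from the flux itself.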

  7. EBR-II high-ramp transients under computer control

    International Nuclear Information System (INIS)

    Forrester, R.J.; Larson, H.A.; Christensen, L.J.; Booty, W.F.; Dean, E.M.

    1983-01-01

    During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control on demand power during the transients

  8. Editorial team for the CERN Computer News Letter

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The CERN Computer News Letter (CNL) informs the users of CERN computing facilities about changes and trends in this area. The CNL can also include conference or meeting reports and technical briefs which, as they are not necessarily CERN-specific, can be helpful to non-CERN users. From left to right: W. von Rueden, Nicole Crémel and François Grey

  9. THE TRENDS AND USE OF COMPUTER AND INTERNET AMONG MEDICAL STUDENTS

    Directory of Open Access Journals (Sweden)

    M. Sathikumar

    2018-02-01

Full Text Available BACKGROUND Computer-based learning is becoming more and more widespread, and it is especially important in medical subjects since lifelong learning is a goal of the medical profession. The study was conducted to find out the computer literacy, computer and internet availability, and the trends in use of computers, laptops, and other gadgets among medical students. MATERIALS AND METHODS A cross-sectional descriptive study was conducted among the medical students of Jubilee Mission Medical College & Research Institute, Thrissur, and SUT Academy of Medical Sciences, Thiruvananthapuram, Kerala. A total of 420 students participated in the study. RESULTS Of the 420 students, 42.38% had their own laptop or computer and 45.71% were using a family-shared computer or laptop. 80.48% of students were found to be using mobile phones or tablets with an internet facility. Most of the students access the internet for recreational purposes. Regarding e-learning, 54.29% of the students who participated in the study were aware of it. The majority of medical students are of the opinion that computer and internet use should be encouraged in medical colleges. CONCLUSION Those who participated in the study have the necessary infrastructure and a positive attitude towards computer-based learning, even though they use it mainly for recreational purposes.

  10. Facility Interface Capability Assessment (FICA) user manual

    International Nuclear Information System (INIS)

    Pope, R.B.; MacDonald, R.R.; Massaglia, J.L.; Williamson, D.A.; Viebrock, J.M.; Mote, N.

    1995-09-01

The US Department of Energy's (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is responsible for developing the Civilian Radioactive Waste Management System (CRWMS) to accept spent nuclear fuel from commercial facilities. The objective of the Facility Interface Capability Assessment (FICA) project was to assess the capability of each commercial spent nuclear fuel (SNF) storage facility, at which SNF is stored, to handle various SNF shipping casks. The purpose of this report is to describe the FICA computer software and to provide the FICA user with a guide on how to use the FICA system. The FICA computer software consists of two executable programs: the FICA Reactor Report program and the FICA Summary Report program (written in the CA-Clipper version 5.2 development system). The complete FICA software system is contained on either a 3.5 in. (double density) or a 5.25 in. (high density) diskette and consists of the two FICA programs and all the database files (generated using dBASE III). The FICA programs are provided as ''stand-alone'' systems; neither the CA-Clipper compiler nor dBASE III is required to run them. The steps for installing the FICA software system and executing the FICA programs are described in this report. Instructions are given on how to install the FICA software system onto the hard drive of the PC and how to execute the FICA programs from the FICA subdirectory on the hard drive. Both FICA programs are menu driven, with the up-arrow and down-arrow keys used to move the cursor to the desired selection

  11. Navier-Stokes Simulation of Airconditioning Facility of a Large Modem Computer Room

    Science.gov (United States)

    2005-01-01

NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high-performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPUs). The geometry modeling from blueprints and grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in the shape and size of the room, or in the locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room.
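The cooler CFM ratings that drive those boundary conditions tie to heat removal through the standard sensible-heat balance Q = ρ·V̇·c_p·ΔT. A rough check of the orders of magnitude involved, using assumed values rather than the NAS room's actual design figures:

```python
# Sensible heat balance for air cooling: Q = rho * V_dot * c_p * dT
# All numbers below are illustrative assumptions, not NAS design data.
rho_air = 1.2        # air density, kg/m^3
cp_air = 1005.0      # specific heat of air, J/(kg K)

cfm = 10000.0        # assumed cooler rating, cubic feet per minute
m3_per_s = cfm * 0.3048**3 / 60.0   # convert CFM to m^3/s

delta_t = 12.0       # assumed temperature rise across the racks, K
q_watts = rho_air * m3_per_s * cp_air * delta_t
# ~10,000 CFM at a 12 K rise removes on the order of 68 kW
```

A single rack row dissipating tens of kilowatts therefore needs tens of thousands of CFM of sub-floor supply, which is why the simulation resolves the perforated-tile flow paths explicitly.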

  12. Assessment of the integrity of structural shielding of four computed tomography facilities in the greater Accra region of Ghana

    International Nuclear Information System (INIS)

    Nkansah, A.; Schandorf, C.; Boadu, M.; Fletcher, J. J.

    2013-01-01

The structural shielding thicknesses of the walls of four computed tomography (CT) facilities in Ghana were re-evaluated to verify the shielding integrity using the new shielding design methods recommended by the National Council on Radiation Protection and Measurements (NCRP). The shielding thicknesses obtained ranged from 120 to 155 mm using the default DLP values proposed by the European Commission and from 110 to 168 mm using DLP values derived from the four CT manufacturers. These values are within the accepted standard concrete wall thickness range of 102 to 152 mm prescribed by the NCRP. Ultrasonic pulse testing of all walls indicated that they are of good quality and free of voids, since the estimated pulse velocities were within the range of 3.496±0.005 km s^-1. The average dose equivalent rate estimated for supervised areas is 3.4±0.27 μSv week^-1 and that for the controlled area is 18.0±0.15 μSv week^-1, which are within acceptable values. (authors)

  13. Lean coding machine. Facilities target productivity and job satisfaction with coding automation.

    Science.gov (United States)

    Rollins, Genna

    2010-07-01

    Facilities are turning to coding automation to help manage the volume of electronic documentation, streamlining workflow, boosting productivity, and increasing job satisfaction. As EHR adoption increases, computer-assisted coding may become a necessity, not an option.

  14. Plant model of KIPT neutron source facility simulator

    International Nuclear Information System (INIS)

    Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry

    2016-01-01

Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient states using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the Python scripting language. It is consistent with the computer language of the plant control system, it is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.
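A minimal sketch of the kind of lumped-parameter cooling-loop transient such a Python plant model can integrate in real time. The equation, parameter names, and values here are illustrative assumptions, not the facility's actual model: coolant temperature T obeys m·c_p·dT/dt = P_in − UA·(T − T_sec).

```python
# Illustrative lumped-parameter primary-loop model (assumed, not KIPT's):
#   dT/dt = (P_in - UA * (T - T_sec)) / (m * c_p)
def simulate_loop(p_in_kw, t_sec, t0, minutes, dt=1.0,
                  ua_kw_per_k=15.0, m_kg=2000.0, cp_kj=4.18):
    """Forward-Euler integration of loop coolant temperature (deg C)."""
    t = t0
    history = []
    for _ in range(int(minutes * 60 / dt)):
        dT = (p_in_kw - ua_kw_per_k * (t - t_sec)) / (m_kg * cp_kj)
        t += dT * dt
        history.append(t)
    return history

# Step the SCA loop to its full 300-kW load from a 30 deg C cold start.
temps = simulate_loop(p_in_kw=300.0, t_sec=30.0, t0=30.0, minutes=120)
# Steady state approaches T_sec + P/UA = 30 + 300/15 = 50 deg C
```

A control-system simulator would wrap such an integrator in a loop that also updates pump speeds and valve positions each time step, which is the role the report assigns to the plant model.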

  15. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient states using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the Python scripting language. It is consistent with the computer language of the plant control system, it is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.

  16. A new AMS facility in Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Solís, C., E-mail: corina@fisica.unam.mx; Chávez-Lomelí, E.; Ortiz, M.E.; Huerta, A.; Andrade, E.; Barrios, E.

    2014-07-15

A new Accelerator Mass Spectrometry system has been installed at the Institute of Physics of the National Autonomous University of Mexico (UNAM). A sample preparation chemistry laboratory equipped with computer-controlled graphitization equipment (AGEIII) has also been established. Together, both facilities constitute the LEMA (Laboratorio de Espectrometría de Masas con Aceleradores), the first of its kind in Mexico. High-sensitivity characterization of the concentration in a sample of {sup 14}C, as well as {sup 10}Be, {sup 26}Al, {sup 129}I and Pu, is now possible. Since the demand for {sup 14}C dating is by far the most abundant, a data analysis program was developed in the cross-platform programming language Python in order to calculate radiocarbon age. Results from installation and acceptance tests, and the first results of {sup 14}C analyses of reference materials prepared in our own facility, are presented.
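The radiocarbon-age calculation such a program performs follows the standard convention: conventional age in years BP is −8033·ln(F), where F is the measured fraction of modern carbon and 8033 years is the Libby mean life. This is a generic sketch of that relation, not LEMA's actual program:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years, per the conventional radiocarbon scale

def radiocarbon_age(fraction_modern):
    """Conventional 14C age (years BP) from fraction of modern carbon."""
    if fraction_modern <= 0:
        raise ValueError("fraction modern must be positive")
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining half its modern 14C activity dates to one Libby
# half-life: 8033 * ln(2) ≈ 5568 years BP.
age = radiocarbon_age(0.5)
```

A production program would additionally propagate the measurement uncertainty in F and apply background and fractionation corrections before this final step.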

  17. Data-acquisition software for the Holifield Heavy Ion Research Facility

    International Nuclear Information System (INIS)

    Atkins, W.H.

    1983-01-01

A new computer system to perform data acquisition and analysis for the Holifield Heavy Ion Research Facility's Oak Ridge Isochronous Cyclotron (ORIC) and the newer 25-MV tandem accelerator has been under development. This paper presents the current implementation and discusses the design of the data-acquisition/analysis software

  18. Calculation of displacement and helium production at the Clinton P. Anderson Los Alamos Meson Physics Facility (LAMPF) irradiation facility

    International Nuclear Information System (INIS)

    Wechsler, M.S.; Davidson, D.R.; Greenwood, L.R.; Sommer, W.F.

    1984-01-01

Differential and total displacement and helium production rates are calculated for copper irradiated by spallation neutrons and 760-MeV protons at the Clinton P. Anderson Los Alamos Meson Physics Facility (LAMPF). The calculations are performed using the SPECTER and VNMTC computer codes, the latter being specially designed for spallation radiation damage calculations. For comparison, similar SPECTER calculations are also described for irradiation of copper in EBR-II and RTNS-II. The results indicate substantial contributions to the displacement and helium production rates due to neutrons in the high-energy tail (above 20 MeV) of the LAMPF spallation neutron spectrum. Still higher production rates are calculated for irradiations in the direct proton beam. These results will provide useful background information for research to be conducted at a new irradiation facility at LAMPF

  19. Computing in support of experiments at LAMPF

    International Nuclear Information System (INIS)

    Thomas, R.F.; Amann, J.F.; Butler, H.S.

    1976-10-01

    This report documents the discussions and conclusions of a study, conducted in August 1976, of the requirements for computer support of the experimental program in medium-energy physics at the Clinton P. Anderson Meson Physics Facility. 1 figure, 1 table

  20. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2012-01-01

Full Text Available The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customers' demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA performs much better than previous algorithms on the DFLP and offers a reasonable new alternative solution method for it.
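The two-layer idea can be sketched in a few lines, assuming an uncapacitated variant in which the internal allocation layer reduces to assigning each customer to its cheapest open facility (the paper's actual CSA and its internal subalgorithm are more elaborate; instance data, cooling schedule, and parameter values below are illustrative):

```python
import math
import random

def total_cost(open_set, fixed, assign_cost):
    """Fixed costs of open facilities plus each customer's cheapest
    assignment -- the internal (allocation) layer, solved greedily."""
    if not open_set:
        return float("inf")
    cost = sum(fixed[j] for j in open_set)
    for row in assign_cost:
        cost += min(row[j] for j in open_set)
    return cost

def anneal_locations(fixed, assign_cost, iters=5000, t0=100.0,
                     alpha=0.999, seed=0):
    """External layer: simulated annealing over open/close decisions."""
    rng = random.Random(seed)
    n = len(fixed)
    current = {j for j in range(n) if rng.random() < 0.5} or {0}
    cost = total_cost(current, fixed, assign_cost)
    best, best_cost = set(current), cost
    temp = t0
    for _ in range(iters):
        cand = current ^ {rng.randrange(n)}     # toggle one facility
        c = total_cost(cand, fixed, assign_cost)
        # Metropolis rule: always accept improvements, sometimes accept
        # uphill moves (exp only evaluated when c >= cost, so no overflow).
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            current, cost = cand, c
            if cost < best_cost:
                best, best_cost = set(current), cost
        temp *= alpha
    return best, best_cost

# Tiny illustrative instance: 3 candidate sites, 4 customers.
fixed = [5.0, 5.0, 50.0]
assign = [[1, 9, 9], [9, 1, 9], [1, 9, 9], [9, 1, 9]]
best, best_cost = anneal_locations(fixed, assign)
# For these costs the optimum opens sites {0, 1} at total cost 14.
```

The annealing walk over location decisions plus an exact or heuristic inner allocation is the essential structure the abstract describes.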

  1. NNS computing facility manual P-17 Neutron and Nuclear Science

    International Nuclear Information System (INIS)

    Hoeberling, M.; Nelson, R.O.

    1993-11-01

    This document describes basic policies and provides information and examples on using the computing resources provided by P-17, the Neutron and Nuclear Science (NNS) group. Information on user accounts, getting help, network access, electronic mail, disk drives, tape drives, printers, batch processing software, XSYS hints, PC networking hints, and Mac networking hints is given

  2. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
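The quoted 10^12 advantage can be checked directly from the figures in the abstract by comparing operations per second per watt per cm^3 for the two systems:

```python
# Back-of-envelope check of the brain-vs-supercomputer figure of merit,
# using only the numbers quoted in the abstract.
brain_ops, brain_w, brain_cm3 = 1e16, 20.0, 1200.0
super_ops, super_w, super_cm3 = 1e15, 3e6, 1500.0 * 1e6  # 1500 m^3 -> cm^3

brain_density = brain_ops / (brain_w * brain_cm3)   # ops/s/W/cm^3
super_density = super_ops / (super_w * super_cm3)
advantage = brain_density / super_density
# The ratio comes out on the order of 10^12, matching the abstract.
```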

  3. Overview of the Defense Programs Research and Technology Development Program for fiscal year 1993. Appendix II research laboratories and facilities

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-30

    This document contains summaries of the research facilities that support the Defense Programs Research and Technology Development Program for FY 1993. The nine program elements are aggregated into three program clusters as follows: (1) Advanced materials sciences and technologies; chemistry and materials, explosives, special nuclear materials (SNM), and tritium. (2) Design sciences and advanced computation; physics, conceptual design and assessment, and computation and modeling. (3) Advanced manufacturing technologies and capabilities; system engineering science and technology, and electronics, photonics, sensors, and mechanical components. Section I gives a brief summary of 23 major defense program (DP) research and technology facilities and shows how these major facilities are organized by program elements. Section II gives a more detailed breakdown of the over 200 research and technology facilities being used at the Laboratories to support the Defense Programs mission.

  4. Reducing cooling energy consumption in data centres and critical facilities

    Science.gov (United States)

    Cross, Gareth

Given our everyday reliance on computers in all walks of life, from checking train times to paying credit card bills online, the need for computational power is ever increasing. Beyond the ever-increasing performance of home Personal Computers (PCs), this reliance has given rise to a new phenomenon in the last 10 years: the data centre. Data centres contain vast arrays of IT cabinets loaded with servers that perform millions of computational operations every second. It is these data centres that allow us to continue our reliance on the internet and the PC. As more and more data centres become necessary, owing to the increase in computing processing power required for the everyday activities we all take for granted, the energy consumed by these data centres rises. Not only are more data centres being constructed daily, but operators are also looking at ways to squeeze more processing from their existing data centres. This in turn leads to greater heat output and therefore requires more cooling. Cooling data centres requires a sizeable energy input, indeed many megawatts per data centre site. Given the large amounts of money dependent on the successful operation of data centres, in particular those operated by financial institutions, the onus is predominantly on ensuring the data centres operate with no technical glitches rather than in an energy-conscious fashion. This report aims to investigate ways and means of reducing energy consumption within data centres without compromising the technology the data centres are designed to house. As well as discussing the individual merits of the technologies and their implementation, technical calculations will be undertaken where necessary to determine the levels of energy saving, if any, from each proposal. To enable comparison between proposals, any design calculations within this report will be undertaken against a notional data facility.
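A common yardstick for such cooling-energy comparisons is Power Usage Effectiveness (PUE): total facility power divided by IT power. The abstract does not name this metric, so its use here, and the notional load figures, are assumptions for illustration:

```python
def pue(it_kw, cooling_kw, other_kw=0.0):
    """Power Usage Effectiveness: total facility power / IT power.
    A PUE of 1.0 would mean all power goes to the IT load itself."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Assumed notional facility: 1 MW of IT load.
baseline = pue(1000.0, cooling_kw=600.0, other_kw=100.0)   # 1.7
improved = pue(1000.0, cooling_kw=350.0, other_kw=100.0)   # 1.45
saving_kw = (baseline - improved) * 1000.0                 # 250 kW saved
```

Framing each proposal as a change in PUE against the same notional facility makes the per-proposal savings directly comparable, which is the comparison the report sets out to make.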

  5. SALP-PC, a computer program for fault tree analysis on personal computers

    International Nuclear Information System (INIS)

    Contini, S.; Poucet, A.

    1987-01-01

The paper presents the main characteristics of the SALP-PC computer code for fault tree analysis. The program has been developed in Fortran 77 on an Olivetti M24 personal computer (IBM compatible) in order to reach a high degree of portability. It is composed of six processors implementing the different phases of the analysis procedure. This particular structure presents some advantages, such as the restart facility and the possibility to develop an event tree analysis code. The set of allowed logical operators, i.e. AND, OR, NOT, K/N, XOR, INH, together with the possibility to define boundary conditions, makes the SALP-PC code a powerful tool for risk assessment. (orig.)

  6. Software for computing and annotating genomic ranges.

    Science.gov (United States)

    Lawrence, Michael; Huber, Wolfgang; Pagès, Hervé; Aboyoun, Patrick; Carlson, Marc; Gentleman, Robert; Morgan, Martin T; Carey, Vincent J

    2013-01-01

    We describe Bioconductor infrastructure for representing and computing on annotated genomic ranges and integrating genomic data with the statistical computing features of R and its extensions. At the core of the infrastructure are three packages: IRanges, GenomicRanges, and GenomicFeatures. These packages provide scalable data structures for representing annotated ranges on the genome, with special support for transcript structures, read alignments and coverage vectors. Computational facilities include efficient algorithms for overlap and nearest neighbor detection, coverage calculation and other range operations. This infrastructure directly supports more than 80 other Bioconductor packages, including those for sequence analysis, differential expression analysis and visualization.
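The packages named are R/Bioconductor infrastructure. As a language-neutral illustration of the core operation they provide, here is a minimal interval-overlap sweep in Python over closed [start, end] ranges (this is a sketch of the concept, not the IRanges implementation, and the example coordinates are made up):

```python
def find_overlaps(query, subject):
    """Return (i, j) pairs where query[i] and subject[j] overlap.
    Intervals are closed [start, end] pairs of integers."""
    # Visit subjects in order of start so we can stop early per query.
    order = sorted(range(len(subject)), key=lambda j: subject[j][0])
    hits = []
    for i, (qs, qe) in enumerate(query):
        for j in order:
            ss, se = subject[j]
            if ss > qe:
                break            # later subjects start even further right
            if se >= qs:
                hits.append((i, j))
    return hits

# Example in the spirit of read-vs-exon overlap queries:
reads = [(5, 10), (20, 25)]
exons = [(1, 6), (8, 15), (30, 40)]
hits = find_overlaps(reads, exons)
# read 0 overlaps exons 0 and 1; read 1 overlaps none
```

The Bioconductor packages implement this kind of query with interval trees and richer metadata handling, which is what makes them scale to genome-sized annotation sets.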

  7. Regulatory requirements for designing PET-CT facility in India

    International Nuclear Information System (INIS)

    Tandon, Pankaj

    2010-01-01

In India, cyclotron-produced radionuclides are gaining importance in molecular imaging in Nuclear Medicine (NM) departments. The importance of this modality among others is due to the fact that it provides valuable clinical information that was lacking in other available modalities. Presently, every well-established hospital would like to procure a medical cyclotron or positron emission tomography-computed tomography (PET-CT) facility for its NM department. Because cyclotron-produced radionuclides have higher energies than the other radionuclides routinely used for diagnosis, it is essential for the user to know the regulatory requirements and radiation safety precautions for installing this new modality on their premises. The various stages of approval of a PET-CT facility by the Atomic Energy Regulatory Board (AERB) and the important steps that one has to know and follow before planning for this new facility are summarized

  8. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility, with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  9. Reliable Biomass Supply Chain Design under Feedstock Seasonality and Probabilistic Facility Disruptions

    Directory of Open Access Journals (Sweden)

    Zhixue Liu

    2017-11-01

Full Text Available While biomass has been recognized as an important renewable energy source with a range of positive impacts on the economy, environment, and society, the existence of feedstock seasonality and the risk of service disruptions at collection facilities potentially compromise the efficiency and reliability of the energy supply system. In this paper, we consider reliable supply chain design for biomass collection against feedstock seasonality and time-varying disruption risks. We optimize facility location, inventory, biomass quantity, and shipment decisions in a multi-period planning horizon setting. A real-world case in Hubei, China is studied to offer managerial insights. Our computational results show that: (1) the disruption risk significantly affects both the optimal facility locations and the supply chain cost; (2) no matter how the failure probability changes, setting up backup facilities can significantly decrease the total cost; and (3) feedstock seasonality does not affect the locations of the collection facilities, but it does affect their allocations and brings higher inventory cost for the biomass supply chain.
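The intuition behind finding (2) is an expected-cost trade-off: with a backup facility, a disruption reroutes demand at a modest extra cost, while without one it incurs a much larger penalty (e.g., lost demand). This toy model and its numbers are illustrative assumptions, not the paper's formulation:

```python
def expected_cost(primary_cost, backup_cost, penalty, p_fail, has_backup):
    """Expected cost of serving one customer under facility disruption.
    Illustrative two-outcome model: the primary facility fails with
    probability p_fail; a backup reroutes at backup_cost, otherwise a
    penalty (e.g., unmet demand) is incurred."""
    if has_backup:
        return (1 - p_fail) * primary_cost + p_fail * backup_cost
    return (1 - p_fail) * primary_cost + p_fail * penalty

# Even a modest failure probability favors the backup when the penalty
# dwarfs the rerouting cost (assumed numbers):
no_backup = expected_cost(10.0, 15.0, 200.0, p_fail=0.05, has_backup=False)
with_backup = expected_cost(10.0, 15.0, 200.0, p_fail=0.05, has_backup=True)
# no_backup = 19.5, with_backup = 10.25
```

This is why, in the paper's results, backup assignments cut total cost across the whole range of failure probabilities considered.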

  10. Implementation of the Facility Integrated Inventory Computer System (FICS)

    International Nuclear Information System (INIS)

    McEvers, J.A.; Krichinsky, A.M.; Layman, L.R.; Dunnigan, T.H.; Tuft, R.M.; Murray, W.P.

    1980-01-01

This paper describes a computer system which has been developed for nuclear material accountability and implemented in an active radiochemical processing plant involving remote operations. The system possesses the following features: comprehensive, timely records of the location and quantities of special nuclear materials; automatically updated book inventory files on the plant and sub-plant levels of detail; material transfer coordination and cataloging; automatic inventory estimation; sample transaction coordination and cataloging; automatic on-line volume determination, limit checking, and alarming; extensive information retrieval capabilities; and terminal access and application software monitoring and logging

  11. Superconducting dipole magnet for the UTSI MHD facility

    International Nuclear Information System (INIS)

    Wang, S.T.; Niemann, R.C.; Turner, L.R.

    1978-01-01

The Argonne National Laboratory is designing and will build a large superconducting dipole magnet system for use in the Coal Fired Flow MHD Research Facility at the University of Tennessee Space Institute (UTSI). Presented in detail are the conceptual design of the magnet geometry, conductor design, cryostability evaluation, magnetic pressure computation, structural design, cryostat design, the cryogenics system design, and magnet instrumentation and control

  12. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Burr, Tom; Gorensek, M.B.; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Ward, Richard C.

    2012-01-01

Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  13. Facility effluent monitoring plan determinations for the 400 Area facilities

    International Nuclear Information System (INIS)

    Nickels, J.M.

    1991-09-01

This Facility Effluent Monitoring Plan determination resulted from an evaluation conducted for the Westinghouse Hanford Company 400 Area facilities on the Hanford Site. The Facility Effluent Monitoring Plan determinations have been prepared in accordance with A Guide for Preparing Hanford Site Facility Effluent Monitoring Plans. Two major Westinghouse Hanford Company facilities in the 400 Area were evaluated: the Fast Flux Test Facility and the Fuels Manufacturing and Examination Facility. The determinations were prepared by Westinghouse Hanford Company. Of these two facilities, only the Fast Flux Test Facility will require a Facility Effluent Monitoring Plan. 7 refs., 5 figs., 4 tabs

  14. R.I.P. Computer Animal Shelter

    CERN Multimedia

    2012-01-01

Due to a brutal and unjustified attack on our facilities in front of the CERN Computer Centre, we had to close the CERN Animal Shelter on 5/1/2012 after only 9 months of operation (the shelter was inaugurated on 1/4/2011). With deep sadness we look back to the old days when everything was fine. R.I.P.   The Computer Mice shelter after the attack. All surviving mice have been returned to their owners, who have also been advised to "Stop --- Think --- Click" in order to securely browse the Internet and securely read e-mails. Users who have followed this recommendation in the past were less likely to have their computer infected or their computing account compromised. However, still too many users click on malicious web-links and put their computer and account at risk. Thank you all for your support during the last 9 months. The Computer Animal Shelter

  15. Design of integrated safeguards systems for nuclear facilities

    International Nuclear Information System (INIS)

    de Montmollin, J.M.; Walton, R.B.

    1976-01-01

    Safeguards systems that are capable of countering postulated threats to nuclear facilities must be closely integrated with plant layout and processes if they are to be effective and if potentially severe impacts on plant operations are to be averted. A facilities safeguards system suitable for a production plant is described in which the traditional elements of physical protection and periodic material-balance accounting are extended and augmented to provide close control of material flows. Discrete material items are subjected to direct, overriding physical control where appropriate. Materials in closely coupled process streams are protected by on-line NDA and weight measurements, with rapid computation of material balances to provide immediate indication of large-scale diversion. The system provides information and actions at the safeguards/operations interface

  16. Design of integrated safeguards systems for nuclear facilities

    International Nuclear Information System (INIS)

    de Montmollin, J.M.; Walton, R.B.

    1978-06-01

    Safeguards systems that are capable of countering postulated threats to nuclear facilities must be closely integrated with plant layout and processes if they are to be effective and if potentially severe impacts on plant operations are to be averted. This paper describes a facilities safeguards system suitable for a production plant, in which the traditional elements of physical protection and periodic material-balance accounting are extended and augmented to provide close control of material flows. Discrete material items are subjected to direct, overriding physical control where appropriate. Materials in closely coupled process streams are protected by on-line NDA and weight measurements, with rapid computation of material balances to provide immediate indication of large-scale diversion. The system provides information and actions at the safeguards/operations interface
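The rapid material-balance computation these two records describe can be sketched as a MUF (material unaccounted for) check. The quantities and the alarm threshold below are hypothetical, not values from any actual facility.

```python
# Illustrative material-balance sketch: MUF = beginning inventory
# + receipts - shipments - ending inventory. All quantities and the
# alarm threshold are invented for illustration.

def material_balance_muf(begin_inv, receipts, shipments, end_inv):
    """Return material unaccounted for, in kilograms."""
    return begin_inv + sum(receipts) - sum(shipments) - end_inv

def diversion_alarm(muf, threshold_kg=0.5):
    """Flag a balance whose MUF magnitude exceeds the (assumed) threshold."""
    return abs(muf) > threshold_kg

muf = material_balance_muf(
    begin_inv=120.0,
    receipts=[10.0, 5.0],
    shipments=[8.0],
    end_inv=126.2,   # closing inventory from NDA and weight measurements
)
alarm = diversion_alarm(muf)
```

In practice the threshold would be set from measurement uncertainty rather than a fixed constant, and the balance would be closed over short, frequent periods to give the "immediate indication" the abstract mentions.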

  17. A nuclear facility Security Analyzer written in Prolog

    International Nuclear Information System (INIS)

    Zimmerman, B.D.

    1987-01-01

    The Security Analyzer project was undertaken to use the Prolog artificial intelligence programming language and Entity-Relationship database construction techniques to produce an intelligent database computer program capable of analyzing the effectiveness of a nuclear facility's security systems. The Security Analyzer program can search through a facility to find all possible surreptitious entry paths that meet various user-selected time and detection probability criteria. The program can also respond to user-formulated queries concerning the database information. The intelligent database approach allows the program to perform a more comprehensive path search than other programs that only find a single optimal path. The program also is more flexible in that the database, once constructed, can be interrogated and used for purposes independent of the searching function
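As a rough illustration of the path search this record describes (written here in Python rather than Prolog), the sketch below enumerates all entry paths through a toy facility graph that meet user-selected time and detection-probability criteria. The graph, traversal times and sensor detection probabilities are all invented.

```python
# Toy facility graph: edges are (from, to, traversal_time_s, detection_prob).
# All nodes, times and probabilities are invented for illustration.
EDGES = [
    ("outside", "fence",  30, 0.6),
    ("fence",   "yard",   60, 0.3),
    ("yard",    "door_a", 20, 0.8),
    ("yard",    "door_b", 45, 0.2),
    ("door_a",  "vault",  15, 0.9),
    ("door_b",  "vault",  90, 0.5),
]

def find_paths(src, dst, max_time, max_detect_prob):
    """Depth-first enumeration of all simple paths meeting both criteria."""
    adj = {}
    for a, b, t, p in EDGES:
        adj.setdefault(a, []).append((b, t, p))
    results = []

    def dfs(node, path, time_used, p_not_detected):
        if time_used > max_time:
            return
        if node == dst:
            if 1.0 - p_not_detected <= max_detect_prob:
                results.append((path, time_used, 1.0 - p_not_detected))
            return
        for nxt, t, p in adj.get(node, []):
            if nxt not in path:  # simple paths only
                dfs(nxt, path + [nxt], time_used + t,
                    p_not_detected * (1.0 - p))

    dfs(src, [src], 0, 1.0)
    return results

paths = find_paths("outside", "vault", max_time=300, max_detect_prob=0.95)
```

Unlike a single-optimal-path search, this enumerates every qualifying path, which mirrors the "more comprehensive path search" the abstract contrasts with other programs.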

  18. A nuclear facility Security Analyzer written in PROLOG

    International Nuclear Information System (INIS)

    Zimmerman, B.D.

    1987-08-01

    The Security Analyzer project was undertaken to use the Prolog "artificial intelligence" programming language and Entity-Relationship database construction techniques to produce an intelligent database computer program capable of analyzing the effectiveness of a nuclear facility's security systems. The Security Analyzer program can search through a facility to find all possible surreptitious entry paths that meet various user-selected time and detection probability criteria. The program can also respond to user-formulated queries concerning the database information. The intelligent database approach allows the program to perform a more comprehensive path search than other programs that only find a single "optimal" path. The program also is more flexible in that the database, once constructed, can be interrogated and used for purposes independent of the searching function

  19. E-SCAPE: A scale facility for liquid-metal, pool-type reactor thermal hydraulic investigations

    Energy Technology Data Exchange (ETDEWEB)

    Van Tichelen, Katrien, E-mail: kvtichel@sckcen.be [SCK-CEN, Boeretang 200, 2400 Mol (Belgium); Mirelli, Fabio, E-mail: fmirelli@sckcen.be [SCK-CEN, Boeretang 200, 2400 Mol (Belgium); Greco, Matteo, E-mail: mgreco@sckcen.be [SCK-CEN, Boeretang 200, 2400 Mol (Belgium); Viviani, Giorgia, E-mail: giorgiaviviani@gmail.com [University of Pisa, Lungarno Pacinotti 43, 56126 Pisa (Italy)

    2015-08-15

    Highlights: • The E-SCAPE facility is a thermal hydraulic scale model of the MYRRHA fast reactor. • The focus is on mixing and stratification in liquid-metal pool-type reactors. • Forced convection, natural convection and the transition are investigated. • Extensive instrumentation allows validation of computational models. • System thermal hydraulic and CFD models have been used for facility design. - Abstract: MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) is a flexible fast-spectrum research reactor under design at SCK·CEN. MYRRHA is a pool-type reactor with lead bismuth eutectic (LBE) as primary coolant. The proper understanding of the thermal hydraulic phenomena occurring in the reactor pool is an important issue in the design and licensing of the MYRRHA system and, by extension, liquid-metal cooled reactors. Model experiments are necessary for understanding the physics, for validating experimental tools and for qualifying the design for licensing. The E-SCAPE (European SCAled Pool Experiment) facility at SCK·CEN is a thermal hydraulic 1/6-scale model of the MYRRHA reactor, with an electrical core simulator, cooled by LBE. It provides experimental feedback to the designers on the forced and natural circulation flow patterns. Moreover, it enables validation of the computational methods for their use with LBE. The paper will elaborate on the design of the E-SCAPE facility and its main parameters. The experimental matrix and the pre-test analysis using computational fluid dynamics (CFD) and system thermal hydraulics codes will also be described.

  20. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    International Nuclear Information System (INIS)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System, WBS 1.5.1, which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in ICCS (WBS 1.5), which is the document directly above

  1. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; Buncic, P; De, K; Oleynik, D; Petrosyan, A; Jha, S; Mount, R; Porter, R J; Read, K F; Wells, J C; Vaniachine, A

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center 'Kurchatov Institute' together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the
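As a loose sketch of the brokering a workload management system of this kind performs (not PanDA's actual algorithm), the toy code below greedily matches queued jobs to heterogeneous sites by free cores. The site names, capacities and job sizes are invented.

```python
# Hypothetical greedy broker: assign each job to the site with the most
# free cores that can still hold it. Names and numbers are invented; a
# real WMS also weighs data locality, queue depth, priorities, etc.

def broker(jobs, sites):
    """Return ({job_name: site_name}, [unscheduled job names])."""
    free = dict(sites)                       # site -> free cores
    placement, unscheduled = {}, []
    for name, cores_needed in jobs:
        best = max(free, key=free.get)       # least-loaded site
        if free[best] >= cores_needed:
            free[best] -= cores_needed
            placement[name] = best
        else:
            unscheduled.append(name)         # no site can hold this job
    return placement, unscheduled

placement, unscheduled = broker(
    jobs=[("simulate_1", 64), ("reco_1", 128), ("analysis_1", 512)],
    sites={"grid_site": 256, "cloud_site": 96, "hpc_site": 300},
)
```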

  2. The TOPFLOW multi-purpose thermohydraulic test facility

    International Nuclear Information System (INIS)

    Schaffrath, Andreas; Kruessenberg, A.-K.; Weiss, F.-P.; Prasser, H.-M.

    2002-01-01

    The TOPFLOW (Transient Two Phase Flow Test Facility) multi-purpose thermohydraulic test facility is being built for studies of steady-state and transient flow phenomena in two-phase flows, and for the development and validation of the models contained in CFD (Computational Fluid Dynamics) codes. The facility is under construction at the Institute for Safety Research of the Rossendorf Research Center (FZR). It will be operated together with the Dresden Technical University and the Zittau/Goerlitz School for Technology, Economics and Social Studies within the framework of the Nuclear Technology Competence Preservation Program. TOPFLOW, with its test sections and its flexible concept, is available as an attractive facility also to users from all European countries. Experiments are planned in these fields, among others: - Transient two-phase flows in vertical and horizontal pipes and pipes of any inclination as well as in geometries typical of nuclear reactors (annulus, hot leg). - Boiling in large vessels and water pools (measurements of steam generation, 3D steam content distribution, turbulence, temperature stratification). - Test of passive components and safety systems. - Condensation in horizontal pipes in the absence and presence of non-condensable gases. The construction phase of TOPFLOW has been completed more or less on schedule. Experiments can be started after a commissioning phase in the 3rd quarter of 2002. (orig.)

  3. Distributed computing for macromolecular crystallography.

    Science.gov (United States)

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  4. Computer simulation as an operational and training aid

    International Nuclear Information System (INIS)

    Lee, D.J.; Tottman-Trayner, E.

    1995-01-01

    The paper describes how the rapid development of desktop computing power, the associated fall in prices, and the advancement of computer graphics technology driven by the entertainment industry has enabled the nuclear industry to achieve improvements in operation and training through the use of computer simulation. Applications are focused on the fuel handling operations at Torness Power Station where visualization through computer modelling is being used to enhance operator awareness and to assist in a number of operational scenarios. It is concluded that there are significant benefits to be gained from the introduction of the facility at Torness as well as other locations. (author)

  5. Data acquisition for the Sodium Loop Safety Facility experiment P4

    International Nuclear Information System (INIS)

    Baldwin, R.D.; Kraimer, M.R.; Wilson, R.E.; Gilbert, D.M.

    1982-01-01

    Data acquisition for the Sodium Loop Safety Facility (SLSF) experiment P4 used three computers for the continuous collection of data and two computers for the routing and displaying of data. Four of these computer systems were located at the Engineering Test Reactor (ETR) site, in Idaho, to access sensor signals from the analog-to-digital interfaces. The fifth system was located at Argonne National Laboratory (ANL), in Illinois, and was used mainly for display and storage of data. All display computers were connected together using the DECNET software package. The transmission of data was managed over a dedicated phone line using 9600 baud long distance modems. A stand-alone high speed data acquisition system was also used to record data during planned reactor transients

  6. Investigation of development and management of treatment planning systems for BNCT at foreign facilities

    International Nuclear Information System (INIS)

    2001-03-01

    A new computational dosimetry system for BNCT, JCDS, is currently being developed by JAERI in order to carry out BNCT with an epithermal neutron beam. The development and management status of the computational dosimetry systems developed and used at BNCT facilities in foreign countries was investigated in order to accurately grasp the functions necessary for preparation of the treatment planning and its future subjects. At present, 'SERA', which was developed by the Idaho National Engineering and Environmental Laboratory (INEEL), is used in many BNCT facilities. The following are necessary for the development and management of a treatment planning system. (1) Confirmation of the reliability of system performance by verification, such as comparison of calculated values with actual experimentally measured values. (2) Confirmation systems, such as periodic maintenance, for retention of system quality. (3) An improvement process that continually weighs relative merits and demerits against other computational dosimetry systems. (4) The development of a system integrated with patient setting. (author)

  7. Accident risks in nuclear facilities. (Latest citations from the NTIS Bibliographic database). Published Search

    International Nuclear Information System (INIS)

    1994-02-01

    The bibliography contains citations concerning risk analysis and hazards evaluation of the design, construction, and operation of nuclear facilities. The citations also explore the risk and hazards of transporting radioactive materials to and from these facilities. Radiological calculations for environmental effects of nuclear accidents and the use of computer models in risk analysis are also included. (Contains 250 citations and includes a subject term index and title list.)

  8. Analysis of Department of Defense Organic Depot Maintenance Capacity Management and Facility Utilization Factors

    Science.gov (United States)

    1991-09-01

    System (CAPMS) in lieu of using DODI 4151.15H. Facility utilization rate computation is not explicitly defined; it is merely identified as a ratio of...front of a bottleneck buffers the critical resource and protects against disruption of the system. This approach optimizes facility utilization by...run titled BUFFERED BASELINE. Three different levels of inventory were used to evaluate the effect of increasing the inventory level on critical

  9. Construction of the two-phase critical flow test facility

    International Nuclear Information System (INIS)

    Chung, C. H.; Chang, S. K.; Park, H. S.; Min, K. H.; Choi, N. H.; Kim, C. H.; Lee, S. H.; Kim, H. C.; Chang, M. H.

    2002-03-01

    The two-phase critical flow test loop facility has been constructed in the KAERI engineering laboratory for the simulation of a small break loss of coolant accident of SMART with entrained non-condensible gas. The test facility can operate at 12 MPa of pressure and 0 to 60°C of sub-cooling with 0.5 kg/s of non-condensible gas injection into the break flow, and can simulate pipe breaks of up to 20 mm. Main components of the test facility were arranged such that the pressure vessel containing coolant, a test section simulating the break, and a suppression tank, inter-connected with piping, were installed vertically. As the quick opening valve opens, high pressure/temperature coolant flows through the test section, forming critical two-phase flow into the suppression tank. The pressure vessel was connected to two high pressure N2 gas tanks through a control valve to control pressure in the pressure vessel. Another N2 gas tank was also connected to the test section for the non-condensible gas injection. The test facility is operated from computers supported by PLC systems installed in the control room, and test data such as temperature, break flow rate, pressure drop across the test section, and gas injection flow rate are gathered in the data acquisition system for further analysis. This test facility is classified under law as a safety-related high pressure gas facility. The loop design documentation was therefore reviewed, and the test loop inspected during construction, by the regulatory body, which issued permission for the operation of the test facility

  10. Thermal operations conditions in a national waste terminal storage facility

    International Nuclear Information System (INIS)

    1976-09-01

    Some of the major technical questions associated with the burial of radioactive high-level wastes in geologic formations are related to the thermal environments generated by the waste and the impact of this dissipated heat on the surrounding environment. The design of a high level waste storage facility must be such that the temperature variations that occur do not adversely affect operating personnel and equipment. The objective of this investigation was to assist OWI by determining the thermal environment that would be experienced by personnel and equipment in a waste storage facility in salt. Particular emphasis was placed on determining the maximum floor and air temperatures with and without ventilation in the first 30 years after waste emplacement. The assumed facility design differs somewhat from those previously analyzed and reported, but many of the previous parametric surveys are useful for comparison. In this investigation a number of 2-dimensional and 3-dimensional simulations of the heat flow in a repository have been performed on the HEATING5 and TRUMP heat transfer codes. The representative repository constructs used in the simulations are described, as well as the computational models and computer codes. Results of the simulations are presented and discussed. Comparisons are made between the recent results and those from previous analyses. Finally, a summary of study limitations, comparisons, and conclusions is given
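The repository simulations above were run with the HEATING5 and TRUMP codes; as a minimal stand-in, the sketch below solves 1-D transient conduction with an exponentially decaying heat source using an explicit finite-difference scheme. The grid, diffusivity and source strength are assumed values for illustration, not repository parameters.

```python
# 1-D explicit (FTCS) conduction sketch with a decaying heat source at
# the mid-plane. All parameters are invented stand-ins.
import math

def conduct_1d(n=20, dx=1.0, dt=0.1, steps=200, alpha=1.0, q0=5.0, lam=0.01):
    """FTCS scheme: T_i^(k+1) = T_i + r*(T_(i+1) - 2*T_i + T_(i-1)) + source."""
    r = alpha * dt / dx**2
    assert r <= 0.5                  # explicit-scheme stability limit
    T = [0.0] * n                    # temperature rise; ends held at 0
    for k in range(steps):
        src = q0 * math.exp(-lam * k * dt) * dt   # decaying waste heat
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
            if i == n // 2:          # heat deposited at the waste mid-plane
                Tn[i] += src
        T = Tn
    return T

profile = conduct_1d()
```

The production codes add 2-D/3-D geometry, temperature-dependent salt properties and ventilation boundary conditions, but the marching scheme above is the basic building block.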

  11. 6th July 2010 - United Kingdom Science and Technology Facilities Council W. Whitehorn signing the guest book with Head of International Relations F. Pauss, visiting the Computing Centre with Information Technology Department Deputy Head D. Foster, the LHC superconducting magnet test hall with Technology Department P. Strubin, the CERN Control Centre with Operation Group Leader M. Lamont and the CLIC/CTF3 facility with Project Leader J.-P. Delahaye.

    CERN Multimedia

    Teams : M. Brice, JC Gadmer

    2010-01-01

    6th July 2010 - United Kingdom Science and Technology Facilities Council W. Whitehorn signing the guest book with Head of International Relations F. Pauss, visiting the Computing Centre with Information Technology Department Deputy Head D. Foster, the LHC superconducting magnet test hall with Technology Department P. Strubin, the CERN Control Centre with Operation Group Leader M. Lamont and the CLIC/CTF3 facility with Project Leader J.-P. Delahaye.

  12. Surgical resource utilization in urban terrorist bombing: a computer simulation.

    Science.gov (United States)

    Hirshberg, A; Stein, M; Walden, R

    1999-09-01

    The objective of this study was to analyze the utilization of surgical staff and facilities during an urban terrorist bombing incident. A discrete-event computer model of the emergency room and related hospital facilities was constructed and implemented, based on cumulated data from 12 urban terrorist bombing incidents in Israel. The simulation predicts that the admitting capacity of the hospital depends primarily on the number of available surgeons and defines an optimal staff profile for surgeons, residents, and trauma nurses. The major bottlenecks in the flow of critical casualties are the shock rooms and the computed tomographic scanner but not the operating rooms. The simulation also defines the number of reinforcement staff needed to treat noncritical casualties and shows that radiology is the major obstacle to the flow of these patients. Computer simulation is an important new tool for the optimization of surgical service elements for a multiple-casualty situation.
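The discrete-event model this abstract describes can be illustrated in miniature: casualties queue for a limited number of shock-room bays, modelled as parallel servers. The arrival pattern, service time and bay count below are invented for illustration, not the cumulated Israeli incident data.

```python
# Miniature discrete-event sketch: casualties queue for a fixed number
# of parallel "bays". Arrivals, service time and bay count are invented.
import heapq

def simulate(arrivals, service_time, n_servers):
    """Return the time each casualty finishes treatment, given sorted
    arrival times, a fixed service time, and n_servers parallel bays."""
    free_at = [0.0] * n_servers          # when each bay next becomes free
    heapq.heapify(free_at)
    finish = []
    for t in arrivals:
        bay_free = heapq.heappop(free_at)
        start = max(t, bay_free)         # wait if every bay is busy
        done = start + service_time
        finish.append(done)
        heapq.heappush(free_at, done)
    return finish

# 6 casualties arriving 5 min apart, 30-min resuscitations, 2 shock bays:
finish_times = simulate(arrivals=[0, 5, 10, 15, 20, 25],
                        service_time=30, n_servers=2)
```

Varying `n_servers` in a model like this is how a simulation exposes bottlenecks: here the third casualty onward waits for a bay, the queueing behaviour the study reports for the shock rooms and the CT scanner.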

  13. Automatically controlled facilities for irradiation of silicon crystals at the Rossendorf Research Reactor

    International Nuclear Information System (INIS)

    Ross, R.

    1988-01-01

    This report describes the facilities for neutron transmutation doping of silicon in the GDR. The irradiation of silicon single crystals began at Rossendorf in 1978 with simple equipment in which only a small amount of silicon could be irradiated. The rapidly increasing demand for NTD silicon made it necessary to design and construct new and better facilities. The new facilities are capable of irradiating silicon from 2'' to 3'' in diameter. The irradiation process takes place automatically with the assistance of a computer. The material produced has an axial homogeneity of ± 7%. Irradiation rigs, techniques, irradiation control and quality control are discussed. (author). 4 figs

  14. Radiological assessments for the National Ignition Facility

    International Nuclear Information System (INIS)

    Hong, Kou-John; Lazaro, M.A.

    1996-01-01

    The potential radiological impacts of the National Ignition Facility (NIF), a proposed facility for fusion ignition and high energy density experiments, were assessed for five candidate sites to assist in site selection. The GENII computer program was used to model releases of radionuclides during normal NIF operations and a postulated accident and to calculate radiation doses to the public. Health risks were estimated by converting the estimated doses into health effects using a standard cancer fatality risk factor. The greatest calculated radiation dose was less than one thousandth of a percent of the dose received from natural background radiation; no cancer fatalities would be expected to occur in the public as the result of normal operations. The highest dose conservatively estimated to result from a postulated accident could lead to a one-in-one-million risk of cancer
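The dose-to-risk conversion mentioned above can be shown with the commonly used nominal fatal-cancer risk factor of roughly 5e-2 per sievert (an ICRP-style value); the per-person dose and population below are invented for illustration and are not the NIF assessment's figures.

```python
# Collective-dose risk arithmetic: expected fatalities =
# per-person dose * population * risk factor. The dose and population
# are invented; the risk factor is the rounded nominal ICRP-style value.

RISK_PER_SV = 5.0e-2        # nominal fatal cancer risk per sievert

def expected_fatalities(per_person_dose_sv, population):
    """Collective-dose estimate of expected fatal cancers."""
    collective_dose_person_sv = per_person_dose_sv * population
    return collective_dose_person_sv * RISK_PER_SV

# e.g. 1 microsievert to each of 100,000 people:
deaths = expected_fatalities(1.0e-6, 100_000)
```

Equivalently, a per-person dose of 2e-5 Sv gives an individual lifetime risk of about one in a million, the order of magnitude quoted for the postulated accident.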

  15. Nuclear Facility Isotopic Content (NFIC) Waste Management System to provide input for safety envelope definition

    International Nuclear Information System (INIS)

    Genser, J.R.

    1992-01-01

    The Westinghouse Savannah River Company (WSRC) is aggressively pursuing environmental remediation and radioactive waste management activities at the US Department of Energy's Savannah River Site (SRS) to ensure compliance with today's challenging governmental laws and regulatory requirements. This report discusses a computer-based Nuclear Facility Isotopic Content (NFIC) Waste Management System developed to provide input for the safety envelope definition and assessment of site-wide facilities. Information was formulated describing the SRS "Nuclear Facilities" and their respective bounding inventories of nuclear materials and radioactive waste using the NFIC Waste Management System

  16. Computer programs at SRL to evaluate environmental effects SRP operations and postulated accidental releases

    International Nuclear Information System (INIS)

    Cooper, R.E.

    1975-09-01

    Savannah River Plant operations unavoidably result in the release of some chemical and radioactive effluents to the environs. The most environmentally significant releases are gaseous effluents to the atmosphere; computer codes dealing with these atmospheric releases are discussed in this report. There is a wide variety of effluents, both chemical and radioactive, to be considered, and each must be correlated with meteorological dispersion data as a function of time to estimate the environmental effects. In addition, large inventories of toxic and radioactive materials in some facilities represent a potential for accidental releases. Accidents are postulated for these facilities, and the environmental effects of resulting releases are again evaluated by correlating with meteorological dispersion data. In accordance with AEC Regulatory Guide 23, a 2-year meteorological data base is used in performing all analyses. Due to the diversity of possible releases and the large meteorological data base, the environmental analyses are necessarily performed with the aid of a large computer facility. Several computer programs have been written to facilitate these analyses according to the type of analysis desired. The computer programs described in this report are basically of three categories: probability distributions of estimated concentrations or doses as a function of distance from a point of origin, estimates of average concentrations or doses over a specified time period such as annual averages, and some miscellaneous programs in support of the first two categories to optimize the use of the computing facility. A complete documentation of each program is included with a program listing and sample input-output
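Atmospheric dispersion estimates of the kind these programs make are commonly based on the Gaussian plume equation; the sketch below evaluates a ground-level release with total ground reflection. The power-law dispersion coefficients are crude assumed stand-ins, not the SRL codes' actual meteorological parameterization.

```python
# Ground-level Gaussian plume sketch. The sigma power laws below are
# invented stand-ins for stability-class dispersion curves.
import math

def plume_concentration(Q, u, x, y, sigma_y0=0.08, sigma_z0=0.06):
    """Ground-level concentration (g/m^3) at downwind distance x (m) and
    crosswind offset y (m), for release rate Q (g/s), wind speed u (m/s),
    a ground-level source, and total reflection off the ground."""
    sy = sigma_y0 * x ** 0.9            # assumed crosswind spread
    sz = sigma_z0 * x ** 0.85           # assumed vertical spread
    return (Q / (math.pi * u * sy * sz)) * math.exp(-y**2 / (2 * sy**2))

# 10 g/s release in a 3 m/s wind, sampled 1 km downwind:
c_centre = plume_concentration(Q=10.0, u=3.0, x=1000.0, y=0.0)
c_offset = plume_concentration(Q=10.0, u=3.0, x=1000.0, y=100.0)
```

The probability-distribution programs described above essentially evaluate an expression like this across a two-year record of wind speeds, directions and stability classes, then tabulate the resulting concentrations or doses by distance.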

  17. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  18. UGCT: New X-ray radiography and tomography facility

    International Nuclear Information System (INIS)

    Masschaele, B.C.; Cnudde, V.; Dierick, M.; Jacobs, P.; Hoorebeke, L. van; Vlassenbroeck, J.

    2007-01-01

    The UGCT (University Gent Computer Tomography) facility, a cooperation between the Radiation Physics research group and the Sedimentary Geology and Engineering Geology research group, is a new CT facility providing a large range of scanning possibilities. Formerly a Skyscan 1072 was used to perform X-ray micro-CT scans at the UGCT facility and, although this is a very powerful instrument, there was a need for higher resolution and more flexibility. Therefore, the UGCT facility started the construction of a multidisciplinary micro-CT scanner inside a shielded room with maximum flexibility of the set-up. The X-ray tube of this high-resolution CT scanner is a state-of-the-art open-type device with a dual head: one head for high power micro-CT and one for sub-micro- or so-called nano-CT. An important advantage of this scanner is that different detectors can be used to optimize the scanning conditions of the objects under investigation. The entire set-up is built on a large optical table to obtain the highest possible stability. Due to the flexible set-up and the powerful CT reconstruction software 'Octopus', it is possible to obtain the highest quality and the best signal-to-noise ratio of the reconstructed images for each type of sample

  19. UGCT: New X-ray radiography and tomography facility

    Energy Technology Data Exchange (ETDEWEB)

    Masschaele, B.C. [Department of Subatomic and Radiation Physics, Ghent University, Proeftuinstraat 86, B-9000 Gent (Belgium)], E-mail: bert.masschaele@ugent.be; Cnudde, V. [Department of Geology and Soil Science, Ghent University, Krijgslaan 281, B-9000 Gent (Belgium); Dierick, M. [Department of Subatomic and Radiation Physics, Ghent University, Proeftuinstraat 86, B-9000 Gent (Belgium); Jacobs, P. [Department of Geology and Soil Science, Ghent University, Krijgslaan 281, B-9000 Gent (Belgium); Hoorebeke, L. van; Vlassenbroeck, J. [Department of Subatomic and Radiation Physics, Ghent University, Proeftuinstraat 86, B-9000 Gent (Belgium)

    2007-09-21

    The UGCT (University Gent Computer Tomography) facility, a cooperation between the Radiation Physics research group and the Sedimentary Geology and Engineering Geology research group, is a new CT facility providing a large range of scanning possibilities. Formerly a Skyscan 1072 was used to perform X-ray micro-CT scans at the UGCT facility and, although this is a very powerful instrument, there was a need for higher resolution and more flexibility. Therefore, the UGCT facility started the construction of a multidisciplinary micro-CT scanner inside a shielded room with maximum flexibility of the set-up. The X-ray tube of this high-resolution CT scanner is a state-of-the-art open-type device with a dual head: one head for high power micro-CT and one for sub-micro- or so-called nano-CT. An important advantage of this scanner is that different detectors can be used to optimize the scanning conditions of the objects under investigation. The entire set-up is built on a large optical table to obtain the highest possible stability. Due to the flexible set-up and the powerful CT reconstruction software 'Octopus', it is possible to obtain the highest quality and the best signal-to-noise ratio of the reconstructed images for each type of sample.

  20. Decontamination and decommissioning project for the nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Park, J. H.; Paik, S. T.; Park, S. W. (and others)

    2007-02-15

    The final goal of this project is to complete the decommissioning of the Korean Research Reactor units 1 and 2 (KRR-1 and 2) and the uranium conversion plant safely and successfully. The goal of this project in 2006 was to complete the decontamination of the inside of the KRR-2 reactor hall, which will operate as a temporary storage for radioactive waste until the construction and operation of the national repository site. The decommissioning work on the KRR-1 and auxiliary facilities is also in progress. As the completion of the decommissioning project is near at hand, a computer information system was developed to systematically control and preserve the technical experience and decommissioning data for future reuse. The nuclear facility decommissioning, which is the first such undertaking in Korea, is approaching its final stages. We completed the decommissioning of all the bio-shielding concrete for KRR-2 in 2005 and carried out the decontamination and waste material grouping of the roof, wall and bottom of the KRR-2 reactor hall. Decommissioning of a nuclear facility demands advanced technology, remote-control equipment and radioactivity analysis, so the equipment and experience developed will be applied to the decommissioning of new nuclear facilities in the future.