WorldWideScience

Sample records for facility integrated computer

  1. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence and by associated objectives to improve plant operability, increase access to plant information, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and ongoing replacement projects, with emphasis upon the application of integrated distributed plant process computer systems. These recent projects show how various distributed-system configurations can address needs for flexibility, open architecture, and integration of advances in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer with plant process instrumentation and control are evident from the variations in design features.

  2. Integration of small computers in the low budget facility

    International Nuclear Information System (INIS)

    Miller, G.E.; Crofoot, T.A.

    1988-01-01

    Inexpensive computers (PCs) are well within the reach of low-budget reactor facilities. Many uses can be envisaged that would both improve the capabilities of existing instrumentation and assist operators and staff with certain routine tasks. Both of these opportunities are important for survival at facilities with severe budget and staffing limitations. (author)

  3. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    Energy Technology Data Exchange (ETDEWEB)

    Zynovyev, Mykhaylo

    2012-06-29

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving an experiment-specific software stack in a shared environment is presented, along with its effects on user workload performance. The thesis concludes with a proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of cloud computing technology and the 'Infrastructure as Code' concept. Scientific software applications can be run efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.
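
    The I/O characterization described above can be illustrated with a short sketch. The following Python fragment is a minimal benchmark under assumed parameters (the file path and block size are hypothetical, and this is not the thesis's actual benchmark suite); it times sequential versus randomized block reads, the latter emulating the scattered access of analysis jobs that makes hard disks seek-bound.

        import os
        import random
        import time

        def read_throughput(path, block_size=1 << 20, randomized=False):
            """Read `path` in block_size chunks; return throughput in MB/s."""
            size = os.path.getsize(path)
            offsets = list(range(0, max(size - block_size, 0) + 1, block_size))
            if randomized:
                random.shuffle(offsets)  # emulate scattered, analysis-style access
            start = time.perf_counter()
            with open(path, "rb") as f:
                for off in offsets:
                    f.seek(off)
                    f.read(block_size)
            elapsed = time.perf_counter() - start
            return len(offsets) * block_size / elapsed / 1e6

        # Example (assumes a large test file at this hypothetical path):
        # print(read_throughput("/data/testfile.root", randomized=True))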

  4. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    International Nuclear Information System (INIS)

    Zynovyev, Mykhaylo

    2012-01-01

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving an experiment-specific software stack in a shared environment is presented, along with its effects on user workload performance. The thesis concludes with a proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of cloud computing technology and the 'Infrastructure as Code' concept. Scientific software applications can be run efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  5. Software quality assurance plan for the National Ignition Facility integrated computer control system

    Energy Technology Data Exchange (ETDEWEB)

    Woodruff, J.

    1996-11-01

    Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality-affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.

  6. Software quality assurance plan for the National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Woodruff, J.

    1996-11-01

    Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality-affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.

  7. Energy Systems Integration Facility Videos | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Videos from NREL's Energy Systems Integration Facility, including "NREL + SolarCity: Maximizing Solar Power on Electrical Grids," "Redefining What's Possible for Renewable Energy: Grid Integration," "Robot-Powered Reliability Testing at NREL's ESIF," and "Microgrid..."

  8. Computer security at ukrainian nuclear facilities: interface between nuclear safety and security

    International Nuclear Information System (INIS)

    Chumak, D.; Klevtsov, O.

    2015-01-01

    Active introduction of information technology and computer-based instrumentation and control systems (I and C systems) in the nuclear field leads to greater efficiency and better management of technological processes at nuclear facilities. However, this trend brings a number of challenges related to cyber-attacks on these elements, which can violate computer security as well as the nuclear safety and security of a facility. This paper considers regulatory support for computer security at nuclear facilities in Ukraine. The issue of computer and information security is considered in the context of physical protection, because it is an integral component of it. The paper focuses on the computer security of I and C systems important to nuclear safety. These systems are potentially vulnerable to cyber threats and, in case of cyber-attacks, the potential negative impact on normal operational processes can lead to a breach of nuclear facility security. Because computer security of I and C systems interacts with nuclear safety, the paper also presents an example of an integrated approach to the requirements of nuclear safety and security.

  9. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), that is tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  10. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grow, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from the conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  11. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.
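
    As a toy illustration of the inverse problem mentioned above, the sketch below scores a few hypothesized facility operating modes against noisy observations using a Gaussian likelihood and a flat prior. The modes, sensor signatures, and numbers are all invented for illustration; they are not drawn from the paper.

        import math

        # Hypothetical modes and the mean sensor signature each would produce.
        MODES = {
            "normal_operation":    {"power_mw": 3000.0, "truck_trips_per_day": 2.0},
            "shutdown":            {"power_mw": 0.0,    "truck_trips_per_day": 5.0},
            "undeclared_campaign": {"power_mw": 3000.0, "truck_trips_per_day": 8.0},
        }

        def gaussian_loglik(observed, expected, sigma):
            return -0.5 * ((observed - expected) / sigma) ** 2 - math.log(sigma)

        def infer_mode(observations, sigmas):
            """Posterior over modes given observations, assuming a flat prior."""
            logpost = {
                mode: sum(gaussian_loglik(observations[k], sig[k], sigmas[k])
                          for k in observations)
                for mode, sig in MODES.items()
            }
            peak = max(logpost.values())
            weights = {k: math.exp(v - peak) for k, v in logpost.items()}
            norm = sum(weights.values())
            return {k: w / norm for k, w in weights.items()}

        print(infer_mode({"power_mw": 2950.0, "truck_trips_per_day": 7.5},
                         {"power_mw": 100.0, "truck_trips_per_day": 1.0}))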

  12. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Garcia, Humberto; Burr, Tom; Coles, Garill A.; Edmunds, Thomas A.; Garrett, Alfred; Gorensek, Maximilian; Hamm, Luther; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Tzanos, Constantine P.; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  13. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  14. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for the validation of realistic thermal-hydraulic system computer codes have been established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs); a separate activity covered Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWERs). Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria used to achieve these objectives are outlined, and some specific examples from the ITF matrices are provided in this paper. The matrices will be a guide for code validation, a basis for comparisons of code predictions performed with different system codes, and a contribution to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information generated around the world over recent years, so that it is more accessible to present and future workers in the field than would otherwise be the case.
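
    In software terms, such a matrix is a cross-reference from physical phenomena to facilities and tests. A minimal sketch of that data structure follows; the phenomena, facility names, and test identifiers are illustrative placeholders rather than entries of the CSNI matrices.

        # Rows are thermal-hydraulic phenomena, columns are integral test
        # facilities, and entries are lists of test identifiers (illustrative).
        matrix = {
            "natural_circulation": {"LOBI": ["A1-83"], "BETHSY": ["4.1a"]},
            "small_break_LOCA":    {"LSTF": ["SB-CL-18"], "BETHSY": ["9.1b"]},
            "reflux_condensation": {"PKL": ["III-F1.1"]},
        }

        def facilities_covering(phenomenon):
            """Facilities with at least one test addressing the phenomenon."""
            return sorted(matrix.get(phenomenon, {}))

        def coverage_gaps(facilities):
            """Phenomena with no test in any of the given facilities."""
            return [p for p, cols in matrix.items()
                    if not set(cols) & set(facilities)]

        print(facilities_covering("small_break_LOCA"))  # ['BETHSY', 'LSTF']
        print(coverage_gaps(["LSTF"]))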

  15. Integrated computer aided design simulation and manufacture

    OpenAIRE

    Diko, Faek

    1989-01-01

    Computer Aided Design (CAD) and Computer Aided Manufacture (CAM) have been investigated and developed as standalone systems for twenty years. A large number of very powerful but independent packages have been developed for Computer Aided Design, Analysis and Manufacture. However, in most cases these packages have poor facilities for communicating with other packages. Recently, attempts have been made to develop integrated CAD/CAM systems and many software companies a...

  16. Joint Computing Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Raised Floor Computer Space for High Performance Computing. The ERDC Information Technology Laboratory (ITL) provides a robust system of IT facilities to develop and...

  17. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Burr, Tom; Gorensek, M.B.; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  18. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. LLNL

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid-2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging the Common Object Request Broker Architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of this pivotal role, CORBA was tested to ensure adequate performance.
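
    The supervisory pattern described above, in which a supervisor invokes methods on distributed front-end objects through a broker, can be sketched in a few lines. The fragment below uses Python's standard-library XML-RPC purely as a stand-in for CORBA, and the control-point names are hypothetical; the real ICCS framework is, of course, far richer.

        import threading
        from xmlrpc.client import ServerProxy
        from xmlrpc.server import SimpleXMLRPCServer

        class FrontEndProcessor:
            """Stand-in for a servant controlling one bundle's control points."""
            def __init__(self):
                self.points = {}
            def set_point(self, name, value):
                self.points[name] = value
                return True
            def read_point(self, name):
                return self.points.get(name, 0.0)

        # Front-end side: publish the servant on the network.
        server = SimpleXMLRPCServer(("localhost", 8000),
                                    logRequests=False, allow_none=True)
        server.register_instance(FrontEndProcessor())
        threading.Thread(target=server.serve_forever, daemon=True).start()

        # Supervisory side: invoke remote methods as if the object were local.
        fep = ServerProxy("http://localhost:8000", allow_none=True)
        fep.set_point("amplifier_gain", 0.95)
        print(fep.read_point("amplifier_gain"))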

  19. Energy Systems Integration Facility News | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    News from NREL's Energy Systems Integration Facility: a massive amount of wind data was recently made accessible online, and the Department of Energy's National Renewable Energy Laboratory (NREL) has completed technology validation testing for Go...

  20. Computer-Aided Facilities Management Systems (CAFM).

    Science.gov (United States)

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  1. Integrated Disposal Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Located near the center of the 586-square-mile Hanford Site is the Integrated Disposal Facility, also known as the IDF. This facility is a landfill similar in concept...

  2. National Ignition Facility system design requirements NIF integrated computer controls SDR004

    International Nuclear Information System (INIS)

    Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the NIF Integrated Computer Control System (ICCS), which is covered in NIF WBS element 1.5. This document responds directly to the requirements detailed in the NIF Functional Requirements/Primary Criteria and is supported by subsystem design requirements documents for each major ICCS subsystem.

  3. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful for developing an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  4. Power Systems Integration Laboratory | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Research in the Energy Systems Integration Facility's Power Systems Integration Laboratory focuses on microgrid applications, including testing of inverters.

  5. Concept of development of integrated computer - based control system for 'Ukryttia' object

    International Nuclear Information System (INIS)

    Buyal'skij, V.M.; Maslov, V.P.

    2003-01-01

    A structural concept for the development of an integrated computer-based control system for the Chernobyl NPP 'Ukryttia' Object is presented, based on a general concept of the integrated Computer-based Control System (CCS) design process for organizational and technical management entities. The concept is aimed at applying state-of-the-art architectural design techniques and allows modern computer-aided facilities to be used for developing the functional model, the information (logical and physical) models, and the object model of the system under design.

  6. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    Uram, Thomas D; LeCompte, Thomas J; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  7. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  8. Integrated Electrical and Thermal Grid Facility - Testing of Future Microgrid Technologies

    Directory of Open Access Journals (Sweden)

    Sundar Raj Thangavelu

    2015-09-01

    This paper describes the Experimental Power Grid Centre (EPGC) microgrid test facility, which was developed to enable research, development and testing for a wide range of distributed generation and microgrid technologies. The EPGC microgrid facility comprises an integrated electrical and thermal grid with a flexible and configurable architecture, and includes various distributed energy resources and emulators, such as generators, renewables, energy storage technologies and programmable load banks. The integrated thermal grid provides an opportunity to harness waste heat produced by the generators for combined heat, power and cooling applications, and supports research in the optimization of combined electrical-thermal systems. Several case studies are presented to demonstrate the testing of different control and operation strategies for storage systems in grid-connected and islanded microgrids. One of the case studies also demonstrates an integrated thermal grid that converts waste heat to useful energy, which has thus far resulted in a higher combined energy efficiency. Experimental results confirm that the facility enables testing and evaluation of grid technologies and exposes practical problems that may not be apparent in a computer-simulated environment.
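
    A storage control strategy of the kind exercised in such case studies can be sketched as a rule-based dispatch. The greedy rule below is an assumption for illustration, not one of the EPGC controllers: charge the battery on PV surplus, discharge on deficit, and in islanded mode shed whatever load the battery cannot cover.

        def dispatch_step(load_kw, pv_kw, soc_kwh, cap_kwh,
                          rated_kw=50.0, dt_h=0.25, grid_connected=True):
            """One step: return (new SOC, grid exchange kW [+import], shed kW)."""
            net = load_kw - pv_kw  # positive = deficit, negative = surplus
            if net >= 0:
                discharge = min(net, rated_kw, soc_kwh / dt_h)
                soc_kwh -= discharge * dt_h
                rest = net - discharge
                if grid_connected:
                    return soc_kwh, rest, 0.0
                return soc_kwh, 0.0, rest  # islanded: unmet load is shed
            surplus = -net
            charge = min(surplus, rated_kw, (cap_kwh - soc_kwh) / dt_h)
            soc_kwh += charge * dt_h
            export = surplus - charge  # islanded: would be curtailed instead
            return soc_kwh, (-export if grid_connected else 0.0), 0.0

        # Islanded step: 80 kW load, 20 kW PV, 50 kW battery -> 10 kW shed.
        print(dispatch_step(80.0, 20.0, soc_kwh=100.0, cap_kwh=200.0,
                            grid_connected=False))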

  9. Design of integrated safeguards systems for nuclear facilities

    International Nuclear Information System (INIS)

    de Montmollin, J.M.; Walton, R.B.

    1976-01-01

    Safeguards systems that are capable of countering postulated threats to nuclear facilities must be closely integrated with plant layout and processes if they are to be effective and if potentially severe impacts on plant operations are to be averted. A facilities safeguards system suitable for a production plant is described in which the traditional elements of physical protection and periodic material-balance accounting are extended and augmented to provide close control of material flows. Discrete material items are subjected to direct, overriding physical control where appropriate. Materials in closely coupled process streams are protected by on-line NDA and weight measurements, with rapid computation of material balances to provide immediate indication of large-scale diversion. The system provides information and actions at the safeguards/operations interface.

  10. Design of integrated safeguards systems for nuclear facilities

    International Nuclear Information System (INIS)

    de Montmollin, J.M.; Walton, R.B.

    1978-06-01

    Safeguards systems that are capable of countering postulated threats to nuclear facilities must be closely integrated with plant layout and processes if they are to be effective and if potentially severe impacts on plant operations are to be averted. This paper describes a facilities safeguards system suitable for a production plant, in which the traditional elements of physical protection and periodic material-balance accounting are extended and augmented to provide close control of material flows. Discrete material items are subjected to direct, overriding physical control where appropriate. Materials in closely coupled process streams are protected by on-line NDA and weight measurements, with rapid computation of material balances to provide immediate indication of large-scale diversion. The system provides information and actions at the safeguards/operations interface.

  11. Computational Science at the Argonne Leadership Computing Facility

    Science.gov (United States)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
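
    To make the "how does one program such systems?" question concrete, the dominant model on machines like Mira is message passing. The sketch below uses mpi4py as a convenient stand-in (production ALCF codes are more typically Fortran/C/C++ with MPI); each rank integrates a slice of 4/(1+x^2) over [0,1] and a reduction assembles the estimate of pi.

        # Requires an MPI installation and mpi4py; run e.g.:
        #   mpirun -n 4 python pi_mpi.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 1_000_000  # quadrature points, split across ranks
        h = 1.0 / n
        local = h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
                        for i in range(rank, n, size))

        pi = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print(f"pi ~ {pi:.10f} on {size} ranks")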

  12. Energy Systems Integration News | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    NREL's monthly update on the latest energy systems integration (ESI) developments at NREL and worldwide, including work at the Energy Systems Integration Facility with SolarCity and the Hawaiian Electric Companies.

  13. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the Path to Ignition

    International Nuclear Information System (INIS)

    Lagin, L J; Bettenhausen, R C; Bowers, G A; Carey, R W; Edwards, O D; Estes, C M; Demaret, R D; Ferguson, S W; Fisher, J M; Ho, J C; Ludwigsen, A P; Mathisen, D G; Marshall, C D; Matone, J M; McGuigan, D L; Sanchez, R J; Shelton, R T; Stout, E A; Tekle, E; Townsend, S L; Van Arsdall, P J; Wilson, E F

    2007-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of 8 beams each using laser hardware that is modularized into more than 6,000 line replaceable units such as optical assemblies, laser amplifiers, and multifunction sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-Megajoule capability of infrared light. During the next two years, the control system will be expanded to include automation of target area systems including final optics, target positioners and

  14. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the path to ignition

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Bowers, G.A.; Carey, R.W.; Edwards, O.D.; Estes, C.M.; Demaret, R.D.; Ferguson, S.W.; Fisher, J.M.; Ho, J.C.; Ludwigsen, A.P.; Mathisen, D.G.; Marshall, C.D.; Matone, J.T.; McGuigan, D.L.; Sanchez, R.J.; Stout, E.A.; Tekle, E.A.; Townsend, S.L.; Van Arsdall, P.J.

    2008-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-MJ, 500-TW, ultraviolet laser system together with a 10-m diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of eight beams each using laser hardware that is modularized into more than 6000 line replaceable units such as optical assemblies, laser amplifiers, and multi-function sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-MJ capability of infrared light. During the next 2 years, the control system will be expanded in preparation for project completion in 2009 to include automation of target area systems including final optics

  15. Conducting Computer Security Assessments at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    Computer security is increasingly recognized as a key component in nuclear security. As technology advances, it is anticipated that computer and computing systems will be used to an even greater degree in all aspects of plant operations, including safety and security systems. A rigorous and comprehensive assessment process can assist in strengthening the effectiveness of the computer security programme. This publication outlines a methodology for conducting computer security assessments at nuclear facilities. The methodology can likewise be easily adapted to provide assessments at facilities with other radioactive materials.

  16. Integral test facilities for validation of the performance of passive safety systems and natural circulation

    International Nuclear Information System (INIS)

    Choi, J. H.

    2010-10-01

    Passive safety systems are becoming an important component in advanced reactor designs. This has led to international interest in examining natural circulation phenomena, as these may play an important role in the operation of passive safety systems. Understanding reactor system behaviour is a challenging process due to the complex interactions between components and associated phenomena. Properly scaled integral test facilities can be used to explore these complex interactions. In addition, system analysis computer codes can be used as predictive tools in understanding complex reactor system behaviour. However, before system analysis computer codes are applied to reactor design, their capability to make predictions needs to be validated against experimental data from a properly scaled integral test facility. The IAEA has organized a coordinated research project (CRP) on natural circulation phenomena, modelling and reliability of passive systems that utilize natural circulation. This paper presents part of the research results from this CRP and describes representative international integral test facilities that can be used to collect data for reactor types in which natural circulation may play an important role. Example experiments are described, along with analyses of these cases, in order to examine the ability of system codes to model the phenomena occurring in the test facilities. (Author)
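
    Validation against integral test data ultimately reduces to a quantitative comparison of code predictions with measured traces. A minimal sketch of such a check follows; the pressure values and the 10% acceptance band are invented for illustration, not CRP criteria.

        def rms_relative_error(predicted, measured):
            """Root-mean-square relative deviation of code vs. experiment."""
            pairs = list(zip(predicted, measured))
            return (sum(((p - m) / m) ** 2 for p, m in pairs) / len(pairs)) ** 0.5

        # Hypothetical primary-pressure trace (MPa) from a natural-circulation test:
        measured  = [7.2, 6.8, 6.1, 5.5, 5.0]
        predicted = [7.3, 6.6, 6.2, 5.2, 5.1]

        err = rms_relative_error(predicted, measured)
        print(f"RMS relative error: {err:.1%}")
        assert err < 0.10, "code-to-data deviation outside the assumed band"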

  17. DKIST facility management system integration

    Science.gov (United States)

    White, Charles R.; Phelps, LeEllen

    2016-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Observatory is under construction at Haleakalā, Maui, Hawai'i. When complete, the DKIST will be the largest solar telescope in the world. The Facility Management System (FMS) is a subsystem of the high-level Facility Control System (FCS) and directly controls the Facility Thermal System (FTS). The FMS receives operational mode information from the FCS while making process data available to the FCS, and includes hardware and software to integrate and control all aspects of the FTS including the Carousel Cooling System, the Telescope Chamber Environmental Control Systems, and the Temperature Monitoring System. In addition it will integrate the Power Energy Management System and several service systems such as heating, ventilation, and air conditioning (HVAC), the Domestic Water Distribution System, and the Vacuum System. All of these subsystems must operate in coordination to provide the best possible observing conditions and overall building management. Further, the FMS must actively react to varying weather conditions and observational requirements. The physical impact of the facility must not interfere with neighboring installations while operating in a very environmentally and culturally sensitive area. The FMS will comprise five Programmable Automation Controllers (PACs). We present a pre-build overview of the functional plan to integrate all of the FMS subsystems.

  18. Integrated Facilities and Infrastructure Plan.

    Energy Technology Data Exchange (ETDEWEB)

    Reisz Westlund, Jennifer Jill

    2017-03-01

    Our facilities and infrastructure are a key element of our capability-based science and engineering foundation. The focus of the Integrated Facilities and Infrastructure Plan is the development and implementation of a comprehensive plan to sustain the capabilities necessary to meet national research, design, and fabrication needs for Sandia National Laboratories’ (Sandia’s) comprehensive national security missions both now and into the future. A number of Sandia’s facilities have reached the end of their useful lives and many others are not suitable for today’s mission needs. Due to the continued aging and surge in utilization of Sandia’s facilities, deferred maintenance has continued to increase. As part of our planning focus, Sandia is committed to halting the growth of deferred maintenance across its sites through demolition, replacement, and dedicated funding to reduce the backlog of maintenance needs. Sandia will become more agile in adapting existing space and changing how space is utilized in response to the changing requirements. This Integrated Facilities & Infrastructure (F&I) Plan supports the Sandia Strategic Plan’s strategic objectives, specifically Strategic Objective 2: Strengthen our Laboratories’ foundation to maximize mission impact, and Strategic Objective 3: Advance an exceptional work environment that enables and inspires our people in service to our nation. The Integrated F&I Plan is developed through a planning process model to understand the F&I needs, analyze solution options, plan the actions and funding, and then execute projects.

  19. The Integral Test Facility Karlstein

    Directory of Open Access Journals (Sweden)

    Stephan Leyer

    2012-01-01

    The Integral Test Facility Karlstein (INKA) test facility was designed and erected to test the performance of the passive safety systems of KERENA, the new AREVA boiling water reactor design. The experimental program included single component/system tests of the Emergency Condenser, the Containment Cooling Condenser and the Passive Core Flooding System. Integral system tests, including also the Passive Pressure Pulse Transmitter, will be performed to simulate transients and loss-of-coolant accident scenarios at the test facility. The INKA test facility represents the KERENA containment with a volume scaling of 1:24; component heights and levels are in full scale. The reactor pressure vessel is simulated by the accumulator vessel of the large valve test facility of Karlstein, a vessel with a design pressure of 11 MPa and a storage capacity of 125 m3. The vessel is fed by a Benson boiler with a maximum power supply of 22 MW. The INKA multi-compartment pressure suppression containment meets the requirements of modern and existing BWR designs. As a result of the large power supply at the facility, INKA is capable of simulating various accident scenarios, including a full train of passive systems, starting with the initiating event, for example a pipe rupture.

  20. Computing facility at SSC for detectors

    International Nuclear Information System (INIS)

    Leibold, P.; Scipiono, B.

    1990-01-01

    The RISC-based distributed computing facility for detector simulation being developed at the SSC Laboratory is described. The first phase of this facility is scheduled for completion in early 1991. Included is the status of the project, an overview of the concepts used to model and define the system architecture, networking capabilities for user access, plans for support of physics codes, and related topics concerning the implementation of this facility.

  1. Integrated computer-aided design using minicomputers

    Science.gov (United States)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), highly interactive software, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capabilities, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  2. MONITOR: A computer model for estimating the costs of an integral monitored retrievable storage facility

    International Nuclear Information System (INIS)

    Reimus, P.W.; Sevigny, N.L.; Schutz, M.E.; Heller, R.A.

    1986-12-01

    The MONITOR model is a FORTRAN 77 based computer code that provides parametric life-cycle cost estimates for a monitored retrievable storage (MRS) facility. MONITOR is very flexible in that it can estimate the costs of an MRS facility operating under almost any conceivable nuclear waste logistics scenario. The model can also accommodate input data of varying degrees of complexity and detail (ranging from very simple to more complex), which makes it ideal for use in the MRS program, where new designs and new cost data are frequently offered for consideration. MONITOR can be run as an independent program, or it can be interfaced with the Waste System Transportation and Economic Simulation (WASTES) model, a program that simulates the movement of waste through a complete nuclear waste disposal system. The WASTES model drives the MONITOR model by providing it with the annual quantities of waste that are received, stored, and shipped at the MRS facility. Three runs of MONITOR are documented in this report. Two of the runs used Version 1 of the MONITOR code in a simulation based on the costs developed by the Ralph M. Parsons Company in the 2A (backup) version of the MRS cost estimate. In one of these runs MONITOR was run as an independent model, and in the other MONITOR was driven by an input file generated by the WASTES model. The two runs correspond to identical cases, and the fact that they gave identical results verified that the code performs the same calculations in both modes of operation. The third run used Version 2 of the MONITOR code in a simulation based on the costs developed by the Ralph M. Parsons Company in the 2B (integral) version of the MRS cost estimate; this run was made with MONITOR as an independent model. The results of several cases have been verified by hand calculations.
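
    The parametric life-cycle costing MONITOR performs can be miniaturized as a simple roll-up: fixed annual costs plus throughput-based handling charges and inventory-based storage charges, driven by annual receipt and shipment schedules of the kind the WASTES model would supply. All parameters below are invented placeholders, not Parsons cost data.

        def lifecycle_cost(receipts_mtu, shipments_mtu, fixed_m=30.0,
                           handling_per_mtu=0.02, storage_per_mtu_yr=0.005):
            """Total cost (M$) over the horizon; inventory drives storage cost."""
            inventory = 0.0
            total = 0.0
            for received, shipped in zip(receipts_mtu, shipments_mtu):
                inventory += received - shipped
                total += (fixed_m
                          + handling_per_mtu * (received + shipped)
                          + storage_per_mtu_yr * inventory)
            return total

        # Three-year schedule: receipts ramp up, shipments lag.
        print(lifecycle_cost(receipts_mtu=[400, 900, 900],
                             shipments_mtu=[0, 300, 900]))  # -> 170.0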

  3. Thermal Distribution System | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    The Energy Systems Integration Facility's thermal distribution bus allows operation as low as 10% of its full load level, and its 60-ton chiller cools water with continuous thermal control.

  4. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) fundamental cybersecurity research and development challenges, strategies and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  5. Integrating the Media Computation API with Pythy, an Online IDE for Novice Python Programmers

    OpenAIRE

    Athri, Ashima

    2015-01-01

    Improvements in both software and curricula have helped introductory computer science courses attract and retain more students. Pythy is one such online learning environment that aims to reduce software-setup barriers to learning Python while providing facilities like course management and grading to instructors. To further its goal of being beginner-centric, we want to integrate full support for media-computation-style programming activities. The media computation curriculum ...

  6. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  7. Integrated software package for nuclear material safeguards in a MOX fuel fabrication facility

    International Nuclear Information System (INIS)

    Schreiber, H.J.; Piana, M.; Moussalli, G.; Saukkonen, H.

    2000-01-01

    Since computerized data processing was introduced to Safeguards at large bulk-handling facilities, a large number of individual software applications have been developed for nuclear material Safeguards implementation. Facility inventory and flow data are provided in computerized format for performing stratification, sample size calculation and selection of samples for destructive and non-destructive assay. Data are collected from nuclear measurement systems running in attended or unattended mode and, more recently, from remotely controlled monitoring systems. Data sets from various sources have to be evaluated for Safeguards purposes, such as raw data, processed data and conclusions drawn from data evaluation results. They are reported in computerized format at the International Atomic Energy Agency headquarters, and feedback from the Agency's mainframe computer system is used to prepare and support Safeguards inspection activities. The integration of all such data originating from various sources cannot be ensured without a common data format and a database system. This paper describes the fundamental relations between data streams, individual data processing tools, data evaluation results and the requirements for an integrated software solution to facilitate nuclear material Safeguards at a bulk-handling facility. The paper also explains the basis for designing a software package to manage data streams from various data sources and to incorporate diverse data processing tools that until now have been used independently of each other and under different computer operating systems. (author)
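
    As an example of the sample-size calculations such a package automates, a textbook attribute-sampling approximation is n = N * (1 - beta**(1/M)): the number of items to verify so that falsification of M items (enough to divert one goal quantity) is detected with probability at least 1 - beta. The stratum below is invented for illustration.

        import math

        def attribute_sample_size(n_items, goal_kg, item_kg, beta=0.05):
            """Items to verify so diversion of goal_kg is missed with prob <= beta.

            Uses the approximation n = N * (1 - beta**(1/M)), where M is the
            number of items that would have to be falsified."""
            m_falsified = max(goal_kg / item_kg, 1.0)
            return math.ceil(n_items * (1.0 - beta ** (1.0 / m_falsified)))

        # Illustrative stratum: 800 MOX assemblies at 2 kg Pu each, 8 kg goal.
        print(attribute_sample_size(800, goal_kg=8.0, item_kg=2.0))  # -> 422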

  8. Integrated numerical platforms for environmental dose assessments of large tritium inventory facilities

    International Nuclear Information System (INIS)

    Castro, P.; Ardao, J.; Velarde, M.; Sedano, L.; Xiberta, J.

    2013-01-01

    In the prospective scenario of new large-inventory tritium facilities (KATRIN at TLK, CANDUs, ITER, EAST, and others to come), the dosimetric limits prescribed by ICRP-60 for tritium committed doses are under discussion, requiring in parallel that today's highly conservative assessments be superseded by more refined dosimetric assessments in many respects. Precise Lagrangian computations of dosimetric cloud evolution after standardized (normal/incidental/SBO) tritium cloud emissions can today be matched numerically to real-time meteorological data and pattern data at diverse scales for prompt/early and chronic tritium dose assessments. Trends towards integrated numerical platforms for environmental dose assessment of large tritium inventory facilities are under development.
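
    As an illustration of the Lagrangian approach the abstract refers to, the sketch below advances a cloud of tracer particles by mean-wind advection plus a turbulent random walk. It is a minimal toy model: the wind vector, turbulence intensities, release height and particle count are invented placeholders, not values from the work.

```python
# Minimal Lagrangian particle step: advection by the mean wind plus a
# Gaussian random-walk term for turbulent diffusion. All numbers are
# illustrative placeholders, not values from the cited work.
import random

def step(p, wind, sigma, dt):
    """Advance one tracer particle by one time step of length dt (s)."""
    return tuple(p[i] + wind[i] * dt + random.gauss(0.0, sigma[i]) * dt ** 0.5
                 for i in range(3))

particles = [(0.0, 0.0, 30.0)] * 2000  # release point at 30 m height (assumed)
wind = (4.0, 0.5, 0.0)                 # mean wind, m/s (placeholder met data)
sigma = (1.2, 1.2, 0.6)                # turbulence intensities (placeholders)

for _ in range(600):                   # 600 one-second steps (~10 min plume)
    particles = [step(p, wind, sigma, 1.0) for p in particles]

# Concentrations, and from them tritium doses, follow from particle
# counts per grid cell divided by the cell volume.
```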

  9. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    Science.gov (United States)

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula; computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be a lack of information technology infrastructure, restricted access to computers and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and the lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  10. Multi-objective reverse logistics model for integrated computer waste management.

    Science.gov (United States)

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
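
    To make the optimization setup concrete, here is a hedged sketch of a weighted-sum bi-objective allocation model in the same spirit (cost plus environmental risk, integer allocations, Monte Carlo over uncertain waste quantities). It is not the authors' model: the two-facility topology, all coefficients and the weights are invented for illustration, and PuLP is used simply as a convenient open-source integer-programming interface.

```python
# Hedged sketch of a weighted-sum cost/risk waste-allocation ILP with
# Monte Carlo sampling of uncertain waste quantities. All data and the
# topology are invented; this is not the model from the paper.
import random
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

SOURCES = ["S1", "S2"]
FACILITIES = ["recycle", "landfill"]
COST = {"recycle": 40.0, "landfill": 10.0}      # cost per tonne (assumed)
RISK = {"recycle": 1.0, "landfill": 8.0}        # risk score per tonne (assumed)
CAPACITY = {"recycle": 120, "landfill": 300}    # tonnes per period (assumed)
W_COST, W_RISK = 0.5, 0.5                       # objective weights

def solve_once(waste):
    prob = LpProblem("computer_waste", LpMinimize)
    x = {(s, f): LpVariable(f"x_{s}_{f}", lowBound=0, cat="Integer")
         for s in SOURCES for f in FACILITIES}
    # Weighted-sum scalarization of the cost and risk objectives.
    prob += lpSum((W_COST * COST[f] + W_RISK * RISK[f]) * x[s, f]
                  for s in SOURCES for f in FACILITIES)
    for s in SOURCES:          # every tonne generated must be allocated
        prob += lpSum(x[s, f] for f in FACILITIES) == waste[s]
    for f in FACILITIES:       # facility capacity limits
        prob += lpSum(x[s, f] for s in SOURCES) <= CAPACITY[f]
    prob.solve()
    return value(prob.objective)

# Monte Carlo over uncertain waste generation at each source (tonnes).
objs = [solve_once({s: max(0, round(random.gauss(80, 10))) for s in SOURCES})
        for _ in range(100)]
print("mean weighted objective:", sum(objs) / len(objs))
```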

  11. Integration of computer technology into the medical curriculum: the King's experience

    Directory of Open Access Journals (Sweden)

    Vickie Aitken

    1997-12-01

    Recently, there have been major changes in the requirements of medical education which have set the scene for the revision of medical curricula (Towle, 1991; GMC, 1993). As part of the new curriculum at King's, the opportunity has been taken to integrate computer technology into the course through Computer-Assisted Learning (CAL), and to train graduates in core IT skills. Although the use of computers in the medical curriculum has up to now been limited, recent studies have shown encouraging steps forward (see Boelen, 1995). One area where there has been particular interest is the use of notebook computers to allow students increased access to IT facilities (Maulitz et al, 1996).

  12. Shielding Calculations for Positron Emission Tomography - Computed Tomography Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Baasandorj, Khashbayar [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yang, Jeongseon [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-10-15

    Integrated PET-CT has been shown to be more accurate for lesion localization and characterization than PET or CT alone, or than PET and CT acquired separately and interpreted side by side or after software-based fusion of the PET and CT datasets. At the same time, PET-CT scans can result in high patient and staff doses; therefore, careful site planning and shielding of this imaging modality have become challenging issues in the field. In Mongolia, the introduction of PET-CT facilities is currently being considered by many hospitals, so additional regulatory legislation for nuclear and radiation applications is necessary, for example to regulate licensing processes and to ensure radiation safety during operations. This paper aims to determine appropriate PET-CT shielding designs using numerical formulas and computer code. Since there are presently no PET-CT facilities in Mongolia, contact was made with radiological staff at the Nuclear Medicine Center of the National Cancer Center of Mongolia (NCCM) to obtain information about the facilities where the introduction of PET-CT is being considered. Well-designed facilities do not require additional shielding, which should help cut down the overall costs of PET-CT installation. According to the results of this study, the barrier thicknesses of the NCCM building are not sufficient to keep radiation doses within the limits.
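
    The following sketch shows the general shape of such a calculation: derive the required broad-beam transmission from a weekly design goal and convert it to a barrier thickness via tenth-value layers, in the spirit of NCRP/AAPM-style PET shielding methodology. The design goal, distance, occupancy, weekly unshielded dose and the 511 keV concrete TVL used here are assumed illustrative numbers, not values from the paper.

```python
# Hedged sketch of a broad-beam PET shielding estimate: required
# transmission from a weekly design goal, then barrier thickness via
# tenth-value layers (TVLs). All input values are assumptions.
import math

D_week = 2.0e-3   # unshielded weekly dose at 1 m from patients, Sv (assumed)
d = 3.0           # distance from patients to the occupied point, m (assumed)
T = 1.0           # occupancy factor of the adjacent area (assumed)
P = 0.1e-3        # weekly shielding design goal for the area, Sv (assumed)

B = P * d**2 / (T * D_week)      # required broad-beam transmission factor
TVL_CONCRETE_511KEV = 17.6       # cm; assumed TVL of concrete at 511 keV

n_tvls = math.log10(1.0 / B)     # number of tenth-value layers needed
thickness_cm = TVL_CONCRETE_511KEV * max(0.0, n_tvls)
print(f"B = {B:.3f} -> ~{thickness_cm:.1f} cm of concrete")
```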

  13. Oak Ridge Leadership Computing Facility (OLCF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of standing up a supercomputer 100 times...

  14. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be allocated to any application dynamically and efficiently, and virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
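
    To illustrate what exposing an EC2-style API plus cloud-init contextualization buys, the sketch below starts a worker VM through a generic EC2-compatible endpoint, passing a user-data script that is consumed at first boot. The endpoint URL, credentials, image ID and package names are hypothetical placeholders, and boto3 is used here only as a generic EC2-compatible client, with no claim that it is the tooling used at INFN-Torino.

```python
# Hedged sketch: launch a VM via an EC2-compatible Private Cloud
# endpoint, passing cloud-init user data for contextualization.
# Endpoint, credentials, image ID and packages are hypothetical.
import boto3

USER_DATA = """#cloud-config
packages:
  - htcondor
runcmd:
  - [systemctl, start, condor]
"""

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:4567",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    region_name="default",
)

resp = ec2.run_instances(
    ImageId="ami-00000001",     # hypothetical worker-node image
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,         # consumed by cloud-init at first boot
)
print(resp["Instances"][0]["InstanceId"])
```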

  15. Energy Systems Integration Facility (ESIF) Facility Stewardship Plan: Revision 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Torres, Juan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Anderson, Art [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-02

    The U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), has established the Energy Systems Integration Facility (ESIF) on the campus of the National Renewable Energy Laboratory (NREL) and has designated it as a DOE user facility. This 182,500-ft² research facility provides state-of-the-art laboratory and support infrastructure to optimize the design and performance of electrical, thermal, fuel, and information technologies and systems at scale. This Facility Stewardship Plan provides DOE and other decision makers with information about the existing and expected capabilities of the ESIF and the expected performance metrics to be applied to ESIF operations. This plan is a living document that will be updated and refined throughout the lifetime of the facility.

  16. Natural circulation in an integral CANDU test facility

    International Nuclear Information System (INIS)

    Ingham, P.J.; Sanderson, T.V.; Luxat, J.C.; Melnyk, A.J.

    2000-01-01

    Over 70 single- and two-phase natural circulation experiments have been completed in the RD-14M facility, an integral CANDU thermalhydraulic test loop. This paper describes the RD-14M facility and provides an overview of the impact of key parameters on the results of natural circulation experiments. Particular emphasis will be on phenomena which led to heat up at high system inventories in a small subset of experiments. Clarification of misunderstandings in a recently published comparison of the effectiveness of natural circulation flows in RD-14M to integral facilities simulating other reactor geometries will also be provided. (author)

  17. Operational facility-integrated computer system for safeguards

    International Nuclear Information System (INIS)

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarming, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring.

  18. 2016 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, Jim [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  19. Computer Security at Nuclear Facilities

    International Nuclear Information System (INIS)

    Cavina, A.

    2013-01-01

    This series of slides presents the IAEA policy concerning the development of recommendations and guidelines for computer security at nuclear facilities. A document of the Nuclear Security Series dedicated to this issue is in the final stage prior to publication; it will be the first IAEA document specifically addressing computer security. The document is necessary for three main reasons: first, not all national infrastructures have recognized and standardized computer security; second, existing international guidance is not industry-specific and fails to capture some of the key issues; and third, the presence of more or less connected digital systems is increasing in the design of nuclear power plants. The security of computer systems must be based on a graded approach: the assignment of computer systems to different levels and zones should be based on their relevance to safety and security, and the risk assessment process should be allowed to feed back into and influence the graded approach.

  20. Academic Computing Facilities and Services in Higher Education--A Survey.

    Science.gov (United States)

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  1. Multiloop Integral System Test (MIST): MIST Facility Functional Specification

    International Nuclear Information System (INIS)

    Habib, T.F.; Koksal, C.G.; Moskal, T.E.; Rush, G.C.; Gloudemans, J.R.

    1991-04-01

    The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock & Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST Functional Specification documents as-built design features, dimensions, instrumentation, and test approach. It also presents the scaling basis for the facility and serves to define the scope of work for the facility design and construction. 13 refs., 112 figs., 38 tabs

  2. FFTF integrated leak rate computer system

    International Nuclear Information System (INIS)

    Hubbard, J.A.

    1987-01-01

    The Fast Flux Test Facility (FFTF) is a liquid-metal-cooled test reactor located on the Hanford site. The FFTF is the only reactor of this type designed and operated to meet the licensing requirements of the Nuclear Regulatory Commission. Unique characteristics of the FFTF that present special challenges for leak rate testing include thin-wall containment vessel construction, cover gas systems that penetrate containment, and a low-pressure design basis accident. The successful completion of the third FFTF integrated leak rate test 5 days ahead of schedule and 10% under budget was a major achievement for the Westinghouse Hanford Company. The success of this operational safety test was due in large part to a special local area network (LAN) of three IBM PC/XT computers, which monitored the sensor data, calculated the containment vessel leak rate, and displayed test results. The equipment configuration allowed continuous monitoring of the progress of the test independent of the data acquisition and analysis functions, and it also provided improved overall system reliability by permitting immediate switching to backup computers in the event of equipment failure.
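
    For readers unfamiliar with how such a system turns pressure and temperature samples into a leak rate, here is a minimal sketch of the standard mass-point technique (in the style of ANSI/ANS 56.8): compute the contained air mass from each P/T sample via the ideal gas law and regress it against time. The synthetic data and the single-average-temperature assumption are illustrative simplifications, not the FFTF implementation.

```python
# Hedged sketch of a mass-point integrated leak-rate calculation:
# contained air mass from the ideal gas law, then the regression slope
# expressed as percent of contained mass per day. Data are synthetic.
import numpy as np

hours = np.arange(0.0, 24.0, 1.0)          # 24 hourly samples
P = 1.35e5 - 0.9 * hours                   # absolute pressure, Pa (synthetic)
T = 295.0 + 0.05 * np.sin(hours)           # mean gas temperature, K (synthetic)

R_AIR = 287.05                             # specific gas constant, J/(kg K)
mass = P / (R_AIR * T)                     # kg per m^3; volume cancels below

slope, intercept = np.polyfit(hours, mass, 1)
leak_pct_per_day = -slope * 24.0 / intercept * 100.0
print(f"leak rate ~ {leak_pct_per_day:.3f} % of contained mass per day")
```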

  3. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  4. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  5. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  6. CIPSS [computer-integrated process and safeguards system]: The integration of computer-integrated manufacturing and robotics with safeguards, security, and process operations

    International Nuclear Information System (INIS)

    Leonard, R.S.; Evans, J.C.

    1987-01-01

    This poster session describes the computer-integrated process and safeguards system (CIPSS). The CIPSS combines systems developed for factory automation and automated mechanical functions (robots) with varying degrees of intelligence (expert systems) to create an integrated system that would satisfy current and emerging security and safeguards requirements. Specifically, CIPSS is an extension of the automated physical security functions concepts. The CIPSS also incorporates the concepts of computer-integrated manufacturing (CIM) with integrated safeguards concepts, and draws upon the Defense Advanced Research Projects Agency's (DARPA's) strategic computing program.

  7. Double-shell tank waste transfer facilities integrity assessment plan

    International Nuclear Information System (INIS)

    Hundal, T.S.

    1998-01-01

    This document presents the integrity assessment plan for the existing double-shell tank waste transfer facilities system in the 200 East and 200 West Areas of the Hanford Site. The plan identifies and proposes the integrity assessment elements and techniques to be performed for each facility. Integrity assessments of existing tank systems that store or treat dangerous waste are required in order to comply with the Washington State Department of Ecology Dangerous Waste Regulations, Washington Administrative Code WAC-173-303-640 requirements.

  8. Operational Circular nr 5 - October 2000 USE OF CERN COMPUTING FACILITIES

    CERN Multimedia

    Division HR

    2000-01-01

    New rules covering the use of CERN computing facilities have been drawn up. All users of CERN's computing facilities are subject to these rules, as well as to the subsidiary rules of use. The Computing Rules explicitly address your responsibility for taking reasonable precautions to protect computing equipment and accounts. In particular, passwords must not be easily guessed or obtained by others. Given the difficulty of completely separating work and personal use of computing facilities, the rules define under which conditions limited personal use is tolerated. For example, limited personal use of e-mail, news groups or web browsing is tolerated in your private time, provided CERN resources and your official duties are not adversely affected. The full conditions governing use of CERN's computing facilities are contained in Operational Circular N° 5, which you are requested to read. Full details are available at: http://www.cern.ch/ComputingRules Copies of the circular are also available in the Divis...

  9. Development of an integrated assay facility

    International Nuclear Information System (INIS)

    Molesworth, T.V.; Bailey, M.; Findlay, D.J.S.; Parsons, T.V.; Sene, M.R.; Swinhoe, M.T.

    1990-01-01

    The I.R.I.S. concept proposed the use of passive examination and active interrogation techniques in an integrated assay facility. A linac would generate the interrogating gamma and neutron beams. Insufficiently detailed knowledge about active neutron and gamma interrogation of 500 litre drums of cement immobilised intermediate level waste led to a research programme which is now in its main experimental stage. Measurements of interrogation responses are being made using simulated waste drums containing actinide samples and calibration sources, in an experimental assay assembly. Results show that responses are generally consistent with theory, but that improvements are needed in some areas. A preliminary appraisal of the engineering and economic aspects of integrated assay shows that correct operational sequencing is required to achieve the short cycle time needed for high throughput. The main engineering features of a facility have been identified

  10. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  11. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dirndorfer, Stefan

    2017-01-17

    Condensation-induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation-induced water hammer have predominantly addressed horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation-induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art technology, the phenomenology of vertical condensation-induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation-induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation-induced water hammer. Selected experiments are validated with the modified ATHLET system code. A sensitivity analysis in ATHLET, together with the experimental data, makes it possible to assess the performance of ATHLET in computing condensation-induced water hammer in a vertical up-fill configuration.

  12. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    International Nuclear Information System (INIS)

    Dirndorfer, Stefan

    2017-01-01

    Condensation-induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation-induced water hammer have predominantly addressed horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation-induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art technology, the phenomenology of vertical condensation-induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation-induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation-induced water hammer. Selected experiments are validated with the modified ATHLET system code. A sensitivity analysis in ATHLET, together with the experimental data, makes it possible to assess the performance of ATHLET in computing condensation-induced water hammer in a vertical up-fill configuration.

  13. Introduction to Large-sized Test Facility for validating Containment Integrity under Severe Accidents

    International Nuclear Information System (INIS)

    Na, Young Su; Hong, Seongwan; Hong, Seongho; Min, Beongtae

    2014-01-01

    An overall assessment of containment integrity can be conducted properly by examining the hydrogen behavior in the containment building. Under severe accidents, a large amount of hydrogen gas can be generated by metal oxidation and corium-concrete interaction. Hydrogen behavior in the containment building strongly depends on complicated thermal hydraulic conditions with mixed gases and steam. The performance of a PAR can be directly affected by the thermal hydraulic conditions, steam content, gas mixture behavior and aerosol characteristics, as well as by the operation of other engineered safety systems such as a spray. The models in computer codes for severe accident assessment can be validated against experimental results from a large-sized test facility. The Korea Atomic Energy Research Institute (KAERI) is now preparing a large-sized test facility to examine in detail the safety issues related to hydrogen, including the performance of safety devices such as a PAR, in various severe accident situations. This paper introduces the KAERI test facility for validating containment integrity under severe accidents. To validate containment integrity, a large-sized test facility is necessary for simulating the complicated phenomena induced by the steam and gases, especially hydrogen, released into the containment building under severe accidents. A pressure vessel 9.5 m in height and 3.4 m in diameter was designed for the KAERI test facility for validating containment integrity, based on the THAI test facility, whose experimental safety and reliable measurement systems have been certified over a long period. This large-sized pressure vessel, operated with steam and iodine as a corrosive agent, was made of stainless steel 316L because of its corrosion resistance over long operating times, and the vessel was installed at KAERI in March 2014. In the future, the control systems for temperature and pressure in the vessel will be constructed, and the measurement system

  14. Assessment of the integrity of structural shielding of four computed tomography facilities in the greater Accra region of Ghana

    International Nuclear Information System (INIS)

    Nkansah, A.; Schandorf, C.; Boadu, M.; Fletcher, J. J.

    2013-01-01

    The structural shielding thicknesses of the walls of four computed tomography (CT) facilities in Ghana were re-evaluated to verify the shielding integrity using the new shielding design methods recommended by the National Council on Radiation Protection and Measurements (NCRP). The shielding thicknesses obtained ranged from 120 to 155 mm using the default DLP values proposed by the European Commission and from 110 to 168 mm using DLP values derived from the four CT manufacturers. These values are within the accepted standard concrete wall thickness ranging from 102 to 152 mm prescribed by the NCRP. Ultrasonic pulse testing of all walls indicated that they are of good quality and free of voids, since the estimated pulse velocities were within the range of 3.496±0.005 km s⁻¹. The average dose equivalent rate estimated for supervised areas is 3.4±0.27 μSv week⁻¹ and that for the controlled area is 18.0±0.15 μSv week⁻¹, which are within acceptable values. (authors)

  15. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    Directory of Open Access Journals (Sweden)

    Olga M. Naumenko

    2010-09-01

    The article considers the history of views on the tasks of education and estimations of its effectiveness from the point of view of the formation of basic, vitally important competences. Views on this problem in different countries and international organizations, and the corresponding experience of the Ukrainian education system, are described. The necessity of forming the informative competence of future teachers is substantiated for the conditions in which computer-oriented teaching facilities are applied in the study of natural-science subjects in pedagogical colleges. Prognostic estimations concerning the development of methods for applying computer-oriented teaching facilities are presented.

  16. Integrated network for structural integrity monitoring of critical components in nuclear facilities, RIMIS

    International Nuclear Information System (INIS)

    Roth, Maria; Constantinescu, Dan Mihai; Brad, Sebastian; Ducu, Catalin; Malinovschi, Viorel

    2008-01-01

    The round table aims to bring together specialists working in the Romanian R&D institutes and universities involved in the structural integrity assessment of materials, especially those working in the nuclear field, together with representatives of the end user, the Cernavoda NPP. This scientific event will offer the opportunity to disseminate the theoretical, experimental and modelling activities carried out to date in the framework of the National Program 'Research of Excellence', Module I 2006-2008, managed by the National Authority for Scientific Research. Entitled 'Integrated Network for Structural Integrity Monitoring of Critical Components in Nuclear Facilities' (RIMIS), the project has two main objectives: 1. to elaborate a procedure applicable to the structural integrity assessment of critical components used in Romanian nuclear facilities (CANDU-type reactor, hydrogen isotope separation installations); 2. to integrate the national network into a similar one at European level, to enhance the scientific significance of Romanian R&D organisations, and to increase their contribution to solving major issues of the nuclear field. The topics of the round table will be focused on: 1. development of a structural integrity assessment methodology applicable to nuclear facility components; 2. experimental investigation methods and procedures; 3. numerical simulation of nuclear component behaviour; 4. further activities to finalize the assessment procedure. Participation and contributions to sustain the activity in the European Network NULIFE, FP6 will also be discussed. (authors)

  17. Structural integrity monitoring of critical components in nuclear facilities

    International Nuclear Information System (INIS)

    Roth, Maria; Constantinescu, Dan Mihai; Brad, Sebastian; Ducu, Catalin; Malinovschi, Viorel

    2007-01-01

    The paper presents the results obtained as part of the project 'Integrated Network for Structural Integrity Monitoring of Critical Components in Nuclear Facilities' (RIMIS), a research work underway within the framework of the Ministry of Education and Research programme 'Research of Excellence'. The main objective of the project is to constitute a network integrating the national R&D institutes concerned with the structural integrity assessment of critical components in the nuclear facilities operating in Romania, in order to elaborate a specific procedure for this field. The degradation mechanisms of the structural materials used in the CANDU-type reactors operated by Unit 1 and Unit 2 at Cernavoda (pressure tubes, fuel element sheaths, steam generator tubing) and in the nuclear facilities relating to reactors of this type, for instance the hydrogen isotope separation facility, will be investigated. The development of a flexible procedure will offer the opportunity to extend the applications to other structural materials used in the nuclear field, and in non-nuclear fields as well, in cooperation with other institutes involved in the developed network. The expected results of the project will allow the integration of the network developed at national level into the structures of similar networks operating within the EU, the enhancement of the scientific importance of Romanian R&D organizations, and the increase of our country's contribution to solving the major issues of the nuclear field. (authors)

  18. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    The Basis for Design established the functional requirements and design criteria for an Integral Monitored Retrievable Storage (MRS) facility. The MRS Facility design, described in this report, is based on those requirements and includes all infrastructure, facilities, and equipment required to routinely receive, unload, prepare for storage, and store spent fuel (SF), high-level waste (HLW), and transuranic waste (TRU), and to decontaminate and return shipping casks received by both rail and truck. The facility is complete with all supporting facilities to make the MRS Facility a self-sufficient installation

  19. Analysis on working pressure selection of ACME integral test facility

    International Nuclear Information System (INIS)

    Chen Lian; Chang Huajian; Li Yuquan; Ye Zishen; Qin Benke

    2011-01-01

    An integral effects test facility, the Advanced Core cooling Mechanism Experiment (ACME) facility, was designed to verify the performance of the passive safety system and to validate the safety analysis codes of a pressurized water reactor power plant. Three test facilities for the AP1000 design are introduced and reviewed, and the problems resulting from the different working pressures of these test facilities are analyzed. A detailed description is then given of the selection of the working pressure of the ACME facility and of its characteristics, and the approach to establishing the desired initial test conditions is discussed. The selected working pressure of 9.3 MPa covers the operating conditions of almost all the important passive safety systems, enabling the ACME to simulate LOCAs with the same pressure and property similitude as the prototype. It is expected that the ACME design will be an advanced core cooling integral test facility design. (authors)

  20. Integrated social facility location planning for decision support: Accessibility studies provide support to facility location and integration of social service provision

    CSIR Research Space (South Africa)

    Green, Cheri A

    2012-09-01

    ... for two or more facilities, to create an integrated plan for development. Step 6: costing of the development plan. Case study: access norms and threshold guidelines in accessibility analysis. Appropriate norms/provision guidelines facilitate both service... access norms and threshold standards:
    - test the relationship between service demand and the supply (service capacity) of the facility provision points within a defined catchment area
    - promote the 'right-sizing' of facilities relative to the demand...

  1. Modern computer hardware and the role of central computing facilities in particle physics

    International Nuclear Information System (INIS)

    Zacharov, V.

    1981-01-01

    Important recent changes in the hardware technology of computer system components are reviewed, and the impact of these changes on the present and future pattern of computing in particle physics is assessed. The place of central computing facilities is examined in particular, to answer the important question of what, if anything, their future role should be. Parallelism in computing system components is considered to be an important property that can be exploited to advantage. The paper includes a short discussion of the position of communications and network technology in modern computer systems. (orig.)

  2. Exercise evaluation and simulation facility

    International Nuclear Information System (INIS)

    Meitzler, W.D.; Jaske, R.T.

    1983-12-01

    The Exercise Evaluation and Simulation Facility (EESF) is a minicomputer-based system that will serve as a tool to aid FEMA in the evaluation of radiological emergency plans and preparedness around commercial nuclear power facilities. The EESF integrates the following resources into a single system: a meteorological model, a dose model, an evacuation model, map information, and exercise information. The user may thus access these various resources concurrently and, on completion, display the results on a color graphic display or hardcopy unit. A unique capability made possible by the integration of these models is the computation of the estimated total dose to the population.
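
    As a toy illustration of how a meteorological model's output can feed a dose estimate of this kind, the sketch below evaluates a ground-level Gaussian-plume concentration and multiplies it by a dose conversion factor and a population count. The release rate, dispersion parameters, dose factor and population figure are invented placeholders; the EESF's actual models are certainly more elaborate.

```python
# Toy chain from dispersion to collective dose: ground-level Gaussian
# plume concentration (with ground reflection) times an assumed dose
# conversion factor and population count. All values are invented.
import math

def ground_conc(Q, u, y, sigma_y, sigma_z, H):
    """Ground-level plume concentration, Bq/m^3; sigma_y and sigma_z
    are the dispersion parameters at the downwind distance of interest."""
    return (Q / (math.pi * sigma_y * sigma_z * u)
            * math.exp(-y**2 / (2.0 * sigma_y**2))
            * math.exp(-H**2 / (2.0 * sigma_z**2)))

Q = 1.0e9      # release rate, Bq/s (assumed)
u = 3.0        # wind speed, m/s (assumed)
chi = ground_conc(Q, u, y=0.0, sigma_y=150.0, sigma_z=60.0, H=30.0)

DCF = 1.0e-9   # dose rate per unit air concentration, (Sv/s)/(Bq/m^3) (assumed)
people = 500   # population in this grid cell (assumed)
print(f"chi = {chi:.2e} Bq/m^3, "
      f"collective dose rate = {chi * DCF * people:.2e} person-Sv/s")
```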

  3. Monitored retrievable storage (MRS) facility and salt repository integration: Engineering study report

    International Nuclear Information System (INIS)

    1987-07-01

    This MRS Facility and Salt Repository Integration Study evaluates the impacts of an integrated MRS/Salt Repository waste management system on the design, operations, cost, and schedule of the salt repository surface facilities. Eight separate cases were studied, ranging from a two-phase repository design with no MRS facility to a design in which the repository only received packaged waste from the MRS facility for emplacement. The addition of the MRS facility to the waste management system significantly reduced the capital cost of the salt repository. All but one of the cases studied were capable of meeting the waste acceptance dates. The reduction in the size and complexity of the salt repository waste handling building with the integration of the MRS facility reduces the design and operating staff requirements. 7 refs., 35 figs., 43 tabs

  4. Summarisation of construction and commissioning experience for nuclear power integrated test facility

    International Nuclear Information System (INIS)

    Xiao Zejun; Jia Dounan; Jiang Xulun; Chen Bingde

    2003-01-01

    Since its foundation, the Nuclear Power Institute of China has designed various engineering experimental facilities, constructed a nuclear power experimental research base, and accumulated rich experience in the construction of nuclear power integrated test facilities. The author presents experience from the design, construction and commissioning of the nuclear power integrated test facility.

  5. ASCR Cybersecurity for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Piesert, Sean

    2015-02-27

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a significant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  6. Design Integration of Facilities Management

    DEFF Research Database (Denmark)

    Jensen, Per Anker

    2009-01-01

    One of the problems in the building industry is a limited degree of learning from experiences of use and operation of existing buildings. Development of professional facilities management (FM) can be seen as the missing link to bridge the gap between building operation and building design. Strategies, methods and barriers for the transfer and integration of operational knowledge into the design process are discussed. Multiple strategies are needed to improve the integration of FM in design, and building clients must take on a leading role in defining and setting up requirements and procedures. The discussion is based on literature studies and case studies from the Nordic countries in Europe, including research reflections on experiences from a main case study, where the author, before becoming a university researcher, was engaged in the client organization as deputy project director with responsibility for the integration

  7. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  8. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  9. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of the ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to perform billion-cell CFD calculations to develop shock-wave compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  10. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  11. Utilizing Computer Integration to Assist Nursing

    OpenAIRE

    Hujcs, Marianne

    1990-01-01

    As the use of computers in health care continues to increase, methods of using these computers to assist nursing practice are also increasing. This paper describes how integration within a hospital information system (HIS) contributed to the development of a report format and computer generated alerts used by nurses. Discussion also includes how the report and alerts impact those nurses providing bedside care as well as how integration of an HIS creates challenges for nursing.

  12. Evaluation of scaling concepts for integral system test facilities

    International Nuclear Information System (INIS)

    Condie, K.G.; Larson, T.K.; Davis, C.B.

    1987-01-01

    A study was conducted by EG&G Idaho, Inc., to identify and technically evaluate potential concepts that will allow the U.S. Nuclear Regulatory Commission to maintain the capability to conduct future integral thermal-hydraulic facility experiments of interest to light water reactor safety. This paper summarizes the methodology used in the study and presents rankings for each facility concept relative to its ability to simulate phenomena identified as important in selected reactor transients in Babcock and Wilcox and Westinghouse large pressurized water reactors. Established scaling methodologies are used to develop potential concepts for scaled integral thermal-hydraulic experiment facilities. The concepts selected included: full height, full pressure water; reduced height, reduced pressure water; reduced height, full pressure water; one-tenth linear, full pressure water; and reduced height, full scaled pressure Freon. Results from this study suggest that a facility capable of operating at typical reactor operating conditions will scale most phenomena reasonably well. Local heat transfer phenomena are best scaled by the full height facility, while the reduced height facilities provide better scaling where multi-dimensional phenomena are considered important. Although many phenomena in facilities using Freon or water at nontypical pressure will scale reasonably well, phenomena that are heavily dependent on quality can be distorted. Furthermore, relating data produced in facilities operating with nontypical fluids or at nontypical pressures to large plants will be a difficult and time-consuming process.

  13. Design of an integrated non-destructive plutonium assay facility

    International Nuclear Information System (INIS)

    Moore, C.B.

    1984-01-01

    The Department of Energy requires improved technology for nuclear materials accounting as an essential part of new plutonium processing facilities. New facilities are being constructed at the Savannah River Plant by the Du Pont Company, Operating Contractor, to recover plutonium from scrap and waste material generated at SRP and other DOE contract processing facilities. This paper covers design concepts and planning required to incorporate state-of-the-art plutonium assay instruments developed at several national laboratories into an integrated, at-line nuclear material accounting facility operating in the production area. 3 figures

  14. Integrated Human Test Facilities at NASA and the Role of Human Engineering

    Science.gov (United States)

    Tri, Terry O.

    2002-01-01

    Integrated human test facilities are a key component of NASA's Advanced Life Support Program (ALSP). Over the past several years, the ALSP has been developing such facilities to serve as a large-scale advanced life support and habitability test bed capable of supporting long-duration evaluations of integrated bioregenerative life support systems with human test crews. These facilities, targeted for the evaluation of hypogravity-compatible life support and habitability systems to be developed for use on planetary surfaces, are currently in the development stage at the Johnson Space Center. The major test facilities comprise a set of interconnected chambers with a sealed internal environment, which will be outfitted with systems capable of supporting test crews of four individuals for periods exceeding one year. The advanced technology systems to be tested will consist of both biological and physicochemical components and will perform all required crew life support and habitability functions. This presentation describes the proposed test "missions" to be supported by these integrated human test facilities, the overall system architecture of the facilities, the current development status of the facilities, and the role that human engineering has played in their development.

  15. Integration of process computer systems to Cofrentes NPP

    International Nuclear Information System (INIS)

    Saettone Justo, A.; Pindado Andres, R.; Buedo Jimenez, J.L.; Jimenez Fernandez-Sesma, A.; Delgado Muelas, J.A.

    1997-01-01

    The existence of three different process computer systems in Cofrentes NPP and the ageing of two of them have led to the need for their integration into a single real time computer system, known as Integrated ERIS-Computer System (SIEC), which covers the functionality of the three systems: Process Computer (PC), Emergency Response Information System (ERIS) and Nuclear Calculation Computer (OCN). The paper describes the integration project developed, which has essentially consisted in the integration of PC, ERIS and OCN databases into a single database, the migration of programs from the old process computer into the new SIEC hardware-software platform and the installation of a communications programme to transmit all necessary data for OCN programs from the SIEC computer, which in the new configuration is responsible for managing the databases of the whole system. (Author)

  16. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, however, the contrast of the image decreases, which in turn requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum; the impact of this increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and to what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials.
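
    The momentum dependence of the resolution broadening can be made concrete with the standard Highland multiple-scattering formula; the short sketch below (our illustration, not code from the record) evaluates the RMS scattering angle of muons traversing 10 cm of copper at several momenta, using the PDG radiation length for copper.

```python
# Illustrative sketch of why multiple-scattering broadening falls with momentum:
# Highland formula theta0 = (13.6 MeV / (beta*c*p)) * sqrt(t) * (1 + 0.038*ln(t)),
# with t the thickness in radiation lengths. Material data are standard PDG values.
import math

X0_COPPER_CM = 1.436  # radiation length of copper (cm)

def highland_theta0(p_mev, thickness_cm, rad_length_cm, beta=1.0):
    """RMS projected scattering angle (rad) for a singly charged particle."""
    t = thickness_cm / rad_length_cm
    return (13.6 / (beta * p_mev)) * math.sqrt(t) * (1.0 + 0.038 * math.log(t))

for p in (1e3, 3e3, 1e4):  # muon momentum in MeV/c
    theta = highland_theta0(p, thickness_cm=10.0, rad_length_cm=X0_COPPER_CM)
    print(f"p = {p/1e3:>4.0f} GeV/c -> theta0 = {1e3 * theta:5.1f} mrad")
```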

  17. Study on system integration of robots operated in nuclear fusion facility and nuclear power plant facilities

    International Nuclear Information System (INIS)

    Oka, Kiyoshi

    2004-07-01

    Robots are now expected to serve in many fields, such as entertainment, welfare and disaster response. There are, however, only a limited number of robots that can work under actual conditions as complete robot systems. This is caused by the following: (1) a robot system cannot be realized merely by collecting the elemental technologies; (2) the performance of a robot is determined by that of the integrated system, which is composed of complicated elements with many functions; and (3) the respective elements have to be optimized within the integrated robot system, in good balance with one another, through examination, adjustment and improvement. Therefore, the system integration of a robot composed of a large number of elements is the most critical issue in realizing a robot system for actual use. In the present paper, I describe the approaches and elemental technologies needed to solve the system integration issues of two typical robot systems: one for maintenance in a nuclear fusion facility and one for rescue in an accident at nuclear power plant facilities. These robots work in place of humans under intense radiation and in restricted spaces. In particular, I propose a new approach to the system integration of robots for actual use, considering not only the environment and working conditions but also the restructuring and optimization of the required elemental technologies in good balance within the robot system. Based on this approach, I contribute to realizing robot systems that work under actual conditions for maintenance in nuclear fusion facilities and for rescue in accidents at nuclear power plant facilities. (author)

  18. Systems engineering applied to integrated safety management for high consequence facilities

    International Nuclear Information System (INIS)

    Barter, R; Morais, B.

    1998-01-01

    Integrated Safety Management is a concept that is being actively promoted by the U.S. Department of Energy as a means of assuring safe operation of its facilities. The concept involves the integration of safety precepts into work planning rather than adjusting for safe operations after defining the work activity. The systems engineering techniques used to design an integrated safety management system for a high-consequence research facility are described. An example is given to show how the concepts evolved with the system design.

  19. Integrated safeguards and facility design and operations

    International Nuclear Information System (INIS)

    Tape, J.W.; Coulter, C.A.; Markin, J.T.; Thomas, K.E.

    1987-01-01

    The integration of safeguards functions to deter or detect unauthorized actions by an insider requires the careful communication and management of safeguards-relevant information on a timely basis. The traditional separation of safeguards functions into physical protection, materials control, and materials accounting often inhibits important information flows. Redefining the major safeguards functions as authorization, enforcement, and verification, and careful attention to management of information from acquisition to organization, to analysis, to decision making can result in effective safeguards integration. The careful inclusion of these ideas in facility designs and operations will lead to cost-effective safeguards systems. The safeguards authorization function defines, for example, personnel access requirements, processing activities, and materials movements/locations that are permitted to accomplish the mission of the facility. Minimizing the number of authorized personnel, limiting the processing flexibility, and maintaining up-to-date flow sheets will facilitate the detection of unauthorized activities. Enforcement of the authorized activities can be achieved in part through the use of barriers, access control systems, process sensors, and health and safety information. Consideration of safeguards requirements during facility design can improve the enforcement function. Verification includes the familiar materials accounting activities as well as auditing and testing of the other functions

  20. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second, or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing: supercomputers capable of a million trillion calculations a second (1 exaflop) and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  1. Simulation of natural circulation on an integral type experimental facility, MASLWR

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Youngjong; Lim, Sungwon; Ha, Jaejoo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    The OSU MASLWR test facility was reconfigured to eliminate a recurring grounding problem and to improve facility reliability in anticipation of conducting an IAEA International Collaborative Standard Problem (ICSP). The purpose of the ICSP is to provide experimental data on flow instability phenomena under natural circulation conditions and on coupled containment/reactor vessel behavior in integral-type reactors, and to evaluate the capability of system codes to predict natural circulation phenomena for integral-type PWRs by simulating an integrated experiment. Natural circulation in the primary side at various core powers is analyzed for the integral-type experimental facility using the TASS/SMR code. The calculated steady-state primary flow is higher than the experimental value; when the initial flow is matched to the experiment, the calculated flow falls below the experimental value as power increases. The code predictions may be improved by applying a Reynolds number dependent form loss coefficient to accurately account for unrecoverable pressure losses.
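
    The record does not give the correlation used in TASS/SMR; as a hedged sketch, a Reynolds number dependent form loss coefficient is often written in the form K = A + B/Re^n, which raises unrecoverable losses at the low Reynolds numbers typical of natural circulation. All constants below are placeholders, not values from the study.

```python
# Hypothetical form loss correlation K(Re) = A + B/Re**n; the constants are
# illustrative placeholders, not TASS/SMR inputs.
def form_loss_coefficient(re, a=2.5, b=5.0e4, n=1.0):
    """Unrecoverable pressure-loss coefficient as a function of Reynolds number."""
    return a + b / re**n

def unrecoverable_dp(re, rho, velocity):
    """Form loss pressure drop dp = K * rho * v**2 / 2 (Pa)."""
    return form_loss_coefficient(re) * rho * velocity**2 / 2.0

# Low-Re natural circulation sees a much larger K than forced turbulent flow:
for re in (5e2, 5e3, 5e4):
    dp = unrecoverable_dp(re, rho=750.0, velocity=0.5)  # hot water, slow flow
    print(f"Re = {re:>7.0f} -> K = {form_loss_coefficient(re):6.1f}, dp = {dp:7.1f} Pa")
```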

  2. Integrated engineering system for nuclear facilities building

    International Nuclear Information System (INIS)

    Tomura, H.; Miyamoto, A.; Futami, F.; Yasuda, S.; Ohtomo, T.

    1995-01-01

    In the construction of buildings for nuclear facilities in Japan, construction companies are generally in charge of the building engineering work, coordinating with plant engineering. The integrated system for buildings described here (PROMOTE: PROductive MOdeling system for Total nuclear Engineering) is a building engineering system covering the entire life cycle of buildings for nuclear facilities. A three-dimensional (3D) building model (PRO-model) lies at the core of PROMOTE, and data are also shared with plant engineering systems. By providing these basic technical foundations, PROMOTE is oriented toward offering rational, high-quality engineering for such projects. The aim of the system is to provide a technical foundation in building engineering. This paper discusses the characteristics of buildings for nuclear facilities and outlines PROMOTE. (author)

  3. Australian national networked tele-test facility for integrated systems

    Science.gov (United States)

    Eshraghian, Kamran; Lachowicz, Stefan W.; Eshraghian, Sholeh

    2001-11-01

    The Australian Commonwealth government recently announced a grant of $4.75 million as part of a $13.5 million program to establish a world-class networked IC tele-test facility in Australia. The facility will be based on a state-of-the-art semiconductor tester located at Edith Cowan University in Perth that will operate as a virtual centre spanning Australia. Satellite nodes will be located at the University of Western Australia, Griffith University, Macquarie University, Victoria University and the University of Adelaide. The facility will provide vital equipment to take Australia to the frontier of critically important and expanding fields in microelectronics research and development. The tele-test network will provide a state-of-the-art environment for the electronics and microelectronics research and industry community around Australia to test and prototype Very Large Scale Integrated (VLSI) circuits and other System On a Chip (SOC) devices, prior to moving to the manufacturing stage. Such testing is absolutely essential to ensure that the device performs to specification. This paper presents the current context in which the testing facility is being established, the methodologies behind the integration of design and test strategies, and the target shape of the tele-test facility.

  4. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    Science.gov (United States)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  5. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Jayatilaka, B. [Fermilab; Levshina, T. [Fermilab; Sehgal, C. [Fermilab; Gardner, R. [Chicago U.; Rynge, M. [USC - ISI, Marina del Rey; Würthwein, F. [UC, San Diego

    2017-11-22

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  6. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1990-01-01

    A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES is designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, have a user-friendly input/output interface, and have quick turnaround. The CARES program is structured in a modular format; each module performs a specific type of analysis. The basic modules of the system are associated with capabilities for static, seismic and nonlinear analyses. This paper describes the various features which have been implemented into the Seismic Module of CARES version 1.0. In Section 2 a description of the Seismic Module is provided. The methodologies and computational procedures thus far implemented into the Seismic Module are described in Section 3. Finally, a complete demonstration of the computational capability of CARES in a typical soil-structure interaction analysis is given in Section 4 and conclusions are presented in Section 5. 5 refs., 4 figs
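
    The record does not reproduce CARES's algorithms; as an illustration of the building block behind such rapid seismic evaluations, the sketch below integrates a damped single-degree-of-freedom oscillator under ground acceleration with the standard Newmark average-acceleration scheme. The structure frequency, damping and input motion are invented for the example.

```python
# Illustrative only (not CARES source): Newmark-beta (average acceleration)
# time integration of a unit-mass damped SDOF oscillator driven by ground motion.
import numpy as np

def newmark_sdof(ag, dt, freq_hz, zeta=0.05, beta=0.25, gamma=0.5):
    """Relative displacement history for ground acceleration ag (m/s^2)."""
    w = 2.0 * np.pi * freq_hz
    k, c = w * w, 2.0 * zeta * w                 # stiffness and damping per unit mass
    u = np.zeros(len(ag))
    v, a = 0.0, -ag[0]                           # initial conditions from the EOM
    keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt * dt)
    for i in range(len(ag) - 1):
        peff = (-ag[i + 1]
                + u[i] / (beta * dt * dt) + v / (beta * dt) + (0.5 / beta - 1.0) * a
                + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (0.5 * gamma / beta - 1.0) * a))
        u[i + 1] = peff / keff
        a_new = (u[i + 1] - u[i]) / (beta * dt * dt) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v += dt * ((1.0 - gamma) * a + gamma * a_new)
        a = a_new
    return u

# Hypothetical 5 Hz structure under a decaying 2 Hz sine pulse:
dt = 0.005
t = np.arange(0.0, 2.0, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-t)
print(f"peak relative displacement: {np.abs(newmark_sdof(ag, dt, 5.0)).max():.4e} m")
```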

  7. A stand alone computer system to aid the development of mirror fusion test facility RF heating systems

    International Nuclear Information System (INIS)

    Thomas, R.A.

    1983-01-01

    The Mirror Fusion Test Facility (MFTF-B) control system architecture requires the Supervisory Control and Diagnostic System (SCDS) to communicate with a LSI-11 Local Control Computer (LCC) that in turn communicates via a fiber optic link to CAMAC based control hardware located near the machine. In many cases, the control hardware is very complex and requires a sizable development effort prior to being integrated into the overall MFTF-B system. One such effort was the development of the Electron Cyclotron Resonance Heating (ECRH) system. It became clear that a stand alone computer system was needed to simulate the functions of SCDS. This paper describes the hardware and software necessary to implement the SCDS Simulation Computer (SSC). It consists of a Digital Equipment Corporation (DEC) LSI-11 computer and a Winchester/Floppy disk operating under the DEC RT-11 operating system. All application software for MFTF-B is programmed in PASCAL, which allowed us to adapt procedures originally written for SCDS to the SSC. This nearly identical software interface means that software written during the equipment development will be useful to the SCDS programmers in the integration phase

  8. Computer-Assisted School Facility Planning with ONPASS.

    Science.gov (United States)

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  9. Integrated biofuel facility, with carbon dioxide consumption and power generation

    Energy Technology Data Exchange (ETDEWEB)

    Powell, E.E.; Hill, G.A. [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Chemical Engineering

    2009-07-01

    This presentation provided details of an economical design for a large-scale integrated biofuel facility for coupled production of bioethanol and biodiesel, with carbon dioxide capture and power generation. Several designs were suggested for both batch and continuous culture operations, taking into account all costs and revenues associated with complete plant integration. The microalgae species Chlorella vulgaris was cultivated in a novel photobioreactor (PBR) in order to consume industrial carbon dioxide (CO2). This photosynthetic culture can also act as a biocathode in a microbial fuel cell (MFC), which, when coupled to a typical yeast anodic half cell, results in a complete biological MFC. The photosynthetic MFC produces electricity as well as valuable biomass and by-products. The use of this novel photosynthetic microalgae cathodic half cell in an integrated biofuel facility was discussed. A series of novel PBRs for continuous operation can be integrated into a large-scale bioethanol facility, where the PBRs serve as cathodic half cells and are coupled to the existing yeast fermentation tanks, which act as anodic half cells. These coupled MFCs generate electricity for use within the biofuel facility. The microalgae growth provides oil for biodiesel production, in addition to the bioethanol from the yeast fermentation. The photosynthetic cultivation in the cathodic PBR also requires carbon dioxide, resulting in consumption of carbon dioxide from bioethanol production. The paper also discussed the effect of plant design on net present worth and internal rate of return. tabs., figs.

  10. Wind Energy Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laurie, Carol

    2017-02-01

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  11. Deterministic computation of functional integrals

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.

    1995-09-01

    A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in a complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions such as semi-classical or mean field approximations, collective excitations, or the introduction of "short-time" propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of an "ordinary" (Riemannian) integral of low dimension, thus allowing the use of the preferable deterministic algorithms (normally Gaussian quadratures) in computations rather than the traditional stochastic (Monte Carlo) methods which are commonly used for solution of the problem under consideration. The results of application of the method to computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing the results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely conditional Wiener measure in quantum statistical mechanics and functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
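
    The paper's own approximation formulas are not reproduced in this record; a toy illustration of the underlying idea is still easy to give. When a functional depends only on the endpoint W(1) ~ N(0,1) of a Wiener path, its functional-integral average collapses to a one-dimensional Gaussian integral, which a deterministic Gauss-Hermite rule evaluates far more cheaply than Monte Carlo sampling:

```python
# Toy illustration: a Wiener-measure average that depends only on the path
# endpoint W(1) ~ N(0,1) reduces to a 1-D Gaussian integral, evaluated here
# deterministically by Gauss-Hermite quadrature and compared with Monte Carlo.
import numpy as np

def gauss_hermite_expectation(f, order=20):
    """E[f(Z)] for Z ~ N(0,1) via an order-point Gauss-Hermite rule."""
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    return float((weights * f(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi))

f = lambda x: np.cos(x)                        # endpoint functional F[W] = cos(W(1))
exact = np.exp(-0.5)                           # E[cos Z] = exp(-1/2)
mc = f(np.random.default_rng(0).standard_normal(100_000)).mean()
print(f"quadrature  = {gauss_hermite_expectation(f):.12f}")
print(f"Monte Carlo = {mc:.6f}, exact = {exact:.12f}")
```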

  12. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

    The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, it faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark announced in IAEA-TECDOC-1694 in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance' was solved with the MCNPX ver. 2.6e computational code in support of the Generation IV computer code validation effort. IAEA-TECDOC-1694 summarizes a set of four calculational benchmark problems performed at the ASTRA critical facility. The benchmark problems include criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA Critical Facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)

  13. Oxy-Combustion Burner and Integrated Pollutant Removal Research and Development Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Mark Schoenfield; Manny Menendez; Thomas Ochs; Rigel Woodside; Danylo Oryshchyn

    2012-09-30

    A high flame temperature oxy-combustion test facility, consisting of a 5 MWe equivalent test boiler facility and a 20 KWe equivalent IPR®, was constructed at the Hammond, Indiana manufacturing site. The test facility was operated on natural gas and coal fuels, and parametric studies were performed to determine the optimal performance conditions and to generate the technical data required to demonstrate that the technologies are viable for technical and economic scale-up. Flame temperatures between 4930 and 6120°F were achieved with high flame temperature oxy-natural gas combustion, depending on whether additional recirculated flue gases were added to balance the heat transfer. For high flame temperature oxy-coal combustion, flame temperatures in excess of 4500°F were achieved and demonstrated to be consistent with computational fluid dynamic modeling of the burner system. The project demonstrated the feasibility and effectiveness of the Jupiter Oxygen high flame temperature oxy-combustion process with the Integrated Pollutant Removal process for CCS and CCUS. With these technologies, total parasitic power requirements for both oxygen production and carbon capture are currently in the range of 20% of the gross power output. The Jupiter Oxygen high flame temperature oxy-combustion process has been demonstrated at a Technology Readiness Level of 6 and is ready for commencement of a demonstration project.

  14. Development of computer model for radionuclide released from shallow-land disposal facility

    International Nuclear Information System (INIS)

    Suganda, D.; Sucipta; Sastrowardoyo, P.B.; Eriendi

    1998-01-01

    A one-dimensional computer model for radionuclide release from a shallow land disposal facility (SLDF) has been developed. The model is applied to the SLDF at PPTA Serpong, which sits 1.8 metres above the groundwater and 150 metres from the Cisalak river. An implicit finite-difference numerical method was chosen to predict the migration of radionuclides at any concentration. The migration proceeds vertically from the bottom of the SLDF to the groundwater layer, then horizontally in the groundwater to the critical population group. The radionuclide Cs-137 was chosen as a sample to study its migration. The results of the assessment show that the SLDF at PPTA Serpong meets high safety criteria. (author)
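
    The record names the method but not its discretization; a minimal sketch of an implicit (backward-Euler) finite-difference scheme for the 1-D advection-dispersion-decay equation dC/dt = D d²C/dx² - v dC/dx - λC is given below. All transport parameters are illustrative placeholders, not values from the PPTA Serpong assessment.

```python
# Minimal sketch (parameters are placeholders, not the PPTA Serpong values) of a
# backward-Euler finite-difference step for dC/dt = D*d2C/dx2 - v*dC/dx - lam*C.
import numpy as np

def implicit_step(c, dx, dt, D, v, lam):
    """One implicit step; fixed source concentration at x=0, free outflow at x=L."""
    n = len(c)
    A = np.zeros((n, n))
    r, s = D * dt / dx**2, v * dt / (2.0 * dx)
    for i in range(1, n - 1):
        A[i, i - 1] = -(r + s)
        A[i, i] = 1.0 + 2.0 * r + lam * dt
        A[i, i + 1] = -(r - s)
    A[0, 0] = 1.0                        # Dirichlet boundary: source held fixed
    A[-1, -2], A[-1, -1] = -1.0, 1.0     # zero-gradient outflow boundary
    b = c.copy()
    b[-1] = 0.0
    return np.linalg.solve(A, b)

# Cs-137 (30.17 y half-life) migrating through 10 m of soil over 30 years:
lam = np.log(2.0) / (30.17 * 3.156e7)    # decay constant (1/s)
c = np.zeros(101)
c[0] = 1.0                               # unit relative concentration at the source
for _ in range(360):                     # monthly time steps
    c = implicit_step(c, dx=0.1, dt=2.63e6, D=1e-9, v=1e-9, lam=lam)
print(f"relative concentration 5 m below the source: {c[50]:.3e}")
```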

  15. MIMI: multimodality, multiresource, information integration environment for biomedical core facilities.

    Science.gov (United States)

    Szymanski, Jacek; Wilson, David L; Zhang, Guo-Qiang

    2009-10-01

    The rapid expansion of biomedical research has brought substantial scientific and administrative data management challenges to modern core facilities. Scientifically, a core facility must be able to manage experimental workflow and the corresponding set of large and complex scientific data. It must also disseminate experimental data to relevant researchers in a secure and expedient manner that facilitates collaboration and provides support for data interpretation and analysis. Administratively, a core facility must be able to manage the scheduling of its equipment and to maintain a flexible and effective billing system to track material, resource, and personnel costs and charge for services to sustain its operation. It must also have the ability to regularly monitor the usage and performance of its equipment and to provide summary statistics on resources spent on different categories of research. To address these informatics challenges, we introduce a comprehensive system called MIMI (multimodality, multiresource, information integration environment) that integrates the administrative and scientific support of a core facility into a single web-based environment. We report the design, development, and deployment experience of a baseline MIMI system at an imaging core facility and discuss the general applicability of such a system in other types of core facilities. These initial results suggest that MIMI will be a unique, cost-effective approach to addressing the informatics infrastructure needs of core facilities and similar research laboratories.

  16. The grand challenge of managing the petascale facility.

    Energy Technology Data Exchange (ETDEWEB)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected

  17. Strategic interaction among hospitals and nursing facilities: the efficiency effects of payment systems and vertical integration.

    Science.gov (United States)

    Banks, D; Parker, E; Wendel, J

    2001-03-01

    Rising post-acute care expenditures for Medicare transfer patients and increasing vertical integration between hospitals and nursing facilities raise questions about the links between payment system structure, the incentive for vertical integration and the impact on efficiency. In the United States, policy-makers are responding to these concerns by initiating prospective payments to nursing facilities, and are exploring the bundling of payments to hospitals. This paper develops a static profit-maximization model of the strategic interaction between the transferring hospital and a receiving nursing facility. This model suggests that the post-1984 system of prospective payment for hospital care, coupled with nursing facility payments that reimburse for services performed, induces inefficient under-provision of hospital services and encourages vertical integration. It further indicates that the extension of prospective payment to nursing facilities will not eliminate the incentive to vertically integrate, and will not result in efficient production unless such integration takes place. Bundling prospective payments for hospitals and nursing facilities will neither remove the incentive for vertical integration nor induce production efficiency without such vertical integration. However, bundled payment will induce efficient production, with or without vertical integration, if nursing facilities are reimbursed for services performed. Copyright 2001 John Wiley & Sons, Ltd.

  18. Criticality safety considerations. Integral Monitored Retrievable Storage (MRS) Facility

    International Nuclear Information System (INIS)

    1986-09-01

    This report summarizes the criticality analysis performed to address criticality safety concerns and to support facility design during the conceptual design phase of the Monitored Retrievable Storage (MRS) Facility. The report addresses the criticality safety concerns, the design features of the facility relative to criticality, and the results of the analysis of both normal operating and hypothetical off-normal conditions. Key references are provided (Appendix C) if additional information is desired by the reader. The MRS Facility design was developed and the related analysis was performed in accordance with the MRS Facility Functional Design Criteria and the Basis for Design. The detailed description and calculations are documented in the Integral MRS Facility Conceptual Design Report. In addition to the summary portion of this report, explanatory notes for various terms, calculation methodology, and design parameters are presented in Appendix A. Appendix B provides a brief glossary of technical terms

  19. Adequacy of power-to-volume scaling philosophy to simulate natural circulation in Integral Test Facilities

    International Nuclear Information System (INIS)

    Nayak, A.K.; Vijayan, P.K.; Saha, D.; Venkat Raj, V.; Aritomi, Masanori

    1998-01-01

    Theoretical and experimental investigations were carried out to study the adequacy of the power-to-volume scaling philosophy for the simulation of natural circulation and to establish the scaling philosophy applicable to the design of the Integral Test Facility (ITF-AHWR) for the Indian Advanced Heavy Water Reactor (AHWR). The results indicate that a reduction in the flow channel diameter of the scaled facility, as required by the power-to-volume scaling philosophy, may affect the simulation of the natural circulation behaviour of the prototype plants. This is caused by distortions arising from the inability to simulate the frictional resistance of the prototype in the scaled facility. Hence, it is recommended that the flow channel diameter of the scaled facility be as close as possible to that of the prototype. This was verified by comparing the natural circulation behaviour of a prototype 220 MWe Indian PHWR and its scaled facility (FISBE-1), designed on the basis of the power-to-volume scaling philosophy. Examinations using a mathematical model and a computer code suggest that FISBE-1 simulates the steady state and the general trend of transient natural circulation behaviour of the prototype reactor adequately. Finally, the proposed scaling method was applied to the design of the ITF-AHWR. (author)
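
    A back-of-envelope check (our illustration, with invented dimensions and a hot-water kinematic viscosity) shows the friction distortion the authors point to: at preserved height and velocity, shrinking the channel diameter inflates the friction resistance fL/D, here estimated with the Blasius correlation.

```python
# Hypothetical numbers illustrating the friction distortion of a reduced-diameter
# scaled channel; Blasius f = 0.316*Re**-0.25 (smooth-pipe turbulent flow).
def friction_number(diameter_m, length_m, velocity=2.0, nu=1.3e-7):
    """f*L/D for a channel of given diameter and length (velocity in m/s)."""
    re = velocity * diameter_m / nu
    f = 0.316 * re**-0.25
    return f * length_m / diameter_m

proto = friction_number(diameter_m=0.020, length_m=5.0)   # prototype-like channel
model = friction_number(diameter_m=0.005, length_m=5.0)   # reduced-diameter model
print(f"fL/D prototype = {proto:.1f}, model = {model:.1f}, ratio = {model/proto:.1f}")
```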

  20. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1991-01-01

    In the process of review and evaluation of licensing issues related to nuclear power plants, it is essential to understand the behavior of seismic loading, foundation and structural properties, and their impact on the overall structural response. In most cases, such knowledge can be obtained by using simplified engineering models which, when properly implemented, capture the essential parameters describing the physics of the problem. Such models do not require execution on large computer systems and can be implemented through a personal computer (PC) based capability. Recognizing the need for a PC software package that can perform the structural response computations required for typical licensing reviews, the US Nuclear Regulatory Commission sponsored the development of the PC-operated computer software package CARES (Computer Analysis for Rapid Evaluation of Structures). This development was undertaken by Brookhaven National Laboratory (BNL) during FY's 1988 and 1989. A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants, and it is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES is designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, have a user-friendly input/output interface, and have quick turnaround. This paper describes the various features which have been implemented into the seismic module of CARES version 1.0

  1. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  2. The Origin and Constitution of Facilities Management as an integrated corporate function

    DEFF Research Database (Denmark)

    Jensen, Per Anker

    2008-01-01

    Purpose – To understand how facilities management (FM) has evolved over time in a complex public corporation from internal functions of building operation and building client and the related service functions to become an integrated corporate function. Design/methodology/approach – The paper … is based on results from a research project on space strategies and building values, which included a major longitudinal case study of the development of facilities for the Danish Broadcasting Corporation (DR) over time. The research presented here included literature studies, archive studies … and a fully integrated corporate Facilities Management function are established. Research limitations/implications – The paper presents empirical evidence of the historical development of FM from one case and provides a deeper understanding of the integration processes that are crucial to FM and which can

  3. The Overview of the National Ignition Facility Distributed Computer Control System

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Carey, R.A.; Estes, C.M.; Fisher, J.M.; Krammen, J.E.; Reed, R.K.; VanArsdall, P.J.; Woodruff, J.P.

    2001-01-01

    The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer is divided into another segment comprised of an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented by asynchronous transfer mode (ATM) links that deliver video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed-language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008

  4. PANDA: A Multipurpose Integral Test Facility for LWR Safety Investigations

    International Nuclear Information System (INIS)

    Paladino, D.; Dreier, J.

    2012-01-01

    The PANDA facility is a large-scale, multicompartmental thermal hydraulic facility suited for investigations related to the safety of current and advanced LWRs. The facility is multipurpose, and its applications cover integral containment response tests, component tests, primary system tests, and separate effect tests. Experimental investigations carried out in the PANDA facility have been embedded in international projects, most of them under the auspices of the EU and OECD and with the support of a large number of organizations (regulatory bodies, technical support organizations, national laboratories, electric utilities, industries) worldwide. The paper provides an overview of the research programs performed in the PANDA facility in relation to BWR containment systems and those planned for PWR containment systems.

  5. Assessment of the structural shielding integrity of some selected computed tomography facilities in the Greater Accra Region of Ghana

    International Nuclear Information System (INIS)

    Nkansah, A.

    2010-01-01

    The structural shielding integrity was assessed for four CT facilities: the Trust Hospital, the Korle-Bu Teaching Hospital, the 37 Military Hospital and Medical Imaging Ghana Ltd., all in the Greater Accra Region of Ghana. From the shielding calculations, the concrete wall thicknesses computed are 120, 145, 140 and 155 mm for Medical Imaging Ghana Ltd., 37 Military Hospital, Trust Hospital and Korle-Bu Teaching Hospital, respectively, using default DLP values. The wall thicknesses using derived DLP values are 110, 110, 120 and 168 mm for the same facilities, respectively. These values are within the accepted standard concrete thickness of 102-152 mm prescribed by the National Council on Radiation Protection and Measurements. The ultrasonic pulse testing indicated that all the sandcrete walls are of good quality and free of voids, since the estimated pulse velocities were approximately equal to 3.45 km/s. The average dose rate measured for supervised areas is 3.4 μSv/wk and for controlled areas 18.0 μSv/wk. These dose rates are below the acceptable levels of 100 μSv per week for the occupationally exposed and 20 μSv per week for members of the public provided by the ICRU. The results mean that the structural shielding thicknesses are adequate to protect members of the public and occupationally exposed workers. (au)

  6. Shieldings for X-ray radiotherapy facilities calculated by computer

    International Nuclear Information System (INIS)

    Pedrosa, Paulo S.; Farias, Marcos S.; Gavazza, Sergio

    2005-01-01

    This work presents a methodology for computer-aided calculation of X-ray shielding in radiotherapy facilities. Even today, in Brazil, shielding for X-ray radiotherapy is calculated on the basis of the NCRP-49 recommendation, which establishes the methodology required to develop a shielding design. With regard to high energies, where the construction of a maze is necessary, NCRP-49 is not very clear; studies in this field resulted in an article that proposes a solution to the problem. A user-friendly program was developed in the Delphi programming language which, through manual entry of a basic architectural design and some parameters, interprets the geometry and calculates the shielding of the walls, ceiling and floor of an X-ray radiotherapy facility. As the final product, the program provides a graphical screen with all the input data, the calculated shielding, and the calculation memory. The program can be applied in the practical implementation of shielding projects for radiotherapy facilities and can be used didactically in comparison with NCRP-49.
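
    For a primary barrier, the NCRP-49 methodology that such a program automates reduces to a permitted transmission B = P d² / (W U T), converted to a thickness through tenth-value layers; the sketch below uses illustrative inputs and a placeholder TVL rather than values from the paper.

```python
# Hedged sketch of the NCRP-49 primary-barrier computation such a program
# automates: permitted transmission B = P*d^2/(W*U*T), then thickness from
# tenth-value layers. The TVL and workload numbers are placeholders.
import math

def primary_barrier(P, d, W, U, T, tvl_mm):
    """P: design limit (Sv/wk); d: distance (m); W: workload (Gy/wk at 1 m);
    U: use factor; T: occupancy factor; tvl_mm: tenth-value layer (mm)."""
    B = P * d**2 / (W * U * T)           # permitted barrier transmission
    n_tvl = math.log10(1.0 / B)          # number of tenth-value layers needed
    return B, n_tvl * tvl_mm

# Controlled area behind the primary barrier of a 6 MV room (illustrative):
B, thickness = primary_barrier(P=1e-4, d=5.0, W=500.0, U=0.25, T=1.0, tvl_mm=350.0)
print(f"transmission B = {B:.2e} -> concrete thickness ~ {thickness:.0f} mm")
```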

  7. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  8. Computer codes for ventilation in nuclear facilities

    International Nuclear Information System (INIS)

    Mulcey, P.

    1987-01-01

    In this paper the authors present several computer codes, developed in recent years, for ventilation and radiation protection. These codes are used for safety analysis in the design, operation and dismantling of nuclear facilities. The authors present in particular: the DACC1 code, used for aerosol deposition in the sampling circuits of radiation monitors; the PIAF code, used for modelling complex ventilation systems; and the CLIMAT 6 code, used for optimizing air-conditioning systems [fr

  9. An integrated computational tool for precipitation simulation

    Science.gov (United States)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer aided materials design is of increasing interest because the conventional approach solely relying on experimentation is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.
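
    PanPrecipitation's internal models are not described in this record; as a hedged illustration of the kind of kinetics such engines couple to CALPHAD thermodynamics, the sketch below evaluates a classical-nucleation-theory rate from a volumetric driving force. Every material parameter here (interfacial energy, site density, kinetic prefactor) is a placeholder.

```python
# Illustrative classical-nucleation-theory rate (not PanPrecipitation code);
# sigma (J/m^2), n0 (sites/m^3) and zbeta (1/s) are hypothetical placeholders.
import math

KB = 1.380649e-23  # Boltzmann constant (J/K)

def nucleation_rate(dgv, sigma=0.15, n0=1e28, zbeta=1e10, temp=800.0):
    """Steady-state J = n0 * zbeta * exp(-dG*/kT), dgv = driving force (J/m^3)."""
    dg_star = 16.0 * math.pi * sigma**3 / (3.0 * dgv**2)  # critical nucleation work
    return n0 * zbeta * math.exp(-dg_star / (KB * temp))

# The rate is extremely sensitive to the thermodynamic driving force:
for dgv in (2e8, 3e8, 4e8):
    print(f"dGv = {dgv:.0e} J/m^3 -> J = {nucleation_rate(dgv):.3e} m^-3 s^-1")
```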

  10. Assessing the economic feasibility of flexible integrated gasification Co-generation facilities

    NARCIS (Netherlands)

    Meerman, J.C.; Ramírez Ramírez, C.A.; Turkenburg, W.C.; Faaij, A.P.C.

    2011-01-01

    This paper evaluated the economic effects of introducing flexibility to state-of-the-art integrated gasification co-generation (IG-CG) facilities equipped with CO2 capture. In a previous paper the technical and energetic performances of these flexible IG-CG facilities were evaluated. This paper

  11. Computer control and data acquisition system for the R.F. Test Facility

    International Nuclear Information System (INIS)

    Stewart, K.A.; Burris, R.D.; Mankin, J.B.; Thompson, D.H.

    1986-01-01

    The Radio Frequency Test Facility (RFTF) at Oak Ridge National Laboratory, used to test and evaluate high-power ion cyclotron resonance heating (ICRH) systems and components, is monitored and controlled by a multicomponent computer system. This data acquisition and control system consists of three major hardware elements: (1) an Allen-Bradley PLC-3 programmable controller; (2) a VAX 11/780 computer; and (3) a CAMAC serial highway interface. Operating in LOCAL as well as REMOTE mode, the programmable logic controller (PLC) performs all the control functions of the test facility. The VAX computer acts as the operator's interface to the test facility by providing color mimic panel displays and allowing input via a trackball device. The VAX also provides archiving of trend data acquired by the PLC. Communications between the PLC and the VAX are via the CAMAC serial highway. Details of the hardware, software, and the operation of the system are presented in this paper

  12. ICAT: Integrating data infrastructure for facilities based science

    International Nuclear Information System (INIS)

    Flannery, Damian; Matthews, Brian; Griffin, Tom; Bicarregui, Juan; Gleaves, Michael; Lerusse, Laurent; Downing, Roger; Ashton, Alun; Sufi, Shoaib; Drinkwater, Glen; Kleese van Dam, Kerstin

    2009-01-01

    Scientific facilities, in particular large-scale photon and neutron sources, have demanding requirements to manage the increasing quantities of experimental data they generate in a systematic and secure way. In this paper, we describe the ICAT infrastructure for cataloguing facility-generated experimental data, which has been in development within STFC and DLS for several years. We consider the factors which have influenced its design and describe its architecture and metadata model, a key tool in the management of data. We go on to give an outline of its current implementation and use, with plans for its future development.

  13. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    Science.gov (United States)

    2017-05-08

    AFRL-AFOSR-VA-TR-2017-0102: Integrated Optoelectronic Networks for Application-Driven Multicore Computing. Sudeep Pasricha, Colorado State University; AFOSR grant FA9550-13-1-0110. "…and supportive materials with innovative architectural designs that integrate these components according to system-wide application needs."

  14. Westinghouse integrated cementation facility. Smart process automation minimizing secondary waste

    International Nuclear Information System (INIS)

    Fehrmann, H.; Jacobs, T.; Aign, J.

    2015-01-01

    The Westinghouse Cementation Facility described in this paper is an example of a typical standardized turnkey project in the area of waste management. The facility is able to handle NPP waste such as evaporator concentrates, spent resins and filter cartridges. The facility scope covers all equipment required for a fully integrated system, including all auxiliary equipment for the hydraulic, pneumatic and electrical control systems. The control system is based on current PLC technology and the process is highly automated. The equipment is designed to be operated remotely, under radiation exposure conditions. Four cementation facilities have been built for new CPR-1000 nuclear power stations in China

  15. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    Krammen, J.

    1985-01-01

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  16. Integration of knowledge management system for the decommissioning of nuclear facilities

    International Nuclear Information System (INIS)

    Iguchi, Yukihiro; Yanagihara, Satoshi

    2016-01-01

    The decommissioning of a nuclear facility is a long-term project, handling information that originates in the design, construction and operation of the facility. Moreover, decommissioning projects are likely to be extended because of the lack of a waste disposal site, especially in Japan. In this situation, the transfer of knowledge and education to the next generation becomes a crucial issue, and the integration and implementation of a knowledge management system is necessary to address it. For this purpose, a total decommissioning knowledge management system (KMS) is proposed. In this system, the data and information on plant design, maintenance history, trouble events, waste management records, etc. have to be arranged, organized and systematized. The collected data, information and records should be organized by a computer support system, e.g. a database system, and become the basis of the explicit knowledge. Moreover, measures for extracting tacit knowledge from retiring employees are necessary; the experience of retirees should be documented as much as possible through effective questionnaires or interviews. The integrated knowledge mentioned above should be used for the planning and implementation of dismantlement and for the education of future generations. (author)

  17. Computing one of Victor Moll's irresistible integrals with computer algebra

    Directory of Open Access Journals (Sweden)

    Christoph Koutschan

    2008-04-01

    We investigate a certain quartic integral from V. Moll's book “Irresistible Integrals” and demonstrate how it can be solved by computer algebra methods, namely by using non-commutative Gröbner bases. We present recent implementations in the computer algebra systems SINGULAR and MATHEMATICA.
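
    The quartic integral in question is N(a;m) = ∫₀^∞ dx/(x⁴+2ax²+1)^(m+1); at m = 0 it has the elementary closed form π/(2√(2(1+a))), which makes for a quick numerical cross-check (our example, independent of the Gröbner-basis derivation):

```python
# Cross-check of Moll's quartic integral at m = 0:
# N(a;0) = int_0^inf dx/(x^4 + 2a x^2 + 1) = pi / (2*sqrt(2*(1+a))).
import math
from scipy.integrate import quad

def n0_numeric(a):
    """N(a;0) by adaptive numerical quadrature."""
    val, _err = quad(lambda x: 1.0 / (x**4 + 2.0*a*x**2 + 1.0), 0.0, math.inf)
    return val

for a in (0.5, 1.0, 3.0):
    closed = math.pi / (2.0 * math.sqrt(2.0 * (1.0 + a)))
    print(f"a = {a}: quadrature = {n0_numeric(a):.12f}, closed form = {closed:.12f}")
```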

  18. Path-integral computation of superfluid densities

    International Nuclear Information System (INIS)

    Pollock, E.L.; Ceperley, D.M.

    1987-01-01

    The normal and superfluid densities are defined by the response of a liquid to sample boundary motion. The free-energy change due to uniform boundary motion can be calculated by path-integral methods from the distribution of the winding number of the paths around a periodic cell. This provides a conceptually and computationally simple way of calculating the superfluid density for any Bose system. The linear-response formulation relates the superfluid density to the momentum-density correlation function, which has a short-ranged part related to the normal density and, in the case of a superfluid, a long-ranged part whose strength is proportional to the superfluid density. These facts are discussed in the context of path-integral computations and demonstrated for liquid ⁴He along the saturated vapor-pressure curve. Below the experimental superfluid transition temperature the computed superfluid fractions agree with the experimental values to within the statistical uncertainties of a few percent in the computations. The computed transition is broadened by finite-sample-size effects.
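
    For reference, the winding-number relation this abstract describes is usually quoted, in three dimensions, in the following form (standard notation, sketched by the editor rather than transcribed from the paper): the superfluid fraction follows from the mean-square winding number ⟨W²⟩ of the particle paths in a periodic cube of side L containing N particles at inverse temperature β = 1/k_BT,

```latex
\frac{\rho_s}{\rho} \;=\; \frac{m\,\langle W^{2}\rangle\,L^{2}}{3\,\hbar^{2}\,\beta\,N}
```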

  19. Development of integrated platform for computational material design

    Energy Technology Data Exchange (ETDEWEB)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato [Center for Computational Science and Engineering, Fuji Research Institute Corporation (Japan); Hideaki, Koike [Advance Soft Corporation (Japan)

    2003-07-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large, complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, designed for the PSE in the Japanese national project Frontier Simulation Software for Industrial Science, supports the entire range of problem-solving activity, from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is a new architecture called TASK FLOW, which integrates computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and to the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover, this system will provide the best solution for developing large and complicated software and for simulating complex and large-scale phenomena in computational science and engineering. A prototype has already been developed, and the validation and verification of the integrated platform are scheduled for 2003 using this prototype. In the validation and verification, a fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As further examples of validation and verification, integrated platforms for quantum chemistry and bio-mechanical systems are planned.

  20. Development of integrated platform for computational material design

    International Nuclear Information System (INIS)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato; Hideaki, Koike

    2003-01-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large, complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, designed for the PSE in the Japanese national project Frontier Simulation Software for Industrial Science, supports the entire range of problem-solving activity, from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is a new architecture called TASK FLOW, which integrates computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and to the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover, this system will provide the best solution for developing large and complicated software and for simulating complex and large-scale phenomena in computational science and engineering. A prototype has already been developed, and the validation and verification of the integrated platform are scheduled for 2003 using this prototype. In the validation and verification, a fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As further examples of validation and verification, integrated platforms for quantum chemistry and bio-mechanical systems are planned

  1. Probabilistic data integration and computational complexity

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem either through an analytical description of the combined probability function, or by sampling the probability function. In practice, however, probabilistically formulated data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Because one type of information is too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem, with no biases. In another case it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem, with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and can lead to biased results, and under
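
    As an illustration of the probabilistic formulation described above, the sketch below combines two sources of information about a scalar Earth property as a product of densities and samples the result with a Metropolis walk. The densities, step size and chain length are illustrative assumptions, not the study's models.

```python
# Minimal sketch of probabilistic data integration by sampling: two
# independent information sources about a scalar property m, each quantified
# as a (log-)density, combined by summation of log densities and sampled
# with a Metropolis random walk.
import numpy as np

rng = np.random.default_rng(0)

def log_rho_geology(m):     # prior from, e.g., geological knowledge (assumed)
    return -0.5 * ((m - 2.0) / 0.5) ** 2

def log_rho_geophysics(m):  # likelihood from a geophysical measurement (assumed)
    return -0.5 * ((m - 2.3) / 0.2) ** 2

def log_posterior(m):       # conjunction of the two information sources
    return log_rho_geology(m) + log_rho_geophysics(m)

samples, m = [], 2.0
for _ in range(20000):
    prop = m + 0.1 * rng.standard_normal()   # random-walk proposal
    if np.log(rng.random()) < log_posterior(prop) - log_posterior(m):
        m = prop                             # Metropolis accept
    samples.append(m)

print(np.mean(samples), np.std(samples))     # combined mean and spread
```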

  2. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  3. Mixed Waste Treatment Project: Computer simulations of integrated flowsheets

    International Nuclear Information System (INIS)

    Dietsche, L.J.

    1993-12-01

    The disposal of mixed waste, that is, waste containing both hazardous and radioactive components, is a challenging waste management problem of particular concern to DOE sites throughout the United States. Traditional technologies used for the destruction of hazardous wastes need to be re-evaluated for their ability to handle mixed wastes, and in some cases new technologies need to be developed. The Mixed Waste Treatment Project (MWTP) was set up by DOE's Waste Operations Program (EM30) to provide guidance on mixed waste treatment options. One of MWTP's charters is to develop flowsheets for prototype integrated mixed waste treatment facilities which can serve as models for sites developing their own treatment strategies. Evaluation of these flowsheets is being facilitated through the use of computer modelling. The objective of the flowsheet simulations is to provide mass and energy balances, product compositions, and equipment sizing (leading to cost) information. The modelled flowsheets need to be easily modified to examine how alternative technologies and varying feed streams affect the overall integrated process. One such commercially available simulation program is ASPEN PLUS. This report contains details of the Aspen Plus program
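
    For flavor, the sketch below shows the kind of steady-state mass balance such flowsheet simulations produce, with treatment units reduced to per-component split fractions. The actual work used ASPEN PLUS; the stream composition and split fractions here are illustrative assumptions.

```python
# Minimal steady-state mass-balance sketch for a two-unit flowsheet.
# Streams are dicts of component mass flows (kg/h); units are modeled as
# per-component splitters. All numbers are illustrative assumptions.
feed = {"water": 800.0, "organics": 150.0, "radionuclides": 50.0}

def splitter(stream, fractions):
    """Split a stream into (overhead, bottoms) by per-component fractions."""
    over = {c: m * fractions.get(c, 0.0) for c, m in stream.items()}
    bott = {c: m - over[c] for c, m in stream.items()}
    return over, bott

# Evaporator: most water goes overhead; radionuclides stay in the concentrate.
vapor, concentrate = splitter(feed, {"water": 0.95, "organics": 0.10})
# Thermal unit on the concentrate: organics destroyed to off-gas, ash retains the rest.
offgas, ash = splitter(concentrate, {"organics": 0.999, "water": 1.0})

total_out = sum(vapor.values()) + sum(offgas.values()) + sum(ash.values())
assert abs(total_out - sum(feed.values())) < 1e-9   # overall mass balance closes
print(ash)
```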

  4. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities, primarily because most current CAL facilities are not friendly to visually impaired users. People with visual impairment also do not normally have access to…

  5. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.

  6. Computer Security at Nuclear Facilities (French Edition)

    International Nuclear Information System (INIS)

    2013-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. This publication is in the Technical Guidance

  7. BWR Full Integral Simulation Test (FIST) program: facility description report

    International Nuclear Information System (INIS)

    Stephens, A.G.

    1984-09-01

    A new boiling water reactor safety test facility (FIST, Full Integral Simulation Test) is described. It will be used to investigate small breaks and operational transients and to tie results from such tests to earlier large-break test results determined in the TLTA. The new facility's full height and prototypical components constitute a major scaling improvement over earlier test facilities. A heated feedwater system, permitting steady-state operation, and a large increase in the number of measurements are other significant improvements. The program background is outlined and program objectives defined. The design basis is presented together with a detailed, complete description of the facility and measurements to be made. An extensive component scaling analysis and prediction of performance are presented

  8. An integrated approach for facilities planning by ELECTRE method

    Science.gov (United States)

    Elbishari, E. M. Y.; Hazza, M. H. F. Al; Adesta, E. Y. T.; Rahman, Nur Salihah Binti Abdul

    2018-01-01

    Facility planning is concerned with the design, layout, and accommodation of people, machines and activities of a system. Most researchers investigate the production area layout and the related facilities; few investigate the relationship between the production space and the service departments. The aim of this research is to integrate different approaches in order to evaluate, analyse and select the best facilities planning method, one able to explain the relationship between the production area and other supporting departments and its effect on human effort. To achieve this objective, two different approaches have been integrated: Apple's layout procedure, one of the effective tools in planning factories, and the ELECTRE method, one of the multi-criteria decision-making (MCDM) methods, used to minimize the risk of poor facilities planning. Dalia Industries was selected as a case study for this integration; the factory was divided into two main areas: the whole facility (layout A) and the manufacturing area (layout B). This article is concerned with the manufacturing area layout (layout B). After analysing the gathered data, the manufacturing area was divided into 10 activities. The alternatives were compared on five factors: inter-department satisfaction level, total distance travelled by workers, total distance travelled by the product, total travel time for workers, and total travel time for the product. Three layout alternatives were developed in addition to the original layout. Apple's layout procedure was used to study and evaluate the alternative layouts by calculating scores for each of the factors. After obtaining the scores, the ELECTRE method was used to compare the proposed alternatives with each other and with
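
    A minimal ELECTRE I sketch of the comparison step described above: concordance and discordance matrices over the five factors, followed by an outranking test. The score matrix, weights and thresholds are illustrative assumptions, not the case-study data; all factors are treated as cost-type criteria (smaller is better).

```python
# ELECTRE I sketch: pairwise concordance/discordance over cost-type criteria.
import numpy as np

scores = np.array([   # rows: layout alternatives; columns: the five factors (assumed data)
    [3.0, 420, 510, 38, 45],
    [2.5, 380, 470, 35, 41],
    [2.8, 400, 495, 36, 44],
    [3.2, 450, 530, 40, 47],
])
w = np.ones(scores.shape[1]) / scores.shape[1]   # equal weights (assumption)
rng_j = scores.max(0) - scores.min(0)            # per-criterion range for discordance

n = len(scores)
C, D = np.zeros((n, n)), np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        better = scores[a] <= scores[b]          # cost criteria: lower outranks
        C[a, b] = w[better].sum()                # concordance: weight of agreeing criteria
        D[a, b] = ((scores[a] - scores[b]) / rng_j).clip(min=0).max()  # worst disagreement

outranks = (C >= 0.7) & (D <= 0.3)               # thresholds are assumptions
print(outranks.astype(int))                      # 1 where row alternative outranks column
```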

  9. On-line satellite/central computer facility of the Multiparticle Argo Spectrometer System

    International Nuclear Information System (INIS)

    Anderson, E.W.; Fisher, G.P.; Hien, N.C.; Larson, G.P.; Thorndike, A.M.; Turkot, F.; von Lindern, L.; Clifford, T.S.; Ficenec, J.R.; Trower, W.P.

    1974-09-01

    An on-line satellite/central computer facility has been developed at Brookhaven National Laboratory as part of the Multiparticle Argo Spectrometer System (MASS). This facility, consisting of a PDP-9 and a CDC-6600, has been successfully used in the study of proton-proton interactions at 28.5 GeV/c. (U.S.)

  10. Carbon dioxide neutral, integrated biofuel facility

    Energy Technology Data Exchange (ETDEWEB)

    Powell, E.E.; Hill, G.A. [Department of Chemical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, Saskatchewan, S7N 5A9 (Canada)

    2010-12-15

    Algae are efficient biocatalysts for both the capture and the conversion of carbon dioxide in the environment. In earlier work, we optimized the ability of Chlorella vulgaris to rapidly capture CO2 from man-made emission sources by varying environmental growth conditions and bioreactor design. Here we demonstrate that a coupled biodiesel-bioethanol facility, using yeast to produce ethanol and photosynthetic algae to produce biodiesel, can result in an integrated, economical, large-scale process for biofuel production. Each bioreactor acts as an electrode for a coupled complete microbial fuel cell system; the integrated cultures produce electricity that is consumed as an energy source within the process. Finally, both the produced yeast and the spent algae biomass can be used as value-added byproducts in the feed or food industries. Using cost and revenue estimations, an IRR of up to 25% is calculated over a 5-year project lifespan. (author)
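
    A minimal check of the kind of IRR calculation reported above, over a 5-year project lifespan; the cash flows are illustrative assumptions (not the paper's cost and revenue estimates), and the snippet uses the numpy-financial package.

```python
# Internal rate of return over a 5-year lifespan: a capital outlay in year 0
# followed by constant assumed net annual revenues. Requires numpy-financial.
import numpy_financial as npf

cash_flows = [-10.0e6] + [3.3e6] * 5        # $, years 0..5 (assumed)
print(f"IRR = {npf.irr(cash_flows):.1%}")   # roughly 19% for these assumed flows
```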

  11. Computation of integral bases

    NARCIS (Netherlands)

    Bauch, J.H.P.

    2015-01-01

    Let $A$ be a Dedekind domain, $K$ the fraction field of $A$, and $f \in A[x]$ a monic irreducible separable polynomial. For a given non-zero prime ideal $\mathfrak{p}$ of $A$ we present in this paper a new method to compute a $\mathfrak{p}$-integral basis of the extension of $K$ determined by $f$.
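
    A classical worked instance of the problem addressed (not the paper's own algorithm): take $A = \mathbb{Z}$, $K = \mathbb{Q}$, $f = x^2 - 5$ and $\mathfrak{p} = 2\mathbb{Z}$. With $\theta = \sqrt{5}$, the element $\omega = (1+\theta)/2$ is integral because it satisfies a monic equation over $\mathbb{Z}$:

```latex
\omega = \frac{1+\theta}{2}, \qquad \omega^{2} - \omega - 1 = 0,
\qquad\Longrightarrow\qquad
(1,\ \omega)\ \text{is a}\ \mathfrak{p}\text{-integral basis at}\ \mathfrak{p} = 2\mathbb{Z}.
```

    The equation-order basis $(1, \theta)$ fails at $\mathfrak{p} = 2\mathbb{Z}$, since the index $[\mathbb{Z}[\omega] : \mathbb{Z}[\theta]] = 2$ is divisible by 2; away from 2 the two bases generate the same localization.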

  12. Integrated Payment And Delivery Models Offer Opportunities And Challenges For Residential Care Facilities.

    Science.gov (United States)

    Grabowski, David C; Caudry, Daryl J; Dean, Katie M; Stevenson, David G

    2015-10-01

    Under health care reform, new financing and delivery models are being piloted to integrate health and long-term care services for older adults. Programs using these models generally have not included residential care facilities. Instead, most of them have focused on long-term care recipients in the community or the nursing home. Our analyses indicate that individuals living in residential care facilities have similarly high rates of chronic illness and Medicare utilization when compared with matched individuals in the community and nursing home, and rates of functional dependency that fall between those of their counterparts in the other two settings. These results suggest that the residential care facility population could benefit greatly from models that coordinated health and long-term care services. However, few providers have invested in the infrastructure needed to support integrated delivery models. Challenges to greater care integration include the private-pay basis for residential care facility services, which precludes shared savings from reduced Medicare costs, and residents' preference for living in a home-like, noninstitutional environment.

  13. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  14. CSNI Integral test facility validation matrix for the assessment of thermal-hydraulic codes for LWR LOCA and transients

    International Nuclear Information System (INIS)

    1996-07-01

    This report deals with an internationally agreed integral test facility (ITF) matrix for the validation of best-estimate thermal-hydraulic computer codes. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of such a matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated around the world over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case

  15. A Supply Chain Design Problem Integrated Facility Unavailabilities Management

    Directory of Open Access Journals (Sweden)

    Fouad Maliki

    2016-08-01

    Full Text Available A supply chain is a set of facilities connected together in order to provide products to customers. The supply chain is subject to random failures caused by different factors, which cause the unavailability of some sites. Given the current economic context, the management of these unavailabilities is becoming a strategic choice to ensure the desired reliability and availability levels of the different supply chain facilities. In this work, we treat two problems related to the field of supply chains, namely the design of logistics facilities and the management of their unavailabilities. Specifically, we consider a stochastic distribution network with consideration of supplier selection, distribution centre (DC) location decisions and DC unavailability management. Two resolution approaches are proposed. The first approach, called non-integrated, consists of defining the optimal supply chain structure using an optimization approach based on genetic algorithms (GA) and then simulating the supply chain performance in the presence of DC failures. The second approach, called integrated, considers the design of the supply chain and the unavailability management of the DCs in the same model. Note that in both approaches each unavailable DC is replaced by performing a reallocation using the GA. The results of the two approaches are detailed and compared, showing their effectiveness.
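
    A minimal sketch of the GA ingredient shared by both approaches, applied to the DC-location decision: a binary chromosome (one gene per candidate DC), a fitness that trades fixed opening costs against customer-assignment costs, one-point crossover and bit-flip mutation. All costs and sizes are illustrative assumptions, not the paper's data or its exact operators.

```python
# Minimal GA for uncapacitated DC location: which DCs to open so that
# fixed costs + cheapest customer assignments are minimized.
import numpy as np

rng = np.random.default_rng(1)
n_dc, n_cust, pop_size = 6, 20, 30
fixed = rng.uniform(50, 120, n_dc)              # DC opening costs (assumed)
assign = rng.uniform(1, 30, (n_dc, n_cust))     # DC-to-customer costs (assumed)

def cost(chrom):
    if not chrom.any():                         # at least one DC must be open
        return np.inf
    return fixed[chrom].sum() + assign[chrom].min(axis=0).sum()

pop = rng.random((pop_size, n_dc)) < 0.5        # random initial population
for _ in range(200):
    pop = pop[np.argsort([cost(c) for c in pop])]      # elitist ranking
    for i in range(pop_size // 2, pop_size):           # refill the worst half
        p1, p2 = pop[rng.integers(pop_size // 2, size=2)]
        cut = rng.integers(1, n_dc)
        child = np.concatenate([p1[:cut], p2[cut:]])   # one-point crossover
        pop[i] = child ^ (rng.random(n_dc) < 0.05)     # bit-flip mutation

best = min(pop, key=cost)
print(best.astype(int), cost(best))
```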

  16. Oak Ridge Leadership Computing Facility Position Paper

    Energy Technology Data Exchange (ETDEWEB)

    Oral, H Sarp [ORNL; Hill, Jason J [ORNL; Thach, Kevin G [ORNL; Podhorszki, Norbert [ORNL; Klasky, Scott A [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL

    2011-01-01

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administering large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  17. Facility model for the Los Alamos Plutonium Facility

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.; Sohn, C.L.; Yarbro, T.F.; Hench, K.W.

    1986-01-01

    The Los Alamos Plutonium Facility contains more than sixty unit processes and handles a large variety of nuclear materials, including many forms of plutonium-bearing scrap. The management of the Plutonium Facility is supporting the development of a computer model of the facility as a means of effectively integrating the large amount of information required for material control, process planning, and facility development. The model is designed to provide a flexible, easily maintainable facility description that allows the facility to be represented at any desired level of detail within a single modeling framework, and to do this using a model program and data files that can be read and understood by a technically qualified person without modeling experience. These characteristics were achieved by structuring the model so that all facility data is contained in data files, formulating the model in a simulation language that provides a flexible set of data structures and permits a near-English-language syntax, and using a description for unit processes that can represent either a true unit process or a major subsection of the facility. Use of the model is illustrated by applying it to two configurations of a fictitious nuclear material processing line

  18. Integral Monitored Retrievable Storage (MRS) Facility conceptual basis for design

    International Nuclear Information System (INIS)

    1985-10-01

    The purpose of the Conceptual Basis for Design is to provide a control document that establishes the basis for executing the conceptual design of the Integral Monitored Retrievable Storage (MRS) Facility. This conceptual design shall provide the basis for preparation of a proposal to Congress by the Department of Energy (DOE) for construction of one or more MRS Facilities for storage of spent nuclear fuel, high-level radioactive waste, and transuranic (TRU) waste. 4 figs., 25 tabs

  19. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  20. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature-bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools cover equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading-door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented
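
    A minimal transient sketch in the spirit of the custom TRNSYS components described above: explicit time-stepping of a refrigerated zone with envelope gains, a loading-door infiltration load on a schedule, and thermostat-controlled refrigeration. All parameters are illustrative assumptions, far simpler than the TRNSYS component models.

```python
# Explicit Euler stepping of warehouse zone temperature over one day.
UA = 2500.0        # envelope conductance, W/K (assumed)
C = 5.0e8          # zone thermal capacitance, J/K (assumed)
q_door = 40000.0   # infiltration load while a loading door is open, W (assumed)
q_ref = 150000.0   # refrigeration capacity, W (assumed)
T_out, T_set, T = 30.0, -20.0, -20.0   # outdoor, setpoint, initial zone (degC)
dt = 60.0          # time step, s

for step in range(24 * 60):                            # one day in 1-minute steps
    hour = step / 60.0
    door_open = 8.0 <= hour < 17.0 and step % 10 < 2   # brief daytime openings
    q_in = UA * (T_out - T) + (q_door if door_open else 0.0)
    q_out = q_ref if T > T_set else 0.0                # simple thermostat
    T += (q_in - q_out) * dt / C

print(f"end-of-day zone temperature: {T:.2f} degC")
```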

  1. Modern integrated environmental monitoring and processing systems for nuclear facilities

    International Nuclear Information System (INIS)

    Oprea, I.

    2000-01-01

    presentation by using on-line dynamic evolution of the events, environment information, evacuation optimization, and image and voice processing. These modern systems are proposed for environmental monitoring around nuclear facilities as open interactive systems supporting the operator with a global overview of the environment and the status of the situation, updating the remote GIS database, assuring man-computer interaction and a good information flow for emergency knowledge exchange, and improving the protection of the population and supporting the efforts of decision makers. The local monitoring systems could be integrated into national or international environmental monitoring systems, achieving the desired interoperability between government, civilian and army organizations in disaster preparedness efforts

  2. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this requirement can be met by providing intermediate storage facilities and reserve capacities. In this report a model based on the theory of Markov processes is described which allows computation of the reliability characteristics of waste management facilities containing intermediate storage. The application of the model is demonstrated by an example. (orig.) [de
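
    A minimal sketch of the Markov-process idea, assuming a three-state chain: treatment line up; line down with the intermediate store still feeding downstream; line down with the store empty. The rates are illustrative assumptions; the steady state gives the availability of downstream supply.

```python
# Steady state of a 3-state continuous-time Markov chain for a line with
# an intermediate store. States: 0 = line up, 1 = line down but the store
# still feeds downstream, 2 = line down and store empty.
import numpy as np

lam, mu, nu = 0.01, 0.10, 0.05   # failure, repair, store-depletion rates /h (assumed)
Q = np.array([                    # infinitesimal generator; rows sum to zero
    [-lam,        lam,  0.0],
    [  mu, -(mu + nu),   nu],
    [  mu,        0.0,  -mu],
])
# Solve pi @ Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"availability of downstream supply: {pi[0] + pi[1]:.4f}")
```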

  3. Passive BWR integral LOCA testing at the Karlstein test facility INKA

    Energy Technology Data Exchange (ETDEWEB)

    Drescher, Robert [AREVA GmbH, Erlangen (Germany); Wagner, Thomas [AREVA GmbH, Karlstein am Main (Germany); Leyer, Stephan [TH University of Applied Sciences, Deggendorf (Germany)

    2014-05-15

    KERENA is an innovative AREVA GmbH boiling water reactor (BWR) with passive safety systems (Generation III+). In order to verify the functionality of the reactor design, an experimental validation program was executed. For this purpose the INKA (Integral Teststand Karlstein) test facility was designed and erected. It is a mockup of the BWR containment with integrated pressure suppression system. While the scaling of the passive components and the levels match the original values, the volume scaling of the containment compartments is approximately 1:24. The storage capacity of the test facility pressure vessel corresponds to approximately 1/6 of the KERENA RPV; it is supplied by a Benson boiler with a thermal power of 22 MW. In March 2013 the first integral test - Main Steam Line Break (MSLB) - was executed. The test measured the combined response of the passive safety systems to the postulated initiating event. The main goal was to demonstrate the ability of the passive systems to ensure core coverage and decay heat removal and to maintain the containment within defined limits. The results of the test showed that the passive safety systems are capable of bringing the plant to stable conditions, meeting all required safety targets with sufficient margins. The test thereby verified the function of those components and the interplay between them, and proved that INKA is a unique test facility, capable of performing integral tests of passive safety concepts under plant-like conditions. (orig.)

  4. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible use of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified declaration of storage protocols required for PanDA Pilot site movers, and others.

  5. Annual Summary of the Integrated Disposal Facility Performance Assessment 2012

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, R. [INTERA, Austin, TX (United States); Nichols, W. E. [CH2M HILL Plateau Remediation Company, Richland, WA (United States)

    2012-12-27

    An annual summary of the adequacy of the Hanford Immobilized Low-Activity Waste (ILAW) Performance Assessment (PA) is required each year (DOE O 435.1 Chg 1, DOE M 435.1-1 Chg 1, and DOE/ORP-2000-01). The most recently approved PA is DOE/ORP-2000-24. The ILAW PA evaluated the adequacy of the ILAW disposal facility, now referred to as the Integrated Disposal Facility (IDF), for the safe disposal of vitrified Hanford Site tank waste.

  6. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    OpenAIRE

    Olga M. Naumenko

    2010-01-01

    The article considers the history of views on the tasks of education and estimates of its effectiveness from the point of view of forming basic, vitally important competences. Views on the problem in different countries and international organizations, and the corresponding experience of the Ukrainian education system, are described. The necessity of forming the informative competence of the future teacher is substantiated under the conditions of application of the computer oriented facilities of t...

  7. A description of the demonstration Integral Fast Reactor fuel cycle facility

    International Nuclear Information System (INIS)

    Courtney, J.C.; Carnes, M.D.; Dwight, C.C.; Forrester, R.J.

    1991-01-01

    A fuel examination facility at the Idaho National Engineering Laboratory is being converted into a facility that will electrochemically process spent fuel. This is an important step in the demonstration of the Integral Fast Reactor concept being developed by Argonne National Laboratory. Renovations are designed to bring the facility up to current health and safety and environmental standards and to support its new mission. Improvements include the addition of high-reliability earthquake hardened off-gas and electrical power systems, the upgrading of radiological instrumentation, and the incorporation of advances in contamination control. A major task is the construction of a new equipment repair and decontamination facility in the basement of the building to support operations

  8. Maintenance of reactor safety and control computers at a large government facility

    International Nuclear Information System (INIS)

    Brady, H.G.

    1985-01-01

    In 1950 the US Government contracted the Du Pont Company to design, build, and operate the Savannah River Plant (SRP). At the time, it was the largest construction project ever undertaken by man. It is still the largest of the Department of Energy facilities. In the nearly 35 years that have elapsed, Du Pont has met its commitments to the US Government and set world safety records in the construction and operation of nuclear facilities. Contributing factors in achieving production goals and setting the safety records are a staff of highly qualified personnel, a well maintained plant, and sound maintenance programs. There have been many 'first ever' achievements at SRP. These 'firsts' include: (1) computer control of a nuclear reactor, and (2) use of computer systems as safety circuits. This presentation discusses the maintenance program provided for these computer systems and all digital systems at SRP. An in-house computer maintenance program that started in 1966 with five persons has grown to a staff of 40, with investments in computer hardware increasing from $4 million in 1970 to more than $60 million in this decade. 4 figs

  9. Integration of design and inspection

    Science.gov (United States)

    Simmonds, William H.

    1990-08-01

    Developments in advanced computer integrated manufacturing technology, coupled with the emphasis on Total Quality Management, are exposing needs for new techniques to integrate all functions from design through to support of the delivered product. One critical functional area that must be integrated into design is that embracing the measurement, inspection and test activities necessary for validation of the delivered product. This area is being tackled by a collaborative project supported by the UK Government Department of Trade and Industry. The project is aimed at developing techniques for analysing validation needs and for planning validation methods. Within the project an experimental Computer Aided Validation Expert system (CAVE) is being constructed. This operates with a generalised model of the validation process and helps with all design stages: specification of product requirements; analysis of the assurance provided by a proposed design and method of manufacture; development of the inspection and test strategy; and analysis of feedback data. The kernel of the system is a knowledge base containing knowledge of the manufacturing process capabilities and of the available inspection and test facilities. The CAVE system is being integrated into a real life advanced computer integrated manufacturing facility for demonstration and evaluation.

  10. Integrated Framework for Patient Safety and Energy Efficiency in Healthcare Facilities Retrofit Projects.

    Science.gov (United States)

    Mohammadpour, Atefeh; Anumba, Chimay J; Messner, John I

    2016-07-01

    There is a growing focus on enhancing energy efficiency in healthcare facilities, many of which are decades old. Since replacement of all aging healthcare facilities is not economically feasible, retrofitting these facilities is an appropriate path, which also provides an opportunity to incorporate energy efficiency measures. In undertaking energy efficiency retrofits, it is vital that the safety of the patients in these facilities is maintained or enhanced. However, the interactions between patient safety and energy efficiency have not been adequately addressed to realize the full benefits of retrofitting healthcare facilities. To address this, an innovative integrated framework, the Patient Safety and Energy Efficiency (PATSiE) framework, was developed to simultaneously enhance patient safety and energy efficiency. The framework includes a step-by-step procedure for enhancing both patient safety and energy efficiency. It provides a structured overview of the different stages involved in retrofitting healthcare facilities and improves understanding of the intricacies associated with integrating patient safety improvements with energy efficiency enhancements. Evaluation of the PATSiE framework was conducted through focus groups with the key stakeholders in two case-study healthcare facilities. The feedback from these stakeholders was generally positive, as they considered the framework useful and applicable to retrofit projects in the healthcare industry.

  11. The Argonne Leadership Computing Facility 2010 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Drugan, C. (LCF)

    2011-05-09

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers

  12. Computer-integrated electric-arc melting process control system

    OpenAIRE

    Дёмин, Дмитрий Александрович

    2014-01-01

    Developing common principles for equipping melting-process automation systems with hardware, and creating on this basis rational variants of computer-integrated electric-arc melting control systems, is a relevant task, since it allows a comprehensive approach to modernizing the melting sections of workshops. This approach allows the computer-integrated electric-arc furnace control system to be formed as part of a queuing system “electric-arc furnace - foundry conveyor” and to consider, when taking ...

  13. Integrated Disposal Facility FY2010 Glass Testing Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Serne, R Jeffrey; Mattigod, Shas V.

    2010-09-30

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., the source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 × 10⁵ m³ of glass (Puigh 1999). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex, and it represents one of the largest inventories (approximately 0.89 × 10¹⁸ Bq total activity) of long-lived radionuclides, principally ⁹⁹Tc (t₁/₂ = 2.1 × 10⁵ years), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed of, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2010 toward implementing the strategy, with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses. The emphasis in FY2010 was on completing an evaluation of the most sensitive kinetic rate law parameters used to predict glass weathering, documented in Bacon and Pierce (2010), and on transitioning from the Subsurface Transport Over Reactive Multi-phases code to the Subsurface Transport Over Multiple Phases computer code for near-field calculations. The FY2010 activities also consisted of developing a Monte Carlo and geochemical modeling framework that links glass composition to alteration phase formation by 1) determining the structure of unreacted and reacted glasses for use as input information into Monte Carlo

  14. Integrated Disposal Facility FY2010 Glass Testing Summary Report

    International Nuclear Information System (INIS)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Serne, R. Jeffrey; Mattigod, Shas V.

    2010-01-01

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., the source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 × 10⁵ m³ of glass (Puigh 1999). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex, and it represents one of the largest inventories (approximately 0.89 × 10¹⁸ Bq total activity) of long-lived radionuclides, principally ⁹⁹Tc (t₁/₂ = 2.1 × 10⁵ years), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed of, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2010 toward implementing the strategy, with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses. The emphasis in FY2010 was on completing an evaluation of the most sensitive kinetic rate law parameters used to predict glass weathering, documented in Bacon and Pierce (2010), and on transitioning from the Subsurface Transport Over Reactive Multi-phases code to the Subsurface Transport Over Multiple Phases computer code for near-field calculations. The FY2010 activities also consisted of developing a Monte Carlo and geochemical modeling framework that links glass composition to alteration phase formation by (1) determining the structure of unreacted and reacted glasses for use as input information into Monte Carlo
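
    For orientation, glass-weathering calculations of this kind typically rest on a transition-state-theory rate law of the following general form (standard symbols, sketched by the editor rather than transcribed from the report), with intrinsic rate constant $k$, pH power-law coefficient $\eta$, activation energy $E_a$, ion-activity product $Q$ of the rate-limiting reaction, pseudo-equilibrium constant $K_g$ and Temkin coefficient $\sigma$:

```latex
r \;=\; k\,10^{\eta\,\mathrm{pH}}\,
\exp\!\left(-\frac{E_a}{RT}\right)
\left[\,1-\left(\frac{Q}{K_g}\right)^{1/\sigma}\right]
```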

  15. Computer-aided engineering of semiconductor integrated circuits

    Science.gov (United States)

    Meindl, J. D.; Dutton, R. W.; Gibbons, J. F.; Helms, C. R.; Plummer, J. D.; Tiller, W. A.; Ho, C. P.; Saraswat, K. C.; Deal, B. E.; Kamins, T. I.

    1980-07-01

    Economical procurement of small quantities of high performance custom integrated circuits for military systems is impeded by inadequate process, device and circuit models that handicap low cost computer aided design. The principal objective of this program is to formulate physical models of fabrication processes, devices and circuits to allow total computer-aided design of custom large-scale integrated circuits. The basic areas under investigation are (1) thermal oxidation, (2) ion implantation and diffusion, (3) chemical vapor deposition of silicon and refractory metal silicides, (4) device simulation and analytic measurements. This report discusses the fourth year of the program.

  16. Integrated computer control system CORBA-based simulator FY98 LDRD project final summary report

    International Nuclear Information System (INIS)

    Bryant, R M; Holloway, F W; Van Arsdall, P J.

    1999-01-01

    The CORBA-based Simulator was a Laboratory Directed Research and Development (LDRD) project that applied simulation techniques to explore critical questions about distributed control architecture. The simulator project used a three-prong approach comprised of a study of object-oriented distribution tools, computer network modeling, and simulation of key control system scenarios. This summary report highlights the findings of the team and provides the architectural context of the study. For the last several years LLNL has been developing the Integrated Computer Control System (ICCS), which is an abstract object-oriented software framework for constructing distributed systems. The framework is capable of implementing large event-driven control systems for mission-critical facilities such as the National Ignition Facility (NIF). Tools developed in this project were applied to the NIF example architecture in order to gain experience with a complex system and derive immediate benefits from this LDRD. The ICCS integrates data acquisition and control hardware with a supervisory system, and reduces the amount of new coding and testing necessary by providing prebuilt components that can be reused and extended to accommodate specific additional requirements. The framework integrates control point hardware with a supervisory system by providing the services needed for distributed control such as database persistence, system start-up and configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. The design is interoperable among computers of different kinds and provides plug-in software connections by leveraging a common object request brokering architecture (CORBA) to transparently distribute software objects across the network of computers. Because object broker distribution applied to control systems is relatively new and its inherent performance is roughly threefold less than traditional point

  17. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a newly emerging technology that has been used in other industries with great success. Despite its great features, Cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare Cloud computing system for integrating Electronic Health Records (EHR). The proposed Cloud system applies Cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment.

  18. Implementation of the Facility Integrated Inventory Computer System (FICS)

    International Nuclear Information System (INIS)

    McEvers, J.A.; Krichinsky, A.M.; Layman, L.R.; Dunnigan, T.H.; Tuft, R.M.; Murray, W.P.

    1980-01-01

    This paper describes a computer system which has been developed for nuclear material accountability and implemented in an active radiochemical processing plant involving remote operations. The system possesses the following features: comprehensive, timely records of the location and quantities of special nuclear materials; automatically updated book inventory files on the plant and sub-plant levels of detail; material transfer coordination and cataloging; automatic inventory estimation; sample transaction coordination and cataloging; automatic on-line volume determination, limit checking, and alarming; extensive information retrieval capabilities; and terminal access and application software monitoring and logging

  19. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un
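
    One of the fundamentals the book covers is the target impedance of a power distribution network, i.e. the largest impedance the rail may present while keeping the transient voltage excursion inside its ripple budget. A minimal calculation, with illustrative rail numbers assumed by the editor:

```python
# Target impedance Z_target = (allowed voltage excursion) / (load current swing).
v_rail = 1.0                # supply voltage, V (assumed)
ripple = 0.03               # allowed ripple as a fraction of v_rail (assumed)
i_max, i_min = 60.0, 12.0   # load current envelope, A (assumed)

z_target = v_rail * ripple / (i_max - i_min)   # ohms
print(f"Z_target = {z_target * 1e3:.2f} mOhm")  # 0.62 mOhm for these numbers
```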

  20. Supporting Facility Management Processes through End-Users’ Integration and Coordinated BIM-GIS Technologies

    Directory of Open Access Journals (Sweden)

    Claudio Mirarchi

    2018-05-01

    Full Text Available The integration of facility management and building information modelling (BIM) is an innovative and critical undertaking to support facility maintenance and management. Even though recent research has proposed various methods and performed an increasing number of case studies, communication-process issues still need to be addressed. This paper presents a theoretical framework for the digital integration of virtual models and smart technologies. Based on a comprehensive analysis of existing technologies for indoor localization, a new workflow is defined and designed, and it is utilized in a practical case study to test the model performance. In the new workflow, a facility management support platform is proposed and characterized, featuring indoor positioning systems that allow end users to send geo-referenced reports to central virtual models. In addition, system requirements, the information technology (IT) architecture and application procedures are presented. Results show that the integration of end users in the maintenance processes through smart and easy tools can overcome the existing limits of barcode systems and building management systems for failure localization. The proposed framework offers several advantages. First, it allows the identification of every element of an asset, including large physical building elements (walls, floors, etc.), without requiring prior mapping. Second, the entire cycle of maintenance activities is managed through a unique integrated system that includes the territorial dimension. Third, data are collected in a standard structure for future use. Furthermore, the integration of the process in a centralized BIM-GIS (geographical information system) information management system admits a scalable representation of the information supporting facility management processes in terms of asset and supply chain management and monitoring from a spatial perspective.

  1. The Mixed Waste Management Facility. Design basis integrated operations plan (Title I design)

    International Nuclear Information System (INIS)

    1994-12-01

    The Mixed Waste Management Facility (MWMF) will be a fully integrated, pilot-scale facility for the demonstration of low-level, organic-matrix mixed waste treatment technologies. It will provide the bridge from bench-scale demonstrated technologies to the deployment and operation of full-scale treatment facilities. The MWMF is a key element in reducing the risk in deployment of effective and environmentally acceptable treatment processes for organic mixed-waste streams. The MWMF will provide the engineering test data, formal evaluation, and operating experience that will be required for these demonstration systems to become accepted by EPA and deployable in waste treatment facilities. The deployment will also demonstrate how to approach the permitting process with the regulatory agencies and how to operate and maintain the processes in a safe manner. This document describes, at a high level, how the facility will be designed and operated to achieve this mission. It frequently refers the reader to additional documentation that provides more detail in specific areas. Effective evaluation of a technology consists of a variety of informal and formal demonstrations involving individual technology systems or subsystems, integrated technology system combinations, or complete integrated treatment trains. Informal demonstrations will typically be used to gather general operating information and to establish a basis for development of formal demonstration plans. Formal demonstrations consist of a specific series of tests that are used to rigorously demonstrate the operation or performance of a specific system configuration.

  2. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) - instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe as well the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  3. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe as well the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  4. Integrative approaches to computational biomedicine

    Science.gov (United States)

    Coveney, Peter V.; Diaz-Zuccarini, Vanessa; Graf, Norbert; Hunter, Peter; Kohl, Peter; Tegner, Jesper; Viceconti, Marco

    2013-01-01

    The new discipline of computational biomedicine is concerned with the application of computer-based techniques and particularly modelling and simulation to human health. Since 2007, this discipline has been synonymous, in Europe, with the name given to the European Union's ambitious investment in integrating these techniques with the eventual aim of modelling the human body as a whole: the virtual physiological human. This programme and its successors are expected, over the next decades, to transform the study and practice of healthcare, moving it towards the priorities known as ‘4P's’: predictive, preventative, personalized and participatory medicine.

  5. Engineering Task Plan for the Integrity Assessment Examination of Double-Contained Receiver Tanks (DCRT), Catch Tanks and Ancillary facilities

    International Nuclear Information System (INIS)

    BECKER, D.L.

    2000-01-01

    This Engineering Task Plan (ETP) presents the integrity assessment examination of three DCRTs, seven catch tanks, and two ancillary facilities located in the 200 East and West Areas of the Hanford Site. The integrity assessment examinations, as described in this ETP, will provide the necessary information to enable the independently qualified registered professional engineer (IQRPE) to assess the condition and integrity of these facilities. The plan is consistent with the Double-Shell Tank Waste Transfer Facilities Integrity Assessment Plan

  6. Heterogeneous Electronics – Wafer Level Integration, Packaging, and Assembly Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This facility integrates active electronics with microelectromechanical (MEMS) devices at the miniature system scale. It obviates current size-, weight-, and power...

  7. Long term integrity of spent fuel and construction materials for dry storage facilities

    Energy Technology Data Exchange (ETDEWEB)

    Saegusa, T [CRIEPI (Japan)

    2012-07-01

    In Japan, two dry storage facilities at reactor sites have been operating since 1995 and 2002, respectively. Additionally, a large-scale dry storage facility away from reactor sites, located near the coast, is under safety examination for licensing and is expected to start operation in 2010. Its final storage capacity is 5,000 tU. It is therefore necessary to obtain and evaluate data on the integrity of the spent fuel loaded into the casks, and of the cask construction materials, during long-term dry storage. The objectives are: - Spent fuel rod: To evaluate hydrogen migration along the axial fuel direction in irradiated claddings stored for twenty years in air; To evaluate pellet oxidation behaviour for high burn-up UO2 fuels; - Construction materials for dry storage facilities: To evaluate the long-term reliability of welded stainless steel canisters in a stress corrosion cracking (SCC) environment; To evaluate the long-term integrity of concrete casks in a carbonation and salt attack environment; To evaluate the integrity of the sealability of metal gaskets under long-term storage and short-term accidental impact forces.

  8. The challenges of integrating multiple safeguards systems in a large nuclear facility

    International Nuclear Information System (INIS)

    Lavietes, A.; Liguori, C.; Pickrell, M.; Plenteda, R.; Sweet, M.

    2009-01-01

    Full-text: Implementing safeguards in a cost-effective manner in large nuclear facilities such as fuel conditioning, fuel reprocessing, and fuel fabrication plants requires the extensive use of instrumentation that is operated in unattended mode. The collected data is then periodically reviewed by the inspectors either on-site at a central location in the facility or remotely in the IAEA offices. A wide variety of instruments are deployed in large facilities, including video surveillance cameras, electronic sealing devices, non-destructive assay systems based on gamma ray and neutron detection, load cells for mass measurement, ID-readers, and other process-specific monitors. The challenge to integrate these different measurement instruments into an efficient, reliable, and secure system requires implementing standardization at various levels throughout the design process. This standardization includes the data generator behaviour and interface, networking solutions, and data security approaches. This standardization will provide a wide range of savings, including reduced training for inspectors and technicians, reduced periodic technical maintenance, reduced spare parts inventory, increased system robustness, and more predictive system behaviour. The development of standard building blocks will reduce the number of data generators required and allow implementation of simplified architectures that do not require local collection computers but rather utilize transmission of the acquired data directly to a central server via Ethernet connectivity. This approach will result in fewer system components and therefore reduced maintenance efforts and improved reliability. This paper discusses in detail the challenges and the subsequent solutions in the various areas that the IAEA Department of Safeguards has committed to pursue as the best sustainable way of maintaining the ability to implement reliable safeguards systems. (author)
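
    The standardized data-generator behaviour and interface argued for above can be pictured with a short sketch. This is a speculative Python illustration; the class, field names, and record format are assumptions for exposition, not the IAEA's actual design.

    # Hypothetical sketch of a standardized data-generator interface, so that
    # cameras, seals, NDA systems, and load cells all present uniform records.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Reading:
        instrument_id: str   # e.g. a neutron coincidence counter or load cell
        timestamp: float     # synchronized acquisition time
        quantity: str        # "neutron_counts", "mass_kg", "seal_state", ...
        value: float

    class DataGenerator:
        """Common behaviour every instrument type implements, letting a central
        server collect data directly over Ethernet in one uniform format."""

        def __init__(self, instrument_id: str):
            self.instrument_id = instrument_id

        def acquire(self) -> Reading:
            raise NotImplementedError  # each instrument supplies its own acquisition

        def to_wire_format(self, reading: Reading) -> bytes:
            # One serialization for all instruments simplifies the central server.
            return json.dumps(asdict(reading)).encode("utf-8")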

  9. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, and some key aspects of the Liquid Effluent Retention Facility that stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations, as well as the monitoring of waste generator flows in this system and in the Phase Two Effluent Collection System.

  10. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    International Nuclear Information System (INIS)

    Graf, F.A. Jr.

    1995-01-01

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, and some key aspects of the Liquid Effluent Retention Facility that stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations, as well as the monitoring of waste generator flows in this system and in the Phase Two Effluent Collection System.

  11. Performance of simulated flexible integrated gasification polygeneration facilities. Part A: A technical-energetic assessment

    NARCIS (Netherlands)

    Meerman, J.C.; Ramírez Ramírez, C.A.; Turkenburg, W.C.; Faaij, A.P.C.

    2011-01-01

    This article investigates technical possibilities and performances of flexible integrated gasification polygeneration (IG-PG) facilities equipped with CO2 capture for the near future. These facilities can produce electricity during peak hours, while switching to the production of chemicals during …

  12. Computer generation of integrands for Feynman parametric integrals

    International Nuclear Information System (INIS)

    Cvitanovic, Predrag

    1973-01-01

    TECO text editing language, available on PDP-10 computers, is used for the generation and simplification of Feynman integrals. This example shows that TECO can be a useful computational tool in complicated calculations where similar algebraic structures recur many times

  13. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    Science.gov (United States)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  14. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    International Nuclear Information System (INIS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-01-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.
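
    The on-demand instantiation policy described in these two records reduces, in outline, to a simple capacity calculation. The sketch below is a schematic Python illustration under assumed names; it is not WNoDeS code.

    # Schematic on-demand VM policy: start VMs only when the need arises,
    # within the farm's configured ceiling (all names are illustrative).
    def additional_vms(pending_jobs: int, idle_vms: int,
                       running_vms: int, max_vms: int) -> int:
        """Number of new VMs to instantiate for the current queue state."""
        uncovered = max(0, pending_jobs - idle_vms)  # jobs no idle VM can absorb
        return max(0, min(uncovered, max_vms - running_vms))

    # Example: 10 queued jobs, 2 idle VMs, 95 of 100 slots in use -> start 5 VMs.
    assert additional_vms(10, 2, 95, 100) == 5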

  15. An integrated computer aided system for integrated design of chemical processes

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Hytoft, Glen; Jaksland, Cecilia

    1997-01-01

    In this paper, an Integrated Computer Aided System (ICAS), which is particularly suitable for solving problems related to integrated design of chemical processes, is presented. ICAS features include a model generator (generation of problem specific models including model simplification and model ... form the basis for the toolboxes. The available features of ICAS are highlighted through a case study involving the separation of binary azeotropic mixtures. (C) 1997 Elsevier Science Ltd....

  16. Computer integration in the curriculum: promises and problems

    NARCIS (Netherlands)

    Plomp, T.; van den Akker, Jan

    1988-01-01

    This discussion of the integration of computers into the curriculum begins by reviewing the results of several surveys conducted in the Netherlands and the United States which provide insight into the problems encountered by schools and teachers when introducing computers in education. Case studies

  17. Nondestructive assay system development for a plutonium scrap recovery facility

    International Nuclear Information System (INIS)

    Hsue, S.T.; Baker, M.P.

    1984-01-01

    A plutonium scrap recovery facility is being constructed at the Savannah River Plant (SRP). The safeguards groups of the Los Alamos National Laboratory have been working since the early design stage of the facility with SRP and other national laboratories to develop a state-of-the-art assay system for this new facility. Not only will the most current assay techniques be incorporated into the system, but also the various nondestructive assay (NDA) instruments are to be integrated with an Instrument Control Computer (ICC). This undertaking is both challenging and ambitious; an entire assay system of this type has never been done before in a working facility. This paper will describe, in particular, the effort of the Los Alamos Safeguards Assay Group in this endeavor. Our effort in this project can be roughly divided into three phases: NDA development, system integration, and integral testing. 6 references

  18. Computer science in Dutch secondary education: independent or integrated?

    NARCIS (Netherlands)

    van der Sijde, Peter; Doornekamp, B.G.

    1992-01-01

    Nowadays, in Dutch secondary education, computer science is integrated within school subjects. About ten years ago computer science was considered an independent subject, but in the mid-1980s this idea changed. In our study we investigated whether the objectives of teaching computer science as an

  19. A Study of Critical Flowrate in the Integral Effect Test Facilities

    International Nuclear Information System (INIS)

    Kim, Yeongsik; Ryu, Sunguk; Cho, Seok; Yi, Sungjae; Park, Hyunsik

    2014-01-01

    In earlier studies, most of the information available in the literature was for either a saturated two-phase flow or a sub-cooled water flow at medium pressure conditions, e.g., up to about 7.0 MPa. Choking is regarded as a condition of maximum possible discharge through a given orifice and/or nozzle exit area. A critical flow rate is achieved at choking under the given thermal-hydraulic conditions. Critical flow phenomena have been studied extensively in both single-phase and two-phase systems because of their importance in the LOCA analyses of light water reactors and in the design of other engineering systems. Park suggested a modified correlation for predicting the critical flow of sub-cooled water through a nozzle. Recently, Park et al. performed an experimental study on a two-phase critical flow with a noncondensable gas at high pressure conditions. Various experiments on critical flow using sub-cooled water were performed for the modeling of break simulators in thermal-hydraulic integral effect test facilities for light water reactors, e.g., the Advanced Power Reactor 1400 MWe (APR1400) and the System-integrated Modular Advanced Reactor (SMART). For the design of break simulators for SBLOCA scenarios, the aspect ratio (L/D) is considered a key parameter in determining the shape of a break simulator. In this paper, critical flow phenomena are investigated, especially for break simulators for LOCA scenarios in the integral effect test facilities of KAERI, such as ATLAS and FESTA. Various critical flow models for sub-cooled and/or saturated water are reviewed, and comparisons among the models for the selected test data are discussed, covering the effect of diameter, the predictions of the critical flow models, and the break simulators for SBLOCA in the integral effect test facilities.

  20. A Study of Critical Flowrate in the Integral Effect Test Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeongsik; Ryu, Sunguk; Cho, Seok; Yi, Sungjae; Park, Hyunsik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    In earlier studies, most of the information available in the literature was for either a saturated two-phase flow or a sub-cooled water flow at medium pressure conditions, e.g., up to about 7.0 MPa. Choking is regarded as a condition of maximum possible discharge through a given orifice and/or nozzle exit area. A critical flow rate is achieved at choking under the given thermal-hydraulic conditions. Critical flow phenomena have been studied extensively in both single-phase and two-phase systems because of their importance in the LOCA analyses of light water reactors and in the design of other engineering systems. Park suggested a modified correlation for predicting the critical flow of sub-cooled water through a nozzle. Recently, Park et al. performed an experimental study on a two-phase critical flow with a noncondensable gas at high pressure conditions. Various experiments on critical flow using sub-cooled water were performed for the modeling of break simulators in thermal-hydraulic integral effect test facilities for light water reactors, e.g., the Advanced Power Reactor 1400 MWe (APR1400) and the System-integrated Modular Advanced Reactor (SMART). For the design of break simulators for SBLOCA scenarios, the aspect ratio (L/D) is considered a key parameter in determining the shape of a break simulator. In this paper, critical flow phenomena are investigated, especially for break simulators for LOCA scenarios in the integral effect test facilities of KAERI, such as ATLAS and FESTA. Various critical flow models for sub-cooled and/or saturated water are reviewed, and comparisons among the models for the selected test data are discussed, covering the effect of diameter, the predictions of the critical flow models, and the break simulators for SBLOCA in the integral effect test facilities.
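
    For orientation, one classical correlation of the kind reviewed in these records is the Burnell-type model for the critical mass flux of sub-cooled water; it is quoted here as a representative form, not as the modified correlation developed by Park:

    $$ G_c = \sqrt{2\rho_l \left[ P_0 - \eta\, P_{\mathrm{sat}}(T_0) \right]} $$

    where $G_c$ is the critical mass flux, $\rho_l$ the liquid density, $P_0$ the upstream stagnation pressure, $P_{\mathrm{sat}}(T_0)$ the saturation pressure at the stagnation temperature, and $\eta$ an empirical coefficient accounting for the delayed onset of flashing.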

  1. An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard; Moeglein, William AM; Newby, Deborah T.; Venteris, Erik R.; Wigmosta, Mark S.

    2014-06-19

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion gallons per year in the southeastern U.S., and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.

  2. Integrating Computer-Mediated Communication Strategy Instruction

    Science.gov (United States)

    McNeil, Levi

    2016-01-01

    Communication strategies (CSs) play important roles in resolving problematic second language interaction and facilitating language learning. While studies in face-to-face contexts demonstrate the benefits of communication strategy instruction (CSI), there have been few attempts to integrate computer-mediated communication and CSI. The study…

  3. Global nuclear material monitoring with NDA and C/S data through integrated facility monitoring

    International Nuclear Information System (INIS)

    Howell, J.A.; Menlove, H.O.; Argo, P.; Goulding, C.; Klosterbuer, S.; Halbig, J.

    1996-01-01

    This paper focuses on a flexible, integrated demonstration of a monitoring approach for nuclear material monitoring. This includes aspects of item signature identification, perimeter portal monitoring, advanced data analysis, and communication as part of an unattended continuous monitoring system in an operating nuclear facility. Advanced analysis is applied to the integrated nondestructive assay and containment and surveillance data that are synchronized in time. The end result will be the foundation for a cost-effective monitoring system that could provide the necessary transparency even in areas that are denied to foreign nationals of both the US and Russia, should these processes and materials come under full-scope safeguards or bilateral agreements. Monitoring systems of this kind have the potential to provide additional benefits, including improved nuclear facility security and safeguards and lower personnel radiation exposures. Demonstration facilities discussed in this paper include the VTRAP prototype, the Los Alamos Critical Assemblies Facility, the Kazakhstan BN-350 reactor monitor, DUPIC radiation monitoring, and JOYO and MONJU radiation monitoring.

  4. Conceptual design of a fission-based integrated test facility for fusion reactor components

    International Nuclear Information System (INIS)

    Watts, K.D.; Deis, G.A.; Hsu, P.Y.S.; Longhurst, G.R.; Masson, L.S.; Miller, L.G.

    1982-01-01

    The testing of fusion materials and components in fission reactors will become increasingly important because of the lack of fusion engineering test devices in the immediate future and the increasing long-term demand for fusion testing when a fusion reactor test station becomes available. This paper presents the conceptual design of a fission-based Integrated Test Facility (ITF) developed by EG&G Idaho. This facility can accommodate entire first wall/blanket (FW/B) test modules such as those proposed for INTOR and can also accommodate smaller cylindrical modules similar to those designed by Oak Ridge National Laboratory (ORNL) and Westinghouse. In addition, the facility can be used to test bulk breeder blanket materials, materials for tritium permeation, and components for performance in a nuclear environment. The ITF provides a cyclic neutron/gamma flux as well as the numerous module and experiment support functions required for truly integrated tests.

  5. Evolution of facility layout requirements and CAD [computer-aided design] system development

    International Nuclear Information System (INIS)

    Jones, M.

    1990-06-01

    The overall configuration of the Superconducting Super Collider (SSC) including the infrastructure and land boundary requirements were developed using a computer-aided design (CAD) system. The evolution of the facility layout requirements and the use of the CAD system are discussed. The emphasis has been on minimizing the amount of input required and maximizing the speed by which the output may be obtained. The computer system used to store the data is also described

  6. An Integrated Computer-Aided Approach for Environmental Studies

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia

    1997-01-01

    A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope and applicability of the integrated approach are highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated.
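
    As an example of the property-structure relationships such approaches exploit, the sketch below estimates a normal boiling point with a Joback-type group-contribution form. The group values shown are illustrative excerpts quoted from memory, and the function is not part of the system described in the record.

    # Group-contribution property estimate (Joback-type form); the group
    # values below are illustrative excerpts, not a complete table.
    TB_CONTRIB_K = {"-CH3": 23.58, ">CH2": 22.88, "-OH": 92.88}

    def boiling_point_estimate(groups: dict) -> float:
        """Normal boiling point estimate: Tb = 198.2 K + sum(N_i * dT_i)."""
        return 198.2 + sum(n * TB_CONTRIB_K[g] for g, n in groups.items())

    # Ethanol = CH3 + CH2 + OH -> about 337 K (experimental: roughly 351 K),
    # illustrating both the convenience and the error bars of such estimates.
    print(boiling_point_estimate({"-CH3": 1, ">CH2": 1, "-OH": 1}))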

  7. Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shoman, Nathan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.

  8. Integrated facilities modeling using QUEST and IGRIP

    International Nuclear Information System (INIS)

    Davis, K.R.; Haan, E.R.

    1995-01-01

    A QUEST model and associated detailed IGRIP models were developed and used to simulate several workcells in a proposed Plutonium Storage Facility (PSF). The models are being used by team members assigned to the program to improve communication and to assist in evaluating concepts and in performing trade-off studies, which will result in recommendations and a final design. The model was designed so that it could be changed easily. The techniques used to provide this flexibility and make changes easy are described in this paper, along with techniques for integrating the QUEST and IGRIP products. Many of these techniques are generic in nature and can be applied to any modeling endeavor.

  9. Using a qualitative approach for understanding hospital-affiliated integrated clinical and fitness facilities: characteristics and members' experiences.

    Science.gov (United States)

    Yang, Jingzhen; Kingsbury, Diana; Nichols, Matthew; Grimm, Kristin; Ding, Kele; Hallam, Jeffrey

    2015-06-19

    With health care shifting away from the traditional sick care model, many hospitals are integrating fitness facilities and programs into their clinical services in order to support health promotion and disease prevention at the community level. Through a series of focus groups, the present study assessed characteristics of hospital-affiliated integrated facilities located in Northeast Ohio, United States and members' experiences with respect to these facilities. Adult members were invited to participate in a focus group using a recruitment flyer. A total of 6 focus groups were conducted in 2013, each lasting one hour, ranging from 5 to 12 participants per group. The responses and discussions were recorded and transcribed verbatim, then analyzed independently by research team members. Major themes were identified after consensus was reached. The participants' average age was 57, with 56.8% currently under a doctor's care. Four major themes associated with integrated facilities and members' experiences emerged across the six focus groups: 1) facility/program, 2) social atmosphere, 3) provider, and 4) member. Within each theme, several sub-themes were also identified. A key feature of integrated facilities is the availability of clinical and fitness services "under one roof". Many participants remarked that they initially attended physical therapy, becoming members of the fitness facility afterwards, or vice versa. The participants had favorable views of and experiences with the superior physical environment and atmosphere, personal attention, tailored programs, and knowledgeable, friendly, and attentive staff. In particular, participants favored the emphasis on preventive care and the promotion of holistic health and wellness. These results support the integration of wellness promotion and programming with traditional medical care and call for the further evaluation of such a model with regard to participants' health outcomes.

  10. A Scheme for Verification on Data Integrity in Mobile Multicloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Laicheng Cao

    2016-01-01

    Full Text Available In order to verify data integrity in a mobile multicloud computing environment, an MMCDIV (mobile multicloud data integrity verification) scheme is proposed. First, computability and nondegeneracy of the verification are obtained by adopting the BLS (Boneh-Lynn-Shacham) short signature scheme. Second, communication overhead is reduced by means of HVR (Homomorphic Verifiable Response) with random masking and an sMHT (sequence-enforced Merkle hash tree) construction. Finally, considering the resource constraints of mobile devices, data integrity is verified with lightweight computing and low data transmission. The scheme mitigates the limited communication and computing power of mobile devices, supports dynamic data operations in a mobile multicloud environment, and verifies data integrity without using the source file blocks directly. Experimental results also demonstrate that this scheme achieves a lower cost of computing and communications.
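
    To make the Merkle-hash-tree ingredient concrete, the sketch below computes a plain Merkle root over file blocks in Python. It illustrates the generic mechanism only; the paper's sMHT additionally enforces block sequence, and its responses use BLS signatures with random masking.

    # Generic Merkle-root integrity check over file blocks (illustration only).
    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(blocks: list) -> bytes:
        level = [sha256(b) for b in blocks]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate the last node on odd levels
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    # The verifier stores only the root; any later modification of a block
    # changes the recomputed root and is therefore detected.
    blocks = [b"block0", b"block1", b"block2", b"block3"]
    root = merkle_root(blocks)
    tampered = [b"block0", b"blockX", b"block2", b"block3"]
    assert merkle_root(tampered) != root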

  11. New Mandatory Computer Security Course

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Just like any other organization, CERN is permanently under attack - even right now. Consequently it's important to be vigilant about security risks, protecting CERN's reputation - and your work. The availability, integrity and confidentiality of CERN's computing services and the unhindered operation of its accelerators and experiments come down to the combined efforts of the CERN Security Team and you. In order to keep pace with attack trends, the Security Team regularly reminds CERN users about the computer security risks, and about the rules for using CERN’s computing facilities. Since 2007, newcomers have to follow a dedicated basic computer security course informing them about the “Do’s” and “Don’ts” when using CERN's computing facilities. This course has recently been redesigned. It is now mandatory for all CERN members (users and staff) owning a CERN computer account and must be followed once every three years. Members who...

  12. Computer Security at Nuclear Facilities. Reference Manual (Arabic Edition)

    International Nuclear Information System (INIS)

    2011-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. This publication is in the Technical Guidance category of the IAEA Nuclear Security Series.

  13. Computer Security at Nuclear Facilities. Reference Manual (Russian Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. This publication is in the Technical Guidance category of the IAEA Nuclear Security Series.

  14. Computer Security at Nuclear Facilities. Reference Manual (Chinese Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. This publication is in the Technical Guidance category of the IAEA Nuclear Security Series.

  15. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer offers excellent versatility and expandability, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the filter wheel and the stabilized platform, acquires the image and POS data, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are straightforward. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expandability. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
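
    The control sequence described above (set camera parameters, step the filter wheel, acquire the image and POS data) might look like the following Python sketch on the embedded computer; the camera, wheel and pos driver objects and their methods are hypothetical stand-ins for the real device drivers.

    # Hypothetical acquisition loop for one multispectral frame set.
    def acquire_frame_set(camera, wheel, pos, n_filters=8):
        """Cycle the filter wheel, grab one frame per filter, and tag each
        frame with position/orientation data for later geo-referencing."""
        frames = []
        for position in range(n_filters):
            wheel.move_to(position)            # select the next spectral band
            frame = camera.grab_frame()        # expose through the current filter
            frames.append({"filter": position,
                           "image": frame,
                           "pos": pos.current_fix()})  # POS sample at exposure
        return frames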

  16. Integrating ICT with education: using computer games to enhance ...

    African Journals Online (AJOL)

    Integrating ICT with education: using computer games to enhance learning mathematics at undergraduate level. ... This research seeks to look into ways in which computer games as ICT tools can be used to ...

  17. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009); and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems. One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and …

  18. Computer mapping and visualization of facilities for planning of D and D operations

    International Nuclear Information System (INIS)

    Wuller, C.E.; Gelb, G.H.; Cramond, R.; Cracraft, J.S.

    1995-01-01

    The lack of as-built drawings for many old nuclear facilities impedes planning for decontamination and decommissioning. Traditional manual walkdowns subject workers to lengthy exposure to radiological and other hazards. The authors have applied close-range photogrammetry, 3D solid modeling, computer graphics, database management, and virtual reality technologies to create geometrically accurate 3D computer models of the interiors of facilities. The required input to the process is a set of photographs that can be acquired in a brief time. They fit 3D primitive shapes to objects of interest in the photos and, at the same time, record attributes such as material type and link patches of texture from the source photos to facets of modeled objects. When they render the model as either static images or at video rates for a walk-through simulation, the phototextures are warped onto the objects, giving a photo-realistic impression. The authors have exported the data to commercial CAD, cost estimating, robotic simulation, and plant design applications. Results from several projects at old nuclear facilities are discussed

  19. INTEGRITY -- Integrated Human Exploration Mission Simulation Facility

    Science.gov (United States)

    Henninger, D.; Tri, T.; Daues, K.

    It is proposed to develop a high-fidelity ground facility to carry out long-duration human exploration mission simulations. These would not be merely computer simulations - they would in fact comprise a series of actual missions that just happen to stay on earth. These missions would include all elements of an actual mission, using actual technologies that would be used for the real mission. These missions would also include such elements as extravehicular activities, robotic systems, telepresence and teleoperation, surface drilling technology--all using a simulated planetary landscape. A sequence of missions would be defined that get progressively longer and more robust, perhaps a series of five or six missions over a span of 10 to 15 years ranging in duration from 180 days up to 1000 days. This high-fidelity ground facility would operate hand-in-hand with a host of other terrestrial analog sites such as the Antarctic, Haughton Crater, and the Arizona desert. Of course, all of these analog mission simulations will be conducted here on earth in 1-g, and NASA will still need the Shuttle and ISS to carry out all the microgravity and hypogravity science experiments and technology validations. The proposed missions would have sufficient definition such that definitive requirements could be derived from them to serve as direction for all the program elements of the mission. Additionally, specific milestones would be established for the "launch" date of each mission so that R&D programs would have both good requirements and solid milestones from which to build their implementation plans. Mission aspects that could not be directly incorporated into the ground facility would be simulated via software. New management techniques would be developed for evaluation in this ground test facility program. These new techniques would have embedded metrics which would allow them to be continuously evaluated and adjusted so that by the time the sequence of missions is completed…

  20. Scaling analysis for the OSU AP600 test facility (APEX)

    International Nuclear Information System (INIS)

    Reyes, J.N.

    1998-01-01

    In this paper, the authors summarize the key aspects of a state-of-the-art scaling analysis (Reyes et al. (1995)) performed to establish the facility design and test conditions for the advanced plant experiment (APEX) at Oregon State University (OSU). This scaling analysis represents the first, and most comprehensive, application of the hierarchical two-tiered scaling (H2TS) methodology (Zuber (1991)) in the design of an integral system test facility. The APEX test facility, designed and constructed on the basis of this scaling analysis, is the most accurate geometric representation of a Westinghouse AP600 nuclear steam supply system. The OSU APEX test facility has served to develop an essential component of the integral system database used to assess the AP600 thermal hydraulic safety analysis computer codes. (orig.)
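
    In outline, the H2TS methodology cited here characterizes each transfer process in a control volume by a dimensionless characteristic time ratio; the generic form is sketched below as a simplification of the full hierarchy:

    $$ \Pi_i = \omega_i\,\tau, \qquad \tau = \frac{V}{Q}, $$

    where $\omega_i$ is the specific frequency of transfer process $i$, and $\tau$ is the residence time of a control volume of volume $V$ with volumetric flow $Q$. Similitude between the test facility and the plant requires the ratio $(\Pi_i)_{\mathrm{model}}/(\Pi_i)_{\mathrm{prototype}}$ to remain close to unity for the dominant processes.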

  1. Advances in Integrated Computational Materials Engineering "ICME"

    Science.gov (United States)

    Hirsch, Jürgen

    The methods of Integrated Computational Materials Engineering that were developed and successfully applied for Aluminium have been constantly improved. The main aspects and recent advances of integrated material and process modeling are simulations of material properties, like strength and forming properties, and of the specific microstructure evolution during processing (rolling, extrusion, annealing) under the influence of material constitution and process variations, through the production process down to the final application. Examples are discussed for the through-process simulation of microstructures and related properties of Aluminium sheet, including DC ingot casting, pre-heating and homogenization, hot and cold rolling, and final annealing. New results are included on the simulation of solution annealing and age hardening of 6xxx alloys for automotive applications. Physically based quantitative descriptions and computer-assisted evaluation methods are new ICME approaches for integrating simulation tools into customer applications as well, such as heat-affected zones in the welding of age-hardening alloys. Methods for estimating the effect of specific elements, of growing relevance as recycling volumes increase even for high-end Aluminium products, are also discussed, these being of special interest to the Aluminium-producing industries.
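
    A representative example of the physically based microstructure descriptions used in such through-process models is the JMAK (Johnson-Mehl-Avrami-Kolmogorov) relation for the recrystallized fraction during annealing, quoted here for illustration:

    $$ X(t) = 1 - \exp\left(-k\,t^{\,n}\right), $$

    where the rate constant $k$ depends on the stored deformation energy and the annealing temperature, and the Avrami exponent $n$ reflects the nucleation and growth conditions.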

  2. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)
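
    The exploitation of common intermediates mentioned above can be illustrated with the simplest case, s-type Gaussian integrals, where several integrals share the exponent combinations p = a + b and mu = ab/p. The sketch below is illustrative Python, not the Maple-generated FORTRAN of the paper.

    # Overlap and kinetic-energy integrals between two unnormalized s-type
    # Gaussians exp(-a|r-A|^2) and exp(-b|r-B|^2), with |A-B|^2 = ab2, written
    # so that the common intermediates p, mu, and s are computed once and reused.
    import math

    def overlap_and_kinetic(a: float, b: float, ab2: float):
        p = a + b                              # common intermediate
        mu = a * b / p                         # common intermediate
        s = (math.pi / p) ** 1.5 * math.exp(-mu * ab2)
        t = mu * (3.0 - 2.0 * mu * ab2) * s    # kinetic integral reuses mu and s
        return s, t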

  3. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified storage protocols declaration required for PanDA Pilot site movers. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  4. Integrating Computational Thinking into Technology and Engineering Education

    Science.gov (United States)

    Hacker, Michael

    2018-01-01

    Computational Thinking (CT) is being promoted as "a fundamental skill used by everyone in the world by the middle of the 21st Century" (Wing, 2006). CT has been effectively integrated into history, ELA, mathematics, art, and science courses (Settle, et al., 2012). However, there has been no analogous effort to integrate CT into…

  5. Computational Acoustics: Computational PDEs, Pseudodifferential Equations, Path Integrals, and All That Jazz

    Science.gov (United States)

    Fishman, Louis

    2000-11-01

    The role of mathematical modeling in the physical sciences will be briefly addressed. Examples will focus on computational acoustics, with applications to underwater sound propagation, electromagnetic modeling, optics, and seismic inversion. Direct and inverse wave propagation problems in both the time and frequency domains will be considered. Focusing on fixed-frequency (elliptic) wave propagation problems, the usual, two-way, partial differential equation formulation will be exactly reformulated, in a well-posed manner, as a one-way (marching) problem. This is advantageous for both direct and inverse considerations, as well as stochastic modeling problems. The reformulation will require the introduction of pseudodifferential operators and their accompanying phase space analysis (calculus), in addition to path integral representations for the fundamental solutions and their subsequent computational algorithms. Unlike the more traditional, purely numerical applications of, for example, finite-difference and finite-element methods, this approach, in effect, writes the exact, or, more generally, the asymptotically correct, answer as a functional integral and, subsequently, computes it directly. The overall computational philosophy is to combine analysis, asymptotics, and numerical methods to attack complicated, real-world problems. Exact and asymptotic analysis will stress the complementary nature of the direct and inverse formulations, as well as indicating the explicit structural connections between the time- and frequency-domain solutions.
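
    The one-way reformulation sketched in this abstract has a standard textbook skeleton, reproduced here as a hedged illustration (not taken from the talk): the fixed-frequency Helmholtz operator is factorized into incoming and outgoing range-marching factors built from a pseudodifferential square-root operator.

```latex
% Formal factorization of the 2D Helmholtz equation; exact for a
% range-independent medium, asymptotically correct more generally.
\left(\partial_z^{2} + \partial_x^{2} + k^{2}(x)\right)u = 0
\quad\Longrightarrow\quad
\left(\partial_z + iB\right)\left(\partial_z - iB\right)u = 0,
\qquad
B \equiv \sqrt{\partial_x^{2} + k^{2}(x)}.
% Retaining the outgoing factor gives the well-posed one-way marching problem
%   (\partial_z - iB) u = 0,
% where B is a pseudodifferential operator with symbol \sqrt{k^2(x) - \xi^2}.
```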

  6. Integrated Disposal Facility FY2011 Glass Testing Summary Report

    International Nuclear Information System (INIS)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Westsik, Joseph H.

    2011-01-01

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 x 10^5 m^3 of glass (Certa and Wells 2010). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex and is one of the largest inventories (approximately 8.9 x 10^14 Bq total activity) of long-lived radionuclides, principally ^99Tc (t_1/2 = 2.1 x 10^5 years), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2011 toward implementing the strategy with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses.

  7. Opportunities for artificial intelligence application in computer- aided management of mixed waste incinerator facilities

    International Nuclear Information System (INIS)

    Rivera, A.L.; Ferrada, J.J.; Singh, S.P.N.

    1992-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site. It is designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). This facility, known as the TSCA Incinerator, services seven DOE/OR installations. This incinerator was recently authorized for production operation in the United States for the processing of mixed (radioactively contaminated, chemically hazardous) wastes as regulated under TSCA and RCRA. Operation of the TSCA Incinerator is highly constrained as a result of regulatory, institutional, technical, and resource availability requirements. These requirements impact the characteristics and disposition of incinerator residues, limit the quality of liquid and gaseous effluents, limit the characteristics and rates of waste feeds and operating conditions, and restrict the handling of the waste feed inventories. This incinerator facility presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation, helping to promote and sustain a continuous performance improvement process while demonstrating compliance. Demonstrated computer-aided management systems could be transferred to future mixed waste incinerator facilities.

  8. Automated computation of one-loop integrals in massless theories

    International Nuclear Information System (INIS)

    Hameren, A. van; Vollinga, J.; Weinzierl, S.

    2005-01-01

    We consider one-loop tensor and scalar integrals, which occur in a massless quantum field theory, and we report on the implementation into a numerical program of an algorithm for the automated computation of these one-loop integrals. The number of external legs of the loop integrals is not restricted. All calculations are done within dimensional regularization. (orig.)
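
    For orientation, the simplest object in this class is the massless scalar bubble with one off-shell external momentum p; its closed form (a standard result, not taken from the paper) shows the 1/ε poles that dimensional regularization, with D = 4 - 2ε, is used to capture:

```latex
\int \frac{\mathrm{d}^{D}k}{i\pi^{D/2}}\,
\frac{1}{k^{2}\,(k+p)^{2}}
=
\frac{c_{\Gamma}}{\varepsilon\,(1-2\varepsilon)}\,(-p^{2})^{-\varepsilon},
\qquad
c_{\Gamma}
= \frac{\Gamma(1+\varepsilon)\,\Gamma^{2}(1-\varepsilon)}{\Gamma(1-2\varepsilon)},
\qquad
D = 4 - 2\varepsilon .
```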

  9. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  10. Computation of Surface Integrals of Curl Vector Fields

    Science.gov (United States)

    Hu, Chenglie

    2007-01-01

    This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
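
    The computation the article refers to is easy to reproduce symbolically: by Stokes' theorem, the flux of curl F through a surface equals the line integral of F around its boundary, so only a one-dimensional integral is ever evaluated. A minimal sketch (the field and surface are arbitrary choices, not from the article):

```python
# Flux of curl(F) through the upper unit hemisphere, computed as the line
# integral of F around the boundary circle x^2 + y^2 = 1, z = 0 (Stokes).
import sympy as sp

t = sp.symbols('t')
x, y, z = sp.symbols('x y z')
F = sp.Matrix([-y, x, z*x])                  # any smooth vector field

r = sp.Matrix([sp.cos(t), sp.sin(t), 0])     # counterclockwise boundary curve
F_on_curve = F.subs({x: r[0], y: r[1], z: r[2]})
flux = sp.integrate(F_on_curve.dot(r.diff(t)), (t, 0, 2*sp.pi))
print(flux)   # 2*pi -- no surface parametrization or normal vector needed
```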

  11. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are a dense curriculum, which makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants of this research were 54 fourth-semester students of the physics education department. As a result, we concluded that numerical computation could be integrated into the undergraduate physics education curriculum using spreadsheet Excel combined with another course. The results of this research complement studies on how to integrate numerical computation into learning physics using spreadsheet Excel.

  12. CMT scaling analysis and distortion evaluation in passive integral test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Wang Han; Chang Huajian

    2013-01-01

    The core makeup tank (CMT) is the crucial device of the AP1000 passive core cooling system, and reasonable scaling analysis of the CMT plays a key role in the design of passive integral test facilities. The H2TS method was used to perform scaling analysis for both the circulating mode and the draining mode of the CMT. The similarity criteria for the important CMT processes were then applied in the CMT scaling design of the ACME (advanced core-cooling mechanism experiment) facility now being built in China. Furthermore, the scaling distortion results of the CMT characteristic Π groups of ACME were calculated. Finally, the causes of the scaling distortion were analyzed and a distortion evaluation was conducted for the ACME facility. The dominant processes of the CMT circulating mode can be adequately simulated in the ACME facility, but the steam condensation process during CMT draining is not well preserved, because the excessive CMT mass leads to more energy being absorbed by the cold metal. However, comprehensive analysis indicates that the ACME facility with the high-pressure simulation scheme is able to properly represent the CMT's important phenomena and processes of the prototype nuclear plant. (authors)

  13. Vitrification Facility integrated system performance testing report

    International Nuclear Information System (INIS)

    Elliott, D.

    1997-01-01

    This report provides a summary of component and system performance testing associated with the Vitrification Facility (VF) following construction turnover. The VF at the West Valley Demonstration Project (WVDP) was designed to convert stored radioactive waste into a stable glass form for eventual disposal in a federal repository. Following an initial Functional and Checkout Testing of Systems (FACTS) Program and subsequent conversion of test stand equipment into the final VF, a testing program was executed to demonstrate successful performance of the components, subsystems, and systems that make up the vitrification process. Systems were started up and brought on line as construction was completed, until integrated system operation could be demonstrated to produce borosilicate glass using nonradioactive waste simulant. Integrated system testing and operation culminated in a successful Operational Readiness Review (ORR) and Department of Energy (DOE) approval to initiate vitrification of high-level waste (HLW) on June 19, 1996. Performance and integrated operational test runs conducted during the test program provided a means for critical examination, observation, and evaluation of the vitrification system. Test data taken for each Test Instruction Procedure (TIP) were used to evaluate component performance against system design and acceptance criteria, while test observations were used to correct, modify, or improve system operation. This process was critical in establishing operating conditions for the entire vitrification process.

  14. Derivation of integral energy balance for the manotea facility

    Energy Technology Data Exchange (ETDEWEB)

    Pollman, Anthony, E-mail: pollman@nps.edu [Mechanical and Aeronautical Engineering Department, United States Naval Postgraduate School, Monterey, CA 93943 (United States); Marzo, Marino di [Fire Protection Engineering Department, University of Maryland, College Park, MD 20742 (United States)

    2013-12-15

    Highlights: • An integral energy balance was derived for the MANOTEA facility. • A second equation was derived which frames transients in terms of inventory alone. • Both equations were implemented and showed good agreement with experimental data. • The equations capture the physical mechanisms behind MANOTEA transients. • Physical understanding is required in order to properly model these transients with TRACE. - Abstract: Rapid-condensation-induced fluid motion occurs in several nuclear reactor accident sequences, as well as during normal operation. Modeling these events is central to our ability to regulate and ensure safe reactor operations. The UMD-USNA Near One-dimensional Transient Experimental Apparatus (MANOTEA) was constructed in order to create a rapid-condensation dataset for subsequent comparison to TRACE output. This paper outlines a derivation of the energy balance for the facility. A path integral based on mass and energy, rather than fluid mechanical, considerations is derived in order to characterize the physical mechanisms governing MANOTEA transients. This equation is further simplified to obtain an expression that frames transients in terms of liquid inventory alone. Using data obtained from an actual transient, the path integral is implemented using three variables (change in liquid inventory, liquid inventory as a function of time, and change in metal temperature) to predict the outcome of a fourth independently measured variable (condenser pressure as a function of time). The implementation yields a very good approximation of the actual data. The inventory equation is also implemented and shows reasonable agreement. These equations, and the physical intuition that they yield, are key to properly characterizing MANOTEA transients and any subsequent modeling efforts.

  15. Reminder: Mandatory Computer Security Course

    CERN Multimedia

    IT Department

    2011-01-01

    Just like any other organization, CERN is permanently under attack – even right now. Consequently it's important to be vigilant about security risks, protecting CERN's reputation - and your work. The availability, integrity and confidentiality of CERN's computing services and the unhindered operation of its accelerators and experiments come down to the combined efforts of the CERN Security Team and you. In order to keep up with the attack trends, the Security Team regularly reminds CERN users about the computer security risks, and about the rules for using CERN's computing facilities. Therefore, a new dedicated basic computer security course has been designed to inform you about the "Do's" and "Don'ts" when using CERN's computing facilities. This course is mandatory for all persons owning a CERN computer account and must be followed once every three years. Users who have never done the course, or whose course needs to be renewed...

  16. A facility for training Space Station astronauts

    Science.gov (United States)

    Hajare, Ankur R.; Schmidt, James R.

    1992-01-01

    The Space Station Training Facility (SSTF) will be the primary facility for training the Space Station Freedom astronauts and the Space Station Control Center ground support personnel. Conceptually, the SSTF will consist of two parts: a Student Environment and an Author Environment. The Student Environment will contain trainers, instructor stations, computers and other equipment necessary for training. The Author Environment will contain the systems that will be used to manage, develop, integrate, test and verify, operate and maintain the equipment and software in the Student Environment.

  17. EPICS - MDSplus integration in the ITER Neutral Beam Test Facility

    International Nuclear Information System (INIS)

    Luchetta, Adriano; Manduchi, Gabriele; Barbalace, Antonio; Soppelsa, Anton; Taliercio, Cesare

    2011-01-01

    SPIDER, the ITER-size ion-source test bed in the ITER Neutral Beam Test Facility, is a fusion device requiring a complex central system to provide control and data acquisition, referred to as CODAS. The CODAS software architecture will rely on EPICS and MDSplus, two open-source, collaborative software frameworks, targeted at control and data acquisition, respectively. EPICS has been selected as ITER CODAC middleware and, as the final deliverable of the Neutral Beam Test Facility is the procurement of the ITER Heating Neutral Beam Injector, we decided to adopt this ITER technology. MDSplus is a software package for data management, supporting advanced concepts such as platform and underlying hardware independence, self-describing data, and a data-driven model. The combined use of EPICS and MDSplus is not new in fusion, but their level of integration will be new in SPIDER, achieved by a more refined data access layer. The paper presents the integration software to use EPICS and MDSplus effectively, including the definition of appropriate EPICS records to interact with MDSplus. The MDSplus and EPICS archive concepts are also compared on the basis of performance tests, and data streaming is investigated by ad-hoc measurements.
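
    At its most basic, the integration described above couples an EPICS client read to an MDSplus tree write. The sketch below is a hypothetical illustration using the pyepics and MDSplus Python bindings; the PV name, tree name, node path and shot number are invented, and the actual SPIDER data access layer is considerably more refined.

```python
# Hypothetical bridge: sample an EPICS process variable and archive it
# in an MDSplus pulse file. All names below are invented for illustration.
from epics import caget                      # EPICS Channel Access read
from MDSplus import Tree, Float64

SHOT = 123                                   # hypothetical pulse number
tree = Tree('spider', SHOT)                  # hypothetical MDSplus tree

value = caget('SPIDER:ISRC:CURRENT')         # hypothetical EPICS PV
node = tree.getNode('\\TOP.ISRC:CURRENT')    # hypothetical tree node
node.putData(Float64(value))                 # store the sample in MDSplus
```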

  18. Bibliography for computer security, integrity, and safety

    Science.gov (United States)

    Bown, Rodney L.

    1991-01-01

    A bibliography of computer security, integrity, and safety issues is given. The bibliography is divided into the following sections: recent national publications; books; journal, magazine articles, and miscellaneous reports; conferences, proceedings, and tutorials; and government documents and contractor reports.

  19. Vehicle Testing and Integration Facility; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-03-02

    Engineers at the National Renewable Energy Laboratory’s (NREL’s) Vehicle Testing and Integration Facility (VTIF) are developing strategies to address two separate but equally crucial areas of research: meeting the demands of electric vehicle (EV) grid integration and minimizing fuel consumption related to vehicle climate control. Dedicated to renewable and energy-efficient solutions, the VTIF showcases technologies and systems designed to increase the viability of sustainably powered vehicles. NREL researchers instrument every class of on-road vehicle, conduct hardware and software validation for EV components and accessories, and develop analysis tools and technology for the Department of Energy, other government agencies, and industry partners.

  20. Integrating Computational Science Tools into a Thermodynamics Course

    Science.gov (United States)

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and model and simulate complex concepts. In order to prepare future engineers with the ability to use computational tools in the context of their disciplines, some universities have started to integrate these tools within core courses. This paper evaluates the effect of introducing three computational modules within a thermodynamics course on student disciplinary learning and self-beliefs about computation. The results suggest that using worked examples paired with computer simulations to implement these modules has a positive effect on (1) student disciplinary learning, (2) student perceived ability to do scientific computing, and (3) student perceived ability to do computer programming. These effects were identified regardless of the students' prior experience with computer programming.

  1. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Science.gov (United States)

    du Plessis, Anton; le Roux, Stephan Gerhard; Guelpa, Anina

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, this facility offers open access to the general user community, including local researchers, companies and also remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments, i.e. a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of the facility users, along with expert supervision, if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility has accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as means. This paper summarises the laboratory's first four years by way of selected examples, both from published and unpublished projects. In the process a detailed description of the capabilities and facilities available to users is presented.

  2. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Plessis, Anton du, E-mail: anton2@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Physics Department, Stellenbosch University, Stellenbosch (South Africa); Roux, Stephan Gerhard le, E-mail: lerouxsg@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Guelpa, Anina, E-mail: aninag@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa)

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, this facility offers open access to the general user community, including local researchers, companies and also remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments, i.e. a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of the facility users, along with expert supervision, if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility has accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as means. This paper summarises the laboratory’s first four years by way of selected examples, both from published and unpublished projects. In the process a detailed description of the capabilities and facilities available to users is presented.

  3. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    Science.gov (United States)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1999-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  4. Integrating Computational Chemistry into a Course in Classical Thermodynamics

    Science.gov (United States)

    Martini, Sheridan R.; Hartzell, Cynthia J.

    2015-01-01

    Computational chemistry is commonly addressed in the quantum mechanics course of undergraduate physical chemistry curricula. Since quantum mechanics traditionally follows the thermodynamics course, there is a lack of curricula relating computational chemistry to thermodynamics. A method integrating molecular modeling software into a semester long…

  5. Risk evaluation system for facility safeguards and security planning

    International Nuclear Information System (INIS)

    Udell, C.J.; Carlson, R.L.

    1987-01-01

    The Risk Evaluation System (RES) is an integrated approach to determining safeguards and security effectiveness and risk. RES combines the planning and technical analysis into a format that promotes an orderly development of protection strategies, planning assumptions, facility targets, vulnerability and risk determination, enhancement planning, and implementation. In addition, the RES computer database program enhances the capability of the analyst to perform a risk evaluation of the facility. The computer database is menu driven using data input screens and contains an algorithm for determining the probability of adversary defeat and risk. Also, base case and adjusted risk data records can be maintained and accessed easily.

  7. Automation of a cryogenic facility by commercial process-control computer

    International Nuclear Information System (INIS)

    Sondericker, J.H.; Campbell, D.; Zantopp, D.

    1983-01-01

    To ensure that Brookhaven's superconducting magnets are reliable and their field quality meets accelerator requirements, each magnet is pre-tested at operating conditions after construction. MAGCOOL, the production magnet test facility, was designed to perform these tests, having the capacity to test ten magnets per five-day week. This paper describes the control aspects of MAGCOOL and the advantages afforded the designers by the implementation of a commercial process control computer system.

  8. Computing the demagnetizing tensor for finite difference micromagnetic simulations via numerical integration

    International Nuclear Information System (INIS)

    Chernyshenko, Dmitri; Fangohr, Hans

    2015-01-01

    In the finite difference method, which is commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available; however, at distances far from the cuboidal cell, the numerical evaluation of the analytical expression can be very inaccurate. Due to this large-distance inaccuracy, numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but at distances far from the originating cell a formula based on an asymptotic expansion has to be used. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression, which is best for short distances, (ii) the asymptotic expansion best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) for intermediate distances. For all three methods, we show the measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii). - Highlights: • We study the accuracy of demagnetization in finite difference micromagnetics. • We introduce a new sparse integration method to compute the tensor more accurately. • Newell, sparse integration and asymptotic methods are compared for all ranges.
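
    The integral being approximated can be sketched in a few lines. The code below is a simplified stand-in, not the paper's scheme: a dense tensor-product Gauss-Legendre rule replaces the sparse grid, and the far-field point-dipole kernel replaces the exact cell kernel; cell size and offset are hypothetical.

```python
# Average an N_xx-like kernel over a source cell at the origin and a
# target cell at `offset`, via 6D tensor-product Gauss-Legendre quadrature.
import numpy as np

def nxx_kernel(r):
    """Point-dipole N_xx kernel, standing in for the exact cell kernel."""
    d = np.linalg.norm(r)
    return (3.0 * r[0]**2 - d**2) / (4.0 * np.pi * d**5)

def nxx_cell_average(offset, size=(1.0, 1.0, 1.0), order=4):
    pts, wts = np.polynomial.legendre.leggauss(order)  # nodes/weights on [-1, 1]
    total = 0.0
    for idx in np.ndindex(*(order,) * 6):              # 3 source + 3 target dims
        w = np.prod([wts[k] for k in idx])
        src = np.array([pts[idx[d]] * size[d] / 2.0 for d in range(3)])
        tgt = np.array([offset[d] + pts[idx[3 + d]] * size[d] / 2.0
                        for d in range(3)])
        total += w * nxx_kernel(tgt - src)
    return total / 2.0**6       # weights over [-1, 1]^6 sum to 2^6

# Hypothetical target cell three cell-widths away along x
print(nxx_cell_average(offset=np.array([3.0, 0.0, 0.0])))
```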

  9. The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    Directory of Open Access Journals (Sweden)

    Wojtek James Goscinski

    2014-03-01

    Full Text Available The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a national imaging and visualisation facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  10. Development of the computer code to monitor gamma radiation in the nuclear facility environment

    International Nuclear Information System (INIS)

    Akhmad, Y. R.; Pudjiyanto, M.S.

    1998-01-01

    Computer codes for gamma radiation monitoring in the vicinity of a nuclear facility have been developed and can be introduced into commercial portable gamma analyzers. The crucial stage of the first-year activity was accomplished; that is, the codes have been tested to transfer data files (pulse height distributions) from the Micro NOMAD gamma spectrometer (an ORTEC product) and then convert them into dosimetric and physical quantities. These computer codes are called GABATAN (Gamma Analyzer of Batan) and NAGABAT (Natural Gamma Analyzer of Batan). GABATAN can be used at various nuclear facilities for analyzing gamma fields up to 9 MeV, while NAGABAT can be used for analyzing the contribution of natural gamma rays to the exposure rate at a certain location.

  11. Integrated Disposal Facility FY2011 Glass Testing Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Westsik, Joseph H.

    2011-09-29

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 x 10^5 m^3 of glass (Certa and Wells 2010). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex and is one of the largest inventories (approximately 8.9 x 10^14 Bq total activity) of long-lived radionuclides, principally ^99Tc (t_1/2 = 2.1 x 10^5 years), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2011 toward implementing the strategy with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses.

  12. Sextant: an expert system for transient analysis of nuclear reactors and integral test facilities

    International Nuclear Information System (INIS)

    Barbet, N.; Dumas, M.; Mihelich, G.

    1987-01-01

    Expert systems provide a new way of dealing with the computer-aided management of nuclear plants by combining several knowledge bases and reasoning modes together with a set of numerical models for real-time analysis of transients. New development tools are required, together with metaknowledge bases handling temporal hypothetical reasoning and planning. They have to be efficient and robust because, during a transient, neither measurements, nor models, nor scenarios are held as absolute references. SEXTANT is a general purpose physical analyzer intended to provide a pattern and avoid duplication of general tools and knowledge bases for similar applications. It combines several knowledge bases concerning measurements, models and the qualitative behavior of PWRs with a mechanism of conjecture-refutation and a set of simplified models matching the current physical state. A prototype is under assessment by dealing with integral test facility transients. For its development, SEXTANT requires a powerful shell. SPIRAL is such a toolkit, oriented towards online analysis of complex processes and already used in several applications.

  13. A computational- And storage-cloud for integration of biodiversity collections

    Science.gov (United States)

    Matsunaga, A.; Thompson, A.; Figueiredo, R. J.; Germain-Aubrey, C.C; Collins, M.; Beeman, R.S; Macfadden, B.J.; Riccardi, G.; Soltis, P.S; Page, L. M.; Fortes, J.A.B

    2013-01-01

    A core mission of the Integrated Digitized Biocollections (iDigBio) project is the building and deployment of a cloud computing environment customized to support the digitization workflow and integration of data from all U.S. nonfederal biocollections. iDigBio chose to use cloud computing technologies to deliver a cyberinfrastructure that is flexible, agile, resilient, and scalable to meet the needs of the biodiversity community. In this context, this paper describes the integration of open source cloud middleware, applications, and third party services using standard formats, protocols, and services. In addition, this paper demonstrates the value of the digitized information from collections in a broader scenario involving multiple disciplines.

  14. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than sector decomposition, the only existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
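
    The skeleton of such a method — derive a closed linear system of differential equations in a kinematic variable, fix simple boundary values, and integrate numerically — can be demonstrated on a toy example (not one of the paper's integrals). Here the "master" I1(s) = integral from 0 to 1 of dx/(x+s) closes with the trivial master I2 = 1, and the analytic value log((1+s)/s) checks the transported result.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy masters M = (I1, I2): I1(s) = integral_0^1 dx/(x+s), I2(s) = 1.
# Differentiating under the integral sign gives the closed linear system
#   dM/ds = A(s) M,  with  dI1/ds = (1/(1+s) - 1/s) * I2.
def rhs(s, M):
    A = np.array([[0.0, 1.0/(1.0 + s) - 1.0/s],
                  [0.0, 0.0]])
    return A @ M

M0 = np.array([np.log(2.0), 1.0])       # simple boundary values at s = 1

sol = solve_ivp(rhs, (1.0, 3.0), M0, rtol=1e-12, atol=1e-12)
print(sol.y[0, -1], np.log(4.0/3.0))    # transported vs analytic value at s = 3
```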

  15. Integrated Payment and Delivery Models Offer Opportunities and Challenges for Residential Care Facilities

    OpenAIRE

    Grabowski, David C.; Caudry, Daryl J.; Dean, Katie M.; Stevenson, David G.

    2015-01-01

    Under health care reform, a series of new financing and delivery models are being piloted to integrate health and long-term care services for older adults. To date, these programs have not encompassed residential care facilities, with most programs focusing on long-term care recipients in the community or the nursing home. Our analyses indicate that individuals living in residential care facilities have similarly high rates of chronic illness and Medicare utilization when compared with simila...

  16. Structural integrity assessment based on the HFR Petten neutron beam facilities

    CERN Document Server

    Ohms, C; Idsert, P V D

    2002-01-01

    Neutrons are becoming recognized as a valuable tool for structural-integrity assessment of industrial components and advanced materials development. Microstructure, texture and residual stress analyses are commonly performed by neutron diffraction, and a joint CEN/ISO Pre-Standard for residual stress analysis is under development. Furthermore, neutrons provide for defect analyses, i.e. of precipitates, voids, pores and cracks, through small-angle neutron scattering (SANS) or radiography. At the High Flux Reactor, 12 beam tubes have been installed for the extraction of thermal neutrons for such applications. Two of them are equipped with neutron diffractometers for residual stress and structure determination and have been extensively used in the past. Several other facilities are currently being reactivated and upgraded. These include the SANS and radiography facilities as well as a powder diffractometer. This paper summarizes the main characteristics and current status of these facilities as well as recently in...

  17. Soft computing integrating evolutionary, neural, and fuzzy systems

    CERN Document Server

    Tettamanzi, Andrea

    2001-01-01

    Soft computing encompasses various computational methodologies, which, unlike conventional algorithms, are tolerant of imprecision, uncertainty, and partial truth. Soft computing technologies offer adaptability as a characteristic feature and thus permit the tracking of a problem through a changing environment. Besides some recent developments in areas like rough sets and probabilistic networks, fuzzy logic, evolutionary algorithms, and artificial neural networks are core ingredients of soft computing, which are all bio-inspired and can easily be combined synergetically. This book presents a well-balanced integration of fuzzy logic, evolutionary computing, and neural information processing. The three constituents are introduced to the reader systematically and brought together in differentiated combinations step by step. The text was developed from courses given by the authors and offers numerous illustrations as

  18. Validation of an integral conceptual model of frailty in older residents of assisted living facilities.

    Science.gov (United States)

    Gobbens, Robbert J J; Krans, Anita; van Assen, Marcel A L M

    2015-01-01

    The aim of this cross-sectional study was to examine the validity of an integral model of the associations between life-course determinants, disease(s), frailty, and adverse outcomes in older persons who are resident in assisted living facilities. Between June 2013 and May 2014 seven assisted living facilities were contacted. A total of 221 persons completed the questionnaire on life-course determinants, frailty (using the Tilburg Frailty Indicator), self-reported chronic diseases, and adverse outcomes disability, quality of life, health care utilization, and falls. Adverse outcomes were analyzed with sequential (logistic) regression analyses. The integral model is partially validated. Life-course determinants and disease(s) affected only physical frailty. All three frailty domains (physical, psychological, social) together affected disability, quality of life, visits to a general practitioner, and falls. Contrary to the model, disease(s) had no effect on adverse outcomes after controlling for frailty. Life-course determinants affected adverse outcomes, with unhealthy lifestyle having consistent negative effects, and women had more disability, scored lower on physical health, and received more personal and informal care after controlling for all other predictors. The integral model of frailty is less useful for predicting adverse outcomes of residents of assisted living facilities than for community-dwelling older persons, because these residents are much frailer and already have access to healthcare facilities. The present study showed that a multidimensional assessment of frailty, distinguishing three domains of frailty (physical, psychological, social), is beneficial with respect to predicting adverse outcomes in residents of assisted living facilities. Copyright © 2015. Published by Elsevier Ireland Ltd.

  19. Three-dimensional coupled Monte Carlo-discrete ordinates computational scheme for shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Chen, Y.; Fischer, U.

    2005-01-01

    Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields several meters thick. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-discrete ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region, with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and a newly developed coupling interface program for the mapping process. Test calculations were performed with comparison to MCNP solutions. Satisfactory agreement was obtained between these two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme with the program system is a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)

  20. An integrated introduction to computer graphics and geometric modeling

    CERN Document Server

    Goldman, Ronald

    2009-01-01

    … this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to 'freedom' curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. …-Computer-Aided Design, 42, 2010… Many books concentrate on computer programming and soon beco

  1. Track Reconstruction with Cosmic Ray Data at the Tracker Integration Facility

    CERN Document Server

    Adam, Wolfgang; Dragicevic, Marko; Friedl, Markus; Fruhwirth, R; Hansel, S; Hrubec, Josef; Krammer, Manfred; Oberegger, Margit; Pernicka, Manfred; Schmid, Siegfried; Stark, Roland; Steininger, Helmut; Uhl, Dieter; Waltenberger, Wolfgang; Widl, Edmund; Van Mechelen, Pierre; Cardaci, Marco; Beaumont, Willem; de Langhe, Eric; de Wolf, Eddi A; Delmeire, Evelyne; Hashemi, Majid; Bouhali, Othmane; Charaf, Otman; Clerbaux, Barbara; Elgammal, J.-P. Dewulf. S; Hammad, Gregory Habib; de Lentdecker, Gilles; Marage, Pierre Edouard; Vander Velde, Catherine; Vanlaer, Pascal; Wickens, John; Adler, Volker; Devroede, Olivier; De Weirdt, Stijn; D'Hondt, Jorgen; Goorens, Robert; Heyninck, Jan; Maes, Joris; Mozer, Matthias Ulrich; Tavernier, Stefaan; Van Lancker, Luc; Van Mulders, Petra; Villella, Ilaria; Wastiels, C; Bonnet, Jean-Luc; Bruno, Giacomo; De Callatay, Bernard; Florins, Benoit; Giammanco, Andrea; Gregoire, Ghislain; Keutgen, Thomas; Kcira, Dorian; Lemaitre, Vincent; Michotte, Daniel; Militaru, Otilia; Piotrzkowski, Krzysztof; Quertermont, L; Roberfroid, Vincent; Rouby, Xavier; Teyssier, Daniel; Daubie, Evelyne; Anttila, Erkki; Czellar, Sandor; Engstrom, Pauli; Harkonen, J; Karimaki, V; Kostesmaa, J; Kuronen, Auli; Lampen, Tapio; Linden, Tomas; Luukka, Panja-Riina; Maenpaa, T; Michal, Sebastien; Tuominen, Eija; Tuominiemi, Jorma; Ageron, Michel; Baulieu, Guillaume; Bonnevaux, Alain; Boudoul, Gaelle; Chabanat, Eric; Chabert, Eric Christian; Chierici, Roberto; Contardo, Didier; Della Negra, Rodolphe; Dupasquier, Thierry; Gelin, Georges; Giraud, Noël; Guillot, Gérard; Estre, Nicolas; Haroutunian, Roger; Lumb, Nicholas; Perries, Stephane; Schirra, Florent; Trocme, Benjamin; Vanzetto, Sylvain; Agram, Jean-Laurent; Blaes, Reiner; Drouhin, Frédéric; Ernenwein, Jean-Pierre; Fontaine, Jean-Charles; Berst, Jean-Daniel; Brom, Jean-Marie; Didierjean, Francois; Goerlach, Ulrich; Graehling, Philippe; Gross, Laurent; Hosselet, J; Juillot, Pierre; Lounis, Abdenour; Maazouzi, Chaker; Olivetto, Christian; Strub, Roger; Van Hove, Pierre; Anagnostou, Georgios; Brauer, Richard; Esser, Hans; Feld, Lutz; Karpinski, Waclaw; Klein, Katja; Kukulies, Christoph; Olzem, Jan; Ostapchuk, Andrey; Pandoulas, Demetrios; Pierschel, Gerhard; Raupach, Frank; Schael, Stefan; Schwering, Georg; Sprenger, Daniel; Thomas, Maarten; Weber, Markus; Wittmer, Bruno; Wlochal, Michael; Beissel, Franz; Bock, E; Flugge, G; Gillissen, C; Hermanns, Thomas; Heydhausen, Dirk; Jahn, Dieter; Kaussen, Gordon; Linn, Alexander; Perchalla, Lars; Poettgens, Michael; Pooth, Oliver; Stahl, Achim; Zoeller, Marc Henning; Buhmann, Peter; Butz, Erik; Flucke, Gero; Hamdorf, Richard Helmut; Hauk, Johannes; Klanner, Robert; Pein, Uwe; Schleper, Peter; Steinbruck, G; Blum, P; De Boer, Wim; Dierlamm, Alexander; Dirkes, Guido; Fahrer, Manuel; Frey, Martin; Furgeri, Alexander; Hartmann, Frank; Heier, Stefan; Hoffmann, Karl-Heinz; Kaminski, Jochen; Ledermann, Bernhard; Liamsuwan, Thiansin; Muller, S; Muller, Th; Schilling, Frank-Peter; Simonis, Hans-Jürgen; Steck, Pia; Zhukov, Valery; Cariola, P; De Robertis, Giuseppe; Ferorelli, Raffaele; Fiore, Luigi; Preda, M; Sala, Giuliano; Silvestris, Lucia; Tempesta, Paolo; Zito, Giuseppe; Creanza, Donato; De Filippis, Nicola; De Palma, Mauro; Giordano, Domenico; Maggi, Giorgio; Manna, Norman; My, Salvatore; Selvaggi, Giovanna; Albergo, Sebastiano; Chiorboli, Massimiliano; Costa, Salvatore; Galanti, Mario; Giudice, Nunzio; Guardone, Nunzio; Noto, Francesco; Potenza, Renato; Saizu, Mirela Angela; Sparti, V; Sutera, Concetta; 
Tricomi, Alessia; Tuve, Cristina; Brianzi, Mirko; Civinini, Carlo; Maletta, Fernando; Manolescu, Florentina; Meschini, Marco; Paoletti, Simone; Sguazzoni, Giacomo; Broccolo, B; Ciulli, Vitaliano; Focardi, R. D'Alessandro. E; Frosali, Simone; Genta, Chiara; Landi, Gregorio; Lenzi, Piergiulio; Macchiolo, Anna; Magini, Nicolo; Parrini, Giuliano; Scarlini, Enrico; Cerati, Giuseppe Benedetto; Azzi, Patrizia; Bacchetta, Nicola; Candelori, Andrea; Dorigo, Tommaso; Kaminsky, A; Karaevski, S; Khomenkov, Volodymyr; Reznikov, Sergey; Tessaro, Mario; Bisello, Dario; De Mattia, Marco; Giubilato, Piero; Loreti, Maurizio; Mattiazzo, Serena; Nigro, Massimo; Paccagnella, Alessandro; Pantano, Devis; Pozzobon, Nicola; Tosi, Mia; Bilei, Gian Mario; Checcucci, Bruno; Fano, Livio; Servoli, Leonello; Ambroglini, Filippo; Babucci, Ezio; Benedetti, Daniele; Biasini, Maurizio; Caponeri, Benedetta; Covarelli, Roberto; Giorgi, Marco; Lariccia, Paolo; Mantovani, Giancarlo; Marcantonini, Marta; Postolache, Vasile; Santocchia, Attilio; Spiga, Daniele; Bagliesi, Giuseppe; Balestri, Gabriele; Berretta, Luca; Bianucci, S; Boccali, Tommaso; Bosi, Filippo; Bracci, Fabrizio; Castaldi, Rino; Ceccanti, Marco; Cecchi, Roberto; Cerri, Claudio; Cucoanes, Andi Sebastian; Dell'Orso, Roberto; Dobur, Didar; Dutta, Suchandra; Giassi, Alessandro; Giusti, Simone; Kartashov, Dmitry; Kraan, Aafke; Lomtadze, Teimuraz; Lungu, George-Adrian; Magazzu, Guido; Mammini, Paolo; Mariani, Filippo; Martinelli, Giovanni; Moggi, Andrea; Palla, Fabrizio; Palmonari, Francesco; Petragnani, Giulio; Profeti, Alessandro; Raffaelli, Fabrizio; Rizzi, Domenico; Sanguinetti, Giulio; Sarkar, Subir; Sentenac, Daniel; Serban, Alin Titus; Slav, Adrian; Soldani, A; Spagnolo, Paolo; Tenchini, Roberto; Tolaini, Sergio; Venturi, Andrea; Verdini, Piero Giorgio; Vos, Marcel; Zaccarelli, Luciano; Avanzini, Carlo; Basti, Andrea; Benucci, Leonardo; Bocci, Andrea; Cazzola, Ugo; Fiori, Francesco; Linari, Stefano; Massa, Maurizio; Messineo, Alberto; Segneri, Gabriele; Tonelli, Guido; Azzurri, Paolo; Bernardini, Jacopo; Borrello, Laura; Calzolari, Federico; Foa, Lorenzo; Gennai, Simone; Ligabue, Franco; Petrucciani, Giovanni; Rizzi, Andrea; Yang, Zong-Chang; Benotto, Franco; Demaria, Natale; Dumitrache, Floarea; Farano, R; Borgia, Maria Assunta; Castello, Roberto; Costa, Marco; Migliore, Ernesto; Romero, Alessandra; Abbaneo, Duccio; Abbas, M; Ahmed, Ijaz; Akhtar, I; Albert, Eric; Bloch, Christoph; Breuker, Horst; Butt, Shahid Aleem; Buchmuller, Oliver; Cattai, Ariella; Delaere, Christophe; Delattre, Michel; Edera, Laura Maria; Engstrom, Pauli; Eppard, Michael; Gateau, Maryline; Gill, Karl; Giolo-Nicollerat, Anne-Sylvie; Grabit, Robert; Honma, Alan; Huhtinen, Mika; Kloukinas, Kostas; Kortesmaa, Jarmo; Kottelat, Luc-Joseph; Kuronen, Auli; Leonardo, Nuno; Ljuslin, Christer; Mannelli, Marcello; Masetti, Lorenzo; Marchioro, Alessandro; Mersi, Stefano; Michal, Sebastien; Mirabito, Laurent; Muffat-Joly, Jeannine; Onnela, Antti; Paillard, Christian; Pal, Imre; Pernot, Jean-Francois; Petagna, Paolo; Petit, Patrick; Piccut, C; Pioppi, Michele; Postema, Hans; Ranieri, Riccardo; Ricci, Daniel; Rolandi, Gigi; Ronga, Frederic Jean; Sigaud, Christophe; Syed, A; Siegrist, Patrice; Tropea, Paola; Troska, Jan; Tsirou, Andromachi; Vander Donckt, Muriel; Vasey, François; Alagoz, Enver; Amsler, Claude; Chiochia, Vincenzo; Regenfus, Christian; Robmann, Peter; Rochet, Jacky; Rommerskirchen, Tanja; Schmidt, Alexander; Steiner, Stefan; Wilke, Lotte; Church, Ivan; Cole, Joanne; Coughlan, John A; Gay, 
Arnaud; Taghavi, S; Tomalin, Ian R; Bainbridge, Robert; Cripps, Nicholas; Fulcher, Jonathan; Hall, Geoffrey; Noy, Matthew; Pesaresi, Mark; Radicci, Valeria; Raymond, David Mark; Sharp, Peter; Stoye, Markus; Wingham, Matthew; Zorba, Osman; Goitom, Israel; Hobson, Peter R; Reid, Ivan; Teodorescu, Liliana; Hanson, Gail; Jeng, Geng-Yuan; Liu, Haidong; Pasztor, Gabriella; Satpathy, Asish; Stringer, Robert; Mangano, Boris; Affolder, K; Affolder, T; Allen, Andrea; Barge, Derek; Burke, Samuel; Callahan, D; Campagnari, Claudio; Crook, A; D'Alfonso, Mariarosaria; Dietch, J; Garberson, Jeffrey; Hale, David; Incandela, H; Incandela, Joe; Jaditz, Stephen; Kalavase, Puneeth; Kreyer, Steven Lawrence; Kyre, Susanne; Lamb, James; Mc Guinness, C; Mills, C; Nguyen, Harold; Nikolic, Milan; Lowette, Steven; Rebassoo, Finn; Ribnik, Jacob; Richman, Jeffrey; Rubinstein, Noah; Sanhueza, S; Shah, Yousaf Syed; Simms, L; Staszak, D; Stoner, J; Stuart, David; Swain, Sanjay Kumar; Vlimant, Jean-Roch; White, Dean; Ulmer, Keith; Wagner, Stephen Robert; Bagby, Linda; Bhat, Pushpalatha C; Burkett, Kevin; Cihangir, Selcuk; Gutsche, Oliver; Jensen, Hans; Johnson, Mark; Luzhetskiy, Nikolay; Mason, David; Miao, Ting; Moccia, Stefano; Noeding, Carsten; Ronzhin, Anatoly; Skup, Ewa; Spalding, William J; Spiegel, Leonard; Tkaczyk, Slawek; Yumiceva, Francisco; Zatserklyaniy, Andriy; Zerev, E; Anghel, Ioana Maria; Bazterra, Victor Eduardo; Gerber, Cecilia Elena; Khalatian, S; Shabalina, Elizaveta; Baringer, Philip; Bean, Alice; Chen, Jie; Hinchey, Carl Louis; Martin, Christophe; Moulik, Tania; Robinson, Richard; Gritsan, Andrei; Lae, Chung Khim; Tran, Nhan Viet; Everaerts, Pieter; Hahn, Kristan Allan; Harris, Philip; Nahn, Steve; Rudolph, Matthew; Sung, Kevin; Betchart, Burton; Demina, Regina; Gotra, Yury; Korjenevski, Sergey; Miner, Daniel Carl; Orbaker, Douglas; Christofek, Leonard; Hooper, Ryan; Landsberg, Greg; Nguyen, Duong; Narain, Meenakshi; Speer, Thomas; Tsang, Ka Vang

    2008-01-01

    The subsystems of the CMS silicon strip tracker were integrated and commissioned at the Tracker Integration Facility (TIF) in the period from November 2006 to July 2007. As part of the commissioning, large samples of cosmic ray data were recorded under various running conditions in the absence of a magnetic field. Cosmic rays detected by scintillation counters were used to trigger the readout of up to 15% of the final silicon strip detector, and over 4.7 million events were recorded. This document describes the cosmic track reconstruction and presents results on the performance of track and hit reconstruction obtained from dedicated analyses.

  2. A systematic and efficient method to compute multi-loop master integrals

    Directory of Open Access Journals (Sweden)

    Xiao Liu

    2018-04-01

    Full Text Available We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than sector decomposition, the only existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.

  3. Automation of electromagnetic compatability (EMC) test facilities

    Science.gov (United States)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.
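
    A present-day analogue of the desk-top computer-controller described above would drive the instruments over a remote-control bus. The sketch below is purely illustrative and not the 1986 MSFC system: it assumes the pyvisa package, and the instrument address, SCPI commands and limit value are all hypothetical.

```python
# Hypothetical automated EMC sweep: configure a receiver, fetch a trace,
# and flag limit violations for the certification report.
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource('GPIB0::18::INSTR')      # hypothetical receiver address

inst.write('*RST')                               # reset to a known state
inst.write('FREQ:STAR 30kHz')                    # hypothetical SCPI commands
inst.write('FREQ:STOP 30MHz')
trace = inst.query_ascii_values('TRAC? TRACE1')  # amplitude data per bin

LIMIT_DBUV = 60.0                                # hypothetical emission limit
violations = [i for i, level in enumerate(trace) if level > LIMIT_DBUV]
print(f'{len(violations)} frequency bins exceed the limit')
```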

  4. An integral effect test facility of the SMART, SMART ITL

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Sik; Moon, Sang Ki; Kim, Yeon Sik; Cho, Seok; Choi, Ki Yong; Bae, Hwang; Kim, Dong Eok; Choi, Nam Hyun; Min, Kyoung Ho; Ko, Yung Joo; Shin, Yong Cheol; Park, Rae Joon; Lee, Won Jae; Song, Chul Hwa; Yi, Sung Jae [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    SMART (System-integrated Modular Advanced ReacTor) is a 330 MWth integral pressurized water reactor (iPWR) developed by KAERI; it obtained standard design approval (SDA) from the Korean regulatory authority in July 2012. In the SMART design, the main components, including a pressurizer, reactor coolant pumps and steam generators, are installed in a single reactor pressure vessel without any large connecting pipes. As the LBLOCA scenario is thereby inherently excluded, the safety systems can be simplified to ensure safety during SBLOCA scenarios and other system transients. An integral effect test loop for the SMART (SMART ITL), also called FESTA, was designed to simulate the integral thermal hydraulic behavior of the SMART. The objectives of the SMART ITL are to investigate and understand the integral performance of reactor systems and components and the thermal hydraulic phenomena occurring in the system during normal, abnormal and emergency conditions, and to verify the system safety during various design basis events of the SMART. The integral effect test data will also be used to validate the related thermal hydraulic models of safety analysis codes such as TASS/SMR-S, which is used for performance and accident analysis of the SMART design. This paper introduces the scaling analysis and scientific design of the SMART ITL and presents its scaling analysis results.
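
    The flavor of such a scaling analysis can be shown with a few lines of arithmetic. The sketch below assumes an illustrative full-height, reduced-flow-area facility (the 1/49 area ratio is an assumption for illustration, not the published FESTA parameter set) and applies standard Ishii-type similarity relations:

    ```python
    # Full-height, reduced-area scaling arithmetic (Ishii-type relations).
    length_ratio = 1.0        # full-height facility: model/prototype heights equal
    area_ratio = 1.0 / 49.0   # assumed flow-area ratio for illustration

    volume_ratio = length_ratio * area_ratio          # fluid inventory scale
    time_ratio = length_ratio ** 0.5                  # 1.0 -> real-time simulation
    velocity_ratio = length_ratio ** 0.5              # prototype velocities preserved
    power_ratio = area_ratio * length_ratio ** 0.5    # core power scale

    print(f"volume 1/{1 / volume_ratio:.0f}, time ratio {time_ratio:.1f}, "
          f"power 1/{1 / power_ratio:.0f}")
    ```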

  5. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  6. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  7. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  8. Integrating Xgrid into the HENP distributed computing model

    International Nuclear Information System (INIS)

    Hajdu, L; Lauret, J; Kocoloski, A; Miller, M

    2008-01-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology
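
    For readers unfamiliar with the tool, job handling went through the xgrid(1) command-line client that shipped with OS X, which a scheduler such as SUMS can drive as a subprocess. The flags below follow my recollection of Apple's documentation and should be verified against it, and the controller host is hypothetical:

    ```python
    # Sketch of driving the OS X xgrid client from a submission layer.
    import subprocess

    CONTROLLER = "controller.example.edu"   # hypothetical Xgrid controller

    def xgrid(*args):
        cmd = ["xgrid", "-h", CONTROLLER, *args]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    # Submit a trivial job; the controller echoes back a job identifier.
    print(xgrid("-job", "submit", "/usr/bin/uname", "-a"))
    # Later, results can be fetched with something like:
    #   xgrid("-job", "results", "-id", "<jobID>")
    ```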

  9. Integrating Xgrid into the HENP distributed computing model

    Science.gov (United States)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  10. Integrating Xgrid into the HENP distributed computing model

    Energy Technology Data Exchange (ETDEWEB)

    Hajdu, L; Lauret, J [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kocoloski, A; Miller, M [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)], E-mail: kocolosk@mit.edu

    2008-07-15

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  11. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system

  12. Computer-Aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1996-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a commercial off-the-shelf (COTS) computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. This system also provides expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center, and provides back-up capabilities for the Plutonium Processing Facility

  13. Framework for Integrating Safety, Operations, Security, and Safeguards in the Design and Operation of Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Horak, Karl Emanuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaChance, Jeffrey L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tolk, Keith Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitehead, Donnie Wayne [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2007-10-01

    The US is currently on the brink of a nuclear renaissance that will result in near-term construction of new nuclear power plants. In addition, the Department of Energy’s (DOE) ambitious new Global Nuclear Energy Partnership (GNEP) program includes facilities for reprocessing spent nuclear fuel and reactors for transmuting safeguards material. The use of nuclear power and material has inherent safety, security, and safeguards (SSS) concerns that can impact the operation of the facilities. Recent concern over terrorist attacks and nuclear proliferation led to an increased emphasis on security and safeguard issues as well as the more traditional safety emphasis. To meet both domestic and international requirements, nuclear facilities include specific SSS measures that are identified and evaluated through the use of detailed analysis techniques. In the past, these individual assessments have not been integrated, which led to inefficient and costly design and operational requirements. This report provides a framework for a new paradigm where safety, operations, security, and safeguards (SOSS) are integrated into the design and operation of a new facility to decrease cost and increase effectiveness. Although the focus of this framework is on new nuclear facilities, most of the concepts could be applied to any new, high-risk facility.

  14. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  15. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  16. Analog Integrated Circuit Design for Spike Time Dependent Encoder and Reservoir in Reservoir Computing Processors

    Science.gov (United States)

    2018-01-01

    This multidisciplinary effort bridged high-performance computing, nanotechnology, and integrated circuits and systems. Subject terms: neuromorphic computing, neuron design, spike time dependent encoding, reservoir computing.

  17. Startup of the Whiteshell irradiation facility

    International Nuclear Information System (INIS)

    Barnard, J.W.; Stanley, F.W.

    1989-01-01

    Recently, a 10-MeV, 1-kW electron linear accelerator was installed in a specially designed irradiation facility at the Whiteshell Nuclear Research Establishment. The facility was designed for radiation applications research in the development of new radiation processes up to the pilot scale level. The accelerator is of advanced design. Automatic startup via computer control makes it compatible with industrial processing. It has been operated successfully as a fully integrated electron irradiator for a number of applications including curing of plastics and composites, sterilization of medical disposables and animal feed irradiation. We report here on our experience during the first six months of operation. (orig.)

  18. Startup of the Whiteshell irradiation facility

    Science.gov (United States)

    Barnard, J. W.; Stanley, F. W.

    1989-04-01

    Recently, a 10-MeV, 1-kW electron linear accelerator was installed in a specially designed irradiation facility at the Whiteshell Nuclear Research Establishment. The facility was designed for radiation applications research in the development of new radiation processes up to the pilot scale level. The accelerator is of advanced design. Automatic startup via computer control makes it compatible with industrial processing. It has been operated successfully as a fully integrated electron irradiator for a number of applications including curing of plastics and composites, sterilization of medical disposables and animal feed irradiation. We report here on our experience during the first six months of operation.

  19. Results of 15 years experiments in the PMK-2 integral-type facility for VVERs

    Energy Technology Data Exchange (ETDEWEB)

    Szabados, L.; Ezsoel, G.; Perneczky, L. [KFKI Atomic Energy Research Institute, Budapest (Hungary)

    2001-07-01

    Due to the specific features of the VVER-440/213-type reactors, the transient behaviour of such a reactor system differs from the usual PWR system behaviour. To provide an experimental database for the transient behaviour of VVER systems, the PMK integral-type facility, a scaled-down model of the Paks NPP, was designed and constructed in the early 1980's. Since the start-up of the facility, 48 experiments have been performed. The experiments confirmed that the facility is a suitable tool for computer code validation experiments and for the identification of basic thermal-hydraulic phenomena occurring during plant accidents. High international interest was shown by the four Standard Problem Exercises of the IAEA and by the projects financed by the EU-PHARE. A wide range of small- and medium-size LOCA sequences have been studied to determine the performance and effectiveness of ECC systems and to evaluate the thermal-hydraulic safety of the core. Extensive studies have been performed to investigate one- and two-phase natural circulation and the effect of disturbances coming from the secondary circuit, and to validate the effectiveness of accident management measures such as bleed and feed. The VVER-specific case, the opening of the SG collector cover, was also extensively investigated. Examples given in the report show a few results of experiments and the results of calculation analyses performed for validation purposes of codes like RELAP5, ATHLET and CATHARE. Some gaps remain in the Cross Reference Matrices for VVER reactors; therefore, further experiments are planned, primarily in support of accident management measures at low-power states of plants, to facilitate the improved safety management of VVER-440-type reactors. (authors)

  20. Results of 15 years experiments in the PMK-2 integral-type facility for VVERs

    International Nuclear Information System (INIS)

    Szabados, L.; Ezsoel, G.; Perneczky, L.

    2001-01-01

    Due to the specific features of the VVER-440/213-type reactors, the transient behaviour of such a reactor system differs from the usual PWR system behaviour. To provide an experimental database for the transient behaviour of VVER systems, the PMK integral-type facility, a scaled-down model of the Paks NPP, was designed and constructed in the early 1980's. Since the start-up of the facility, 48 experiments have been performed. The experiments confirmed that the facility is a suitable tool for computer code validation experiments and for the identification of basic thermal-hydraulic phenomena occurring during plant accidents. High international interest was shown by the four Standard Problem Exercises of the IAEA and by the projects financed by the EU-PHARE. A wide range of small- and medium-size LOCA sequences have been studied to determine the performance and effectiveness of ECC systems and to evaluate the thermal-hydraulic safety of the core. Extensive studies have been performed to investigate one- and two-phase natural circulation and the effect of disturbances coming from the secondary circuit, and to validate the effectiveness of accident management measures such as bleed and feed. The VVER-specific case, the opening of the SG collector cover, was also extensively investigated. Examples given in the report show a few results of experiments and the results of calculation analyses performed for validation purposes of codes like RELAP5, ATHLET and CATHARE. Some gaps remain in the Cross Reference Matrices for VVER reactors; therefore, further experiments are planned, primarily in support of accident management measures at low-power states of plants, to facilitate the improved safety management of VVER-440-type reactors. (authors)

  1. Integrated assessment of thermal hydraulic processes in W7-X fusion experimental facility

    Energy Technology Data Exchange (ETDEWEB)

    Kaliatka, T., E-mail: tadas.kaliatka@lei.lt; Uspuras, E.; Kaliatka, A.

    2017-02-15

    Highlights: • The model of the Ingress of Coolant Event experiment facility was developed using the RELAP5 code. • Calculation results were compared with Ingress of Coolant Event experiment data. • Using the experience gained, a numerical model of the Wendelstein 7-X facility was developed. • The analysis confirmed the pressure increase protection system for the LOCA event. - Abstract: Energy received from the nuclear fusion reaction is one of the most promising options for generating large amounts of carbon-free energy in the future. However, the physical and technical problems involved in this technology are complicated. Several experimental nuclear fusion devices around the world have already been constructed, and several are under construction. However, the processes in the cooling system of the in-vessel components, the vacuum vessel and the pressure increase protection system of nuclear fusion devices are not widely studied. The largest amount of radioactive material is concentrated in the vacuum vessel of the fusion device, which is designed for vacuum conditions inside the vessel. Rupture of a pipe in the cooling system of the in-vessel components may lead to a sharp pressure increase and possible damage of the vacuum vessel. To prevent overpressure, a pressure increase protection system should be designed and implemented. Therefore, systematic and detailed experimental and numerical studies of the thermal-hydraulic processes in the cooling system, vacuum vessel and pressure increase protection system are important and relevant. In this article, a numerical investigation of thermal-hydraulic processes in the cooling systems of in-vessel components, vacuum vessels and pressure increase protection systems of fusion devices is presented. Using the experience gained from the modelling of the “Ingress of Coolant Event” experimental facilities, a numerical model of the Wendelstein 7-X (W7-X) experimental fusion device was developed. The integrated analysis of the

  2. Competitiveness in organizational integrated computer system project management

    Directory of Open Access Journals (Sweden)

    Zenovic GHERASIM

    2010-06-01

    Full Text Available Organizational integrated computer system project management aims at achieving competitiveness through unitary, connected and personalised treatment of the requirements of this type of project, along with the adequate application of the basic management, administration and project planning principles, as well as the basic concepts of organisational information management development. The paper presents some aspects of competitiveness in organizational computer system project management, with specific reference to the projects of several Romanian companies.

  3. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    Science.gov (United States)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources users have available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve

  4. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    International Nuclear Information System (INIS)

    Miller, Stephen D; Herwig, Kenneth W; Ren, Shelly; Vazhkudai, Sudharshan S; Jemian, Pete R; Luitz, Steffen; Salnikov, Andrei A; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L

    2009-01-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources users have available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better

  5. Data Management and its Role in Delivering Science at DOE BES User Facilities - Past, Present, and Future

    International Nuclear Information System (INIS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Hagen, Mark E.

    2009-01-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources users have available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better

  6. Data Management and Its Role in Delivering Science at DOE BES User Facilities Past, Present, and Future

    International Nuclear Information System (INIS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.

    2009-01-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research (1). We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources users have available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need (2). Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve

  7. Integrated Disposal Facility FY 2012 Glass Testing Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, Eric M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kerisit, Sebastien N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Krogstad, Eirik J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Burton, Sarah D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bjornstad, Bruce N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cantrell, Kirk J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Snyder, Michelle MV [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Crum, Jarrod V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Westsik, Joseph H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-03-29

    PNNL is conducting work to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility for Hanford immobilized low-activity waste (ILAW). Before the ILAW can be disposed of, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. Key activities in FY12 include upgrading the STOMP/eSTOMP codes for near-field modeling, geochemical modeling of PCT tests to determine the reaction network to be used in the STOMP codes, conducting PUF tests on selected glasses to simulate and accelerate glass weathering, developing a Monte Carlo simulation tool to predict the characteristics of the weathered glass reaction layer as a function of glass composition, and characterizing glasses and soil samples exhumed from an 8-year lysimeter test. The purpose of this report is to summarize the progress made in fiscal year (FY) 2012 and the first quarter of FY 2013 toward implementing the strategy, with the goal of developing an understanding of the long-term corrosion behavior of LAW glasses.

  8. An algorithm of computing inhomogeneous differential equations for definite integrals

    OpenAIRE

    Nakayama, Hiromasa; Nishiyama, Kenta

    2010-01-01

    We give an algorithm to compute inhomogeneous differential equations for definite integrals with parameters. The algorithm is based on the integration algorithm for D-modules by Oaku. The main tool in the algorithm is the Gröbner basis method in the ring of differential operators.
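
    As a small commutative analogue of the machinery the abstract relies on (true D-module computations need the noncommutative Weyl algebra, supported by systems such as Risa/Asir), SymPy's groebner function shows the basic elimination behavior of a Gröbner basis:

    ```python
    # Groebner basis of a toy polynomial system; with lex order the basis
    # eliminates x, leaving a polynomial in y alone.
    from sympy import symbols, groebner

    x, y = symbols("x y")
    G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order="lex")
    print(G.exprs)   # e.g. a relation x + y**3 - y and y**4 - y**2 + 1
    ```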

  9. Investigation of analytical and experimental behavior of nuclear facility ventilation systems

    International Nuclear Information System (INIS)

    Smith, P.R.; Ricketts, C.I.; Andrae, R.W.; Bolstad, J.W.; Horak, H.L.; Martin, R.A.; Tang, P.K.; Gregory, W.S.

    1979-01-01

    The behavior of nuclear facility ventilation systems subjected to both natural and man-caused accidents is being investigated. The purpose of the paper is to present a program overview and highlight recent results of the investigations. The program includes both analytical and experimental investigations. Computer codes for predicting accident-induced gas dynamics and test facilities to obtain supportive experimental data to define structural integrity and confinement effectiveness of ventilation system components are described. A unique test facility and recently obtained structural limits for high efficiency particulate air filters are reported

  10. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
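
    The bridge's job-submission side can be pictured as ordinary REST traffic. The sketch below is an assumption-laden illustration: the base URL, endpoint paths, payload fields, and token scheme are placeholders, not the actual SCEAPI specification:

    ```python
    # Illustrative REST-style job submission of the kind the ARC-CE /
    # SCEAPI bridge performs; all endpoint details are assumed.
    import requests

    BASE = "https://sceapi.example.cn/api/v1"      # placeholder base URL
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

    def submit_job(executable, arguments, cores):
        payload = {"executable": executable, "arguments": arguments,
                   "cores": cores}
        r = requests.post(f"{BASE}/jobs", json=payload, headers=HEADERS,
                          timeout=30)
        r.raise_for_status()
        return r.json()["id"]          # assumed response field

    def job_state(job_id):
        r = requests.get(f"{BASE}/jobs/{job_id}", headers=HEADERS, timeout=30)
        r.raise_for_status()
        return r.json()["state"]       # assumed response field
    ```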

  11. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    Science.gov (United States)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores, from 150,000 cores (a 38 percent increase), in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing local AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper
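
    The "elastic" rental mechanism can be illustrated with the classic EC2 spot-request API via boto3; the AMI, instance type, bid price, and count below are placeholders, and a production facility would provision through its own decision and pilot layers rather than calling EC2 directly:

    ```python
    # Minimal sketch of renting capacity on the Amazon Spot market.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.request_spot_instances(
        SpotPrice="0.10",              # bid ceiling, USD/hour (placeholder)
        InstanceCount=10,
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # hypothetical worker AMI
            "InstanceType": "m4.xlarge",
        },
    )
    for req in resp["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])
    ```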

  12. Integration of radiation and physical safety in large irradiator facilities

    International Nuclear Information System (INIS)

    Lima, P.P.M.; Benedito, A.M.; Lima, C.M.A.; Silva, F.C.A. da

    2017-01-01

    Growing international concern about radioactive sources after the September 11, 2001 events has led to a strengthening of physical safety. There is evidence that the illicit use of radioactive sources is a real possibility and may result in harmful radiological consequences for the population and the environment. In Brazil there are about 2000 medical, industrial and research facilities with radioactive sources, of which 400 are in Categories 1 and 2 as classified by the International Atomic Energy Agency (IAEA); among them, large irradiators occupy a prominent position due to their very high cobalt-60 activities. Radiological safety is well established in these facilities, due to the intense work of the authorities in the country. The paper presents the main aspects of radiological and physical safety applied in large irradiators, in order to integrate both concepts for the benefit of safety as a whole. The research showed that items related to radiation safety are well defined, for example the tests on the access control devices to the irradiation room. On the other hand, items related to physical security, such as effective control of access to the company and the use of safety cameras throughout the company, are not yet fully incorporated. The integration of radiation and physical safety is fundamental for total safety. The elaboration of a Brazilian regulation on the subject is of extreme importance.

  13. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integrated computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  14. Why Integrate Educational and Community Facilities?

    Science.gov (United States)

    Fessas-Emmanouil, Helen D.

    1978-01-01

    Discusses coordination of educational and community facilities in order to encourage more rational investments and more efficient use of premises. Such coordination may reduce the economic burden imposed upon citizens for the provision of separate facilities for school and community. However, implementation of such a facility presupposes radical…

  15. A New Automated Instrument Calibration Facility at the Savannah River Site

    International Nuclear Information System (INIS)

    Polz, E.; Rushton, R.O.; Wilkie, W.H.; Hancock, R.C.

    1998-01-01

    The Health Physics Instrument Calibration Facility at the Savannah River Site in Aiken, SC was expressly designed and built to calibrate portable radiation survey instruments. The facility incorporates recent advances in automation technology, building layout and construction, and computer software to improve the calibration process. Nine new calibration systems automate instrument calibration and data collection. The building is laid out so that instruments are moved from one area to another in a logical, efficient manner. New software and hardware integrate all functions such as shipping/receiving, work flow, calibration, testing, and report generation. Benefits include a streamlined and integrated program, improved efficiency, reduced errors, and better accuracy

  16. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general purpose computers, a cost that is high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  17. Double crystal monochromator controlled by integrated computing on BL07A in New SUBARU, Japan

    Energy Technology Data Exchange (ETDEWEB)

    Okui, Masato, E-mail: okui@kohzu.co.jp [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); Yato, Naoki; Watanabe, Akinobu; Lin, Baiming; Murayama, Norio [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Fukushima, Sei, E-mail: FUKUSHIMA.Sei@nims.go.jp [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); National Institute for Material Sciences (Japan); Kanda, Kazuhiro [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan)

    2016-07-27

    The BL07A beamline in New SUBARU, University of Hyogo, has been used for many studies of new materials. A new double crystal monochromator controlled by integrated computing was designed and installed in the beamline in 2014. In this report we discuss the unique features of this new monochromator, MKZ-7NS. This monochromator was not designed exclusively for use in BL07A; on the contrary, it was designed to be installed at low cost in various beamlines to facilitate the industrial applications of medium-scale synchrotron radiation facilities. Thus, the design of the monochromator utilizes common packages that can satisfy the wide variety of specifications required at different synchrotron radiation facilities. This monochromator can be easily optimized for any beamline, since a few control parameters can be suitably customized. The beam offset can be fixed precisely even if one of the two slave axes is omitted, a design that reduces the convolution of mechanical errors. Moreover, the monochromator’s control mechanism is very compact, making it possible to reduce the size of the vacuum chamber.
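
    The coupled-axis arithmetic that an integrated monochromator controller must perform follows from standard fixed-exit geometry; the sketch below uses textbook Bragg relations for Si(111) and a generic offset, not the MKZ-7NS design parameters:

    ```python
    # Fixed-exit double-crystal monochromator geometry for Si(111).
    import math

    D_SI111 = 3.1356   # Si(111) d-spacing, angstroms
    HC = 12.3984       # keV * angstrom

    def bragg_angle_rad(energy_kev):
        lam = HC / energy_kev              # Bragg: lambda = 2 d sin(theta)
        return math.asin(lam / (2.0 * D_SI111))

    def crystal_gap_mm(energy_kev, beam_offset_mm=25.0):
        # fixed exit: offset = 2 * gap * cos(theta), so the gap must
        # track the Bragg angle as the energy is scanned
        return beam_offset_mm / (2.0 * math.cos(bragg_angle_rad(energy_kev)))

    for e in (5.0, 10.0, 20.0):
        print(f"{e:5.1f} keV: theta = {math.degrees(bragg_angle_rad(e)):6.3f} deg, "
              f"gap = {crystal_gap_mm(e):6.3f} mm")
    ```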

  18. Integrated O&M for energy generation and exchange facilities

    International Nuclear Information System (INIS)

    2016-01-01

    Ingeteam Service, part of the Ingeteam Group, is a leading provider of integrated O&M services at energy generation and exchange facilities worldwide. From its head office in the Albacete Science and Technology Park, it manages the work of the 1,300 employees that make up its global workforce, rendering services to wind farms, PV installations and power generation plants. In addition, it maintains an active participation strategy in a range of R&D+i programmes that improve existing technologies and are geared towards new production systems and new diagnostic techniques applied to the maintenance of renewable energy installations. (Author)

  19. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    This report presents a summary design description of the Conceptual Design for an Integral Monitored Retrievable Storage (MRS) Facility, as prepared by The Ralph M. Parsons Company under an A-E services contract with the Richland Operations Office of the Department of Energy. More detailed design requirements and design data are set forth in the Basis for Design and Design Report, bound under separate cover and available for reference by those desiring such information. The design data provided in this Design Report Executive Summary, the Basis for Design, and the Design Report include contributions by the Waste Technology Services Division of Westinghouse Electric Corporation (WEC), which was responsible for the development of the waste receiving, packaging, and storage systems, and Golder Associates Incorporated (GAI), which supported the design development with program studies. The MRS Facility design requirements, which formed the basis for the design effort, were prepared by Pacific Northwest Laboratory for the US Department of Energy, Richland Operations Office, in the form of a Functional Design Criteria (FDC) document, Rev. 4, August 1985. 9 figs., 6 tabs

  20. Integrated monitoring and reviewing systems for the Rokkasho Spent Fuel Receipt and Storage Facility

    International Nuclear Information System (INIS)

    Yokota, Yasuhiro; Ishikawa, Masayuki; Matsuda, Yuji

    1998-01-01

    The Rokkasho Spent Fuel Receipt and Storage (RSFS) Facility at the Rokkasho Reprocessing Plant (RRP) in Japan is expected to begin operations in 1998. Effective safeguarding by International Atomic Energy Agency (IAEA) and Japan Atomic Energy Bureau (JAEB) inspectors requires monitoring the time of transfer, direction of movement, and number of spent fuel assemblies transferred. At peak throughput, up to 1,000 spent fuel assemblies will be accepted by the facility in a 90-day period. In order for the safeguards inspector to efficiently review the resulting large amounts of inspection information, an unattended monitoring system was developed that integrates containment and surveillance (C/S) video with radiation monitors. This allows for an integrated review of the facility's radiation data, C/S video, and operator declaration data. This paper presents an outline of the integrated unattended monitoring hardware and associated data reviewing software. The hardware consists of a multicamera optical surveillance (MOS) system, radiation-monitoring gamma-ray and neutron detector (GRAND) electronics, and an intelligent local operating network (ILON). The ILON was used for time synchronization and MOS video triggers. The new software consists of a suite of tools, each one specific to a single data type: radiation data, surveillance video, and operator declarations. Each tool can be used in stand-alone mode as a separate application or configured to communicate and match time-synchronized data with any of the other tools. A data summary and comparison application (Integrated Review System [IRS]) coordinates the use of all of the data-specific review tools under a single user interface. It therefore automates and simplifies the importation of data and the data-specific analyses.
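
    At its core, an integrated review of this kind matches time-stamped streams against one another. The sketch below pairs each radiation-monitor event with the nearest surveillance frame inside a tolerance window; the data layout and two-second tolerance are assumptions, not the IRS file formats:

    ```python
    # Match radiation events to the nearest video frame within tol_s.
    import bisect

    def match_events(event_times, frame_times, tol_s=2.0):
        """Both inputs are sorted timestamps in seconds."""
        pairs = []
        for t in event_times:
            if not frame_times:
                pairs.append((t, None))
                continue
            i = bisect.bisect_left(frame_times, t)
            best = min((c for c in (i - 1, i) if 0 <= c < len(frame_times)),
                       key=lambda c: abs(frame_times[c] - t))
            near = frame_times[best]
            pairs.append((t, near if abs(near - t) <= tol_s else None))
        return pairs

    print(match_events([10.0, 55.3, 99.9], [9.1, 12.0, 54.0, 101.5]))
    # [(10.0, 9.1), (55.3, 54.0), (99.9, 101.5)]
    ```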

  1. Statistical Methodologies to Integrate Experimental and Computational Research

    Science.gov (United States)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with the statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
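
    As a minimal concrete instance of the design-of-experiments machinery discussed, the sketch below builds a two-level full-factorial design and estimates main effects and the interaction by least squares; the response values are invented for illustration:

    ```python
    # 2^2 full-factorial design with least-squares effect estimation.
    import itertools
    import numpy as np

    runs = np.array(list(itertools.product([-1, 1], repeat=2)))
    y = np.array([10.2, 13.9, 11.1, 18.8])   # illustrative responses

    # Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
    X = np.column_stack([np.ones(len(runs)), runs[:, 0], runs[:, 1],
                         runs[:, 0] * runs[:, 1]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(dict(zip(["b0", "b1", "b2", "b12"], np.round(coef, 3))))
    ```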

  2. CPP-603 Underwater Fuel Storage Facility Site Integrated Stabilization Management Plan (SISMP), Volume I

    International Nuclear Information System (INIS)

    Denney, R.D.

    1995-10-01

    The CPP-603 Underwater Fuel Storage Facility (UFSF) Site Integrated Stabilization Management Plan (SISMP) has been constructed to describe the activities required for the relocation of spent nuclear fuel (SNF) from the CPP-603 facility. These activities are the only Idaho National Engineering Laboratory (INEL) actions identified in the Implementation Plan developed to meet the requirements of the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 94-1 to the Secretary of Energy regarding an improved schedule for remediation in the Defense Nuclear Facilities Complex. As described in the DNFSB Recommendation 94-1 Implementation Plan, issued February 28, 1995, an INEL Spent Nuclear Fuel Management Plan is currently under development to direct the placement of SNF currently in existing INEL facilities into interim storage, and to address the coordination of intrasite SNF movements with new receipts and intersite transfers that were identified in the DOE SNF Programmatic and INEL Environmental Restoration and Waste Management Environmental Impact Statement Record of Decision. This SISMP will be a subset of the INEL Spent Nuclear Fuel Management Plan and the activities described are being coordinated with other INEL SNF management activities. The CPP-603 relocation activities have been assigned a high priority so that established milestones will be met, but there will be some cases where other activities will take precedence in utilization of available resources. The Draft INEL Site Integrated Stabilization Management Plan (SISMP), INEL-94/0279, Draft Rev. 2, dated March 10, 1995, is being superseded by the INEL Spent Nuclear Fuel Management Plan and this CPP-603 specific SISMP

  3. Computer program for storage of historical and routine safety data related to radiologically controlled facilities

    International Nuclear Information System (INIS)

    Marsh, D.A.; Hall, C.J.

    1984-01-01

    A method for tracking and quick retrieval of the radiological status of radiation and industrial safety systems in an active or inactive facility has been developed. The system uses a minicomputer, a graphics plotter, and mass storage devices. Software has been developed which allows input and storage of architectural details, radiological conditions such as exposure rates, current locations of safety systems, and routine and historical information on exposure and contamination levels. A blueprint-size digitizer is used for input. The computer program retains facility floor plans in three-dimensional arrays. The software accesses an eight-pen color plotter for output. The plotter generates color plots of the floor plans and safety systems on 8 1/2 x 11 or 20 x 30 paper or on overhead transparencies for reports and presentations.
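    The array-based storage of floor plans and survey data lends itself to a simple illustration. The Python sketch below is a hypothetical stand-in for the minicomputer program: the grid dimensions, the mR/h units, and the function names are assumptions made for the example.

        import numpy as np

        # Hypothetical layout: 3 floors, each discretized on a 20 x 30 grid.
        # exposure[floor, row, col] holds the surveyed exposure rate (mR/h).
        exposure = np.zeros((3, 20, 30))

        def record_survey(floor, row, col, rate_mr_per_h):
            exposure[floor, row, col] = rate_mr_per_h

        def hot_spots(threshold=2.0):
            # Return (floor, row, col) locations above the threshold rate.
            return list(zip(*np.nonzero(exposure > threshold)))

        record_survey(1, 5, 12, 4.2)
        record_survey(2, 0, 3, 0.3)
        print(hot_spots())  # -> [(1, 5, 12)]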

  4. Distributed and multi-core computation of 2-loop integrals

    International Nuclear Information System (INIS)

    De Doncker, E; Yuasa, F

    2014-01-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals
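    The structure of the iterated/repeated scheme, an outer 1-D adaptive integral whose integrand is itself an inner 1-D adaptive integral, with independent outer evaluations run concurrently, can be sketched in Python. This stands in for the QUADPACK/OpenMP implementation only; the integrand and the panel decomposition of the outer interval are invented for the example.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np
        from scipy.integrate import quad

        def inner(x):
            # Inner 1-D adaptive integral for a fixed outer abscissa x.
            val, _ = quad(lambda y: 1.0 / (1.0 + x * y + y * y), 0.0, 1.0)
            return val

        def outer_panel(bounds):
            a, b = bounds
            val, _ = quad(inner, a, b)  # each evaluation triggers an inner quad
            return val

        if __name__ == "__main__":
            # Split the outer interval [0, 1] into panels handled in parallel.
            edges = np.linspace(0.0, 1.0, 9)
            panels = list(zip(edges[:-1], edges[1:]))
            with ProcessPoolExecutor() as pool:
                total = sum(pool.map(outer_panel, panels))
            print("2-D integral ~", total)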

  5. Atmospheric dispersion calculation for postulated accidents of nuclear facilities and the computer code: PANDA

    International Nuclear Information System (INIS)

    Kitahara, Yoshihisa; Kishimoto, Yoichiro; Narita, Osamu; Shinohara, Kunihiko

    1979-01-01

    Several calculation methods for the relative concentration (X/Q) and relative cloud-gamma dose (D/Q) of radioactive materials released from nuclear facilities in a postulated accident are presented. The procedure has been formulated as the computer program PANDA, and its usage is explained. (author)
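    A representative X/Q calculation of the kind such codes automate is the steady-state Gaussian plume with ground reflection. The Python sketch below is illustrative only: it assumes a ground-level receptor and uses Briggs-style open-country dispersion fits for stability class D, which are not necessarily the correlations used in PANDA.

        import math

        def chi_over_q(x_m, y_m, u_ms, release_h_m):
            # Ground-level relative concentration X/Q (s/m^3) from a steady
            # Gaussian plume with ground reflection. The sigma fits below are
            # rough power-law approximations for rural stability class D.
            sigma_y = 0.08 * x_m / math.sqrt(1.0 + 0.0001 * x_m)
            sigma_z = 0.06 * x_m / math.sqrt(1.0 + 0.0015 * x_m)
            return (math.exp(-y_m ** 2 / (2 * sigma_y ** 2))
                    * math.exp(-release_h_m ** 2 / (2 * sigma_z ** 2))
                    / (math.pi * sigma_y * sigma_z * u_ms))

        # Centerline value 1 km downwind, 2 m/s wind, 30 m release height.
        print(chi_over_q(1000.0, 0.0, 2.0, 30.0))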

  6. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    International Nuclear Information System (INIS)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System, WBS 1.5.1, which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in ICCS (WBS 1.5), which is the document directly above it in the requirements hierarchy.

  7. Integration of case study approach, project design and computer ...

    African Journals Online (AJOL)

    Integration of case study approach, project design and computer modeling in managerial accounting education ... Journal of Fundamental and Applied Sciences ... in the Laboratory of Management Accounting and Controlling Systems at the ...

  8. A specialized ODE integrator for the efficient computation of parameter sensitivities

    Directory of Open Access Journals (Sweden)

    Gonnet Pedro

    2012-05-01

    Full Text Available Abstract Background Dynamic mathematical models in the form of systems of ordinary differential equations (ODEs play an important role in systems biology. For any sufficiently complex model, the speed and accuracy of solving the ODEs by numerical integration is critical. This applies especially to systems identification problems where the parameter sensitivities must be integrated alongside the system variables. Although several very good general purpose ODE solvers exist, few of them compute the parameter sensitivities automatically. Results We present a novel integration algorithm that is based on second derivatives and contains other unique features such as improved error estimates. These features allow the integrator to take larger time steps than other methods. In practical applications, i.e. systems biology models of different sizes and behaviors, the method competes well with established integrators in solving the system equations, and it outperforms them significantly when local parameter sensitivities are evaluated. For ease-of-use, the solver is embedded in a framework that automatically generates the integrator input from an SBML description of the system of interest. Conclusions For future applications, comparatively ‘cheap’ parameter sensitivities will enable advances in solving large, otherwise computationally expensive parameter estimation and optimization problems. More generally, we argue that substantially better computational performance can be achieved by exploiting characteristics specific to the problem domain; elements of our methods such as the error estimation could find broader use in other, more general numerical algorithms.
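    The forward-sensitivity idea, integrating d(state)/d(parameter) alongside the state, can be shown on a one-parameter decay model. The Python sketch below uses SciPy's general-purpose solver rather than the second-derivative integrator the paper describes, so it illustrates the problem being solved, not the authors' method.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, z, k):
            x, s = z                 # state x and its sensitivity s = dx/dk
            return [-k * x,          # model:       dx/dt = -k*x
                    -k * s - x]      # sensitivity: ds/dt = (df/dx)*s + df/dk

        k, x0 = 0.5, 1.0
        sol = solve_ivp(rhs, (0.0, 10.0), [x0, 0.0], args=(k,), rtol=1e-8)
        # Analytic check: s(t) = -x0 * t * exp(-k*t)
        err = np.max(np.abs(sol.y[1] - (-x0 * sol.t * np.exp(-k * sol.t))))
        print("max sensitivity error:", err)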

  9. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    International Nuclear Information System (INIS)

    Arezzini, S; Carboni, A; Caruso, G; Ciampa, A; Coscetti, S; Mazzoni, E; Piras, S

    2014-01-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The data centre, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStats) implemented in multicore systems. In particular, a POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both as an SRM data repository and for interactive POSIX access, is therefore described. Such a common infrastructure gives users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (national INFN infrastructure) to extend site access and use to a geographically distributed community. This infrastructure also hosts a national computing facility serving the INFN theoretical community, enabling a synergic use of computing and storage resources. The centre, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via an InfiniBand connection) has been installed and managed, and this facility is now being upgraded to provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  10. A new 3-D integral code for computation of accelerator magnets

    International Nuclear Information System (INIS)

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice the code can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab
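    The edge-element formulation itself is too involved for a short example, but the basic appeal of integral methods, evaluating fields directly from their sources without meshing the far field, is easy to demonstrate with a Biot-Savart line integral. The Python sketch below computes the on-axis field of a circular current loop and checks it against the closed-form result; the geometry and discretization are illustrative, not taken from the code described above.

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

        def loop_field(point, radius=1.0, current=1.0, n=2000):
            # B at `point` from a circular loop in the x-y plane, by direct
            # numerical evaluation of the Biot-Savart line integral.
            phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            src = np.column_stack([radius * np.cos(phi), radius * np.sin(phi),
                                   np.zeros_like(phi)])
            dl = np.column_stack([-np.sin(phi), np.cos(phi),
                                  np.zeros_like(phi)]) * (2 * np.pi * radius / n)
            r = point - src
            dist = np.linalg.norm(r, axis=1, keepdims=True)
            dB = MU0 * current * np.cross(dl, r) / (4 * np.pi * dist ** 3)
            return dB.sum(axis=0)

        z = 0.5
        numeric = loop_field(np.array([0.0, 0.0, z]))
        analytic = MU0 * 1.0 / (2.0 * (1.0 + z * z) ** 1.5)  # R = I = 1
        print(numeric[2], analytic)  # z-components agree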

  11. Computer algebra in quantum field theory integration, summation and special functions

    CERN Document Server

    Schneider, Carsten

    2013-01-01

    The book focuses on advanced computer algebra methods and special functions that have striking applications in the context of quantum field theory. It presents the state of the art and new methods for (infinite) multiple sums, multiple integrals, in particular Feynman integrals, and difference and differential equations, in the format of survey articles. The presented techniques emerge from interdisciplinary fields: mathematics, computer science and theoretical physics; the articles are written by mathematicians and physicists with the goal that both groups can learn from the other field.

  12. Taking the classical large audience university lecture online using tablet computer and webconferencing facilities

    DEFF Research Database (Denmark)

    Brockhoff, Per B.

    2011-01-01

    During four offerings (September 2008 – May 2011) of the course 02402 Introduction to Statistics for Engineering students at DTU, with an average of 256 students, the lecturing was carried out 100% through a tablet computer combined with the web conferencing facility Adobe Connect (version 7...

  13. Coal-fired MHD test progress at the Component Development and Integration Facility

    International Nuclear Information System (INIS)

    Hart, A.T.; Rivers, T.J.; Alsberg, C.M.; Filius, K.D.

    1992-01-01

    The Component Development and Integration Facility (CDIF) is a Department of Energy test facility operated by MSE, Inc. In the fall of 1984, a 50-MWt, pressurized, slag-rejecting coal-fired combustor (CFC) replaced the oil-fired combustor in the test train. In the spring of 1989, a coal-fired precombustor was added to the test hardware, and current controls were installed in the spring of 1990. In the fall of 1990, the slag rejector was installed. MSE test hardware activities included installing the final workhorse channel and modifying the coal-fired combustor by installing improved-design and proof-of-concept (POC) test pieces. This paper discusses the involvement of this hardware in test progress during the past year. Testing during the last year emphasized the final workhorse hardware; this testing will be discussed. Facility modifications and system upgrades for improved operation and duration testing will also be discussed. In addition, this paper will address long-term testing plans.

  14. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    relevant European Research Infrastructures in the fields of Earth Science (EPOS and ICOS), Bioinformatics (BBMRI and ELIXIR) and Space Physics (EISCAT-3D). The first outcome of this activity has been the definition of a generic use case that captures the typical user scenario with respect to the integrated use of the EGI and EUDAT infrastructures. This generic use case allows a user to instantiate a set of Virtual Machine images on the EGI Federated Cloud to perform computational jobs that analyse data previously stored on EUDAT long-term storage systems. The results of such analysis can be staged back to EUDAT storage and, if needed, assigned Permanent Identifiers (PIDs) for future use. The implementation of this generic use case requires the following integration activities between EGI and EUDAT: (1) harmonisation of the user authentication and authorisation models, and (2) implementation of interface connectors between the relevant EGI and EUDAT services, particularly the EGI Cloud compute facilities and the EUDAT long-term storage and PID systems. In the presentation, the collected user requirements and the implementation status of the generic use case will be shown. Furthermore, it will be described how the generic use case is currently applied to satisfy EPOS and ICOS needs.

  15. Integrating computer programs for engineering analysis and design

    Science.gov (United States)

    Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.

    1983-01-01

    The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.
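    A minimal relational repository in the spirit of ARIS, a data table paired with a dictionary that describes each quantity and a record of which program produced it, might look like the Python sketch below. The schema and the wing_area example are invented, and SQLite merely stands in for the original information system.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE dictionary (name TEXT PRIMARY KEY,
                                     units TEXT, descr TEXT);
            CREATE TABLE design_data (name TEXT, value REAL,
                                      source TEXT,  -- producing program
                                      FOREIGN KEY(name)
                                          REFERENCES dictionary(name));
        """)
        db.execute("INSERT INTO dictionary VALUES "
                   "('wing_area', 'm^2', 'reference wing area')")
        db.execute("INSERT INTO design_data VALUES "
                   "('wing_area', 36.2, 'geometry')")

        # A downstream analysis program reads the value by name:
        value, = db.execute("SELECT value FROM design_data "
                            "WHERE name = 'wing_area'").fetchone()
        print("wing_area =", value)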

  16. National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3

    International Nuclear Information System (INIS)

    Wiedwald, J.; VanArsdall, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3, which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally critical hardware triggers to components and equipment in other NIF systems.

  17. Animal facilities

    International Nuclear Information System (INIS)

    Fritz, T.E.; Angerman, J.M.; Keenan, W.G.; Linsley, J.G.; Poole, C.M.; Sallese, A.; Simkins, R.C.; Tolle, D.

    1981-01-01

    The animal facilities in the Division are described. They consist of kennels, animal rooms, service areas, and technical areas (examining rooms, operating rooms, pathology labs, x-ray rooms, and 60Co exposure facilities). The computer support facility is also described. The advent of the Conversational Monitor System at Argonne has launched a new effort to set up conversational computing and graphics software for users. The existing LSI-11 data acquisition systems have been further enhanced and expanded. The divisional radiation facilities include a number of gamma, neutron, and x-ray radiation sources with accompanying areas for related equipment. There are five 60Co irradiation facilities; a research reactor, Janus, is a source of fission-spectrum neutrons; two other neutron sources in the Chicago area are also available to the staff for cell biology studies. The electron microscope facilities are also described.

  18. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrickson, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  19. Teaching ergonomics to nursing facility managers using computer-based instruction.

    Science.gov (United States)

    Harrington, Susan S; Walker, Bonnie L

    2006-01-01

    This study offers evidence that computer-based training is an effective tool for teaching nursing facility managers about ergonomics and increasing their awareness of potential problems. Study participants (N = 45) were randomly assigned to a treatment or control group. The treatment group completed the ergonomics training and a pre- and posttest. The control group completed the pre- and posttests without training. Treatment group participants improved significantly from 67% on the pretest to 91% on the posttest, a gain of 24 percentage points. Differences between mean scores for the control group were not significant for the total score or for any of the subtests.

  20. The integration of expert knowledge in decision support systems for facility location planning

    NARCIS (Netherlands)

    Arentze, T.A.; Borgers, A.W.J.; Timmermans, H.J.P.

    1995-01-01

    The integration of expert systems in DSS has led to a new generation of systems commonly referred to as knowledge-based or intelligent DSS. This paper investigates the use of expert system technology for the development of a knowledge-based DSS for the planning of retail and service facilities.

  1. Computational Science Facility (CSF)

    Data.gov (United States)

    Federal Laboratory Consortium — PNNL Institutional Computing (PIC) is focused on meeting DOE's mission needs and is part of PNNL's overarching research computing strategy. PIC supports large-scale...

  2. DNA-Enabled Integrated Molecular Systems for Computation and Sensing

    Science.gov (United States)

    2014-05-21

    Computational devices can be chemically conjugated to different strands of DNA that are then self-assembled according to strict Watson-Crick binding rules... The guided folding of DNA, inspired by nature, allows designs to manipulate molecular-scale processes unlike any other material system. Thus, DNA can be...

  3. IOTA (Integrable Optics Test Accelerator): facility and experimental beam physics program

    Science.gov (United States)

    Antipov, S.; Broemmelsiek, D.; Bruhwiler, D.; Edstrom, D.; Harms, E.; Lebedev, V.; Leibfritz, J.; Nagaitsev, S.; Park, C. S.; Piekarz, H.; Piot, P.; Prebys, E.; Romanov, A.; Ruan, J.; Sen, T.; Stancari, G.; Thangaraj, C.; Thurman-Keup, R.; Valishev, A.; Shiltsev, V.

    2017-03-01

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. The physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  4. IOTA (Integrable Optics Test Accelerator): Facility and experimental beam physics program

    International Nuclear Information System (INIS)

    Antipov, Sergei; Broemmelsiek, Daniel; Bruhwiler, David; Edstrom, Dean; Harms, Elvin

    2017-01-01

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. Finally, the physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  5. Integrated five station nondestructive assay system for the support of decontamination and decommissioning of a former plutonium mixed oxide fuel fabrication facility

    International Nuclear Information System (INIS)

    Caldwell, J.T.; Bieri, J.M.; Hastings, R.D.; Horton, W.S.; Kuckertz, T.H.; Kunz, W.E.; Plettenberg, K.; Smith, L.D.

    1990-01-01

    The goal of a safe, efficient and economic decontamination and decommissioning of plutonium facilities can be greatly enhanced through the intelligent use of an integrated system of nondestructive assay equipment. We have designed and fabricated such a system utilizing five separate NDA stations integrated through a single data acquisition and management personal-computer-based controller. The initial station utilizes a passive neutron measurement to determine item Pu inventory to the 0.1 g level prior to insertion into the decontamination cell. A large active neutron station integrated into the cell is used to measure decontamination effectiveness at the 10 nCi/g level. Cell Pu buildup at critical points is monitored with passive neutron detectors. An active neutron station having better than 1 mg Pu assay sensitivity is used to quantify final compacted waste pucks outside the cell. Bulk Pu in various forms and isotopic enrichments is quantified in a combined passive neutron coincidence and high-resolution gamma-ray spectrometer station outside the cell. Item control and Pu inventory are managed with bar-code labeling and a station-integrating algorithm. Overall economy is achieved by multiple-station use of the same expensive hardware, such as the neutron generator.

  6. Comparing the influence of spectro-temporal integration in computational speech segregation

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2016-01-01

    The goal of computational speech segregation systems is to automatically segregate a target speaker from interfering maskers. Typically, these systems include a feature extraction stage in the front-end and a classification stage in the back-end. A spectro-temporal integration strategy can be applied in either the front-end, using the so-called delta features, or in the back-end, using a second classifier that exploits the posterior probability of speech from the first classifier across a spectro-temporal window. This study systematically analyzes the influence of such stages on segregation ... metric that comprehensively predicts computational segregation performance and correlates well with intelligibility. The outcome of this study could help to identify the most effective spectro-temporal integration strategy for computational segregation systems...
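    The front-end variant mentioned above, delta features computed over a temporal window, can be sketched in Python. The two-frame window and the 32-channel feature matrix are assumptions made for the example, not the configuration studied in the paper.

        import numpy as np

        def delta_features(feats, n=2):
            # Regression-based deltas over +/- n frames; `feats` has shape
            # (frames, channels), e.g., a gammatone feature matrix.
            denom = 2.0 * sum(i * i for i in range(1, n + 1))
            padded = np.pad(feats, ((n, n), (0, 0)), mode="edge")
            delta = np.zeros_like(feats, dtype=float)
            for i in range(1, n + 1):
                delta += i * (padded[n + i: n + i + len(feats)]
                              - padded[n - i: n - i + len(feats)])
            return delta / denom

        frames = np.random.rand(100, 32)     # 100 frames, 32 channels
        print(delta_features(frames).shape)  # (100, 32)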

  7. Structural Integrity Program for the Calcined Solids Storage Facilities at the Idaho Nuclear Technology and Engineering Center

    International Nuclear Information System (INIS)

    Bryant, J.W.; Nenni, J.A.

    2003-01-01

    This report documents the activities of the structural integrity program at the Idaho Nuclear Technology and Engineering Center relevant to the high-level waste Calcined Solids Storage Facilities and associated equipment, as required by DOE M 435.1-1, "Radioactive Waste Management Manual." Based on the evaluation documented in this report, the Calcined Solids Storage Facilities are not leaking and are structurally sound for continued service. Recommendations are provided for continued monitoring of the Calcined Solids Storage Facilities.

  8. Structural Integrity Program for the Calcined Solids Storage Facilities at the Idaho Nuclear Technology and Engineering Center

    International Nuclear Information System (INIS)

    Jeffrey Bryant

    2008-01-01

    This report documents the activities of the structural integrity program at the Idaho Nuclear Technology and Engineering Center relevant to the high-level waste Calcined Solids Storage Facilities and associated equipment, as required by DOE M 435.1-1, 'Radioactive Waste Management Manual'. Based on the evaluation documented in this report, the Calcined Solids Storage Facilities are not leaking and are structurally sound for continued service. Recommendations are provided for continued monitoring of the Calcined Solids Storage Facilities

  9. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  10. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Norman, A. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab

    2017-03-15

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores (a 25 percent increase) in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption.

  11. Integrated operations plan for the MFTF-B Mirror Fusion Test Facility. Volume II. Integrated operations plan

    Energy Technology Data Exchange (ETDEWEB)

    1981-12-01

    This document defines an integrated plan for the operation of the Lawrence Livermore National Laboratory (LLNL) Mirror Fusion Test Facility (MFTF-B). The plan fulfills and further delineates LLNL policies and provides for accomplishing the functions required by the program. This plan specifies the management, operations, maintenance, and engineering support responsibilities. It covers phasing into sustained operations as well as the sustained operations themselves. Administrative and Plant Engineering support, which are now being performed satisfactorily, are not part of this plan unless there are unique needs.

  12. Integrated operations plan for the MFTF-B Mirror Fusion Test Facility. Volume II. Integrated operations plan

    International Nuclear Information System (INIS)

    1981-12-01

    This document defines an integrated plan for the operation of the Lawrence Livermore National Laboratory (LLNL) Mirror Fusion Test Facility (MFTF-B). The plan fulfills and further delineates LLNL policies and provides for accomplishing the functions required by the program. This plan specifies the management, operations, maintenance, and engineering support responsibilities. It covers phasing into sustained operations as well as the sustained operations themselves. Administrative and Plant Engineering support, which are now being performed satisfactorily, are not part of this plan unless there are unique needs

  13. A personal computer code for seismic evaluations of nuclear power plants facilities

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulos, A.J.; Graves, H.

    1990-01-01

    The program CARES (Computer Analysis for Rapid Evaluation of Structures) is an integrated computational system being developed by Brookhaven National Laboratory (BNL) for the U.S. Nuclear Regulatory Commission. It is specifically designed to be a personal computer (PC) operated package which may be used to determine the validity and accuracy of analysis methodologies used for structural safety evaluations of nuclear power plants. CARES is structured in a modular format. Each module performs a specific type of analysis, i.e., static or dynamic, linear or nonlinear, etc. This paper describes the various features which have been implemented into the Seismic Module of CARES.

  14. User guide to the SRS data logging facility

    International Nuclear Information System (INIS)

    Tyson, B.E.

    1979-02-01

    The state of the SRS is recorded every two minutes, thus providing a detailed History of its parameters. Recording of History is done via the SRS Computer Network. This consists of a Master Computer, an Interdata 7/32, and three Minicomputers, Interdata 7/16s. Each of the Minicomputers controls one of the accelerators, Linac, Booster and Storage Ring. The Master Computer is connected to the Central Computer, an IBM 370/165, for jobs where greater computing power and storage are required. The Master Computer has a total of 20 Megabytes of fixed and movable disc space but only about 5 Megabytes are available for History storage. The Minicomputers have no storage facilities. The user guide is set out as follows: History filing system, History storage on the Master Computer, transfer of the History to the Central Computer, transferring History to tapes, job integrity, the SRS tape catalogue system. (author)

  15. Development of multimedia computer-based training for VXI integrated fuel monitors

    International Nuclear Information System (INIS)

    Keeffe, R.; Ellacott, T.; Truong, Q.S.

    1999-01-01

    The Canadian Safeguards Support Program has developed the VXI Integrated Fuel Monitor (VFIM) which is based on the international VXI instrument bus standard. This equipment is a generic radiation monitor which can be used in an integrated mode where several detection systems can be connected to a common system where information is collected, displayed, and analyzed via a virtual control panel with the aid of computers, trackball and computer monitor. The equipment can also be used in an autonomous mode as a portable radiation monitor with a very low power consumption. The equipment has been described at previous international symposia. Integration of several monitoring systems (bundle counter, core discharge monitor, and yes/no monitor) has been carried out at Wolsong 2. Performance results from one of the monitoring systems which was installed at CANDU nuclear stations are discussed in a companion paper at this symposium. This paper describes the development of an effective multimedia computer-based training package for the primary users of the equipment; namely IAEA inspectors and technicians. (author)

  16. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand the role of computation in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, computer science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  17. Complexity estimates based on integral transforms induced by computational units

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Roč. 33, September (2012), s. 160-167 ISSN 0893-6080 R&D Projects: GA ČR GAP202/11/1368 Institutional research plan: CEZ:AV0Z10300504 Institutional support: RVO:67985807 Keywords : neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units Subject RIV: IN - Informatics, Computer Science Impact factor: 1.927, year: 2012

  18. Integrating Computer-Assisted Language Learning in Saudi Schools: A Change Model

    Science.gov (United States)

    Alresheed, Saleh; Leask, Marilyn; Raiker, Andrea

    2015-01-01

    Computer-assisted language learning (CALL) technology and pedagogy have gained recognition globally for their success in supporting second language acquisition (SLA). In Saudi Arabia, the government aims to provide most educational institutions with computers and networking for integrating CALL into classrooms. However, the recognition of CALL's…

  19. MPL - A program for computations with iterated integrals on moduli spaces of curves of genus zero

    Science.gov (United States)

    Bogner, Christian

    2016-06-01

    We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.
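    For readers unfamiliar with the objects involved, a multiple polylogarithm can be evaluated numerically as a truncated nested sum, as in the Python sketch below. The sum convention and the cutoff are assumptions for the example; brute-force summation converges slowly near |z| = 1 and is no substitute for the analytic machinery MPL provides.

        def multi_polylog(s, z, nmax=20000):
            # Truncated nested sum for Li_{s1,...,sk}(z) =
            #   sum over n1 > n2 > ... > nk >= 1 of z^n1 / (n1^s1 ... nk^sk).
            tail = [1.0] * (nmax + 1)      # innermost level: empty product
            for sj in reversed(s[1:]):     # accumulate tails for s_k ... s_2
                new = [0.0] * (nmax + 1)
                for m in range(2, nmax + 1):
                    new[m] = new[m - 1] + tail[m - 1] / (m - 1) ** sj
                tail = new
            return sum(z ** n / n ** s[0] * tail[n] for n in range(1, nmax + 1))

        print(multi_polylog((2, 1), 1.0))  # zeta(2,1) = zeta(3) ~ 1.2020569
        print(multi_polylog((2,), 0.5))    # ordinary dilogarithm Li_2(1/2)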

  20. Consistent Posttest Calculations for LOCA Scenarios in LOBI Integral Facility

    Directory of Open Access Journals (Sweden)

    F. Reventós

    2012-01-01

    Full Text Available Integral test facilities (ITFs) are one of the main tools for the validation of best-estimate thermal-hydraulic system codes. The experimental data are also of great value when compared to the experiment-scaled conditions in a full NPP. The LOBI was a single plus a triple-loop (simulated by one loop) test facility, electrically heated to simulate a 1300 MWe PWR. The scaling factor was 712 for the core power, volume, and mass flow. The primary and secondary sides contained all main active elements. Tests were performed for the characterization of phenomenologies relevant to large and small break LOCAs and special transients in PWRs. The paper presents the results of three posttest calculations of LOBI experiments. The selected experiments are BL-30, BL-44, and A1-84. They are LOCA scenarios of different break sizes and with different availability of safety injection components. The goal of the analysis is to improve the knowledge of the phenomena that occurred in the facility, in order to use it in further studies related to qualifying nodalizations of actual plants or to establishing accuracy databases for uncertainty methodologies. An example of a procedure for implementing changes in a common nodalization, valid for simulating tests performed in a specific ITF, is presented along with its confirmation based on posttest results.

  1. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-05-01

    A proposal has been made at LBL to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multilevel, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data through typical transformations and correlations in under 30 s. The throughput for such a facility, for five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600. 3 figures

  2. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

    A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600

  3. Application of personal computer to development of entrance management system for radiating facilities

    International Nuclear Information System (INIS)

    Suzuki, Shogo; Hirai, Shouji

    1989-01-01

    The report describes a system for managing the entrance and exit of personnel at radiation facilities. A personal computer is applied to its development. The major features of the system are outlined first. The computer is connected to the gate and to two magnetic card readers provided at the gate. The gate, which is installed at the entrance to a room under control, opens only for those who have a valid card. The entrance-exit management program developed is described next. The following three files are used: an ID master file (a random file of the magnetic card number, name, qualification, etc., of each card carrier), an entrance-exit management file (a random file of the time of entrance/exit, etc., updated every day), and an entrance-exit record file (a sequential file of card number, name, date, etc.), which are stored on floppy disks. A display is provided to show various lists, including a list of workers currently in the room and a list of workers who left the room at earlier times of the day. This system is useful for the entrance management of a relatively small facility. Though low in cost, it enables effective personnel management with only a few operators. (N.K.)
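    The gate logic implied by the three files can be sketched in Python. The card numbers and field names are hypothetical, and in-memory structures stand in for the floppy-disk files described above.

        from datetime import datetime

        id_master = {"0042": {"name": "Suzuki", "qualified": True}}
        inside = {}     # card number -> time of entrance
        event_log = []  # sequential record of all gate events

        def swipe(card_no):
            # Open the gate only for valid cards; log every event.
            holder = id_master.get(card_no)
            now = datetime.now().isoformat(timespec="seconds")
            if holder is None or not holder["qualified"]:
                event_log.append((now, card_no, "REJECTED"))
                return False
            if card_no in inside:        # card holder is leaving
                inside.pop(card_no)
                event_log.append((now, card_no, "EXIT"))
            else:                        # card holder is entering
                inside[card_no] = now
                event_log.append((now, card_no, "ENTRY"))
            return True                  # gate opens

        swipe("0042"); swipe("9999")
        print("currently inside:", list(inside))
        print(event_log)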

  4. Development of an integrated facility for processing transuranium solid wastes at the Savannah River Plant

    International Nuclear Information System (INIS)

    Boersma, M.D.; Hootman, H.E.; Permar, P.H.

    1978-01-01

    An integrated facility is being designed for processing solid wastes contaminated with long-lived alpha-emitting (TRU) nuclides; this waste has been stored retrievably at the Savannah River Plant since 1965. The stored waste, having a volume of 10⁴ m³ and containing 3×10⁵ Ci of transuranics, consists of both mixed combustible trash and failed and obsolete equipment primarily from transuranic production and associated laboratory operations. The facility for processing solid transuranic waste will consist of five processing modules: 1) unpackaging, sorting, and assaying; 2) treatment of combustibles by controlled-air incineration; 3) size reduction of noncombustibles by plasma-arc cutting followed by decontamination by electropolishing; 4) fixation of the processed waste in cement; and 5) packaging for shipment to a federal repository. The facility is projected for construction in the mid-1980s. Pilot facilities, sized to manage currently generated wastes, will also demonstrate the key process steps of incineration of combustibles and size reduction/decontamination of noncombustibles; these facilities are projected for 1980-81. Development programs leading to these extensive new facilities are described.

  5. Development of an integrated facility for processing TRU solid wastes at the Savannah River Plant

    International Nuclear Information System (INIS)

    Boersma, M.D.; Hootman, H.E.; Permar, P.H.

    1977-01-01

    An integrated facility is being designed for processing solid wastes contaminated with long-lived alpha-emitting (TRU) nuclides; this waste has been stored retrievably at the Savannah River Plant since 1965. The stored waste, having a volume of 10⁴ m³ and containing 3×10⁵ Ci of transuranics, consists of both mixed combustible trash and failed and obsolete equipment primarily from transuranic production and associated laboratory operations. The facility for processing solid transuranic waste will consist of five processing modules: (1) unpackaging, sorting, and assaying; (2) treatment of combustibles by controlled-air incineration; (3) size reduction of noncombustibles by plasma-arc cutting followed by decontamination by electropolishing; (4) fixation of the processed waste in cement; and (5) packaging for shipment to a federal repository. The facility is projected for construction in the mid-1980s. Pilot facilities, sized to manage currently generated wastes, will also demonstrate the key process steps of incineration of combustibles and size reduction/decontamination of noncombustibles; these facilities are projected for 1980-81. Development programs leading to these extensive new facilities are described.

  6. Integrated Computational Material Engineering Technologies for Additive Manufacturing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — QuesTek Innovations, a pioneer in Integrated Computational Materials Engineering (ICME) and a Tibbetts Award recipient, is teaming with University of Pittsburgh,...

  7. Physics detector simulation facility system software description

    International Nuclear Information System (INIS)

    Allen, J.; Chang, C.; Estep, P.; Huang, J.; Liu, J.; Marquez, M.; Mestad, S.; Pan, J.; Traversat, B.

    1991-12-01

    Large and costly detectors will be constructed during the next few years to study the interactions produced by the SSC. Efficient, cost-effective designs for these detectors will require careful thought and planning. Because it is not possible to fully test a proposed design in a scaled-down version, the adequacy of a proposed design will be determined by a detailed computer model of the detectors. Physics and detector simulations will be performed on the computer model using the high-powered computing systems at the Physics Detector Simulation Facility (PDSF). The SSCL has particular computing requirements for high-energy physics (HEP) Monte Carlo calculations for the simulation of SSCL physics and detectors. The numerical calculations to be performed in each simulation are lengthy and detailed; they could require many months per run on a VAX 11/780 computer and may produce several gigabytes of data per run. Consequently, a distributed computing environment of several networked high-speed computing engines is envisioned to meet these needs. These networked computers will form the basis of a centralized facility for SSCL physics and detector simulation work. Our computer planning groups have determined that the most efficient, cost-effective way to provide these high-performance computing resources at this time is with RISC-based UNIX workstations. The modeling and simulation application software that will run on the computing system is usually written by physicists in the FORTRAN language and may need thousands of hours of supercomputing time. The system software is the "glue" which integrates the distributed workstations and allows them to be managed as a single entity. This report addresses the computing strategy for the SSC.

  8. Computer integrated manufacturing in the chemical industry : Theory & practice

    NARCIS (Netherlands)

    Ashayeri, J.; Teelen, A.; Selen, W.J.

    1995-01-01

    This paper addresses the possibilities of implementing Computer Integrated Manufacturing in the process industry, and the chemical industry in particular. After presenting some distinct differences of the process industry in relation to discrete manufacturing, a number of focal points are discussed.

  9. Experiments on injection performance of SMART ECC facility using SWAT

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Young Il; Cho, Seok; Ko, Yung Joo; Min, Kyoung Ho; Shin, Yong Cheol; Kwon, Tae Soon; Yi, Sung Jae; Lee, Won Jae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    SMART (System-integrated Modular Advanced ReacTor), an advanced integral PWR, is under development by KAERI. Such an integral PWR excludes the large-size piping of the primary system of a conventional PWR and incorporates the SGs into the RPV, which means no LBLOCA can occur in SMART. Therefore, the SBLOCA is considered the major DBA (Design Basis Accident) in SMART, and it is mainly analyzed using the TASS/SMR computer code. The TASS/SMR code should be validated using experimental data from both Integral Effect Test and Separate Effect Test facilities. To investigate the injection performance of the ECC system, one SET facility, named SWAT (SMART ECC Water Asymmetric Two-phase choking test facility), has been constructed at KAERI. SWAT simulates the geometric configurations of the SG-side upper downcomer annulus and the ECCSs of SMART. It is designed based on the modified linear scaling method with a scaling ratio of 1/5, to preserve geometrical similarity and minimize gravitational distortion. The purpose of the SWAT tests is to investigate the safety injection performance, such as the ECC bypass in the downcomer and the penetration rate in the core during an SBLOCA, and hence to produce experimental data to validate the prediction capability of the safety analysis code TASS/SMR.

  10. Chemical Entity Semantic Specification: Knowledge representation for efficient semantic cheminformatics and facile data integration

    Science.gov (United States)

    2011-01-01

    Background Over the past several centuries, chemistry has permeated virtually every facet of human lifestyle, enriching fields as diverse as medicine, agriculture, manufacturing, warfare, and electronics, among numerous others. Unfortunately, application-specific, incompatible chemical information formats and representation strategies have emerged as a result of such diverse adoption of chemistry. Although a number of efforts have been dedicated to unifying the computational representation of chemical information, disparities between the various chemical databases still persist and stand in the way of cross-domain, interdisciplinary investigations. Through a common syntax and formal semantics, Semantic Web technology offers the ability to accurately represent, integrate, reason about and query across diverse chemical information. Results Here we specify and implement the Chemical Entity Semantic Specification (CHESS) for the representation of polyatomic chemical entities, their substructures, bonds, atoms, and reactions using Semantic Web technologies. CHESS provides means to capture aspects of their corresponding chemical descriptors, connectivity, functional composition, and geometric structure while specifying mechanisms for data provenance. We demonstrate that using our readily extensible specification, it is possible to efficiently integrate multiple disparate chemical data sources, while retaining appropriate correspondence of chemical descriptors, with very little additional effort. We demonstrate the impact of some of our representational decisions on the performance of chemically-aware knowledgebase searching and rudimentary reaction candidate selection. Finally, we provide access to the tools necessary to carry out chemical entity encoding in CHESS, along with a sample knowledgebase. Conclusions By harnessing the power of Semantic Web technologies with CHESS, it is possible to provide a means of facile cross-domain chemical knowledge integration with full...

  11. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states in which failures occur are significant elements for accurate reliability computation, a Markovian reliability assessment method is designed. Due to the drawbacks shown by the Markovian model for steady-state reliability computations and by the neural network for initial training patterns, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. For managerial implications, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is presented. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implications shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks.
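    The Markovian half of the approach can be illustrated in Python on a single repairable AGV with constant failure and repair rates. The rates below are invented for the example, and the neural-network stage of the integrated method is omitted.

        import numpy as np

        # Continuous-time Markov model of one repairable vehicle:
        # state 0 = operating, state 1 = failed.
        lam, mu = 0.01, 0.25                  # failure / repair rates per hour
        Q = np.array([[-lam,  lam],
                      [  mu,  -mu]])          # generator matrix

        # Steady-state probabilities: pi @ Q = 0 with sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(2)])
        b = np.array([0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("availability:", pi[0], " analytic:", mu / (lam + mu))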

  12. Computer programs for capital cost estimation, lifetime economic performance simulation, and computation of cost indexes for laser fusion and other advanced technology facilities

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1978-01-01

    Three FORTRAN programs, CAPITAL, VENTURE, and INDEXER, have been developed to automate computations used in assessing the economic viability of proposed or conceptual laser fusion and other advanced-technology facilities, as well as conventional projects. The types of calculations performed by these programs are, respectively, capital cost estimation, lifetime economic performance simulation, and computation of cost indexes. The codes permit these three topics to be addressed with considerable sophistication commensurate with user requirements and available data.
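    The kind of computation INDEXER automates, forming a composite cost index from component indexes and escalating a base-year estimate, can be sketched in Python. The weights, index values, and base estimate below are illustrative, not data from the programs described.

        # Composite cost index as a weighted sum of component indexes,
        # then escalation of a base-year capital estimate.
        weights = {"labor": 0.40, "materials": 0.35, "equipment": 0.25}
        index_1975 = {"labor": 100.0, "materials": 100.0, "equipment": 100.0}
        index_1978 = {"labor": 128.0, "materials": 119.0, "equipment": 112.0}

        def composite(idx):
            return sum(weights[k] * idx[k] for k in weights)

        base_estimate = 250e6  # capital cost in base-year dollars
        factor = composite(index_1978) / composite(index_1975)
        print(f"escalation factor {factor:.3f}; "
              f"escalated estimate ${base_estimate * factor / 1e6:.1f}M")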

  13. The role of dedicated data computing centers in the age of cloud computing

    Science.gov (United States)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  14. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing...

  15. Status of integration of small computers into NDE systems

    International Nuclear Information System (INIS)

    Dau, G.J.; Behravesh, M.M.

    1988-01-01

    Introduction of computers in nondestructive evaluations (NDE) has enabled data acquisition devices to provide a more thorough and complete coverage in the scanning process, and has aided human inspectors in their data analysis and decision making efforts. The price and size/weight of small computers, coupled with recent increases in processing and storage capacity, have made small personal computers (PC's) the most viable platform for NDE equipment. Several NDE systems using minicomputers and newer PC-based systems, capable of automatic data acquisition, and knowledge-based analysis of the test data, have been field tested in the nuclear power plant environment and are currently available through commercial sources. While computers have been in common use for several NDE methods during the last few years, their greatest impact, however, has been on ultrasonic testing. This paper discusses the evolution of small computers and their integration into the ultrasonic testing process

  16. Energy Systems Integration Laboratory | Energy Systems Integration Facility

    Science.gov (United States)

    Research in the Energy Systems Integration Laboratory is advancing engineering knowledge and market deployment of hydrogen technologies. Applications include microgrids, energy storage for renewables integration, and home- and station

  17. Computationally based methodology for reengineering the high-level waste planning process at SRS

    International Nuclear Information System (INIS)

    Paul, P.K.; Gregory, M.V.; Wells, M.N.

    1997-01-01

    The Savannah River Site (SRS) has started processing its legacy of 34 million gallons of high-level radioactive waste into its final disposable form. The SRS high-level waste (HLW) complex consists of 51 waste storage tanks, 3 evaporators, 6 waste treatment operations, and 2 waste disposal facilities. It is estimated that processing wastes to clean up all tanks will take 30+ yr of operation. Integrating all the highly interactive facility operations through the entire life cycle in an optimal fashion, while meeting all the budgetary, regulatory, and operational constraints and priorities, is a complex and challenging planning task. The waste complex operating plan for the entire time span is periodically published as an SRS report. A computationally based integrated methodology has been developed that has streamlined the planning process while showing how to run the operations at economically and operationally optimal conditions. The integrated computational model replaced a host of disconnected spreadsheet calculations and the analysts' trial-and-error solutions using various scenario choices. This paper presents the important features of the integrated computational methodology and highlights the parameters that are core components of the planning process

  18. Computer integration of engineering design and production: A national opportunity

    Science.gov (United States)

    1984-01-01

    The National Aeronautics and Space Administration (NASA), as a purchaser of a variety of manufactured products, including complex space vehicles and systems, clearly has a stake in the advantages of computer-integrated manufacturing (CIM). Two major NASA objectives are to launch a Manned Space Station by 1992 with a budget of $8 billion, and to be a leader in the development and application of productivity-enhancing technology. At the request of NASA, a National Research Council committee visited five companies that have been leaders in using CIM. Based on these case studies, technical, organizational, and financial issues that influence computer integration are described, guidelines for its implementation in industry are offered, and the use of CIM to manage the space station program is recommended.

  19. Grid Integration Webinars | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Presentations from NREL analysts on various topics related to grid integration, including a webinar on wind curtailment and the value of transmission under 2050 high-wind scenarios.

  20. Microwave integrated circuit mask design, using computer aided microfilm techniques

    Energy Technology Data Exchange (ETDEWEB)

    Reymond, J.M.; Batliwala, E.R.; Ajose, S.O.

    1977-01-01

    This paper examines the possibility of using a computer interfaced with a precision film C.R.T. information retrieval system to produce photomasks suitable for the production of microwave integrated circuits.

  1. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    Science.gov (United States)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like data base management system, and its data base programming language (NATURAL) are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy-to-use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.

  2. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code
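
    A minimal sketch of a lumped-parameter network solve of the kind FIRAC performs, here reduced to linearized branches with flow Q = C(p_i - p_j) and fixed boundary pressures; the topology and conductances are invented for illustration, and the real code additionally models compressible flow, fans, filters, heat transfer, and fire source terms:

```python
import numpy as np

# Nodes: 0 = supply plenum (fixed), 1, 2 = rooms, 3 = exhaust (fixed).
branches = [(0, 1, 2.0), (1, 2, 1.5), (2, 3, 2.5)]  # (i, j, conductance)
fixed = {0: 120.0, 3: 0.0}                          # boundary pressures [Pa]
free = [1, 2]

# Assemble the nodal balance sum_j C_ij (p_i - p_j) = 0 at the free nodes.
n = len(free)
A, b = np.zeros((n, n)), np.zeros(n)
idx = {node: k for k, node in enumerate(free)}
for i, j, c in branches:
    for a, other in ((i, j), (j, i)):
        if a in idx:
            A[idx[a], idx[a]] += c
            if other in idx:
                A[idx[a], idx[other]] -= c
            else:
                b[idx[a]] += c * fixed[other]

p = np.linalg.solve(A, b)
print(dict(zip(free, p)))                     # interior node pressures
print("flow 0->1:", 2.0 * (fixed[0] - p[0]))  # branch volumetric flow
```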

  3. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    Science.gov (United States)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  4. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    International Nuclear Information System (INIS)

    Limosani, Antonio; Boland, Lucien; Crosby, Sean; Huang, Joanna; Sevior, Martin; Coddington, Paul; Zhang, Shunde; Wilson, Ross

    2014-01-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  5. Potential applications of artificial intelligence in computer-based management systems for mixed waste incinerator facility operation

    International Nuclear Information System (INIS)

    Rivera, A.L.; Singh, S.P.N.; Ferrada, J.J.

    1991-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site, designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). Operation of the TSCA Incinerator is highly constrained as a result of the regulatory, institutional, technical, and resource availability requirements. This presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. This paper describes mixed waste incinerator facility performance-oriented tasks that could be assisted by Artificial Intelligence (AI) and the requirements for AI tools that would implement these algorithms in a computer-based system. 4 figs., 1 tab

  6. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  7. Integration of active pauses and pattern of muscular activity during computer work.

    Science.gov (United States)

    St-Onge, Nancy; Samani, Afshin; Madeleine, Pascal

    2017-09-01

    Submaximal isometric muscle contractions have been reported to increase variability of muscle activation during computer work; however, other types of active contractions may be more beneficial. Our objective was to determine which type of active pause vs. rest is more efficient in changing muscle activity pattern during a computer task. Asymptomatic regular computer users performed a standardised 20-min computer task four times, integrating a different type of pause: sub-maximal isometric contraction, dynamic contraction, postural exercise and rest. Surface electromyographic (SEMG) activity was recorded bilaterally from five neck/shoulder muscles. Root-mean-square decreased with isometric pauses in the cervical paraspinals, upper trapezius and middle trapezius, whereas it increased with rest. Variability in the pattern of muscular activity was not affected by any type of pause. Overall, no detrimental effects on the level of SEMG during active pauses were found suggesting that they could be implemented without a cost on activation level or variability. Practitioner Summary: We aimed to determine which type of active pause vs. rest is best in changing muscle activity pattern during a computer task. Asymptomatic computer users performed a standardised computer task integrating different types of pauses. Muscle activation decreased with isometric pauses in neck/shoulder muscles, suggesting their implementation during computer work.
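
    The root-mean-square amplitude reported in such studies is straightforward to compute over consecutive windows. A minimal sketch with a synthetic signal; the sampling rate and window length are assumptions, as the abstract does not give the study's processing parameters:

```python
import numpy as np

# Windowed RMS of a surface EMG signal.
fs = 1000                       # Hz, assumed sampling rate
win = int(0.1 * fs)             # 100 ms windows, assumed

rng = np.random.default_rng(0)
semg = rng.normal(0, 50e-6, 60 * fs)   # synthetic 60 s signal [V]

n = len(semg) // win
rms = np.sqrt((semg[: n * win].reshape(n, win) ** 2).mean(axis=1))
print(f"mean RMS amplitude: {rms.mean() * 1e6:.1f} uV")
```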

  8. Integrating cut-and-solve and semi-Lagrangean based dual ascent for the single-source capacitated facility location problem

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth

    This paper describes how the cut-and-solve framework and semi-Lagrangean based dual ascent algorithms can be integrated in two natural ways in order to solve the single source capacitated facility location problem. The first uses the cut-and-solve framework both as a heuristic and as an exact solver for the semi-Lagrangean subproblems. The other uses a semi-Lagrangean based dual ascent algorithm to solve the sparse problems arising in the cut-and-solve algorithm. Furthermore, we developed a simple way to separate a special type of cutting planes from what we denote the effective capacity polytope with generalized upper bounds. From our computational study, we show that the semi-Lagrangean relaxation approach has its merits when the instances are tightly constrained with regards to the capacity of the system, but that it is very hard to compete with a standalone implementation of the cut-and-solve framework.

  9. MEASURE: An integrated data-analysis and model identification facility

    Science.gov (United States)

    Singh, Jaidip; Iyer, Ravi K.

    1990-01-01

    The first phase of the development of MEASURE, an integrated data analysis and model identification facility is described. The facility takes system activity data as input and produces as output representative behavioral models of the system in near real time. In addition a wide range of statistical characteristics of the measured system are also available. The usage of the system is illustrated on data collected via software instrumentation of a network of SUN workstations at the University of Illinois. Initially, statistical clustering is used to identify high density regions of resource-usage in a given environment. The identified regions form the states for building a state-transition model to evaluate system and program performance in real time. The model is then solved to obtain useful parameters such as the response-time distribution and the mean waiting time in each state. A graphical interface which displays the identified models and their characteristics (with real time updates) was also developed. The results provide an understanding of the resource-usage in the system under various workload conditions. This work is targeted for a testbed of UNIX workstations with the initial phase ported to SUN workstations on the NASA Ames Research Center Advanced Automation Testbed.
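
    A minimal sketch of the MEASURE workflow on synthetic one-dimensional resource-usage data: cluster samples into states, estimate an empirical state-transition matrix, and derive mean sojourn times. The data and the choice of three states are assumptions for illustration:

```python
import numpy as np

# Cluster usage samples into states, then build a transition model.
rng = np.random.default_rng(1)
usage = np.concatenate([rng.normal(m, 0.05, 400) for m in (0.1, 0.5, 0.9)])
rng.shuffle(usage)

# Crude 1-D k-means with k = 3 states.
centers = np.array([0.2, 0.5, 0.8])
for _ in range(20):
    states = np.abs(usage[:, None] - centers).argmin(axis=1)
    centers = np.array([usage[states == k].mean() for k in range(3)])

# Empirical transition matrix from the observed state sequence.
P = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Mean waiting time in state k (in samples): 1 / (1 - P[k, k]).
print("mean sojourn times:", 1.0 / (1.0 - np.diag(P)))
```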

  10. Experience of developing an integrated nondestructive assay system

    International Nuclear Information System (INIS)

    Hsue, S.T.; Baker, M.P.

    1987-01-01

    A consortium of laboratories is collaborating with the Savannah River Plant to develop an integrated system of state-of-the-art nondestructive assay (NDA) instrumentation to provide nuclear materials accounting and process control information for a new plutonium scrap recovery facility. Individual instruments report assay results to an instrument control computer (ICC); the ICC, in turn, is part of a larger computer network that includes computers that perform process control and materials accounting functions. The design of the integrated NDA measurement system is shown. Each NDA instrument that is part of the integrated system is microcomputer-based and thus is capable of stand-alone operation if the central computer is out of service. Certain hardware features, such as microcomputers, pulse processing modules, and multichannel analyzers, are standardized throughout the system. Another standard feature is the communication between individual NDA instruments and the ICC. The most unique phase of the project is the integral staging. The primary purpose of this phase is to check the communications between various computers and to verify the ICC software during the operation of the NDA instruments. Implementing this integrated system in a process environment represents a major step in realizing the full capabilities of modern NDA instrumentation

  11. Integrated controls

    International Nuclear Information System (INIS)

    Hollaway, F.W.

    1985-01-01

    During 1984, all portions of the Nova control system that were necessary for the support of laser activation and completion of the Nova project were finished and placed in service on time. The Nova control system has been unique in providing, on schedule, the capabilities required in the central control room and in various local control areas throughout the facility. The ambitious goal of deploying this system early enough to use it as an aid in the activation of the laser was accomplished; thus the control system made a major contribution to the completion of Nova activation on schedule. Support and enhancement activities continued during the year on the VAX computer systems, central control room, operator consoles and displays, Novanet data communications network, system-level software for both the VAX and LSI-11 computers, Praxis control system computer language, software management tools, and the development system, which includes office terminals. Computational support was also supplied for a wide variety of test fixtures required by the optical and mechanical subsystems. Significant new advancements were made in four areas in integrated controls this year: the integration software (which includes the shot scheduler), the Praxis language, software quality assurance audit, and software development and data handling. A description of the accomplishments in each of these areas follows

  12. Integrated evolutionary computation neural network quality controller for automated systems

    Energy Technology Data Exchange (ETDEWEB)

    Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to the identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out, and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality, in a dynamic, multivariable system, in real-time.
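
    A minimal sketch of the evolutionary-computation idea: a simple (mu + lambda) evolution strategy tuning two controller gains against a simulated first-order process. The plant, fitness function, and strategy parameters are illustrative, not the authors' system:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(gains):
    """Negative squared tracking error of a PI loop on a first-order plant."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(200):                 # discrete-time simulation
        e = 1.0 - y                      # setpoint = 1.0
        integ += e * 0.05
        u = kp * e + ki * integ
        y += 0.05 * (-y + u)             # first-order plant, dt = 0.05
        cost += e * e
    return -cost                         # higher is better

pop = rng.uniform(0.0, 5.0, (20, 2))     # initial gain candidates
for gen in range(50):
    children = pop + rng.normal(0, 0.2, pop.shape)   # Gaussian mutation
    both = np.vstack([pop, children])
    scores = np.array([fitness(g) for g in both])
    pop = both[np.argsort(scores)[-20:]]             # survivor selection

print("best gains (kp, ki):", pop[-1])
```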

  13. A framework for different levels of integration of computational models into web-based virtual patients.

    Science.gov (United States)

    Kononowicz, Andrzej A; Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil

    2014-01-23

    Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients' interactivity by enriching them with computational models of physiological and pathological processes. The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of changing framework variables on altering perceptions of integration. The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element included three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; advanced, including dynamic solution of the model. The third element is the

  14. The development of functional requirement for integrated test facility

    International Nuclear Information System (INIS)

    Sim, B.S.; Oh, I.S.; Cha, K.H.; Lee, H.C.

    1994-01-01

    An Integrated Test Facility (ITF) is a human factors experimental environment comprised of a nuclear power plant function simulator, man-machine interfaces (MMI), human performance recording systems, and signal control and data analysis systems. In this study, we are going to describe how the functional requirements are developed by identification of both the characteristics of generic advanced control rooms and the research topics of world-wide research interest in human factors community. The functional requirements of user interface developed in this paper together with those of the other elements will be used for the design and implementation of the ITF which will serve as the basis for experimental research on a line of human factors topics. (author). 15 refs, 1 fig

  15. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX

    International Nuclear Information System (INIS)

    Gohar, Y.; Zhong, Z.; Talamo, A.

    2009-01-01

    Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ∼375 kW including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during the operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of the Monte Carlo computer codes, the high speed capability of the computer processors, and the parallel computation techniques have made it possible to perform three-dimensional detailed burnup simulations. A full detailed three-dimensional geometrical model is used for the burnup simulations with continuous energy nuclear data libraries for the transport calculations and 63-group or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the electrons and the
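
    The depletion step at the heart of a burnup simulation solves the Bateman equations dN/dt = AN over an operating period. A minimal sketch with a hypothetical two-nuclide chain and an assumed one-group flux; production codes such as MCB track hundreds of nuclides with space- and energy-dependent cross sections:

```python
import numpy as np
from scipy.linalg import expm

# One depletion step: N(t) = expm(A t) N(0).
sigma_f = 585e-24            # U-235 fission cross section [cm^2]
phi = 1e13                   # assumed one-group flux [n/cm^2/s]

# N = [N_U235, N_FP]; fission removes U-235 and creates ~2 products.
A = np.array([[-sigma_f * phi, 0.0],
              [2 * sigma_f * phi, 0.0]])

N0 = np.array([1.0e21, 0.0])          # initial densities [atoms/cm^3]
t = 180 * 24 * 3600                   # one 180-day operating period [s]
N = expm(A * t) @ N0
print(f"U-235 burned: {100 * (1 - N[0] / N0[0]):.2f} %")
```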

  16. Applying Integrated Computer Assisted Media (ICAM) in Teaching Vocabulary

    Directory of Open Access Journals (Sweden)

    Opick Dwi Indah

    2015-02-01

    The objective of this research was to find out whether the use of integrated computer assisted media (ICAM) is effective in improving the vocabulary achievement of the second semester students of Cokroaminoto Palopo University. The population of this research was the second semester students of the English department of Cokroaminoto Palopo University in academic year 2013/2014. The samples of this research were 60 students placed into two groups, experimental and control, with each group consisting of 30 students. This research used the cluster random sampling technique. The research data was collected by applying a vocabulary test and was analyzed by using descriptive and inferential statistics. The result of this research was that integrated computer assisted media (ICAM) can improve the vocabulary achievement of the students of the English department of Cokroaminoto Palopo University. It can be concluded that the use of ICAM in teaching vocabulary is effective in improving the students' vocabulary achievement.

  17. Computer-integrated design and information management for nuclear projects

    International Nuclear Information System (INIS)

    Gonzalez, A.; Martin-Guirado, L.; Nebrera, F.

    1987-01-01

    Over the past seven years, Empresarios Agrupados has been developing a comprehensive, computer-integrated system to perform the majority of the engineering, design, procurement and construction management activities in nuclear, fossil-fired as well as hydro power plant projects. This system, which is already in a production environment, comprises a large number of computer programs and data bases designed using a modular approach. Each software module, dedicated to meeting the needs of a particular design group or project discipline, facilitates the performance of functional tasks characteristic of the power plant engineering process

  18. Geology of the Integrated Disposal Facility Trench

    International Nuclear Information System (INIS)

    Reidel, Steve P.; Fecht, Karl R.

    2005-01-01

    This report describes the geology of the Integrated Disposal Facility (IDF) trench. The stratigraphy consists of some of the youngest sediments of the Missoula floods (younger than 770 ka). The lithology is dominated by sands with minor silts and gravels that are largely unconsolidated. The stratigraphy can be subdivided into five geologic units that can be mapped throughout the trench. Four of the units were deposited by the Missoula floods and the youngest consists of windblown sand and silt. The sediment has little moisture, consistent with that observed in the characterization boreholes. The sedimentary layers are flat lying and there are no faults or folds present. Two clastic dikes were encountered, one along the west wall and one that can be traced from the north to the south wall. The north-south clastic dike nearly bifurcates the trench, but the west wall clastic dike cannot be traced very far east into the trench. The clastic dikes consist mainly of sand with clay-lined walls. The sediment in the dikes is compacted to partly cemented and is more resistant than the layered sediments

  19. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand as limits and caps on usage are imposed. Our trial workflows allow us t...

  20. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean the time required to deploy, integrate and stabilize large-scale systems may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  1. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    International Nuclear Information System (INIS)

    Lavoie-Courchesne, S; Chouinard-Decorte, F; Doyon, J; Bellec, P; Rioux, P; Sherif, T; Rousseau, M-E; Das, S; Adalat, R; Evans, A C; Craddock, C; Margulies, D; Chu, C; Lyttelton, O

    2012-01-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  2. Integral computer-generated hologram via a modified Gerchberg-Saxton algorithm

    International Nuclear Information System (INIS)

    Wu, Pei-Jung; Lin, Bor-Shyh; Chen, Chien-Yue; Huang, Guan-Syun; Deng, Qing-Long; Chang, Hsuan T

    2015-01-01

    An integral computer-generated hologram, which modulates the phase function of an object based on a modified Gerchberg–Saxton algorithm and compiles a digital cryptographic diagram with phase synthesis, is proposed in this study. When the diagram completes position demultiplexing decipherment, multi-angle elemental images can be reconstructed. Furthermore, an integral CGH with a depth of 225 mm and a visual angle of ±11° is projected through the lens array. (paper)
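
    For reference, a minimal sketch of the classical Gerchberg-Saxton loop that such work modifies: alternate between object and hologram planes, enforcing the known amplitude in each while retaining the evolving phase. The target pattern here is a synthetic square, an assumption for illustration:

```python
import numpy as np

# Classical Gerchberg-Saxton phase retrieval for a phase-only CGH.
rng = np.random.default_rng(3)
N = 128
target = np.zeros((N, N)); target[48:80, 48:80] = 1.0   # desired amplitude
source = np.ones((N, N))                                # uniform illumination

field = source * np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))
for _ in range(100):
    far = np.fft.fft2(field)
    far = target * np.exp(1j * np.angle(far))      # impose target amplitude
    field = np.fft.ifft2(far)
    field = source * np.exp(1j * np.angle(field))  # impose source amplitude

hologram_phase = np.angle(field)                   # the CGH phase function
recon = np.abs(np.fft.fft2(source * np.exp(1j * hologram_phase)))
print("correlation with target:",
      np.corrcoef(recon.ravel(), target.ravel())[0, 1])
```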

  3. Integrating Free Computer Software in Chemistry and Biochemistry Instruction: An International Collaboration

    Science.gov (United States)

    Cedeno, David L.; Jones, Marjorie A.; Friesen, Jon A.; Wirtz, Mark W.; Rios, Luz Amalia; Ocampo, Gonzalo Taborda

    2010-01-01

    At the Universidad de Caldas, Manizales, Colombia, we used their new computer facilities to introduce chemistry graduate students to biochemical database mining and quantum chemistry calculations using freeware. These hands-on workshops allowed the students a strong introduction to easily accessible software and how to use this software to begin…

  4. Integrating computation into the undergraduate curriculum: A vision and guidelines for future developments

    Science.gov (United States)

    Chonacky, Norman; Winch, David

    2008-04-01

    There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.

  5. Software for computing and annotating genomic ranges.

    Science.gov (United States)

    Lawrence, Michael; Huber, Wolfgang; Pagès, Hervé; Aboyoun, Patrick; Carlson, Marc; Gentleman, Robert; Morgan, Martin T; Carey, Vincent J

    2013-01-01

    We describe Bioconductor infrastructure for representing and computing on annotated genomic ranges and integrating genomic data with the statistical computing features of R and its extensions. At the core of the infrastructure are three packages: IRanges, GenomicRanges, and GenomicFeatures. These packages provide scalable data structures for representing annotated ranges on the genome, with special support for transcript structures, read alignments and coverage vectors. Computational facilities include efficient algorithms for overlap and nearest neighbor detection, coverage calculation and other range operations. This infrastructure directly supports more than 80 other Bioconductor packages, including those for sequence analysis, differential expression analysis and visualization.
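
    The overlap queries these packages provide can be illustrated with a simplified stand-in; Bioconductor's implementations are in R/C and use interval trees, whereas this Python sketch uses sorted starts plus a linear filter over illustrative ranges:

```python
import bisect

# Find annotated ranges overlapping a query interval.
ranges = sorted([(100, 200, "geneA"), (150, 400, "geneB"),
                 (500, 650, "geneC"), (700, 900, "geneD")])
starts = [r[0] for r in ranges]

def overlaps(q_start, q_end):
    """Return all ranges [start, end] intersecting [q_start, q_end]."""
    hi = bisect.bisect_right(starts, q_end)  # ranges starting after q_end cannot overlap
    return [r for r in ranges[:hi] if r[1] >= q_start]

print(overlaps(180, 520))   # -> geneA, geneB, geneC
```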

  6. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicles preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.

  7. Solving a mathematical model integrating unequal-area facilities layout and part scheduling in a cellular manufacturing system by a genetic algorithm.

    Science.gov (United States)

    Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi

    2016-01-01

    In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS) considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features including part due date, material handling time, operation sequence, processing time, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved by the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, two approaches, sequential and concurrent, are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. As a consequence, computational results of this study indicate that the best solutions found by the GA are better than the solutions found by B&B in much less time for both sequential and concurrent approaches. Moreover, the comparisons between the objective function values (OFVs) obtained by sequential and concurrent approaches demonstrate that the OFV improvement is on average around 17% by GA and 14% by B&B.
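
    A minimal sketch of the genetic-algorithm component: a permutation GA over the part-processing order with order crossover and swap mutation, minimizing a completion-time-plus-tardiness cost on one cell. The processing times, due dates, weights, and GA settings are illustrative, and the paper's chromosome additionally encodes the machine layout:

```python
import random

random.seed(4)
proc = [4, 7, 2, 5, 6, 3]          # processing times of six parts
due = [10, 18, 6, 20, 15, 9]       # due dates

def cost(seq):
    t, tard = 0, 0
    for j in seq:
        t += proc[j]
        tard += max(0, t - due[j])
    return t + 2 * tard            # completion time + weighted tardiness

def crossover(a, b):               # order crossover (OX)
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

pop = [random.sample(range(6), 6) for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    elite = pop[:10]                            # truncation selection
    children = [crossover(*random.sample(elite, 2)) for _ in range(20)]
    for c in children:
        if random.random() < 0.3:               # swap mutation
            i, j = random.sample(range(6), 2)
            c[i], c[j] = c[j], c[i]
    pop = elite + children

pop.sort(key=cost)
print("best sequence:", pop[0], "cost:", cost(pop[0]))
```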

  8. Determinants of facility readiness for integration of family planning with HIV testing and counseling services: evidence from the Tanzania service provision assessment survey, 2014-2015.

    Science.gov (United States)

    Bintabara, Deogratius; Nakamura, Keiko; Seino, Kaoruko

    2017-12-22

    Global policy reports, national frameworks, and programmatic tools and guidance emphasize the integration of family planning and HIV testing and counseling services to ensure universal access to reproductive health care and HIV prevention. However, the status of integration between these two services in Tanzanian health facilities is unclear. This study examined determinants of facility readiness for integration of family planning with HIV testing and counseling services in Tanzania. Data from the 2014-2015 Tanzania Service Provision Assessment Survey were analyzed. Facilities were considered ready for integration of family planning with HIV testing and counseling services if they scored ≥ 50% on both family planning and HIV testing and counseling service readiness indices as identified by the World Health Organization. All analyses were adjusted for clustering effects, and estimates were weighted to correct for non-responses and disproportionate sampling. Descriptive, bivariate, and multivariate logistic regression analyses were performed. A total of 1188 health facilities were included in the study. Of all of the health facilities, 915 (77%) reported offering both family planning and HIV testing and counseling services, while only 536 (45%) were considered ready to integrate these two services. Significant determinants of facility readiness for integrating these two services were being government owned [AOR = 3.2; 95%CI, 1.9-5.6], having routine management meetings [AOR = 1.9; 95%CI, 1.1-3.3], availability of guidelines [AOR = 3.8; 95%CI, 2.4-5.8], in-service training of staff [AOR = 2.6; 95%CI, 1.3-5.2], and availability of laboratories for HIV testing [AOR = 17.1; 95%CI, 8.2-35.6]. The proportion of facility readiness for the integration of family planning with HIV testing and counseling in Tanzania is unsatisfactory. The Ministry of Health should distribute and ensure constant availability of guidelines, availability of rapid diagnostic

  9. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

    Methods of modeling complex power networks with short circuits in the networks are described. The methods are implemented in integrated computation programs for short circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited on-line memory capacity (M equals 4030 for the computer).
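
    The core computation in such programs can be summarized as building the bus admittance matrix, inverting it to obtain Zbus, and reading the three-phase fault current at a bus from its Thevenin impedance. A minimal sketch with an invented three-bus per-unit network:

```python
import numpy as np

# Short-circuit current from the Zbus diagonal.
lines = [(0, 1, 0.10j), (1, 2, 0.20j), (0, 2, 0.25j)]  # (i, j, z_pu)
gen_z = {0: 0.05j}                                     # source impedance at bus 0

n = 3
Y = np.zeros((n, n), dtype=complex)
for i, j, z in lines:
    y = 1 / z
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y
for bus, z in gen_z.items():
    Y[bus, bus] += 1 / z            # ground tie makes Y invertible

Z = np.linalg.inv(Y)                # Zbus
fault_bus = 2
i_fault = 1.0 / Z[fault_bus, fault_bus]   # prefault voltage 1.0 pu
print(f"three-phase fault current at bus {fault_bus}: {abs(i_fault):.2f} pu")
```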

  10. Automated entry control system for nuclear facilities

    International Nuclear Information System (INIS)

    Ream, W.K.; Espinoza, J.

    1985-01-01

    An entry control system to automatically control access to nuclear facilities is described. The design uses a centrally located console, integrated into the regular security system, to monitor the computer-controlled passage into and out of sensitive areas. Four types of entry control points are used: an unmanned enclosed portal with metal and SNM detectors for contraband detection with positive personnel identification, a bypass portal for contraband search after a contraband alarm in a regular portal also with positive personnel identification, a single door entry point with positive personnel identification, and a single door entry point with only a magnetic card-type identification. Security force action is required only as a response to an alarm. The integration of the entry control function into the security system computer is also described. The interface between the entry control system and the monitoring security personnel utilizing a color graphics display with touch screen input is emphasized. 2 refs., 7 figs

  11. Computer simulation of thermal and fluid systems for MIUS integration and subsystems test /MIST/ laboratory. [Modular Integrated Utility System

    Science.gov (United States)

    Rochelle, W. C.; Liu, D. K.; Nunnery, W. J., Jr.; Brandli, A. E.

    1975-01-01

    This paper describes the application of the SINDA (systems improved numerical differencing analyzer) computer program to simulate the operation of the NASA/JSC MIUS integration and subsystems test (MIST) laboratory. The MIST laboratory is designed to test the integration capability of the following subsystems of a modular integrated utility system (MIUS): (1) electric power generation, (2) space heating and cooling, (3) solid waste disposal, (4) potable water supply, and (5) waste water treatment. The SINDA/MIST computer model is designed to simulate the response of these subsystems to externally impressed loads. The computer model determines the amount of recovered waste heat from the prime mover exhaust, water jacket and oil/aftercooler and from the incinerator. This recovered waste heat is used in the model to heat potable water, for space heating, absorption air conditioning, waste water sterilization, and to provide for thermal storage. The details of the thermal and fluid simulation of MIST including the system configuration, modes of operation modeled, SINDA model characteristics and the results of several analyses are described.

  12. Research Facilities | Wind | NREL

    Science.gov (United States)

    NREL's state-of-the-art wind research facilities include the Structural Research Facilities, where turbine blades undergo structural testing, and computer simulation capabilities.

  13. On the computation of the Nijboer-Zernike aberration integrals at arbitrary defocus

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Braat, J.J.M.; Dirksen, P.

    2004-01-01

    We present a new computation scheme for the integral expressions describing the contributions of single aberrations to the diffraction integral in the context of an extended Nijboer-Zernike approach. Such a scheme, in the form of a power series involving the defocus parameter with coefficients given

  14. Validation of an integral conceptual model of frailty in older residents of assisted living facilities

    NARCIS (Netherlands)

    Gobbens, R.J.J.; Krans, A.; van Assen, M.A.L.M.

    2015-01-01

    Objective The aim of this cross-sectional study was to examine the validity of an integral model of the associations between life-course determinants, disease(s), frailty, and adverse outcomes in older persons who are resident in assisted living facilities. Methods Between June 2013 and May 2014

  15. Validation of an integral conceptual model of frailty in older residents of assisted living facilities

    NARCIS (Netherlands)

    Gobbens, Robbert J J; Krans, Anita; van Assen, Marcel A L M

    2015-01-01

    Objective: The aim of this cross-sectional study was to examine the validity of an integral model of the associations between life-course determinants, disease(s), frailty, and adverse outcomes in older persons who are resident in assisted living facilities. Methods: Between June 2013 and May 2014

  16. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    Science.gov (United States)

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with single output. The second one is the maximization of biogas production with single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found out for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
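
    A minimal sketch of the optimization half of the approach: particle swarm search over the input variables to maximize a predicted output, with a smooth placeholder function standing in for the trained multi-layer perceptron. The variable bounds and swarm settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def surrogate(x):   # placeholder for the trained MLP; peak at (35.0, 7.2)
    temp, ph = x[..., 0], x[..., 1]
    return -((temp - 35.0) / 5.0) ** 2 - ((ph - 7.2) / 0.8) ** 2

lo, hi = np.array([20.0, 5.0]), np.array([45.0, 9.0])  # assumed bounds
x = rng.uniform(lo, hi, (30, 2))            # particle positions
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), surrogate(x)
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, 30, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    val = surrogate(x)
    better = val > pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("optimal inputs (temperature, pH):", gbest)
```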

  17. Planning Tools For Estimating Radiation Exposure At The National Ignition Facility

    International Nuclear Information System (INIS)

    Verbeke, J.; Young, M.; Brereton, S.; Dauffy, L.; Hall, J.; Hansen, L.; Khater, H.; Kim, S.; Pohl, B.; Sitaraman, S.

    2010-01-01

    A set of computational tools was developed to help estimate and minimize potential radiation exposure to workers from material activation in the National Ignition Facility (NIF). AAMI (Automated ALARA-MCNP Interface) provides an efficient, automated mechanism to perform the series of calculations required to create dose rate maps for the entire facility with minimal manual user input. NEET (NIF Exposure Estimation Tool) is a web application that combines the information computed by AAMI with a given shot schedule to compute and display the dose rate maps as a function of time. AAMI and NEET are currently used as work planning tools to determine stay-out times for workers following a given shot or set of shots, and to help in estimating integrated doses associated with performing various maintenance activities inside the target bay. Dose rate maps of the target bay were generated following a low-yield 10^16 D-T shot and will be presented in this paper.
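
    The time dependence behind such dose-rate maps comes from summing exponentially decaying contributions of the activation products. A minimal sketch with illustrative nuclides and per-shot dose-rate coefficients at a single map point:

```python
import numpy as np

# Dose rate at one point as a decaying sum over activation products.
nuclides = {                 # half-life [s], dose rate at t = 0 [mrem/h]
    "N-16":  (7.13,         50.0),
    "Na-24": (15.0 * 3600,   2.0),
    "Mn-56": (2.58 * 3600,   1.5),
}

def dose_rate(t_after_shot):
    return sum(d0 * np.exp(-np.log(2) * t_after_shot / t_half)
               for t_half, d0 in nuclides.values())

for hours in (0.1, 1, 8, 24):
    print(f"{hours:5.1f} h after shot: {dose_rate(hours * 3600):.3f} mrem/h")
```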

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  19. The management of mechanical integrity inspections at small-sized 'Seveso' facilities

    International Nuclear Information System (INIS)

    Bragatto, Paolo A.; Pittiglio, Paolo; Ansaldi, Silvia

    2009-01-01

    The mechanical integrity (MI) of equipment has been controlled at all industrial facilities for many decades. Control methods and intervals are regulated by laws or codes and best practices. In European countries, the legislation implementing the Seveso Directives on the control of major accident hazards requires the owners of establishments where hazardous chemicals are handled to implement a safety management system (SMS). MI controls should be an integral part of the SMS. At large establishments this goal is achieved by adopting the RBI method, but in small-sized establishments with a limited budget and scanty personnel, a heuristic approach is more suitable. This paper demonstrates the feasibility and advantages of integrating SMS and MI by means of a simple method that includes a few basic concepts of RBI without additional costs for the operator. This method, supported by a software tool, is resilient, as it functions effectively in spite of possible budget reductions and personnel turnover. The results of MI controls can also be exploited to monitor equipment condition and demonstrate the adequacy of technical systems to the Competent Authorities (CA). Furthermore, the SMS can 'capture' knowledge resulting from MI experience and exploit it for a better understanding of risk

  20. TUNL computer facilities

    International Nuclear Information System (INIS)

    Boyd, M.; Edwards, S.E.; Gould, C.R.; Roberson, N.R.; Westerfeldt, C.R.

    1985-01-01

    The XSYS system has been relatively stable during the last year, and most of our efforts have involved routine software maintenance and enhancement of existing XSYS capabilities. Modifications were made in the MBD program GDAP to increase the execution speed in key GDAP routines. A package of routines has been developed to allow communication between the XSYS and the new Wien filter microprocessor. Recently the authors have upgraded their operating system from VMS V3.7 to V4.1. This required numerous modifications to XSYS, mostly in the command procedures. A new reorganized edition of the XSYS manual will be issued shortly. The TUNL High Resolution Laboratory's VAX 11/750 computer has been in operation for its first full year as a replacement for the PRIME 300 computer which was purchased in 1974 and retired nine months ago. The data acquisition system on the VAX has been in use for the past twelve months performing a number of experiments

  1. HPCAT: an integrated high-pressure synchrotron facility at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shen, Guoyin; Chow, Paul; Xiao, Yuming; Sinogeikin, Stanislav; Meng, Yue; Yang, Wenge; Liermann, Hans-Peter; Shebanova, Olga; Rod, Eric; Bommannavar, Arunkumar; Mao, Ho-Kwang

    2008-01-01

    The high pressure collaborative access team (HPCAT) was established to advance cutting edge, multidisciplinary, high-pressure (HP) science and technology using synchrotron radiation at sector 16 of the Advanced Photon Source of Argonne National Laboratory. The integrated HPCAT facility has established four operating beamlines in nine hutches. Two beamlines are split in energy space from the insertion device (16ID) line, whereas the other two are spatially divided into two fans from the bending magnet (16BM) line. An array of novel X-ray diffraction and spectroscopic techniques has been integrated with HP and extreme temperature instrumentation at HPCAT. With a multidisciplinary approach and multi-institution collaborations, the HP program at the HPCAT has been enabling myriad scientific breakthroughs in HP physics, chemistry, materials, and Earth and planetary sciences.

  2. Remediation Approach for the Integrated Facility Disposition Project at the Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Kirk, P.G.; Stephens, Jr.J.M.

    2009-01-01

    The Integrated Facility Disposition Project (IFDP) is a multi-billion-dollar remediation effort being conducted by the U.S. Department of Energy (DOE) Office of Environmental Management in Oak Ridge, Tennessee. The scope of the IFDP encompasses remedial actions related to activities conducted over the past 65 years at the Oak Ridge National Laboratory (ORNL) and the Y-12 National Security Complex (Y-12). Environmental media and facilities became contaminated as a result of operations, leaks, spills, and past waste disposal practices. ORNL's mission includes energy, environmental, nuclear security, computational, and materials research and development. Remediation activities will be implemented at ORNL as part of IFDP scope to meet remedial action objectives established in existing and future decision documents. Remedial actions are necessary (1) to comply with environmental regulations to reduce human health and environmental risk and (2) to release strategic real estate needed for modernization initiatives at ORNL. The scope of remedial actions includes characterization, waste management, transportation and disposal, stream restoration, and final remediation of contaminated soils, sediments, and groundwater. Activities include removal of at- or below-grade substructures such as slabs, underground utilities, underground piping, tanks, basins, pits, ducts, equipment housings, manholes, and concrete-poured structures associated with equipment housings and basement walls/floors/columns. Many interim remedial actions involving groundwater and surface water that have not been completed are included in the IFDP remedial action scope. The challenges presented by the remediation of Bethel Valley at ORNL are formidable. The proposed approach to remediation endeavors to use the best available technologies and technical approaches from EPA and other federal agencies and lessons learned from previous cleanup efforts. The objective is to minimize cost, maximize remedial

  3. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    International Nuclear Information System (INIS)

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facility buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with the large oil-pool fire of the 1986 HDR containment test T52.14, a 3000-kW fire experiment. The computed results are in good agreement with the observed data.
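
    GASFLOW itself couples three-dimensional fluid dynamics, chemical kinetics, and aerosol physics; as a much smaller illustration of the finite-volume transport idea described above, the Python sketch below advects a gas species mass fraction through a row of connected cells with a donor-cell (upwind) scheme. The sketch is ours, not GASFLOW code, and all numbers are invented.

        import numpy as np

        # Donor-cell (first-order upwind) advection of a species mass
        # fraction through a row of finite-volume cells, with a fixed
        # inflow concentration at the left boundary.
        def advect(y, u, dx, dt, y_in):
            ynew = y.copy()
            for i in range(len(y)):
                upstream = y_in if i == 0 else y[i - 1]  # donor cell
                ynew[i] = y[i] - u * dt / dx * (y[i] - upstream)
            return ynew

        y = np.zeros(50)          # species mass fraction per cell
        for _ in range(200):      # march in time; CFL = u*dt/dx = 0.4
            y = advect(y, u=2.0, dx=0.5, dt=0.1, y_in=1.0)
        print(round(float(y[10]), 3))   # fraction reaching cell 10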

  4. Integration of smart wearable mobile devices and cloud computing in South African healthcare

    CSIR Research Space (South Africa)

    Mvelase, PS

    2015-11-01

    Full Text Available Integration of Smart Wearable Mobile Devices and Cloud Computing in South African Healthcare, by Promise Mvelase, Zama Dlamini, Angeline Dludla, and Happy Sithole. Abstract: The acceptance of cloud computing is increasing at a fast pace in distributed...

  5. Conjunctive operation of river facilities for integrated water resources management in Korea

    Directory of Open Access Journals (Sweden)

    H. Kim

    2016-10-01

    Full Text Available With the increasing trend of water-related disasters such as floods and droughts resulting from climate change, the integrated management of water resources is gaining importance. Korea has worked towards preventing disasters caused by floods and droughts, managing water resources efficiently through the coordinated operation of river facilities such as dams, weirs, and agricultural reservoirs. This has been pursued to enable everyone to enjoy the benefits inherent in the utilization of water resources, by preserving functional rivers, improving their utility, and reducing the degradation of water quality caused by floods and droughts. At the same time, coordinated activities are being conducted for multi-purpose dams, hydro-power dams, weirs, agricultural reservoirs, and water use facilities (featuring a daily water intake of over 100 000 m3 day−1) with the purpose of monitoring the management of such facilities. This is being done to ensure the protection of the public interest without acting as an obstacle to sound water management practices. During the flood season, each facility reserves flood control capacity by operating below a limited water level determined in advance by the Regulation Council. Dam flood discharge decisions are approved through the flood forecasting and management of the Flood Control Office so as to minimize flood damage both upstream and downstream. During the dry season, the operational plan is implemented through the council's predetermination to secure an adequate quantity and distribution of water.

  6. Integrated Computational Materials Engineering (ICME) for Third Generation Advanced High-Strength Steel Development

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham; Sachdev, Anil K.; Quinn, James; Krupitzer, Ronald; Sun, Xin

    2015-06-01

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  7. A Computer Simulation to Assess the Nuclear Material Accountancy System of a MOX Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Portaix, C.G.; Binner, R.; John, H.

    2015-01-01

    SimMOX is a computer programme that simulates container histories as they pass through a MOX facility. It performs two parallel calculations: · the first quantifies the actual movements of material that might be expected to occur, given certain assumptions about, for instance, the accumulation of material and waste, and their subsequent treatment; · the second quantifies the same movements on the basis of the operator's perception of the quantities involved; that is, they are based on assumptions about the quantities contained in the containers. Separate skeletal Excel computer programmes are provided, which can be configured to generate further accountancy results based on these two parallel calculations. SimMOX is flexible in that it makes few assumptions about the order and operational performance of individual activities that might take place at each stage of the process. It is able to do this because its focus is on material flows, and not on the performance of individual processes. Similarly, there are no preconceptions about the different types of containers that might be involved. At the macroscopic level, the simulation takes steady operation as its base case, i.e., the same quantity of material is deemed to enter and leave the simulated area over any given period. Transient situations can then be superimposed onto this base scene by simulating them as operational incidents. A general facility has been incorporated into SimMOX to enable the user to create an 'act of a play' based on a number of operational incidents that have been built into the programme. By doing this, a simulation can be constructed that predicts the way the facility would respond to any number of transient activities. This computer programme can help assess the nuclear material accountancy system of a MOX fuel fabrication facility; for instance, the implications of applying NRTA (near-real-time accountancy). (author)
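
    The heart of the approach is keeping two parallel books: what actually moved versus what the operator's measurements say moved. The Python sketch below imitates that idea for an invented stream of batches, with an assumed measurement error and equipment hold-up; the gap between the two ledgers is the inventory difference (MUF) that such accountancy assessments examine. It is an illustration of the concept, not the SimMOX algorithm.

        import random

        # Two parallel ledgers: what actually moved vs. what the
        # operator's measurements declare. Their difference after many
        # batches is the inventory difference (MUF).
        random.seed(1)
        actual, declared = 0.0, 0.0
        for batch in range(20):
            true_kg = 4.0                                     # material in batch
            measured_kg = true_kg * random.gauss(1.0, 0.005)  # 0.5% error
            holdup_kg = 0.02                                  # retained in equipment
            actual += true_kg - holdup_kg                     # physical ledger
            declared += measured_kg                           # operator's ledger
        print(f"MUF after 20 batches: {declared - actual:.3f} kg")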

  8. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long
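
    As a toy illustration of classifying and querying meta-data over the three resource types named above (data, software tools, and web-services), the Python sketch below stores a few invented records and filters them by keyword and kind; the field names and resources are illustrative, not the iTools schema.

        # Minimal resource meta-data records and a keyword query over them.
        resources = [
            {"kind": "tool", "name": "AlignerX", "keywords": {"alignment", "genomics"}},
            {"kind": "data", "name": "BrainAtlas", "keywords": {"imaging", "atlas"}},
            {"kind": "web-service", "name": "SeqSearch", "keywords": {"alignment", "search"}},
        ]

        def query(keyword, kind=None):
            # Return names of resources matching a keyword, optionally
            # restricted to one resource kind.
            return [r["name"] for r in resources
                    if keyword in r["keywords"] and (kind is None or r["kind"] == kind)]

        print(query("alignment"))                # ['AlignerX', 'SeqSearch']
        print(query("alignment", kind="tool"))   # ['AlignerX']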

  9. Health workers' knowledge of and attitudes towards computer applications in rural African health facilities.

    Science.gov (United States)

    Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E; Blank, Antje

    2014-01-01

    The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. To report an assessment of health providers' computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS, a cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA describe the association between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. A total of 108 providers responded, 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge (p < 0.05), and providers generally expressed positive attitudes towards the use of computers in the workplace. Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes to computers found in this study underscore that rural care providers, too, are ready to use such technology.
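
    As an illustration of the chi-squared association tests mentioned above, the sketch below runs SciPy's test on a hypothetical 2x2 table of prior computer training versus computer literacy; the counts are invented for the example, not the study's data.

        from scipy.stats import chi2_contingency

        # Hypothetical 2x2 contingency table:
        # rows = prior training (yes/no), columns = literate / beginner.
        table = [[25, 6],
                 [12, 65]]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")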

  10. Integrating supervision, control and data acquisition—The ITER Neutral Beam Test Facility experience

    Energy Technology Data Exchange (ETDEWEB)

    Luchetta, A., E-mail: adriano.luchetta@igi.cnr.it; Manduchi, G.; Taliercio, C.; Breda, M.; Capobianco, R.; Molon, F.; Moressa, M.; Simionato, P.; Zampiva, E.

    2016-11-15

    Highlights: • The paper describes the experience gained in the integration of different systems for the control and data acquisition system of the ITER Neutral Beam Test Facility. • It describes the way the different frameworks have been integrated. • It reports some lessons learnt during system integration. • It reports some authors' considerations about the development of the ITER CODAC. - Abstract: The ITER Neutral Beam (NBI) Test Facility, under construction in Padova, Italy, consists of the ITER full-scale ion source for the heating neutral beam injector, referred to as SPIDER, and the full-size prototype injector, referred to as MITICA. The Control and Data Acquisition System (CODAS) for SPIDER has been developed and is going to be in operation in 2016. The system is composed of four main components: Supervision, Slow Control, Fast Control, and Data Acquisition. These components interact with each other to carry out the system operation and, since they represent a common pattern in fusion experiments, software frameworks have been used for each component or set of components. In order to reuse as far as possible the architecture developed for SPIDER, it is important to clearly define the boundaries and the interfaces among the system components so that the implementation of any component can be replaced without affecting the overall architecture. This work reports the experience gained in the development of the SPIDER components, highlighting the importance of defining generic interfaces among components, showing how the specific solutions have been adapted to such interfaces, and suggesting possible approaches for the development of other ITER subsystems.
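
    A minimal Python sketch of the interface-first principle stressed above: if supervision drives data acquisition only through an abstract interface, the concrete framework behind that interface can be swapped without touching the rest of the architecture. The class and method names are illustrative, not the SPIDER software.

        from abc import ABC, abstractmethod

        # Generic interface between the Supervision component and a data
        # acquisition framework: Supervision only sees this contract.
        class DataAcquisition(ABC):
            @abstractmethod
            def configure(self, pulse_id: int) -> None: ...
            @abstractmethod
            def start(self) -> None: ...
            @abstractmethod
            def stop(self) -> None: ...

        # One concrete framework behind the interface; replacing it with
        # another implementation leaves run_pulse() untouched.
        class FrameworkDAQ(DataAcquisition):
            def configure(self, pulse_id: int) -> None:
                print(f"configured for pulse {pulse_id}")
            def start(self) -> None:
                print("acquisition running")
            def stop(self) -> None:
                print("acquisition stopped")

        def run_pulse(daq: DataAcquisition, pulse_id: int) -> None:
            daq.configure(pulse_id)
            daq.start()
            daq.stop()

        run_pulse(FrameworkDAQ(), 42)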

  11. Design and first integral test of MUSE facility in ALPHA program

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun-sun; Yamano, Norihiro; Maruyama, Yu; Moriyama, Kiyofumi; Kudo, Tamotsu; Yang, Yanhua; Sugimoto, Jun [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    Vapor explosion (steam explosion, or energetic fuel-coolant interaction) is a phenomenon in which a hot liquid rapidly releases its internal energy into a surrounding colder and more volatile liquid when the two liquids come into sudden contact. The energy release produces vapor on a timescale short compared with that of vapor expansion; the resulting local pressurization, similar to an explosion, threatens the surroundings through dynamic pressures and the subsequent expansion. It has been recognized that the energetics of vapor explosions strongly depend on the initial mixing geometry established by the contact of the hot and cold liquids. Therefore, a new program has been initiated to investigate the energetics of vapor explosions in various contact geometries (pouring, stratified, coolant-injection, and melt-injection modes) in a facility that is able to measure the energy conversion ratio and eventually to provide data for evaluating mechanistic analytical models. In this report, the new facility, called MUSE (MUlti-configuration in Steam Explosions), and the results of the first integral test are described in detail. (author)

  12. A conceptual design of multidisciplinary-integrated C.F.D. simulation on parallel computers

    International Nuclear Information System (INIS)

    Onishi, Ryoichi; Ohta, Takashi; Kimura, Toshiya.

    1996-11-01

    The design of a parallel aeroelastic code for integrated aircraft simulation is presented. A method for integrating aerodynamics and structural dynamics software on parallel computers is devised, using the Euler/Navier-Stokes equations coupled with wing-box finite element structures. The synthesis of a modern aircraft requires the optimization of aerodynamics, structures, controls, operability, and other design disciplines, and R and D efforts to implement Multidisciplinary Design Optimization environments on high-performance computers are under way, especially among the U.S. aerospace industries. This report describes a Multiple Program Multiple Data (MPMD) parallelization of aerodynamics and structural dynamics codes with a dynamically deforming grid. A three-dimensional computation of a flowfield with dynamic deformation caused by a structural deformation is performed, and the calculated pressure data are used to compute the structural deformation, which is input again to the fluid dynamics code. This process is repeated, exchanging the computed pressures and deformations between the flowfield grids and the structural elements. It makes it possible to simulate structural motion that takes the interaction of fluid and structure into account. The conceptual design for achieving the aforementioned functions is reported. Future extensions to incorporate control systems, which would enable simulation of a realistic aircraft configuration and make the code a major tool for Aircraft Integrated Simulation, are also investigated. (author)
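
    A toy, single-process sketch of that pressure/deformation exchange: the two one-line 'solvers' below stand in for the fluid and structure codes, which in the MPMD setting run as separate programs and exchange these quantities every cycle. The coefficients are arbitrary.

        # Stand-in "codes": a fluid solver returning a pressure load for
        # a given wing deflection, and a structure solver returning the
        # deflection under that load. Iterating their data exchange
        # converges to the coupled aeroelastic equilibrium.
        def fluid_solve(deflection):
            return 1.0 + 0.3 * deflection      # arbitrary linear model

        def structure_solve(pressure):
            return 0.5 * pressure              # arbitrary compliance

        deflection = 0.0
        for cycle in range(100):
            pressure = fluid_solve(deflection)          # fluid step
            new_deflection = structure_solve(pressure)  # structure step
            if abs(new_deflection - deflection) < 1e-12:
                break
            deflection = new_deflection                 # exchange data
        print(f"converged in {cycle} cycles: deflection = {deflection:.6f}")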

  13. Integrated circuit design using design automation

    International Nuclear Information System (INIS)

    Gwyn, C.W.

    1976-09-01

    Although the use of computer aids to develop integrated circuits is relatively new at Sandia, the program has been very successful. The results have verified the utility of the in-house CAD design capability. Custom ICs have been developed in much shorter times than are available through semiconductor device manufacturers. In addition, security problems were minimized and savings were realized in circuit cost. The custom CMOS ICs were designed at less than half the cost of designing with conventional techniques. In addition to the computer-aided design, the prototype fabrication and testing capability provided by the semiconductor development laboratory and the microelectronics computer network allows circuits to be fabricated and evaluated before the designs are transferred to commercial semiconductor manufacturers for production. The Sandia design and prototype fabrication facilities provide the capability of complete custom integrated circuit development entirely within the ERDA laboratories

  14. Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center

    Science.gov (United States)

    Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet

    1998-01-01

    The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, currently 10 of the systems are scheduled for refurbishment and/or replacement as part of a 5-year implementation. The expected return on investment includes reductions in test schedules, improvements in the safety of facility operations, reductions in the complexity of a test, and reductions in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement, while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one facility (Facility #225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Insignificant anomalies have occurred and were resolved with minimal impact to testing and operations. The remaining work will be performed over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities, and the implementation of a database server to allow efficient test management and data analysis.

  15. Computer Profile of School Facilities Energy Consumption.

    Science.gov (United States)

    Oswalt, Felix E.

    This document outlines a computerized management tool designed to enable building managers to identify energy consumption as related to types and uses of school facilities for the purpose of evaluating and managing the operation, maintenance, modification, and planning of new facilities. Specifically, it is expected that the statistics generated…

  16. A study on development of Pyro process integrated inactive demonstration facility

    International Nuclear Information System (INIS)

    Cho, I.; Lee, E.; Choung, W.; You, G.; Kim, H.

    2010-10-01

    Since 2007, PRIDE (the Pyro process integrated inactive demonstration facility) has been developed to demonstrate integrated engineering-scale pyro processing using natural uranium with surrogate materials. In this paper, a safety evaluation of a hypothetical accident case is carried out to confirm that the release of radioactivity to the environment would be negligible, and the performance of the indoor argon flow in the argon cell is investigated by means of CFD analysis. Even in the worst accident case, the firing of all the uranium metal in the argon cell, the resulting dose rates are negligible compared with 0.25 Sv of effective dose to the whole body or 3 Sv of equivalent dose to the thyroid. Preliminary CFD analyses show the temperature and velocity distributions of the argon cell and give information for changing the argon exchange rate and relocating the argon supply or exhaust ducts. CFD will allow design changes and improvements in ventilation systems at lower cost. (Author)

  17. Monitoring land- and water-use dynamics in the Columbia Plateau using remote-sensing computer analysis and integration techniques

    International Nuclear Information System (INIS)

    Wukelic, G.E.; Foote, H.P.; Blair, S.C.; Begej, C.D.

    1981-09-01

    This study successfully utilized advanced, remote-sensing computer-analysis techniques to quantify and map land- and water-use trends potentially relevant to siting, developing, and operating a national high-level nuclear waste repository on the US Department of Energy's (DOE) Hanford Site in eastern Washington State. Specifically, using a variety of digital data bases (primarily multidate Landsat data) and digital analysis programs, the study produced unique numerical data and integrated data reference maps relevant to regional (Columbia Plateau) and localized (Pasco Basin) hydrologic considerations associated with developing such a facility. Accordingly, study results should directly contribute to the preparation of the Basalt Waste Isolation Project site-characterization report currently in progress. Moreover, since all study data developed are in digital form, they can be called upon to contribute to future reference repository location monitoring and reporting efforts, as well as be utilized in other DOE programmatic areas having technical and/or environmental interest in the Columbia Plateau region. The results obtained indicate that multidate digital Landsat data provide an inexpensive, up-to-date, and accurate data base and reference map of natural and cultural features existing in any region. These data can be (1) computer enhanced to highlight selected surface features of interest; (2) processed/analyzed to provide regional land-cover/use information and trend data; and (3) combined with other line and point data files to accommodate interactive, correlative analyses and integrated color-graphic displays to aid interpretation and modeling efforts

  18. AI/OR computational model for integrating qualitative and quantitative design methods

    Science.gov (United States)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  19. Vehicle-to-Grid Integration | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    NREL's research stands at the forefront of vehicle-to-grid integration. Our work focuses on building the charging infrastructure and integration needed for electric vehicles and the grid to benefit each other. NREL's research on electric vehicle (EV) grid integration examines

  1. Application of Framework for Integrating Safety, Security and Safeguards (3Ss) into the Design Of Used Nuclear Fuel Storage Facility

    Energy Technology Data Exchange (ETDEWEB)

    Badwan, Faris M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott F [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-06

    Department of Energy’s Office of Nuclear Energy, Fuel Cycle Research and Development develops options to the current commercial fuel cycle management strategy to enable the safe, secure, economic, and sustainable expansion of nuclear energy while minimizing proliferation risks, by conducting research and development focused on used nuclear fuel recycling and waste management to meet U.S. needs. Used nuclear fuel is currently stored onsite in either wet pools or dry storage systems, with disposal envisioned in an interim storage facility and, ultimately, in a deep-mined geologic repository. The safe management and disposition of used nuclear fuel and/or nuclear waste is a fundamental aspect of any nuclear fuel cycle. Integrating safety, security, and safeguards (3Ss) fully in the early stages of the design process for a new nuclear facility has the potential to effectively minimize safety, proliferation, and security risks. The 3Ss integration framework could become the new national and international norm and the standard process for designing future nuclear facilities. The purpose of this report is to develop a framework for integrating the safety, security, and safeguards concept into the design of a Used Nuclear Fuel Storage Facility (UNFSF). The primary focus is on integration of safeguards and security into the UNFSF based on the existing Nuclear Regulatory Commission (NRC) approach to addressing the safety/security interface (10 CFR 73.58 and Regulatory Guide 5.73) for nuclear power plants. The methodology used for adaptation of the NRC safety/security interface will be used as the basis for development of the safeguards/security interface and later as the basis for development of the safety/safeguards interface, completing the integration cycle of safety, security, and safeguards. The overall methodology for integration of the 3Ss is proposed, but only the integration of safeguards and security is applied to the design of the UNFSF.

  2. Integrated optical circuits for numerical computation

    Science.gov (United States)

    Verber, C. M.; Kenan, R. P.

    1983-01-01

    The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.

  3. What Is Energy Systems Integration? | Energy Systems Integration Facility |

    Science.gov (United States)

    Energy systems integration (ESI) is an approach to solving big energy challenges that explores ways for energy systems to work together. NREL is a founding member of the International Institute for Energy Systems Integration.

  4. Factors Hindering the Integration of CALL in a Tertiary Institution

    Directory of Open Access Journals (Sweden)

    Izaham Shah Ismail

    2008-12-01

    Full Text Available The field of Computer Assisted Language Learning (CALL) is constantly evolving, as it is very much dependent on the advancement of computer technologies. With new technologies being invented almost every day, experts in the field are looking for ways to apply these new technologies in the language classroom. Despite that, teachers are said to be slow at adopting technology in their classrooms, and language teachers, whether at schools or tertiary institutions, are no exception. This study attempts to investigate the factors that hinder ESL instructors at an institution of higher learning from integrating CALL in their lessons. Interviews were conducted with five ESL instructors, and the results revealed that the factors which hinder them from integrating CALL in their teaching are universal ones such as knowledge of technology and pedagogy, computer facilities and resources, the absence of exemplary integration of CALL, personal beliefs about language teaching, views on the role of the computer as a teacher, and the evaluation of learning outcomes.

  5. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  6. Future Computer Requirements for Computational Aerodynamics

    Science.gov (United States)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  7. Projects at the component development and integration facility. Quarterly technical progress report, April 1, 1994--June 30, 1994

    International Nuclear Information System (INIS)

    1994-01-01

    This quarterly technical progress report presents progress on the projects at the Component Development and Integration Facility (CDIF) during the third quarter of FY94. The CDIF is a major Department of Energy test facility in Butte, Montana, operated by MSE, Inc. Projects in progress include: Biomass Remediation Project; Heavy Metal-Contaminated Soil Project; MHD Shutdown; Mine Waste Technology Pilot Program; Plasma Projects; Resource Recovery Project; and Spray Casting Project

  8. Software for computing and annotating genomic ranges.

    Directory of Open Access Journals (Sweden)

    Michael Lawrence

    Full Text Available We describe Bioconductor infrastructure for representing and computing on annotated genomic ranges and integrating genomic data with the statistical computing features of R and its extensions. At the core of the infrastructure are three packages: IRanges, GenomicRanges, and GenomicFeatures. These packages provide scalable data structures for representing annotated ranges on the genome, with special support for transcript structures, read alignments and coverage vectors. Computational facilities include efficient algorithms for overlap and nearest neighbor detection, coverage calculation and other range operations. This infrastructure directly supports more than 80 other Bioconductor packages, including those for sequence analysis, differential expression analysis and visualization.
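
    The overlap query at the core of this infrastructure can be illustrated in a few lines of plain Python (the Bioconductor packages are R libraries and implement it far more scalably, e.g. with interval trees); the reads and exons below are invented.

        # Closed-interval overlap test and a simple "find overlaps" query
        # between read alignments and annotated exons.
        reads = [(105, 150), (300, 380), (410, 460)]   # (start, end)
        exons = {"exon1": (100, 200), "exon2": (350, 450)}

        def overlaps(a, b):
            return a[0] <= b[1] and b[0] <= a[1]

        for name, exon in exons.items():
            hits = [r for r in reads if overlaps(r, exon)]
            print(name, "is covered by", len(hits), "reads:", hits)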

  9. Surface Water Modeling Using an EPA Computer Code for Tritiated Waste Water Discharge from the heavy Water Facility

    International Nuclear Information System (INIS)

    Chen, K.F.

    1998-06-01

    Tritium releases from the D-Area Heavy Water Facilities to the Savannah River have been analyzed. The U.S. EPA WASP5 computer code was used to simulate surface water transport for tritium releases from the D-Area Drum Wash, Rework, and DW facilities. The WASP5 model was qualified with the 1993 tritium measurements at U.S. Highway 301. At the maximum tritiated waste water concentrations, the calculated tritium concentration in the Savannah River at U.S. Highway 301 due to concurrent releases from the D-Area Heavy Water Facilities varies from 5.9 to 18.0 pCi/ml as a function of the operating conditions of these facilities. The calculated concentration is lowest when the batch-release method for the Drum Wash Waste Tanks is adopted
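
    A full WASP5 run resolves transient surface-water transport; the Python sketch below shows only the zeroth-order, fully mixed steady-state dilution estimate that such a model refines. The release rate and river flow are assumed round numbers for illustration, not D-Area figures.

        # Fully mixed steady-state estimate: concentration equals the
        # release rate divided by the river's volumetric flow.
        release_ci_per_day = 50.0        # assumed tritium release (Ci/day)
        river_flow_m3_per_s = 280.0      # assumed river flow (m3/s)

        flow_ml_per_day = river_flow_m3_per_s * 86400.0 * 1.0e6
        conc_pci_per_ml = release_ci_per_day * 1.0e12 / flow_ml_per_day
        print(f"fully mixed concentration ~ {conc_pci_per_ml:.2f} pCi/ml")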

  10. Computational integration of the phases and procedures of calibration processes for radioprotection

    International Nuclear Information System (INIS)

    Santos, Gleice R. dos; Thiago, Bibiana dos S.; Rocha, Felicia D.G.; Santos, Gelson P. dos; Potiens, Maria da Penha A.; Vivolo, Vitor

    2011-01-01

    This work integrates the phases of the calibration process computationally, using a single software package, from the arrival of the instrument at the Instrument Calibration Laboratory (LCI-IPEN) to the conclusion of the calibration procedures. In this way, the initial information, such as trademark, model, manufacturer, and owner, together with the calibration records, is entered only once, through to the emission of the calibration certificate

  11. Flow analysis of HANARO flow simulated test facility

    International Nuclear Information System (INIS)

    Park, Yong-Chul; Cho, Yeong-Garp; Wu, Jong-Sub; Jun, Byung-Jin

    2002-01-01

    The HANARO, a multi-purpose research reactor of 30 MWth, open-tank-in-pool type, has been in normal operation since it first went critical in February 1995. Many experiments must be performed safely to expand the utilization of the HANARO. A flow simulated test facility is being developed for endurance tests of reactivity control units over extended lifetimes and for verifying the structural integrity of experimental facilities prior to loading in the HANARO. This test facility is composed of three major parts: a half-core structure assembly, a flow circulation system, and a support system. The half-core structure assembly is composed of a plenum, a grid plate, core channels with flow tubes, a chimney, and a dummy pool. The flow channels are to be fitted with flow orifices to simulate core channels. The test facility must reproduce flow characteristics similar to those of the HANARO. This paper therefore describes an analytical study of the flow behavior of the test facility. A computational flow analysis has been performed to verify the flow structure and similarity of the test facility, assuming that the flow rates and pressure differences of the core channel are constant. The shapes of the flow orifices were determined by trial and error based on the design requirements of the core channel. A computer analysis program with the standard k-ε turbulence model was applied to the three-dimensional analysis. The results of the flow simulation showed flow characteristics similar to those of the HANARO and satisfied the design requirements of the test facility. The shape of the flow orifices used in this numerical simulation can be adapted to manufacturing requirements. The flow rate and the pressure difference through the core channel proved by this simulation can be used as design requirements for the flow system. The analysis results will be verified against the results of the flow test after construction of the flow system. (author)

  12. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport codes it is used with and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes, such as PHITS, FLUKA, and MCNP, after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
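
    The framework itself is a C++/MPI module for Fortran and C transport codes; the Python (mpi4py) sketch below illustrates the two services highlighted above: distributing histories across ranks and checkpointing partial tallies so that a run can restart from a saved file. The file names and the scoring loop are invented stand-ins.

        import os
        import pickle
        from mpi4py import MPI   # requires an MPI environment

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        ckpt = f"tally_rank{rank}.ckpt"   # per-rank checkpoint file

        # Resume from the checkpoint if one exists, else start fresh.
        if os.path.exists(ckpt):
            with open(ckpt, "rb") as f:
                done, tally = pickle.load(f)
        else:
            done, tally = 0, 0.0

        for i in range(done, 1000):        # this rank's share of histories
            tally += 1.0 / (1 + i + rank)  # stand-in for transport scoring
            if (i + 1) % 200 == 0:         # periodic checkpoint
                with open(ckpt, "wb") as f:
                    pickle.dump((i + 1, tally), f)

        total = comm.reduce(tally, op=MPI.SUM, root=0)
        if rank == 0:
            print(f"combined tally from {comm.Get_size()} ranks: {total:.3f}")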

  13. An automated entry control system for nuclear facilities

    International Nuclear Information System (INIS)

    Ream, W.K.; Espinoza, J.

    1985-01-01

    An entry control system to automatically control access to nuclear facilities is described. The design uses a centrally located console, integrated into the regular security system, to monitor the computer-controlled passage into and out of sensitive areas. Four types of entry control points are used: an unmanned enclosed portal with metal and SNM detectors for contraband detection with positive personnel identification, a bypass portal for contraband search after a contraband alarm in a regular portal also with positive personnel identification, a single door entry point with positive personnel identification, and a single door entry point with only a magnetic card-type identification. Security force action is required only as a response to an alarm. The integration of the entry control function into the security system computer is also described. The interface between the entry control system and the monitoring security personnel utilizing a color graphics display with touch screen input is emphasized

  14. Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility

    Science.gov (United States)

    Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.

    2017-01-01

    The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparing the CFD results and test data indicates an effectively fully-catalytic copper surface on the heat flux probe of about 10% efficiency and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section, prior to the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment, test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.

  15. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Phillips, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wampler, Cheryl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meisner, Robert [National Nuclear Security Administration (NNSA), Washington, DC (United States)

    2010-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality, and scientific details); to quantify critical margins and uncertainties; and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  16. Photon echo quantum random access memory integration in a quantum computer

    International Nuclear Information System (INIS)

    Moiseev, Sergey A; Andrianov, Sergey N

    2012-01-01

    We have analysed an efficient integration of multi-qubit echo quantum memory (QM) into a quantum computer scheme based on SQUIDs, quantum dots, or resonant atomic ensembles in a quantum electrodynamics cavity. Here, one atomic ensemble with controllable inhomogeneous broadening is used for the QM node, and other nodes characterized by a homogeneously broadened resonant line are used for processing. We have found the optimal conditions for the efficient integration of the multi-qubit QM modified for the analysed scheme, and we have determined the self-temporal modes providing a perfect reversible transfer of the photon qubits between the QM node and arbitrary processing nodes. The obtained results open the way for the realization of full-scale solid-state quantum computing based on the efficient multi-qubit QM. (paper)

  17. Integrating publicly-available data to generate computationally ...

    Science.gov (United States)

    The adverse outcome pathway (AOP) framework provides a way of organizing knowledge related to the key biological events that result in a particular health outcome. For the majority of environmental chemicals, the availability of curated pathways characterizing potential toxicity is limited. Methods are needed to assimilate large amounts of available molecular data and quickly generate putative AOPs for further testing and use in hazard assessment. A graph-based workflow was used to facilitate the integration of multiple data types to generate computationally-predicted (cp) AOPs. Edges between graph entities were identified through direct experimental or literature information or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways measured by differential gene expression and high-throughput screening targets. Sub-networks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (hepatic steatosis) were extracted using the network topology. Comparison of the cpAOP subnetworks to published mechanistic descriptions for both CCl4 toxicity and hepatic steatosis demonstrates that computational approaches can be used to replicate manually curated AOPs and to identify pathway targets that lack genomic markers.
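
    Frequent itemset mining, the inference step named above, can be illustrated by counting entity pairs that co-occur across treatment 'transactions' and keeping pairs above a support threshold as candidate cpAOP edges. The Python sketch below uses invented treatments for the example.

        from collections import Counter
        from itertools import combinations

        # Each "transaction" lists entities observed together for one
        # chemical treatment; pairs co-occurring often enough become
        # candidate edges in the cpAOP network.
        treatments = [
            {"CCl4", "oxidative_stress", "steatosis"},
            {"CCl4", "oxidative_stress", "necrosis"},
            {"ethanol", "oxidative_stress", "steatosis"},
            {"CCl4", "steatosis"},
        ]
        pairs = Counter(p for t in treatments
                        for p in combinations(sorted(t), 2))
        support = 2    # minimum co-occurrence count to keep an edge
        edges = {p: n for p, n in pairs.items() if n >= support}
        print(edges)   # e.g. ('CCl4', 'steatosis') co-occurs twice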

  18. COGMIR: A computer model for knowledge integration

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.X.

    1988-01-01

    This dissertation explores some aspects of knowledge integration, namely, the accumulation of scientific knowledge and the performance of analogical reasoning on the acquired knowledge. Knowledge to be integrated is conveyed by paragraph-like pieces referred to as documents. By incorporating some results from cognitive science, the Deutsch-Kraft model of information retrieval is extended to a model for knowledge engineering, which integrates acquired knowledge and performs intelligent retrieval. The resulting computer model is termed COGMIR, which stands for a COGnitive Model for Intelligent Retrieval. A scheme, named query invoked memory reorganization, is used in COGMIR for knowledge integration. Unlike some other schemes, which realize knowledge integration through subjective understanding by representing new knowledge in terms of existing knowledge, the proposed scheme suggests recording, at storage time, only the possible connections of knowledge acquired from different documents. The actual binding of the knowledge acquired from different documents is deferred to query time. There is only one way to store knowledge and numerous ways to utilize it. Each document can be represented as a whole as well as by its meaning. In addition, since facts are constructed from the documents, document retrieval and fact retrieval are treated in a unified way. When the requested knowledge is not available, query invoked memory reorganization can generate suggestions based on available knowledge through analogical reasoning. This is done by revising the algorithms developed for document retrieval and fact retrieval, and by incorporating Gentner's structure mapping theory. Analogical reasoning is treated as a natural extension of intelligent retrieval, so that two previously separate research areas are combined. A case study is provided. All the components are implemented as list structures similar to relational databases.

  19. Blockchain-based database to ensure data integrity in cloud computing environments

    OpenAIRE

    Gaetani, Edoardo; Aniello, Leonardo; Baldoni, Roberto; Lombardi, Federico; Margheri, Andrea; Sassone, Vladimiro

    2017-01-01

    Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinati...
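
    The integrity mechanism underneath such proposals is a hash chain: each record commits to its predecessor, so tampering with any stored record invalidates every subsequent digest. A minimal Python sketch of that structure (without the consensus layer a real blockchain adds):

        import hashlib
        import json

        def digest(record, prev_hash):
            payload = prev_hash + json.dumps(record, sort_keys=True)
            return hashlib.sha256(payload.encode()).hexdigest()

        # Build a chain: every block's hash covers the previous hash.
        chain, prev = [], "0" * 64
        for record in [{"op": "insert", "row": 1}, {"op": "update", "row": 1}]:
            prev = digest(record, prev)
            chain.append({"record": record, "hash": prev})

        def verify(chain):
            prev = "0" * 64
            for block in chain:
                if digest(block["record"], prev) != block["hash"]:
                    return False
                prev = block["hash"]
            return True

        print(verify(chain))                # True
        chain[0]["record"]["row"] = 99      # tamper with stored data
        print(verify(chain))                # False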

  20. The computational design of Geological Disposal Technology Integration System

    International Nuclear Information System (INIS)

    Ishihara, Yoshinao; Iwamoto, Hiroshi; Kobayashi, Shigeki; Neyama, Atsushi; Endo, Shuji; Shindo, Tomonori

    2002-03-01

    In order to develop the 'Geological Disposal Technology Integration System', which is intended to serve as a systematized knowledge base for fundamental study, the computational design of the database and image-processing functions indispensable to the system was carried out, a prototype was built for trial purposes, and its functions were confirmed. (1) A database for the integration system was constructed that systematizes the information necessary for examining the overall repository composition, together with related information, and the system was designed to consist of image processing, analytical information management, repository component management, and system security functions. (2) The range of data and information treated by the system was examined, the database structure was designed, and the image-processing function for the data preserved in the integrated database was designed. (3) A prototype covering the basic functions, the system operation interface, and the image-processing function was manufactured to verify the feasibility of the 'Geological Disposal Technology Integration System' based on the results of the design examination, and its functions were confirmed. (author)

  1. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire
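
    The lumped-parameter view reduces a facility to nodes (rooms) joined by branches (ducts, dampers, filters). As a much-simplified illustration, the Python sketch below solves a small linearized ventilation network for interior room pressures by mass balance; the conductances and boundary pressures are invented, and FIRAC itself solves a nonlinear, transient version of this problem.

        import numpy as np

        # Rooms 1 and 2 are interior nodes; node 0 is a supply plenum at
        # +500 Pa and node 3 is the exhaust stack at 0 Pa. Each branch
        # carries flow g * (p_i - p_j); mass balance at the interior
        # nodes gives a linear system for the unknown pressures.
        g = {(0, 1): 2.0, (1, 2): 1.5, (1, 3): 0.8, (2, 3): 1.2}
        fixed = {0: 500.0, 3: 0.0}
        index = {1: 0, 2: 1}                  # unknown-node ordering

        A = np.zeros((2, 2))
        b = np.zeros(2)
        for (i, j), gij in g.items():
            for node, other in ((i, j), (j, i)):
                if node in index:
                    A[index[node], index[node]] += gij
                    if other in index:
                        A[index[node], index[other]] -= gij
                    else:
                        b[index[node]] += gij * fixed[other]
        p1, p2 = np.linalg.solve(A, b)
        print(f"room pressures: {p1:.1f} Pa and {p2:.1f} Pa")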

  2. Energy Systems Integration News - October 2016 | Energy Systems Integration

    Science.gov (United States)

    A monthly recap of the latest energy systems integration (ESI) developments at NREL and around the world. Items in this issue include the Energy Systems Integration Facility's main control room and a demonstration by the OMNETRIC Group of a distributed control hierarchy.

  3. COMPUTER INTEGRATED MANUFACTURING: OVERVIEW OF MODERN STANDARDS

    Directory of Open Access Journals (Sweden)

    A. Pupena

    2016-09-01

    Full Text Available The article deals with the modern international standards ISA-95 and ISA-88 on the development of computer-integrated manufacturing. The scope of the standards is shown in the context of a hierarchical model of the enterprise. The article is structured so as to describe the essence of the standards in the light of the basic descriptive models: product definition, resources, schedules, and actual performance of production activity. The product definition is described through the hierarchical representation of products at various levels of management. Much attention is given to describing one type of resource, equipment, which is the logical chain linking these standards. For example, the batch process control standard shows the relationship between the product definition and the equipment on which it is made. The article presents the planning hierarchy ERP-MES/MOM-SCADA (in terms of the ISA-95 standard), which traces the decomposition of overall enterprise production plans into specific jobs at the process control level. The representation of actual production performance at the MES/MOM level is considered with respect to KPIs. A generalized picture of operational activity at the MES/MOM level is shown via general diagrams of the relationships among activities and the information flows between functions. The article finishes with a substantiation of the necessity of distributing, adopting, and developing the ISA-88 and ISA-95 standards in Ukraine. The article is an overview and can be useful to specialists in computer-integrated control systems and the management of industrial enterprises, to system integrators, and to suppliers.

  4. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  5. EPA Facility Registry System (FRS): NCES

    Science.gov (United States)

    This web feature service contains location and facility identification information from EPA's Facility Registry System (FRS) for the subset of facilities that link to the National Center for Education Statistics (NCES). The primary federal database for collecting and analyzing data related to education in the United States and other nations, NCES is located in the U.S. Department of Education, within the Institute of Education Sciences. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using rigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to NCES school facilities once the NCES data has been integrated into the FRS database. Additional information on FRS is available at the EPA website http://www.epa.gov/enviro/html/fii/index.html.

  6. An integrated computer design environment for the development of micro-computer critical software

    International Nuclear Information System (INIS)

    De Agostino, E.; Massari, V.

    1986-01-01

    The paper deals with the development of micro-computer software for nuclear safety systems. More specifically, it describes experimental work in the field of software development methodologies to be used for the implementation of micro-computer-based safety systems. An investigation of the technological improvements provided by state-of-the-art integrated packages for micro-based systems development has been carried out. The work aimed to assess a suitable automated-tools environment for the whole software life cycle. The main safety functions of a nuclear power reactor, such as DNBR and KW/FT, have been implemented in a host-target approach. A prototype test-bed microsystem has been implemented to run the safety functions in order to derive a concrete evaluation of the feasibility of critical software according to the new technological trends of ''Software Factories''. (author)

  7. The Social Dimension of Computer-Integrated Manufacturing: An Extended Comment.

    Science.gov (United States)

    Badham, Richard J.

    1991-01-01

    The effect of computer-integrated manufacturing (CIM) on working conditions depends on the way in which the technologies are designed to fit operator requirements, work organization, and organizational objectives. Recent attempts to promote skill-based human-centered approaches to CIM design are aimed at introducing humane working conditions…

  8. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    Science.gov (United States)

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis.The system can be accessed either through a
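
    The delegation idea described above can be sketched in a few lines. The following toy Python pipeline is not Tavaxy's actual engine or API; the step functions and the 'cloud' tag are invented for illustration. Each step runs either locally or is handed to an executor standing in for a cloud back end:

      from concurrent.futures import ThreadPoolExecutor

      def run_pipeline(data, steps, cloud_pool):
          """Run (function, location) steps in order; 'cloud' steps are
          submitted to an executor that stands in for remote resources."""
          for fn, where in steps:
              if where == "cloud":
                  data = cloud_pool.submit(fn, data).result()
              else:
                  data = fn(data)
          return data

      reverse = lambda s: s[::-1]       # toy "sub-workflow" 1
      upper = lambda s: s.upper()       # toy "sub-workflow" 2

      with ThreadPoolExecutor() as pool:
          out = run_pipeline("acgt", [(reverse, "local"), (upper, "cloud")], pool)
      print(out)  # TGCA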

  9. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Directory of Open Access Journals (Sweden)

    Abouelhoda Mohamed

    2012-05-01

    Full Text Available Abstract Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-)workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and

  10. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Science.gov (United States)

    2012-01-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system

  11. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    Science.gov (United States)

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

    Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design and the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective in optimizing the team process, but its impact on behavioural outcomes may be moderated by other factors, such as task duration. In summary, the study proposed and evaluated a team-level integrated computer-based procedure that presents team members' operation status and execution histories to one another; the experimental results show that compared with the traditional procedure design, the integrated design reduces the effort of team communication and improves team transparency.

  12. ANS main control complex three-dimensional computer model development

    International Nuclear Information System (INIS)

    Cleaves, J.E.; Fletcher, W.M.

    1993-01-01

    A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use

  13. Integrated safeguards and security for a highly automated process

    International Nuclear Information System (INIS)

    Zack, N.R.; Hunteman, W.J.; Jaeger, C.D.

    1993-01-01

    Before the cancellation of the New Production Reactor Programs for the production of tritium, the reactors and associated processing were being designed to contain some of the most highly automated and remote systems conceived for a Department of Energy facility. Integrating safety, security, materials control and accountability (MC&A), and process systems at the proposed facilities would enhance the overall information and protection-in-depth available. Remote, automated fuel handling and assembly/disassembly techniques would deny access to the nuclear materials while upholding ALARA principles but would also require the full integration of all data/information systems. Such systems would greatly enhance MC&A as well as facilitate materials tracking. Physical protection systems would be connected with materials control features to cross check activities and help detect and resolve anomalies. This paper will discuss the results of a study of the safeguards and security benefits achieved from a highly automated and integrated remote nuclear facility and the impacts that such systems have on safeguards and computer and information security

  14. Integrated quality status and inventory tracking system for FFTF driver fuel pins

    International Nuclear Information System (INIS)

    Gottschalk, G.P.

    1979-11-01

    An integrated system for quality status and inventory tracking of Fast Flux Test Facility (FFTF) driver fuel pins has been developed. Automated fuel pin identification systems, a distributed computer network, and a data base are used to implement the tracking system

  15. The role of computer modelling in participatory integrated assessments

    International Nuclear Information System (INIS)

    Siebenhuener, Bernd; Barth, Volker

    2005-01-01

    In a number of recent research projects, computer models have been included in participatory procedures to assess global environmental change. The intention was to support knowledge production and to help the involved non-scientists to develop a deeper understanding of the interactions between natural and social systems. This paper analyses the experiences made in three projects with the use of computer models from a participatory and a risk management perspective. Our cross-cutting analysis of the objectives, the employed project designs and moderation schemes and the observed learning processes in participatory processes with model use shows that models play a mixed role in informing participants and stimulating discussions. However, no deeper reflection on values and belief systems could be achieved. In terms of the risk management phases, computer models serve best the purposes of problem definition and option assessment within participatory integrated assessment (PIA) processes

  16. Fuel cycle facility control system for the Integral Fast Reactor Program

    International Nuclear Information System (INIS)

    Benedict, R.W.; Tate, D.A.

    1993-01-01

    As part of the Integral Fast Reactor (IFR) Fuel Demonstration, a new distributed control system was designed, implemented, and installed. The fuel processes are a combination of chemical and machining processes operated remotely. To meet this special requirement, the new control system provides complete sequential logic control, motion and positioning control, and continuous PID loop control. Also, a centralized computer system provides near-real-time nuclear material tracking, product quality control data archiving, and a centralized reporting function. The control system was configured to use programmable logic controllers, small logic controllers, personal computers with touch screens, engineering workstations, and interconnecting networks. By following a structured software development method, the operator interface was standardized. The system has been installed and is presently being tested for operations

  17. Integration of Traditional Birth Attendants into Prevention of Mother-to-Child Transmission at Primary Health Facilities in Kaduna, North-West Nigeria.

    Science.gov (United States)

    Nsirim, Reward O; Iyongo, Joseph A; Adekugbe, Olayinka; Ugochuku, Maureen

    2015-03-31

    One of the fundamental challenges to implementing successful prevention of mother-to-child transmission (PMTCT) programs in Nigeria is the uptake of PMTCT services at health facilities. Several issues usually discourage many pregnant women from receiving antenatal care services at designated health facilities within their communities. The CRS Nigeria PMTCT Project funded by the Global Fund in its Round 9 Phase 1 in Nigeria, sought to increase demand for HIV counseling and testing services for pregnant women at 25 supported primary health centers (PHCs) in Kaduna State, North-West Nigeria by integrating traditional birth attendants (TBAs) across the communities where the PHCs were located into the project. Community dialogues were held with the TBAs, community leaders and women groups. These dialogues focused on modes of mother to child transmission of HIV and the need for TBAs to refer their clients to PHCs for testing. Subsequently, data on number of pregnant women who were counseled, tested and received results was collected on a monthly basis from the 25 facilities using the national HIV/AIDS tools. Prior to this integration, the average number of pregnant women that were counseled, tested and received results was 200 pregnant women across all the 25 health facilities monthly. After the integration of TBAs into the program, the number of pregnant women that were counseled, tested and received results kept increasing month after month up to an average of 1500 pregnant women per month across the 25 health facilities. TBAs can thus play a key role in improving service uptake and utilization for pregnant women at primary health centers in the community - especially in the context of HIV/AIDS. They thus need to be integrated, rather than alienated, from primary healthcare service delivery.

  18. Integration of traditional birth attendants into prevention of mother-to-child transmission at primary health facilities in Kaduna, North-West Nigeria

    Directory of Open Access Journals (Sweden)

    Reward O. Nsirim

    2016-05-01

    Full Text Available One of the fundamental challenges to implementing successful prevention of mother-to-child transmission (PMTCT) programs in Nigeria is the uptake of PMTCT services at health facilities. Several issues usually discourage many pregnant women from receiving antenatal care services at designated health facilities within their communities. The CRS Nigeria PMTCT Project funded by the Global Fund in its Round 9 Phase 1 in Nigeria, sought to increase demand for HIV counseling and testing services for pregnant women at 25 supported primary health centers (PHCs) in Kaduna State, North-West Nigeria by integrating traditional birth attendants (TBAs) across the communities where the PHCs were located into the project. Community dialogues were held with the TBAs, community leaders and women groups. These dialogues focused on modes of mother to child transmission of HIV and the need for TBAs to refer their clients to PHCs for testing. Subsequently, data on number of pregnant women who were counseled, tested and received results was collected on a monthly basis from the 25 facilities using the national HIV/AIDS tools. Prior to this integration, the average number of pregnant women that were counseled, tested and received results was 200 pregnant women across all the 25 health facilities monthly. After the integration of TBAs into the program, the number of pregnant women that were counseled, tested and received results kept increasing month after month up to an average of 1500 pregnant women per month across the 25 health facilities. TBAs can thus play a key role in improving service uptake and utilization for pregnant women at primary health centers in the community – especially in the context of HIV/AIDS. They thus need to be integrated, rather than alienated, from primary healthcare service delivery.

  19. Integration of Digital Dental Casts in Cone-Beam Computed Tomography Scans

    OpenAIRE

    Rangel, Frits A.; Maal, Thomas J. J.; Bergé, Stefaan J.; Kuijpers-Jagtman, Anne Marie

    2012-01-01

    Cone-beam computed tomography (CBCT) is widely used in maxillofacial surgery. The CBCT image of the dental arches, however, is of insufficient quality to use in digital planning of orthognathic surgery. Several authors have described methods to integrate digital dental casts into CBCT scans, but all reported methods have drawbacks. The aim of this feasibility study is to present a new simplified method to integrate digital dental casts into CBCT scans. In a patient scheduled for orthognathic ...

  20. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6 to 10^12 data points). Hence, traditional data analysis tools running on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all the data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
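
    The composition of 1-D solvers into a 4-D integrator can be illustrated with a minimal sketch. The example below uses SciPy rather than the GNU Scientific Library, and a toy Gaussian integrand in place of the SNS intensity model (both substitutions are assumptions of this sketch); it compares nested adaptive quadrature with a plain Monte Carlo estimate of the same integral:

      import numpy as np
      from scipy import integrate

      # Toy 4-D integrand standing in for the SNS intensity model.
      def f(x, y, z, w):
          return np.exp(-(x**2 + y**2 + z**2 + w**2))

      # Nested 1-D adaptive quadrature, analogous to composing 1-D solvers
      # into a 4-D solver.
      val, err = integrate.nquad(f, [[0, 1]] * 4)

      # A plain Monte Carlo estimate of the same integral for comparison.
      rng = np.random.default_rng(0)
      pts = rng.random((100_000, 4))
      mc = np.exp(-(pts**2).sum(axis=1)).mean()  # volume of [0,1]^4 is 1

      print(f"nested quadrature: {val:.6f} +/- {err:.1e}")
      print(f"Monte Carlo:       {mc:.6f}")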

  1. Development of Onboard Computer Complex for Russian Segment of ISS

    Science.gov (United States)

    Branets, V.; Brand, G.; Vlasov, R.; Graf, I.; Clubb, J.; Mikrin, E.; Samitov, R.

    1998-01-01

    The report presents a description of the Onboard Computer Complex (CC) that was developed during the period 1994-1998 for the Russian Segment of the ISS. The system was developed in cooperation with NASA and ESA. ESA developed a new computation system under the RSC Energia Technical Assignment, called DMS-R. The CC also includes elements developed by Russian experts and organizations. A general architecture of the computer system and the characteristics of the primary elements of this system are described. The system was integrated at RSC Energia with the participation of American and European specialists. The report contains information on the software simulators and the verification and debugging facilities which were developed for both stand-alone and integrated tests and verification. This CC serves as the basis for the Russian Segment Onboard Control Complex on ISS.

  2. PWR station blackout transient simulation in the INER integral system test facility

    International Nuclear Information System (INIS)

    Liu, T.J.; Lee, C.H.; Hong, W.T.; Chang, Y.H.

    2004-01-01

    Station blackout transient (or TMLB' scenario) in a pressurized water reactor (PWR) was simulated using the INER Integral System Test Facility (IIST), which is a 1/400 volumetrically-scaled reduced-height and reduced-pressure (RHRP) simulator of a Westinghouse three-loop PWR. Long-term thermal-hydraulic responses, including the secondary boil-off and the subsequent primary saturation, pressurization and core uncovery, were simulated based on the assumptions of no offsite and onsite power, no feedwater, and no operator actions. The results indicate that two-phase discharge is the major depletion mode, since it accounts for 81.3% of the total coolant inventory loss. The primary coolant inventory experienced significant redistribution during the station blackout transient. The deciding parameter for avoiding core overheating is not the total amount of coolant inventory remaining in the primary core cooling system but only the portion of coolant left in the pressure vessel. The sequence of significant events during the transient in the IIST was also compared with that of the ROSA-IV large-scale test facility (LSTF), which is a 1/48 volumetrically-scaled full-height and full-pressure (FHFP) simulator of a PWR. The comparison indicates that the sequence and timing of these events during the TMLB' transient studied in the RHRP IIST facility are generally consistent with those of the FHFP LSTF. (author)

  3. Integrated leak rate test of the FFTF [Fast Flux Test Facility] containment vessel

    International Nuclear Information System (INIS)

    Grygiel, M.L.; Davis, R.H.; Polzin, D.L.; Yule, W.D.

    1987-04-01

    The third integrated leak rate test (ILRT) performed at the Fast Flux Test Facility (FFTF) demonstrated that effective leak rate measurements could be obtained at a pressure of 2 psig. In addition, innovative data reduction methods demonstrated the ability to accurately account for diurnal variations in containment pressure and temperature. Further development of methods used in this test indicate significant savings in the time and effort required to perform an ILRT on Liquid Metal Reactor Systems with consequent reduction in test costs

  4. Current state of the construction of an integrated test facility for hydrogen risk

    Energy Technology Data Exchange (ETDEWEB)

    Na, Young Su; Hong, Seong-Ho; Hong, Seong-Wan [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Experimental research on hydrogen as a combustible gas is important for an assessment of the integrity of a containment building under a severe accident. The Korea Atomic Energy Research Institute (KAERI) is preparing a large-scale test facility, called SPARC (SPray-Aerosol-Recombiner-Combustion), to estimate hydrogen behavior such as distribution, combustion and mitigation. This paper introduces the experimental research activity on hydrogen risk, which was presented at the International Congress on Advances in Nuclear Power Plants (ICAPP) this year. In the SPARC, hydrogen behavior such as mixing with steam and air, distribution, and combustion in the containment atmosphere will be observed. The SPARC consists of a pressure vessel 9.5 m in height and 3.4 m in diameter, and an operating system to control the thermal-hydraulic conditions up to 1.5 MPa at 453 K in the vessel. The temperature, pressure, and gas concentration at various locations will be measured to estimate the atmospheric behavior in the vessel. To install the SPARC, an experimental building, called LIFE (Laboratory for Innovative mitigation of threats from Fission products and Explosion), was constructed at the KAERI site. LIFE has an area of 480 m² and a height of 18.6 m, and it was designed considering experimental safety and the specifications of a large-sized test facility.

  5. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally, dedicated HEP computing centers have been built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers that provide regular cloud services to users, as such centers can be operated more efficiently. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost-efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).
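
    A minimal sketch of the demand-driven idea, assuming a hypothetical interface (ROCED's real interfaces, policies, and tuning differ; the jobs-per-node constant and callbacks are invented here):

      TARGET_JOBS_PER_NODE = 4   # hypothetical tuning parameter

      def scale(idle_jobs: int, running_vms: int,
                boot_vm, terminate_vm, max_vms: int = 50) -> int:
          """One iteration of a demand-driven scaling decision: boot
          virtual worker nodes while demand exceeds capacity, release
          them when idle. Returns the new target VM count."""
          wanted = min(max_vms, -(-idle_jobs // TARGET_JOBS_PER_NODE))  # ceil div
          if wanted > running_vms:
              for _ in range(wanted - running_vms):
                  boot_vm()
          elif wanted < running_vms:
              for _ in range(running_vms - wanted):
                  terminate_vm()
          return wanted

      # Example: 10 idle jobs, 1 running VM -> boot 2 more (ceil(10/4) = 3).
      n = scale(10, 1, boot_vm=lambda: print("boot"),
                terminate_vm=lambda: print("terminate"))
      print(n)  # 3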

  6. Computer software design description for the Treated Effluent Disposal Facility (TEDF), Project L-045H, Operator Training Station (OTS)

    International Nuclear Information System (INIS)

    Carter, R.L. Jr.

    1994-01-01

    The Treated Effluent Disposal Facility (TEDF) Operator Training Station (OTS) is a computer-based training tool designed to aid plant operations and engineering staff in familiarizing themselves with the TEDF Central Control System (CCS)

  7. Integration of the PHIN RF Gun into the CLIC Test Facility

    CERN Document Server

    Döbert, Steffen

    2006-01-01

    CERN is a collaborator within the European PHIN project, a joint research activity for photo-injectors within the CARE program. A deliverable of this project is an rf gun equipped with high quantum efficiency Cs2Te cathodes and a laser to produce the nominal beam for the CLIC Test Facility (CTF3). The nominal beam for CTF3 has an average current of 3.5 A, a 1.5 GHz bunch repetition frequency and a pulse length of 1.5 μs (2332 bunches), with quite tight stability requirements. In addition, a phase shift of 180 deg is needed after each 140 ns train for the special CLIC combination scheme. This rf gun will be tested at CERN in fall 2006 and shall be integrated as a new injector into the CTF3 linac, replacing the existing injector consisting of a thermionic gun and a subharmonic bunching system. The paper studies the optimal integration into the machine, trying to optimize the transverse and longitudinal phase space of the beam while respecting the numerous constraints of the existing accelerator. The presented scheme...

  8. Reverse Engineering in Data Integration Software

    Directory of Open Access Journals (Sweden)

    Vlad DIACONITA

    2013-05-01

    Full Text Available Integrated applications are complex solutions that help build better consolidated and standardized systems from existing (usually transactional) systems. Their complexity is determined by the economic processes they implement, the amount of data employed (millions of records grouped in hundreds of tables, databases of hundreds of GB) and the number of users [11]. Oracle, once mainly known for its database and e-business solutions, has been constantly expanding its product portfolio, providing solutions for SOA, BPA, warehousing, Big Data and cloud computing. In this article I will review the facilities and the power of using a dedicated integration tool in an environment with multiple data sources and a target data mart.

  9. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy require a more accurate computation of the dose delivered by the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a complete MC simulation to compute the dose distribution on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes, we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (percentage depth dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.
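
    As a side note on the comparison metric: percentage depth dose is simply the depth-dose curve normalized to its maximum. A minimal Python sketch, with an invented toy curve standing in for the simulated beam data (not the authors' actual data):

      import numpy as np

      def percentage_depth_dose(dose: np.ndarray) -> np.ndarray:
          """Normalize a depth-dose curve to its maximum (PDD, in percent)."""
          return 100.0 * dose / dose.max()

      # Toy depth-dose curve (build-up peak plus exponential falloff).
      depth = np.linspace(0, 130, 27)                              # mm
      dose = np.exp(-(depth - 15) ** 2 / 800) + 0.6 * np.exp(-depth / 120)
      pdd = percentage_depth_dose(dose)

      # Agreement check against noisy "measurements" within the ±2% criterion.
      measured = pdd + np.random.default_rng(1).normal(0, 0.5, pdd.shape)
      print(np.all(np.abs(measured - pdd) <= 2.0))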

  10. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  11. Facility Registry Service (FRS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Facility Registry Service (FRS) provides an integrated source of comprehensive (air, water, and waste) environmental information about facilities across EPA,...

  12. A Performance Measurement and Implementation Methodology in a Department of Defense CIM (Computer Integrated Manufacturing) Environment

    Science.gov (United States)

    1988-01-24

    vanes. The new facility is currently being called the Engine Blade/Vane Facility (EB/VF). There are three primary goals in automating this proc...e...earlier, the search led primarily into the areas of CIM justification, automation strategies, performance measurement, and integration issues. Of...of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets, says Rohan [33], has been for U.S

  13. Integrated Computational Materials Engineering for Magnesium in Automotive Body Applications

    Science.gov (United States)

    Allison, John E.; Liu, Baicheng; Boyle, Kevin P.; Hector, Lou; McCune, Robert

    This paper provides an overview and progress report for an international collaborative project which aims to develop an ICME infrastructure for magnesium for use in automotive body applications. Quantitative processing-microstructure-property relationships are being developed for extruded Mg alloys, sheet-formed Mg alloys and high pressure die cast Mg alloys. These relationships are captured in computational models which are then linked with manufacturing process simulation and used to provide constitutive models for component performance analysis. The long term goal is to capture this information in efficient computational models and in a web-centered knowledge base. The work is being conducted at leading universities, national labs and industrial research facilities in the US, China and Canada. This project is sponsored by the U.S. Department of Energy, the U.S. Automotive Materials Partnership (USAMP), the Chinese Ministry of Science and Technology (MOST) and Natural Resources Canada (NRCan).

  14. Computer facilities for ISABELLE data handling

    International Nuclear Information System (INIS)

    Kramer, M.A.; Love, W.A.; Miller, R.J.; Zeller, M.

    1977-01-01

    The analysis of data produced by ISABELLE experiments will need a large system of computers. An official group of prospective users and operators of that system should begin planning now. Included in the array will be a substantial computer system at each ISABELLE intersection in use. These systems must include enough computer power to keep experimenters aware of the health of the experiment. This will require at least one very fast sophisticated processor in the system, the size depending on the experiment. Other features of the intersection systems must be a good, high speed graphic display, ability to record data on magnetic tape at 500 to 1000 KB, and a high speed link to a central computer. The operating system software must support multiple interactive users. A substantially larger capacity computer system, shared by the six intersection region experiments, must be available with good turnaround for experimenters while ISABELLE is running. A computer support group will be required to maintain the computer system and to provide and maintain software common to all experiments. Special superfast computing hardware or special function processors constructed with microprocessor circuitry may be necessary both in the data gathering and data processing work. Thus both the local and central processors should be chosen with the possibility of interfacing such devices in mind

  15. Magnetostatic fields computed using an integral equation derived from Green's theorems

    International Nuclear Information System (INIS)

    Simkin, J.; Trowbridge, C.W.

    1976-04-01

    A method of computing magnetostatic fields is described that is based on a numerical solution of the integral equation obtained from Green's Theorems. The magnetic scalar potential and its normal derivative on the surfaces of volumes are found by solving a set of linear equations. These are obtained from Green's Second Theorem and the continuity conditions at interfaces between volumes. Results from a two-dimensional computer program are presented and these show the method to be accurate and efficient. (author)
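
    The workflow described above — surface unknowns found from one dense linear solve — can be imitated with a deliberately simpler scheme. The Python sketch below uses the method of fundamental solutions for the 2-D Laplace equation, a swapped-in technique rather than the paper's Green's-theorem formulation, to show the collocate-assemble-solve pattern; all geometry and boundary data are invented for the example:

      import numpy as np

      n = 80
      t = np.linspace(0, 2 * np.pi, n, endpoint=False)
      bnd = np.c_[np.cos(t), np.sin(t)]          # unit-circle boundary (collocation points)
      src = 1.5 * bnd                            # fictitious sources outside the domain

      def G(x, y):
          """Fundamental solution of the 2-D Laplacian, -log(r)/(2*pi)."""
          r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
          return -np.log(r) / (2 * np.pi)

      f = bnd[:, 0] * bnd[:, 1]                  # boundary data: the harmonic function xy
      A = G(bnd, src)                            # dense collocation matrix
      coeff = np.linalg.lstsq(A, f, rcond=None)[0]   # one dense linear solve

      # Check at an interior point: u(0.3, 0.2) should be ~0.06 (since xy is harmonic).
      p = np.array([[0.3, 0.2]])
      print(G(p, src) @ coeff)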

  16. Fast computation of complete elliptic integrals and Jacobian elliptic functions

    Science.gov (United States)

    Fukushima, Toshio

    2009-12-01

    As a preparation step to compute Jacobian elliptic functions efficiently, we created a fast method to calculate the complete elliptic integrals of the first and second kinds, K(m) and E(m), for the standard domain of the elliptic parameter, 0 < m < 1. We then developed a procedure to compute simultaneously three Jacobian elliptic functions, sn(u|m), cn(u|m), and dn(u|m), by repeated usage of the double-argument formulae, starting from the Maclaurin series expansions with respect to the elliptic argument, u, after its domain is reduced to the standard range, 0 ≤ u < K(m). The new procedure is 25-70% faster than the methods based on the Gauss transformation, such as Bulirsch's algorithm, sncndn, quoted in the Numerical Recipes, even if the acceleration of the computation of K(m) is not taken into account.
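
    For orientation, K(m) and E(m) can also be computed with the classical arithmetic-geometric-mean (AGM) iteration; the minimal Python sketch below implements that textbook method, not the faster method the record proposes:

      import math

      def ellipKE(m: float, tol: float = 1e-15):
          """Complete elliptic integrals K(m) and E(m) via the AGM:
          K = pi / (2 * AGM(1, sqrt(1 - m))) and
          E = K * (1 - sum over n of 2^(n-1) * c_n^2), with c_0 = sqrt(m)."""
          if not 0.0 <= m < 1.0:
              raise ValueError("parameter m must satisfy 0 <= m < 1")
          a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
          s, p = 0.5 * c * c, 0.5          # running sum of 2^(n-1) * c_n^2
          while abs(c) > tol:
              a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
              p *= 2.0
              s += p * c * c
          K = math.pi / (2.0 * a)
          return K, K * (1.0 - s)

      print(ellipKE(0.5))   # ~ (1.854075, 1.350644)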

  17. Experimental assessment of computer codes used for safety analysis of integral reactors

    Energy Technology Data Exchange (ETDEWEB)

    Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)

    1995-09-01

    Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation, and passive RHRS through in-reactor HXs. These features defined the main trends in the experimental investigations and in the verification efforts for the computer codes applied. The paper briefly reviews the performed experimental investigations of the thermohydraulics of AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. An assessment of the applicability of RELAP5/mod3 for accident analysis in integral reactors is presented.

  18. An Integration Testing Facility for the CERN Accelerator Controls System

    CERN Document Server

    Stapley, N; Bau, J C; Deghaye, S; Dehavay, C; Sliwinski, W; Sobczak, M

    2009-01-01

    A major effort has been invested in the design, development, and deployment of the LHC Control System. This large control system is made up of a set of core components and dependencies, which although tested individually, are often not able to be tested together on a system capable of representing the complete control system environment, including hardware. Furthermore this control system is being adapted and applied to CERN's whole accelerator complex, and in particular for the forthcoming renovation of the PS accelerators. To ensure quality is maintained as the system evolves, and to improve defect prevention, the Controls Group launched a project to provide a dedicated facility for continuous, automated, integration testing of its core components to incorporate into its production process. We describe the project, initial lessons from its application, status, and future directions.

  19. CMS Distributed Computing Integration in the LHC sustained operations era

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Bockelman, B; Fisk, I

    2011-01-01

    After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless it is the same need of stability and smooth operations that requires the introduction of features that were considered not strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks on the infrastructure; increased automation to reduce the manpower needed for operations; effective process to deploy in production new releases of the software tools. We present the work of the CMS Distributed Computing Integration Activity that is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements to Grid and Cloud software developers for the future.

  20. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  1. Armament Technology Facility (ATF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Armament Technology Facility is a 52,000 square foot, secure and environmentally-safe, integrated small arms and cannon caliber design and evaluation facility....

  2. Integrated economics

    International Nuclear Information System (INIS)

    Bratton, T.J.

    1992-01-01

    This article offers ideas for evaluating integrated solid waste management systems through the use of a conceptual cost overview. The topics of the article include the integrated solid waste management system; making assumptions about community characteristics, waste generation rates, waste collection responsibility, integrated system components, sizing and economic life of system facilities, system implementation schedule, facility ownership, and system administration; integrated system costs; integrated system revenues; system financing; cost projections; and making decisions

  3. Monitoring land and water uses in the Columbia Plateau using remote-sensing computer analysis and integration techniques

    International Nuclear Information System (INIS)

    Leonhart, L.S.; Wukelic, G.E.; Foote, H.P.; Blair, S.C.

    1983-09-01

    This study successfully utilized advanced, remote-sensing computer-analysis techniques to quantify and map land- and water-use trends potentially relevant to siting, developing, and operating a high-level national, nuclear waste repository on the US Department of Energy's Hanford Site in eastern Washington State. Specifically, using a variety of digital data bases (primarily multidate LANDSAT data) and digital analysis programs, the study produced unique numerical data and integrated data reference maps relevant to regional (Columbia Plateau) and localized (Pasco Basin) hydrologic considerations associated with developing such a facility. Because all study data developed are in digital form, they can be called upon to contribute to future reference repository location monitoring and reporting efforts, as well as to be utilized in other US Department of Energy programmatic areas having technical and/or environmental interest in the Columbia Plateau region. The results obtained indicate that multidate digital LANDSAT data provide an inexpensive, up-to-date, and accurate data base and reference map of natural and cultural features existing in any region. These data can be (1) computer enhanced to highlight selected surface features of interest; (2) processed/analyzed to provide regional land cover/use information and trend data; and (3) combined with other line and point data files to accommodate interactive, correlative analyses and integrated colorgraphic displays to aid interpretation and modeling efforts. Once the digital base is established, selected site information can be assessed immediately, various forms of data can be accessed concurrently or separately, and data sets may be displayed or mapped at any scale. Available editing software provides the opportunity to generate credible scenarios for a site while preserving the actual data base. 6 references

  4. Final deactivation project report on the Integrated Process Demonstration Facility, Building 7602 Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    1997-09-01

    The purpose of this report is to document the condition of the Integrated Process Demonstration Facility (Building 7602) at Oak Ridge National Laboratory (ORNL) after completion of deactivation activities by the High Ranking Facilities Deactivation Project (HRFDP). This report identifies the activities conducted to place the facility in a safe and environmentally sound condition prior to transfer to the U.S. Department of Energy (DOE) Environmental Restoration EM-40 Program. This report provides a history and description of the facility prior to commencing deactivation activities and documents the condition of the building after completion of all deactivation activities. Turnover items, such as the Post-Deactivation Surveillance and Maintenance (S&M) Plan, remaining hazardous and radioactive materials inventory, radiological controls, Safeguards and Security, and supporting documentation provided in the Office of Nuclear Material and Facility Stabilization Program (EM-60) Turnover package are discussed

  5. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
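
    The recursion the paper decomposes, and the four-lookup rectangle sum that motivates integral images, can be stated in a few lines of Python. This is a serial reference implementation; the paper's contribution, the row-parallel hardware decomposition and memory-reduction schemes, is not reproduced here:

      import numpy as np

      def integral_image(img: np.ndarray) -> np.ndarray:
          """Integral image via the standard recursion
          s(x, y) = i(x, y) + s(x-1, y) + s(x, y-1) - s(x-1, y-1),
          computed here as two cumulative sums."""
          return img.cumsum(axis=0).cumsum(axis=1)

      def box_sum(ii: np.ndarray, r0, c0, r1, c1):
          """Sum of img[r0:r1+1, c0:c1+1] from at most four lookups: the
          constant-time rectangle evaluation SURF-style detectors rely on."""
          total = ii[r1, c1]
          if r0 > 0: total -= ii[r0 - 1, c1]
          if c0 > 0: total -= ii[r1, c0 - 1]
          if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
          return total

      img = np.arange(16, dtype=np.int64).reshape(4, 4)
      ii = integral_image(img)
      assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()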

  6. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  7. Validation of RETRAN-03 by simulating a peach bottom turbine trip and boiloff at the full integral simulation test facility

    International Nuclear Information System (INIS)

    Westacott, J.L.; Peterson, C.E.

    1992-01-01

    This paper reports that the RETRAN-03 computer code is validated by simulating two tests that were performed at the Full Integral Simulation Test (FIST) facility. The RETRAN-03 results of a turbine trip (test 4PTT1) and failure to maintain water level at decay power (test T1QUV) are compared with the FIST test data. The RETRAN-03 analysis of test 4PTT1 is compared with a previous TRAC-BWR analysis of the test. Sensitivity to various model nodalizations and RETRAN-03 slip options are studied by comparing results of test T1QUV. The predicted thermal-hydraulic responses of both tests agree well with the test data. The pressure response of test 4PTT1 and the boiloff rate for test T1QUV are accurately predicted. Core uncovery time is found to be sensitive to the upper downcomer and upper plenum nodalization. The RETRAN-03 algebraic and dynamic slip options produce similar results for test T1QUV

  8. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  9. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    Science.gov (United States)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. Recent HEP experiments have used grid middleware to organize their services and resources; however, the middleware relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources. A central challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure of several HEP experiments, makes this paradox even harder to resolve, because its pilot is more closely coupled to operations requiring X.509 authentication than the pilots of its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach that detaches the payload execution from the Belle II DIRAC pilot (a customized pilot that pulls and processes jobs from the Belle II distributed computing platform), so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service, running on a trusted server, which handles all operations requiring X.509 authentication. So far, we have developed and deployed a prototype of BelleII@home and tested its full workflow, which demonstrates the feasibility of the approach. The approach can also be applied to HPC systems whose worker nodes do not have the outbound connectivity needed to interact with the DIRAC system.
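
    A minimal sketch of the gateway pattern described in this record may help; all endpoint and job-format names below are hypothetical and are not taken from the BelleII@home code. The point is only the trust split: the gateway alone runs on a trusted host and holds the X.509-authenticated session with DIRAC, while volunteer clients pull work and return results over a plain HTTP(S) interface.

      # Hypothetical gateway sketch (Python standard library only). The gateway
      # holds the grid credentials and talks to DIRAC on the volunteers' behalf;
      # volunteers only ever see this unauthenticated HTTP interface.
      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from queue import Empty, Queue

      job_queue = Queue()  # stands in for jobs pulled from DIRAC with X.509 auth
      job_queue.put({"job_id": 1, "cmd": "simulate", "events": 1000})

      class GatewayHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # A volunteer asks for work; no X.509 is involved on this leg.
              if self.path == "/job":
                  try:
                      job = job_queue.get_nowait()
                  except Empty:
                      self.send_response(204)  # no work available right now
                      self.end_headers()
                      return
                  body = json.dumps(job).encode()
                  self.send_response(200)
                  self.send_header("Content-Type", "application/json")
                  self.end_headers()
                  self.wfile.write(body)
              else:
                  self.send_response(404)
                  self.end_headers()

          def do_POST(self):
              # A volunteer uploads a result; a real gateway would forward it to
              # DIRAC or grid storage using its own X.509 credentials.
              if self.path == "/result":
                  length = int(self.headers["Content-Length"])
                  result = json.loads(self.rfile.read(length))
                  print("received result for job", result.get("job_id"))
                  self.send_response(200)
                  self.end_headers()
              else:
                  self.send_response(404)
                  self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), GatewayHandler).serve_forever()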

  10. On a new method to compute photon skyshine doses around radiotherapy facilities

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, R.; Facure, A. [Comissao Nacional de Energia Nuclear, Rio de Janeiro (Brazil)]; Xavier, A. [PEN/COPPE-UFRJ, Rio de Janeiro (Brazil)]

    2006-07-01

    Nowadays, in a great number of situations, buildings are raised around radiotherapy facilities. In cases where the constructions are not in the primary x-ray beam, 'skyshine' radiation is normally accounted for. The skyshine method is commonly used to calculate the dose contribution from scattered radiation in such circumstances, when the roof shielding is designed under the assumption that there will be no occupancy upstairs. In these cases there is no need for the usual 1.5-2.0 m thick ceiling, and construction costs can be considerably reduced. The existing expressions for computing these doses fail to explain mathematically the existence of a shadow area just outside the outer room walls, and its growth as one moves away from these walls. In this paper we propose a new method to compute photon skyshine doses, using geometrical considerations to find the maximum dose point. An empirical equation is derived, and its validity is tested against MCNP5 Monte Carlo simulations of radiotherapy room configurations. (authors)
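
    The abstract does not give the empirical equation itself, so the following is purely an illustrative sketch of the geometrical reasoning it describes: a detector just outside the room cannot "see" the air volume above the roof where photons scatter, so a shadow zone exists near the wall, and simple similar-triangle geometry gives its width. All parameter names are hypothetical.

      # Illustrative shadow-zone geometry only -- not the authors' equation.
      def shadow_width(wall_height_m, detector_height_m,
                       scatter_height_m, scatter_offset_m):
          """Horizontal distance from the wall within which a detector at
          detector_height_m has no line of sight (over the wall top) to a
          scatter point located scatter_offset_m inside the room, at
          scatter_height_m above grade."""
          rise = scatter_height_m - wall_height_m   # scatter point above wall top
          if rise <= 0:
              raise ValueError("scatter point must lie above the wall top")
          drop = wall_height_m - detector_height_m  # wall top down to detector
          # Similar triangles along the grazing ray over the wall top.
          return drop * scatter_offset_m / rise

      # Example: 3 m wall, detector at 1 m, scatter point 5 m above grade and
      # 4 m inside the room -> the shadow extends 4 m out from the wall.
      print(shadow_width(3.0, 1.0, 5.0, 4.0))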

  11. Operating procedures: Fusion Experiments Analysis Facility

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, R.A.; Carey, R.W.

    1984-03-20

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility.

  12. Operating procedures: Fusion Experiments Analysis Facility

    International Nuclear Information System (INIS)

    Lerche, R.A.; Carey, R.W.

    1984-01-01

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility.

  13. Investigation of development and management of treatment planning systems for BNCT at foreign facilities

    International Nuclear Information System (INIS)

    2001-03-01

    A new computational dosimetry system for BNCT, JCDS, has been developed by JAERI in order to carry out BNCT with an epithermal neutron beam. The development and management of the computational dosimetry systems developed and used at BNCT facilities in foreign countries were investigated in order to accurately grasp the functions necessary for treatment planning and the subjects to be addressed in the future. At present, 'SERA', which was developed by Idaho National Engineering and Environmental Laboratory (INEEL), is used in many BNCT facilities. The following are necessary for the development and management of a treatment planning system: (1) confirmation of the reliability of system performance by verification, i.e., comparison of calculated values with experimentally measured values; (2) confirmation systems, such as periodic maintenance, for retention of system quality; (3) an improvement process that always considers relative merits and demerits with respect to other computational dosimetry systems; (4) the development of a system integrated with patient setting. (author)

  14. Research on integrated simulation of fluid-structure system by computation science techniques

    International Nuclear Information System (INIS)

    Yamaguchi, Akira

    1996-01-01

    In the Power Reactor and Nuclear Fuel Development Corporation, research on the integrated simulation of fluid-structure systems by computational science techniques has been carried out. Through this work, the verification of plant systems, which has depended on large scale experiments, is to be replaced by computational science techniques, with the aim of reducing development costs and optimizing FBR systems. For this purpose, it is necessary to establish the technology for integrally and accurately analyzing complicated phenomena (simulation technology), the technology for applying it to large scale problems (speed-up technology), and the technology for assuring the reliability of analysis results when simulation technology is used for the licensing and approval of FBRs (verification technology). The simulation of fluid-structure interaction, heat flow simulation in spaces with complicated geometry, and the related technologies are explained. As applications of computational science techniques, the elucidation of phenomena by numerical experiments and the use of numerical simulation as a substitute for tests are discussed. (K.I.)
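
    As a generic illustration of what an integrated fluid-structure simulation couples (a minimal textbook-style sketch with assumed values, not the PNC code described above), the following partitioned scheme alternates a fluid evaluation and a structural update each time step: a closed gas volume pressurizes a spring-mounted piston.

      # Minimal partitioned fluid-structure coupling sketch (hypothetical values).
      GAMMA, P0, PATM = 1.4, 2.0e5, 1.0e5  # gas exponent; initial/ambient pressure [Pa]
      AREA, L0 = 0.01, 0.5                 # piston area [m^2], initial gas column [m]
      M, K, C = 2.0, 5.0e3, 20.0           # piston mass [kg], spring [N/m], damper [N s/m]
      DT, STEPS = 1.0e-4, 5000

      x, v = 0.0, 0.0  # piston displacement [m] and velocity [m/s]
      for n in range(STEPS):
          # "Fluid" solve: adiabatic gas law evaluated for the current geometry.
          p = P0 * (L0 / (L0 + x)) ** GAMMA
          # "Structure" solve: symplectic-Euler update under the fluid load.
          a = (AREA * (p - PATM) - K * x - C * v) / M
          v += a * DT
          x += v * DT
          if n % 1000 == 0:
              print(f"t={n * DT:.2f} s  x={1e3 * x:.2f} mm  p={p / 1e3:.1f} kPa")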

  15. Applications integration in a hybrid cloud computing environment: modelling and platform

    Science.gov (United States)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services, and even infrastructure services, provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds as well as their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.
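
    One way to picture such cross-environment integration (a hedged sketch under assumed names; the article's actual platform and modelling notation are not given in this record) is a common adapter interface behind which a process step may run in a public cloud or in an intra-enterprise IS:

      # Hypothetical adapter sketch for cross-environment business processes.
      from abc import ABC, abstractmethod

      class ProcessStep(ABC):
          @abstractmethod
          def execute(self, payload: dict) -> dict: ...

      class CloudServiceStep(ProcessStep):
          def __init__(self, endpoint: str):
              self.endpoint = endpoint
          def execute(self, payload: dict) -> dict:
              # A real platform would make an authenticated HTTP call here.
              print(f"calling cloud service {self.endpoint} with {payload}")
              return {**payload, "handled_by": self.endpoint}

      class IntraISStep(ProcessStep):
          def __init__(self, system: str):
              self.system = system
          def execute(self, payload: dict) -> dict:
              # A real platform might use a database or message-queue call here.
              print(f"invoking internal system {self.system} with {payload}")
              return {**payload, "handled_by": self.system}

      def run_process(steps, payload):
          """Execute a cross-environment process as an ordered pipeline."""
          for step in steps:
              payload = step.execute(payload)
          return payload

      print(run_process(
          [CloudServiceStep("https://storage.example.com/api"), IntraISStep("ERP")],
          {"order_id": 42}))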

  16. Recommended practice for the design of a computer driven Alarm Display Facility for central control rooms of nuclear power generating stations

    International Nuclear Information System (INIS)

    Ben-Yaacov, G.

    1984-01-01

    This paper's objective is to explain the process by which design can prevent human errors in nuclear plant operation. Human factors engineering principles, data, and methods used in the design of computer driven alarm display facilities are discussed. A 'generic', advanced Alarm Display Facility is described. It takes into account operator capabilities and limitations in decision making, response dynamics, and human memory. Considerations of human factors criteria in the design and layout of alarm displays are highlighted. Alarm data sources are described, and their use within the Alarm Display Facility is illustrated.

  17. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    Science.gov (United States)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
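
    A minimal sketch of the 'modality server' idea follows; the one-line protocol and the stub recognizer are hypothetical, not the system described above. Each modality wraps its underlying engine behind a simple socket protocol, so clients depend only on the high-level interface rather than on the vendor package.

      # Hypothetical speech modality server (Python standard library only).
      import socket

      def recognize(audio_ref: str) -> str:
          # Stand-in for a call into a commercial or research speech engine.
          return f"transcript-of:{audio_ref}"

      def serve(host="localhost", port=9999):
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
              srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
              srv.bind((host, port))
              srv.listen()
              print(f"speech modality server listening on {host}:{port}")
              while True:
                  conn, _ = srv.accept()
                  with conn:
                      request = conn.recv(1024).decode().strip()
                      # High-level one-line protocol: "RECOGNIZE <audio-ref>"
                      if request.startswith("RECOGNIZE "):
                          reply = recognize(request.split(" ", 1)[1])
                      else:
                          reply = "ERROR unknown command"
                      conn.sendall((reply + "\n").encode())

      if __name__ == "__main__":
          serve()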

  18. Brain systems for probabilistic and dynamic prediction: computational specificity and integration.

    Directory of Open Access Journals (Sweden)

    Jill X O'Reilly

    2013-09-01

    A computational approach to functional specialization suggests that brain systems can be characterized in terms of the types of computations they perform, rather than their sensory or behavioral domains. We contrasted the neural systems associated with two computationally distinct forms of predictive model: a reinforcement-learning model of the environment obtained through experience with discrete events, and continuous dynamic forward modeling. By manipulating the precision with which each type of prediction could be used, we caused participants to shift computational strategies within a single spatial prediction task. Hence (using fMRI) we showed that activity in two brain systems (typically associated with reward learning and motor control) could be dissociated in terms of the forms of computations that were performed there, even when both systems were used to make parallel predictions of the same event. A region in parietal cortex, which was sensitive to the divergence between the predictions of the models and anatomically connected to both computational networks, is proposed to mediate integration of the two predictive modes to produce a single behavioral output.
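
    A schematic contrast of the two predictive modes (purely illustrative; these are not the study's fitted models) can be written in a few lines: a delta-rule estimate built from discrete past outcomes versus a forward model that extrapolates continuous motion.

      # Illustrative contrast of the two predictive computations.
      def rl_prediction(outcomes, alpha=0.3):
          """Delta-rule (Rescorla-Wagner style) running estimate of the outcome."""
          value = 0.0
          for outcome in outcomes:
              value += alpha * (outcome - value)  # update toward prediction error
          return value

      def forward_model_prediction(positions, dt=1.0):
          """Extrapolate the next position from the latest velocity estimate."""
          velocity = (positions[-1] - positions[-2]) / dt
          return positions[-1] + velocity * dt

      target_positions = [0.0, 1.0, 2.1, 3.0, 4.2]  # observed trajectory
      print("RL estimate of typical outcome:", rl_prediction(target_positions))
      print("Forward-model next position:  ", forward_model_prediction(target_positions))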

  19. The DIII-D Computing Environment: Characteristics and Recent Changes

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1999-01-01

    The DIII-D tokamak national fusion research facility, along with its predecessor Doublet III, has been operating for over 21 years. The DIII-D computing environment consists of real-time systems controlling the tokamak, heating systems, and diagnostics, and systems acquiring experimental data from instrumentation; major data analysis server nodes performing short-term and long-term data access and data analysis; and systems providing mechanisms for remote collaboration and the dissemination of information over the World Wide Web. Computer systems for the facility have undergone incredible changes over the course of time as the computer industry has changed dramatically. Yet there are certain valuable characteristics of the DIII-D computing environment that have been developed over time and have been maintained to this day. Some of these characteristics include: continuous computer infrastructure improvements, distributed data and data access, computing platform integration, and remote collaborations. These characteristics are being carried forward, along with new characteristics resulting from recent changes, which have included: a dedicated storage system and a hierarchical storage management system for raw shot data; various further infrastructure improvements, including the deployment of Fast Ethernet; the introduction of MDSplus, LSF, and common IDL-based tools; and improvements to remote collaboration capabilities. This paper describes this computing environment, the important characteristics that over the years have contributed to the success of DIII-D computing systems, and recent changes to the computer systems.

  20. Vehicle Thermal Management Facilities | Transportation Research | NREL

    Science.gov (United States)

    The Vehicle Testing and Integration Facility features a pad for conducting vehicle thermal testing. A weather station next to the pad provides a continuous data stream on temperature, humidity, wind speed, and solar radiation.