WorldWideScience

Sample records for facility integrated computer

  1. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. (LLNL)

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 of 192 beams are activated in mid-2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging the Common Object Request Broker Architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of this pivotal role, CORBA was tested to ensure adequate performance.
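
The "plug-in software bus" idea in this abstract can be mimicked in a few lines: applications register named service objects with a broker, and clients resolve and invoke them by name without knowing where the objects live. This is an in-process Python sketch of the pattern only, not the actual ICCS or CORBA API; every class and service name below is invented.

```python
class Broker:
    """Toy in-process stand-in for a CORBA object request broker."""
    def __init__(self):
        self._services = {}

    def register(self, name, obj):
        # Publish a service object under a well-known name.
        self._services[name] = obj

    def resolve(self, name):
        # Clients look services up by name, not by location.
        return self._services[name]


class AlertManager:
    """Example framework service: collects alerts from front ends."""
    def __init__(self):
        self.alerts = []

    def raise_alert(self, source, message):
        self.alerts.append((source, message))
        return len(self.alerts)


broker = Broker()
broker.register("AlertManager", AlertManager())

# A front-end processor only needs the broker and a service name.
svc = broker.resolve("AlertManager")
count = svc.raise_alert("preamp-07", "flashlamp voltage out of range")  # -> 1
```

In the real system CORBA performs the resolve-and-invoke step transparently across the network, which is why the abstract stresses performance testing of the broker.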

  2. Integration of small computers in the low budget facility

    International Nuclear Information System (INIS)

    Miller, G.E.; Crofoot, T.A.

    1988-01-01

    Inexpensive personal computers (PCs) are well within the reach of low-budget reactor facilities. It is possible to envisage many uses that will both improve the capabilities of existing instrumentation and assist operators and staff with certain routine tasks. Both of these opportunities are important for survival at facilities with severe budget and staffing limitations. (author)

  3. Operational facility-integrated computer system for safeguards

    International Nuclear Information System (INIS)

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated, nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarming, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring.

  4. Implementation of the Facility Integrated Inventory Computer System (FICS)

    International Nuclear Information System (INIS)

    McEvers, J.A.; Krichinsky, A.M.; Layman, L.R.; Dunnigan, T.H.; Tuft, R.M.; Murray, W.P.

    1980-01-01

    This paper describes a computer system which has been developed for nuclear material accountability and implemented in an active radiochemical processing plant involving remote operations. The system possesses the following features: comprehensive, timely records of the location and quantities of special nuclear materials; automatically updated book inventory files on the plant and sub-plant levels of detail; material transfer coordination and cataloging; automatic inventory estimation; sample transaction coordination and cataloging; automatic on-line volume determination, limit checking, and alarming; extensive information retrieval capabilities; and terminal access and application software monitoring and logging.
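
Several of the listed features (on-line volume determination, limit checking, and alarming) reduce to comparing a measured value against the book value and alarming when the discrepancy exceeds a threshold. A hedged sketch of that check; the tank volumes and the 5-litre tolerance are invented for illustration:

```python
def volume_alarm(book_volume_l, measured_volume_l, tolerance_l=5.0):
    """Return (discrepancy, alarmed) for one tank reading.

    discrepancy is measured minus book volume in litres; alarmed is True
    when the absolute discrepancy exceeds the tolerance.
    """
    discrepancy = measured_volume_l - book_volume_l
    return discrepancy, abs(discrepancy) > tolerance_l


disc, alarmed = volume_alarm(1200.0, 1198.0)    # 2 L low: within tolerance
disc2, alarmed2 = volume_alarm(1200.0, 1175.0)  # 25 L low -> alarm
```

A production safeguards system would of course log every reading and route alarms to operators; the comparison itself is this simple.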

  5. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects with emphasis upon the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations in distributed system design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer and plant process instrumentation and control are evident from these variations in design features.

  6. Software quality assurance plan for the National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Woodruff, J.

    1996-11-01

    Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality-affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.

  7. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    Energy Technology Data Exchange (ETDEWEB)

    Zynovyev, Mykhaylo

    2012-06-29

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented along with its effects on the user workload performance. The thesis concludes with a proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of cloud computing technology and the 'Infrastructure as Code' concept. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  8. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    International Nuclear Information System (INIS)

    Zynovyev, Mykhaylo

    2012-01-01

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented along with its effects on the user workload performance. The thesis concludes with a proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of cloud computing technology and the 'Infrastructure as Code' concept. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  9. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal hydraulic system computer codes were established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs). A separate activity was for Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWER). Firstly, the main physical phenomena that occur during considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices will also be provided. The matrices will be a guide for code validation, will be a basis for comparisons of code predictions performed with different system codes, and will contribute to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information which has been generated around the world over the last years, so that it is more accessible to present and future workers in that field than would otherwise be the case.
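
The matrix described above is essentially a cross-reference from physical phenomena to facilities to suitable tests. A minimal sketch of that structure; the phenomenon, facility, and test names below are hypothetical placeholders, not entries from the CSNI matrices:

```python
# phenomenon -> facility -> list of suitable test IDs
matrix = {}

def add_test(phenomenon, facility, test_id):
    """Record that a facility test is judged suitable for a phenomenon."""
    matrix.setdefault(phenomenon, {}).setdefault(facility, []).append(test_id)

def facilities_covering(phenomenon):
    """Facilities offering at least one test for the given phenomenon."""
    return sorted(matrix.get(phenomenon, {}))

add_test("reflood heat transfer", "Facility-A", "T-101")
add_test("reflood heat transfer", "Facility-B", "T-207")
add_test("natural circulation", "Facility-A", "T-310")

covering = facilities_covering("reflood heat transfer")
```

Queries over such a structure support the stated uses of the matrices: choosing experiments for code validation and spotting phenomena with thin experimental coverage.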

  10. Software quality assurance plan for the National Ignition Facility integrated computer control system

    Energy Technology Data Exchange (ETDEWEB)

    Woodruff, J.

    1996-11-01

    Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality-affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.

  11. MONITOR: A computer model for estimating the costs of an integral monitored retrievable storage facility

    International Nuclear Information System (INIS)

    Reimus, P.W.; Sevigny, N.L.; Schutz, M.E.; Heller, R.A.

    1986-12-01

    The MONITOR model is a FORTRAN 77 based computer code that provides parametric life-cycle cost estimates for a monitored retrievable storage (MRS) facility. MONITOR is very flexible in that it can estimate the costs of an MRS facility operating under almost any conceivable nuclear waste logistics scenario. The model can also accommodate input data of varying degrees of complexity and detail (ranging from very simple to more complex), which makes it ideal for use in the MRS program, where new designs and new cost data are frequently offered for consideration. MONITOR can be run as an independent program, or it can be interfaced with the Waste System Transportation and Economic Simulation (WASTES) model, a program that simulates the movement of waste through a complete nuclear waste disposal system. The WASTES model drives the MONITOR model by providing it with the annual quantities of waste that are received, stored, and shipped at the MRS facility. Three runs of MONITOR are documented in this report. Two of the runs are for Version 1 of the MONITOR code, a simulation that uses the costs developed by the Ralph M. Parsons Company in the 2A (backup) version of the MRS cost estimate. In one of these runs MONITOR was run as an independent model, and in the other run MONITOR was run using an input file generated by the WASTES model. The two runs correspond to identical cases, and the fact that they gave identical results verified that the code performed the same calculations in both modes of operation. The third run was made for Version 2 of the MONITOR code, a simulation that uses the costs developed by the Ralph M. Parsons Company in the 2B (integral) version of the MRS cost estimate. This run was made with MONITOR run as an independent model. The results of several cases have been verified by hand calculations.
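
A parametric life-cycle cost model of the kind MONITOR implements can be illustrated in a few lines: annual cost as a fixed operating term plus unit costs applied to the quantities of waste received, stored, and shipped each year, exactly the three quantities the WASTES model supplies. All coefficients below are invented for the sketch; MONITOR's actual cost data come from the Parsons estimates cited in the abstract.

```python
def annual_cost(received_mtu, stored_mtu, shipped_mtu,
                fixed=10.0e6, c_recv=4000.0, c_store=250.0, c_ship=3000.0):
    """One year's facility cost: fixed O&M plus per-MTU handling terms.

    All dollar coefficients are hypothetical placeholders.
    """
    return (fixed + c_recv * received_mtu
            + c_store * stored_mtu + c_ship * shipped_mtu)

def lifecycle_cost(schedule):
    """schedule: list of (received, stored, shipped) tuples, one per year."""
    return sum(annual_cost(r, s, sh) for r, s, sh in schedule)


# A two-year schedule, as the WASTES model might supply it:
total = lifecycle_cost([(100.0, 1000.0, 50.0), (100.0, 1000.0, 50.0)])
```

Driving the function from a logistics simulation versus hand-entered tables corresponds to MONITOR's two verified modes of operation.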

  12. National Ignition Facility system design requirements NIF integrated computer controls SDR004

    International Nuclear Information System (INIS)

    Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the NIF Integrated Computer Control System. The Integrated Computer Control System (ICCS) is covered in NIF WBS element 1.5. This document responds directly to the requirements detailed in the NIF Functional Requirements/Primary Criteria, and is supported by subsystem design requirements documents for each major ICCS Subsystem

  13. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the Path to Ignition

    International Nuclear Information System (INIS)

    Lagin, L J; Bettenhausen, R C; Bowers, G A; Carey, R W; Edwards, O D; Estes, C M; Demaret, R D; Ferguson, S W; Fisher, J M; Ho, J C; Ludwigsen, A P; Mathisen, D G; Marshall, C D; Matone, J M; McGuigan, D L; Sanchez, R J; Shelton, R T; Stout, E A; Tekle, E; Townsend, S L; Van Arsdall, P J; Wilson, E F

    2007-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of 8 beams each using laser hardware that is modularized into more than 6,000 line replaceable units such as optical assemblies, laser amplifiers, and multifunction sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-Megajoule capability of infrared light. During the next two years, the control system will be expanded to include automation of target area systems including final optics, target positioners and

  14. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the path to ignition

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Bowers, G.A.; Carey, R.W.; Edwards, O.D.; Estes, C.M.; Demaret, R.D.; Ferguson, S.W.; Fisher, J.M.; Ho, J.C.; Ludwigsen, A.P.; Mathisen, D.G.; Marshall, C.D.; Matone, J.T.; McGuigan, D.L.; Sanchez, R.J.; Stout, E.A.; Tekle, E.A.; Townsend, S.L.; Van Arsdall, P.J.

    2008-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-MJ, 500-TW, ultraviolet laser system together with a 10-m diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of eight beams each using laser hardware that is modularized into more than 6000 line replaceable units such as optical assemblies, laser amplifiers, and multi-function sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-MJ capability of infrared light. During the next 2 years, the control system will be expanded in preparation for project completion in 2009 to include automation of target area systems including final optics

  15. Joint Computing Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Raised Floor Computer Space for High Performance Computing. The ERDC Information Technology Laboratory (ITL) provides a robust system of IT facilities to develop and...

  16. Integrated Disposal Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Located near the center of the 586-square-mile Hanford Site is the Integrated Disposal Facility, also known as the IDF. This facility is a landfill similar in concept...

  17. Assessment of the integrity of structural shielding of four computed tomography facilities in the greater Accra region of Ghana

    International Nuclear Information System (INIS)

    Nkansah, A.; Schandorf, C.; Boadu, M.; Fletcher, J. J.

    2013-01-01

    The structural shielding thicknesses of the walls of four computed tomography (CT) facilities in Ghana were re-evaluated to verify the shielding integrity using the new shielding design methods recommended by the National Council on Radiation Protection and Measurements (NCRP). The shielding thicknesses obtained ranged from 120 to 155 mm using default DLP values proposed by the European Commission and from 110 to 168 mm using DLP values derived from the four CT manufacturers. These values are within the accepted standard concrete wall thickness range of 102 to 152 mm prescribed by the NCRP. Ultrasonic pulse testing of all walls indicated that they are of good quality and free of voids, since the estimated pulse velocities were within the range 3.496±0.005 km s⁻¹. The average dose equivalent rate estimated for supervised areas is 3.4±0.27 μSv week⁻¹ and that for the controlled area is 18.0±0.15 μSv week⁻¹, which are within acceptable values. (authors)
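
The reported wall thicknesses follow from an NCRP-style barrier calculation, whose core step can be sketched as converting the required attenuation factor into tenth-value layers (TVLs) of concrete. The TVL and the weekly dose figures below are assumed for illustration only, not taken from the paper:

```python
import math

def required_thickness_mm(unshielded_uSv_per_week, design_goal_uSv_per_week,
                          tvl_mm=70.0):
    """Concrete thickness (mm) so the transmitted weekly dose meets the goal.

    Each TVL attenuates by a factor of 10, so the number of TVLs needed is
    log10(unshielded / goal). tvl_mm is an assumed TVL for CT-energy X rays.
    """
    if unshielded_uSv_per_week <= design_goal_uSv_per_week:
        return 0.0  # no barrier needed
    n_tvl = math.log10(unshielded_uSv_per_week / design_goal_uSv_per_week)
    return n_tvl * tvl_mm


# e.g. 2000 uSv/week unshielded, 20 uSv/week public design goal:
t = required_thickness_mm(2000.0, 20.0)  # 2 TVLs -> 140 mm
```

With realistic workloads and TVLs this back-of-envelope form reproduces the order of magnitude of the 102-152 mm standard range quoted in the abstract.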

  18. Assessment of the structural shielding integrity of some selected computed tomography facilities in the Greater Accra Region of Ghana

    International Nuclear Information System (INIS)

    Nkansah, A.

    2010-01-01

    The structural shielding integrity was assessed for four CT facilities: the Trust Hospital, Korle-Bu Teaching Hospital, the 37 Military Hospital, and Medical Imaging Ghana Ltd in the Greater Accra Region of Ghana. From the shielding calculations, the concrete wall thicknesses computed are 120, 145, 140, and 155 mm for Medical Imaging Ghana Ltd, the 37 Military Hospital, the Trust Hospital, and Korle-Bu Teaching Hospital, respectively, using default DLP values. The wall thicknesses using derived DLP values are 110, 110, 120, and 168 mm for Medical Imaging Ghana Ltd, the 37 Military Hospital, the Trust Hospital, and Korle-Bu Teaching Hospital, respectively. These values are within the accepted standard concrete thickness of 102-152 mm prescribed by the National Council on Radiation Protection and Measurements. Ultrasonic pulse testing indicated that all the sandcrete walls are of good quality and free of voids, since the estimated pulse velocities were approximately equal to 3.45 km/s. The average dose rate measured for supervised areas is 3.4 μSv/wk and for controlled areas is 18.0 μSv/wk. These dose rates were below the acceptable levels of 100 μSv per week for the occupationally exposed and 20 μSv per week for members of the public provided by the ICRU. The results mean that the structural shielding thicknesses are adequate to protect members of the public and occupationally exposed workers. (author)

  19. Computational Science Facility (CSF)

    Data.gov (United States)

    Federal Laboratory Consortium — PNNL Institutional Computing (PIC) is focused on meeting DOE's mission needs and is part of PNNL's overarching research computing strategy. PIC supports large-scale...

  20. Energy Systems Integration Facility Videos | Energy Systems Integration

    Science.gov (United States)

    Videos from NREL's Energy Systems Integration Facility, including: NREL + SolarCity: Maximizing Solar Power on Electrical Grids; Redefining What's Possible for Renewable Energy: Grid Integration; and Robot-Powered Reliability Testing at NREL's ESIF Microgrid.

  1. TUNL computer facilities

    International Nuclear Information System (INIS)

    Boyd, M.; Edwards, S.E.; Gould, C.R.; Roberson, N.R.; Westerfeldt, C.R.

    1985-01-01

    The XSYS system has been relatively stable during the last year, and most of our efforts have involved routine software maintenance and enhancement of existing XSYS capabilities. Modifications were made in the MBD program GDAP to increase the execution speed in key GDAP routines. A package of routines has been developed to allow communication between XSYS and the new Wien filter microprocessor. Recently the authors have upgraded their operating system from VMS V3.7 to V4.1. This required numerous modifications to XSYS, mostly in the command procedures. A new reorganized edition of the XSYS manual will be issued shortly. The TUNL High Resolution Laboratory's VAX 11/750 computer has been in operation for its first full year as a replacement for the PRIME 300 computer, which was purchased in 1974 and retired nine months ago. The data acquisition system on the VAX has been in use for the past twelve months performing a number of experiments.

  2. AMRITA -- A computational facility

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, J.E. [California Inst. of Tech., CA (US); Quirk, J.J.

    1998-02-23

    Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  3. Computer Security at Nuclear Facilities

    International Nuclear Information System (INIS)

    Cavina, A.

    2013-01-01

    This series of slides presents the IAEA policy concerning the development of recommendations and guidelines for computer security at nuclear facilities. A document of the Nuclear Security Series dedicated to this issue is in the final stage prior to publication. It will be the first IAEA document specifically addressing computer security. This document was necessary for three main reasons: first, not all national infrastructures have recognized and standardized computer security; second, existing international guidance is not industry specific and fails to capture some of the key issues; and third, the presence of more or less connected digital systems is increasing in the design of nuclear power plants. The security of computer systems must be based on a graded approach: the assignment of computer systems to different levels and zones should be based on their relevance to safety and security, and the risk assessment process should be allowed to feed back into and influence the graded approach.

  4. Integrated Facilities and Infrastructure Plan.

    Energy Technology Data Exchange (ETDEWEB)

    Reisz Westlund, Jennifer Jill

    2017-03-01

    Our facilities and infrastructure are a key element of our capability-based science and engineering foundation. The focus of the Integrated Facilities and Infrastructure Plan is the development and implementation of a comprehensive plan to sustain the capabilities necessary to meet national research, design, and fabrication needs for Sandia National Laboratories’ (Sandia’s) comprehensive national security missions both now and into the future. A number of Sandia’s facilities have reached the end of their useful lives and many others are not suitable for today’s mission needs. Due to the continued aging and surge in utilization of Sandia’s facilities, deferred maintenance has continued to increase. As part of our planning focus, Sandia is committed to halting the growth of deferred maintenance across its sites through demolition, replacement, and dedicated funding to reduce the backlog of maintenance needs. Sandia will become more agile in adapting existing space and changing how space is utilized in response to the changing requirements. This Integrated Facilities & Infrastructure (F&I) Plan supports the Sandia Strategic Plan’s strategic objectives, specifically Strategic Objective 2: Strengthen our Laboratories’ foundation to maximize mission impact, and Strategic Objective 3: Advance an exceptional work environment that enables and inspires our people in service to our nation. The Integrated F&I Plan is developed through a planning process model to understand the F&I needs, analyze solution options, plan the actions and funding, and then execute projects.

  5. The Integral Test Facility Karlstein

    Directory of Open Access Journals (Sweden)

    Stephan Leyer

    2012-01-01

    The Integral Test Facility Karlstein (INKA) test facility was designed and erected to test the performance of the passive safety systems of KERENA, the new AREVA Boiling Water Reactor design. The experimental program included single component/system tests of the Emergency Condenser, the Containment Cooling Condenser, and the Passive Core Flooding System. Integral system tests, also including the Passive Pressure Pulse Transmitter, will be performed to simulate transients and Loss of Coolant Accident scenarios at the test facility. The INKA test facility represents the KERENA Containment with a volume scaling of 1:24. Component heights and levels are at full scale. The reactor pressure vessel is simulated by the accumulator vessel of the large valve test facility at Karlstein, a vessel with a design pressure of 11 MPa and a storage capacity of 125 m³. The vessel is fed by a Benson boiler with a maximum power supply of 22 MW. The INKA multi-compartment pressure suppression Containment meets the requirements of modern and existing BWR designs. As a result of the large power supply at the facility, INKA is capable of simulating various accident scenarios, including a full train of passive systems, starting with the initiating event, for example a pipe rupture.

  6. DKIST facility management system integration

    Science.gov (United States)

    White, Charles R.; Phelps, LeEllen

    2016-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Observatory is under construction at Haleakalā, Maui, Hawai'i. When complete, the DKIST will be the largest solar telescope in the world. The Facility Management System (FMS) is a subsystem of the high-level Facility Control System (FCS) and directly controls the Facility Thermal System (FTS). The FMS receives operational mode information from the FCS while making process data available to the FCS and includes hardware and software to integrate and control all aspects of the FTS including the Carousel Cooling System, the Telescope Chamber Environmental Control Systems, and the Temperature Monitoring System. In addition it will integrate the Power Energy Management System and several service systems such as heating, ventilation, and air conditioning (HVAC), the Domestic Water Distribution System, and the Vacuum System. All of these subsystems must operate in coordination to provide the best possible observing conditions and overall building management. Further, the FMS must actively react to varying weather conditions and observational requirements. The physical impact of the facility must not interfere with neighboring installations while operating in a very environmentally and culturally sensitive area. The FMS system will be comprised of five Programmable Automation Controllers (PACs). We present a pre-build overview of the functional plan to integrate all of the FMS subsystems.

  7. Energy Systems Integration Facility News | Energy Systems Integration

    Science.gov (United States)

    Facility | NREL. Energy Systems Integration Facility News: a massive amount of wind data was recently made accessible online, greatly expanding the … The Department of Energy's National Renewable Energy Laboratory (NREL) has completed technology validation testing for …

  8. Computer-Aided Facilities Management Systems (CAFM).

    Science.gov (United States)

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  9. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dirndorfer, Stefan

    2017-01-17

    Condensation induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation induced water hammer have predominantly addressed horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art instrumentation, the phenomenology of vertical condensation induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation induced water hammer. The modified ATHLET system code is validated against selected experiments. A sensitivity analysis in ATHLET, together with the experimental data, allows the performance of ATHLET in computing condensation induced water hammer in a vertical up-fill configuration to be assessed.
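For a feel of the load magnitudes involved, the classical Joukowsky relation Δp = ρ·c·Δv gives an order-of-magnitude estimate of the pressure surge when a liquid column is abruptly decelerated. This is a textbook estimate, not the thesis's method (which uses the ATHLET system code), and the numerical values below are assumed examples:

```python
# Joukowsky estimate of a water-hammer pressure surge (order of
# magnitude only; all values are assumed for illustration).
rho = 1000.0   # water density, kg/m^3
c = 1400.0     # pressure-wave speed in the pipe, m/s (assumed)
dv = 5.0       # velocity change on slug impact, m/s (assumed)

dp = rho * c * dv  # surge amplitude in Pa
print(dp / 1e6)    # 7.0 (MPa)
```

Even modest slug velocities thus produce multi-MPa transients, which is why such loads are a design concern.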

  10. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    International Nuclear Information System (INIS)

    Dirndorfer, Stefan

    2017-01-01

    Condensation induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation induced water hammer have predominantly addressed horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art instrumentation, the phenomenology of vertical condensation induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation induced water hammer. The modified ATHLET system code is validated against selected experiments. A sensitivity analysis in ATHLET, together with the experimental data, allows the performance of ATHLET in computing condensation induced water hammer in a vertical up-fill configuration to be assessed.

  11. Computation of integral bases

    NARCIS (Netherlands)

    Bauch, J.H.P.

    2015-01-01

    Let $A$ be a Dedekind domain, $K$ the fraction field of $A$, and $f\in A[x]$ a monic irreducible separable polynomial. For a given non-zero prime ideal $\mathfrak{p}$ of $A$ we present in this paper a new method to compute a $\mathfrak{p}$-integral basis of the extension of $K$ determined by $f$.
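For intuition on what an integral basis is, consider the simplest non-trivial case, a quadratic field: in K = Q(√5) the element α = (1 + √5)/2 is an algebraic integer (its minimal polynomial x² − x − 1 has integer coefficients), so {1, α} is an integral basis, not {1, √5}. The check below is a hand-rolled illustration of that classical fact, not the paper's algorithm:

```python
# Check whether a quadratic element alpha = a + b*sqrt(d) (a, b rational,
# d a squarefree integer) is an algebraic integer: its minimal polynomial
# is x^2 - t*x + n with trace t = 2a and norm n = a^2 - d*b^2, and alpha
# is integral exactly when t and n are both integers.
from fractions import Fraction

def min_poly_quadratic(a, b, d):
    t = 2 * a            # trace
    n = a * a - d * b * b  # norm
    return t, n

# alpha = (1 + sqrt(5)) / 2
t, n = min_poly_quadratic(Fraction(1, 2), Fraction(1, 2), 5)
is_integral = (t.denominator == 1 and n.denominator == 1)
print(t, n, is_integral)  # 1 -1 True  -> minimal polynomial x^2 - x - 1
```

The paper's setting (p-integral bases over a general Dedekind domain) generalizes this integrality test prime by prime.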

  12. Power Systems Integration Laboratory | Energy Systems Integration Facility

    Science.gov (United States)

    | NREL Power Systems Integration Laboratory. Research in the Energy Systems Integration Facility's Power Systems Integration Laboratory focuses on microgrid applications.

  13. Design Integration of Facilities Management

    DEFF Research Database (Denmark)

    Jensen, Per Anker

    2009-01-01

    One of the problems in the building industry is a limited degree of learning from experiences of use and operation of existing buildings. Development of professional facilities management (FM) can be seen as the missing link to bridge the gap between building operation and building design. … Strategies, methods and barriers for the transfer and integration of operational knowledge into the design process are discussed. Multiple strategies are needed to improve the integration of FM in design. Building clients must take on a leading role in defining and setting up requirements and procedures. … The work is based on literature studies and case studies from the Nordic countries in Europe, including research reflections on experiences from a main case study, where the author, before becoming a university researcher, was engaged in the client organization as deputy project director with responsibility for the integration …

  14. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  15. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  16. Integrated computer aided design simulation and manufacture

    OpenAIRE

    Diko, Faek

    1989-01-01

    Computer Aided Design (CAD) and Computer Aided Manufacture (CAM) have been investigated and developed for twenty years as standalone systems. A large number of very powerful but independent packages have been developed for Computer Aided Design, Analysis and Manufacture. However, in most cases these packages have poor facilities for communicating with other packages. Recently, attempts have been made to develop integrated CAD/CAM systems and many software companies a...

  17. Conducting Computer Security Assessments at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    Computer security is increasingly recognized as a key component in nuclear security. As technology advances, it is anticipated that computer and computing systems will be used to an even greater degree in all aspects of plant operations including safety and security systems. A rigorous and comprehensive assessment process can assist in strengthening the effectiveness of the computer security programme. This publication outlines a methodology for conducting computer security assessments at nuclear facilities. The methodology can likewise be easily adapted to provide assessments at facilities with other radioactive materials

  18. INTEGRITY -- Integrated Human Exploration Mission Simulation Facility

    Science.gov (United States)

    Henninger, D.; Tri, T.; Daues, K.

    It is proposed to develop a high-fidelity ground facility to carry out long-duration human exploration mission simulations. These would not be merely computer simulations - they would in fact comprise a series of actual missions that just happen to stay on earth. These missions would include all elements of an actual mission, using actual technologies that would be used for the real mission. These missions would also include such elements as extravehicular activities, robotic systems, telepresence and teleoperation, surface drilling technology--all using a simulated planetary landscape. A sequence of missions would be defined that get progressively longer and more robust, perhaps a series of five or six missions over a span of 10 to 15 years ranging in duration from 180 days up to 1000 days. This high-fidelity ground facility would operate hand-in-hand with a host of other terrestrial analog sites such as the Antarctic, Haughton Crater, and the Arizona desert. Of course, all of these analog mission simulations will be conducted here on earth in 1-g, and NASA will still need the Shuttle and ISS to carry out all the microgravity and hypogravity science experiments and technology validations. The proposed missions would have sufficient definition such that definitive requirements could be derived from them to serve as direction for all the program elements of the mission. Additionally, specific milestones would be established for the "launch" date of each mission so that R&D programs would have both good requirements and solid milestones from which to build their implementation plans. Mission aspects that could not be directly incorporated into the ground facility would be simulated via software. New management techniques would be developed for evaluation in this ground test facility program. These new techniques would have embedded metrics which would allow them to be continuously evaluated and adjusted so that by the time the sequence of missions is completed …

  19. Oak Ridge Leadership Computing Facility (OLCF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of standing up a supercomputer 100 times...

  20. Computing facility at SSC for detectors

    International Nuclear Information System (INIS)

    Leibold, P.; Scipiono, B.

    1990-01-01

    The RISC-based distributed computing facility for detector simulation being developed at the SSC Laboratory is described. The first phase of this facility is scheduled for completion in early 1991. Included are the status of the project, an overview of the concepts used to model and define the system architecture, networking capabilities for user access, plans for support of physics codes, and related topics concerning the implementation of this facility

  1. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.
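The "inverse problem of inferring facility conditions based on collected observations" mentioned above is, in its simplest form, a Bayesian update over candidate facility states. The sketch below is a toy illustration of that idea; the mode names, priors, and likelihoods are all assumed values, not from the paper:

```python
# Toy Bayesian inverse problem: infer a facility's operating mode from
# one noisy observed signature. All numbers are hypothetical.
priors = {"shutdown": 0.2, "normal": 0.6, "reprocessing": 0.2}
# P(observed signature | mode), assumed:
likelihood = {"shutdown": 0.05, "normal": 0.30, "reprocessing": 0.90}

# Bayes' rule: posterior(m) ∝ prior(m) * likelihood(m)
unnorm = {m: priors[m] * likelihood[m] for m in priors}
z = sum(unnorm.values())
posterior = {m: p / z for m, p in unnorm.items()}
```

A real analysis framework would chain many such updates over heterogeneous sensor and open-source observations, which is precisely the integration challenge the paper addresses.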

  2. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Garcia, Humberto; Burr, Tom; Coles, Garill A.; Edmunds, Thomas A.; Garrett, Alfred; Gorensek, Maximilian; Hamm, Luther; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Tzanos, Constantine P.; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  3. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  4. 2016 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, Jim [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  5. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  6. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  7. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov (United States)

    the Energy Systems Integration Facility as part of NREL's work with SolarCity and the Hawaiian Electric Companies. Welcome to Energy Systems Integration News, NREL's monthly update on the latest energy systems integration (ESI) developments at NREL and worldwide. Have an item …

  8. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study reviews existing information useful for developing an integrated model to predict the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  9. Computation of integral bases

    NARCIS (Netherlands)

    Bauch, J.D.

    2016-01-01

    Let A be a Dedekind domain, K the fraction field of A, and f ∈ A[x] a monic irreducible separable polynomial. For a given non-zero prime ideal p of A we present in this paper a new characterization of a p-integral basis of the extension of K determined by f. This characterization yields in an …

  10. Computer Security at Nuclear Facilities (French Edition)

    International Nuclear Information System (INIS)

    2013-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. This publication is in the Technical Guidance …

  11. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  12. Computational Science at the Argonne Leadership Computing Facility

    Science.gov (United States)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
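The abstract's question "How does one program such systems?" is conventionally answered with SPMD message passing (MPI): every rank runs the same program on its own slice of the domain, and a reduction combines the partial results. The sketch below emulates that decomposition sequentially in plain Python, standing in for what `MPI_Reduce` would do across real ranks; it is an illustration of the programming model, not ALCF code:

```python
# Sequential emulation of an SPMD/MPI-style domain decomposition:
# each "rank" owns a contiguous slice of the index space, computes a
# partial sum, and a reduction combines the partials.
N_RANKS = 8
N = 1_000_000

def rank_work(rank: int) -> int:
    """The per-rank kernel: sum the slice this rank owns."""
    lo = rank * N // N_RANKS
    hi = (rank + 1) * N // N_RANKS
    return sum(range(lo, hi))

partials = [rank_work(r) for r in range(N_RANKS)]  # runs in parallel on MPI
total = sum(partials)                              # the reduction step
```

On a machine like Mira the same pattern is expressed with MPI ranks (often combined with threads per node), but the decomposition-plus-reduction structure is identical.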

  13. Computer codes for ventilation in nuclear facilities

    International Nuclear Information System (INIS)

    Mulcey, P.

    1987-01-01

    In this paper the authors present some computer codes, developed in recent years, for ventilation and radioprotection. These codes are used for safety analysis in the design, operation and dismantlement of nuclear facilities. The authors present in particular: the DACC1 code, used for aerosol deposition in the sampling circuits of radiation monitors; the PIAF code, used for modelling complex ventilation systems; and the CLIMAT 6 code, used for optimization of air conditioning systems [fr]

  14. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Burr, Tom; Gorensek, M.B.; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  15. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  16. Deterministic computation of functional integrals

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.

    1995-09-01

    A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to solution of some partial differential equations and to calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, as well as no simplifying assumptions like semi-classical, mean field approximations, collective excitations, introduction of ''short-time'' propagators, etc are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of the ''ordinary'' (Riemannian) integral of a low dimension, thus allowing to use the more preferable deterministic algorithms (normally - Gaussian quadratures) in computations rather than traditional stochastic (Monte Carlo) methods which are commonly used for solution of the problem under consideration. The results of application of the method to computation of the Green function of the Schroedinger equation in imaginary time as well as the study of some models of Euclidean quantum mechanics are presented. The comparison with results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing the results with the same or better accuracy. The funcitonal measure of the Gaussian type is considered and some of its particular cases, namely conditional Wiener measure in quantum statistical mechanics and functional measure in a Schwartz distribution space in two-dimensional quantum field theory are studied in detail. 
Numerical examples demonstrating the
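The trade-off the abstract describes, deterministic quadrature versus Monte Carlo for integrals with a Gaussian measure, can be illustrated in one dimension as a toy stand-in for the low-dimensional Riemannian integrals the method produces. The integrand and node count below are arbitrary choices for the sketch, not the paper's formulas:

```python
import math
import random

# Approximate the Gaussian-measure expectation E[f(X)], X ~ N(0, 1), two
# ways. A deterministic n-point Gauss-Hermite rule is exact for polynomials
# of degree <= 2n - 1, mirroring approximation formulas that are exact on
# polynomial functionals; plain Monte Carlo converges only like N**-0.5.

def f(x):
    return x**4 + 2 * x**2 + 1   # E[f] = 3 + 2 + 1 = 6 exactly

# 5-point Gauss-Hermite nodes/weights (weight function exp(-t**2)).
NODES = [-2.020182870456, -0.958572464614, 0.0, 0.958572464614, 2.020182870456]
WEIGHTS = [0.019953242059, 0.393619323152, 0.945308720483, 0.393619323152, 0.019953242059]

# Substitute x = sqrt(2) * t to match the standard normal measure.
quad = sum(w * f(math.sqrt(2.0) * t) for t, w in zip(NODES, WEIGHTS)) / math.sqrt(math.pi)

random.seed(0)
N = 100_000
mc = sum(f(random.gauss(0.0, 1.0)) for _ in range(N)) / N

print(f"Gauss-Hermite (5 nodes): {quad:.6f}")
print(f"Monte Carlo ({N} draws): {mc:.4f}")
```

With five nodes the quadrature reproduces the degree-4 integrand essentially exactly, while the Monte Carlo estimate still carries statistical noise after 100,000 draws.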

  17. Integrated engineering system for nuclear facilities building

    International Nuclear Information System (INIS)

    Tomura, H.; Miyamoto, A.; Futami, F.; Yasuda, S.; Ohtomo, T.

    1995-01-01

    In the construction of buildings for nuclear facilities in Japan, construction companies are generally in charge of the building engineering work, coordinating with plant engineering. An integrated system for buildings (PROMOTE: PROductive MOdeling system for Total nuclear Engineering) described here is a building engineering system covering the entire life cycle of buildings for nuclear facilities. A three-dimensional (3D) building model (PRO-model) is at the core of the system (PROMOTE). Data sharing in PROMOTE is also done with plant engineering systems. By providing these basic technical foundations, PROMOTE is oriented toward offering rational, high-quality engineering for the projects. The aim of the system is to provide a technical foundation in building engineering. This paper discusses the characteristics of buildings for nuclear facilities and the outline of PROMOTE. (author)

  18. Development of an integrated assay facility

    International Nuclear Information System (INIS)

    Molesworth, T.V.; Bailey, M.; Findlay, D.J.S.; Parsons, T.V.; Sene, M.R.; Swinhoe, M.T.

    1990-01-01

    The I.R.I.S. concept proposed the use of passive examination and active interrogation techniques in an integrated assay facility. A linac would generate the interrogating gamma and neutron beams. Insufficiently detailed knowledge about active neutron and gamma interrogation of 500 litre drums of cement immobilised intermediate level waste led to a research programme which is now in its main experimental stage. Measurements of interrogation responses are being made using simulated waste drums containing actinide samples and calibration sources, in an experimental assay assembly. Results show that responses are generally consistent with theory, but that improvements are needed in some areas. A preliminary appraisal of the engineering and economic aspects of integrated assay shows that correct operational sequencing is required to achieve the short cycle time needed for high throughput. The main engineering features of a facility have been identified

  19. Integrative approaches to computational biomedicine

    Science.gov (United States)

    Coveney, Peter V.; Diaz-Zuccarini, Vanessa; Graf, Norbert; Hunter, Peter; Kohl, Peter; Tegner, Jesper; Viceconti, Marco

    2013-01-01

    The new discipline of computational biomedicine is concerned with the application of computer-based techniques and particularly modelling and simulation to human health. Since 2007, this discipline has been synonymous, in Europe, with the name given to the European Union's ambitious investment in integrating these techniques with the eventual aim of modelling the human body as a whole: the virtual physiological human. This programme and its successors are expected, over the next decades, to transform the study and practice of healthcare, moving it towards the priorities known as ‘4P's’: predictive, preventative, personalized and participatory medicine.

  20. Oak Ridge Leadership Computing Facility Position Paper

    Energy Technology Data Exchange (ETDEWEB)

    Oral, H Sarp [ORNL; Hill, Jason J [ORNL; Thach, Kevin G [ORNL; Podhorszki, Norbert [ORNL; Klasky, Scott A [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL

    2011-01-01

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in the architecture and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  1. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  2. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools represent equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented
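The transient-simulation approach can be illustrated with a toy one-zone model: a time-stepped energy balance with an envelope gain, intermittent loading-door infiltration, and a deadband-controlled refrigeration rack. This is not TRNSYS code, and every parameter value below is invented for the sketch:

```python
# Toy transient model of one refrigerated warehouse zone (illustrative
# only; the paper's custom TRNSYS components are far more detailed).

DT = 60.0                 # time step, s
C_ZONE = 5.0e7            # zone thermal capacitance, J/K (assumed)
UA_WALL = 800.0           # envelope conductance, W/K (assumed)
Q_DOOR = 15_000.0         # infiltration load while a loading door is open, W
Q_REFRIG = 80_000.0       # refrigeration capacity, W (assumed)
T_SET, T_DEADBAND = -20.0, 1.0   # thermostat setpoint and deadband, deg C
T_OUT = 30.0              # ambient temperature, deg C

t_zone, cooling_on, energy_j = -20.0, False, 0.0
for step in range(24 * 60):                   # one day in 1-minute steps
    door_open = (step % 60) < 5               # doors open 5 min each hour
    q_gain = UA_WALL * (T_OUT - t_zone) + (Q_DOOR if door_open else 0.0)
    # Deadband thermostat for the compressor rack.
    if t_zone > T_SET + T_DEADBAND:
        cooling_on = True
    elif t_zone < T_SET - T_DEADBAND:
        cooling_on = False
    q_cool = Q_REFRIG if cooling_on else 0.0
    t_zone += DT * (q_gain - q_cool) / C_ZONE  # explicit Euler step
    energy_j += q_cool * DT

print(f"final zone temperature: {t_zone:.2f} C")
print(f"daily refrigeration energy: {energy_j / 3.6e6:.1f} kWh")
```

Unlike a temperature-bin calculation, the time-stepped balance captures the door-opening schedule and compressor cycling directly, which is the essential advantage the abstract claims for the transient approach.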

  3. Integrated safeguards and facility design and operations

    International Nuclear Information System (INIS)

    Tape, J.W.; Coulter, C.A.; Markin, J.T.; Thomas, K.E.

    1987-01-01

    The integration of safeguards functions to deter or detect unauthorized actions by an insider requires the careful communication and management of safeguards-relevant information on a timely basis. The traditional separation of safeguards functions into physical protection, materials control, and materials accounting often inhibits important information flows. Redefining the major safeguards functions as authorization, enforcement, and verification, and careful attention to management of information from acquisition to organization, to analysis, to decision making can result in effective safeguards integration. The careful inclusion of these ideas in facility designs and operations will lead to cost-effective safeguards systems. The safeguards authorization function defines, for example, personnel access requirements, processing activities, and materials movements/locations that are permitted to accomplish the mission of the facility. Minimizing the number of authorized personnel, limiting the processing flexibility, and maintaining up-to-date flow sheets will facilitate the detection of unauthorized activities. Enforcement of the authorized activities can be achieved in part through the use of barriers, access control systems, process sensors, and health and safety information. Consideration of safeguards requirements during facility design can improve the enforcement function. Verification includes the familiar materials accounting activities as well as auditing and testing of the other functions
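The authorization function described above amounts to checking observed activity against a maintained list of permitted activities. A minimal sketch, with invented badge IDs and locations:

```python
# Compare observed material movements against an authorized-activity list
# and alarm on anything not explicitly permitted. All records are invented
# for illustration.

AUTHORIZED = {
    ("badge-017", "vault-A", "process-line"),   # (person, from, to)
    ("badge-017", "process-line", "vault-A"),
    ("badge-042", "vault-A", "assay-lab"),
}

observed_movements = [
    ("badge-017", "vault-A", "process-line"),
    ("badge-042", "assay-lab", "shipping"),     # not on the authorized list
]

alarms = [m for m in observed_movements if m not in AUTHORIZED]
for person, src, dst in alarms:
    print(f"ALERT: unauthorized movement {src} -> {dst} by {person}")
```

Keeping the authorized set small and current is exactly the point the abstract makes: the fewer permitted activities, the easier unauthorized ones are to detect.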

  4. Integrated facilities modeling using QUEST and IGRIP

    International Nuclear Information System (INIS)

    Davis, K.R.; Haan, E.R.

    1995-01-01

    A QUEST model and associated detailed IGRIP models were developed and used to simulate several workcells in a proposed Plutonium Storage Facility (PSF). The models are being used by team members assigned to the program to improve communication and to assist in evaluating concepts and in performing trade-off studies which will result in recommendations and a final design. The model was designed so that it could be changed easily. The techniques used to provide this flexibility are described in this paper, in addition to techniques for integrating the QUEST and IGRIP products. Many of these techniques are generic in nature and can be applied to any modeling endeavor

  5. Integrated computer-aided design using minicomputers

    Science.gov (United States)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capabilities, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  6. Carbon dioxide neutral, integrated biofuel facility

    Energy Technology Data Exchange (ETDEWEB)

    Powell, E.E.; Hill, G.A. [Department of Chemical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, Saskatchewan, S7N 5A9 (Canada)

    2010-12-15

    Algae are efficient biocatalysts for both capture and conversion of carbon dioxide in the environment. In earlier work, we have optimized the ability of Chlorella vulgaris to rapidly capture CO2 from man-made emission sources by varying environmental growth conditions and bioreactor design. Here we demonstrate that a coupled biodiesel-bioethanol facility, using yeast to produce ethanol and photosynthetic algae to produce biodiesel, can result in an integrated, economical, large-scale process for biofuel production. Each bioreactor acts as an electrode for a coupled complete microbial fuel cell system; the integrated cultures produce electricity that is consumed as an energy source within the process. Finally, both the produced yeast and spent algae biomass can be used as value-added byproducts in the feed or food industries. Using cost and revenue estimations, an IRR of up to 25% is calculated using a 5 year project lifespan. (author)
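The IRR figure quoted here is the discount rate at which the project's net present value crosses zero; for a 5-year lifespan it can be found by bisection. The cash-flow numbers below are invented for illustration and are not the paper's actual cost and revenue estimates:

```python
# Internal rate of return for a simple 5-year cash-flow profile.

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisect for the rate where NPV crosses zero (NPV is decreasing in
    rate for a single initial outlay followed by positive flows)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Year 0 capital cost, then five years of net revenue (invented figures).
flows = [-1_000_000, 330_000, 330_000, 330_000, 330_000, 330_000]
r = irr(flows)
print(f"IRR over the 5-year lifespan: {r:.1%}")
```

With these placeholder flows the IRR lands near 19%, comparable in magnitude to the up-to-25% figure the abstract reports for the integrated facility.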

  7. Vitrification Facility integrated system performance testing report

    International Nuclear Information System (INIS)

    Elliott, D.

    1997-01-01

    This report provides a summary of component and system performance testing associated with the Vitrification Facility (VF) following construction turnover. The VF at the West Valley Demonstration Project (WVDP) was designed to convert stored radioactive waste into a stable glass form for eventual disposal in a federal repository. Following an initial Functional and Checkout Testing of Systems (FACTS) Program and subsequent conversion of test stand equipment into the final VF, a testing program was executed to demonstrate successful performance of the components, subsystems, and systems that make up the vitrification process. Systems were started up and brought on line as construction was completed, until integrated system operation could be demonstrated to produce borosilicate glass using nonradioactive waste simulant. Integrated system testing and operation culminated with a successful Operational Readiness Review (ORR) and Department of Energy (DOE) approval to initiate vitrification of high-level waste (HLW) on June 19, 1996. Performance and integrated operational test runs conducted during the test program provided a means for critical examination, observation, and evaluation of the vitrification system. Test data taken for each Test Instruction Procedure (TIP) was used to evaluate component performance against system design and acceptance criteria, while test observations were used to correct, modify, or improve system operation. This process was critical in establishing operating conditions for the entire vitrification process

  8. Shielding Calculations for Positron Emission Tomography - Computed Tomography Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Baasandorj, Khashbayar [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yang, Jeongseon [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-10-15

    Integrated PET-CT has been shown to be more accurate for lesion localization and characterization than PET or CT alone, or than results obtained from PET and CT separately and interpreted side by side or fused by software after acquisition. At the same time, PET-CT scans can result in high patient and staff doses; therefore, careful site planning and shielding of this imaging modality have become challenging issues in the field. In Mongolia, the introduction of PET-CT facilities is currently being considered in many hospitals. Thus, additional regulatory legislation for nuclear and radiation applications is necessary, for example, in regulating licensee processes and ensuring radiation safety during operations. This paper aims to determine appropriate PET-CT shielding designs using numerical formulas and computer code. Since there are presently no PET-CT facilities in Mongolia, contact was made with radiological staff at the Nuclear Medicine Center of the National Cancer Center of Mongolia (NCCM) to obtain information about facilities where the introduction of PET-CT is being considered. Well-designed facilities do not require additional shielding, which should help cut down overall costs related to PET-CT installation. According to the results of this study, the barrier thicknesses of the NCCM building are not sufficient to keep radiation dose within the limits.
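The kind of numerical shielding formula the abstract refers to can be sketched with a simplified barrier estimate in the spirit of AAPM Task Group 108 calculations. Every input below (dose-rate constant, tenth-value layer, workload, distance, design limit) is an illustrative assumption, not a value from the study:

```python
import math

# Simplified weekly-dose shielding estimate for 511 keV annihilation photons.

GAMMA_511 = 0.092      # uSv * m^2 / (MBq * h), approx. dose-rate constant
TVL_LEAD_MM = 17.0     # approx. broad-beam tenth-value layer of lead, mm

patients_per_week = 40
activity_mbq = 400.0        # administered activity per patient (assumed)
time_near_barrier_h = 1.0   # time each patient spends near the barrier
distance_m = 3.0            # barrier-to-occupied-point distance
occupancy = 1.0             # occupancy factor of the protected area
dose_limit_usv_wk = 20.0    # weekly design goal, uSv (assumed)

# Unshielded weekly dose at the occupied point (ignoring radioactive decay
# and patient self-attenuation, both of which lower the true dose).
unshielded = (GAMMA_511 * activity_mbq * time_near_barrier_h
              * patients_per_week * occupancy) / distance_m**2

transmission = dose_limit_usv_wk / unshielded        # required transmission
thickness_mm = TVL_LEAD_MM * math.log10(1.0 / transmission)

print(f"unshielded weekly dose: {unshielded:.0f} uSv")
print(f"required transmission:  {transmission:.3f}")
print(f"lead thickness needed:  {thickness_mm:.1f} mm")
```

The same structure applies to concrete walls with a different tenth-value layer, which is how a calculation like this can show an existing building's barriers to be insufficient.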

  9. ASCR Cybersecurity for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Piesert, Sean

    2015-02-27

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a significant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  10. Design of integrated safeguards systems for nuclear facilities

    International Nuclear Information System (INIS)

    de Montmollin, J.M.; Walton, R.B.

    1978-06-01

    Safeguards systems that are capable of countering postulated threats to nuclear facilities must be closely integrated with plant layout and processes if they are to be effective and if potentially severe impacts on plant operations are to be averted. This paper describes a facilities safeguards system suitable for a production plant, in which the traditional elements of physical protection and periodic material-balance accounting are extended and augmented to provide close control of material flows. Discrete material items are subjected to direct, overriding physical control where appropriate. Materials in closely coupled process streams are protected by on-line NDA and weight measurements, with rapid computation of material balances to provide immediate indication of large-scale diversion. The system provides information and actions at the safeguards/operations interface
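The "rapid computation of material balances" can be sketched as a running MUF (material unaccounted for) check against the combined measurement uncertainty of the inputs. All quantities and uncertainties below are invented for illustration:

```python
import math

# Compare book inventory against measured inventory and alarm when the
# difference exceeds the combined 1-sigma measurement uncertainties.

def muf(begin_inventory, receipts, shipments, end_measured):
    """Material unaccounted for: book inventory minus measured inventory."""
    book = begin_inventory + receipts - shipments
    return book - end_measured

# kg of nuclear material (invented); sigma terms are 1-sigma uncertainties
# of the four independent measurements, combined in quadrature.
m = muf(begin_inventory=120.0, receipts=15.0, shipments=12.0, end_measured=122.4)
sigma = math.sqrt(0.3**2 + 0.1**2 + 0.1**2 + 0.3**2)

threshold = 3.0 * sigma
print(f"MUF = {m:.2f} kg, alarm threshold = {threshold:.2f} kg")
if abs(m) > threshold:
    print("ALERT: possible diversion; investigate")
```

Run continuously against on-line NDA and weight data, a check like this is what turns periodic accounting into the immediate diversion indication the abstract describes.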

  11. Design of integrated safeguards systems for nuclear facilities

    International Nuclear Information System (INIS)

    de Montmollin, J.M.; Walton, R.B.

    1976-01-01

    Safeguards systems that are capable of countering postulated threats to nuclear facilities must be closely integrated with plant layout and processes if they are to be effective and if potentially severe impacts on plant operations are to be averted. A facilities safeguards system suitable for a production plant is described in which the traditional elements of physical protection and periodic material-balance accounting are extended and augmented to provide close control of material flows. Discrete material items are subjected to direct, overriding physical control where appropriate. Materials in closely coupled process streams are protected by on-line NDA and weight measurements, with rapid computation of material balances to provide immediate indication of large-scale diversion. The system provides information and actions at the safeguards/operations interface

  12. Computer Profile of School Facilities Energy Consumption.

    Science.gov (United States)

    Oswalt, Felix E.

    This document outlines a computerized management tool designed to enable building managers to identify energy consumption as related to types and uses of school facilities for the purpose of evaluating and managing the operation, maintenance, modification, and planning of new facilities. Specifically, it is expected that the statistics generated…

  13. Why Integrate Educational and Community Facilities?

    Science.gov (United States)

    Fessas-Emmanouil, Helen D.

    1978-01-01

    Discusses coordination of educational and community facilities in order to encourage more rational investments and more efficient use of premises. Such coordination may reduce the economic burden imposed upon citizens for the provision of separate facilities for school and community. However, implementation of such a facility presupposes radical…

  14. Geology of the Integrated Disposal Facility Trench

    International Nuclear Information System (INIS)

    Reidel, Steve P.; Fecht, Karl R.

    2005-01-01

    This report describes the geology of the Integrated Disposal Facility (IDF) trench. The stratigraphy consists of some of the youngest sediments of the Missoula floods (younger than 770 ka). The lithology is dominated by sands with minor silts and gravels that are largely unconsolidated. The stratigraphy can be subdivided into five geologic units that can be mapped throughout the trench. Four of the units were deposited by the Missoula floods and the youngest consists of windblown sand and silt. The sediment has little moisture and is consistent with that observed in the characterization boreholes. The sedimentary layers are flat lying and there are no faults or folds present. Two clastic dikes were encountered, one along the west wall and one that can be traced from the north to the south wall. The north-south clastic dike nearly bifurcates the trench, but the west wall clastic dike cannot be traced very far east into the trench. The clastic dikes consist mainly of sand with clay-lined walls. The sediment in the dikes is compacted to partly cemented and is more resistant than the layered sediments

  15. FFTF integrated leak rate computer system

    International Nuclear Information System (INIS)

    Hubbard, J.A.

    1987-01-01

    The Fast Flux Test Facility (FFTF) is a liquid-metal-cooled test reactor located on the Hanford site. The FFTF is the only reactor of this type designed and operated to meet the licensing requirements of the Nuclear Regulatory Commission. Unique characteristics of the FFTF that present special challenges related to leak rate testing include thin wall containment vessel construction, cover gas systems that penetrate containment, and a low-pressure design basis accident. The successful completion of the third FFTF integrated leak rate test 5 days ahead of schedule and 10% under budget was a major achievement for the Westinghouse Hanford Company. The success of this operational safety test was due in large part to a special local area network (LAN) of three IBM PC/XT computers, which monitored the sensor data, calculated the containment vessel leak rate, and displayed test results. The equipment configuration allowed continuous monitoring of the progress of the test independent of the data acquisition and analysis functions, and it also provided overall improved system reliability by permitting immediate switching to backup computers in the event of equipment failure
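A common way to compute a containment leak rate from test data is mass-point analysis: convert pressure/temperature readings to contained air mass with the ideal gas law, fit mass versus time, and express the slope in percent per day. The readings and free volume below are synthetic, not FFTF data; a real test (e.g., per ANSI/ANS-56.8) also corrects for vapor pressure and weighs instrument uncertainties:

```python
# Mass-point leak rate sketch with synthetic containment test data.

readings = [          # (time h, absolute pressure kPa, mean gas temperature K)
    (0.0, 200.00, 300.00),
    (6.0, 199.93, 299.98),
    (12.0, 199.86, 300.02),
    (18.0, 199.80, 300.01),
    (24.0, 199.73, 299.99),
]

V = 50_000.0    # net free volume, m^3 (assumed)
R = 287.05      # specific gas constant of air, J/(kg K)

times = [t for t, _, _ in readings]
masses = [p * 1000.0 * V / (R * T) for _, p, T in readings]   # kg, ideal gas law

# Ordinary least-squares slope of mass vs. time ("mass point" fit).
n = len(readings)
tbar = sum(times) / n
mbar = sum(masses) / n
slope = (sum((t - tbar) * (m - mbar) for t, m in zip(times, masses))
         / sum((t - tbar) ** 2 for t in times))                # kg/h

leak_pct_per_day = -slope * 24.0 / masses[0] * 100.0
print(f"measured leak rate: {leak_pct_per_day:.3f} %/day")
```

Because the fit runs over all points, each new reading refines the slope, which is what let a small networked system display a continuously updated leak rate during the test.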

  16. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  17. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  18. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  19. Thermal Distribution System | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    The Energy Systems Integration Facility's thermal distribution bus allows the 60-ton chiller to cool water with continuous thermal control, operating at loads as low as 10% of its full load level.

  20. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  1. Utilizing Computer Integration to Assist Nursing

    OpenAIRE

    Hujcs, Marianne

    1990-01-01

    As the use of computers in health care continues to increase, methods of using these computers to assist nursing practice are also increasing. This paper describes how integration within a hospital information system (HIS) contributed to the development of a report format and computer generated alerts used by nurses. Discussion also includes how the report and alerts impact those nurses providing bedside care as well as how integration of an HIS creates challenges for nursing.

  2. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    The Basis for Design established the functional requirements and design criteria for an Integral Monitored Retrievable Storage (MRS) facility. The MRS Facility design, described in this report, is based on those requirements and includes all infrastructure, facilities, and equipment required to routinely receive, unload, prepare for storage, and store spent fuel (SF), high-level waste (HLW), and transuranic waste (TRU), and to decontaminate and return shipping casks received by both rail and truck. The facility is complete with all supporting facilities to make the MRS Facility a self-sufficient installation

  3. Computer facilities for ISABELLE data handling

    International Nuclear Information System (INIS)

    Kramer, M.A.; Love, W.A.; Miller, R.J.; Zeller, M.

    1977-01-01

    The analysis of data produced by ISABELLE experiments will need a large system of computers. An official group of prospective users and operators of that system should begin planning now. Included in the array will be a substantial computer system at each ISABELLE intersection in use. These systems must include enough computer power to keep experimenters aware of the health of the experiment. This will require at least one very fast sophisticated processor in the system, the size depending on the experiment. Other features of the intersection systems must be a good, high speed graphic display, ability to record data on magnetic tape at 500 to 1000 KB, and a high speed link to a central computer. The operating system software must support multiple interactive users. A substantially larger capacity computer system, shared by the six intersection region experiments, must be available with good turnaround for experimenters while ISABELLE is running. A computer support group will be required to maintain the computer system and to provide and maintain software common to all experiments. Special superfast computing hardware or special function processors constructed with microprocessor circuitry may be necessary both in the data gathering and data processing work. Thus both the local and central processors should be chosen with the possibility of interfacing such devices in mind

  4. Energy Systems Integration Laboratory | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Research in the Energy Systems Integration Laboratory is advancing engineering knowledge and market deployment of hydrogen technologies. Applications include microgrids, energy storage for renewables integration, and home- and station

  5. Integrated Facilities Management and Fixed Asset Accounting.

    Science.gov (United States)

    Golz, W. C., Jr.

    1984-01-01

    A record of a school district's assets--land, buildings, machinery, and equipment--can be a useful management tool that meets accounting requirements and provides appropriate information for budgeting, forecasting, and facilities management. (MLF)

  6. Computing one of Victor Moll's irresistible integrals with computer algebra

    Directory of Open Access Journals (Sweden)

    Christoph Koutschan

    2008-04-01

    We investigate a certain quartic integral from V. Moll's book “Irresistible Integrals” and demonstrate how it can be solved by computer algebra methods, namely by using non-commutative Gröbner bases. We present recent implementations in the computer algebra systems SINGULAR and MATHEMATICA.

  7. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials
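The abstract's claim that scattering-induced resolution broadening diminishes with increasing muon momentum can be sketched with the standard Highland approximation for the RMS multiple-scattering angle. This is an illustrative calculation, not code from the paper; the slab thickness and the lead radiation length used below are assumed example values.

```python
import math

def highland_theta0(p_mev, thickness_mm, rad_length_mm, beta=1.0, charge=1):
    """Highland approximation for the RMS multiple-scattering angle (radians)
    of a singly charged particle traversing a slab of material.
    p_mev is the momentum in MeV/c; beta ~ 1 for relativistic muons."""
    x_over_x0 = thickness_mm / rad_length_mm
    return (13.6 / (beta * p_mev)) * charge * math.sqrt(x_over_x0) * (
        1 + 0.038 * math.log(x_over_x0)
    )

# Scattering blur for muons crossing 10 mm of lead (radiation length ~5.6 mm):
for p_gev in (1, 10, 100):
    theta = highland_theta0(p_gev * 1000.0, 10.0, 5.6)
    print(f"{p_gev:>4} GeV/c -> theta0 ~ {theta * 1e3:.2f} mrad")
```

The angle scales as 1/p, so raising the beam momentum by a factor of ten cuts the multiple-scattering blur by the same factor, at the cost of the reduced contrast and tighter momentum-resolution requirements the abstract describes.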

  8. Bibliography for computer security, integrity, and safety

    Science.gov (United States)

    Bown, Rodney L.

    1991-01-01

    A bibliography of computer security, integrity, and safety issues is given. The bibliography is divided into the following sections: recent national publications; books; journal, magazine articles, and miscellaneous reports; conferences, proceedings, and tutorials; and government documents and contractor reports.

  9. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  10. What Is Energy Systems Integration? | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Energy systems integration (ESI) is an approach to solving big energy challenges that explores ways for energy systems to work together. NREL is a founding member of the International Institute for Energy Systems Integration

  11. Vehicle-to-Grid Integration | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    NREL's research stands at the forefront of vehicle-to-grid integration and vehicle charging stations. Our work focuses on building the infrastructure and integration needed for vehicles and the grid to benefit each other. NREL's research on electric vehicle (EV) grid integration examines

  12. Grid Integration Webinars | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Watch presentations from NREL analysts on various topics related to grid integration, including a webinar on wind curtailment and the value of transmission, examining renewable curtailment under high-wind 2050 scenarios.

  13. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide across more than 50 sites. The operation of the system requires stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with a description of the monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites on which to conduct workflows, in order to maximize workflow efficiency. The performance against these tests seen at the sites during the first years of LHC running is also reviewed.
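As a toy illustration of the kind of site-selection logic the abstract describes (not the actual Site Status Board implementation), a readiness classification might aggregate functional-test pass fractions per site; the `SiteTests` structure and the thresholds below are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class SiteTests:
    name: str
    passed: int  # functional tests passed in the evaluation window
    total: int   # functional tests run

def readiness(site: SiteTests, ready_cut=0.90, warning_cut=0.80) -> str:
    """Classify a site from its test pass fraction (toy thresholds)."""
    if site.total == 0:
        return "NOT_READY"
    frac = site.passed / site.total
    if frac >= ready_cut:
        return "READY"
    if frac >= warning_cut:
        return "WARNING"
    return "NOT_READY"

sites = [SiteTests("T2_Good", 95, 100), SiteTests("T2_Shaky", 84, 100),
         SiteTests("T2_Down", 10, 100)]
# Workflows would be routed only to sites classified READY:
good = [s.name for s in sites if readiness(s) == "READY"]
print(good)  # ['T2_Good']
```

The real system aggregates many heterogeneous metrics (transfers, job robot results, downtime records) over sliding windows, but the principle of turning continuous test results into a discrete routing decision is the same.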

  14. Analysis on working pressure selection of ACME integral test facility

    International Nuclear Information System (INIS)

    Chen Lian; Chang Huajian; Li Yuquan; Ye Zishen; Qin Benke

    2011-01-01

    An integral effects test facility, the advanced core cooling mechanism experiment facility (ACME), was designed to verify the performance of the passive safety system of a pressurized water reactor power plant and to validate its safety analysis codes. Three test facilities for the AP1000 design are introduced and reviewed, and the problems resulting from the different working pressures of these facilities are analyzed. A detailed description is then presented of the selection of the ACME working pressure and its characteristics, and the approach to establishing the desired initial test conditions is discussed. The selected working pressure of 9.3 MPa covers almost all important passive safety system operating conditions and enables ACME to simulate LOCAs with the same pressure and property similitude as the prototype. The ACME design is expected to be an advanced core cooling integral test facility design. (authors)

  15. Integration of Biosafety into Core Facility Management

    Science.gov (United States)

    Fontes, Benjamin

    2013-01-01

    This presentation will discuss the implementation of biosafety policies for small, medium and large core laboratories with primary shared objectives of ensuring the control of biohazards to protect core facility operators and assure conformity with applicable state and federal policies, standards and guidelines. Of paramount importance is the educational process to inform core laboratories of biosafety principles and policies and to illustrate the technology and process pathways of the core laboratory for biosafety professionals. Elevating awareness of biohazards and the biosafety regulatory landscape among core facility operators is essential for the establishment of a framework for both project and material risk assessment. The goal of the biohazard risk assessment process is to identify the biohazard risk management parameters to conduct the procedure safely and in compliance with applicable regulations. An evaluation of the containment, protective equipment and work practices for the procedure for the level of risk identified is facilitated by the establishment of a core facility registration form for work with biohazards and other biological materials with potential risk. The final step in the biocontainment process is the assumption of Principal Investigator role with full responsibility for the structure of the site-specific biosafety program plan by core facility leadership. The presentation will provide example biohazard protocol reviews and accompanying containment measures for core laboratories at Yale University.

  16. Computers in experimental nuclear power facilities

    International Nuclear Information System (INIS)

    Jukl, M.

    1982-01-01

    The CIS 3000 information system, used for monitoring the operating modes of large technological equipment, is described. The CIS system consists of two ADT computers, an external drum store, an analog input side, a binary input side, 4 control consoles with monitors and acoustic signalling, a print-out area with typewriters and punching machines, and linear recorders. Various applications of the installed CIS configuration are described, as is the general-purpose program for processing measured values into a protocol. The program operates in conversational mode. Different processing variants are shown on the display monitor. (M.D.)

  17. Call Centre- Computer Telephone Integration

    Directory of Open Access Journals (Sweden)

    Dražen Kovačević

    2012-10-01

    Call centres largely came into being as a result of consumer needs converging with enabling technology, and by companies recognising the revenue opportunities generated by meeting those needs, thereby increasing customer satisfaction. Regardless of the specific application or activity of a Call centre, customer satisfaction with the interaction is critical to the revenue generated or protected by the Call centre. Physically, a Call centre set-up is a place that includes computer, telephone and supervisor stations. A Call centre can be available 24 hours a day - when the customer wants to make a purchase, needs information, or simply wishes to register a complaint.

  18. Integration of Biosafety into Core Facility Management

    OpenAIRE

    Fontes, Benjamin

    2013-01-01

    This presentation will discuss the implementation of biosafety policies for small, medium and large core laboratories with primary shared objectives of ensuring the control of biohazards to protect core facility operators and assure conformity with applicable state and federal policies, standards and guidelines. Of paramount importance is the educational process to inform core laboratories of biosafety principles and policies and to illustrate the technology and process pathways of the core l...

  19. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    Uram, Thomas D; LeCompte, Thomas J; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  20. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  1. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  2. Natural circulation in an integral CANDU test facility

    International Nuclear Information System (INIS)

    Ingham, P.J.; Sanderson, T.V.; Luxat, J.C.; Melnyk, A.J.

    2000-01-01

    Over 70 single- and two-phase natural circulation experiments have been completed in the RD-14M facility, an integral CANDU thermalhydraulic test loop. This paper describes the RD-14M facility and provides an overview of the impact of key parameters on the results of natural circulation experiments. Particular emphasis will be on phenomena which led to heat up at high system inventories in a small subset of experiments. Clarification of misunderstandings in a recently published comparison of the effectiveness of natural circulation flows in RD-14M to integral facilities simulating other reactor geometries will also be provided. (author)

  3. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  4. Westinghouse integrated cementation facility. Smart process automation minimizing secondary waste

    International Nuclear Information System (INIS)

    Fehrmann, H.; Jacobs, T.; Aign, J.

    2015-01-01

    The Westinghouse Cementation Facility described in this paper is an example of a typical standardized turnkey project in the area of waste management. The facility is able to handle NPP waste such as evaporator concentrates, spent resins and filter cartridges. The facility scope covers all equipment required for a fully integrated system, including all required auxiliary equipment for the hydraulic, pneumatic and electric control systems. The control system is based on current PLC technology and the process is highly automated. The equipment is designed to be remotely operated under radiation exposure conditions. Four cementation facilities have been built for new CPR-1000 nuclear power stations in China

  5. Design of an integrated non-destructive plutonium assay facility

    International Nuclear Information System (INIS)

    Moore, C.B.

    1984-01-01

    The Department of Energy requires improved technology for nuclear materials accounting as an essential part of new plutonium processing facilities. New facilities are being constructed at the Savannah River Plant by the Du Pont Company, Operating Contractor, to recover plutonium from scrap and waste material generated at SRP and other DOE contract processing facilities. This paper covers design concepts and planning required to incorporate state-of-the-art plutonium assay instruments developed at several national laboratories into an integrated, at-line nuclear material accounting facility operating in the production area. 3 figures

  6. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    Directory of Open Access Journals (Sweden)

    Olga M. Naumenko

    2010-09-01

    The article considers the history of views on the tasks of education and estimates of its effectiveness from the point of view of forming basic, vitally important competences. Views on this problem in different countries and international organizations, and the corresponding experience of the Ukrainian education system, are described. The necessity of forming the informative competence of future teachers is substantiated, under the conditions of applying computer-oriented teaching facilities in the study of natural-science subjects at pedagogical colleges. Prognostic estimates concerning the development of methods of applying computer-oriented teaching facilities are presented.

  7. Microgrids | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Diesel generators: because diesel generators are traditional microgrid components, three different generator sets with various control options are available for microgrid integration efforts. DC power supplies: the ESIF's full suite of DC simulation capability includes a 1.5-MW photovoltaic (PV) simulator

  8. Double-shell tank waste transfer facilities integrity assessment plan

    International Nuclear Information System (INIS)

    Hundal, T.S.

    1998-01-01

    This document presents the integrity assessment plan for the existing double-shell tank waste transfer facilities system in the 200 East and 200 West Areas of the Hanford Site. The plan identifies and proposes the integrity assessment elements and techniques to be performed for each facility. Integrity assessments of existing tank systems that store or treat dangerous waste are required for compliance with the Washington State Department of Ecology Dangerous Waste Regulations, Washington Administrative Code WAC-173-303-640

  9. Criticality safety considerations. Integral Monitored Retrievable Storage (MRS) Facility

    International Nuclear Information System (INIS)

    1986-09-01

    This report summarizes the criticality analysis performed to address criticality safety concerns and to support facility design during the conceptual design phase of the Monitored Retrievable Storage (MRS) Facility. The report addresses the criticality safety concerns, the design features of the facility relative to criticality, and the results of the analysis of both normal operating and hypothetical off-normal conditions. Key references are provided (Appendix C) if additional information is desired by the reader. The MRS Facility design was developed and the related analysis was performed in accordance with the MRS Facility Functional Design Criteria and the Basis for Design. The detailed description and calculations are documented in the Integral MRS Facility Conceptual Design Report. In addition to the summary portion of this report, explanatory notes for various terms, calculation methodology, and design parameters are presented in Appendix A. Appendix B provides a brief glossary of technical terms

  10. Computer security at ukrainian nuclear facilities: interface between nuclear safety and security

    International Nuclear Information System (INIS)

    Chumak, D.; Klevtsov, O.

    2015-01-01

    Active introduction of information technology, computer instrumentation and control systems (I and C systems) in the nuclear field leads to greater efficiency and better management of technological processes at nuclear facilities. However, this trend brings a number of challenges related to cyber-attacks on the above elements, which violate computer security as well as the nuclear safety and security of a nuclear facility. This paper considers regulatory support for computer security at nuclear facilities in Ukraine. The issue of computer and information security is considered in the context of physical protection, because it is an integral component of it. The paper focuses on the computer security of I and C systems important to nuclear safety. These systems are potentially vulnerable to cyber threats and, in the case of cyber-attacks, the potential negative impact on normal operational processes can lead to a breach of nuclear facility security. Since ensuring the security of I and C systems interacts with nuclear safety, the paper considers an example of an integrated approach to the requirements of nuclear safety and security

  11. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

    The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, it faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark announced in IAEA-TECDOC-1694, within the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance', was solved in support of the Generation IV computer code validation effort, using the MCNPX ver. 2.6e computational code. IAEA-TECDOC-1694 summarizes a set of four calculational benchmark problems performed at the ASTRA critical facility. The benchmark problems include criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA critical facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)

  12. An integrated computational tool for precipitation simulation

    Science.gov (United States)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer aided materials design is of increasing interest because the conventional approach solely relying on experimentation is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.
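The coupling the abstract describes (a kinetic precipitation solver fed by a thermodynamic engine) can be sketched at its simplest with diffusion-controlled precipitate growth. Everything below (the diffusivity, the compositions, the `grow` helper) is an illustrative assumption, not PanPrecipitation's actual model.

```python
# Minimal sketch of diffusion-controlled precipitate growth, the kind of
# kinetic step a tool like PanPrecipitation couples with thermodynamic
# calculations. All numerical values are illustrative assumptions.

def grow(radius_m, dt_s, steps, D=1e-18, c_matrix=0.05,
         c_interface=0.02, c_precip=0.25):
    """Explicit-Euler integration of dR/dt = (D/R) * supersaturation ratio.
    In a real tool, c_interface would come from the thermodynamic engine
    (local equilibrium at the interface), not a constant."""
    k = (c_matrix - c_interface) / (c_precip - c_interface)
    r = radius_m
    for _ in range(steps):
        r += dt_s * D * k / r
    return r

r0 = 1e-9  # 1 nm seed particle
r1 = grow(r0, dt_s=1.0, steps=3600)   # one hour of growth
r2 = grow(r0, dt_s=1.0, steps=14400)  # four hours of growth
print(r1, r2)  # radius roughly doubles when time quadruples (R ~ sqrt(t))
```

The parabolic R ~ sqrt(t) behaviour falls out of the integration; an integrated tool replaces the constant supersaturation with compositions recomputed from the thermodynamic database at every step, which is what makes the coupling to an engine like PanEngine essential.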

  13. Modern integrated environmental monitoring and processing systems for nuclear facilities

    International Nuclear Information System (INIS)

    Oprea, I.

    2000-01-01

    presentation by using on-line dynamic evolution of the events, environment information, evacuation optimization, image and voice processing. These modern systems are proposed for environmental monitoring around nuclear facilities, as open interactive systems supporting the operator in the global overview of the environment and the status of the situation updating the remote GIS data base, assuring man-computer interaction and a good information flow for emergency knowledge exchange, improving the protection of the population and decision makers efforts. The local monitoring systems could be integrated into national or international environmental monitoring systems, achieving desired interoperability between government, civilian and army in disaster preparedness efforts

  14. Computer Security at Nuclear Facilities. Reference Manual (Arabic Edition)

    International Nuclear Information System (INIS)

    2011-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. 
This publication is in the Technical Guidance

  15. Computer Security at Nuclear Facilities. Reference Manual (Russian Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. 
This publication is in the Technical Guidance

  16. Computer Security at Nuclear Facilities. Reference Manual (Chinese Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    The possibility that nuclear or other radioactive material could be used for malicious purposes cannot be ruled out in the current global situation. States have responded to this risk by engaging in a collective commitment to strengthen the protection and control of such material and to respond effectively to nuclear security events. States have agreed to strengthen existing instruments and have established new international legal instruments to enhance nuclear security worldwide. Nuclear security is fundamental in the management of nuclear technologies and in applications where nuclear or other radioactive material is used or transported. Through its Nuclear Security Programme, the IAEA supports States to establish, maintain and sustain an effective nuclear security regime. The IAEA has adopted a comprehensive approach to nuclear security. This recognizes that an effective national nuclear security regime builds on: the implementation of relevant international legal instruments; information protection; physical protection; material accounting and control; detection of and response to trafficking in such material; national response plans; and contingency measures. With its Nuclear Security Series, the IAEA aims to assist States in implementing and sustaining such a regime in a coherent and integrated manner. The IAEA Nuclear Security Series comprises Nuclear Security Fundamentals, which include objectives and essential elements of a State's nuclear security regime; Recommendations; Implementing Guides; and Technical Guidance. Each State carries the full responsibility for nuclear security, specifically: to provide for the security of nuclear and other radioactive material and associated facilities and activities; to ensure the security of such material in use, storage or in transport; to combat illicit trafficking and the inadvertent movement of such material; and to be prepared to respond to a nuclear security event. 
This publication is in the Technical Guidance

  17. Integrated Disposal Facility FY2010 Glass Testing Summary Report

    International Nuclear Information System (INIS)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Serne, R. Jeffrey; Mattigod, Shas V.

    2010-01-01

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 × 10⁵ m³ of glass (Puigh 1999). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex and is one of the largest inventories (approximately 0.89 × 10¹⁸ Bq total activity) of long-lived radionuclides, principally ⁹⁹Tc (t₁/₂ = 2.1 × 10⁵ years), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed of, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2010 toward implementing the strategy with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses. The emphasis in FY2010 was on completing an evaluation of the most sensitive kinetic rate law parameters used to predict glass weathering, documented in Bacon and Pierce (2010), and transitioning from the use of the Subsurface Transport Over Reactive Multi-phases to Subsurface Transport Over Multiple Phases computer code for near-field calculations.
The FY2010 activities also consisted of developing a Monte Carlo and Geochemical Modeling framework that links glass composition to alteration phase formation by (1) determining the structure of unreacted and reacted glasses for use as input information into Monte Carlo

  18. Integrated Disposal Facility FY2010 Glass Testing Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Serne, R Jeffrey; Mattigod, Shas V.

    2010-09-30

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 × 10⁵ m³ of glass (Puigh 1999). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex and is one of the largest inventories (approximately 0.89 × 10¹⁸ Bq total activity) of long-lived radionuclides, principally ⁹⁹Tc (t₁/₂ = 2.1 × 10⁵ years), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed of, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2010 toward implementing the strategy with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses. The emphasis in FY2010 was on completing an evaluation of the most sensitive kinetic rate law parameters used to predict glass weathering, documented in Bacon and Pierce (2010), and transitioning from the use of the Subsurface Transport Over Reactive Multi-phases to Subsurface Transport Over Multiple Phases computer code for near-field calculations. The FY2010 activities also consisted of developing a Monte Carlo and Geochemical Modeling framework that links glass composition to alteration phase formation by (1) determining the structure of unreacted and reacted glasses for use as input information into Monte Carlo
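
    The record above refers to "kinetic rate law parameters" without spelling out the rate law. Glass corrosion models of this kind commonly use a transition-state-theory expression of the following general form (the symbols and their roles are the standard ones for this class of model, not taken from this record):

```latex
% General TST-type rate law commonly used for glass dissolution:
r = k_0 \, 10^{\eta\,\mathrm{pH}} \,
    \exp\!\left(-\frac{E_a}{RT}\right)
    \left[\, 1 - \left(\frac{Q}{K_g}\right)^{\sigma} \right]
```

    where r is the dissolution rate, k₀ the intrinsic rate constant, η the pH power-law coefficient, Eₐ the activation energy, Q the ion-activity product of the rate-limiting reaction, K_g its pseudo-equilibrium constant, and σ the Temkin coefficient. Sensitivity evaluations like the one described above typically vary parameters drawn from this set.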

  19. Evaluation of scaling concepts for integral system test facilities

    International Nuclear Information System (INIS)

    Condie, K.G.; Larson, T.K.; Davis, C.B.

    1987-01-01

    A study was conducted by EG and G Idaho, Inc., to identify and technically evaluate potential concepts which will allow the U.S. Nuclear Regulatory Commission to maintain the capability to conduct future integral, thermal-hydraulic facility experiments of interest to light water reactor safety. This paper summarizes the methodology used in the study and presents rankings for each facility concept relative to its ability to simulate phenomena identified as important in selected reactor transients in Babcock and Wilcox and Westinghouse large pressurized water reactors. Established scaling methodologies are used to develop potential concepts for scaled integral thermal-hydraulic experiment facilities. Concepts selected included: full height, full pressure water; reduced height, reduced pressure water; reduced height, full pressure water; one-tenth linear, full pressure water; and reduced height, full scaled pressure Freon. Results from this study suggest that a facility capable of operating at typical reactor operating conditions will scale most phenomena reasonably well. Local heat transfer phenomena are best scaled by the full height facility, while the reduced height facilities provide better scaling where multi-dimensional phenomena are considered important. Although many phenomena in facilities using Freon or water at nontypical pressure will scale reasonably well, those phenomena which are heavily dependent on quality can be distorted. Furthermore, relation of data produced in facilities operating with nontypical fluids or at nontypical pressures to large plants will be a difficult and time-consuming process
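
    As an illustration of the kind of scaling relations at issue (these are the standard Ishii-type similitude ratios for single-phase natural circulation, given here as background rather than as criteria from this study), a reduced-height facility with length ratio l_R preserves similarity when velocity, time, and power density scale as:

```latex
u_R = l_R^{1/2}, \qquad
t_R = l_R^{1/2}, \qquad
q'''_R = l_R^{-1/2}
```

    so a half-height facility, for example, runs at roughly 1.4 times the prototype power density and its transients evolve about 1.4 times faster, which is one reason reduced-height data require careful interpretation.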

  20. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    Krammen, J.

    1985-01-01

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  1. Computer-Assisted School Facility Planning with ONPASS.

    Science.gov (United States)

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  2. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    Science.gov (United States)

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  3. Integral Monitored Retrievable Storage (MRS) Facility conceptual basis for design

    International Nuclear Information System (INIS)

    1985-10-01

    The purpose of the Conceptual Basis for Design is to provide a control document that establishes the basis for executing the conceptual design of the Integral Monitored Retrievable Storage (MRS) Facility. This conceptual design shall provide the basis for preparation of a proposal to Congress by the Department of Energy (DOE) for construction of one or more MRS Facilities for storage of spent nuclear fuel, high-level radioactive waste, and transuranic (TRU) waste. 4 figs., 25 tabs

  4. Integrating Computer-Mediated Communication Strategy Instruction

    Science.gov (United States)

    McNeil, Levi

    2016-01-01

    Communication strategies (CSs) play important roles in resolving problematic second language interaction and facilitating language learning. While studies in face-to-face contexts demonstrate the benefits of communication strategy instruction (CSI), there have been few attempts to integrate computer-mediated communication and CSI. The study…

  5. PANDA: A Multipurpose Integral Test Facility for LWR Safety Investigations

    International Nuclear Information System (INIS)

    Paladino, D.; Dreier, J.

    2012-01-01

    The PANDA facility is a large scale, multicompartmental thermal hydraulic facility suited for investigations related to the safety of current and advanced LWRs. The facility is multipurpose, and the applications cover integral containment response tests, component tests, primary system tests, and separate effect tests. Experimental investigations carried out in the PANDA facility have been embedded in international projects, most of which under the auspices of the EU and OECD and with the support of a large number of organizations (regulatory bodies, technical support organizations, national laboratories, electric utilities, industries) worldwide. The paper provides an overview of the research programs performed in the PANDA facility in relation to BWR containment systems and those planned for PWR containment systems.

  6. MIMI: multimodality, multiresource, information integration environment for biomedical core facilities.

    Science.gov (United States)

    Szymanski, Jacek; Wilson, David L; Zhang, Guo-Qiang

    2009-10-01

    The rapid expansion of biomedical research has brought substantial scientific and administrative data management challenges to modern core facilities. Scientifically, a core facility must be able to manage experimental workflow and the corresponding set of large and complex scientific data. It must also disseminate experimental data to relevant researchers in a secure and expedient manner that facilitates collaboration and provides support for data interpretation and analysis. Administratively, a core facility must be able to manage the scheduling of its equipment and to maintain a flexible and effective billing system to track material, resource, and personnel costs and charge for services to sustain its operation. It must also have the ability to regularly monitor the usage and performance of its equipment and to provide summary statistics on resources spent on different categories of research. To address these informatics challenges, we introduce a comprehensive system called MIMI (multimodality, multiresource, information integration environment) that integrates the administrative and scientific support of a core facility into a single web-based environment. We report the design, development, and deployment experience of a baseline MIMI system at an imaging core facility and discuss the general applicability of such a system in other types of core facilities. These initial results suggest that MIMI will be a unique, cost-effective approach to addressing the informatics infrastructure needs of core facilities and similar research laboratories.

  7. Concept of development of integrated computer - based control system for 'Ukryttia' object

    International Nuclear Information System (INIS)

    Buyal'skij, V.M.; Maslov, V.P.

    2003-01-01

    The structural concept for developing an integrated computer-based control system (CCS) for the Chernobyl NPP 'Ukryttia' Object is presented, based on the general CCS design process for organizational and technical management subjects. The concept applies state-of-the-art architectural design techniques and allows modern computer-aided facilities to be used for developing the functional model, the information (logical and physical) models, and the object model of the system under design

  8. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov (United States)

    With greater integration of renewable energy sources, utilities can operate more efficiently and profitably. Wind energy integration, however, presents a challenge to utility companies, grid operators, and other stakeholders. A recording is available from the July 16 webinar "Smart Grid Research at NREL's Energy Systems Integration Facility"

  9. Structural integrity monitoring of critical components in nuclear facilities

    International Nuclear Information System (INIS)

    Roth, Maria; Constantinescu, Dan Mihai; Brad, Sebastian; Ducu, Catalin; Malinovschi, Viorel

    2007-01-01

    Full text: The paper presents the results obtained as part of the Project 'Integrated Network for Structural Integrity Monitoring of Critical Components in Nuclear Facilities', RIMIS, a research work underway within the framework of the Ministry of Education and Research Programme 'Research of Excellence'. The main objective of the Project is to constitute a network integrating the national R and D institutes with preoccupations in the structural integrity assessment of critical components in the nuclear facilities operating in Romania, in order to elaborate a specific procedure for this field. The degradation mechanisms of the structural materials used in the CANDU type reactors, operated by Unit 1 and Unit 2 at Cernavoda (pressure tubes, fuel elements sheaths, steam generator tubing) and in the nuclear facilities relating to reactors of this type as, for instance, the Hydrogen Isotopes Separation facility, will be investigated. The development of a flexible procedure will offer the opportunity to extend the applications to other structural materials used in the nuclear field and in the non-nuclear fields as well, in cooperation with other institutes involved in the developed network. The expected results of the project will allow the integration of the network developed at national level in the structures of similar networks operating within the EU, the enhancement of the scientific importance of Romanian R and D organizations as well as the increase of our country's contribution in solving the major issues of the nuclear field. (authors)

  10. Probabilistic data integration and computational complexity

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: If all information available (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem either through an analytical description of the combined probability function, or sampling the probability function. In practice however, probabilistic based data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But, another source of computational complexity is related to how the individual types of information are quantified. In one case a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Due to one type of information being too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem, with no biases. In another case it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave-equation) can lead to a difficult data integration problem, with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and lead to biased results, and under
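
    The probabilistic combination described above can be sketched in toy form: two independent sources of information about one quantity are each quantified as a density, and their product is sampled with a plain Metropolis walker. All names and numbers here are illustrative, not taken from the study:

```python
import math
import random

def gaussian_logpdf(x, mu, sigma):
    # Log-density of a 1D Gaussian (constant terms that cancel are omitted).
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

def combined_logpdf(x):
    # Two independent sources of information about the same quantity;
    # probabilistic integration multiplies densities, i.e. adds log-densities.
    return gaussian_logpdf(x, 0.0, 1.0) + gaussian_logpdf(x, 2.0, 1.0)

def metropolis(logpdf, x0, n_steps, step=1.0, seed=42):
    # Plain random-walk Metropolis sampler of an unnormalized log-density.
    rng = random.Random(seed)
    x, lp = x0, logpdf(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = logpdf(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp  # accept the proposal
        samples.append(x)
    return samples

samples = metropolis(combined_logpdf, x0=0.0, n_steps=20000)
burn = samples[5000:]  # discard burn-in
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
# The product of N(0,1) and N(2,1) is proportional to N(1, variance 0.5),
# so the sample mean should land near 1 and the variance near 0.5.
```

    When one source is made far too confident (e.g. sigma shrunk by orders of magnitude), the product density narrows around a possibly wrong value and the walker mixes poorly; this is the "too informative, hence conflicting" pathology the abstract describes.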

  11. Energy Systems Integration Facility (ESIF) Facility Stewardship Plan: Revision 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Torres, Juan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Anderson, Art [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-02

    The U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), has established the Energy Systems Integration Facility (ESIF) on the campus of the National Renewable Energy Laboratory (NREL) and has designated it as a DOE user facility. This 182,500-ft² research facility provides state-of-the-art laboratory and support infrastructure to optimize the design and performance of electrical, thermal, fuel, and information technologies and systems at scale. This Facility Stewardship Plan provides DOE and other decision makers with information about the existing and expected capabilities of the ESIF and the expected performance metrics to be applied to ESIF operations. This plan is a living document that will be updated and refined throughout the lifetime of the facility.

  12. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and minimizing the downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
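
    As a toy illustration of the rolling-update pattern mentioned above (the host names and the capacity rule are hypothetical, not the INFN-Torino procedure), hypervisors are drained, re-imaged, and restarted one at a time so that running capacity never falls below a floor:

```python
def rolling_update(hosts, new_image, min_up):
    """Drain, re-image, and restart hypervisors one at a time, never
    letting the number of running hosts fall below min_up."""
    events = []
    for host in hosts:
        running = sum(1 for h in hosts if h["state"] == "up")
        if running - 1 < min_up:
            raise RuntimeError("update would violate the capacity floor")
        host["state"] = "draining"        # live-migrate its VMs elsewhere
        events.append(("drain", host["name"]))
        host["image"] = new_image         # swap in the new virtual image
        host["state"] = "up"              # rejoin the resource pool
        events.append(("restart", host["name"]))
    return events

hosts = [{"name": f"hv{i}", "image": "v1", "state": "up"} for i in range(4)]
events = rolling_update(hosts, "v2", min_up=3)
# All four hosts end on the new image; at most one host is ever out of
# the pool, which is what keeps the overall service downtime minimal.
```

    The same pattern applies whether the "image" is an OpenNebula virtual machine template or a contextualized base image: the per-host swap is cheap precisely because the software stack lives in the image rather than on the host.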

  13. Path-integral computation of superfluid densities

    International Nuclear Information System (INIS)

    Pollock, E.L.; Ceperley, D.M.

    1987-01-01

    The normal and superfluid densities are defined by the response of a liquid to sample boundary motion. The free-energy change due to uniform boundary motion can be calculated by path-integral methods from the distribution of the winding number of the paths around a periodic cell. This provides a conceptually and computationally simple way of calculating the superfluid density for any Bose system. The linear-response formulation relates the superfluid density to the momentum-density correlation function, which has a short-ranged part related to the normal density and, in the case of a superfluid, a long-ranged part whose strength is proportional to the superfluid density. These facts are discussed in the context of path-integral computations and demonstrated for liquid ⁴He along the saturated vapor-pressure curve. Below the experimental superfluid transition temperature the computed superfluid fractions agree with the experimental values to within the statistical uncertainties of a few percent in the computations. The computed transition is broadened by finite-sample-size effects
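
    The winding-number relation summarized above takes, for a cubic cell of side L in three dimensions, the standard Pollock-Ceperley form:

```latex
\frac{\rho_s}{\rho}
  = \frac{m \,\langle W^2 \rangle\, L^2}{3\,\hbar^2 \beta N},
\qquad \beta = \frac{1}{k_B T}
```

    where W is the total (dimensionless) winding number of the particle paths around the periodic cell, N the number of particles, and m the particle mass; the factor of 3 averages over the three spatial directions.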

  14. Integrated biofuel facility, with carbon dioxide consumption and power generation

    Energy Technology Data Exchange (ETDEWEB)

    Powell, E.E.; Hill, G.A. [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Chemical Engineering

    2009-07-01

    This presentation provided details of an economical design for a large-scale integrated biofuel facility for coupled production of bioethanol and biodiesel, with carbon dioxide capture and power generation. Several designs were suggested for both batch and continuous culture operations, taking into account all costs and revenues associated with the complete plant integration. The microalgae species Chlorella vulgaris was cultivated in a novel photobioreactor (PBR) in order to consume industrial carbon dioxide (CO₂). This photosynthetic culture can also act as a biocathode in a microbial fuel cell (MFC), which when coupled to a typical yeast anodic half cell, results in a complete biological MFC. The photosynthetic MFC produces electricity as well as valuable biomass and by-products. The use of this novel photosynthetic microalgae cathodic half cell in an integrated biofuel facility was discussed. A series of novel PBRs for continuous operation can be integrated into a large-scale bioethanol facility, where the PBRs serve as cathodic half cells and are coupled to the existing yeast fermentation tanks which act as anodic half cells. These coupled MFCs generate electricity for use within the biofuel facility. The microalgae growth provides oil for biodiesel production, in addition to the bioethanol from the yeast fermentation. The photosynthetic cultivation in the cathodic PBR also requires carbon dioxide, resulting in consumption of carbon dioxide from bioethanol production. The paper also discussed the effect of plant design on net present worth and internal rate of return. tabs., figs.
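
    The net present worth and internal rate of return mentioned above can be sketched with a minimal calculation. The cash flows below are invented purely for illustration; they are not the plant economics from this presentation:

```python
def npv(rate, cashflows):
    # cashflows[t] is the net cash flow at the end of year t
    # (t = 0 is the initial capital outlay, entered as a negative number).
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    # Bisection on the sign change of NPV(rate); assumes the cash flow
    # sequence has a single sign change, so there is one real IRR.
    f_lo = npv(lo, cashflows)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        f_mid = npv(mid, cashflows)
        if f_lo * f_mid <= 0:
            hi = mid                # root lies in [lo, mid]
        else:
            lo, f_lo = mid, f_mid   # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical plant: a 10 (in millions of currency units) outlay,
# then net revenue of 2.5 per year for 8 years.
flows = [-10.0] + [2.5] * 8
npw = npv(0.10, flows)   # net present worth at a 10% discount rate
rate = irr(flows)        # internal rate of return
```

    A positive net present worth at the chosen discount rate, or an internal rate of return above the cost of capital, is the usual screen for whether a plant design such as the one described is economically attractive.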

  15. Multiloop Integral System Test (MIST): MIST Facility Functional Specification

    International Nuclear Information System (INIS)

    Habib, T.F.; Koksal, C.G.; Moskal, T.E.; Rush, G.C.; Gloudemans, J.R.

    1991-04-01

    The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST Functional Specification documents as-built design features, dimensions, instrumentation, and test approach. It also presents the scaling basis for the facility and serves to define the scope of work for the facility design and construction. 13 refs., 112 figs., 38 tabs

  16. BWR Full Integral Simulation Test (FIST) program: facility description report

    International Nuclear Information System (INIS)

    Stephens, A.G.

    1984-09-01

    A new boiling water reactor safety test facility (FIST, Full Integral Simulation Test) is described. It will be used to investigate small breaks and operational transients and to tie results from such tests to earlier large-break test results determined in the TLTA. The new facility's full height and prototypical components constitute a major scaling improvement over earlier test facilities. A heated feedwater system, permitting steady-state operation, and a large increase in the number of measurements are other significant improvements. The program background is outlined and program objectives defined. The design basis is presented together with a detailed, complete description of the facility and measurements to be made. An extensive component scaling analysis and prediction of performance are presented

  17. Modern computer hardware and the role of central computing facilities in particle physics

    International Nuclear Information System (INIS)

    Zacharov, V.

    1981-01-01

    Important recent changes in the hardware technology of computer system components are reviewed, and the impact of these changes assessed on the present and future pattern of computing in particle physics. The place of central computing facilities is particularly examined, to answer the important question as to what, if anything, should be their future role. Parallelism in computing system components is considered to be an important property that can be exploited with advantage. The paper includes a short discussion of the position of communications and network technology in modern computer systems. (orig.)

  18. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    OpenAIRE

    Olga M. Naumenko

    2010-01-01

    The article considers the history of views on the tasks of education and estimates of its effectiveness from the point of view of forming basic, vitally important competences. Opinions on the problem in different countries and international organizations, and the corresponding experience of the Ukrainian system of education, are described. The necessity of forming the informative competence of the future teacher is substantiated under the conditions of application of the computer oriented facilities of t...

  19. Australian national networked tele-test facility for integrated systems

    Science.gov (United States)

    Eshraghian, Kamran; Lachowicz, Stefan W.; Eshraghian, Sholeh

    2001-11-01

    The Australian Commonwealth government recently announced a grant of 4.75 million as part of a 13.5 million program to establish a world-class networked IC tele-test facility in Australia. The facility will be based on a state-of-the-art semiconductor tester located at Edith Cowan University in Perth and will operate as a virtual centre spanning Australia. Satellite nodes will be located at the University of Western Australia, Griffith University, Macquarie University, Victoria University and the University of Adelaide. The facility will provide vital equipment to take Australia to the frontier of critically important and expanding fields in microelectronics research and development. The tele-test network will provide a state-of-the-art environment for the electronics and microelectronics research and industry community around Australia to test and prototype Very Large Scale Integrated (VLSI) circuits and other System On a Chip (SOC) devices prior to moving to the manufacturing stage. Such testing is essential to ensure that a device performs to specification. This paper presents the current context in which the testing facility is being established, the methodologies behind the integration of design and test strategies, and the target shape of the tele-test facility.

  20. Annual Summary of the Integrated Disposal Facility Performance Assessment 2012

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, R. [INTERA, Austin, TX (United States); Nichols, W. E. [CH2M HILL Plateau Remediation Company, Richland, WA (United States)

    2012-12-27

    An annual summary of the adequacy of the Hanford Immobilized Low-Activity Waste (ILAW) Performance Assessment (PA) is required each year (DOE O 435.1 Chg 1, DOE M 435.1-1 Chg 1, and DOE/ORP-2000-01). The most recently approved PA is DOE/ORP-2000-24. The ILAW PA evaluated the adequacy of the ILAW disposal facility, now referred to as the Integrated Disposal Facility (IDF), for the safe disposal of vitrified Hanford Site tank waste.

  1. Integral test facilities for validation of the performance of passive safety systems and natural circulation

    International Nuclear Information System (INIS)

    Choi, J. H.

    2010-10-01

    Passive safety systems are becoming an important component in advanced reactor designs. This has led to an international interest in examining natural circulation phenomena, as these may play an important role in the operation of passive safety systems. Understanding reactor system behaviour is a challenging process due to the complex interactions between components and associated phenomena. Properly scaled integral test facilities can be used to explore these complex interactions. In addition, system analysis computer codes can be used as predictive tools in understanding complex reactor system behaviour. However, before a system analysis computer code is applied to reactor design, its capability to make predictions needs to be validated against experimental data from a properly scaled integral test facility. The IAEA has organized a coordinated research project (CRP) on natural circulation phenomena, modelling and reliability of passive systems that utilize natural circulation. This paper presents part of the research results from this CRP and describes representative international integral test facilities that can be used to collect data for reactor types in which natural circulation may play an important role. Example experiments are described along with analyses of these cases in order to examine the ability of system codes to model the phenomena occurring in the test facilities. (Author)

  2. Shieldings for X-ray radiotherapy facilities calculated by computer

    International Nuclear Information System (INIS)

    Pedrosa, Paulo S.; Farias, Marcos S.; Gavazza, Sergio

    2005-01-01

    This work presents a computer-aided methodology for calculating X-ray shielding in radiotherapy facilities. Even today, in Brazil, shielding calculations for X-ray radiotherapy are done on the basis of the NCRP-49 recommendation, which establishes the methodology required for elaborating a shielding project. With regard to high energies, where the construction of a labyrinth is necessary, NCRP-49 is not very clear, so studies were made in this field, resulting in an article that proposes a solution to the problem. A user-friendly program was developed in the Delphi programming language that, through manual entry of a basic architectural design and some parameters, interprets the geometry and calculates the shielding of the walls, ceiling and floor of an X-ray radiotherapy facility. As the final product, the program provides a graphical screen with all the input data, the calculated shielding and the calculation memory. The program can be applied in practical shielding projects for radiotherapy facilities and can also be used didactically, in comparison with NCRP-49.
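
    The core of an NCRP-49-style primary-barrier calculation can be sketched in a few lines. The sketch below is illustrative only and is not the Delphi program described in the abstract; the numeric inputs are invented, and the simplification of using a single tenth-value layer (TVL) for all attenuation is an assumption.

```python
import math

def primary_barrier_thickness(P, d, W, U, T, tvl):
    """Estimate primary-barrier thickness, NCRP-49 style (illustrative).

    P   : permitted dose beyond the barrier (Sv/week)
    d   : source-to-point-of-interest distance (m)
    W   : workload (Gy.m^2/week at 1 m)
    U   : use factor (fraction of beam-on time aimed at this barrier)
    T   : occupancy factor of the area beyond the barrier
    tvl : tenth-value layer of the barrier material (cm)
    """
    B = P * d**2 / (W * U * T)       # required transmission factor
    if B >= 1.0:
        return 0.0                    # no shielding needed
    n = math.log10(1.0 / B)          # number of tenth-value layers
    return n * tvl                    # barrier thickness in cm

# Invented example numbers, not a design calculation:
t = primary_barrier_thickness(P=1e-4, d=5.0, W=350.0, U=0.25, T=1.0, tvl=35.0)
```

    A real shielding project would additionally distinguish the first TVL from equilibrium TVLs, treat scattered and leakage radiation, and handle the maze geometry that the abstract mentions.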

  3. Advances in Integrated Computational Materials Engineering "ICME"

    Science.gov (United States)

    Hirsch, Jürgen

    The methods of Integrated Computational Materials Engineering that were developed and successfully applied for aluminium have been constantly improved. The main aspects and recent advances of integrated material and process modeling are simulations of material properties, like strength and forming properties, and of the specific microstructure evolution during processing (rolling, extrusion, annealing) under the influence of material constitution and process variations, through the production process down to the final application. Examples are discussed for the through-process simulation of microstructures and related properties of aluminium sheet, including DC ingot casting, pre-heating and homogenization, hot and cold rolling, and final annealing. New results are included on the simulation of solution annealing and age hardening of 6xxx alloys for automotive applications. Physically based quantitative descriptions and computer-assisted evaluation methods are new ICME means of integrating new simulation tools, also for customer applications such as heat-affected zones in the welding of age-hardening alloys. The aspects of estimating the effect of specific elements due to growing recycling volumes, requested also for high-end aluminium products, are discussed as well, being of special interest to the aluminium-producing industries.

  4. ICAT: Integrating data infrastructure for facilities based science

    International Nuclear Information System (INIS)

    Flannery, Damian; Matthews, Brian; Griffin, Tom; Bicarregui, Juan; Gleaves, Michael; Lerusse, Laurent; Downing, Roger; Ashton, Alun; Sufi, Shoaib; Drinkwater, Glen; Kleese van Dam, Kerstin

    2009-01-01

    Scientific facilities, in particular large-scale photon and neutron sources, have demanding requirements to manage the increasing quantities of experimental data they generate in a systematic and secure way. In this paper, we describe the ICAT infrastructure for cataloguing facility-generated experimental data, which has been in development within STFC and DLS for several years. We consider the factors which have influenced its design and describe its architecture and metadata model, a key tool in the management of data. We go on to give an outline of its current implementation and use, with plans for its future development.

  5. Vehicle Testing and Integration Facility; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-03-02

    Engineers at the National Renewable Energy Laboratory’s (NREL’s) Vehicle Testing and Integration Facility (VTIF) are developing strategies to address two separate but equally crucial areas of research: meeting the demands of electric vehicle (EV) grid integration and minimizing fuel consumption related to vehicle climate control. Dedicated to renewable and energy-efficient solutions, the VTIF showcases technologies and systems designed to increase the viability of sustainably powered vehicles. NREL researchers instrument every class of on-road vehicle, conduct hardware and software validation for EV components and accessories, and develop analysis tools and technology for the Department of Energy, other government agencies, and industry partners.

  6. Study on system integration of robots operated in nuclear fusion facility and nuclear power plant facilities

    International Nuclear Information System (INIS)

    Oka, Kiyoshi

    2004-07-01

    Present-day robots are expected to serve many fields, such as amusement, welfare and protection against disasters. There are, however, only a limited number of robots that can work under actual conditions as a robot system, for the following reasons: (1) a robot system cannot be realized merely by collecting the elemental technologies; (2) the performance of a robot is determined by that of the integrated system, composed of complicated elements with many functions; and (3) the respective elements have to be optimized in the integrated robot system with a good balance among them, through examination, adjustment and improvement. Therefore, the system integration of a robot composed of a large number of elements is the most critical issue in realizing a robot system for actual use. In the present paper, I describe the approaches and elemental technologies necessary to solve the system integration issues of typical robot systems for maintenance in the nuclear fusion facility and for rescue in accidents at nuclear power plant facilities. These robots work in place of humans under intense radiation and in restricted spaces. In particular, I propose a new approach to the system integration of robots for actual use, from the viewpoint not only of the environment and working conditions but also of restructuring and optimizing the required elemental technologies with a good balance within the robot system. Based on this approach, I contribute to realizing robot systems that work under actual conditions for maintenance in the nuclear fusion facility and for rescue in accidents at nuclear power plant facilities. (author)

  7. An integral effect test facility of the SMART, SMART ITL

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Sik; Moon, Sang Ki; Kim, Yeon Sik; Cho, Seok; Choi, Ki Yong; Bae, Hwang; Kim, Dong Eok; Choi, Nam Hyun; Min, Kyoung Ho; Ko, Yung Joo; Shin, Yong Cheol; Park, Rae Joon; Lee, Won Jae; Song, Chul Hwa; Yi, Sung Jae [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    SMART (System-integrated Modular Advanced ReacTor) is a 330 MWth integral pressurized water reactor (iPWR) developed by KAERI, which obtained standard design approval (SDA) from the Korean regulatory authority in July 2012. In the SMART design, the main components, including a pressurizer, reactor coolant pumps and steam generators, are installed in a single reactor pressure vessel without any large connecting pipes. As the LBLOCA scenario is thereby inherently excluded, the safety systems could be simplified to ensure safety during SBLOCA scenarios and other system transients. An integral effect test loop for the SMART (SMART ITL), also called FESTA, was designed to simulate the integral thermal-hydraulic behavior of the SMART. The objectives of the SMART ITL are to investigate and understand the integral performance of reactor systems and components and the thermal-hydraulic phenomena occurring in the system during normal, abnormal and emergency conditions, and to verify the system safety during various design basis events of the SMART. The integral effect test data will also be used to validate the related thermal-hydraulic models of safety analysis codes such as TASS/SMR-S, which is used for performance and accident analysis of the SMART design. This paper introduces the scaling analysis and scientific design of the SMART ITL, the integral test facility of the SMART, and presents its scaling analysis results.

  8. Integrating the Media Computation API with Pythy, an Online IDE for Novice Python Programmers

    OpenAIRE

    Athri, Ashima

    2015-01-01

    Improvements in both software and curricula have helped introductory computer science courses attract and retain more students. Pythy is one such online learning environment that aims to reduce software setup related barriers to learning Python while providing facilities like course management and grading to instructors. To further enable its goals of being beginner-centric, we want to integrate full support for media-computation-style programming activities. The media computation curriculum ...

  9. COMPUTER INTEGRATED MANUFACTURING: OVERVIEW OF MODERN STANDARDS

    Directory of Open Access Journals (Sweden)

    A. Pupena

    2016-09-01

    Full Text Available The article deals with the modern international standards ISA-95 and ISA-88 on the development of computer integrated manufacturing. The scope of the standards is shown in the context of a hierarchical model of the enterprise. The article is organized so as to describe the essence of the standards in the light of the basic descriptive models: product definition, resources, schedules and the actual performance of production activity. The product definition is described through the hierarchical presentation of products at the various levels of management. Much attention is given to describing equipment, the type of resource that forms the logical chain through all these standards. For example, the batch process control standard shows the relationship between the definition of a product and the equipment on which it is made. The article shows the planning hierarchy ERP-MES/MOM-SCADA (in terms of the ISA-95 standard), which traces the decomposition of overall enterprise production plans into specific operations at the process control level. The role of actual production performance at the MES/MOM level is considered with respect to KPIs. A generalized picture of operational activity at the MES/MOM level is shown via general diagrams of the relationships among activities and the information flows between functions. The article concludes with a substantiation of the necessity of distributing, approving and developing the ISA-88 and ISA-95 standards in Ukraine. The article is an overview and can be useful to specialists in computer-integrated control systems and the management of industrial enterprises, to system integrators and to suppliers.

  10. COGMIR: A computer model for knowledge integration

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.X.

    1988-01-01

    This dissertation explores some aspects of knowledge integration, namely, accumulation of scientific knowledge and performing analogical reasoning on the acquired knowledge. Knowledge to be integrated is conveyed by paragraph-like pieces referred to as documents. By incorporating some results from cognitive science, the Deutsch-Kraft model of information retrieval is extended to a model for knowledge engineering, which integrates acquired knowledge and performs intelligent retrieval. The resulting computer model is termed COGMIR, which stands for a COGnitive Model for Intelligent Retrieval. A scheme, named query invoked memory reorganization, is used in COGMIR for knowledge integration. Unlike some other schemes which realize knowledge integration through subjective understanding by representing new knowledge in terms of existing knowledge, the proposed scheme suggests recording, at storage time, only the possible connections of knowledge acquired from different documents. The actual binding of the knowledge acquired from different documents is deferred to query time. There is only one way to store knowledge and numerous ways to utilize the knowledge. Each document can be represented as a whole as well as by its meaning. In addition, since facts are constructed from the documents, document retrieval and fact retrieval are treated in a unified way. When the requested knowledge is not available, query invoked memory reorganization can generate suggestions based on available knowledge through analogical reasoning. This is done by revising the algorithms developed for document retrieval and fact retrieval, and by incorporating Gentner's structure mapping theory. Analogical reasoning is treated as a natural extension of intelligent retrieval, so that two previously separate research areas are combined. A case study is provided. All the components are implemented as list structures similar to relational databases.
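
    The key idea of deferring binding to query time can be illustrated with a toy sketch. This is not the original COGMIR implementation (which used list structures and a richer meaning representation); the shared-term index below is an invented stand-in for the "possible connections" recorded at storage time.

```python
from collections import defaultdict

class Memory:
    """Toy store: documents kept whole; only candidate links recorded."""

    def __init__(self):
        self.docs = []                    # each document stored as a whole
        self.index = defaultdict(set)     # term -> ids of docs mentioning it

    def store(self, text):
        doc_id = len(self.docs)
        self.docs.append(text)
        for term in set(text.lower().split()):
            self.index[term].add(doc_id)  # record possible connections only
        return doc_id

    def query(self, *terms):
        """Bind documents at query time: docs matching all query terms."""
        ids = set(range(len(self.docs)))
        for term in terms:
            ids &= self.index[term.lower()]
        return [self.docs[i] for i in sorted(ids)]

m = Memory()
m.store("penicillin kills bacteria")
m.store("bacteria cause infection")
hits = m.query("bacteria")    # both documents bound by the shared term
```

    Storage stays cheap and uniform; the expensive combination of knowledge across documents happens only when a query demands it, which is the point of the scheme.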

  11. A Supply Chain Design Problem Integrated Facility Unavailabilities Management

    Directory of Open Access Journals (Sweden)

    Fouad Maliki

    2016-08-01

    Full Text Available A supply chain is a set of facilities connected together in order to provide products to customers. The supply chain is subject to random failures caused by different factors, which make some sites unavailable. Given the current economic context, the management of these unavailabilities is becoming a strategic choice to ensure the desired reliability and availability levels of the different supply chain facilities. In this work, we treat two problems related to the field of supply chain management, namely the design of logistics facilities and the management of their unavailabilities. Specifically, we consider a stochastic distribution network with consideration of supplier selection, distribution centre (DC) location decisions and the management of DC unavailabilities. Two resolution approaches are proposed. The first, called the non-integrated approach, consists of defining the optimal supply chain structure using an optimization approach based on genetic algorithms (GA) and then simulating the supply chain performance in the presence of DC failures. The second, called the integrated approach, considers the design of the supply chain and the management of DC unavailabilities in the same model. Note that in both approaches we replace each unavailable DC by performing a reallocation using the GA. The results of the two approaches are detailed and compared, showing their effectiveness.
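
    A minimal GA for the DC-location decision, with a reallocation step when a DC fails, can be sketched as follows. The instance data (fixed costs, assignment costs) are invented for illustration, and this simplified single-echelon model is an assumption; it is not the authors' stochastic network model.

```python
import random

FIXED = [100, 120, 90, 110]              # opening cost per candidate DC
ASSIGN = [[10, 40, 50, 30],              # ASSIGN[c][d]: cost of serving
          [35, 15, 45, 25],              # customer c from DC d
          [55, 45, 12, 38],
          [28, 28, 40, 14]]

def cost(chromosome, unavailable=frozenset()):
    """Fixed costs of open DCs plus cheapest-available assignment costs."""
    open_dcs = [d for d, bit in enumerate(chromosome)
                if bit and d not in unavailable]
    if not open_dcs:
        return float("inf")
    total = sum(FIXED[d] for d in open_dcs)
    for row in ASSIGN:                   # each customer is (re)allocated to
        total += min(row[d] for d in open_dcs)   # the cheapest open DC
    return total

def ga(pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in FIXED] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(FIXED))
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.1:               # bit-flip mutation
                i = rng.randrange(len(FIXED))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = ga()
normal = cost(best)
degraded = cost(best, unavailable={0})   # DC 0 fails: customers reallocated
```

    Passing `unavailable` to the same cost function is the reallocation step: customers of a failed DC are simply reassigned to the cheapest remaining open DC.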

  12. The Argonne Leadership Computing Facility 2010 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Drugan, C. (LCF)

    2011-05-09

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers.

  13. Integrated Disposal Facility FY2011 Glass Testing Summary Report

    International Nuclear Information System (INIS)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Westsik, Joseph H.

    2011-01-01

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 x 10^5 m^3 of glass (Certa and Wells 2010). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex and is one of the largest inventories (approximately 8.9 x 10^14 Bq total activity) of long-lived radionuclides, principally ^99Tc (t_1/2 = 2.1 x 10^5), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2011 toward implementing the strategy, with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses.

  14. Integrated Disposal Facility FY2011 Glass Testing Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.; Windisch, Charles F.; Cantrell, Kirk J.; Valenta, Michelle M.; Burton, Sarah D.; Westsik, Joseph H.

    2011-09-29

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 x 10^5 m^3 of glass (Certa and Wells 2010). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex and is one of the largest inventories (approximately 8.9 x 10^14 Bq total activity) of long-lived radionuclides, principally ^99Tc (t_1/2 = 2.1 x 10^5), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2011 toward implementing the strategy, with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses.

  15. Integrated optical circuits for numerical computation

    Science.gov (United States)

    Verber, C. M.; Kenan, R. P.

    1983-01-01

    The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.
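
    The proposed pipeline processor for polynomial evaluation maps naturally onto Horner's rule, in which each pipeline stage performs one multiply-accumulate; the abstract does not name the recurrence, so treating it as Horner's scheme is an assumption made here for illustration.

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.

    coeffs are highest-order first: [a_n, ..., a_1, a_0].
    Each loop iteration is one multiply-add, the unit of work a single
    pipeline stage of such a processor would carry out.
    """
    acc = 0
    for a in coeffs:
        acc = acc * x + a    # one multiply-accumulate per stage
    return acc

# 3x^2 + 2x + 1 at x = 2  ->  17
value = horner([3, 2, 1], 2)
```

    In a pipelined optical implementation, successive x values stream through the stages, so after the pipeline fills, one polynomial evaluation completes per clock.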

  16. An integrated approach for facilities planning by ELECTRE method

    Science.gov (United States)

    Elbishari, E. M. Y.; Hazza, M. H. F. Al; Adesta, E. Y. T.; Rahman, Nur Salihah Binti Abdul

    2018-01-01

    Facility planning is concerned with the design, layout, and accommodation of people, machines and the activities of a system. Most researchers investigate the production area layout and the related facilities; however, few investigate the relationship between the production space and the service departments. The aim of this research is to integrate different approaches in order to evaluate, analyse and select the best facilities planning method, one able to explain the relationship between the production area and the other supporting departments and its effect on human effort. To achieve this objective, two different approaches have been integrated: Apple's layout procedure, as one of the effective tools in planning factories, and the ELECTRE method, as one of the multi-criteria decision-making (MCDM) methods, to minimize the risk of poor facilities planning. Dalia Industries was selected as a case study; the factory was divided into two main areas: the whole facility (layout A) and the manufacturing area (layout B). This article is concerned with the manufacturing area layout (layout B). After analysing the gathered data, the manufacturing area was divided into 10 activities. The alternatives were compared on five factors: inter-departmental satisfaction level, total distance travelled by workers, total distance travelled by the product, total travel time for workers, and total travel time for the product. Three layout alternatives were developed in addition to the original layouts. Apple's layout procedure was used to study and evaluate the alternative layouts by calculating scores for each of the factors. After obtaining the scores from evaluating the layouts, the ELECTRE method was used to compare the proposed alternatives with each other and with
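
    The outranking step of ELECTRE I can be sketched as follows. The scores, weights and thresholds below are invented for illustration; in the study, the paper's five factors (satisfaction level, distances and times travelled) would take the place of the criteria columns. All criteria are treated here as "higher is better", which is a simplifying assumption.

```python
def electre1(scores, weights, c_threshold=0.6, d_threshold=0.4):
    """Return the set of (a, b) pairs where alternative a outranks b."""
    n = len(scores)                       # number of alternatives
    m = len(weights)                      # number of criteria
    # per-criterion score range, used to normalize discordance
    span = [max(s[j] for s in scores) - min(s[j] for s in scores) or 1
            for j in range(m)]
    outranks = set()
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # concordance: weight share of criteria where a is >= b
            c = sum(w for j, w in enumerate(weights)
                    if scores[a][j] >= scores[b][j]) / sum(weights)
            # discordance: worst normalized amount by which a loses to b
            d = max((scores[b][j] - scores[a][j]) / span[j]
                    for j in range(m))
            if c >= c_threshold and d <= d_threshold:
                outranks.add((a, b))
    return outranks

# Three hypothetical layout alternatives scored on three criteria:
scores = [[7, 8, 6],     # layout 1
          [5, 5, 5],     # layout 2
          [8, 7, 7]]     # layout 3
rel = electre1(scores, weights=[0.5, 0.3, 0.2])
```

    An alternative that outranks every other and is outranked by none is the recommended choice; when no single winner emerges, the thresholds are tightened or loosened and the analysis repeated.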

  17. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. In addition to their nanoscopic scale, the additional advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher-level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  18. Integrated O&M for energy generation and exchange facilities

    International Nuclear Information System (INIS)

    2016-01-01

    Ingeteam Service, part of the Ingeteam Group, is a leading company in the provision of integrated O&M services at energy generation and exchange facilities worldwide. From its head office in the Albacete Science and Technology Park, it manages the work of the 1,300 employees that make up its global workforce, rendering services to wind farms, PV installations and power generation plants. In addition, it maintains an active participation strategy in a range of R&D+i programmes that improve the existing technologies and are geared towards new production systems and new diagnostic techniques, applied to renewables installation maintenance. (Author)

  19. Integrated network for structural integrity monitoring of critical components in nuclear facilities, RIMIS

    International Nuclear Information System (INIS)

    Roth, Maria; Constantinescu, Dan Mihai; Brad, Sebastian; Ducu, Catalin; Malinovschi, Viorel

    2008-01-01

    The round table aims to bring together specialists from the Romanian R and D institutes and universities involved in the structural integrity assessment of materials, especially those working in the nuclear field, together with representatives of the end user, the Cernavoda NPP. This scientific event will offer the opportunity to disseminate the theoretical, experimental and modelling activities carried out to date in the framework of the National Program 'Research of Excellence', Module I 2006-2008, managed by the National Authority for Scientific Research. Entitled 'Integrated Network for Structural Integrity Monitoring of Critical Components in Nuclear Facilities' (RIMIS), the project has two main objectives: 1. to elaborate a procedure applicable to the structural integrity assessment of critical components used in Romanian nuclear facilities (CANDU-type reactor, hydrogen isotope separation installations); 2. to integrate the national network into a similar one at the European level, to enhance the scientific significance of Romanian R and D organisations and to increase their contribution to solving major issues in the nuclear field. The topics of the round table will be focused on: 1. development of a structural integrity assessment methodology applicable to nuclear facility components; 2. experimental investigation methods and procedures; 3. numerical simulation of nuclear component behaviour; 4. further activities to finalize the assessment procedure. Participation and contributions to sustain the activity in the European Network NULIFE, FP6, will also be discussed. (authors)

  20. Consistent Posttest Calculations for LOCA Scenarios in LOBI Integral Facility

    Directory of Open Access Journals (Sweden)

    F. Reventós

    2012-01-01

Integral test facilities (ITFs) are one of the main tools for the validation of best-estimate thermalhydraulic system codes. The experimental data are also of great value when compared to the experiment-scaled conditions of a full NPP. The LOBI was a single-loop plus a triple-loop (simulated by one loop) test facility, electrically heated to simulate a 1300 MWe PWR. The scaling factor was 712 for the core power, volume, and mass flow. Primary and secondary sides contained all main active elements. Tests were performed for the characterization of phenomenologies relevant to large and small break LOCAs and special transients in PWRs. The paper presents the results of three posttest calculations of LOBI experiments. The selected experiments are BL-30, BL-44, and A1-84. They are LOCA scenarios of different break sizes and with different availability of safety injection components. The goal of the analysis is to improve the knowledge of the phenomena that occurred in the facility in order to use it in further studies related to qualifying nodalizations of actual plants or to establishing accuracy databases for uncertainty methodologies. An example of a procedure for implementing changes in a common nodalization, valid for simulating tests performed in a specific ITF, is presented along with its confirmation based on posttest results.

  1. Integrated Disposal Facility FY 2012 Glass Testing Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, Eric M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kerisit, Sebastien N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Krogstad, Eirik J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Burton, Sarah D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bjornstad, Bruce N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cantrell, Kirk J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Snyder, Michelle MV [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Crum, Jarrod V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Westsik, Joseph H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-03-29

PNNL is conducting work to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility for Hanford immobilized low-activity waste (ILAW). Before the ILAW can be disposed of, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. Key activities in FY12 included upgrading the STOMP/eSTOMP codes for near-field modeling, geochemical modeling of PCT tests to determine the reaction network to be used in the STOMP codes, conducting PUF tests on selected glasses to simulate and accelerate glass weathering, developing a Monte Carlo simulation tool to predict the characteristics of the weathered glass reaction layer as a function of glass composition, and characterizing glasses and soil samples exhumed from an 8-year lysimeter test. The purpose of this report is to summarize the progress made in fiscal year (FY) 2012 and the first quarter of FY 2013 toward implementing the strategy, with the goal of developing an understanding of the long-term corrosion behavior of LAW glasses.

  2. EPICS - MDSplus integration in the ITER Neutral Beam Test Facility

    International Nuclear Information System (INIS)

    Luchetta, Adriano; Manduchi, Gabriele; Barbalace, Antonio; Soppelsa, Anton; Taliercio, Cesare

    2011-01-01

SPIDER, the ITER-size ion-source test bed in the ITER Neutral Beam Test Facility, is a fusion device requiring a complex central system to provide control and data acquisition, referred to as CODAS. The CODAS software architecture will rely on EPICS and MDSplus, two open-source, collaborative software frameworks targeted at control and data acquisition, respectively. EPICS has been selected as the ITER CODAC middleware and, as the final deliverable of the Neutral Beam Test Facility is the procurement of the ITER Heating Neutral Beam Injector, we decided to adopt this ITER technology. MDSplus is a software package for data management supporting advanced concepts such as platform and underlying hardware independence, self-describing data, and a data-driven model. The combined use of EPICS and MDSplus is not new in fusion, but their level of integration will be new in SPIDER, achieved by a more refined data access layer. The paper presents the integration software that allows EPICS and MDSplus to be used together effectively, including the definition of appropriate EPICS records to interact with MDSplus. The MDSplus and EPICS archive concepts are also compared on the basis of performance tests, and data streaming is investigated by ad-hoc measurements.
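The "more refined data access layer" mentioned in the abstract suggests a thin bridge in which every EPICS-style process-variable update is mirrored into an MDSplus-style archive node. The sketch below is illustrative only, in plain Python with invented class and signal names; it does not use the real EPICS or MDSplus APIs.

```python
import time

class ArchiveNode:
    """Minimal stand-in for an MDSplus-style tree node storing a timestamped signal."""
    def __init__(self, path):
        self.path = path
        self.times = []
        self.values = []

    def put_row(self, t, value):
        self.times.append(t)
        self.values.append(value)

class ArchivedPV:
    """EPICS-style process variable that mirrors every update into an archive node."""
    def __init__(self, name, node):
        self.name = name
        self.node = node
        self.value = None

    def put(self, value, t=None):
        self.value = value
        self.node.put_row(time.time() if t is None else t, value)

# Hypothetical signal names, loosely in MDSplus / EPICS naming style.
node = ArchiveNode('\\SPIDER::TOP.DIAG:CURRENT')
pv = ArchivedPV('SPIDER:SRC:CURRENT', node)
for i, amps in enumerate([0.0, 12.5, 13.1]):
    pv.put(amps, t=float(i))
print(node.values)  # [0.0, 12.5, 13.1]
```

In a real deployment the archive side would be an MDSplus tree and the control side an EPICS record; the point of the pattern is that the archive write happens in one place, on every update.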

  3. Integration of radiation and physical safety in large irradiator facilities

    International Nuclear Information System (INIS)

    Lima, P.P.M.; Benedito, A.M.; Lima, C.M.A.; Silva, F.C.A. da

    2017-01-01

Growing international concern about radioactive sources after the September 11, 2001 events has led to a strengthening of physical security. There is evidence that the illicit use of radioactive sources is a real possibility and may result in harmful radiological consequences for the population and the environment. In Brazil there are about 2000 medical, industrial and research facilities with radioactive sources, of which 400 hold sources in Categories 1 and 2 as classified by the International Atomic Energy Agency (IAEA); among these, large irradiators occupy a prominent position due to their very high cobalt-60 activities. Radiological safety is well established in these facilities, due to the intense work of the authorities in the country. The paper presents the main aspects of radiological and physical safety applied in large irradiators, in order to integrate both concepts for the benefit of safety as a whole. The research showed that items related to radiation safety are well defined, for example, the tests on the access control devices to the irradiation room. On the other hand, items related to physical security, such as effective control of access to the company and use of security cameras throughout the company, are not yet fully incorporated. Integration of radiation and physical safety is fundamental for total safety. The elaboration of a Brazilian regulation on the subject is of great importance

  4. Integrated software package for nuclear material safeguards in a MOX fuel fabrication facility

    International Nuclear Information System (INIS)

    Schreiber, H.J.; Piana, M.; Moussalli, G.; Saukkonen, H.

    2000-01-01

Since computerized data processing was introduced to Safeguards at large bulk handling facilities, a large number of individual software applications have been developed for nuclear material Safeguards implementation. Facility inventory and flow data are provided in computerized format for performing stratification, sample size calculation and selection of samples for destructive and non-destructive assay. Data are collected from nuclear measurement systems running in attended and unattended modes and, more recently, from remotely controlled monitoring systems. Data sets from various sources have to be evaluated for Safeguards purposes, such as raw data, processed data and conclusions drawn from data evaluation results. They are reported in computerized format to International Atomic Energy Agency headquarters, and feedback from the Agency's mainframe computer system is used to prepare and support Safeguards inspection activities. The integration of all such data originating from various sources cannot be ensured without the existence of a common data format and a database system. This paper describes the fundamental relations between data streams, individual data processing tools, data evaluation results and the requirements for an integrated software solution to facilitate nuclear material Safeguards at a bulk handling facility. The paper also explains the basis for designing a software package to manage data streams from various data sources and for incorporating diverse data processing tools that until now have been used independently from each other and under different computer operating systems. (author)
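The common data format and database the paper calls for can be pictured as a single relational schema into which heterogeneous measurement streams are normalized, so that every downstream evaluation tool queries one place. A minimal sketch using Python's built-in sqlite3; stream names, strata, item identifiers and masses are all invented for illustration.

```python
import sqlite3

# One common schema for records arriving from attended NDA stations,
# unattended systems, and remote monitoring (all names illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE measurements (
    source TEXT, stratum TEXT, item_id TEXT, pu_grams REAL)""")

streams = [
    ("attended_nda",   [("MOX-A", "ITEM-001", 412.0)]),
    ("unattended",     [("MOX-A", "ITEM-002", 398.5)]),
    ("remote_monitor", [("MOX-B", "ITEM-003", 405.2)]),
]
for source, rows in streams:
    conn.executemany(
        "INSERT INTO measurements VALUES (?, ?, ?, ?)",
        [(source, *r) for r in rows])

# A stratification query that all evaluation tools can now share.
total = conn.execute(
    "SELECT stratum, SUM(pu_grams) FROM measurements "
    "GROUP BY stratum ORDER BY stratum").fetchall()
print(total)
```

Once every source writes into the shared schema, stratification, sample-size calculation and reporting become queries rather than per-tool file parsers.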

  5. Derivation of integral energy balance for the manotea facility

    Energy Technology Data Exchange (ETDEWEB)

    Pollman, Anthony, E-mail: pollman@nps.edu [Mechanical and Aeronautical Engineering Department, United States Naval Postgraduate School, Monterey, CA 93943 (United States); Marzo, Marino di [Fire Protection Engineering Department, University of Maryland, College Park, MD 20742 (United States)

    2013-12-15

Highlights: • An integral energy balance was derived for the MANOTEA facility. • A second equation was derived which frames transients in terms of inventory alone. • Both equations were implemented and showed good agreement with experimental data. • The equations capture the physical mechanisms behind MANOTEA transients. • Physical understanding is required in order to properly model these transients with TRACE. - Abstract: Rapid-condensation-induced fluid motion occurs in several nuclear reactor accident sequences, as well as during normal operation. Modeling these events is central to our ability to regulate and ensure safe reactor operations. The UMD-USNA Near One-dimensional Transient Experimental Apparatus (MANOTEA) was constructed in order to create a rapid-condensation dataset for subsequent comparison to TRACE output. This paper outlines a derivation of the energy balance for the facility. A path integral based on mass and energy considerations, rather than fluid mechanical ones, is derived in order to characterize the physical mechanisms governing MANOTEA transients. This equation is further simplified to obtain an expression that frames transients in terms of liquid inventory alone. Using data obtained from an actual transient, the path integral is implemented using three variables (change in liquid inventory, liquid inventory as a function of time, and change in metal temperature) to predict the outcome of a fourth, independently measured variable (condenser pressure as a function of time). The implementation yields a very good approximation of the actual data. The inventory equation is also implemented and shows reasonable agreement. These equations, and the physical intuition that they yield, are key to properly characterizing MANOTEA transients and any subsequent modeling efforts.
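The abstract does not reproduce the equations, but a generic integral balance of the kind it describes, for a control volume exchanging mass with the loop and heat with the condenser metal, might take a form like the following (illustrative only, not the authors' exact derivation):

```latex
% Illustrative control-volume balance (not the MANOTEA paper's equations):
% internal energy changes through in/out enthalpy flows and heat stored
% in the condenser metal.
\frac{d}{dt}\,(M u) \;=\; \dot{m}_{\mathrm{in}} h_{\mathrm{in}}
  \;-\; \dot{m}_{\mathrm{out}} h_{\mathrm{out}}
  \;-\; (mc)_{\mathrm{metal}}\,\frac{dT_{\mathrm{metal}}}{dt}
```

Integrating a balance of this type along the transient path is what allows measured changes in liquid inventory and metal temperature to predict the condenser pressure history.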

  6. Computer generation of integrands for Feynman parametric integrals

    International Nuclear Information System (INIS)

    Cvitanovic, Predrag

    1973-01-01

    TECO text editing language, available on PDP-10 computers, is used for the generation and simplification of Feynman integrals. This example shows that TECO can be a useful computational tool in complicated calculations where similar algebraic structures recur many times

  7. Integrating ICT with education: using computer games to enhance ...

    African Journals Online (AJOL)

    Integrating ICT with education: using computer games to enhance learning mathematics at undergraduate level. ... This research seeks to look into ways in which computer games as ICT tools can be used to ... AJOL African Journals Online.

  8. Integrated Electrical and Thermal Grid Facility - Testing of Future Microgrid Technologies

    Directory of Open Access Journals (Sweden)

    Sundar Raj Thangavelu

    2015-09-01

This paper describes the Experimental Power Grid Centre (EPGC) microgrid test facility, which was developed to enable research, development and testing for a wide range of distributed generation and microgrid technologies. The EPGC microgrid facility comprises an integrated electrical and thermal grid with a flexible and configurable architecture, and includes various distributed energy resources and emulators, such as generators, renewables, energy storage technologies and programmable load banks. The integrated thermal grid provides an opportunity to harness waste heat produced by the generators for combined heat, power and cooling applications, and supports research in the optimization of combined electrical-thermal systems. Several case studies are presented to demonstrate the testing of different control and operation strategies for storage systems in grid-connected and islanded microgrids. One of the case studies also demonstrates use of the integrated thermal grid to convert waste heat to useful energy, which has thus far resulted in a higher combined energy efficiency. Experiment results confirm that the facility enables testing and evaluation of grid technologies and of practical problems that may not be apparent in a computer-simulated environment.
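As a flavor of the storage control strategies such a facility can exercise, here is a toy dispatch rule for an islanded microgrid: a battery absorbs PV surplus and covers load deficits within power and state-of-charge limits. All capacities, ratings and profiles are invented for illustration; real strategies are far richer.

```python
# Illustrative islanded-microgrid dispatch: a battery absorbs PV surplus and
# covers load deficit within state-of-charge limits (all numbers made up).
def dispatch(load_kw, pv_kw, soc_kwh, cap_kwh=100.0, p_max_kw=50.0, dt_h=1.0):
    surplus = pv_kw - load_kw
    if surplus >= 0:  # charge with the surplus, curtail the rest
        p = min(surplus, p_max_kw, (cap_kwh - soc_kwh) / dt_h)
        return soc_kwh + p * dt_h, 0.0          # new SOC, unserved load (kW)
    deficit = -surplus
    p = min(deficit, p_max_kw, soc_kwh / dt_h)  # discharge to cover deficit
    return soc_kwh - p * dt_h, deficit - p

soc, log = 40.0, []
for load, pv in [(30, 60), (45, 20), (80, 5)]:
    soc, unserved = dispatch(load, pv, soc)
    log.append((round(soc, 1), round(unserved, 1)))
print(log)  # [(70.0, 0.0), (45.0, 0.0), (0.0, 30.0)]
```

The last step shows exactly the kind of practical problem the facility exposes: with the battery empty and PV low, 30 kW of load goes unserved unless another resource picks it up.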

  9. Adequacy of power-to-volume scaling philosophy to simulate natural circulation in Integral Test Facilities

    International Nuclear Information System (INIS)

    Nayak, A.K.; Vijayan, P.K.; Saha, D.; Venkat Raj, V.; Aritomi, Masanori

    1998-01-01

Theoretical and experimental investigations were carried out to study the adequacy of the power-to-volume scaling philosophy for the simulation of natural circulation and to establish the scaling philosophy applicable to the design of the Integral Test Facility (ITF-AHWR) for the Indian Advanced Heavy Water Reactor (AHWR). The results indicate that a reduction in the flow channel diameter of the scaled facility, as required by the power-to-volume scaling philosophy, may affect the simulation of the natural circulation behaviour of the prototype plants. This is caused by distortions due to the inability to simulate the frictional resistance in the scaled facility. Hence, it is recommended that the flow channel diameter of the scaled facility be as close as possible to that of the prototype. This was verified by comparing the natural circulation behaviour of a prototype 220 MWe Indian PHWR and its scaled facility (FISBE-1), designed on the basis of the power-to-volume scaling philosophy. Examinations using a mathematical model and a computer code suggest that FISBE-1 simulates the steady state and the general trend of the transient natural circulation behaviour of the prototype reactor adequately. Finally, the proposed scaling method was applied to the design of the ITF-AHWR. (author)
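The friction distortion the abstract describes can be illustrated with a back-of-the-envelope estimate. Assuming turbulent Blasius friction (f ∝ Re^-0.25) and the same channel length, fluid state and velocity in model and prototype (a simplification for illustration, not the authors' model), shrinking the channel diameter by a factor k inflates the frictional pressure drop by k^-1.25:

```python
# Illustrative estimate of the friction distortion from reduced channel diameter.
# Assumes Blasius friction f ~ Re^-0.25 with length, fluid state and velocity
# held equal between model and prototype (a simplification, not the paper's model).
def friction_dp_ratio(d_model_over_d_proto):
    # dP ~ f * (L/D) * rho*v^2/2, and f ~ (v*D)^-0.25 at fixed v, so:
    # dP_m/dP_p = (D_m/D_p)^-0.25 * (D_p/D_m) = (D_m/D_p)^-1.25
    return d_model_over_d_proto ** -1.25

for k in (1.0, 0.5, 0.1):
    print(f"D_model/D_proto = {k:4.1f} -> dP_m/dP_p = {friction_dp_ratio(k):6.2f}")
```

Even a halved diameter more than doubles the frictional pressure drop under these assumptions, which is why the recommendation is to keep the channel diameter close to the prototype's.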

  10. Summarisation of construction and commissioning experience for nuclear power integrated test facility

    International Nuclear Information System (INIS)

    Xiao Zejun; Jia Dounan; Jiang Xulun; Chen Bingde

    2003-01-01

Since its foundation, the Nuclear Power Institute of China has designed various engineering experimental facilities, constructed a nuclear power experimental research base, and accumulated rich experience in the construction of nuclear power integrated test facilities. The author presents experience in the design, construction and commissioning of a nuclear power integrated test facility

  11. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira, and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance computing architectures. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, thanks to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  12. The development of functional requirement for integrated test facility

    International Nuclear Information System (INIS)

    Sim, B.S.; Oh, I.S.; Cha, K.H.; Lee, H.C.

    1994-01-01

An Integrated Test Facility (ITF) is a human factors experimental environment comprising a nuclear power plant function simulator, man-machine interfaces (MMI), human performance recording systems, and signal control and data analysis systems. In this study, we describe how the functional requirements were developed by identifying both the characteristics of generic advanced control rooms and the research topics of worldwide interest in the human factors community. The functional requirements of the user interface developed in this paper, together with those of the other elements, will be used for the design and implementation of the ITF, which will serve as the basis for experimental research on a range of human factors topics. (author). 15 refs, 1 fig

  13. An Integration Testing Facility for the CERN Accelerator Controls System

    CERN Document Server

    Stapley, N; Bau, J C; Deghaye, S; Dehavay, C; Sliwinski, W; Sobczak, M

    2009-01-01

A major effort has been invested in the design, development, and deployment of the LHC Control System. This large control system is made up of a set of core components and dependencies which, although tested individually, often cannot be tested together on a system capable of representing the complete control system environment, including hardware. Furthermore, this control system is being adapted and applied to CERN's whole accelerator complex, in particular for the forthcoming renovation of the PS accelerators. To ensure quality is maintained as the system evolves, and to improve defect prevention, the Controls Group launched a project to provide a dedicated facility for continuous, automated, integration testing of its core components to incorporate into its production process. We describe the project, initial lessons from its application, its status, and future directions.

  14. MEASURE: An integrated data-analysis and model identification facility

    Science.gov (United States)

    Singh, Jaidip; Iyer, Ravi K.

    1990-01-01

The first phase of the development of MEASURE, an integrated data analysis and model identification facility, is described. The facility takes system activity data as input and produces as output representative behavioral models of the system in near real time. In addition, a wide range of statistical characteristics of the measured system are also available. The use of the system is illustrated with data collected via software instrumentation of a network of SUN workstations at the University of Illinois. Initially, statistical clustering is used to identify high-density regions of resource usage in a given environment. The identified regions form the states for building a state-transition model to evaluate system and program performance in real time. The model is then solved to obtain useful parameters such as the response-time distribution and the mean waiting time in each state. A graphical interface which displays the identified models and their characteristics (with real-time updates) was also developed. The results provide an understanding of resource usage in the system under various workload conditions. This work is targeted at a testbed of UNIX workstations, with the initial phase ported to SUN workstations on the NASA Ames Research Center Advanced Automation Testbed.
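The MEASURE pipeline (cluster usage samples into states, estimate a state-transition model, extract per-state waiting times) can be sketched in a few lines of plain Python. The thresholds, data and three-state partition below are invented for illustration; the real facility applies statistical clustering to multivariate activity data.

```python
# Sketch of the MEASURE idea: map resource-usage samples to states, then
# estimate a state-transition model from the resulting state sequence.
def to_state(cpu):                      # crude 1-D "clustering" into 3 states
    return 0 if cpu < 30 else (1 if cpu < 70 else 2)

usage = [5, 12, 45, 55, 90, 95, 88, 40, 10, 8, 15, 50, 60, 92, 85]
states = [to_state(u) for u in usage]

n = 3
counts = [[0] * n for _ in range(n)]
for a, b in zip(states, states[1:]):    # count observed transitions
    counts[a][b] += 1
rows = [sum(r) for r in counts]
P = [[c / r if r else 0.0 for c in row] for row, r in zip(counts, rows)]

# Mean waiting (sojourn) time in state i, in samples: 1 / (1 - P[i][i])
wait = [1.0 / (1.0 - P[i][i]) if P[i][i] < 1 else float("inf") for i in range(n)]
print([round(w, 2) for w in wait])  # [2.5, 1.67, 4.0]
```

Solving the same transition matrix also yields the other quantities the abstract mentions, such as the stationary distribution over states.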

  15. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    This report presents a summary design description of the Conceptual Design for an Integral Monitored Retrievable Storage (MRS) Facility, as prepared by The Ralph M. Parsons Company under an A-E services contract with the Richland Operations Office of the Department of Energy. More detailed design requirements and design data are set forth in the Basis for Design and Design Report, bound under separate cover and available for reference by those desiring such information. The design data provided in this Design Report Executive Summary, the Basis for Design, and the Design Report include contributions by the Waste Technology Services Division of Westinghouse Electric Corporation (WEC), which was responsible for the development of the waste receiving, packaging, and storage systems, and Golder Associates Incorporated (GAI), which supported the design development with program studies. The MRS Facility design requirements, which formed the basis for the design effort, were prepared by Pacific Northwest Laboratory for the US Department of Energy, Richland Operations Office, in the form of a Functional Design Criteria (FDC) document, Rev. 4, August 1985. 9 figs., 6 tabs

  16. Integration of process computer systems to Cofrentes NPP

    International Nuclear Information System (INIS)

    Saettone Justo, A.; Pindado Andres, R.; Buedo Jimenez, J.L.; Jimenez Fernandez-Sesma, A.; Delgado Muelas, J.A.

    1997-01-01

    The existence of three different process computer systems in Cofrentes NPP and the ageing of two of them have led to the need for their integration into a single real time computer system, known as Integrated ERIS-Computer System (SIEC), which covers the functionality of the three systems: Process Computer (PC), Emergency Response Information System (ERIS) and Nuclear Calculation Computer (OCN). The paper describes the integration project developed, which has essentially consisted in the integration of PC, ERIS and OCN databases into a single database, the migration of programs from the old process computer into the new SIEC hardware-software platform and the installation of a communications programme to transmit all necessary data for OCN programs from the SIEC computer, which in the new configuration is responsible for managing the databases of the whole system. (Author)

  17. Introduction to Large-sized Test Facility for validating Containment Integrity under Severe Accidents

    International Nuclear Information System (INIS)

    Na, Young Su; Hong, Seongwan; Hong, Seongho; Min, Beongtae

    2014-01-01

An overall assessment of containment integrity can be conducted properly by examining the hydrogen behavior in the containment building. Under severe accidents, an amount of hydrogen gas can be generated by metal oxidation and corium-concrete interaction. Hydrogen behavior in the containment building strongly depends on complicated thermal hydraulic conditions with mixed gases and steam. The performance of a PAR can be directly affected by the thermal hydraulic conditions, steam contents, gas mixture behavior and aerosol characteristics, as well as by the operation of other engineered safety systems such as a spray. The models in computer codes for severe accident assessment can be validated based on the experimental results from a large-sized test facility. The Korea Atomic Energy Research Institute (KAERI) is now preparing a large-sized test facility to examine in detail the safety issues related to hydrogen, including the performance of safety devices such as a PAR, in various severe accident situations. This paper introduces the KAERI test facility for validating containment integrity under severe accidents. To validate containment integrity, a large-sized test facility is necessary for simulating the complicated phenomena induced by the large amounts of steam and gases, especially hydrogen, released into the containment building under severe accidents. A pressure vessel 9.5 m in height and 3.4 m in diameter was designed for the KAERI test facility for validating containment integrity, based on the THAI test facility, whose experimental safety and reliable measurement systems have been certified over a long period. This large-sized pressure vessel, operated with steam and with iodine as a corrosive agent, was made of stainless steel 316L because of its corrosion resistance over long operating times, and the vessel was installed at KAERI in March 2014.
In the future, the control systems for temperature and pressure in the vessel will be constructed, and the measurement system

  18. Computer Integration into the Early Childhood Curriculum

    Science.gov (United States)

    Mohammad, Mona; Mohammad, Heyam

    2012-01-01

Navin and Mark are playing at the computer in their preschool classroom. Like the rest of their classmates, these four-year-old children fearlessly experiment with the computer as they navigate through the art program they are using. As they draw and paint on the computer screen, Mark and Navin talk about their creation. "Let's try the stamps" insists…

  19. Integrated numerical platforms for environmental dose assessments of large tritium inventory facilities

    International Nuclear Information System (INIS)

    Castro, P.; Ardao, J.; Velarde, M.; Sedano, L.; Xiberta, J.

    2013-01-01

In the context of a prospective new scenario of large-inventory tritium facilities (KATRIN at TLK, CANDUs, ITER, EAST, and others to come), the dosimetric limits prescribed by ICRP-60 for tritium committed doses are under discussion, requiring, in parallel, that the highly conservative assessments be surmounted by refining dosimetric assessments in many respects. Precise Lagrangian computations of dosimetric cloud evolution after standardized (normal/incidental/SBO) tritium cloud emissions can today be matched numerically to real-time meteorological data and to pattern data at diverse scales for prompt/early and chronic tritium dose assessments. Trends towards integrated numerical platforms for environmental dose assessments of large-inventory tritium facilities are under development.
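A Lagrangian treatment of cloud evolution of the kind the record refers to can be caricatured as a particle random walk: a mean wind advects tracer particles while Gaussian kicks model turbulent diffusion, and a receptor-box count stands in for concentration. All numbers below are invented; a real assessment would couple to measured meteorological fields and tritium dosimetry.

```python
import random

def disperse(n=5000, steps=60, dt=1.0, u=2.0, sigma=0.5, seed=1):
    """Advect n particles with mean wind u plus Gaussian turbulent kicks."""
    rng = random.Random(seed)
    pts = [(0.0, 0.0)] * n
    for _ in range(steps):
        pts = [(x + u * dt + rng.gauss(0.0, sigma),
                y + rng.gauss(0.0, sigma)) for x, y in pts]
    return pts

def fraction_in_box(pts, x0, x1, y0, y1):
    """Fraction of particles inside a receptor box (a concentration proxy)."""
    hit = sum(1 for x, y in pts if x0 <= x <= x1 and y0 <= y <= y1)
    return hit / len(pts)

cloud = disperse()
# After 60 s at 2 m/s the plume centre sits near x = 120 m; per-axis spread
# is roughly 0.5 * sqrt(60) ~ 3.9 m, so a 20 m x 20 m box catches almost all of it.
frac = fraction_in_box(cloud, 110.0, 130.0, -10.0, 10.0)
print(round(frac, 3))
```

Repeating the count over a grid of boxes and times yields the concentration fields from which committed doses are then integrated.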

  20. Mixed Waste Treatment Project: Computer simulations of integrated flowsheets

    International Nuclear Information System (INIS)

    Dietsche, L.J.

    1993-12-01

The disposal of mixed waste, that is, waste containing both hazardous and radioactive components, is a challenging waste management problem of particular concern to DOE sites throughout the United States. Traditional technologies used for the destruction of hazardous wastes need to be re-evaluated for their ability to handle mixed wastes, and in some cases new technologies need to be developed. The Mixed Waste Treatment Project (MWTP) was set up by DOE's Waste Operations Program (EM30) to provide guidance on mixed waste treatment options. One of MWTP's charters is to develop flowsheets for prototype integrated mixed waste treatment facilities which can serve as models for sites developing their own treatment strategies. Evaluation of these flowsheets is being facilitated through the use of computer modelling. The objective of the flowsheet simulations is to provide mass and energy balances, product compositions, and equipment sizing (leading to cost) information. The modelled flowsheets need to be easily modified to examine how alternative technologies and varying feed streams affect the overall integrated process. One such commercially available simulation program is ASPEN PLUS. This report contains details of the ASPEN PLUS program
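The heart of a sequential-modular flowsheet simulation of the kind ASPEN PLUS performs is iterating recycle streams to convergence and checking the overall mass balance. A toy single-recycle flowsheet, with made-up conversion and split fractions (not an ASPEN PLUS model):

```python
# Toy recycle flowsheet in the spirit of a sequential-modular simulator:
# feed -> mixer -> reactor (80% conversion) -> separator (90% of unreacted
# material recycled, the rest purged). All numbers illustrative.
def converge(feed=100.0, conv=0.8, recycle_frac=0.9, tol=1e-9):
    recycle = 0.0
    while True:
        reactor_in = feed + recycle
        unreacted = reactor_in * (1.0 - conv)
        new_recycle = unreacted * recycle_frac
        if abs(new_recycle - recycle) < tol:   # recycle stream has converged
            purge = unreacted - new_recycle
            product = reactor_in * conv
            return reactor_in, product, new_recycle, purge
        recycle = new_recycle

r_in, product, recycle, purge = converge()
# Overall balance: everything fed must leave as product or purge.
print(round(product + purge, 6), round(r_in, 3))  # 100.0 121.951
```

The recycle inflates the reactor throughput well above the fresh feed (121.95 vs 100 units here), which is exactly the kind of equipment-sizing information the flowsheet models are meant to produce.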

  1. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1990-01-01

A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES was designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus, CARES may be used by the NRC to determine the validity and accuracy of analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, to have a user-friendly input/output interface, and to have quick turnaround. The CARES program is structured in a modular format. Each module performs a specific type of analysis. The basic modules of the system are associated with capabilities for static, seismic and nonlinear analyses. This paper describes the various features which have been implemented into the Seismic Module of CARES version 1.0. In Section 2 a description of the Seismic Module is provided. The methodologies and computational procedures thus far implemented into the Seismic Module are described in Section 3. Finally, a complete demonstration of the computational capability of CARES in a typical soil-structure interaction analysis is given in Section 4 and conclusions are presented in Section 5. 5 refs., 4 figs

  2. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

AGIS is the information system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing: flexible utilization of opportunistic Cloud and HPC computing resources, integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified declaration of storage protocols required for the PanDA Pilot site movers, and others.

  3. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1991-01-01

In the process of review and evaluation of licensing issues related to nuclear power plants, it is essential to understand the behavior of seismic loading, foundation and structural properties and their impact on the overall structural response. In most cases, such knowledge can be obtained by using simplified engineering models which, when properly implemented, can capture the essential parameters describing the physics of the problem. Such models do not require execution on large computer systems and can be implemented through a personal computer (PC) based capability. Recognizing the need for a PC software package that can perform the structural response computations required for typical licensing reviews, the US Nuclear Regulatory Commission sponsored the development of the PC-operated computer software package CARES (Computer Analysis for Rapid Evaluation of Structures). This development was undertaken by Brookhaven National Laboratory (BNL) during FYs 1988 and 1989. A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES was designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus, CARES may be used by the NRC to determine the validity and accuracy of analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, to have a user-friendly input/output interface, and to have quick turnaround. This paper describes the various features which have been implemented into the seismic module of CARES version 1.0
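A minimal example of the kind of rapid structural-response computation such a seismic module performs: the response of a damped single-degree-of-freedom oscillator to a base-acceleration pulse, integrated with the standard Newmark average-acceleration scheme. The oscillator and pulse parameters are invented for illustration; CARES's actual methodologies are those described in the paper.

```python
import math

def sdof_peak(freq_hz=2.0, damping=0.05, dt=0.005, accel=()):
    """Peak displacement of a unit-mass SDOF oscillator under base acceleration."""
    wn = 2.0 * math.pi * freq_hz
    m, c, k = 1.0, 2.0 * damping * wn, wn ** 2
    u = v = 0.0
    a = -accel[0] if accel else 0.0    # initial acceleration from rest
    peak = 0.0
    for ag in accel[1:]:
        p = -ag                        # effective force per unit mass
        # Newmark average-acceleration scheme (beta = 1/4, gamma = 1/2)
        k_eff = k + 2.0 * c / dt + 4.0 * m / dt ** 2
        p_eff = (p + m * (4.0 * u / dt ** 2 + 4.0 * v / dt + a)
                 + c * (2.0 * u / dt + v))
        u_new = p_eff / k_eff
        v_new = 2.0 * (u_new - u) / dt - v
        a_new = 4.0 * (u_new - u) / dt ** 2 - 4.0 * v / dt - a
        u, v, a = u_new, v_new, a_new
        peak = max(peak, abs(u))
    return peak

# Half-sine 0.3 g base-acceleration pulse lasting 0.25 s, then free vibration.
g, T, dt = 9.81, 0.25, 0.005
pulse = [0.3 * g * math.sin(math.pi * i * dt / T) if i * dt <= T else 0.0
         for i in range(400)]
peak = sdof_peak(accel=pulse)
print(f"peak displacement ~ {peak:.4f} m")
```

Sweeping the oscillator frequency over such a routine yields a response spectrum, one of the basic products of a rapid seismic evaluation.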

  4. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    Science.gov (United States)

    2017-05-08

AFRL-AFOSR-VA-TR-2017-0102: Integrated Optoelectronic Networks for Application-Driven Multicore Computing; Sudeep Pasricha, Colorado State University; grant FA9550-13-1-0110. Surviving abstract fragment: ...and supportive materials with innovative architectural designs that integrate these components according to system-wide application needs.

  5. Computer science in Dutch secondary education: independent or integrated?

    NARCIS (Netherlands)

    van der Sijde, Peter; Doornekamp, B.G.

    1992-01-01

    Nowadays, in Dutch secondary education, computer science is integrated within school subjects. About ten years ago computer science was considered an independent subject, but in the mid-1980s this idea changed. In our study we investigated whether the objectives of teaching computer science as an

  6. Academic Computing Facilities and Services in Higher Education--A Survey.

    Science.gov (United States)

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  7. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  8. Automated computation of one-loop integrals in massless theories

    International Nuclear Information System (INIS)

    Hameren, A. van; Vollinga, J.; Weinzierl, S.

    2005-01-01

    We consider one-loop tensor and scalar integrals, which occur in a massless quantum field theory, and we report on the implementation into a numerical program of an algorithm for the automated computation of these one-loop integrals. The number of external legs of the loop integrals is not restricted. All calculations are done within dimensional regularization. (orig.)
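The paper above automates general one-loop tensor and scalar integrals in dimensional regularization; that machinery is not reproduced here. As a much smaller, self-contained illustration (my own example, not the authors' code), one massless one-loop integral that happens to be finite in four dimensions, the scalar triangle with all three external legs off shell, can be evaluated directly from its Feynman-parameter representation at a Euclidean point. The parameter-to-invariant labeling below is one common convention and is an assumption of this sketch.

```python
# Hypothetical illustration (not the paper's algorithm): the massless one-loop
# scalar triangle with all external legs off shell is finite in d = 4 and has
# the Feynman-parameter representation
#   I3 = \int_simplex dx dy  1 / F(x, y),   z = 1 - x - y,
#   F  = x*y*(-s3) + y*z*(-s1) + z*x*(-s2)   (Euclidean region, all s_i < 0).
# Midpoint rule on an n-by-n grid restricted to the simplex; the integrand has
# only integrable (logarithmic-strength) singularities at the corners.

def triangle_integral(s1, s2, s3, n=400):
    total = 0.0
    h = 1.0 / n
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            z = 1.0 - x - y
            if z <= 0.0:
                continue  # outside the simplex x + y + z = 1, all positive
            F = x * y * (-s3) + y * z * (-s1) + z * x * (-s2)
            total += h * h / F
    return total

# Symmetric Euclidean point s1 = s2 = s3 = -1; refine to check stability.
coarse = triangle_integral(-1.0, -1.0, -1.0, n=200)
fine = triangle_integral(-1.0, -1.0, -1.0, n=400)
```

Refining the grid and comparing the two results gives a crude error estimate; a production code (like the one the paper describes) would instead use adaptive quadrature and handle the divergent cases via dimensional regularization.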

  9. Heterogeneous Electronics – Wafer Level Integration, Packaging, and Assembly Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This facility integrates active electronics with microelectromechanical (MEMS) devices at the miniature system scale. It obviates current size-, weight-, and power...

  10. Integrated social facility location planning for decision support: Accessibility studies provide support to facility location and integration of social service provision

    CSIR Research Space (South Africa)

    Green, Cheri A

    2012-09-01

Full Text Available ...for two or more facilities, to create an integrated plan for development. Step 6: costing of the development plan. Case study: access norms and threshold guidelines in accessibility analysis. Appropriate norms/provision guidelines facilitate both service... access norms and threshold standards: test the relationship between service demand and the supply (service capacity) of the facility provision points within a defined catchment area; promote the "right-sizing" of facilities relative to the demand...

  11. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    This document, Volume 5 Book 1, contains cost estimate summaries for a monitored retrievable storage (MRS) facility. The cost estimate is based on the engineering performed during the conceptual design phase of the MRS Facility project

  12. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

High reliability is demanded of waste management facilities within the fuel cycle of nuclear power stations; this demand can be met by providing intermediate storage facilities and reserve capacities. This report describes a model, based on the theory of Markov processes, which allows the computation of reliability characteristics of waste management facilities containing intermediate storage. The application of the model is demonstrated by an example. (orig.) [de
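The report's actual Markov model is not reproduced in the abstract, so the following is only a minimal sketch of the technique it names: encode the facility states as a continuous-time Markov chain, solve the balance equations pi Q = 0 with the normalization sum(pi) = 1, and read off availability as the probability of the operational states. The three states and all rates below are invented for illustration.

```python
# Minimal sketch (invented states and rates, not the report's model):
# a processing facility whose failures are bridged by an intermediate store.
# States: 0 = process up; 1 = process down, buffer feeding downstream;
#         2 = buffer exhausted, downstream starved.
from fractions import Fraction

def steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1, exactly, by Gauss-Jordan elimination."""
    n = len(Q)
    # Work with the transposed generator; replace one equation (they are
    # linearly dependent) by the normalization condition.
    A = [[Fraction(Q[j][i]) for j in range(n)] for i in range(n)]
    b = [Fraction(0)] * n
    A[n - 1] = [Fraction(1)] * n
    b[n - 1] = Fraction(1)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

lam = Fraction(1, 100)  # failure rate of the process (per hour, invented)
mu = Fraction(1, 10)    # repair rate
nu = Fraction(1, 50)    # rate at which the intermediate store runs dry
Q = [[-lam, lam, 0],
     [mu, -(mu + nu), nu],
     [mu, 0, -mu]]
pi = steady_state(Q)
availability = pi[0] + pi[1]  # downstream keeps running in states 0 and 1
```

With these rates the downstream availability comes out to 65/66; removing the intermediate store (dropping state 1) would reduce it to mu/(lam + mu) = 10/11, which is the kind of comparison the report's model supports.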

  13. Knowledge Management tools integration within DLR's concurrent engineering facility

    Science.gov (United States)

    Lopez, R. P.; Soragavi, G.; Deshmukh, M.; Ludtke, D.

The complexity of space endeavors has increased the need for Knowledge Management (KM) tools. The concept of KM involves not only the electronic storage of knowledge, but also the process of making this knowledge available, reusable and traceable. Establishing a KM concept within the Concurrent Engineering Facility (CEF) has been a research topic of the German Aerospace Centre (DLR). This paper presents the current KM tools of the CEF: the Software Platform for Organizing and Capturing Knowledge (S.P.O.C.K.), the data model Virtual Satellite (VirSat), and the Simulation Model Library (SimMoLib), and how their usage improved the Concurrent Engineering (CE) process. This paper also presents the lessons learned from the introduction of KM practices into the CEF and sets out a roadmap for the further development of KM in CE activities at DLR. The results of the application of the Knowledge Management tools have shown the potential of merging the three software platforms and their functionalities, as the next step towards the full integration of KM practices into the CE process. VirSat will remain the main software platform used within a CE study, and S.P.O.C.K. and SimMoLib will be integrated into VirSat. These tools will support the data model as a reference and documentation source, and as an access point to simulation and calculation models. The use of KM tools in the CEF aims to become a basic practice during the CE process. The establishment of this practice will result in a much more extensive exchange of knowledge and experience within the Concurrent Engineering environment and, consequently, the outcome of the studies will comprise higher quality in the design of space systems.

  14. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  15. 242-A Evaporator crystallizer facility integrated annual safety appraisal

    International Nuclear Information System (INIS)

    1991-01-01

    This report provides the results of the Fiscal Year (FY) 1991 Annual Integrated Safety Appraisal of the 242-A Evaporator Crystallizer Facility in the Hanford 200 East Area. The appraisal was conducted in December 1990 and January 1991, by the Waste Tank Safety Assurance (WTSA) organizations in conjunction with Radiological Engineering, Criticality Safety, Packaging and Shipping Safety, Emergency Preparedness, Environmental Compliance, and Quality Assurance. Reports of these eight organizations are presented as Sections 2 through 7 of this report. The purpose of the appraisal was to verify that the 242-A Evaporator meets US Department of Energy (DOE) and Westinghouse Hanford Company (WHC) requirements and current industry standards of good practice for the areas being appraised. A further purpose was to identify areas in which program effectiveness could be improved. In accordance with the guidance of WHC Management Requirements and Procedures (MRP)5.6, previously identified deficiencies which are being resolved by line management were not repeated as Findings or Observations unless progress or intended disposition was considered to be unsatisfactory

  16. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing, which so far has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures: it interacts with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment had already deployed IaaS in production during 2013. Keepin...

  17. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a new, emerging technology that has been used in other industries with great success. Despite its great features, cloud computing has not yet been fully utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed system applies cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment.

  18. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un

  19. Computer Security Incident Response Planning at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    The purpose of this publication is to assist Member States in developing comprehensive contingency plans for computer security incidents with the potential to impact nuclear security and/or nuclear safety. It provides an outline and recommendations for establishing a computer security incident response capability as part of a computer security programme, and considers the roles and responsibilities of the system owner, operator, competent authority, and national technical authority in responding to a computer security incident with possible nuclear security repercussions

  20. Scientific computing vol III - approximation and integration

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...

  1. Hardware for computing the integral image

    OpenAIRE

    Fernández-Berni, J.; Rodríguez-Vázquez, Ángel; Río, Rocío del; Carmona-Galán, R.

    2015-01-01

The present invention, as expressed in the statement of this specification, consists of mixed-signal hardware for computing the integral image at the focal plane by means of an array of basic sensing-processing cells whose interconnection can be reconfigured through peripheral circuitry, making possible a very efficient implementation of a processing task highly useful in computer vision, namely the computation of the integral image, in scenarios such as monit...
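The patent above computes the integral image in hardware at the focal plane. For reference, the operation itself (the standard summed-area table, which is what "integral image" conventionally denotes) can be sketched in software, together with the constant-time rectangular region sum that makes it so useful in computer vision:

```python
# Software reference for the operation the patent implements in hardware:
# the integral image (summed-area table) S, where S[i][j] is the sum of all
# pixels above and to the left of (i, j), inclusive.  Any rectangular region
# sum then costs only four table lookups.

def integral_image(img):
    h, w = len(img), len(img[0])
    S = [[0] * w for _ in range(h)]
    for i in range(h):
        row_sum = 0
        for j in range(w):
            row_sum += img[i][j]                       # running sum along the row
            S[i][j] = row_sum + (S[i - 1][j] if i > 0 else 0)
    return S

def region_sum(S, r0, c0, r1, c1):
    """Sum of img[r0..r1][c0..c1] via the four-corner identity."""
    total = S[r1][c1]
    if r0 > 0:
        total -= S[r0 - 1][c1]
    if c0 > 0:
        total -= S[r1][c0 - 1]
    if r0 > 0 and c0 > 0:
        total += S[r0 - 1][c0 - 1]
    return total

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
S = integral_image(img)
```

For example, `region_sum(S, 1, 1, 2, 2)` returns 28, the sum of the bottom-right 2x2 block, without revisiting the pixels.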

  2. Paradox of integration-A computational model

    Science.gov (United States)

    Krawczyk, Małgorzata J.; Kułakowski, Krzysztof

    2017-02-01

    The paradoxical aspect of integration of a social group has been highlighted by Blau (1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire of approval, a sharp transition breaks all friendly relations. However, as was described by Blau, people with high status are inclined to bother more with acceptance of others; this is achieved by praising others and revealing her/his own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  3. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  4. Integrating Computer Concepts into Principles of Accounting.

    Science.gov (United States)

    Beck, Henry J.; Parrish, Roy James, Jr.

    A package of instructional materials for an undergraduate principles of accounting course at Danville Community College was developed based upon the following assumptions: (1) the principles of accounting student does not need to be able to write computer programs; (2) computerized accounting concepts should be presented in this course; (3)…

  5. Integration of case study approach, project design and computer ...

    African Journals Online (AJOL)

    Integration of case study approach, project design and computer modeling in managerial accounting education ... Journal of Fundamental and Applied Sciences ... in the Laboratory of Management Accounting and Controlling Systems at the ...

  6. Microwave integrated circuit mask design, using computer aided microfilm techniques

    Energy Technology Data Exchange (ETDEWEB)

    Reymond, J.M.; Batliwala, E.R.; Ajose, S.O.

    1977-01-01

    This paper examines the possibility of using a computer interfaced with a precision film C.R.T. information retrieval system, to produce photomasks suitable for the production of microwave integrated circuits.

  7. Integrated Computational Material Engineering Technologies for Additive Manufacturing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — QuesTek Innovations, a pioneer in Integrated Computational Materials Engineering (ICME) and a Tibbetts Award recipient, is teaming with University of Pittsburgh,...

  8. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  9. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  10. Distributed and multi-core computation of 2-loop integrals

    International Nuclear Information System (INIS)

    De Doncker, E; Yuasa, F

    2014-01-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals
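The iterated/repeated-1D strategy described above can be shown in miniature: an adaptive Simpson rule (standing in for the QUADPACK routines, which this sketch does not use) integrates an outer variable whose integrand is itself an inner adaptive integral. The outer integrand evaluations are mutually independent, which is exactly the property the paper exploits for OpenMP thread-level parallelism; this sketch is sequential.

```python
# Miniature of the iterated-1D approach (not QUADPACK/PARINT): adaptive
# Simpson quadrature, nested so that one adaptive integral sits inside
# another.  Each outer integrand evaluation is independent of the others,
# which is what makes the outer level easy to parallelize.
import math

def adaptive_simpson(f, a, b, tol=1e-9):
    def simpson(fa, fm, fb, a, b):
        return (b - a) * (fa + 4 * fm + fb) / 6.0
    def recurse(a, b, fa, fm, fb, whole, tol):
        m, lm, rm = (a + b) / 2, (3 * a + b) / 4, (a + 3 * b) / 4
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) <= 15 * tol:
            return left + right + (left + right - whole) / 15.0  # Richardson
        return (recurse(a, m, fa, flm, fm, left, tol / 2) +
                recurse(m, b, fm, frm, fb, right, tol / 2))
    fa, fm, fb = f(a), f((a + b) / 2), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

def iterated_2d(f, ax, bx, ay, by, tol=1e-9):
    # Outer integral over x; each evaluation runs a full inner adaptive
    # integral over y, mirroring the iterated scheme described above.
    return adaptive_simpson(
        lambda x: adaptive_simpson(lambda y: f(x, y), ay, by, tol), ax, bx, tol)

val = iterated_2d(lambda x, y: math.exp(x + y), 0.0, 1.0, 0.0, 1.0)
# exact value: (e - 1)**2
```

Replacing the outer loop's evaluations with a thread pool would give the shape of the parallelization the paper reports, though loop integrals additionally need the extrapolation and singularity handling discussed in the abstract.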

  11. Computer-integrated electric-arc melting process control system

    OpenAIRE

    Дёмин, Дмитрий Александрович

    2014-01-01

Developing common principles for equipping melting-process automation systems with hardware, and creating on this basis rational variants of computer-integrated electric-arc melting control systems, is a relevant task, since it allows a comprehensive approach to the issue of modernizing the melting sections of workshops. This approach allows the computer-integrated electric-arc furnace control system to be formed as part of a queuing system "electric-arc furnace - foundry conveyor" and to consider, when taking ...

  12. Oxy-Combustion Burner and Integrated Pollutant Removal Research and Development Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Mark Schoenfield; Manny Menendez; Thomas Ochs; Rigel Woodside; Danylo Oryshchyn

    2012-09-30

A high flame temperature oxy-combustion test facility, consisting of a 5 MWe equivalent test boiler facility and a 20 KWe equivalent IPR®, was constructed at the Hammond, Indiana manufacturing site. The test facility was operated on natural gas and coal fuels, and parametric studies were performed to determine the optimal performance conditions and to generate the technical data required to demonstrate that the technologies are viable for technical and economic scale-up. Flame temperatures between 4930 and 6120 F were achieved with high flame temperature oxy-natural gas combustion, depending on whether additional recirculated flue gases were added to balance the heat transfer. For high flame temperature oxy-coal combustion, flame temperatures in excess of 4500 F were achieved and demonstrated to be consistent with computational fluid dynamic modeling of the burner system. The project demonstrated the feasibility and effectiveness of the Jupiter Oxygen high flame temperature oxy-combustion process with the Integrated Pollutant Removal process for CCS and CCUS. With these technologies, total parasitic power requirements for both oxygen production and carbon capture are currently in the range of 20% of the gross power output. The Jupiter Oxygen high flame temperature oxy-combustion process has been demonstrated at a Technology Readiness Level of 6 and is ready for commencement of a demonstration project.

  13. Integrating Network Management for Cloud Computing Services

    Science.gov (United States)

    2015-06-01

Report extract; only fragments survive: a backend distributed datastore; high-level objective; network policy; performance metrics; SNAT IP allocation; controller; references to Microsoft Azure (http://azure.microsoft.com/) and Microsoft Azure ExpressRoute (http://azure.microsoft.com/en-us/services/expressroute/); subject terms covering networking technologies, services, and protocols; performance of computer and communication networks; and mobile and wireless communications systems.

  14. An integrated introduction to computer graphics and geometric modeling

    CERN Document Server

    Goldman, Ronald

    2009-01-01

    … this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to 'freedom' curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. …-Computer-Aided Design, 42, 2010… Many books concentrate on computer programming and soon beco

  15. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

Cloud Computing is becoming increasingly relevant, as it will enable the companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will gradually replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  16. The Overview of the National Ignition Facility Distributed Computer Control System

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Carey, R.A.; Estes, C.M.; Fisher, J.M.; Krammen, J.E.; Reed, R.K.; VanArsdall, P.J.; Woodruff, J.P.

    2001-01-01

The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer also comprises a segment of an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented with asynchronous transfer mode (ATM), which delivers video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed-language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008.

  17. Computer integration in the curriculum: promises and problems

    NARCIS (Netherlands)

    Plomp, T.; van den Akker, Jan

    1988-01-01

    This discussion of the integration of computers into the curriculum begins by reviewing the results of several surveys conducted in the Netherlands and the United States which provide insight into the problems encountered by schools and teachers when introducing computers in education. Case studies

  18. Computation of Surface Integrals of Curl Vector Fields

    Science.gov (United States)

    Hu, Chenglie

    2007-01-01

    This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
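The technique the article describes can be checked numerically: by Stokes' theorem, the flux of curl F through any surface bounded by a curve C equals the line integral of F around C, so the (often harder) surface integral reduces to a one-dimensional computation. A small sketch with an example of my choosing:

```python
# Numerical check of the technique: for F = (-y, x, 0), curl F = (0, 0, 2),
# so the flux of curl F through ANY surface bounded by the unit circle in the
# xy-plane equals 2*pi.  Stokes' theorem lets us get this from a line
# integral of F around the boundary curve alone.
import math

def line_integral(F, curve, dcurve, t0, t1, n=10000):
    """Midpoint-rule approximation of the circulation of F along r(t)."""
    h = (t1 - t0) / n
    total = 0.0
    for k in range(n):
        t = t0 + (k + 0.5) * h
        r, dr = curve(t), dcurve(t)   # position and tangent r'(t)
        Ft = F(*r)
        total += sum(a * b for a, b in zip(Ft, dr)) * h  # F . dr
    return total

F = lambda x, y, z: (-y, x, 0.0)
circle = lambda t: (math.cos(t), math.sin(t), 0.0)    # boundary curve C
dcircle = lambda t: (-math.sin(t), math.cos(t), 0.0)  # its tangent

flux = line_integral(F, circle, dcircle, 0.0, 2 * math.pi)
```

The same value would be obtained by parametrizing, say, the upper unit hemisphere and integrating curl F over it directly; the point of the technique is that the line integral avoids that surface parametrization entirely.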

  19. Integrating Computational Chemistry into a Course in Classical Thermodynamics

    Science.gov (United States)

    Martini, Sheridan R.; Hartzell, Cynthia J.

    2015-01-01

    Computational chemistry is commonly addressed in the quantum mechanics course of undergraduate physical chemistry curricula. Since quantum mechanics traditionally follows the thermodynamics course, there is a lack of curricula relating computational chemistry to thermodynamics. A method integrating molecular modeling software into a semester long…

  20. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    This document, Volume 6 Book 1, contains information on design studies of a Monitored Retrievable Storage (MRS) facility. Topics include materials handling; processing; support systems; support utilities; spent fuel; high-level waste and alpha-bearing waste storage facilities; and field drywell storage

  1. An algorithm of computing inhomogeneous differential equations for definite integrals

    OpenAIRE

    Nakayama, Hiromasa; Nishiyama, Kenta

    2010-01-01

We give an algorithm to compute inhomogeneous differential equations for definite integrals with parameters. The algorithm is based on the integration algorithm for $D$-modules by Oaku. The main tool in the algorithm is the Gröbner basis method in the ring of differential operators.
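The Gröbner-basis machinery itself is beyond a short example, but the *object* such algorithms compute can be illustrated elementarily: a definite integral with a parameter satisfies a differential equation in that parameter. For the classical example below (my choice, not from the paper), differentiating under the integral sign and integrating by parts yields a homogeneous first-order equation, which we can verify numerically.

```python
# Elementary illustration of the object computed by such algorithms (not the
# Groebner-basis method itself): the parametric integral
#   I(t) = integral_0^infinity exp(-x^2) cos(t x) dx
# satisfies the differential equation  2 I'(t) + t I(t) = 0
# (differentiate under the integral sign, then integrate by parts), with
# closed form I(t) = (sqrt(pi)/2) exp(-t^2/4).
import math

def I(t, upper=10.0, n=4000):
    """Composite Simpson rule; exp(-x^2) is negligible beyond x = 10."""
    h = upper / n
    s = 1.0 + math.exp(-upper * upper) * math.cos(t * upper)  # endpoints
    for k in range(1, n):
        x = k * h
        s += (4 if k % 2 else 2) * math.exp(-x * x) * math.cos(t * x)
    return s * h / 3.0

t = 1.0
closed_form = (math.sqrt(math.pi) / 2) * math.exp(-t * t / 4)

# Verify the differential equation with a central difference for I'(t):
eps = 1e-3
ode_residual = 2 * (I(t + eps) - I(t - eps)) / (2 * eps) + t * I(t)
```

The paper's contribution is to produce such (generally inhomogeneous) equations automatically for broad classes of integrands, rather than by the ad hoc manipulation used here.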

  2. Integrating Computational Thinking into Technology and Engineering Education

    Science.gov (United States)

    Hacker, Michael

    2018-01-01

    Computational Thinking (CT) is being promoted as "a fundamental skill used by everyone in the world by the middle of the 21st Century" (Wing, 2006). CT has been effectively integrated into history, ELA, mathematics, art, and science courses (Settle, et al., 2012). However, there has been no analogous effort to integrate CT into…

  3. Integrating Computational Science Tools into a Thermodynamics Course

    Science.gov (United States)

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and model and simulate complex concepts. In order to prepare future engineers to use computational tools in the context of their disciplines, some universities have started to integrate these tools within core courses. This paper evaluates the effect of introducing three computational modules within a thermodynamics course on student disciplinary learning and self-beliefs about computation. The results suggest that implementing these modules with worked examples paired with computer simulations has a positive effect on (1) student disciplinary learning, (2) student perceived ability to do scientific computing, and (3) student perceived ability to do computer programming. These effects were identified regardless of the students' prior experience with computer programming.

  4. Strategic interaction among hospitals and nursing facilities: the efficiency effects of payment systems and vertical integration.

    Science.gov (United States)

    Banks, D; Parker, E; Wendel, J

    2001-03-01

    Rising post-acute care expenditures for Medicare transfer patients and increasing vertical integration between hospitals and nursing facilities raise questions about the links between payment system structure, the incentive for vertical integration and the impact on efficiency. In the United States, policy-makers are responding to these concerns by initiating prospective payments to nursing facilities, and are exploring the bundling of payments to hospitals. This paper develops a static profit-maximization model of the strategic interaction between the transferring hospital and a receiving nursing facility. This model suggests that the post-1984 system of prospective payment for hospital care, coupled with nursing facility payments that reimburse for services performed, induces inefficient under-provision of hospital services and encourages vertical integration. It further indicates that the extension of prospective payment to nursing facilities will not eliminate the incentive to vertically integrate, and will not result in efficient production unless such integration takes place. Bundling prospective payments for hospitals and nursing facilities will neither remove the incentive for vertical integration nor induce production efficiency without such vertical integration. However, bundled payment will induce efficient production, with or without vertical integration, if nursing facilities are reimbursed for services performed. Copyright 2001 John Wiley & Sons, Ltd.

  5. An Integrated Computer-Aided Approach for Environmental Studies

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia

    1997-01-01

    A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope and applicability of the integrated approach is highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated.

  6. Computational Design Tools for Integrated Design

    DEFF Research Database (Denmark)

    Holst, Malene Kirstine; Kirkegaard, Poul Henning

    2010-01-01

    In an architectural conceptual sketching process, where an architect is working with the initial ideas for a design, the process is characterized by three phases: sketching, evaluation and modification. Basically the architect needs to address three areas in the conceptual sketching phase: aesthetical, functional and technical requirements. The aim of the present paper is to address the problem of a vague or non-existing link between digital conceptual design tools used by architects and designers and engineering analysis and simulation tools. Based on an analysis of the architectural design process, different digital design methods are related to tasks in an integrated design process.

  7. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)
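The "common intermediates" idea above can be illustrated with a toy recurrence (not the paper's Maple-generated FORTRAN): members of an integral family often satisfy recurrences, so lower members are computed once and shared rather than re-integrated. A hedged Python sketch using the elementary family I_n = ∫₀^∞ xⁿ e^{-x} dx = n·I_{n-1}:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def I(n):
    """I_n = integral of x^n * exp(-x) over [0, inf), via I_n = n * I_{n-1}, I_0 = 1.
    The cache makes every lower-order integral a shared intermediate."""
    return 1.0 if n == 0 else n * I(n - 1)

# Requesting I(5) computes I(0)..I(4) once; any later request reuses them.
print(I(5))
```

Real molecular-integral codes exploit the same pattern on far richer recurrences; the payoff is that the cost grows with the number of distinct intermediates, not the number of requested integrals.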

  8. Computer aided probabilistic assessment of containment integrity

    International Nuclear Information System (INIS)

    Tsai, J.C.; Touchton, R.A.

    1984-01-01

    In the probabilistic risk assessment (PRA) of a nuclear power plant, there are three probability-based techniques which are widely used for event sequence frequency quantification (including nodal probability estimation). These three techniques are the event tree analysis, the fault tree analysis and the Bayesian approach for database development. In the barrier analysis for assessing radionuclide release to the environment in a PRA study, these techniques are employed to a greater extent in estimating conditions which could lead to failure of the fuel cladding and the reactor coolant system (RCS) pressure boundary, but to a lesser degree in the containment pressure boundary failure analysis. The main reason is that containment issues are currently still in a state of flux. In this paper, the authors describe briefly the computer programs currently used by the nuclear industry to do event tree analyses, fault tree analyses and the Bayesian update. The authors discuss how these computer aided probabilistic techniques might be adopted for failure analysis of the containment pressure boundary

  9. Integrated Computer Controlled Glow Discharge Tube

    Science.gov (United States)

    Kaiser, Erik; Post-Zwicker, Andrew

    2002-11-01

    An "Interactive Plasma Display" was created for the Princeton Plasma Physics Laboratory to demonstrate the characteristics of plasma to various science education outreach programs. From high school students and teachers, to undergraduate students and visitors to the lab, the plasma device will be a key component in advancing the public's basic knowledge of plasma physics. The device is fully computer controlled using LabVIEW, a touchscreen Graphical User Interface [GUI], and a GPIB interface. Utilizing a feedback loop, the display is fully autonomous in controlling pressure, as well as in monitoring the safety aspects of the apparatus. With a digital convectron gauge continuously monitoring pressure, the computer interface analyzes the input signals while making changes to a digital flow controller. This function works independently of the GUI, allowing the user to simply input and receive a desired pressure quickly, easily, and intuitively. The discharge tube is a 36 in. x 4 in. id glass cylinder with a 3 in. side port. A 3000 V, 10 mA power supply is used to break down the plasma. A 300-turn solenoid was created to demonstrate the magnetic pinching of a plasma. All primary functions of the device are controlled through the GUI digital controllers. This configuration allows operators to safely control the pressure (100 mTorr-1 Torr), magnetic field (0-90 Gauss, 7 A, 10 V), and finally, the voltage applied across the electrodes (0-3000 V, 10 mA).
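The pressure feedback loop described above can be sketched as a PI controller acting on a simple first-order chamber model. The gains, pump rate, time step, and units below are illustrative assumptions, not values from the actual LabVIEW apparatus:

```python
def simulate(setpoint=0.5, steps=2000, dt=0.05, kp=1.0, ki=0.5, pump=0.3):
    """PI feedback: trim the gas inflow until chamber pressure (arbitrary
    units) settles at the setpoint, as the display's loop trims its
    digital flow controller against the convectron gauge reading."""
    p, integral = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - p
        integral += err * dt
        flow = max(0.0, kp * err + ki * integral)  # commanded inflow, >= 0
        p += (flow - pump * p) * dt                # first-order chamber model
    return p

print(simulate())
```

The integral term removes the steady-state offset a purely proportional loop would leave, which is why the simulated pressure lands on the setpoint rather than just near it.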

  10. Monitored retrievable storage (MRS) facility and salt repository integration: Engineering study report

    International Nuclear Information System (INIS)

    1987-07-01

    This MRS Facility and Salt Repository Integration Study evaluates the impacts of an integrated MRS/Salt Repository Waste Management System on the Salt Repository surface facilities' design, operations, cost, and schedule. Eight separate cases were studied, ranging from a two-phase repository design with no MRS facility to a design in which the repository only received packaged waste from the MRS facility for emplacement. The addition of the MRS facility to the Waste Management System significantly reduced the capital cost of the salt repository. All but one of the cases studied were capable of meeting the waste acceptance dates. The reduction in the size and complexity of the Salt Repository waste handling building with the integration of the MRS facility reduces the design and operating staff requirements. 7 refs., 35 figs., 43 tabs

  11. Development of integrated platform for computational material design

    Energy Technology Data Exchange (ETDEWEB)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato [Center for Computational Science and Engineering, Fuji Research Institute Corporation (Japan); Hideaki, Koike [Advance Soft Corporation (Japan)

    2003-07-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, which is designed for PSE in the Japanese national project of Frontier Simulation Software for Industrial Science, is defined by supporting the entire range of problem solving activity from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is a new architecture called TASK FLOW. It integrates computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover, this system will provide the best solution for developing large and complicated software and simulating complex and large-scale phenomena in computational science and engineering. A prototype has already been developed, and the validation and verification of the integrated platform are scheduled for 2003 using the prototype. In the validation and verification, a fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As further examples of validation and verification, integrated platforms for quantum chemistry and bio-mechanical systems are planned.

  12. Development of integrated platform for computational material design

    International Nuclear Information System (INIS)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato; Hideaki, Koike

    2003-01-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, which is designed for PSE in the Japanese national project of Frontier Simulation Software for Industrial Science, is defined by supporting the entire range of problem solving activity from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is a new architecture called TASK FLOW. It integrates computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover, this system will provide the best solution for developing large and complicated software and simulating complex and large-scale phenomena in computational science and engineering. A prototype has already been developed, and the validation and verification of the integrated platform are scheduled for 2003 using the prototype. In the validation and verification, a fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As further examples of validation and verification, integrated platforms for quantum chemistry and bio-mechanical systems are planned

  13. On-line satellite/central computer facility of the Multiparticle Argo Spectrometer System

    International Nuclear Information System (INIS)

    Anderson, E.W.; Fisher, G.P.; Hien, N.C.; Larson, G.P.; Thorndike, A.M.; Turkot, F.; von Lindern, L.; Clifford, T.S.; Ficenec, J.R.; Trower, W.P.

    1974-09-01

    An on-line satellite/central computer facility has been developed at Brookhaven National Laboratory as part of the Multiparticle Argo Spectrometer System (MASS). This facility, consisting of a PDP-9 and a CDC-6600, has been successfully used in the study of proton-proton interactions at 28.5 GeV/c. (U.S.)

  14. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    This document, Volume 5 Book 7, contains cost estimate information for a monitored retrievable storage (MRS) facility. Cost estimates are for onsite improvements, waste storage, and offsite improvements for the Clinch River Site

  15. Computer applications for the Fast Flux Test Facility

    International Nuclear Information System (INIS)

    Worth, G.A.; Patterson, J.R.

    1976-01-01

    Computer applications for the FFTF reactor include plant surveillance functions and fuel handling and examination control functions. Plant surveillance systems provide the reactor operator with a selection of over forty continuously updated, formatted displays of correlated data. All data are checked for limits and validity and the operator is advised of any anomaly. Data are also recorded on magnetic tape for historical purposes. The system also provides calculated variables, such as reactor thermal power and anomalous reactivity. Supplementing the basic plant surveillance computer system is a minicomputer system that monitors the reactor cover gas to detect and characterize absorber or fuel pin failures. In addition to plant surveillance functions, computers are used in the FFTF for controlling selected refueling equipment and for post-irradiation fuel pin examination. Four fuel handling or examination systems operate under computer control with manual monitoring and over-ride capability

  16. Implementation of computer security at nuclear facilities in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Lochthofen, Andre; Sommer, Dagmar [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    2013-07-01

    In recent years, electrical and I and C components in nuclear power plants (NPPs) were replaced by software-based components. Due to the increased number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples of the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  17. Implementation of computer security at nuclear facilities in Germany

    International Nuclear Information System (INIS)

    Lochthofen, Andre; Sommer, Dagmar

    2013-01-01

    In recent years, electrical and I and C components in nuclear power plants (NPPs) were replaced by software-based components. Due to the increased number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples of the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  18. Computer-aided system for cryogenic research facilities

    International Nuclear Information System (INIS)

    Gerasimov, V.P.; Zhelamsky, M.V.; Mozin, I.V.; Repin, S.S.

    1994-01-01

    A computer-aided system has been developed for more effective choice and optimization of the design and manufacturing technologies of the superconductor for the magnet system of the International Thermonuclear Experimental Reactor (ITER), with the aim of ensuring superconductor certification. The computer-aided system provides acquisition, processing, storage and display of data describing the ongoing tests, the detection of any parameter deviations and their analysis. Besides, it generates commands for equipment switch-off in emergency situations. (orig.)

  19. MIMI: Multimodality, Multiresource, Information Integration Environment for Biomedical Core Facilities

    OpenAIRE

    Szymanski, Jacek; Wilson, David L.; Zhang, Guo-Qiang

    2007-01-01

    The rapid expansion of biomedical research has brought substantial scientific and administrative data management challenges to modern core facilities. Scientifically, a core facility must be able to manage experimental workflow and the corresponding set of large and complex scientific data. It must also disseminate experimental data to relevant researchers in a secure and expedient manner that facilitates collaboration and provides support for data interpretation and analysis. Administrativel...

  20. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  1. Sugarcane agricultural-industrial facilities and greenhouses integration

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Andres da [Estufas Agricolas Comercio e Assessoria Ltda. (EACEA), SP (Brazil)

    2012-07-01

    This chapter addresses the Brazilian greenhouse market and technology, food market trends, integration of bioethanol distilleries with GH production, recovery of CO{sub 2} from the fermentation process, recovery of low temperature energy, use of vinasse and bagasse in GH processes, examples of integrated GHs around the world, an integrated tomato GH case study, and a business model.

  2. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS, and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible utilization of opportunistic Cloud and HPC resources, ObjectStore service integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocol declaration required for PanDA Pilot site movers, and others. Improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  3. CIPSS [computer-integrated process and safeguards system]: The integration of computer-integrated manufacturing and robotics with safeguards, security, and process operations

    International Nuclear Information System (INIS)

    Leonard, R.S.; Evans, J.C.

    1987-01-01

    This poster session describes the computer-integrated process and safeguards system (CIPSS). The CIPSS combines systems developed for factory automation and automated mechanical functions (robots) with varying degrees of intelligence (expert systems) to create an integrated system that would satisfy current and emerging security and safeguards requirements. Specifically, CIPSS is an extension of the automated physical security functions concepts. The CIPSS also incorporates the concepts of computer-integrated manufacturing (CIM) with integrated safeguards concepts, and draws upon the Defense Advanced Research Projects Agency's (DARPA's) strategic computing program

  4. Proposed integrated hazardous waste disposal facility. Public environmental review

    International Nuclear Information System (INIS)

    1998-05-01

    This Public Environmental Report describes a proposal by the Health Department of Western Australia to establish a disposal facility for certain hazardous wastes and seeks comments from government agencies and the public that will assist the EPA in making its recommendations. The facility would only be used for wastes generated in Western Australia. The proposal specifically includes: a high temperature incinerator for the disposal of organo-chlorines (including agricultural chemicals and PCBs) and other intractable wastes for which this is the optimum disposal method; and an area for the burial (after any appropriate conditioning) of low level radioactive intractable wastes arising from the processing of mineral sands (including monazite, ilmenite and zircon) and phosphate rock. Detailed information is presented on those wastes which are currently identified as requiring disposal at the facility. The proposed facility will also be suitable for the disposal of other intractable wastes, including radioactive wastes (from industry, medicine and research) and other solid intractable wastes of a chemical nature, including spent catalysts etc. Proposals to dispose of these other wastes at this facility in the future will be referred to the Environmental Protection Authority for separate assessment

  5. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) fundamental cybersecurity research and development challenges, strategies and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  6. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.
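Kinetic analysis of the SAAM type rests on compartmental models: coupled first-order transfer of material between pools. A minimal two-compartment sketch in Python (the rate constants and dose are made-up illustrative values, not SAAM output; SAAM itself fits such models to data):

```python
def simulate(dose=1.0, k12=0.1, k21=0.05, k10=0.02, dt=0.001, t_end=24.0):
    """Two-compartment model: q1 <-> q2 exchange, elimination k10 from q1.
    Explicit Euler; fluxes are computed first and then applied, so total
    mass (q1 + q2 + eliminated) is conserved by construction."""
    q1, q2, eliminated = dose, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        f12 = k12 * q1 * dt   # compartment 1 -> 2
        f21 = k21 * q2 * dt   # compartment 2 -> 1
        f10 = k10 * q1 * dt   # elimination from compartment 1
        q1 += f21 - f12 - f10
        q2 += f12 - f21
        eliminated += f10
    return q1, q2, eliminated

q1, q2, gone = simulate()
print(q1 + q2 + gone)  # total mass stays equal to the dose
```

Fitting the rate constants to measured tracer curves, rather than simulating forward as here, is the modeling step where SAAM/CONSAM expertise comes in.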

  7. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    Computer simulation of dose distribution using Visual Basic has been done according to the arrangement and activities of Co-60 sources. This program provides the dose distribution in treated products depending on the product density and desired dose. The program is useful for optimization of source distribution during the loading process. There is good agreement between data calculated by the program and experimental data. (Author)
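The kind of calculation such a program performs can be sketched with a point-kernel model: the dose rate at a point is proportional to the sum of source activities over squared distances. The geometry and activities below are illustrative assumptions, and attenuation/buildup in the product (which the density dependence above accounts for) is omitted:

```python
import math

def dose_rate(point, sources):
    """Point-kernel estimate: sum of A / (4*pi*r^2) over Co-60 point sources.
    `sources` is a list of ((x, y, z), activity) pairs; attenuation in the
    product is omitted for brevity."""
    total = 0.0
    for (sx, sy, sz), activity in sources:
        r2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2 + (point[2] - sz) ** 2
        total += activity / (4.0 * math.pi * r2)
    return total

# Two equal sources on the x axis: points mirrored across that axis
# receive the same dose rate, as the symmetry of the rack requires.
rack = [((-1.0, 0.0, 0.0), 1.0), ((1.0, 0.0, 0.0), 1.0)]
print(dose_rate((0.0, 0.5, 0.0), rack))
```

Optimizing a loading pattern then amounts to choosing source positions/activities so that the minimum dose in the product meets the target while the maximum stays within limits.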

  8. NNS computing facility manual P-17 Neutron and Nuclear Science

    International Nuclear Information System (INIS)

    Hoeberling, M.; Nelson, R.O.

    1993-11-01

    This document describes basic policies and provides information and examples on using the computing resources provided by P-17, the Neutron and Nuclear Science (NNS) group. Information on user accounts, getting help, network access, electronic mail, disk drives, tape drives, printers, batch processing software, XSYS hints, PC networking hints, and Mac networking hints is given

  9. CIF: Design basis for an integrated incineration facility

    International Nuclear Information System (INIS)

    Bennett, G.F.

    1991-01-01

    This paper discusses the evolution of the chosen technologies that occurred during the design process of the US Department of Energy (DOE) incineration system designated the Consolidated Incineration Facility (CIF) at the Savannah River Plant, Aiken, South Carolina. The Plant is operated for DOE by the Westinghouse Savannah River Company. The purpose of the incineration system is to treat low level radioactive and/or hazardous liquid and solid wastes by combustion. The objective for the facility is to thermally destroy toxic constituents and volume-reduce waste material. Design criteria require operation to be controlled within the limits of the RCRA permit envelope

  10. CSNI Integral test facility validation matrix for the assessment of thermal-hydraulic codes for LWR LOCA and transients

    International Nuclear Information System (INIS)

    1996-07-01

    This report deals with an internationally agreed integral test facility (ITF) matrix for the validation of best estimate thermal-hydraulic computer codes. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of such a matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated around the world over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case

  11. Integration of computer technology into the medical curriculum: the King's experience

    Directory of Open Access Journals (Sweden)

    Vickie Aitken

    1997-12-01

    Full Text Available Recently, there have been major changes in the requirements of medical education which have set the scene for the revision of medical curricula (Towle, 1991; GMC, 1993). As part of the new curriculum at King's, the opportunity has been taken to integrate computer technology into the course through Computer-Assisted Learning (CAL), and to train graduates in core IT skills. Although the use of computers in the medical curriculum has up to now been limited, recent studies have shown encouraging steps forward (see Boelen, 1995). One area where there has been particular interest is the use of notebook computers to allow students increased access to IT facilities (Maulitz et al, 1996).

  12. Strategies Used by Facilities in Uganda to Integrate Family Planning ...

    African Journals Online (AJOL)

    Erah

    assistance and coaching from the core team, we integrated routine site/field operations. The performance of each site was monitored using the three FP-HIV care integration indicators – proportion of HIV-positive patients of reproductive age who were: 1) counseled on FP methods at every clinic visit, 2) using at least one FP ...

  13. Improving aircraft accident forecasting for an integrated plutonium storage facility

    International Nuclear Information System (INIS)

    Rock, J.C.; Kiffe, J.; McNerney, M.T.; Turen, T.A.

    1998-06-01

    Aircraft accidents pose a quantifiable threat to facilities used to store and process surplus weapon-grade plutonium. The Department of Energy (DOE) recently published its first aircraft accident analysis guidelines: Accident Analysis for Aircraft Crash into Hazardous Facilities. This document establishes a hierarchy of procedures for estimating the small annual frequency for aircraft accidents that impact Pantex facilities and the even smaller frequency of hazardous material released to the environment. The standard establishes a screening threshold of 10^-6 impacts per year; if the initial estimate of impact frequency for a facility is below this level, no further analysis is required. The Pantex Site-Wide Environmental Impact Statement (SWEIS) calculates the aircraft impact frequency to be above this screening level. The DOE Standard encourages more detailed analyses in such cases. This report presents three refinements, namely, removing retired small military aircraft from the accident rate database, correcting the conversion factor from military accident rates (accidents per 100,000 hours) to the rates used in the DOE model (accidents per flight phase), and adjusting the conditional probability of impact for general aviation to more accurately reflect pilot training and local conditions. This report documents a halving of the predicted frequency of an aircraft impact at Pantex and points toward further reductions
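
    The screening arithmetic described above (converting a per-flight-hour accident rate into an annual impact frequency and comparing it against the DOE threshold) can be sketched as follows. All input numbers are hypothetical placeholders for illustration, not values from the report:

```python
# Hedged sketch of the DOE-style screening arithmetic described in the
# abstract. Every numeric input below is a hypothetical placeholder.

def annual_impact_frequency(accidents_per_100k_hours, hours_per_flight_phase,
                            flights_per_year, p_hit_given_accident):
    """Convert a per-flight-hour accident rate into an annual facility
    impact frequency: (rate per phase) x (phases per year) x (conditional
    probability that an accident in that phase impacts the facility)."""
    accidents_per_phase = accidents_per_100k_hours / 100_000 * hours_per_flight_phase
    return accidents_per_phase * flights_per_year * p_hit_given_accident

SCREENING_THRESHOLD = 1e-6  # impacts per year, from the DOE standard

freq = annual_impact_frequency(
    accidents_per_100k_hours=2.0,   # hypothetical military accident rate
    hours_per_flight_phase=0.05,    # hypothetical duration of overflight phase
    flights_per_year=5_000,         # hypothetical traffic near the site
    p_hit_given_accident=3e-4,      # hypothetical conditional impact probability
)
needs_detailed_analysis = freq >= SCREENING_THRESHOLD
```

    With these placeholder inputs the estimate lands above the screening level, which is the situation the report describes for Pantex: refining the rate database and the per-phase conversion factor is what lowers the estimate.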

  14. Computer-aided engineering of semiconductor integrated circuits

    Science.gov (United States)

    Meindl, J. D.; Dutton, R. W.; Gibbons, J. F.; Helms, C. R.; Plummer, J. D.; Tiller, W. A.; Ho, C. P.; Saraswat, K. C.; Deal, B. E.; Kamins, T. I.

    1980-07-01

    Economical procurement of small quantities of high performance custom integrated circuits for military systems is impeded by inadequate process, device and circuit models that handicap low cost computer aided design. The principal objective of this program is to formulate physical models of fabrication processes, devices and circuits to allow total computer-aided design of custom large-scale integrated circuits. The basic areas under investigation are (1) thermal oxidation, (2) ion implantation and diffusion, (3) chemical vapor deposition of silicon and refractory metal silicides, (4) device simulation and analytic measurements. This report discusses the fourth year of the program.

  15. DNA-Enabled Integrated Molecular Systems for Computation and Sensing

    Science.gov (United States)

    2014-05-21

    Computational devices can be chemically conjugated to different strands of DNA that are then self-assembled according to strict Watson-Crick binding rules... The guided folding of DNA, inspired by nature, allows designs to manipulate molecular-scale processes unlike any other material system. Thus, DNA can be

  16. Complexity estimates based on integral transforms induced by computational units

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Vol. 33, September (2012), pp. 160-167 ISSN 0893-6080 R&D Projects: GA ČR GAP202/11/1368 Institutional research plan: CEZ:AV0Z10300504 Institutional support: RVO:67985807 Keywords: neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units Subject RIV: IN - Informatics, Computer Science Impact factor: 1.927, year: 2012

  17. Competitiveness in organizational integrated computer system project management

    Directory of Open Access Journals (Sweden)

    Zenovic GHERASIM

    2010-06-01

    Full Text Available Organizational integrated computer system project management aims at achieving competitiveness through unitary, connected and personalised treatment of the requirements for this type of project, along with adequate application of all the basic principles of management, administration and project planning, as well as of the basic concepts of organisational information management development. The paper presents some aspects of the competitiveness of organizational computer system project management, with specific reference to the projects of some Romanian companies.

  18. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  19. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  20. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.
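
    The FTS-usage survey described above amounts to collecting per-link transfer statistics and spotting links that need re-tuning. The sketch below is illustrative only; the record layout, site names, and numbers are hypothetical, not the actual FTS log format:

```python
# Illustrative sketch of aggregating transfer statistics per site link.
# The tuples below stand in for parsed transfer records; they are NOT
# the real FTS log schema.
from collections import defaultdict

transfers = [  # (source site, destination site, bytes moved, succeeded?)
    ("T1_ES_PIC",  "T2_ES_CIEMAT", 4_000_000_000, True),
    ("T1_ES_PIC",  "T2_ES_CIEMAT", 4_000_000_000, False),
    ("T0_CH_CERN", "T1_ES_PIC",    8_000_000_000, True),
]

stats = defaultdict(lambda: {"attempts": 0, "ok": 0, "bytes_ok": 0})
for src, dst, nbytes, ok in transfers:
    link = stats[(src, dst)]
    link["attempts"] += 1
    if ok:
        link["ok"] += 1
        link["bytes_ok"] += nbytes

# Per-link success rate, e.g. to flag links whose FTS channel settings
# may need adjusting.
rates = {link: s["ok"] / s["attempts"] for link, s in stats.items()}
```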

  1. Performance of simulated flexible integrated gasification polygeneration facilities. Part A: A technical-energetic assessment

    NARCIS (Netherlands)

    Meerman, J.C.; Ramírez Ramírez, C.A.; Turkenburg, W.C.; Faaij, A.P.C.

    2011-01-01

    This article investigates technical possibilities and performances of flexible integrated gasification polygeneration (IG-PG) facilities equipped with CO2 capture for the near future. These facilities can produce electricity during peak hours, while switching to the production of chemicals during

  2. Assessing the economic feasibility of flexible integrated gasification Co-generation facilities

    NARCIS (Netherlands)

    Meerman, J.C.; Ramírez Ramírez, C.A.; Turkenburg, W.C.; Faaij, A.P.C.

    2011-01-01

    This paper evaluated the economic effects of introducing flexibility to state-of-the-art integrated gasification co-generation (IGCG) facilities equipped with CO2 capture. In a previous paper the technical and energetic performances of these flexible IG-CG facilities were evaluated. This paper

  3. Integrating Safeguards into the Pit Disassembly and Conversion Facility

    International Nuclear Information System (INIS)

    Clark, T.G.

    2002-01-01

    In September 2000, the United States and the Russian Federation entered into an agreement which stipulates each country will irreversibly transform 34 metric tons of weapons-grade plutonium into material which could not be used for weapon purposes. Supporting the Department of Energy's (DOE) program to dispose of excess nuclear materials, the Pit Disassembly and Conversion Facility (PDCF) is being designed and constructed to disassemble the weapon "pits" and convert the nuclear material to an oxide form for fabrication into reactor fuel at the separate Mixed Oxide Fuel Fabrication Facility. The PDCF design incorporates automation to the maximum extent possible to facilitate material safeguards, reduce worker dose, and improve processing efficiency. This includes provisions for automated guided vehicle movements for shipping containers, material transport via automated conveyor between processes, remote process control monitoring, and automated Nondestructive Assay product systems

  4. The challenges of integrating multiple safeguards systems in a large nuclear facility

    International Nuclear Information System (INIS)

    Lavietes, A.; Liguori, C.; Pickrell, M.; Plenteda, R.; Sweet, M.

    2009-01-01

    Full-text: Implementing safeguards in a cost-effective manner in large nuclear facilities such as fuel conditioning, fuel reprocessing, and fuel fabrication plants requires the extensive use of instrumentation that is operated in unattended mode. The collected data are then periodically reviewed by the inspectors, either on-site at a central location in the facility or remotely in the IAEA offices. A wide variety of instruments are deployed in large facilities, including video surveillance cameras, electronic sealing devices, non-destructive assay systems based on gamma-ray and neutron detection, load cells for mass measurement, ID readers, and other process-specific monitors. Integrating these different measurement instruments into an efficient, reliable, and secure system requires implementing standardization at various levels throughout the design process. This standardization includes the data generator behaviour and interface, networking solutions, and data security approaches. It will provide a wide range of savings, including reduced training for inspectors and technicians, reduced periodic technical maintenance, reduced spare-parts inventory, increased system robustness, and more predictive system behaviour. The development of standard building blocks will reduce the number of data generators required and allow implementation of simplified architectures that do not require local collection computers but rather transmit the acquired data directly to a central server via Ethernet connectivity. This approach will result in fewer system components and therefore reduced maintenance effort and improved reliability. This paper discusses in detail the challenges and the subsequent solutions in the various areas that the IAEA Department of Safeguards has committed to pursue as the best sustainable way of maintaining the ability to implement reliable safeguards systems. (author)

  5. Results of 15 years experiments in the PMK-2 integral-type facility for VVERs

    Energy Technology Data Exchange (ETDEWEB)

    Szabados, L.; Ezsoel, G.; Perneczky, L. [KFKI Atomic Energy Research Institute, Budapest (Hungary)

    2001-07-01

    Due to the specific features of the VVER-440/213-type reactors the transient behaviour of such a reactor system is different from the usual PWR system behaviour. To provide an experimental database for the transient behaviour of VVER systems the PMK integral-type facility, the scaled-down model of the Paks NPP, was designed and constructed in the early 1980s. Since the start-up of the facility 48 experiments have been performed. It was confirmed through the experiments that the facility is a suitable tool for computer code validation experiments and for the identification of basic thermal-hydraulic phenomena occurring during plant accidents. High international interest was shown by the four Standard Problem Exercises of the IAEA and by the projects financed by the EU-PHARE. A wide range of small- and medium-size LOCA sequences have been studied to determine the performance and effectiveness of ECC systems and to evaluate the thermal-hydraulic safety of the core. Extensive studies have been performed to investigate one- and two-phase natural circulation and the effect of disturbances coming from the secondary circuit, and to validate the effectiveness of accident management measures like bleed and feed. The VVER-specific case, the opening of the SG collector cover, was also extensively investigated. Examples given in the report show a few results of experiments and the results of calculation analyses performed for validation purposes of codes like RELAP5, ATHLET and CATHARE. There are still some white spots in the Cross Reference Matrices for VVER reactors and, therefore, further experiments are planned, primarily in further support of accident management measures at low-power states of plants, to facilitate the improved safety management of VVER-440-type reactors. (authors)

  6. Results of 15 years experiments in the PMK-2 integral-type facility for VVERs

    International Nuclear Information System (INIS)

    Szabados, L.; Ezsoel, G.; Perneczky, L.

    2001-01-01

    Due to the specific features of the VVER-440/213-type reactors the transient behaviour of such a reactor system is different from the usual PWR system behaviour. To provide an experimental database for the transient behaviour of VVER systems the PMK integral-type facility, the scaled-down model of the Paks NPP, was designed and constructed in the early 1980s. Since the start-up of the facility 48 experiments have been performed. It was confirmed through the experiments that the facility is a suitable tool for computer code validation experiments and for the identification of basic thermal-hydraulic phenomena occurring during plant accidents. High international interest was shown by the four Standard Problem Exercises of the IAEA and by the projects financed by the EU-PHARE. A wide range of small- and medium-size LOCA sequences have been studied to determine the performance and effectiveness of ECC systems and to evaluate the thermal-hydraulic safety of the core. Extensive studies have been performed to investigate one- and two-phase natural circulation and the effect of disturbances coming from the secondary circuit, and to validate the effectiveness of accident management measures like bleed and feed. The VVER-specific case, the opening of the SG collector cover, was also extensively investigated. Examples given in the report show a few results of experiments and the results of calculation analyses performed for validation purposes of codes like RELAP5, ATHLET and CATHARE. There are still some white spots in the Cross Reference Matrices for VVER reactors and, therefore, further experiments are planned, primarily in further support of accident management measures at low-power states of plants, to facilitate the improved safety management of VVER-440-type reactors. (authors)

  7. Natural circulation in a scaled PWR integral test facility

    International Nuclear Information System (INIS)

    Kiang, R.L.; Jeuck, P.R. III

    1987-01-01

    Natural circulation is an important mechanism for cooling a nuclear power plant under abnormal operating conditions. To study natural circulation, we modeled a type of pressurized water reactor (PWR) that incorporates once-through steam generators. We conducted tests of single-phase natural circulation, two-phase natural circulation, and a boiler-condenser mode. Because of the complex geometry, the natural circulation observed in this facility exhibits some phenomena not commonly seen in a simple thermosyphon loop
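
    For the simple thermosyphon loop the abstract contrasts with, the steady-state single-phase flow rate follows from balancing the buoyancy head against friction and form losses (a standard textbook form, not the facility-specific model):

```latex
% Steady-state balance for a one-dimensional thermosyphon loop:
% buoyancy head = friction and form losses
\rho g \beta \,\Delta T \, H
  \;=\; \left( f \frac{L}{D} + \sum K \right) \frac{\rho v^{2}}{2}
\quad\Longrightarrow\quad
v \;=\; \sqrt{\frac{2 g \beta \,\Delta T \, H}{f L / D + \sum K}}
```

    Here $\beta$ is the thermal expansion coefficient, $\Delta T$ the hot-to-cold-leg temperature difference, and $H$ the elevation difference between the thermal centres of the heat source and sink; complex multi-loop geometry breaks the assumptions behind this single-loop balance, which is why the facility exhibits additional phenomena.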

  8. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
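
    The multi-cloud integration idea described above can be sketched as a thin common interface over heterogeneous endpoints, so that higher layers submit VM requests without knowing which cloud backs them. The class and method names below are invented for illustration; they are not the actual VMDIRAC API:

```python
# Schematic sketch of a multi-cloud dispatch layer. Names are hypothetical.

class CloudEndpoint:
    """Common interface each cloud driver (EC2, OpenNebula, ...) would implement."""
    def start_vm(self, image: str) -> str: ...
    def stop_vm(self, vm_id: str) -> None: ...

class StubEC2Endpoint(CloudEndpoint):
    """Stand-in driver that records running VMs in memory."""
    def __init__(self):
        self.running = {}
        self._n = 0
    def start_vm(self, image):
        self._n += 1
        vm_id = f"ec2-{self._n}"
        self.running[vm_id] = image
        return vm_id
    def stop_vm(self, vm_id):
        del self.running[vm_id]

class VMScheduler:
    """Dispatches VM requests to whichever endpoint is configured, so the
    rest of the system sees one interface over heterogeneous clouds."""
    def __init__(self, endpoints):
        self.endpoints = endpoints  # name -> CloudEndpoint
    def submit(self, endpoint_name, image):
        return self.endpoints[endpoint_name].start_vm(image)

sched = VMScheduler({"ec2": StubEC2Endpoint()})
vm_id = sched.submit("ec2", "lhcb-worker-image")
```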

  9. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  10. Status of integration of small computers into NDE systems

    International Nuclear Information System (INIS)

    Dau, G.J.; Behravesh, M.M.

    1988-01-01

    Introduction of computers in nondestructive evaluation (NDE) has enabled data acquisition devices to provide more thorough and complete coverage in the scanning process, and has aided human inspectors in their data analysis and decision-making efforts. The price and size/weight of small computers, coupled with recent increases in processing and storage capacity, have made small personal computers (PCs) the most viable platform for NDE equipment. Several NDE systems using minicomputers, and newer PC-based systems capable of automatic data acquisition and knowledge-based analysis of the test data, have been field tested in the nuclear power plant environment and are currently available through commercial sources. While computers have been in common use for several NDE methods during the last few years, their greatest impact has been on ultrasonic testing. This paper discusses the evolution of small computers and their integration into the ultrasonic testing process

  11. Soft computing integrating evolutionary, neural, and fuzzy systems

    CERN Document Server

    Tettamanzi, Andrea

    2001-01-01

    Soft computing encompasses various computational methodologies, which, unlike conventional algorithms, are tolerant of imprecision, uncertainty, and partial truth. Soft computing technologies offer adaptability as a characteristic feature and thus permit the tracking of a problem through a changing environment. Besides some recent developments in areas like rough sets and probabilistic networks, fuzzy logic, evolutionary algorithms, and artificial neural networks are core ingredients of soft computing, which are all bio-inspired and can easily be combined synergetically. This book presents a well-balanced integration of fuzzy logic, evolutionary computing, and neural information processing. The three constituents are introduced to the reader systematically and brought together in differentiated combinations step by step. The text was developed from courses given by the authors and offers numerous illustrations as

  12. Chemical Entity Semantic Specification: Knowledge representation for efficient semantic cheminformatics and facile data integration

    Science.gov (United States)

    2011-01-01

    Background: Over the past several centuries, chemistry has permeated virtually every facet of human lifestyle, enriching fields as diverse as medicine, agriculture, manufacturing, warfare, and electronics, among numerous others. Unfortunately, application-specific, incompatible chemical information formats and representation strategies have emerged as a result of such diverse adoption of chemistry. Although a number of efforts have been dedicated to unifying the computational representation of chemical information, disparities between the various chemical databases still persist and stand in the way of cross-domain, interdisciplinary investigations. Through a common syntax and formal semantics, Semantic Web technology offers the ability to accurately represent, integrate, reason about and query across diverse chemical information.

    Results: Here we specify and implement the Chemical Entity Semantic Specification (CHESS) for the representation of polyatomic chemical entities, their substructures, bonds, atoms, and reactions using Semantic Web technologies. CHESS provides means to capture aspects of their corresponding chemical descriptors, connectivity, functional composition, and geometric structure while specifying mechanisms for data provenance. We demonstrate that using our readily extensible specification, it is possible to efficiently integrate multiple disparate chemical data sources, while retaining appropriate correspondence of chemical descriptors, with very little additional effort. We demonstrate the impact of some of our representational decisions on the performance of chemically-aware knowledgebase searching and rudimentary reaction candidate selection. Finally, we provide access to the tools necessary to carry out chemical entity encoding in CHESS, along with a sample knowledgebase.

    Conclusions: By harnessing the power of Semantic Web technologies with CHESS, it is possible to provide a means of facile cross-domain chemical knowledge integration with full
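
    The triple-based representation that Semantic Web approaches like CHESS build on can be illustrated with plain Python tuples standing in for RDF statements. The URIs and predicate names below are invented for illustration, not the actual CHESS vocabulary:

```python
# Toy triple store: each statement is a (subject, predicate, object) tuple,
# the core idea behind RDF-based chemical knowledge representation.
# All identifiers here are hypothetical, not CHESS terms.

triples = {
    ("chem:ethanol", "rdf:type",         "chess:ChemicalEntity"),
    ("chem:ethanol", "chess:hasFormula", "C2H6O"),
    ("chem:ethanol", "chess:hasBond",    "chem:ethanol/bond/C1-C2"),
    ("chem:ethanol/bond/C1-C2", "chess:bondOrder", "1"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples against an (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# e.g. everything asserted about ethanol:
about_ethanol = query(subject="chem:ethanol")
```

    Because every source expresses its facts as triples over shared identifiers, integrating a second data source is just a set union followed by the same pattern queries, which is the "facile data integration" the abstract refers to.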

  13. Computer integrated manufacturing in the chemical industry : Theory & practice

    NARCIS (Netherlands)

    Ashayeri, J.; Teelen, A.; Selen, W.J.

    1995-01-01

    This paper addresses the possibilities of implementing Computer Integrated Manufacturing in the process industry, and the chemical industry in particular. After presenting some distinct differences of the process industry in relation to discrete manufacturing, a number of focal points are discussed.

  14. Integration of knowledge management system for the decommissioning of nuclear facilities

    International Nuclear Information System (INIS)

    Iguchi, Yukihiro; Yanagihara, Satoshi

    2016-01-01

    The decommissioning of a nuclear facility is a long-term project that handles information originating from design, construction and operation. Moreover, the decommissioning project is likely to be extended because of the lack of a waste disposal site, especially in Japan. In this situation, because the transfer of knowledge and education to the next generation is a crucial issue, the integration and implementation of a knowledge management system is necessary. For this purpose, a total system for decommissioning knowledge management (KMS) is proposed. In this system, the data and information on plant design, maintenance history, trouble events, waste management records, etc. have to be arranged, organized and systematized. The collected data, information and records should be organized by a computer support system, e.g. a database system, which becomes the basis of the explicit knowledge. Moreover, measures for extracting tacit knowledge from retiring employees are necessary. The experience of the retirees should be documented as much as possible through an effective questionnaire or interview process. The integrated knowledge mentioned above should be used for planning, implementation of dismantlement, and education of the future generation. (author)

  15. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the operation of the filter wheel and stabilized platform, and image and POS data acquisition, and stores the image and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  16. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009); and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems. 
One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and

  17. Development of computer model for radionuclide released from shallow-land disposal facility

    International Nuclear Information System (INIS)

    Suganda, D.; Sucipta; Sastrowardoyo, P.B.; Eriendi

    1998-01-01

    A one-dimensional computer model for radionuclide release from a shallow land disposal facility (SLDF) has been developed. The model is applied to the SLDF facility at PPTA Serpong, which lies 1.8 metres above the groundwater and 150 metres from the Cisalak river. An implicit finite-difference numerical scheme is chosen to predict the migration of a radionuclide at any concentration. The migration proceeds vertically from the bottom of the SLDF to the groundwater layer, then horizontally in the groundwater to the critical population group. The radionuclide Cs-137 is chosen as a sample to study its migration. The results of the assessment show that the SLDF facility at PPTA Serpong meets high safety criteria. (author)
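The implicit finite-difference approach described above can be sketched as a generic one-dimensional advection-dispersion-decay solver. The backward-Euler scheme with upwind advection below, solved with the Thomas algorithm, is an illustrative stand-in for this class of model, not the PPTA Serpong model itself; all parameter values are hypothetical.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system with the Thomas algorithm."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(C, D, v, lam, dx, dt, c_source):
    """One backward-Euler step of dC/dt = D*C'' - v*C' - lam*C (upwind advection).

    Fixed concentration c_source at the inflow node, zero at the outflow node.
    """
    n = len(C)
    rd = D * dt / dx ** 2          # diffusion number
    ra = v * dt / dx               # advection (Courant) number, upwind
    sub = [0.0] + [-(rd + ra)] * (n - 2) + [0.0]
    diag = [1.0] + [1.0 + 2.0 * rd + ra + lam * dt] * (n - 2) + [1.0]
    sup = [0.0] + [-rd] * (n - 2) + [0.0]
    rhs = [c_source] + C[1:-1] + [0.0]
    return thomas(sub, diag, sup, rhs)
```

Because the scheme is fully implicit and the matrix is diagonally dominant, the time step is not stability-limited, which suits the long migration times typical of disposal-facility assessments.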

  18. Integrated Human Test Facilities at NASA and the Role of Human Engineering

    Science.gov (United States)

    Tri, Terry O.

    2002-01-01

    Integrated human test facilities are a key component of NASA's Advanced Life Support Program (ALSP). Over the past several years, the ALSP has been developing such facilities to serve as a large-scale advanced life support and habitability test bed capable of supporting long-duration evaluations of integrated bioregenerative life support systems with human test crews. These facilities, targeted for evaluation of hypogravity compatible life support and habitability systems to be developed for use on planetary surfaces, are currently in the development stage at the Johnson Space Center. These major test facilities are comprised of a set of interconnected chambers with a sealed internal environment, which will be outfitted with systems capable of supporting test crews of four individuals for periods exceeding one year. The advanced technology systems to be tested will consist of both biological and physicochemical components and will perform all required crew life support and habitability functions. This presentation provides a description of the proposed test "missions" to be supported by these integrated human test facilities, the overall system architecture of the facilities, the current development status of the facilities, and the role that human engineering has played in the development of the facilities.

  19. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  20. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    In April 1985, the Department of Energy (DOE) selected the Clinch River site as its preferred site for the construction and operation of the monitored retrievable storage (MRS) facility (USDOE, 1985). In support of the DOE MRS conceptual design activity, available data describing the site have been gathered and analyzed. A composite geotechnical description of the Clinch River site has been developed and is presented herein. This report presents Clinch River site description data in the following sections: general site description, surface hydrologic characteristics, groundwater characteristics, geologic characteristics, vibratory ground motion, surface faulting, stability of subsurface materials, slope stability, and references. 48 refs., 35 figs., 6 tabs

  1. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, as well as some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled is the software for the Treated Effluent Disposal System's pumping stations, which also monitors waste generator flows in that system as well as in the Phase Two Effluent Collection System.

  2. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    International Nuclear Information System (INIS)

    Graf, F.A. Jr.

    1995-01-01

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, as well as some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled is the software for the Treated Effluent Disposal System's pumping stations, which also monitors waste generator flows in that system as well as in the Phase Two Effluent Collection System

  3. Computer control and data acquisition system for the R.F. Test Facility

    International Nuclear Information System (INIS)

    Stewart, K.A.; Burris, R.D.; Mankin, J.B.; Thompson, D.H.

    1986-01-01

    The Radio Frequency Test Facility (RFTF) at Oak Ridge National Laboratory, used to test and evaluate high-power ion cyclotron resonance heating (ICRH) systems and components, is monitored and controlled by a multicomponent computer system. This data acquisition and control system consists of three major hardware elements: (1) an Allen-Bradley PLC-3 programmable controller; (2) a VAX 11/780 computer; and (3) a CAMAC serial highway interface. Operating in LOCAL as well as REMOTE mode, the programmable logic controller (PLC) performs all the control functions of the test facility. The VAX computer acts as the operator's interface to the test facility by providing color mimic panel displays and allowing input via a trackball device. The VAX also provides archiving of trend data acquired by the PLC. Communications between the PLC and the VAX are via the CAMAC serial highway. Details of the hardware, software, and the operation of the system are presented in this paper

  4. Integrating Xgrid into the HENP distributed computing model

    International Nuclear Information System (INIS)

    Hajdu, L; Lauret, J; Kocoloski, A; Miller, M

    2008-01-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology

  5. Integrating Xgrid into the HENP distributed computing model

    Science.gov (United States)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  6. Systems engineering applied to integrated safety management for high consequence facilities

    International Nuclear Information System (INIS)

    Barter, R; Morais, B.

    1998-01-01

    Integrated Safety Management is a concept that is being actively promoted by the U.S. Department of Energy as a means of assuring safe operation of its facilities. The concept involves the integration of safety precepts into work planning rather than adjusting for safe operations after defining the work activity. The system engineering techniques used to design an integrated safety management system for a high consequence research facility are described. An example is given to show how the concepts evolved with the system design

  7. Simulation of natural circulation on an integral type experimental facility, MASLWR

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Youngjong; Lim, Sungwon; Ha, Jaejoo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    The OSU MASLWR test facility was reconfigured to eliminate a recurring grounding problem and improve facility reliability in anticipation of conducting an IAEA International Collaborative Standard Problem (ICSP). The purpose of the ICSP is to provide experimental data on flow instability phenomena under natural circulation conditions and on coupled containment/reactor vessel behavior in integral-type reactors, and to evaluate the capability of system codes to predict natural circulation phenomena for an integral-type PWR by simulating an integrated experiment. Natural circulation in the primary side at various core power levels is analyzed using the TASS/SMR code for the integral-type experimental facility. The calculated steady-state primary flow is higher than the experimental value. If the initial flow is matched to the experiment, the calculated primary flow falls below the experimental value as the power increases. The code predictions may be improved by applying a Reynolds-number-dependent form loss coefficient to accurately account for unrecoverable pressure losses.
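The Reynolds-number-dependent form loss coefficient suggested above can be sketched as follows. The functional form K(Re) = k_inf + a/Re^b and every coefficient value here are purely illustrative assumptions, not the correlation used in the TASS/SMR code.

```python
def form_loss_coefficient(re, k_inf=0.5, a=500.0, b=1.0):
    """Illustrative Reynolds-dependent form loss: K(Re) = k_inf + a / Re**b.

    k_inf is the fully turbulent limit; a and b shape the laminar/transition
    augmentation. All three values are hypothetical placeholders.
    """
    return k_inf + a / re ** b

def unrecoverable_pressure_loss(rho, vel, re):
    """Unrecoverable pressure loss dP = K(Re) * rho * vel**2 / 2 [Pa]."""
    return form_loss_coefficient(re) * rho * vel ** 2 / 2.0
```

At the low flows typical of natural circulation, Re is small and K(Re) grows, so the predicted loop flow drops with power more steeply than a constant-K model would give, which is the behavior the abstract attributes to the experiment.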

  8. Integrated evolutionary computation neural network quality controller for automated systems

    Energy Technology Data Exchange (ETDEWEB)

    Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to the identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out, and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality in a dynamic, multivariable system in real time.
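The combination described, evolutionary computation searching for controller parameters that regulate an unstable dynamic process, can be sketched with a simple (1+1) evolution strategy. The linear process model, gain parameterization, and all numbers below are hypothetical stand-ins (the paper's framework would use a neural-network process model in place of the linear one).

```python
import random

def simulate(k, a=1.2, b=1.0, x0=1.0, steps=30):
    """Quadratic regulation cost of an unstable process x' = a*x + b*u, u = -k*x."""
    x, cost = x0, 0.0
    for _ in range(steps):
        x = a * x + b * (-k * x)   # apply proportional control, advance the process
        cost += x * x
    return cost

def evolve(generations=200, sigma=0.1, seed=0):
    """(1+1) evolution strategy: keep a mutated gain only if it lowers the cost."""
    rng = random.Random(seed)
    k = 0.0
    best = simulate(k)
    for _ in range(generations):
        cand = k + rng.gauss(0.0, sigma)
        cost = simulate(cand)
        if cost < best:
            k, best = cand, cost
    return k, best
```

With a = 1.2 the uncontrolled process diverges; the strategy needs no gradient information, only repeated cost evaluations, which is the practical appeal of evolutionary computation for processes that are hard to differentiate.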

  9. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicles preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 BAUD), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.

  10. CMS Distributed Computing Integration in the LHC sustained operations era

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Bockelman, B; Fisk, I

    2011-01-01

    After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless it is the same need of stability and smooth operations that requires the introduction of features that were considered not strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks on the infrastructure; increased automation to reduce the manpower needed for operations; effective process to deploy in production new releases of the software tools. We present the work of the CMS Distributed Computing Integration Activity that is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements to Grid and Cloud software developers for the future.

  11. Operational Circular nr 5 - October 2000 USE OF CERN COMPUTING FACILITIES

    CERN Multimedia

    Division HR

    2000-01-01

    New rules covering the use of CERN Computing facilities have been drawn up. All users of CERN’s computing facilities are subject to these rules, as well as to the subsidiary rules of use. The Computing Rules explicitly address your responsibility for taking reasonable precautions to protect computing equipment and accounts. In particular, passwords must not be easily guessed or obtained by others. Given the difficulty of completely separating work and personal use of computing facilities, the rules define under which conditions limited personal use is tolerated. For example, limited personal use of e-mail, news groups or web browsing is tolerated in your private time, provided CERN resources and your official duties are not adversely affected. The full conditions governing use of CERN’s computing facilities are contained in Operational Circular N° 5, which you are requested to read. Full details are available at : http://www.cern.ch/ComputingRules Copies of the circular are also available in the Divis...

  12. National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3

    International Nuclear Information System (INIS)

    Wiedwald, J.; Van Aersau, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3 which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally-critical hardware triggers to components and equipment in other NIF systems

  13. Bus systems: Integrated facility management; Bus-Systeme: Gewerkeuebergreifende Gebaeudeautomation

    Energy Technology Data Exchange (ETDEWEB)

    Baumgarth, S.; Heiser, M. [Fachhochschule Braunschweig-Wolfenbuettel, Wolfenbuettel (Germany)

    2000-03-01

    Exploiting the optimisation potential of building automation depends on uncomplicated, two-way communication between the plants and automation stations of different trades and manufacturers. This is illustrated with a complex system comprising two closed-circuit cooling towers, a chilled-water set and two different consumers (ventilation systems), showing the interconnection of the refrigeration trades in detail. Each of the subsystems is regulated and controlled by a comprehensive strategy. The trend in building automation is toward shifting intelligence down to the field level. (orig./AKF)

  14. Study concerning an integrated radiation monitoring systems for nuclear facilities

    International Nuclear Information System (INIS)

    Oprea, I.; Oprea, M.; Stoica, M.; Cerga, V.; Pirvu, V; Badea, E.

    1996-01-01

    This paper presents an integrated radiation monitoring system designed to assess the effects of nuclear accidents and to provide a basis for making sound decisions and countermeasures in order to reduce health damage. The system comprises a number of stationary monitoring units, a data processing unit and a communication network. The system meets the demands of efficiency and reliability, providing the tools needed to easily create programs that process simple input data feeding the information management system. (author). 10 refs

  15. Engineering Task Plan for the Integrity Assessment Examination of Double-Contained Receiver Tanks (DCRT), Catch Tanks and Ancillary facilities

    International Nuclear Information System (INIS)

    BECKER, D.L.

    2000-01-01

    This Engineering Task Plan (ETP) presents the integrity assessment examination of three DCRTs, seven catch tanks, and two ancillary facilities located in the 200 East and West Areas of the Hanford Site. The integrity assessment examinations, as described in this ETP, will provide the necessary information to enable the independently qualified registered professional engineer (IQRPE) to assess the condition and integrity of these facilities. The plan is consistent with the Double-Shell Tank Waste Transfer Facilities Integrity Assessment Plan

  16. Integrated assessment of thermal hydraulic processes in W7-X fusion experimental facility

    Energy Technology Data Exchange (ETDEWEB)

    Kaliatka, T., E-mail: tadas.kaliatka@lei.lt; Uspuras, E.; Kaliatka, A.

    2017-02-15

    Highlights: • A model of the Ingress of Coolant Event experimental facility was developed using the RELAP5 code. • Calculation results were compared with Ingress of Coolant Event experimental data. • Using the experience gained, a numerical model of the Wendelstein 7-X facility was developed. • The analysis confirmed the pressure increase protection system for a LOCA event. - Abstract: Energy received from the nuclear fusion reaction is one of the most promising options for generating large amounts of carbon-free energy in the future. However, the physical and technical problems in this technology are complicated. Several experimental nuclear fusion devices around the world have already been constructed, and several are under construction. However, the processes in the cooling system of the in-vessel components, the vacuum vessel and the pressure increase protection system of nuclear fusion devices are not widely studied. The largest amount of radioactive material is concentrated in the vacuum vessel of the fusion device. The vacuum vessel is designed for vacuum conditions inside the vessel. Rupture of a cooling system pipe of the in-vessel components may lead to a sharp pressure increase and possible damage of the vacuum vessel. To prevent overpressure, a pressure increase protection system should be designed and implemented. Therefore, systematic and detailed experimental and numerical studies of the thermal-hydraulic processes in the cooling system, vacuum vessel and pressure increase protection system are important and relevant. In this article, a numerical investigation of thermal-hydraulic processes in the cooling systems of in-vessel components, vacuum vessels and pressure increase protection systems of fusion devices is presented. Using the experience gained from the modelling of the “Ingress of Coolant Event” experimental facilities, a numerical model of the Wendelstein 7-X (W7-X) experimental fusion device was developed. The integrated analysis of the

  17. Computer-integrated design and information management for nuclear projects

    International Nuclear Information System (INIS)

    Gonzalez, A.; Martin-Guirado, L.; Nebrera, F.

    1987-01-01

    Over the past seven years, Empresarios Agrupados has been developing a comprehensive, computer-integrated system to perform the majority of the engineering, design, procurement and construction management activities in nuclear, fossil-fired as well as hydro power plant projects. This system, which is already in a production environment, comprises a large number of computer programs and data bases designed using a modular approach. Each software module, dedicated to meeting the needs of a particular design group or project discipline, facilitates the performance of functional tasks characteristic of the power plant engineering process

  18. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  19. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Natl. Bureau Standards 64C (1960) 121) have obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), has been solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method the integral I(a,b) is expressed in the form of a polynomial of a rational parameter. Whereas a function f(x) is ordinarily expressed in terms of x, in this method it is expressed in terms of x/(1+x); in this way the accuracy of the expression is good over a wide range of x compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
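The idea of expanding in the rational parameter x/(1+x) rather than in x can be illustrated with a small pure-Python least-squares fit. Since the exact forms of H(a,b) and I(a,b) are not reproduced in the abstract, arctan(x) stands in as the function being approximated, and the sixth degree and sample points below are arbitrary illustrative choices; the mapping t = x/(1+x) compresses the whole range x ∈ [0, ∞) into t ∈ [0, 1), which is what makes a single low-degree polynomial accurate over a wide range of x.

```python
import math

def fit_poly(ts, ys, degree):
    """Least-squares polynomial fit via normal equations (pure Python)."""
    n = degree + 1
    # Normal equations N c = r with N[j][k] = sum t^(j+k), r[j] = sum y * t^j
    N = [[sum(t ** (j + k) for t in ts) for k in range(n)] for j in range(n)]
    r = [sum(y * t ** j for t, y in zip(ts, ys)) for j in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(N[i][col]))
        N[col], N[piv] = N[piv], N[col]
        r[col], r[piv] = r[piv], r[col]
        for row in range(col + 1, n):
            f = N[row][col] / N[col][col]
            for k in range(col, n):
                N[row][k] -= f * N[col][k]
            r[row] -= f * r[col]
    c = [0.0] * n
    for row in range(n - 1, -1, -1):
        c[row] = (r[row] - sum(N[row][k] * c[k] for k in range(row + 1, n))) / N[row][row]
    return c

def eval_poly(c, t):
    return sum(ck * t ** k for k, ck in enumerate(c))

# Fit atan(x) as a sixth-degree polynomial in the rational parameter t = x/(1+x),
# sampling x over the wide range 0.05..20.0
xs = [0.05 * i for i in range(1, 401)]
ts = [x / (1.0 + x) for x in xs]
ys = [math.atan(x) for x in xs]
coeffs = fit_poly(ts, ys, 6)
```

A sixth-degree polynomial in x alone cannot track arctan over 0.05 ≤ x ≤ 20 to comparable accuracy; in the rational parameter the fit stays uniformly close across the whole interval.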

  20. Evolution of facility layout requirements and CAD [computer-aided design] system development

    International Nuclear Information System (INIS)

    Jones, M.

    1990-06-01

    The overall configuration of the Superconducting Super Collider (SSC), including the infrastructure and land boundary requirements, was developed using a computer-aided design (CAD) system. The evolution of the facility layout requirements and the use of the CAD system are discussed. The emphasis has been on minimizing the amount of input required and maximizing the speed with which the output may be obtained. The computer system used to store the data is also described

  1. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    Science.gov (United States)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  2. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Jayatilaka, B. [Fermilab; Levshina, T. [Fermilab; Sehgal, C. [Fermilab; Gardner, R. [Chicago U.; Rynge, M. [USC - ISI, Marina del Rey; Würthwein, F. [UC, San Diego

    2017-11-22

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  3. A stand alone computer system to aid the development of mirror fusion test facility RF heating systems

    International Nuclear Information System (INIS)

    Thomas, R.A.

    1983-01-01

    The Mirror Fusion Test Facility (MFTF-B) control system architecture requires the Supervisory Control and Diagnostic System (SCDS) to communicate with a LSI-11 Local Control Computer (LCC) that in turn communicates via a fiber optic link to CAMAC based control hardware located near the machine. In many cases, the control hardware is very complex and requires a sizable development effort prior to being integrated into the overall MFTF-B system. One such effort was the development of the Electron Cyclotron Resonance Heating (ECRH) system. It became clear that a stand alone computer system was needed to simulate the functions of SCDS. This paper describes the hardware and software necessary to implement the SCDS Simulation Computer (SSC). It consists of a Digital Equipment Corporation (DEC) LSI-11 computer with Winchester and floppy disks, running under the DEC RT-11 operating system. All application software for MFTF-B is programmed in PASCAL, which allowed us to adapt procedures originally written for SCDS to the SSC. This nearly identical software interface means that software written during the equipment development will be useful to the SCDS programmers in the integration phase

  4. Statistical Methodologies to Integrate Experimental and Computational Research

    Science.gov (United States)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods for strategically and efficiently conducting experiments and refining computational models. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
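As one concrete instance of the design-of-experiments ideas surveyed in this record, a two-level full factorial design lets main effects and the interaction be estimated from simple contrasts. A minimal sketch (the response values are synthetic, not data from the paper):

```python
# Estimate the intercept, two main effects, and the two-factor interaction
# from a 2^2 full factorial design using contrasts. Because the coded
# design is orthogonal, each coefficient is just an average of signed
# responses. Response values below are synthetic.

def factorial_effects(runs, y):
    """runs: list of (x1, x2) coded levels (+/-1); y: observed responses."""
    n = len(y)
    b0 = sum(y) / n
    b1 = sum(x1 * yi for (x1, _), yi in zip(runs, y)) / n
    b2 = sum(x2 * yi for (_, x2), yi in zip(runs, y)) / n
    b12 = sum(x1 * x2 * yi for (x1, x2), yi in zip(runs, y)) / n
    return b0, b1, b2, b12

design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
# synthetic responses generated from y = 10 + 2*x1 - x2 + 0.5*x1*x2
y = [10 + 2 * x1 - x2 + 0.5 * x1 * x2 for x1, x2 in design]
b0, b1, b2, b12 = factorial_effects(design, y)  # recovers 10, 2, -1, 0.5
```

With noisy data the same contrasts give least-squares estimates, and adding center and axial points extends the design toward the response surface methodology the paper discusses.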

  5. Computer integration of engineering design and production: A national opportunity

    Science.gov (United States)

    1984-01-01

    The National Aeronautics and Space Administration (NASA), as a purchaser of a variety of manufactured products, including complex space vehicles and systems, clearly has a stake in the advantages of computer-integrated manufacturing (CIM). Two major NASA objectives are to launch a Manned Space Station by 1992 with a budget of $8 billion, and to be a leader in the development and application of productivity-enhancing technology. At the request of NASA, a National Research Council committee visited five companies that have been leaders in using CIM. Based on these case studies, technical, organizational, and financial issues that influence computer integration are described, guidelines for its implementation in industry are offered, and the use of CIM to manage the space station program is recommended.

  6. Applying Integrated Computer Assisted Media (ICAM) in Teaching Vocabulary

    Directory of Open Access Journals (Sweden)

    Opick Dwi Indah

    2015-02-01

    The objective of this research was to find out whether the use of integrated computer assisted media (ICAM) is effective in improving the vocabulary achievement of second-semester students at Cokroaminoto Palopo University. The population of this research was the second-semester students of the English department of Cokroaminoto Palopo University in the academic year 2013/2014. The sample consisted of 60 students placed into two groups, experimental and control, of 30 students each, selected by cluster random sampling. The research data were collected with a vocabulary test and analyzed using descriptive and inferential statistics. The result was that integrated computer assisted media (ICAM) can improve the vocabulary achievement of the English department students of Cokroaminoto Palopo University. It can be concluded that the use of ICAM in teaching vocabulary is effective in improving students' vocabulary achievement.

  7. Integrating computer programs for engineering analysis and design

    Science.gov (United States)

    Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.

    1983-01-01

    The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.

  8. Integrated Computational Materials Engineering for Magnesium in Automotive Body Applications

    Science.gov (United States)

    Allison, John E.; Liu, Baicheng; Boyle, Kevin P.; Hector, Lou; McCune, Robert

    This paper provides an overview and progress report for an international collaborative project which aims to develop an ICME infrastructure for magnesium for use in automotive body applications. Quantitative processing-microstructure-property relationships are being developed for extruded Mg alloys, sheet-formed Mg alloys and high pressure die cast Mg alloys. These relationships are captured in computational models which are then linked with manufacturing process simulation and used to provide constitutive models for component performance analysis. The long term goal is to capture this information in efficient computational models and in a web-centered knowledge base. The work is being conducted at leading universities, national labs and industrial research facilities in the US, China and Canada. This project is sponsored by the U.S. Department of Energy, the U.S. Automotive Materials Partnership (USAMP), the Chinese Ministry of Science and Technology (MOST) and Natural Resources Canada (NRCan).
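A processing-microstructure-property relationship of the kind this record describes can be as simple as the classical Hall-Petch relation, which maps a microstructural variable (grain size, set by processing) to a property (yield strength). A sketch with illustrative constants, not fitted values for any particular Mg alloy:

```python
import math

def hall_petch_yield(grain_size_um, sigma0_mpa=25.0, k_mpa_um05=200.0):
    """Classical Hall-Petch relation: sigma_y = sigma_0 + k * d^(-1/2).
    sigma0_mpa and k_mpa_um05 are illustrative placeholder constants,
    not values for any specific magnesium alloy."""
    return sigma0_mpa + k_mpa_um05 / math.sqrt(grain_size_um)

# Finer grains (e.g. from extrusion) -> higher yield strength:
coarse = hall_petch_yield(100.0)   # 100 um grains
fine = hall_petch_yield(1.0)       # 1 um grains
```

In an ICME workflow, a process simulation would predict the grain size, a relation like this would supply the property, and the result would feed a constitutive model for component analysis.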

  9. Global nuclear material monitoring with NDA and C/S data through integrated facility monitoring

    International Nuclear Information System (INIS)

    Howell, J.A.; Menlove, H.O.; Argo, P.; Goulding, C.; Klosterbuer, S.; Halbig, J.

    1996-01-01

    This paper focuses on a flexible, integrated demonstration of an approach to nuclear material monitoring. This includes aspects of item signature identification, perimeter portal monitoring, advanced data analysis, and communication as part of an unattended continuous monitoring system in an operating nuclear facility. Advanced analysis is applied to the integrated nondestructive assay (NDA) and containment and surveillance (C/S) data, which are synchronized in time. The end result will be the foundation for a cost-effective monitoring system that could provide the necessary transparency, even in areas that are denied to foreign nationals of both the US and Russia, should these processes and materials come under full-scope safeguards or bilateral agreements. Monitoring systems of this kind have the potential to provide additional benefits, including improved nuclear facility security and safeguards and lower personnel radiation exposures. Demonstration facilities discussed in this paper include the VTRAP prototype, the Los Alamos Critical Assemblies Facility, the Kazakhstan BN-350 reactor monitor, DUPIC radiation monitoring, and JOYO and MONJU radiation monitoring.
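Correlating NDA and containment/surveillance data "synchronized in time" amounts, at its simplest, to pairing events from two timestamped streams within a coincidence window. A minimal sketch (the window size and event times are invented for illustration; a real system would also handle clock drift and ambiguous matches):

```python
import bisect

def pair_events(nda_times, cs_times, window=2.0):
    """For each NDA event time, find the nearest C/S event within
    +/-window seconds. Both input lists must be sorted ascending.
    Returns a list of (nda_t, cs_t or None) pairs."""
    pairs = []
    for t in nda_times:
        i = bisect.bisect_left(cs_times, t)
        best = None
        # the nearest neighbor can only be at index i or i-1
        for j in (i - 1, i):
            if 0 <= j < len(cs_times) and abs(cs_times[j] - t) <= window:
                if best is None or abs(cs_times[j] - t) < abs(best - t):
                    best = cs_times[j]
        pairs.append((t, best))
    return pairs

# Neutron-counter triggers vs. camera/portal events (times in seconds):
matched = pair_events([10.0, 50.0, 90.0], [9.1, 48.0, 200.0])
```

An NDA event with no C/S partner (the `None` case) is exactly the kind of anomaly such a monitoring system would flag for review.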

  10. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Science.gov (United States)

    du Plessis, Anton; le Roux, Stephan Gerhard; Guelpa, Anina

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, the facility offers open access to the general user community, including local researchers, companies and remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of facility users, along with expert supervision if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility has accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT. This paper summarises the laboratory's first four years by way of selected examples, from both published and unpublished projects. In the process, a detailed description of the capabilities and facilities available to users is presented.

  11. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Plessis, Anton du, E-mail: anton2@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Physics Department, Stellenbosch University, Stellenbosch (South Africa); Roux, Stephan Gerhard le, E-mail: lerouxsg@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Guelpa, Anina, E-mail: aninag@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa)

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, the facility offers open access to the general user community, including local researchers, companies and remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of facility users, along with expert supervision if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility has accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT. This paper summarises the laboratory's first four years by way of selected examples, from both published and unpublished projects. In the process, a detailed description of the capabilities and facilities available to users is presented.

  12. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    International Nuclear Information System (INIS)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System (WBS 1.5.1), which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in the ICCS document (WBS 1.5), which sits directly above it in the document hierarchy.

  13. Remediation Approach for the Integrated Facility Disposition Project at the Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Kirk, P.G.; Stephens, Jr.J.M.

    2009-01-01

    The Integrated Facility Disposition Project (IFDP) is a multi-billion-dollar remediation effort being conducted by the U.S. Department of Energy (DOE) Office of Environmental Management in Oak Ridge, Tennessee. The scope of the IFDP encompasses remedial actions related to activities conducted over the past 65 years at the Oak Ridge National Laboratory (ORNL) and the Y-12 National Security Complex (Y-12). Environmental media and facilities became contaminated as a result of operations, leaks, spills, and past waste disposal practices. ORNL's mission includes energy, environmental, nuclear security, computational, and materials research and development. Remediation activities will be implemented at ORNL as part of IFDP scope to meet remedial action objectives established in existing and future decision documents. Remedial actions are necessary (1) to comply with environmental regulations to reduce human health and environmental risk and (2) to release strategic real estate needed for modernization initiatives at ORNL. The scope of remedial actions includes characterization, waste management, transportation and disposal, stream restoration, and final remediation of contaminated soils, sediments, and groundwater. Activities include removal of at or below-grade substructures such as slabs, underground utilities, underground piping, tanks, basins, pits, ducts, equipment housings, manholes, and concrete-poured structures associated with equipment housings and basement walls/floors/columns. Many interim remedial actions involving groundwater and surface water that have not been completed are included in the IFDP remedial action scope. The challenges presented by the remediation of Bethel Valley at ORNL are formidable. The proposed approach to remediation endeavors to use the best available technologies and technical approaches from EPA and other federal agencies and lessons learned from previous cleanup efforts. The objective is to minimize cost, maximize remedial

  14. Atmospheric dispersion calculation for postulated accidents at nuclear facilities and the computer code PANDA

    International Nuclear Information System (INIS)

    Kitahara, Yoshihisa; Kishimoto, Yoichiro; Narita, Osamu; Shinohara, Kunihiko

    1979-01-01

    Several calculation methods for the relative concentration (X/Q) and relative cloud-gamma dose (D/Q) of radioactive materials released from nuclear facilities in a postulated accident are presented. The procedure has been formulated as the computer program PANDA, and its usage is explained. (author)
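The relative concentration X/Q in this record is conventionally evaluated with a Gaussian plume model. A minimal sketch of the standard textbook formula with ground reflection (the dispersion parameters sigma_y and sigma_z are taken as inputs here, whereas a code like PANDA would derive them from atmospheric stability class and downwind distance):

```python
import math

def chi_over_q(y, z, sigma_y, sigma_z, u, release_height=0.0):
    """Relative concentration chi/Q (s/m^3) of a Gaussian plume with
    ground reflection, at crosswind offset y and height z (m), for wind
    speed u (m/s) and dispersion parameters sigma_y, sigma_z (m)."""
    h = release_height
    lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-((z - h) ** 2) / (2.0 * sigma_z ** 2))
                + math.exp(-((z + h) ** 2) / (2.0 * sigma_z ** 2)))
    return lateral * vertical / (2.0 * math.pi * sigma_y * sigma_z * u)

# Ground-level centerline value for a ground release reduces to
# 1 / (pi * sigma_y * sigma_z * u):
val = chi_over_q(0.0, 0.0, sigma_y=30.0, sigma_z=15.0, u=2.0)
elevated = chi_over_q(0.0, 0.0, sigma_y=30.0, sigma_z=15.0, u=2.0,
                      release_height=50.0)
```

Multiplying chi/Q by the source term Q (Bq released per second) gives the air concentration; the cloud-gamma quantity D/Q requires an additional dose integration over the plume.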

  15. Taking the classical large audience university lecture online using tablet computer and webconferencing facilities

    DEFF Research Database (Denmark)

    Brockhoff, Per B.

    2011-01-01

    During four offerings (September 2008 – May 2011) of the course 02402 Introduction to Statistics for Engineering students at DTU, with an average of 256 students, the lecturing was carried out 100% through a tablet computer combined with the web conferencing facility Adobe Connect (version 7...

  16. Economic assessment of a proposed integrated resource recovery facility

    International Nuclear Information System (INIS)

    Burnett, J.S.

    1993-01-01

    This report comprises an initial economic and market appraisal of the proposals made by Materials Recycling Management (MRM) Ltd for a commercial plant engaged in waste treatment and energy recovery. The MRM design is an integrated waste handling system for commercial and industrial non hazardous wastes and civic amenity wastes. After primary separation into three selected broad waste categories, wastes are processed in the plant to recover basic recyclables such as paper, timber, plastics and metals. A quantity of material is directed for composting and the remainder converted into a fuel and combusted on site for energy recovery. Wastes unworthy of processing would be sent for disposal. A basic technical review has been undertaken. The focus of this review has been on the main processing plant where materials are segregated and the fuel and compost produced. (author)

  17. Multi-objective reverse logistics model for integrated computer waste management.

    Science.gov (United States)

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
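The cost-risk trade-off studied in this record can be illustrated with a toy version of the allocation problem: assign each waste source to one facility so that a weighted sum of cost and risk is minimized. A brute-force sketch (the authors' model is an integer linear program with capacity and flow constraints; all numbers below are invented):

```python
import itertools

def best_allocation(cost, risk, weight_risk=0.5):
    """cost[s][f], risk[s][f]: cost/risk of sending source s to facility f.
    Enumerates every assignment and returns the (assignment, cost, risk)
    minimizing (1-w)*cost + w*risk. Only viable for tiny instances; a
    real model would use an ILP solver."""
    n_sources, n_fac = len(cost), len(cost[0])
    best = None
    for assign in itertools.product(range(n_fac), repeat=n_sources):
        c = sum(cost[s][f] for s, f in enumerate(assign))
        r = sum(risk[s][f] for s, f in enumerate(assign))
        score = (1 - weight_risk) * c + weight_risk * r
        if best is None or score < best[0]:
            best = (score, assign, c, r)
    return best[1], best[2], best[3]

cost = [[4, 9], [6, 3]]   # two waste sources, two facilities
risk = [[8, 1], [2, 7]]
assign, c, r = best_allocation(cost, risk, weight_risk=0.5)
```

Sweeping `weight_risk` from 0 to 1 traces out the cost-risk trade-off curve; the paper's observation that a marginal cost increase can buy a large risk reduction corresponds to a steep segment of that curve.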

  18. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand computation's role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term, unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  19. Integrated Payment and Delivery Models Offer Opportunities and Challenges for Residential Care Facilities

    OpenAIRE

    Grabowski, David C.; Caudry, Daryl J.; Dean, Katie M.; Stevenson, David G.

    2015-01-01

    Under health care reform, a series of new financing and delivery models are being piloted to integrate health and long-term care services for older adults. To date, these programs have not encompassed residential care facilities, with most programs focusing on long-term care recipients in the community or the nursing home. Our analyses indicate that individuals living in residential care facilities have similarly high rates of chronic illness and Medicare utilization when compared with simila...

  20. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
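The protocol translation such a bridge performs can be sketched as mapping one job description format onto another. The field names below are hypothetical placeholders, not the actual ARC-CE or SCEAPI schemas:

```python
# Sketch of a submission bridge: translate a batch-style job description
# into the JSON payload a REST job-submission API might expect. All
# field names are hypothetical; the real ARC-CE/SCEAPI schemas differ.
import json

def translate_job(batch_job):
    """Map a batch-system job dict to a REST submission payload (JSON)."""
    payload = {
        "app": batch_job["executable"],
        "args": batch_job.get("arguments", []),
        "resources": {
            "cores": batch_job.get("cpus", 1),
            "walltime_s": batch_job.get("walltime_minutes", 60) * 60,
        },
        "inputs": [{"url": f} for f in batch_job.get("input_files", [])],
    }
    return json.dumps(payload)

body = translate_job({
    "executable": "run_sim.sh",
    "cpus": 8,
    "walltime_minutes": 120,
    "input_files": ["srm://grid.example/evgen.root"],
})
```

The second half of a real bridge, translating authorization (grid certificates on one side, the remote site's token scheme on the other), is the harder part and is omitted here.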

  1. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  2. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  3. An integrated computer aided system for integrated design of chemical processes

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Hytoft, Glen; Jaksland, Cecilia

    1997-01-01

    In this paper, an Integrated Computer Aided System (ICAS), which is particularly suitable for solving problems related to integrated design of chemical processes, is presented. ICAS features include a model generator (generation of problem specific models including model simplification and model ...... form the basis for the toolboxes. The available features of ICAS are highlighted through a case study involving the separation of binary azeotropic mixtures. (C) 1997 Elsevier Science Ltd....

  4. The Origin and Constitution of Facilities Management as an integrated corporate function

    DEFF Research Database (Denmark)

    Jensen, Per Anker

    2008-01-01

    Purpose – To understand how facilities management (FM) has evolved over time in a complex public corporation from internal functions of building operation and building client and the related service functions to become an integrated corporate function. Design/methodology/approach – The paper...... is based on results from a research project on space strategies and building values, which included a major longitudinal case study of the development of facilities for the Danish Broadcasting Corporation (DR) over time. The research presented here included literature studies, archive studies...... and a fully integrated corporate Facilities Management function are established. Research limitations/implications – The paper presents empirical evidence of the historical development of FM from one case and provides a deeper understanding of the integration processes that are crucial to FM and which can...

  5. Fast computation of complete elliptic integrals and Jacobian elliptic functions

    Science.gov (United States)

    Fukushima, Toshio

    2009-12-01

    As a preparation step to compute Jacobian elliptic functions efficiently, we created a fast method to calculate the complete elliptic integrals of the first and second kinds, K(m) and E(m), for the standard domain of the elliptic parameter, 0 ≤ m < 1. We also developed a procedure to compute simultaneously three Jacobian elliptic functions, sn(u|m), cn(u|m), and dn(u|m), by repeated usage of the double argument formulae, starting from the Maclaurin series expansions with respect to the elliptic argument, u, after its domain is reduced to the standard range, 0 ≤ u < K(m). The new procedure is 25-70% faster than methods based on the Gauss transformation, such as Bulirsch's algorithm sncndn quoted in Numerical Recipes, even if the acceleration of the computation of K(m) is not taken into account.
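The complete elliptic integrals K(m) and E(m) can also be computed by the classical arithmetic-geometric mean (AGM) iteration. This is not the paper's faster algorithm, but it makes a compact, quadratically convergent reference implementation:

```python
import math

def ellipKE(m, tol=1e-15):
    """Complete elliptic integrals K(m), E(m) for 0 <= m < 1 via the
    classical AGM iteration: K = pi/(2*a_inf) and
    E = K * (1 - sum_{n>=0} 2^(n-1) * c_n^2), with
    a_{n+1}=(a_n+b_n)/2, b_{n+1}=sqrt(a_n*b_n), c_{n+1}=(a_n-b_n)/2."""
    a, b = 1.0, math.sqrt(1.0 - m)
    c = math.sqrt(m)                 # c_0
    s = 0.5 * c * c                  # 2^(-1) * c_0^2
    n = 0
    while abs(c) > tol:
        a, b, c = (a + b) / 2.0, math.sqrt(a * b), (a - b) / 2.0
        n += 1
        s += 2.0 ** (n - 1) * c * c  # accumulate 2^(n-1) * c_n^2
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)
```

A convenient self-check is the Legendre relation, E(m)K(1-m) + E(1-m)K(m) - K(m)K(1-m) = pi/2, which holds for every m and so catches implementation errors in both integrals at once.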

  6. The role of computer modelling in participatory integrated assessments

    International Nuclear Information System (INIS)

    Siebenhuener, Bernd; Barth, Volker

    2005-01-01

    In a number of recent research projects, computer models have been included in participatory procedures to assess global environmental change. The intention was to support knowledge production and to help the involved non-scientists to develop a deeper understanding of the interactions between natural and social systems. This paper analyses the experiences gained in three projects with the use of computer models from a participatory and a risk management perspective. Our cross-cutting analysis of the objectives, the employed project designs and moderation schemes, and the observed learning processes in participatory processes with model use shows that models play a mixed role in informing participants and stimulating discussions. However, no deeper reflection on values and belief systems could be achieved. In terms of the risk management phases, computer models best serve the purposes of problem definition and option assessment within participatory integrated assessment (PIA) processes.

  7. Integrating Xgrid into the HENP distributed computing model

    Energy Technology Data Exchange (ETDEWEB)

    Hajdu, L; Lauret, J [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kocoloski, A; Miller, M [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)], E-mail: kocolosk@mit.edu

    2008-07-15

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  8. Sextant: an expert system for transient analysis of nuclear reactors and integral test facilities

    International Nuclear Information System (INIS)

    Barbet, N.; Dumas, M.; Mihelich, G.

    1987-01-01

    Expert systems provide a new way of dealing with the computer-aided management of nuclear plants by combining several knowledge bases and reasoning modes together with a set of numerical models for real-time analysis of transients. New development tools are required, together with metaknowledge bases handling temporal hypothetical reasoning and planning. They have to be efficient and robust because, during a transient, neither measurements, nor models, nor scenarios are held as absolute references. SEXTANT is a general purpose physical analyzer intended to provide a pattern and avoid duplication of general tools and knowledge bases for similar applications. It combines several knowledge bases concerning measurements, models and the qualitative behavior of PWRs with a mechanism of conjecture-refutation and a set of simplified models matching the current physical state. A prototype is being assessed on integral test facility transients. For its development, SEXTANT requires a powerful shell. SPIRAL is such a toolkit, oriented towards online analysis of complex processes and already used in several applications.
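The combination of declarative knowledge bases with an inference mechanism can be illustrated, in much-simplified form, by a forward-chaining rule engine: rules fire whenever their conditions are among the known facts, adding conclusions until a fixed point is reached. This is only a generic sketch with an invented toy knowledge base, not SEXTANT's conjecture-refutation mechanism:

```python
def forward_chain(facts, rules):
    """facts: iterable of fact strings; rules: list of
    (condition_list, conclusion) pairs. Repeatedly fire rules whose
    conditions all hold until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Toy PWR-flavoured knowledge base (invented for illustration only):
rules = [
    (["pressure falling", "pressurizer level falling"], "possible LOCA"),
    (["possible LOCA", "containment activity high"], "LOCA confirmed"),
]
result = forward_chain(
    ["pressure falling", "pressurizer level falling", "containment activity high"],
    rules,
)
```

A system like SEXTANT goes further: instead of trusting facts outright, it treats measurements and model outputs as conjectures to be refuted against each other, which is why several knowledge bases and reasoning modes must coexist.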

  9. Integrated Framework for Patient Safety and Energy Efficiency in Healthcare Facilities Retrofit Projects.

    Science.gov (United States)

    Mohammadpour, Atefeh; Anumba, Chimay J; Messner, John I

    2016-07-01

    There is a growing focus on enhancing energy efficiency in healthcare facilities, many of which are decades old. Since replacement of all aging healthcare facilities is not economically feasible, the retrofitting of these facilities is an appropriate path, which also provides an opportunity to incorporate energy efficiency measures. In undertaking energy efficiency retrofits, it is vital that the safety of the patients in these facilities is maintained or enhanced. However, the interactions between patient safety and energy efficiency have not been adequately addressed to realize the full benefits of retrofitting healthcare facilities. To address this, an innovative integrated framework, the Patient Safety and Energy Efficiency (PATSiE) framework, was developed to simultaneously enhance patient safety and energy efficiency. The framework includes a step-by-step procedure for enhancing both patient safety and energy efficiency. It provides a structured overview of the different stages involved in retrofitting healthcare facilities and improves understanding of the intricacies associated with integrating patient safety improvements with energy efficiency enhancements. Evaluation of the PATSiE framework was conducted through focus groups with the key stakeholders in two case study healthcare facilities. The feedback from these stakeholders was generally positive, as they considered the framework useful and applicable to retrofit projects in the healthcare industry. © The Author(s) 2016.

  10. Study of developing nuclear fabrication facility's integrated emergency response manual

    International Nuclear Information System (INIS)

    Kim, Taeh Yeong; Cho, Nam Chan; Han, Seung Hoon; Moon, Jong Han; Lee, Jin Hang; Min, Guem Young; Han, Ji Ah

    2016-01-01

    The public has begun to pay attention to emergency management, and a consensus has emerged that emergency management systems should be raised to the level found in advanced countries. In this climate, the manual is considered a key factor in preventing accidents and securing business continuity. We therefore first define the possible crises at KEPCO Nuclear Fuel (hereinafter KNF) and prepare a 'Reaction List' for each crisis situation from an information-design point of view. To achieve this, we analyze several countries' crisis response manuals, derive their components, and specify duties and roles from the information-design perspective. From this, we suggest a guideline for producing an 'Integrated Emergency Response Manual (IERM)'. The manual used previously had several problems: it was difficult to apply at the site and difficult to use for delivering information. To remedy these problems, we examined manual elements from the information-design viewpoint and, as a result, developed an administrative manual. This manual could nevertheless be considered fragmentary, as it is confined to several specific agencies/organizations and disaster types

  11. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments, enormous amounts of data are analyzed and simulated. Traditionally, dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users, as these can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost-efficient operation and for sharing resources with other communities. For this purpose, the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution reports on the concept of our cloud manager and its implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster, located in Freiburg).
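The on-demand pattern described in this abstract (boot virtual worker nodes when jobs queue up, release them when the queue drains) can be sketched as a simple scaling policy. The function below is a hypothetical illustration; its name, slot counts, and policy parameters are assumptions, not the actual ROCED API.

```python
# Hedged sketch of a demand-driven scaling decision, in the spirit of an
# on-demand cloud manager such as ROCED. Illustrative only.

def scale(queued_jobs: int, running_vms: int, idle_vms: int,
          slots_per_vm: int = 4, max_vms: int = 100) -> int:
    """Return the change in VM count: positive = boot VMs, negative = shut down."""
    needed = -(-queued_jobs // slots_per_vm)  # ceiling division: VMs to cover the queue
    if needed > running_vms:
        return min(needed - running_vms, max_vms - running_vms)  # scale out, capped
    if queued_jobs == 0 and idle_vms > 0:
        return -idle_vms  # queue drained: release idle workers back to the cloud
    return 0
```

A real manager would additionally account for VM boot latency, site quotas, and draining of running jobs before acting on the returned delta.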

  12. The computational design of Geological Disposal Technology Integration System

    International Nuclear Information System (INIS)

    Ishihara, Yoshinao; Iwamoto, Hiroshi; Kobayashi, Shigeki; Neyama, Atsushi; Endo, Shuji; Shindo, Tomonori

    2002-03-01

    In order to develop the 'Geological Disposal Technology Integration System', intended as a systematized knowledge base for fundamental study, the computational design of the database and image-processing functions indispensable to the system was carried out, a prototype was built for trial purposes, and its functions were confirmed. (1) An integrated database was constructed that systematizes the information needed to examine the repository composition as a whole, together with related information, and the system was structured into image-processing, analytical-information management, repository-component management, and system-security functions. (2) The range of data and information handled by the system was examined, and the database structure and the image-processing functions for data stored in the integrated database were designed. (3) To verify the feasibility of the 'Geological Disposal Technology Integration System', a prototype covering the basic database functions, the system operation interface, and the image-processing functions was manufactured on the basis of the design work, and its functions were confirmed. (author)

  13. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    International Nuclear Information System (INIS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-01-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.
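The abstract notes that WNoDeS tailors on-demand VMs to the requesting workload (batch job execution, interactive analysis, or service instantiation). A minimal sketch of that dispatch step follows; the profile names, images, and sizes are invented for illustration and are not actual WNoDeS configuration.

```python
# Illustrative only: profile names and resource sizes are assumptions,
# not the real WNoDeS configuration format.
from dataclasses import dataclass

@dataclass(frozen=True)
class VMProfile:
    image: str
    cores: int
    memory_gb: int

PROFILES = {
    "batch": VMProfile("worker-node", 1, 2),          # batch job execution
    "interactive": VMProfile("analysis-node", 4, 8),  # interactive analysis
    "service": VMProfile("service-node", 2, 4),       # service instantiation
}

def instantiate(request_type: str) -> VMProfile:
    """Pick a tailored VM profile only when a request arrives (on-demand)."""
    if request_type not in PROFILES:
        raise ValueError(f"unsupported request type: {request_type}")
    return PROFILES[request_type]
```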

  14. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    Science.gov (United States)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  15. The Mixed Waste Management Facility. Design basis integrated operations plan (Title I design)

    International Nuclear Information System (INIS)

    1994-12-01

    The Mixed Waste Management Facility (MWMF) will be a fully integrated, pilot-scale facility for the demonstration of low-level, organic-matrix mixed waste treatment technologies. It will provide the bridge from bench-scale demonstrated technologies to the deployment and operation of full-scale treatment facilities. The MWMF is a key element in reducing the risk in deployment of effective and environmentally acceptable treatment processes for organic mixed-waste streams. The MWMF will provide the engineering test data, formal evaluation, and operating experience that will be required for these demonstration systems to become accepted by EPA and deployable in waste treatment facilities. The deployment will also demonstrate how to approach the permitting process with the regulatory agencies and how to operate and maintain the processes in a safe manner. This document describes, at a high level, how the facility will be designed and operated to achieve this mission. It frequently refers the reader to additional documentation that provides more detail in specific areas. Effective evaluation of a technology consists of a variety of informal and formal demonstrations involving individual technology systems or subsystems, integrated technology system combinations, or complete integrated treatment trains. Informal demonstrations will typically be used to gather general operating information and to establish a basis for development of formal demonstration plans. Formal demonstrations consist of a specific series of tests that are used to rigorously demonstrate the operation or performance of a specific system configuration

  16. Framework for Integrating Safety, Operations, Security, and Safeguards in the Design and Operation of Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Horak, Karl Emanuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaChance, Jeffrey L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tolk, Keith Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitehead, Donnie Wayne [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2007-10-01

    The US is currently on the brink of a nuclear renaissance that will result in near-term construction of new nuclear power plants. In addition, the Department of Energy’s (DOE) ambitious new Global Nuclear Energy Partnership (GNEP) program includes facilities for reprocessing spent nuclear fuel and reactors for transmuting safeguards material. The use of nuclear power and material has inherent safety, security, and safeguards (SSS) concerns that can impact the operation of the facilities. Recent concern over terrorist attacks and nuclear proliferation led to an increased emphasis on security and safeguard issues as well as the more traditional safety emphasis. To meet both domestic and international requirements, nuclear facilities include specific SSS measures that are identified and evaluated through the use of detailed analysis techniques. In the past, these individual assessments have not been integrated, which led to inefficient and costly design and operational requirements. This report provides a framework for a new paradigm where safety, operations, security, and safeguards (SOSS) are integrated into the design and operation of a new facility to decrease cost and increase effectiveness. Although the focus of this framework is on new nuclear facilities, most of the concepts could be applied to any new, high-risk facility.

  17. Conceptual design of a fission-based integrated test facility for fusion reactor components

    International Nuclear Information System (INIS)

    Watts, K.D.; Deis, G.A.; Hsu, P.Y.S.; Longhurst, G.R.; Masson, L.S.; Miller, L.G.

    1982-01-01

    The testing of fusion materials and components in fission reactors will become increasingly important because of the lack of fusion engineering test devices in the immediate future and the increasing long-term demand for fusion testing when a fusion reactor test station becomes available. This paper presents the conceptual design of a fission-based Integrated Test Facility (ITF) developed by EG and G Idaho. This facility can accommodate entire first wall/blanket (FW/B) test modules such as those proposed for INTOR and can also accommodate smaller cylindrical modules similar to those designed by Oak Ridge National Laboratory (ORNL) and Westinghouse. In addition, the facility can be used to test bulk breeder blanket materials, materials for tritium permeation, and components for performance in a nuclear environment. The ITF provides a cyclic neutron/gamma flux as well as the numerous module and experiment support functions required for truly integrated tests

  18. Validation of an integral conceptual model of frailty in older residents of assisted living facilities

    NARCIS (Netherlands)

    Gobbens, R.J.J.; Krans, A.; van Assen, M.A.L.M.

    2015-01-01

    Objective The aim of this cross-sectional study was to examine the validity of an integral model of the associations between life-course determinants, disease(s), frailty, and adverse outcomes in older persons who are resident in assisted living facilities. Methods Between June 2013 and May 2014

  19. Validation of an integral conceptual model of frailty in older residents of assisted living facilities

    NARCIS (Netherlands)

    Gobbens, Robbert J J; Krans, Anita; van Assen, Marcel A L M

    2015-01-01

    Objective: The aim of this cross-sectional study was to examine the validity of an integral model of the associations between life-course determinants, disease(s), frailty, and adverse outcomes in older persons who are resident in assisted living facilities. Methods: Between June 2013 and May 2014

  20. The integration of expert knowledge in decision support systems for facility location planning

    NARCIS (Netherlands)

    Arentze, T.A.; Borgers, A.W.J.; Timmermans, H.J.P.

    1995-01-01

    The integration of expert systems in DSS has led to a new generation of systems commonly referred to as knowledge-based or intelligent DSS. This paper investigates the use of expert system technology for the development of a knowledge-based DSS for the planning of retail and service facilities. The

  1. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering.  Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  2. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems
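For the harmonic oscillator the thermal Wigner density is exactly Gaussian, which makes it a convenient reference for checking any sampler of the kind PIM provides. The sketch below is not an implementation of PIM itself; it samples the exact momentum marginal for a unit oscillator (m = ħ = ω = 1, an assumption made for brevity) and exposes the distinctly non-classical zero-point limit.

```python
# Exact-result sketch for the harmonic oscillator; a reference check,
# not the PIM cumulant-expansion machinery.
import math
import random

def wigner_p_variance(beta: float) -> float:
    """<p^2> of the thermal Wigner density for m = hbar = omega = 1:
    (1/2) * coth(beta/2). As beta -> infinity this tends to the
    zero-point value 1/2 rather than the classical 1/beta."""
    return 0.5 / math.tanh(beta / 2.0)

def sample_p(beta: float, n: int, seed: int = 0) -> list:
    """Draw momenta from the (Gaussian) momentum marginal of the Wigner density."""
    rng = random.Random(seed)
    sigma = math.sqrt(wigner_p_variance(beta))
    return [rng.gauss(0.0, sigma) for _ in range(n)]
```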

  3. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  4. Operational experience with the Sizewell B integrated plant computer system

    International Nuclear Information System (INIS)

    Ladner, J.E.J.; Alexander, N.C.; Fitzpatrick, J.A.

    1997-01-01

    The Westinghouse Integrated System for Centralised Operation (WISCO) is the primary plant control system at the Sizewell B Power Station. It comprises three subsystems: the High Integrity Control System (HICS), the Process Control System (PCS) and the Distributed Computer System (DCS). The HICS performs the control and data acquisition of nuclear-safety-significant plant systems. The PCS uses redundant data processing unit pairs. The workstations and servers of the DCS communicate with each other over standard Ethernet. The maintenance requirements for every plant system are covered by a Maintenance Strategy Report, and the breakdown of these reports is listed. The WISCO system has performed exceptionally well. Due to the diagnostic information presented by the HICS, problems could normally be resolved within 24 hours. There have been some 200 outstanding modifications to the system, and the modification procedure is briefly described. (A.K.)

  5. Computational Approaches for Integrative Analysis of the Metabolome and Microbiome

    Directory of Open Access Journals (Sweden)

    Jasmine Chong

    2017-11-01

    The study of the microbiome, the totality of all microbes inhabiting the host or an environmental niche, has experienced exponential growth over the past few years. The microbiome contributes functional genes and metabolites, and is an important factor for maintaining health. In this context, metabolomics is increasingly applied to complement sequencing-based approaches (marker genes or shotgun metagenomics to enable resolution of microbiome-conferred functionalities associated with health. However, analyzing the resulting multi-omics data remains a significant challenge in current microbiome studies. In this review, we provide an overview of different computational approaches that have been used in recent years for integrative analysis of metabolome and microbiome data, ranging from statistical correlation analysis to metabolic network-based modeling approaches. Throughout the process, we strive to present a unified conceptual framework for multi-omics integration and interpretation, as well as point out potential future directions.
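The simplest integration strategy named in this review, statistical correlation between a microbial taxon's abundance and a metabolite's level across samples, can be sketched in a few lines of pure Python via Spearman's rank correlation. Real studies would use dedicated multi-omics tools and correct for multiple testing; this is a minimal illustration only.

```python
# Minimal Spearman rank correlation (rank transform + Pearson on ranks),
# as one might apply between a taxon and a metabolite across samples.

def _ranks(xs):
    """Ranks starting at 1, with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```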

  6. A description of the demonstration Integral Fast Reactor fuel cycle facility

    International Nuclear Information System (INIS)

    Courtney, J.C.; Carnes, M.D.; Dwight, C.C.; Forrester, R.J.

    1991-01-01

    A fuel examination facility at the Idaho National Engineering Laboratory is being converted into a facility that will electrochemically process spent fuel. This is an important step in the demonstration of the Integral Fast Reactor concept being developed by Argonne National Laboratory. Renovations are designed to bring the facility up to current health and safety and environmental standards and to support its new mission. Improvements include the addition of high-reliability earthquake hardened off-gas and electrical power systems, the upgrading of radiological instrumentation, and the incorporation of advances in contamination control. A major task is the construction of a new equipment repair and decontamination facility in the basement of the building to support operations

  7. Total quality through computer integrated manufacturing in the pharmaceutical industry.

    Science.gov (United States)

    Ufret, C M

    1995-01-01

    The role of Computer Integrated Manufacturing (CIM) in the pursuit of total quality in pharmaceutical manufacturing is assessed. CIM key objectives, design criteria, and performance measurements, in addition to its scope and implementation in a hierarchical structure, are explored in detail. Key elements for the success of each phase in a CIM project and a brief status of current CIM implementations in the pharmaceutical industry are presented. The role of World Class Manufacturing performance standards and other key issues in achieving full CIM benefits are also addressed.

  8. Computer integrated construction at AB building in reprocessing plant

    International Nuclear Information System (INIS)

    Takami, Masahiro; Azuchi, Takehiro; Sekiguchi, Kenji

    1999-01-01

    JNFL (Japan Nuclear Fuel Limited) is proceeding with construction of the spent nuclear fuel reprocessing plant at Rokkasho Village in Aomori Prefecture, which is now approaching the busiest period of construction. We are attempting to complete the civil work of the AB Building and KA Building within a very short construction term by applying the CIC (Computer Integrated Construction) concept, in spite of hard construction conditions such as the massive and complicated building structure, interference with M and E (Mechanical and Electrical) work, severe winter weather, and the remote site location. The key technologies of CIC are three-dimensional CAD, an information network, and the prefabrication and mechanization of site work. (author)

  9. Integration of Openstack cloud resources in BES III computing cluster

    Science.gov (United States)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for the data processing of high-energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. In order to make cloud resources simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud with different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case shows that resource efficiency is greatly improved.
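The queue-driven scheduling idea described here can be sketched as a mapping from per-queue pending-job counts to per-queue VM requests. The function name, slot size, and queue labels below are invented for illustration; they are not the vpmanager API.

```python
# Hypothetical sketch of translating batch-queue depth into VM demand,
# as a vpmanager-like layer over Torque/HTCondor queues might do.

def vm_demand(queues: dict, slots_per_vm: int = 8) -> dict:
    """Map {queue name: pending jobs} to {queue name: VMs to schedule},
    rounding up and omitting queues with no pending work."""
    return {q: -(-jobs // slots_per_vm)  # ceiling division
            for q, jobs in queues.items() if jobs > 0}
```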

  10. Integration of rocket turbine design and analysis through computer graphics

    Science.gov (United States)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  11. Computed tomographic evaluation of dinosaur egg shell integrity

    International Nuclear Information System (INIS)

    Jones, J.C.; Greenberg, W.; Ayers, S.

    1998-01-01

    The purpose of this study was to determine whether computed tomography (CT) could be used to identify hatching holes in partially embedded dinosaur eggs. One Faveololithus and two Dendroolithus eggs were examined using a fourth generation CT scanner. The eggs were partially embedded in a fossilized sediment matrix, with the exposed portion of the shell appearing intact. In CT images of all three eggs, the shells appeared hyperdense relative to the matrix. Hatching holes were visible as large gaps in the embedded portion of the shell, with inwardly displaced shell fragments. It was concluded that CT is an effective technique for nondestructively assessing dinosaur egg shell integrity

  12. 3-D computer graphics based on integral photography.

    Science.gov (United States)

    Naemura, T; Yoshida, T; Harashima, H

    2001-02-12

    Integral photography (IP), which is one of the ideal 3-D photographic technologies, can be regarded as a method of capturing and displaying light rays passing through a plane. The NHK Science and Technical Research Laboratories have developed a real-time IP system using an HDTV camera and an optical fiber array. In this paper, the authors propose a method of synthesizing arbitrary views from IP images captured by the HDTV camera. This is a kind of image-based rendering system, founded on the 4-D data-space representation of light rays. Experimental results show the potential to improve the quality of images rendered by computer graphics techniques.
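The ray-capture view of IP can be illustrated with a toy 1-D light field: each captured ray is indexed by its lenslet position s and pixel offset u, and synthesizing a view for a virtual pinhole reduces to picking one ray per lenslet. The dimensions and sampling scheme here are invented for illustration and do not reflect the NHK system's actual data layout.

```python
# Toy 1-D light-field lookup illustrating ray selection for view synthesis.

def render_view(light_field: dict, nu: int, pinhole_u: int) -> list:
    """For each lenslet s, select the ray whose pixel offset u matches the
    virtual pinhole direction. light_field maps (s, u) -> intensity;
    nu is the number of pixel offsets behind each lenslet."""
    n_lenslets = len(light_field) // nu
    return [light_field[(s, pinhole_u % nu)] for s in range(n_lenslets)]
```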

  13. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  14. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

    A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600

  15. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-05-01

    A proposal has been made at LBL to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multilevel, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data through typical transformations and correlations in under 30 s. The throughput for such a facility, for five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600. 3 figures

  16. Analog Integrated Circuit Design for Spike Time Dependent Encoder and Reservoir in Reservoir Computing Processors

    Science.gov (United States)

    2018-01-01

    Subject terms: neuromorphic computing, neuron design, spike time dependent encoder, reservoir computing. This multidisciplinary effort encompassed high-performance computing, nanotechnology, integrated circuits, and integrated systems.

  17. Health workers' knowledge of and attitudes towards computer applications in rural African health facilities.

    Science.gov (United States)

    Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E; Blank, Antje

    2014-01-01

    The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. The objective was to report an assessment of health providers' computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA describe the association between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. A total of 108 providers responded, 63% from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge, and providers expressed positive attitudes towards computer use in the workplace. Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes to computers found in this study underscore that rural care providers, too, are ready to use such technology.
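The chi-squared test of association used in the study can be illustrated on a contingency table such as prior computer training by country; the counts in the test below are invented, not the study's data.

```python
# Pearson chi-squared statistic for an r x c contingency table of counts,
# the test family the study applied to knowledge/attitude associations.

def chi2_stat(table):
    """Sum over cells of (observed - expected)^2 / expected, where the
    expected count is row_total * column_total / grand_total."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat
```

The statistic would then be compared against the chi-squared distribution with (r-1)(c-1) degrees of freedom to obtain a p-value.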

  18. Development of the computer code to monitor gamma radiation in the nuclear facility environment

    International Nuclear Information System (INIS)

    Akhmad, Y. R.; Pudjiyanto, M.S.

    1998-01-01

    Computer codes for gamma radiation monitoring in the vicinity of nuclear facilities have been developed and can be introduced to commercial portable gamma analyzers. The crucial first-year milestone was achieved: the codes were tested to transfer data files (pulse height distributions) from the Micro NOMAD gamma spectrometer (an ORTEC product) and then convert them into dosimetric and physical quantities. These computer codes are called GABATAN (Gamma Analyzer of Batan) and NAGABAT (Natural Gamma Analyzer of Batan). GABATAN can be used at various nuclear facilities for analyzing gamma fields up to 9 MeV, while NAGABAT can be used for analyzing the contribution of natural gamma rays to the exposure rate at a given location

  19. Computer program for storage of historical and routine safety data related to radiologically controlled facilities

    International Nuclear Information System (INIS)

    Marsh, D.A.; Hall, C.J.

    1984-01-01

    A method for tracking and quickly retrieving the radiological status of radiation and industrial safety systems in an active or inactive facility has been developed. The system uses a minicomputer, a graphics plotter, and mass storage devices. Software has been developed which allows input and storage of architectural details, radiological conditions such as exposure rates, current locations of safety systems, and routine and historical information on exposure and contamination levels. A blueprint-size digitizer is used for input. The computer program retains facility floor plans in three-dimensional arrays. The software accesses an eight-pen color plotter for output. The plotter generates color plots of the floor plans and safety systems on 8 1/2 x 11 or 20 x 30 paper or on overhead transparencies for reports and presentations

  20. Three-dimensional coupled Monte Carlo-discrete ordinates computational scheme for shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Chen, Y.; Fischer, U.

    2005-01-01

    Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields of several meters thickness. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-discrete ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT, and a newly developed coupling interface program for the mapping process. Test calculations were performed for comparison with MCNP solutions. Satisfactory agreement was obtained between the two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme, together with the program system, is a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)
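    The coupled scheme described above hinges on a mapping step: particles tallied on a coupling surface in the Monte Carlo stage become a boundary source for the discrete ordinates stage. A minimal sketch of that idea (hypothetical bin structures and data, not the actual MCNP/TORT interface):

```python
import random

# Illustrative sketch: particles that cross a coupling surface in the
# Monte Carlo stage are tallied into energy x angle bins, producing a
# normalized boundary source for the discrete ordinates (SN) stage.

E_EDGES = [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]   # MeV, hypothetical group structure
MU_EDGES = [-1.0, -0.5, 0.0, 0.5, 1.0]      # direction-cosine bins

def bin_index(value, edges):
    """Return the bin index of `value` within `edges` (upper edge clamped)."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return len(edges) - 2

def tally_surface_source(crossings):
    """Accumulate particle weights into an (energy, angle) histogram."""
    tally = [[0.0] * (len(MU_EDGES) - 1) for _ in range(len(E_EDGES) - 1)]
    for energy, mu, weight in crossings:
        tally[bin_index(energy, E_EDGES)][bin_index(mu, MU_EDGES)] += weight
    return tally

def normalize(tally, n_histories):
    """Normalize per source history, giving a fixed SN boundary source."""
    return [[w / n_histories for w in row] for row in tally]

# Fake Monte Carlo crossings: (energy in MeV, direction cosine, weight).
random.seed(1)
crossings = [(random.uniform(0.0, 10.0), random.uniform(-1.0, 1.0), 1.0)
             for _ in range(1000)]
source = normalize(tally_surface_source(crossings), n_histories=1000)
total = sum(sum(row) for row in source)
print(round(total, 6))  # total surface-crossing weight per history -> 1.0
```

    In the real scheme the mapping must also convert between the codes' geometry and quadrature conventions; this sketch only shows the binning and normalization logic.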

  1. Maintenance of reactor safety and control computers at a large government facility

    International Nuclear Information System (INIS)

    Brady, H.G.

    1985-01-01

    In 1950 the US Government contracted the Du Pont Company to design, build, and operate the Savannah River Plant (SRP). At the time, it was the largest construction project ever undertaken by man. It is still the largest of the Department of Energy facilities. In the nearly 35 years that have elapsed, Du Pont has met its commitments to the US Government and set world safety records in the construction and operation of nuclear facilities. Contributing factors in achieving production goals and setting the safety records are a staff of highly qualified personnel, a well maintained plant, and sound maintenance programs. There have been many ''first ever'' achievements at SRP. These ''firsts'' include: (1) computer control of a nuclear reactor, and (2) use of computer systems as safety circuits. This presentation discusses the maintenance program provided for these computer systems and all digital systems at SRP. An in-house computer maintenance program that was started in 1966 with five persons has grown to a staff of 40, with investments in computer hardware increasing from $4 million in 1970 to more than $60 million in this decade. 4 figs

  2. Opportunities for artificial intelligence application in computer- aided management of mixed waste incinerator facilities

    International Nuclear Information System (INIS)

    Rivera, A.L.; Ferrada, J.J.; Singh, S.P.N.

    1992-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site. It is designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). This facility, known as the TSCA Incinerator, services seven DOE/OR installations. The incinerator was recently authorized for production operation in the United States for the processing of mixed (radioactively contaminated-chemically hazardous) wastes as regulated under TSCA and RCRA. Operation of the TSCA Incinerator is highly constrained as a result of regulatory, institutional, technical, and resource availability requirements. These requirements impact the characteristics and disposition of incinerator residues, limit the quality of liquid and gaseous effluents, limit the characteristics and rates of waste feeds and operating conditions, and restrict the handling of the waste feed inventories. This incinerator facility presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation, to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. Demonstrated computer-aided management systems could be transferred to future mixed waste incinerator facilities

  3. Automation of a cryogenic facility by commercial process-control computer

    International Nuclear Information System (INIS)

    Sondericker, J.H.; Campbell, D.; Zantopp, D.

    1983-01-01

    To ensure that Brookhaven's superconducting magnets are reliable and their field quality meets accelerator requirements, each magnet is pre-tested at operating conditions after construction. MAGCOOL, the production magnet test facility, was designed to perform these tests, having the capacity to test ten magnets per five-day week. This paper describes the control aspects of MAGCOOL and the advantages afforded the designers by the implementation of a commercial process control computer system

  4. Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules.

    Directory of Open Access Journals (Sweden)

    Konda Leela Sarath Kumar

    Full Text Available Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e., high false-positive rates and/or limited coverage. The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools, including VEGA (accuracy = 45.00% and CCR = 54.17% with 'High' reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), the coverage was very low (only 10 out of 77 molecules were predicted reliably). Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening their molecules, and reducing time, cost and animal testing.
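    The reported figures follow the standard confusion-matrix definitions, with CCR (correct classification rate) as the mean of sensitivity and specificity. The counts below are hypothetical but chosen so that, for a 77-molecule set, they reproduce all four reported values:

```python
# Standard confusion-matrix metrics: accuracy, CCR (balanced accuracy),
# sensitivity (true positive rate), specificity (true negative rate).

def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    ccr = (sensitivity + specificity) / 2  # mean of sens. and spec.
    return accuracy, ccr, sensitivity, specificity

# Hypothetical counts (30 sensitizers, 47 non-sensitizers; 77 molecules)
# that reproduce the abstract's figures.
acc, ccr, sens, spec = classification_metrics(tp=21, fn=9, tn=37, fp=10)
print(f"acc={acc:.2%} CCR={ccr:.2%} sens={sens:.2%} spec={spec:.2%}")
# -> acc=75.32% CCR=74.36% sens=70.00% spec=78.72%
```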

  5. Computational Acoustics: Computational PDEs, Pseudodifferential Equations, Path Integrals, and All That Jazz

    Science.gov (United States)

    Fishman, Louis

    2000-11-01

    The role of mathematical modeling in the physical sciences will be briefly addressed. Examples will focus on computational acoustics, with applications to underwater sound propagation, electromagnetic modeling, optics, and seismic inversion. Direct and inverse wave propagation problems in both the time and frequency domains will be considered. Focusing on fixed-frequency (elliptic) wave propagation problems, the usual, two-way, partial differential equation formulation will be exactly reformulated, in a well-posed manner, as a one-way (marching) problem. This is advantageous for both direct and inverse considerations, as well as stochastic modeling problems. The reformulation will require the introduction of pseudodifferential operators and their accompanying phase space analysis (calculus), in addition to path integral representations for the fundamental solutions and their subsequent computational algorithms. Unlike the more traditional, purely numerical applications of, for example, finite-difference and finite-element methods, this approach, in effect, writes the exact, or, more generally, the asymptotically correct, answer as a functional integral and, subsequently, computes it directly. The overall computational philosophy is to combine analysis, asymptotics, and numerical methods to attack complicated, real-world problems. Exact and asymptotic analysis will stress the complementary nature of the direct and inverse formulations, as well as indicating the explicit structural connections between the time- and frequency-domain solutions.
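    One-way (marching) reformulations of fixed-frequency propagation can be illustrated in the simplest setting: the paraxial approximation to the Helmholtz equation, marched in range by the split-step Fourier method. This is a sketch of the marching idea only, not the exact pseudodifferential formulation discussed in the abstract; all parameters are illustrative:

```python
import numpy as np

# Paraxial one-way equation  i du/dx = -(1/(2k)) d^2u/dz^2, advanced in
# range x by split-step Fourier marching. Free-space marching with a
# unit-modulus spectral multiplier conserves the L2 norm of the field.

k = 2 * np.pi                      # reference wavenumber (wavelength = 1)
nz, dz, dx = 256, 0.1, 0.05
z = (np.arange(nz) - nz / 2) * dz
kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)   # transverse wavenumbers

u = np.exp(-z**2)                  # Gaussian beam at range x = 0
phase = np.exp(-1j * kz**2 / (2 * k) * dx)  # free-space propagator per step

for _ in range(100):               # march 100 range steps
    u = np.fft.ifft(phase * np.fft.fft(u))

norm0 = np.sqrt(np.sum(np.abs(np.exp(-z**2))**2) * dz)
norm1 = np.sqrt(np.sum(np.abs(u)**2) * dz)
print(round(norm1 / norm0, 6))     # unitary marching -> 1.0
```

    A refractive medium would add a second split step multiplying by exp(i k (n(z) - 1) dx) in physical space; the wide-angle and pseudodifferential corrections replace the quadratic spectral phase with a more accurate square-root symbol.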

  6. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    Science.gov (United States)

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: it allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible: users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-)workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high-performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a
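    The hierarchical-workflow concept, in which an entire sub-workflow is embedded as a single node, can be sketched as follows (hypothetical classes and data, not Tavaxy's actual API):

```python
# Sketch of hierarchical workflows: a pipeline step is either an atomic
# task or a whole sub-workflow, so existing Taverna or Galaxy fragments
# can be wrapped as single nodes and composed into one hybrid pipeline.

class Task:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def run(self, data):
        return self.fn(data)

class Workflow:
    """A sequential pipeline whose steps may themselves be Workflows."""
    def __init__(self, name, steps):
        self.name, self.steps = name, steps

    def run(self, data):
        for step in self.steps:    # Task and Workflow share run()
            data = step.run(data)
        return data

# A "Galaxy" fragment and a "Taverna" fragment, each wrapped as a node.
galaxy_part = Workflow("galaxy_qc", [Task("trim", lambda s: s.strip()),
                                     Task("upper", lambda s: s.upper())])
taverna_part = Workflow("taverna_annotate",
                        [Task("tag", lambda s: s + "|annotated")])

hybrid = Workflow("hybrid", [galaxy_part, taverna_part])
print(hybrid.run("  acgt  "))      # -> ACGT|annotated
```

    Because both node types expose the same run() interface, the outer engine never needs to know whether a step is native or an imported sub-workflow; that is the essence of the design-time integration described above.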

  7. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Directory of Open Access Journals (Sweden)

    Abouelhoda Mohamed

    2012-05-01

    Full Text Available Abstract Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: it allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible: users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-)workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high-performance cloud computing to cope with the increasing data size and

  8. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Science.gov (United States)

    2012-01-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. 
The system

  9. Integrated Payment And Delivery Models Offer Opportunities And Challenges For Residential Care Facilities.

    Science.gov (United States)

    Grabowski, David C; Caudry, Daryl J; Dean, Katie M; Stevenson, David G

    2015-10-01

    Under health care reform, new financing and delivery models are being piloted to integrate health and long-term care services for older adults. Programs using these models generally have not included residential care facilities. Instead, most of them have focused on long-term care recipients in the community or the nursing home. Our analyses indicate that individuals living in residential care facilities have similarly high rates of chronic illness and Medicare utilization when compared with matched individuals in the community and nursing home, and rates of functional dependency that fall between those of their counterparts in the other two settings. These results suggest that the residential care facility population could benefit greatly from models that coordinated health and long-term care services. However, few providers have invested in the infrastructure needed to support integrated delivery models. Challenges to greater care integration include the private-pay basis for residential care facility services, which precludes shared savings from reduced Medicare costs, and residents' preference for living in a home-like, noninstitutional environment. Project HOPE—The People-to-People Health Foundation, Inc.

  10. Computer based plant display and digital control system of Wolsong NPP Tritium Removal Facility

    International Nuclear Information System (INIS)

    Jung, C.; Smith, B.; Tosello, G.; Grosbois, J. de; Ahn, J.

    2007-01-01

    The Wolsong Tritium Removal Facility (WTRF) is an AECL-designed, first-of-a-kind facility that removes tritium from the heavy water used in systems of the CANDU reactors in operation at the Wolsong Nuclear Power Plant in South Korea. The Plant Display and Control System (PDCS) provides digital plant monitoring and control for the WTRF and offers the advantages of state-of-the-art digital control system technologies for operations and maintenance. The overall features of the PDCS are described, and some of the specific approaches taken on the project to save construction time and costs, to reduce in-service life-cycle costs, and to improve quality are presented. The PDCS consists of two separate computer sub-systems: the Digital Control System (DCS) and the Plant Display System (PDS). The PDS provides the computer-based Human Machine Interface (HMI) for operators, and permits efficient supervisory or device-level monitoring and control. A System Maintenance Console (SMC) is included in the PDS for the purpose of software and hardware configuration and on-line maintenance. A Historical Data System (HDS) is also included in the PDS as a data server that continuously captures and logs process data and events for long-term storage and on-demand selective retrieval. The PDCS of WTRF has been designed and implemented based on an off-the-shelf PDS/DCS product combination, the DeltaV system from Emerson. The design includes fully redundant Ethernet network communications, controllers, and power supplies, with redundancy on selected I/O modules. The DCS provides fieldbus communications to interface with third-party controllers supplied on specialized skids, and supports HART communication with field transmitters. The DCS control logic was configured using a modular and graphical approach. The control strategies are primarily device control modules implemented as autonomous control loops, using the IEC 61131-3 Function Block Diagram (FBD) and Structured Text languages
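    A "device control module implemented as an autonomous control loop" can be sketched in a few lines; the valve logic, timeout, and names below are illustrative assumptions, not the WTRF configuration:

```python
# Sketch of a device control module as an autonomous scan loop: each scan
# compares the commanded state with field feedback and latches a fault if
# the device fails to reach position within a travel timeout.

class ValveModule:
    TRAVEL_TIMEOUT = 5  # scan cycles allowed for the valve to travel

    def __init__(self):
        self.command = "CLOSE"
        self.feedback = "CLOSED"
        self.fault = False
        self._timer = 0

    def set_command(self, command):
        self.command = command
        self._timer = 0            # restart travel timer on new command

    def scan(self, feedback):
        """One scan of the autonomous loop with fresh field feedback."""
        self.feedback = feedback
        expected = "OPEN" if self.command == "OPEN" else "CLOSED"
        if self.feedback == expected:
            self._timer = 0
            self.fault = False
        else:
            self._timer += 1
            if self._timer > self.TRAVEL_TIMEOUT:
                self.fault = True  # device did not travel in time
        return self.fault

valve = ValveModule()
valve.set_command("OPEN")
for _ in range(3):
    valve.scan("CLOSED")           # still travelling: no fault yet
print(valve.fault)                 # -> False
for _ in range(4):
    valve.scan("CLOSED")           # travel timeout exceeded
print(valve.fault)                 # -> True
```

    In an FBD environment the same logic would be drawn as interconnected function blocks; the point of the module pattern is that each device carries its own state, interlocks, and fault detection rather than relying on centralized code.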

  11. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulos, A.J.; Graves, H.

    1990-01-01

    The program CARES (Computer Analysis for Rapid Evaluation of Structures) is an integrated computational system being developed by Brookhaven National Laboratory (BNL) for the U.S. Nuclear Regulatory Commission. It is specifically designed to be a personal computer (PC) operated package which may be used to determine the validity and accuracy of analysis methodologies used for structural safety evaluations of nuclear power plants. CARES is structured in a modular format. Each module performs a specific type of analysis, e.g., static or dynamic, linear or nonlinear, etc. This paper describes the various features which have been implemented into the Seismic Module of CARES

  12. 15 years of The Hungarian integral type test facility: horizontal SG related PMK-2 experiments

    International Nuclear Information System (INIS)

    Perneczky, L.; Ezsoel, G.; Guba, A.; Szabados, L.

    2001-01-01

    support of accident management (AM) procedures. During the 15 operational years - from May 1986 onwards, starting with the first of four IAEA Standard Problem Exercise tests - 48 different experiments were performed on this integral-type test facility, including cold and hot leg break LOCA, primary-to-secondary leakage (PRISE), loss of flow, loss of feedwater, and natural circulation disturbance tests. The first part of the paper gives an overview of 11 experiments related to the operational behaviour of horizontal steam generators, performed in the framework of national research projects, the IAEA Technical Co-operation Project RER/9/004 (Standard Problem Exercises), and three EU PHARE projects (in co-operation with AEAT, FRAMATOM, SIEMENS, IPSN, GRS, FZR and VVER owner countries). The second part summarises results of two types of tests in shutdown conditions with RELAP5 post-test analysis, which may also be of interest for computer simulation of the horizontal SG. (orig.)

  13. Integrating publicly-available data to generate computationally ...

    Science.gov (United States)

    The adverse outcome pathway (AOP) framework provides a way of organizing knowledge related to the key biological events that result in a particular health outcome. For the majority of environmental chemicals, the availability of curated pathways characterizing potential toxicity is limited. Methods are needed to assimilate large amounts of available molecular data and quickly generate putative AOPs for further testing and use in hazard assessment. A graph-based workflow was used to facilitate the integration of multiple data types to generate computationally-predicted (cp) AOPs. Edges between graph entities were identified through direct experimental or literature information or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways measured by differential gene expression and high-throughput screening targets. Sub-networks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (hepatic steatosis) were extracted using the network topology. Comparison of the cpAOP subnetworks to published mechanistic descriptions for both CCl4 toxicity and hepatic steatosis demonstrate that computational approaches can be used to replicate manually curated AOPs and identify pathway targets that lack genomic mar
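    The graph-based workflow can be sketched with a toy edge list: entities (chemical treatments, perturbed pathways, outcomes) become nodes, and a putative cpAOP subnetwork for a reference chemical is extracted by reachability over the network topology. The edges below are illustrative only, not the actual TG-GATEs/ToxCast network:

```python
from collections import deque

# Toy cpAOP network: edges link chemicals to perturbed pathways and
# pathways to outcomes; a candidate AOP subnetwork for a reference
# chemical is the set of nodes reachable from it.

edges = [
    ("CCl4", "oxidative_stress"),
    ("oxidative_stress", "lipid_peroxidation"),
    ("lipid_peroxidation", "hepatic_steatosis"),
    ("CCl4", "CYP2E1_activation"),
    ("CYP2E1_activation", "oxidative_stress"),
    ("phenobarbital", "CAR_activation"),   # unrelated branch
]

graph = {}
for src, dst in edges:
    graph.setdefault(src, []).append(dst)

def subnetwork(graph, start):
    """Breadth-first extraction of all nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

cpaop = subnetwork(graph, "CCl4")
print("hepatic_steatosis" in cpaop, "CAR_activation" in cpaop)  # -> True False
```

    The actual workflow additionally infers edges by frequent itemset mining over gene expression and high-throughput screening data; here reachability alone stands in for that topology-based extraction step.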

  14. Integrated Geo Hazard Management System in Cloud Computing Technology

    Science.gov (United States)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geo-hazards can reduce environmental health and cause huge economic losses, especially in mountainous areas. In order to mitigate geo-hazards effectively, cloud computing technology is introduced for managing a geo-hazard database. Cloud computing technology and its services are capable of providing stakeholders with geo-hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System consists of the network management and operation to monitor geo-hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system will provide an easily managed, flexible measuring system whose data management operates autonomously and can be controlled by commands to collect data remotely using a “cloud” computing system. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo-hazard database management using a “cloud system”. This system will later be used as part of the development activities, helping to minimize the frequency of geo-hazards and the risk in the research area.

  15. Development of an integrated facility for processing transuranium solid wastes at the Savannah River Plant

    International Nuclear Information System (INIS)

    Boersma, M.D.; Hootman, H.E.; Permar, P.H.

    1978-01-01

    An integrated facility is being designed for processing solid wastes contaminated with long-lived alpha emitting (TRU) nuclides; this waste has been stored retrievably at the Savannah River Plant since 1965. The stored waste, having a volume of 10^4 m^3 and containing 3x10^5 Ci of transuranics, consists of both mixed combustible trash and failed and obsolete equipment primarily from transuranic production and associated laboratory operations. The facility for processing solid transuranic waste will consist of five processing modules: 1) unpackaging, sorting, and assaying; 2) treatment of combustibles by controlled air incineration; 3) size reduction of noncombustibles by plasma-arc cutting followed by decontamination by electropolishing; 4) fixation of the processed waste in cement; and 5) packaging for shipment to a federal repository. The facility is projected for construction in the mid-1980's. Pilot facilities, sized to manage currently generated wastes, will also demonstrate the key process steps of incineration of combustibles and size reduction/decontamination of noncombustibles; these facilities are projected for 1980-81. Development programs leading to these extensive new facilities are described

  16. Development of an integrated facility for processing TRU solid wastes at the Savannah River Plant

    International Nuclear Information System (INIS)

    Boersma, M.D.; Hootman, H.E.; Permar, P.H.

    1977-01-01

    An integrated facility is being designed for processing solid wastes contaminated with long-lived alpha emitting (TRU) nuclides; this waste has been stored retrievably at the Savannah River Plant since 1965. The stored waste, having a volume of 10^4 m^3 and containing 3 x 10^5 Ci of transuranics, consists of both mixed combustible trash and failed and obsolete equipment primarily from transuranic production and associated laboratory operations. The facility for processing solid transuranic waste will consist of five processing modules: (1) unpackaging, sorting, and assaying; (2) treatment of combustibles by controlled air incineration; (3) size reduction of noncombustibles by plasma-arc cutting followed by decontamination by electropolishing; (4) fixation of the processed waste in cement; and (5) packaging for shipment to a federal repository. The facility is projected for construction in the mid-1980's. Pilot facilities, sized to manage currently generated wastes, will also demonstrate the key process steps of incineration of combustibles and size reduction/decontamination of noncombustibles; these facilities are projected for 1980-81. Development programs leading to these extensive new facilities are described

  17. Passive BWR integral LOCA testing at the Karlstein test facility INKA

    Energy Technology Data Exchange (ETDEWEB)

    Drescher, Robert [AREVA GmbH, Erlangen (Germany); Wagner, Thomas [AREVA GmbH, Karlstein am Main (Germany); Leyer, Stephan [TH University of Applied Sciences, Deggendorf (Germany)

    2014-05-15

    KERENA is an innovative AREVA GmbH boiling water reactor (BWR) with passive safety systems (Generation III+). In order to verify the functionality of the reactor design, an experimental validation program was executed. For this purpose the INKA (Integral Teststand Karlstein) test facility was designed and erected. It is a mockup of the BWR containment with integrated pressure suppression system. While the scaling of the passive components and the levels match the original values, the volume scaling of the containment compartments is approximately 1:24. The storage capacity of the test facility pressure vessel corresponds to approximately 1/6 of the KERENA RPV and is supplied by a Benson boiler with a thermal power of 22 MW. In March 2013 the first integral test - Main Steam Line Break (MSLB) - was executed. The test measured the combined response of the passive safety systems to the postulated initiating event. The main goal was to demonstrate the ability of the passive systems to ensure core coverage, decay heat removal and to maintain the containment within defined limits. The results of the test showed that the passive safety systems are capable of bringing the plant to stable conditions, meeting all required safety targets with sufficient margins. The test thereby verified the function of those components and the interplay between them, and proved that INKA is a unique test facility, capable of performing integral tests of passive safety concepts under plant-like conditions. (orig.)

  18. Supporting Facility Management Processes through End-Users’ Integration and Coordinated BIM-GIS Technologies

    Directory of Open Access Journals (Sweden)

    Claudio Mirarchi

    2018-05-01

    Full Text Available The integration of facility management and building information modelling (BIM) is an innovative and critical undertaking to support facility maintenance and management. Even though recent research has proposed various methods and performed an increasing number of case studies, there are still communication-process issues to be addressed. This paper presents a theoretical framework for digital systems integration of virtual models and smart technologies. Based on a comprehensive analysis of existing technologies for indoor localization, a new workflow is defined and designed, and it is utilized in a practical case study to test the model performance. In the new workflow, a facility management supporting platform is proposed and characterized, featuring indoor positioning systems that allow end users to send geo-referenced reports to central virtual models. In addition, system requirements, information technology (IT) architecture and application procedures are presented. Results show that the integration of end users in the maintenance processes through smart and easy tools can overcome the existing limits of barcode systems and building management systems for failure localization. The proposed framework offers several advantages. First, it allows the identification of every element of an asset, including wide physical building elements (walls, floors, etc.), without requiring a prior mapping. Second, the entire cycle of maintenance activities is managed through a unique integrated system including the territorial dimension. Third, data are collected in a standard structure for future use. Furthermore, the integration of the process in a centralized BIM-GIS (geographical information system) information management system admits a scalable representation of the information supporting facility management processes in terms of assets and supply chain management and monitoring from a spatial perspective.

  19. A Computer Simulation to Assess the Nuclear Material Accountancy System of a MOX Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Portaix, C.G.; Binner, R.; John, H.

    2015-01-01

    SimMOX is a computer programme that simulates container histories as they pass through a MOX facility. It performs two parallel calculations: · the first quantifies the actual movements of material that might be expected to occur, given certain assumptions about, for instance, the accumulation of material and waste and their subsequent treatment; · the second quantifies the same movements on the basis of the operator's perception of the quantities involved; that is, it is based on assumptions about the quantities contained in the containers. Separate skeletal Excel programmes are provided, which can be configured to generate further accountancy results based on these two parallel calculations. SimMOX is flexible in that it makes few assumptions about the order and operational performance of the individual activities that might take place at each stage of the process. It is able to do this because its focus is on material flows, not on the performance of individual processes. Similarly, there are no preconceptions about the different types of containers that might be involved. At the macroscopic level, the simulation takes steady operation as its base case, i.e., the same quantity of material is deemed to enter and leave the simulated area over any given period. Transient situations can then be superimposed onto this base scene by simulating them as operational incidents. A general facility has been incorporated into SimMOX to enable the user to create an ''act of a play'' from a number of operational incidents that have been built into the programme. In this way a simulation can be constructed that predicts how the facility would respond to any number of transient activities. This computer programme can help assess the nuclear material accountancy system of a MOX fuel fabrication facility, for instance the implications of applying NRTA (near real time accountancy). (author)
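
The two parallel calculations described above amount to maintaining an "actual" and a "book" material balance side by side and comparing them. A minimal Python sketch of that idea (the quantities and function names are hypothetical illustrations, not the SimMOX data model):

```python
# Sketch: parallel "actual" vs. "operator book" material balances, in the
# spirit of SimMOX's two parallel calculations (hypothetical data, kg Pu).

def material_balance(transfers_in, transfers_out, begin_inv, end_inv):
    """Material Unaccounted For: MUF = BI + inputs - outputs - EI."""
    return begin_inv + sum(transfers_in) - sum(transfers_out) - end_inv

# Actual movements, including an unmeasured in-process hold-up of 0.12 kg.
actual_muf = material_balance([10.0, 9.5], [18.9], begin_inv=2.0, end_inv=2.48)

# Operator's book values assume nominal container contents.
book_muf = material_balance([10.0, 9.5], [19.0], begin_inv=2.0, end_inv=2.5)

print(round(actual_muf, 2), round(book_muf, 2))  # → 0.12 0.0
```

Comparing the two balances over time is what lets an accountancy assessment (e.g. of NRTA) ask whether the operator's declared figures would reveal or mask a given transient.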

  20. Structural integrity assessment based on the HFR Petten neutron beam facilities

    CERN Document Server

    Ohms, C; Idsert, P V D

    2002-01-01

    Neutrons are becoming recognized as a valuable tool for the structural-integrity assessment of industrial components and for advanced materials development. Microstructure, texture and residual stress analyses are commonly performed by neutron diffraction, and a joint CEN/ISO pre-standard for residual stress analysis is under development. Furthermore, neutrons allow defect analyses, i.e. of precipitates, voids, pores and cracks, through small-angle neutron scattering (SANS) or radiography. At the High Flux Reactor, 12 beam tubes have been installed for the extraction of thermal neutrons for such applications. Two of them are equipped with neutron diffractometers for residual stress and structure determination and have been extensively used in the past. Several other facilities are currently being reactivated and upgraded. These include the SANS and radiography facilities as well as a powder diffractometer. This paper summarizes the main characteristics and current status of these facilities as well as recently in...

  1. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also tested HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.
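
HA2lloc itself relies on hardware support in an extended memory management unit. Purely to illustrate the underlying idea it builds on, detecting heap memory errors by validating guard values placed around each allocation, here is a toy software model in Python (the class, memory layout, and canary value are all hypothetical, not HA2lloc's design):

```python
# Toy model: guard-word-based heap overflow detection. HA2lloc does this
# with hardware assistance; this sketch only models the concept in software.

CANARY = 0xDEADBEEF

class GuardedHeap:
    def __init__(self, size):
        self.mem = [0] * size
        self.allocs = {}  # base address -> usable length

    def alloc(self, addr, length):
        self.mem[addr] = CANARY                # leading guard word
        self.mem[addr + length + 1] = CANARY   # trailing guard word
        self.allocs[addr] = length
        return addr + 1                        # usable region starts past the guard

    def check(self, addr):
        """True if both guard words around the allocation are intact."""
        length = self.allocs[addr]
        return self.mem[addr] == CANARY and self.mem[addr + length + 1] == CANARY

heap = GuardedHeap(32)
base = heap.alloc(0, 8)
heap.mem[base + 8] = 42   # off-by-one write clobbers the trailing guard
print(heap.check(0))      # → False: the overflow is detected
```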

  2. Computer mapping and visualization of facilities for planning of D and D operations

    International Nuclear Information System (INIS)

    Wuller, C.E.; Gelb, G.H.; Cramond, R.; Cracraft, J.S.

    1995-01-01

    The lack of as-built drawings for many old nuclear facilities impedes planning for decontamination and decommissioning. Traditional manual walkdowns subject workers to lengthy exposure to radiological and other hazards. The authors have applied close-range photogrammetry, 3D solid modeling, computer graphics, database management, and virtual reality technologies to create geometrically accurate 3D computer models of the interiors of facilities. The required input to the process is a set of photographs that can be acquired in a brief time. They fit 3D primitive shapes to objects of interest in the photos and, at the same time, record attributes such as material type and link patches of texture from the source photos to facets of modeled objects. When they render the model as either static images or at video rates for a walk-through simulation, the phototextures are warped onto the objects, giving a photo-realistic impression. The authors have exported the data to commercial CAD, cost estimating, robotic simulation, and plant design applications. Results from several projects at old nuclear facilities are discussed

  3. Validation of an integral conceptual model of frailty in older residents of assisted living facilities.

    Science.gov (United States)

    Gobbens, Robbert J J; Krans, Anita; van Assen, Marcel A L M

    2015-01-01

    The aim of this cross-sectional study was to examine the validity of an integral model of the associations between life-course determinants, disease(s), frailty, and adverse outcomes in older persons resident in assisted living facilities. Between June 2013 and May 2014, seven assisted living facilities were contacted. A total of 221 persons completed the questionnaire on life-course determinants, frailty (using the Tilburg Frailty Indicator), self-reported chronic diseases, and the adverse outcomes disability, quality of life, health care utilization, and falls. Adverse outcomes were analyzed with sequential (logistic) regression analyses. The integral model is partially validated. Life-course determinants and disease(s) affected only physical frailty. All three frailty domains (physical, psychological, social) together affected disability, quality of life, visits to a general practitioner, and falls. Contrary to the model, disease(s) had no effect on adverse outcomes after controlling for frailty. Life-course determinants affected adverse outcomes, with an unhealthy lifestyle having consistent negative effects; women had more disability, scored lower on physical health, and received more personal and informal care after controlling for all other predictors. The integral model of frailty is less useful for predicting adverse outcomes in residents of assisted living facilities than in community-dwelling older persons, because these residents are much frailer and already have access to healthcare facilities. The present study showed that a multidimensional assessment of frailty, distinguishing three domains (physical, psychological, social), is beneficial with respect to predicting adverse outcomes in residents of assisted living facilities. Copyright © 2015. Published by Elsevier Ireland Ltd.

  4. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    Science.gov (United States)

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and the percentage of other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
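
As a sketch of how the optimization half of such a model works, the following Python snippet runs a minimal particle swarm over a stand-in surrogate function. In the study the surrogate would be the trained multi-layer perceptron; here a toy quadratic with a known maximum takes its place, and all parameter values are illustrative:

```python
# Sketch: particle swarm optimization maximizing a surrogate model's output.
# A trained MLP would normally supply `surrogate`; this toy quadratic with a
# known maximum at (2.0, -1.0) stands in for it (hypothetical).
import random

random.seed(0)

def surrogate(x, y):
    return 100.0 - (x - 2.0) ** 2 - (y + 1.0) ** 2

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [surrogate(*p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = surrogate(*pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso()
print(best, best_val)  # converges near (2.0, -1.0), value approaching 100
```

Swapping the toy quadratic for a trained network's forward pass turns this into the paper's "predict, then optimize the prediction" pattern.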

  5. Track Reconstruction with Cosmic Ray Data at the Tracker Integration Facility

    CERN Document Server

    Adam, Wolfgang; Dragicevic, Marko; Friedl, Markus; Fruhwirth, R; Hansel, S; Hrubec, Josef; Krammer, Manfred; Oberegger, Margit; Pernicka, Manfred; Schmid, Siegfried; Stark, Roland; Steininger, Helmut; Uhl, Dieter; Waltenberger, Wolfgang; Widl, Edmund; Van Mechelen, Pierre; Cardaci, Marco; Beaumont, Willem; de Langhe, Eric; de Wolf, Eddi A; Delmeire, Evelyne; Hashemi, Majid; Bouhali, Othmane; Charaf, Otman; Clerbaux, Barbara; Elgammal, J.-P. Dewulf. S; Hammad, Gregory Habib; de Lentdecker, Gilles; Marage, Pierre Edouard; Vander Velde, Catherine; Vanlaer, Pascal; Wickens, John; Adler, Volker; Devroede, Olivier; De Weirdt, Stijn; D'Hondt, Jorgen; Goorens, Robert; Heyninck, Jan; Maes, Joris; Mozer, Matthias Ulrich; Tavernier, Stefaan; Van Lancker, Luc; Van Mulders, Petra; Villella, Ilaria; Wastiels, C; Bonnet, Jean-Luc; Bruno, Giacomo; De Callatay, Bernard; Florins, Benoit; Giammanco, Andrea; Gregoire, Ghislain; Keutgen, Thomas; Kcira, Dorian; Lemaitre, Vincent; Michotte, Daniel; Militaru, Otilia; Piotrzkowski, Krzysztof; Quertermont, L; Roberfroid, Vincent; Rouby, Xavier; Teyssier, Daniel; Daubie, Evelyne; Anttila, Erkki; Czellar, Sandor; Engstrom, Pauli; Harkonen, J; Karimaki, V; Kostesmaa, J; Kuronen, Auli; Lampen, Tapio; Linden, Tomas; Luukka, Panja-Riina; Maenpaa, T; Michal, Sebastien; Tuominen, Eija; Tuominiemi, Jorma; Ageron, Michel; Baulieu, Guillaume; Bonnevaux, Alain; Boudoul, Gaelle; Chabanat, Eric; Chabert, Eric Christian; Chierici, Roberto; Contardo, Didier; Della Negra, Rodolphe; Dupasquier, Thierry; Gelin, Georges; Giraud, Noël; Guillot, Gérard; Estre, Nicolas; Haroutunian, Roger; Lumb, Nicholas; Perries, Stephane; Schirra, Florent; Trocme, Benjamin; Vanzetto, Sylvain; Agram, Jean-Laurent; Blaes, Reiner; Drouhin, Frédéric; Ernenwein, Jean-Pierre; Fontaine, Jean-Charles; Berst, Jean-Daniel; Brom, Jean-Marie; Didierjean, Francois; Goerlach, Ulrich; Graehling, Philippe; Gross, Laurent; Hosselet, J; Juillot, Pierre; Lounis, Abdenour; Maazouzi, Chaker; 
Olivetto, Christian; Strub, Roger; Van Hove, Pierre; Anagnostou, Georgios; Brauer, Richard; Esser, Hans; Feld, Lutz; Karpinski, Waclaw; Klein, Katja; Kukulies, Christoph; Olzem, Jan; Ostapchuk, Andrey; Pandoulas, Demetrios; Pierschel, Gerhard; Raupach, Frank; Schael, Stefan; Schwering, Georg; Sprenger, Daniel; Thomas, Maarten; Weber, Markus; Wittmer, Bruno; Wlochal, Michael; Beissel, Franz; Bock, E; Flugge, G; Gillissen, C; Hermanns, Thomas; Heydhausen, Dirk; Jahn, Dieter; Kaussen, Gordon; Linn, Alexander; Perchalla, Lars; Poettgens, Michael; Pooth, Oliver; Stahl, Achim; Zoeller, Marc Henning; Buhmann, Peter; Butz, Erik; Flucke, Gero; Hamdorf, Richard Helmut; Hauk, Johannes; Klanner, Robert; Pein, Uwe; Schleper, Peter; Steinbruck, G; Blum, P; De Boer, Wim; Dierlamm, Alexander; Dirkes, Guido; Fahrer, Manuel; Frey, Martin; Furgeri, Alexander; Hartmann, Frank; Heier, Stefan; Hoffmann, Karl-Heinz; Kaminski, Jochen; Ledermann, Bernhard; Liamsuwan, Thiansin; Muller, S; Muller, Th; Schilling, Frank-Peter; Simonis, Hans-Jürgen; Steck, Pia; Zhukov, Valery; Cariola, P; De Robertis, Giuseppe; Ferorelli, Raffaele; Fiore, Luigi; Preda, M; Sala, Giuliano; Silvestris, Lucia; Tempesta, Paolo; Zito, Giuseppe; Creanza, Donato; De Filippis, Nicola; De Palma, Mauro; Giordano, Domenico; Maggi, Giorgio; Manna, Norman; My, Salvatore; Selvaggi, Giovanna; Albergo, Sebastiano; Chiorboli, Massimiliano; Costa, Salvatore; Galanti, Mario; Giudice, Nunzio; Guardone, Nunzio; Noto, Francesco; Potenza, Renato; Saizu, Mirela Angela; Sparti, V; Sutera, Concetta; Tricomi, Alessia; Tuve, Cristina; Brianzi, Mirko; Civinini, Carlo; Maletta, Fernando; Manolescu, Florentina; Meschini, Marco; Paoletti, Simone; Sguazzoni, Giacomo; Broccolo, B; Ciulli, Vitaliano; Focardi, R. D'Alessandro. 
E; Frosali, Simone; Genta, Chiara; Landi, Gregorio; Lenzi, Piergiulio; Macchiolo, Anna; Magini, Nicolo; Parrini, Giuliano; Scarlini, Enrico; Cerati, Giuseppe Benedetto; Azzi, Patrizia; Bacchetta, Nicola; Candelori, Andrea; Dorigo, Tommaso; Kaminsky, A; Karaevski, S; Khomenkov, Volodymyr; Reznikov, Sergey; Tessaro, Mario; Bisello, Dario; De Mattia, Marco; Giubilato, Piero; Loreti, Maurizio; Mattiazzo, Serena; Nigro, Massimo; Paccagnella, Alessandro; Pantano, Devis; Pozzobon, Nicola; Tosi, Mia; Bilei, Gian Mario; Checcucci, Bruno; Fano, Livio; Servoli, Leonello; Ambroglini, Filippo; Babucci, Ezio; Benedetti, Daniele; Biasini, Maurizio; Caponeri, Benedetta; Covarelli, Roberto; Giorgi, Marco; Lariccia, Paolo; Mantovani, Giancarlo; Marcantonini, Marta; Postolache, Vasile; Santocchia, Attilio; Spiga, Daniele; Bagliesi, Giuseppe; Balestri, Gabriele; Berretta, Luca; Bianucci, S; Boccali, Tommaso; Bosi, Filippo; Bracci, Fabrizio; Castaldi, Rino; Ceccanti, Marco; Cecchi, Roberto; Cerri, Claudio; Cucoanes, Andi Sebastian; Dell'Orso, Roberto; Dobur, Didar; Dutta, Suchandra; Giassi, Alessandro; Giusti, Simone; Kartashov, Dmitry; Kraan, Aafke; Lomtadze, Teimuraz; Lungu, George-Adrian; Magazzu, Guido; Mammini, Paolo; Mariani, Filippo; Martinelli, Giovanni; Moggi, Andrea; Palla, Fabrizio; Palmonari, Francesco; Petragnani, Giulio; Profeti, Alessandro; Raffaelli, Fabrizio; Rizzi, Domenico; Sanguinetti, Giulio; Sarkar, Subir; Sentenac, Daniel; Serban, Alin Titus; Slav, Adrian; Soldani, A; Spagnolo, Paolo; Tenchini, Roberto; Tolaini, Sergio; Venturi, Andrea; Verdini, Piero Giorgio; Vos, Marcel; Zaccarelli, Luciano; Avanzini, Carlo; Basti, Andrea; Benucci, Leonardo; Bocci, Andrea; Cazzola, Ugo; Fiori, Francesco; Linari, Stefano; Massa, Maurizio; Messineo, Alberto; Segneri, Gabriele; Tonelli, Guido; Azzurri, Paolo; Bernardini, Jacopo; Borrello, Laura; Calzolari, Federico; Foa, Lorenzo; Gennai, Simone; Ligabue, Franco; Petrucciani, Giovanni; Rizzi, Andrea; Yang, Zong-Chang; Benotto, 
Franco; Demaria, Natale; Dumitrache, Floarea; Farano, R; Borgia, Maria Assunta; Castello, Roberto; Costa, Marco; Migliore, Ernesto; Romero, Alessandra; Abbaneo, Duccio; Abbas, M; Ahmed, Ijaz; Akhtar, I; Albert, Eric; Bloch, Christoph; Breuker, Horst; Butt, Shahid Aleem; Buchmuller, Oliver; Cattai, Ariella; Delaere, Christophe; Delattre, Michel; Edera, Laura Maria; Engstrom, Pauli; Eppard, Michael; Gateau, Maryline; Gill, Karl; Giolo-Nicollerat, Anne-Sylvie; Grabit, Robert; Honma, Alan; Huhtinen, Mika; Kloukinas, Kostas; Kortesmaa, Jarmo; Kottelat, Luc-Joseph; Kuronen, Auli; Leonardo, Nuno; Ljuslin, Christer; Mannelli, Marcello; Masetti, Lorenzo; Marchioro, Alessandro; Mersi, Stefano; Michal, Sebastien; Mirabito, Laurent; Muffat-Joly, Jeannine; Onnela, Antti; Paillard, Christian; Pal, Imre; Pernot, Jean-Francois; Petagna, Paolo; Petit, Patrick; Piccut, C; Pioppi, Michele; Postema, Hans; Ranieri, Riccardo; Ricci, Daniel; Rolandi, Gigi; Ronga, Frederic Jean; Sigaud, Christophe; Syed, A; Siegrist, Patrice; Tropea, Paola; Troska, Jan; Tsirou, Andromachi; Vander Donckt, Muriel; Vasey, François; Alagoz, Enver; Amsler, Claude; Chiochia, Vincenzo; Regenfus, Christian; Robmann, Peter; Rochet, Jacky; Rommerskirchen, Tanja; Schmidt, Alexander; Steiner, Stefan; Wilke, Lotte; Church, Ivan; Cole, Joanne; Coughlan, John A; Gay, Arnaud; Taghavi, S; Tomalin, Ian R; Bainbridge, Robert; Cripps, Nicholas; Fulcher, Jonathan; Hall, Geoffrey; Noy, Matthew; Pesaresi, Mark; Radicci, Valeria; Raymond, David Mark; Sharp, Peter; Stoye, Markus; Wingham, Matthew; Zorba, Osman; Goitom, Israel; Hobson, Peter R; Reid, Ivan; Teodorescu, Liliana; Hanson, Gail; Jeng, Geng-Yuan; Liu, Haidong; Pasztor, Gabriella; Satpathy, Asish; Stringer, Robert; Mangano, Boris; Affolder, K; Affolder, T; Allen, Andrea; Barge, Derek; Burke, Samuel; Callahan, D; Campagnari, Claudio; Crook, A; D'Alfonso, Mariarosaria; Dietch, J; Garberson, Jeffrey; Hale, David; Incandela, H; Incandela, Joe; Jaditz, Stephen; Kalavase, 
Puneeth; Kreyer, Steven Lawrence; Kyre, Susanne; Lamb, James; Mc Guinness, C; Mills, C; Nguyen, Harold; Nikolic, Milan; Lowette, Steven; Rebassoo, Finn; Ribnik, Jacob; Richman, Jeffrey; Rubinstein, Noah; Sanhueza, S; Shah, Yousaf Syed; Simms, L; Staszak, D; Stoner, J; Stuart, David; Swain, Sanjay Kumar; Vlimant, Jean-Roch; White, Dean; Ulmer, Keith; Wagner, Stephen Robert; Bagby, Linda; Bhat, Pushpalatha C; Burkett, Kevin; Cihangir, Selcuk; Gutsche, Oliver; Jensen, Hans; Johnson, Mark; Luzhetskiy, Nikolay; Mason, David; Miao, Ting; Moccia, Stefano; Noeding, Carsten; Ronzhin, Anatoly; Skup, Ewa; Spalding, William J; Spiegel, Leonard; Tkaczyk, Slawek; Yumiceva, Francisco; Zatserklyaniy, Andriy; Zerev, E; Anghel, Ioana Maria; Bazterra, Victor Eduardo; Gerber, Cecilia Elena; Khalatian, S; Shabalina, Elizaveta; Baringer, Philip; Bean, Alice; Chen, Jie; Hinchey, Carl Louis; Martin, Christophe; Moulik, Tania; Robinson, Richard; Gritsan, Andrei; Lae, Chung Khim; Tran, Nhan Viet; Everaerts, Pieter; Hahn, Kristan Allan; Harris, Philip; Nahn, Steve; Rudolph, Matthew; Sung, Kevin; Betchart, Burton; Demina, Regina; Gotra, Yury; Korjenevski, Sergey; Miner, Daniel Carl; Orbaker, Douglas; Christofek, Leonard; Hooper, Ryan; Landsberg, Greg; Nguyen, Duong; Narain, Meenakshi; Speer, Thomas; Tsang, Ka Vang

    2008-01-01

    The subsystems of the CMS silicon strip tracker were integrated and commissioned at the Tracker Integration Facility (TIF) in the period from November 2006 to July 2007. As part of the commissioning, large samples of cosmic ray data were recorded under various running conditions in the absence of a magnetic field. Cosmic rays detected by scintillation counters were used to trigger the readout of up to 15% of the final silicon strip detector, and over 4.7 million events were recorded. This document describes the cosmic track reconstruction and presents results on the performance of track and hit reconstruction from dedicated analyses.

  6. An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard; Moeglein, William AM; Newby, Deborah T.; Venteris, Erik R.; Wigmosta, Mark S.

    2014-06-19

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion gallons per year in the southeastern U.S., and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.

  7. Integrated leak rate test of the FFTF [Fast Flux Test Facility] containment vessel

    International Nuclear Information System (INIS)

    Grygiel, M.L.; Davis, R.H.; Polzin, D.L.; Yule, W.D.

    1987-04-01

    The third integrated leak rate test (ILRT) performed at the Fast Flux Test Facility (FFTF) demonstrated that effective leak rate measurements could be obtained at a pressure of 2 psig. In addition, innovative data reduction methods demonstrated the ability to accurately account for diurnal variations in containment pressure and temperature. Further development of the methods used in this test indicates that significant savings in the time and effort required to perform an ILRT on liquid metal reactor systems are achievable, with a consequent reduction in test costs.
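
One common ILRT data-reduction approach (a generic "mass-point" method, sketched here as an illustration rather than the FFTF-specific procedure) infers the leak rate from the regression slope of contained air mass against time, with mass computed as proportional to P/T via the ideal gas law. Because diurnal swings move pressure and temperature together, the ratio largely cancels them out. A Python sketch with synthetic data:

```python
# Sketch: generic "mass-point" ILRT data reduction (illustrative only).
# Contained air mass is proportional to P/T by the ideal gas law; a
# least-squares line through mass vs. time gives the leak rate, which
# suppresses diurnal swings that move P and T together.
import math

def leak_rate(times_h, pressures_kpa, temps_k):
    """Return the fractional mass loss per hour from a P/T regression."""
    m = [p / t for p, t in zip(pressures_kpa, temps_k)]  # mass ∝ P/T
    n = len(times_h)
    tbar = sum(times_h) / n
    mbar = sum(m) / n
    slope = (sum((t - tbar) * (mi - mbar) for t, mi in zip(times_h, m))
             / sum((t - tbar) ** 2 for t in times_h))
    return -slope / mbar  # fraction of contained mass lost per hour

# Synthetic 24 h test: a 0.05 %/h leak plus a diurnal temperature swing
# that modulates the measured pressure.
times = list(range(25))
temps = [300.0 + 3.0 * math.sin(2 * math.pi * h / 24) for h in times]
press = [115.0 * (1 - 0.0005 * h) * (tk / 300.0) for h, tk in zip(times, temps)]
print(leak_rate(times, press, temps))  # ≈ 5e-4 per hour
```

The synthetic pressure trace oscillates visibly with temperature, yet the recovered rate matches the injected 0.05 %/h leak, which is the point of regressing on P/T rather than on raw pressure.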

  8. Development of Integral Effect Test Facility P&ID and Technical Specification for SMART Fluid System

    International Nuclear Information System (INIS)

    Lee, Sang Il; Jung, Y. H.; Yang, H. J.; Song, S. Y.; Han, O. J.; Lee, B. J.; Kim, Y. A.; Lim, J. H.; Park, K. W.; Kim, N. G.

    2010-01-01

    The SMART integral test loop (ITL) is a thermal-hydraulic test facility operating at high pressure and temperature to simulate the major systems of the prototype reactor, SMART-330. The objective of this project is to carry out the basic design for constructing the SMART ITL. The major results of this project include a series of design documents, technical specifications and P&IDs. These results can be used as the foundation for the detailed design, which is essential for manufacturing and installing the SMART ITL

  9. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

    Methods of modeling complex power networks with short circuits are described. The methods are implemented in integrated computation programs for short-circuit currents and network equivalents in electrical networks with a large number of branch points (up to 1000), on a computer with a limited online memory capacity (M = 4030 for the computer).
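
At the core of such programs is the computation of fault current from the nodal admittance matrix: for a bolted three-phase fault at bus k, the Thevenin impedance is the k-th diagonal element of the inverse of Y_bus, and the fault current is the prefault voltage divided by it. A minimal Python sketch on a hypothetical 3-bus per-unit network (real programs add sparsity handling and equivalencing for the ~1000-node cases the abstract mentions):

```python
# Sketch: three-phase bolted-fault current at a bus of a small network,
# via the Thevenin impedance Z_th = (Y_bus^-1)[k][k]. Hypothetical 3-bus
# example in per-unit; not the program described in the abstract.

def solve(a, b):
    """Gauss-Jordan elimination for a complex linear system A x = b."""
    n = len(a)
    a = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

# Branch impedances (pu) of a toy 3-bus network; bus 0 is the source bus.
branches = {(0, 1): 0.1j, (1, 2): 0.2j, (0, 2): 0.25j}
n = 3
y = [[0j] * n for _ in range(n)]
for (i, j), z in branches.items():
    y[i][i] += 1 / z
    y[j][j] += 1 / z
    y[i][j] -= 1 / z
    y[j][i] -= 1 / z
y[0][0] += 1 / 0.05j  # source impedance tying bus 0 to the reference node

fault_bus = 2
e_col = [1 + 0j if i == fault_bus else 0j for i in range(n)]
z_th = solve(y, e_col)[fault_bus]   # Thevenin impedance at the fault bus
i_fault = abs(1.0 / z_th)           # fault current with 1.0 pu prefault voltage
print(round(i_fault, 3))            # → 5.366
```

Network equivalencing, as in the abstract, amounts to reducing Y_bus over the buses outside the study area so the same Thevenin calculation fits in limited memory.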

  10. Fully integrated digital GAMMA camera-computer system

    International Nuclear Information System (INIS)

    Berger, H.J.; Eisner, R.L.; Gober, A.; Plankey, M.; Fajman, W.

    1985-01-01

    Although most of the new non-nuclear imaging techniques are fully digital, there has been a reluctance in nuclear medicine to abandon traditional analog planar imaging in favor of digital acquisition and display. The authors evaluated a prototype digital camera system (GE STARCAM) in which all of the analog acquisition components are replaced by microprocessor controls and digital circuitry. To compare the relative effects of acquisition matrix size on image quality and to ascertain whether digital techniques could be used in place of analog imaging, Tc-99m bone scans were obtained on this digital system and on a comparable analog camera in 10 patients. The dedicated computer is used for camera setup including definition of the energy window, spatial energy correction, and spatial distortion correction. The display monitor, which is used for patient positioning and image analysis, is 512² non-interlaced, allowing high resolution imaging. Data acquisition and processing can be performed simultaneously. Thus, the development of a fully integrated digital camera-computer system with optimized display should allow routine utilization of non-analog studies in nuclear medicine and the ultimate establishment of fully digital nuclear imaging laboratories

  11. An integrated computer design environment for the development of micro-computer critical software

    International Nuclear Information System (INIS)

    De Agostino, E.; Massari, V.

    1986-01-01

    The paper deals with the development of micro-computer software for nuclear safety systems. More specifically, it describes experimental work in the field of software development methodologies to be used for the implementation of micro-computer based safety systems. An investigation of the technological improvements provided by state-of-the-art integrated packages for micro-based systems development has been carried out. The work aimed to assess a suitable automated tool environment for the whole software life-cycle. The main safety functions of a nuclear power reactor, such as DNBR and KW/FT, have been implemented in a host-target approach. A prototype test-bed microsystem has been implemented to run the safety functions in order to derive a concrete evaluation of the feasibility of critical software according to the new technological trends of ''software factories''. (author)

  12. Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shoman, Nathan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.
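
The "stand-alone capabilities that communicate results" design can be illustrated by two independent tools exchanging a results file rather than being merged into one code. A minimal Python sketch (the tool behavior, file format, and field names are hypothetical, not the actual SSPM/STAGE interfaces):

```python
# Sketch: loose coupling between stand-alone analysis tools via a shared
# results file, in the spirit of the Virtual Test Bed's design of keeping
# codes separate but communicating. All names and fields are hypothetical.
import json
import os
import tempfile

def safeguards_model(path):
    """Stand-in for a safeguards analysis: writes its performance metrics."""
    with open(path, "w") as f:
        json.dump({"detection_probability": 0.95, "muf_sigma_kg": 0.4}, f)

def security_model(path):
    """Stand-in for a physical-security tool: reads the safeguards metrics."""
    with open(path) as f:
        metrics = json.load(f)
    # Fold the communicated metric into its own analysis.
    return "adequate" if metrics["detection_probability"] >= 0.9 else "upgrade"

exchange = os.path.join(tempfile.mkdtemp(), "metrics.json")
safeguards_model(exchange)
print(security_model(exchange))  # → adequate
```

Keeping each tool runnable on its own, with an agreed exchange format between them, is what lets the models evolve independently while still supporting a combined safeguards-and-security-by-design analysis.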

  13. Long term integrity of spent fuel and construction materials for dry storage facilities

    Energy Technology Data Exchange (ETDEWEB)

    Saegusa, T [CRIEPI (Japan)

    2012-07-01

    In Japan, two dry storage facilities at reactor sites have been operating since 1995 and 2002, respectively. Additionally, a large-scale dry storage facility away from reactor sites, with a final storage capacity of 5,000 tU, is under safety examination for licensing near the coast and is expected to start operation in 2010. It is therefore necessary to obtain and evaluate data on the integrity of the spent fuels loaded into, and the construction materials of, casks during long term dry storage. The objectives are: - Spent fuel rod: To evaluate hydrogen migration along the axial fuel direction in irradiated claddings stored for twenty years in air; To evaluate pellet oxidation behaviour for high burn-up UO₂ fuels; - Construction materials for dry storage facilities: To evaluate the long term reliability of welded stainless steel canisters in a stress corrosion cracking (SCC) environment; To evaluate the long term integrity of concrete casks in a carbonation and salt attack environment; To evaluate the sealability of metal gaskets under long term storage and short term accidental impact forces.

  14. Integrated severe accident containment analysis with the CONTAIN computer code

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Williams, D.C.; Rexroth, P.E.; Tills, J.L.

    1985-12-01

    Analysis of physical and radiological conditions inside the containment building during a severe (core-melt) nuclear reactor accident requires quantitative evaluation of numerous highly disparate yet coupled phenomenologies. These include two-phase thermodynamics and thermal-hydraulics, aerosol physics, fission product phenomena, core-concrete interactions, the formation and combustion of flammable gases, and the performance of engineered safety features. In the past, this complexity has meant that a complete containment analysis required the application of suites of separate computer codes, each of which treated only a narrow subset of these phenomena, e.g., a thermal-hydraulics code, an aerosol code, a core-concrete interaction code, etc. In this paper, we describe the development and some recent applications of the CONTAIN code, which offers an integrated treatment of the dominant containment phenomena and the interactions among them. We describe the results of a series of containment phenomenology studies, based upon realistic accident sequence analyses in actual plants. These calculations highlight various phenomenological effects that have potentially important implications for source term and/or containment loading issues, and which are difficult or impossible to treat using a less integrated code suite
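
The value of an integrated treatment is that the phenomena feed back on each other within a single time loop, rather than being passed between separate codes. The following toy Python model (invented dynamics chosen only for illustration, in no way CONTAIN's physics) advances two coupled quantities, containment pressure and airborne aerosol mass, where each rate law depends on the other variable's current state:

```python
# Toy illustration of integrated coupling (NOT CONTAIN's physics): a steam
# source drives containment pressure, condensation depends on pressure, and
# aerosol deposition is enhanced when condensation is active. One coupled
# time loop advances both quantities together.

def simulate(dt=1.0, t_end=600.0):
    p = 100.0      # containment pressure, kPa (hypothetical initial value)
    m = 10.0       # airborne aerosol mass, kg (hypothetical initial value)
    for _ in range(int(t_end / dt)):
        steam_in = 0.05                        # kPa/s source term
        cond = 2e-4 * (p - 100.0)              # condensation grows with pressure
        dep = (1e-3 + 5e-4 * (cond > 0)) * m   # deposition enhanced by condensation
        p += (steam_in - cond) * dt            # pressure responds to net steam
        m += -dep * dt                         # aerosol depleted by deposition
    return p, m

p_end, m_end = simulate()
print(round(p_end, 1), round(m_end, 2))
```

Running the two rate laws in separate codes with one-way data hand-offs would miss exactly this kind of feedback, which is the motivation the abstract gives for an integrated code.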

  15. Safety integrity requirements for computer-based I&C systems

    International Nuclear Information System (INIS)

    Thuy, N.N.Q.; Ficheux-Vapne, F.

    1997-01-01

    In order to take into account increasingly demanding functional requirements, many instrumentation and control (I&C) systems in nuclear power plants are implemented with computers. In order to ensure the required safety integrity of such equipment, i.e., to ensure that it satisfactorily performs the required safety functions under all stated conditions and within stated periods of time, requirements applicable to the equipment and to its life cycle need to be expressed and followed. On the other hand, the experience of recent years has led EDF (Électricité de France) and its partners to consider three classes of systems and equipment, according to their importance to safety. In the EPR project (European Pressurized water Reactor), these classes are labeled E1A, E1B and E2. The objective of this paper is to present an outline of the work currently being done in the framework of the ETC-I (EPR Technical Code for I&C) regarding the safety integrity requirements applicable to each of the three classes. 4 refs., 2 figs

  16. Integrated operations plan for the MFTF-B Mirror Fusion Test Facility. Volume II. Integrated operations plan

    Energy Technology Data Exchange (ETDEWEB)

    1981-12-01

    This document defines an integrated plan for the operation of the Lawrence Livermore National Laboratory (LLNL) Mirror Fusion Test Facility (MFTF-B). The plan fulfills and further delineates LLNL policies and provides for accomplishing the functions required by the program. This plan specifies the management, operations, maintenance, and engineering support responsibilities. It covers phasing into sustained operations as well as the sustained operations themselves. Administrative and Plant Engineering support, which are now being performed satisfactorily, are not part of this plan unless there are unique needs.

  17. Integrated operations plan for the MFTF-B Mirror Fusion Test Facility. Volume II. Integrated operations plan

    International Nuclear Information System (INIS)

    1981-12-01

    This document defines an integrated plan for the operation of the Lawrence Livermore National Laboratory (LLNL) Mirror Fusion Test Facility (MFTF-B). The plan fulfills and further delineates LLNL policies and provides for accomplishing the functions required by the program. This plan specifies the management, operations, maintenance, and engineering support responsibilities. It covers phasing into sustained operations as well as the sustained operations themselves. Administrative and Plant Engineering support, which are now being performed satisfactorily, are not part of this plan unless there are unique needs

  18. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  19. A Tractable Disequilibrium Framework for Integrating Computational Thermodynamics and Geodynamics

    Science.gov (United States)

    Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.

    2017-12-01

    The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes, from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of De Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases, and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔGr) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return the chemical potentials of all components and phases (as well as thermodynamic quantities for each phase, e.g., densities, heat capacities, entropies), but is not constrained to be in thermodynamic equilibrium. Allowing meta-stable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, for closed systems, these problems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well-understood Fo-Fa binary phase loop.
Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as
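The affinity measure described in this abstract, A = -ΔGr = -Σᵢ νᵢ μᵢ, can be illustrated with a minimal Python sketch. All species names, stoichiometric coefficients, and chemical-potential values below are hypothetical placeholders, not output of the ENKI or TerraFERMA models:

```python
# Sketch: reaction affinity as a scalar measure of disequilibrium.
# A = -sum(nu_i * mu_i), with nu > 0 for products and nu < 0 for reactants.
# A > 0 drives the reaction forward; A = 0 at equilibrium.

def affinity(stoich, mu):
    """stoich: {species: nu}, mu: {species: chemical potential in J/mol}."""
    return -sum(nu * mu[sp] for sp, nu in stoich.items())

# Hypothetical forsterite melting reaction: Fo(solid) -> Fo(liquid)
mu = {"fo_solid": -2.05e6, "fo_liquid": -2.04e6}   # J/mol, illustrative only
rxn = {"fo_solid": -1.0, "fo_liquid": +1.0}

A = affinity(rxn, mu)
print(f"affinity = {A:.3e} J/mol")  # negative here: the solid is stable
```

At A = 0 the system sits exactly on the equilibrium predicted by the underlying thermodynamic model, which is why closed systems evolved under this disequilibrium framework recover the equilibrium result.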

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and events that are harder to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  1. CMT scaling analysis and distortion evaluation in passive integral test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Wang Han; Chang Huajian

    2013-01-01

    The core makeup tank (CMT) is a crucial device of the AP1000 passive core cooling system, and reasonable scaling analysis of the CMT plays a key role in the design of passive integral test facilities. The H2TS method was used to perform scaling analysis for both the circulating mode and the draining mode of the CMT. The similarity criteria for the important CMT processes were then applied in the CMT scaling design of the ACME (advanced core-cooling mechanism experiment) facility now being built in China. Furthermore, the scaling distortion results for the characteristic Π groups of the ACME CMT were calculated. Finally, the causes of the scaling distortion were analyzed and a distortion evaluation was conducted for the ACME facility. The dominant processes of the CMT circulating mode can be adequately simulated in the ACME facility, but the steam condensation process during CMT draining is not well preserved because the excessive CMT mass leads to more energy being absorbed by cold metal. However, comprehensive analysis indicates that the ACME facility with its high-pressure simulation scheme is able to properly represent the CMT's important phenomena and processes of the prototype nuclear plant. (authors)
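In H2TS-style scaling, the distortion of a characteristic Π group is commonly quantified as the fractional difference between the model and prototype values; a process is perfectly preserved when the distortion is zero. The convention below is a standard one, not taken from this paper, and the numbers are purely illustrative:

```python
# Sketch: fractional scaling distortion of a dimensionless Pi group.
# D = (Pi_prototype - Pi_model) / Pi_prototype
# D = 0 means the process is perfectly preserved in the scaled model.

def pi_distortion(pi_model, pi_prototype):
    """Fractional distortion of a characteristic Pi group."""
    return (pi_prototype - pi_model) / pi_prototype

# Illustrative values only (not ACME/AP1000 data):
d = pi_distortion(0.8, 1.0)
print(f"distortion = {d:.0%}")  # the model under-represents this process by 20%
```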

  2. Computational analysis of battery optimized reactor integral system

    International Nuclear Information System (INIS)

    Hwang, J. S.; Son, H. M.; Jeong, W. S.; Kim, T. W.; Suh, K. Y.

    2007-01-01

    The Battery Optimized Reactor Integral System (BORIS) is being developed as a multi-purpose fast spectrum reactor cooled by lead (Pb). BORIS is an integral optimized reactor with an ultra-long life core. BORIS aims to satisfy various energy demands while maintaining inherent safety with the primary coolant Pb and improving economics. BORIS is being designed to generate 23 MWth with 10 MWe for at least twenty consecutive years without refueling and to meet the Generation IV Nuclear Energy System goals of sustainability, safety, reliability, and economics. BORIS is conceptualized to be used as the main power and heat source for remote areas and barren lands, and is also considered for deployment for desalination purposes. BORIS, based on modular components to be viable for rapid construction and easy maintenance, adopts an integrated heat exchanger system operated by natural circulation of Pb without pumps to realize a small sized reactor. The BORIS primary system is designed through an optimization study. Thermal hydraulic characteristics during a reactor steady state, with heat source and sink provided by the core and heat exchanger, respectively, have been analyzed by utilizing a computational fluid dynamics code and hand calculations based on first principles. This paper analyzes transient conditions of the BORIS primary system. The Pb coolant was selected for its lower chemical activity with air or water than sodium (Na) and its good thermal characteristics. Reactor transient conditions such as core blockage, heat exchanger failure, and loss of heat sink were selected for this study. Blockage in the core or its inlet structure causes localized flow starvation in one or several fuel assemblies. Coolant loop blockages cause a more or less uniform flow reduction across the core, which may trigger a coolant temperature transient. General conservation equations were applied to model the primary system transients. Numerical approaches were adopted to discretize the governing

  3. Computer software design description for the Treated Effluent Disposal Facility (TEDF), Project L-045H, Operator Training Station (OTS)

    International Nuclear Information System (INIS)

    Carter, R.L. Jr.

    1994-01-01

    The Treated Effluent Disposal Facility (TEDF) Operator Training Station (OTS) is a computer-based training tool designed to aid plant operations and engineering staff in familiarizing themselves with the TEDF Central Control System (CCS)

  4. Teaching ergonomics to nursing facility managers using computer-based instruction.

    Science.gov (United States)

    Harrington, Susan S; Walker, Bonnie L

    2006-01-01

    This study offers evidence that computer-based training is an effective tool for teaching nursing facility managers about ergonomics and increasing their awareness of potential problems. Study participants (N = 45) were randomly assigned into a treatment or control group. The treatment group completed the ergonomics training and a pre- and posttest. The control group completed the pre- and posttests without training. Treatment group participants improved significantly from 67% on the pretest to 91% on the posttest, a gain of 24%. Differences between mean scores for the control group were not significant for the total score or for any of the subtests.

  5. CPP-603 Underwater Fuel Storage Facility Site Integrated Stabilization Management Plan (SISMP), Volume I

    International Nuclear Information System (INIS)

    Denney, R.D.

    1995-10-01

    The CPP-603 Underwater Fuel Storage Facility (UFSF) Site Integrated Stabilization Management Plan (SISMP) has been constructed to describe the activities required for the relocation of spent nuclear fuel (SNF) from the CPP-603 facility. These activities are the only Idaho National Engineering Laboratory (INEL) actions identified in the Implementation Plan developed to meet the requirements of the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 94-1 to the Secretary of Energy regarding an improved schedule for remediation in the Defense Nuclear Facilities Complex. As described in the DNFSB Recommendation 94-1 Implementation Plan, issued February 28, 1995, an INEL Spent Nuclear Fuel Management Plan is currently under development to direct the placement of SNF currently in existing INEL facilities into interim storage, and to address the coordination of intrasite SNF movements with new receipts and intersite transfers that were identified in the DOE SNF Programmatic and INEL Environmental Restoration and Waste Management Environmental Impact Statement Record of Decision. This SISMP will be a subset of the INEL Spent Nuclear Fuel Management Plan, and the activities described are being coordinated with other INEL SNF management activities. The CPP-603 relocation activities have been assigned a high priority so that established milestones will be met, but there will be some cases where other activities will take precedence in the utilization of available resources. The Draft INEL Site Integrated Stabilization Management Plan (SISMP), INEL-94/0279, Draft Rev. 2, dated March 10, 1995, is being superseded by the INEL Spent Nuclear Fuel Management Plan and this CPP-603 specific SISMP

  6. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire
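A lumped-parameter formulation of the kind FIRAC uses reduces the ventilation network to nodes (rooms, plenums) connected by branches (ducts, filters) with flow-resistance laws. The toy example below is only an illustration of the idea under a linear-resistance assumption; FIRAC's actual branch models are nonlinear and handle compressible flow and fire source terms:

```python
# Toy lumped-parameter ventilation network: supply plenum -> room -> exhaust.
# Linear branch law Q = dP / R is a simplifying assumption for illustration.

def solve_two_node_network(p_supply, p_exhaust, r1, r2):
    """Series network. Flow continuity at the room node,
    (Ps - Pr)/R1 = (Pr - Pe)/R2, fixes the room pressure and through-flow."""
    p_room = (p_supply * r2 + p_exhaust * r1) / (r1 + r2)
    q = (p_supply - p_room) / r1
    return p_room, q

# Illustrative pressures (Pa, gauge) and resistances (arbitrary units):
p_room, q = solve_two_node_network(100.0, 0.0, 2.0, 2.0)
print(p_room, q)  # 50.0 25.0
```

A fire compartment model would perturb this balance by injecting energy and material at the room node, which is how a fire "in the far field" propagates through the network.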

  7. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system

  8. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Norman, A. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab

    2017-03-15

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper
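The burst-provisioning logic this abstract describes (rent commercial spot capacity only for demand the local cluster cannot absorb, and only while the spot price is acceptable) can be caricatured in a few lines. All function names, thresholds, and prices here are hypothetical, not part of the actual HEPCloud Decision Engine:

```python
# Sketch of an elastic burst decision: rent spot cores only for the
# shortfall beyond local capacity, only below a price cap.

def cores_to_rent(demand_cores, local_cores, spot_price, price_cap, max_burst):
    """Return the number of spot cores to provision (hypothetical policy)."""
    if spot_price > price_cap:
        return 0  # market too expensive; queue work locally instead
    shortfall = max(0, demand_cores - local_cores)
    return min(shortfall, max_burst)

# CMS-like scenario: 208k cores demanded, 150k local, cheap spot market
print(cores_to_rent(208_000, 150_000, 0.03, 0.05, 60_000))  # 58000
```

The real Decision Engine also weighs job interruption risk, which is why the Spot Instance Market (where instances can be reclaimed) is paired with cost-aware bidding.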

  9. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    Science.gov (United States)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper

  10. A Study of Critical Flowrate in the Integral Effect Test Facilities

    International Nuclear Information System (INIS)

    Kim, Yeongsik; Ryu, Sunguk; Cho, Seok; Yi, Sungjae; Park, Hyunsik

    2014-01-01

    In earlier studies, most of the information available in the literature was either for a saturated two-phase flow or a sub-cooled water flow at medium pressure conditions, e.g., up to about 7.0 MPa. Choking is regarded as the condition of maximum possible discharge through a given orifice and/or nozzle exit area. A critical flow rate is achieved at choking under the given thermo-hydraulic conditions. Critical flow phenomena have been studied extensively in both single-phase and two-phase systems because of their importance in the LOCA analyses of light water reactors and in the design of other engineering areas. Park suggested a modified correlation for predicting the critical flow of sub-cooled water through a nozzle. Recently, Park et al. performed an experimental study on a two-phase critical flow with a noncondensable gas at high pressure conditions. Various experiments on critical flow using sub-cooled water were performed for the modeling of break simulators in thermohydraulic integral effect test facilities for light water reactors, e.g., the advanced power reactor 1400 MWe (APR1400) and the system-integrated modular advanced reactor (SMART). For the design of break simulators for SBLOCA scenarios, the aspect ratio (L/D) is considered a key parameter in determining the shape of a break simulator. In this paper, an investigation of critical flow phenomena was performed, especially on break simulators for LOCA scenarios in the integral effect test facilities of KAERI, such as ATLAS and FESTA. In this study, various studies on critical flow models for sub-cooled and/or saturated water were reviewed. For a comparison among the models for the selected test data, discussions of the effect of the diameters, the predictions of the critical flow models, and the break simulators for SBLOCA in the integral effect test facilities were presented
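For sub-cooled water well away from flashing, a first-order estimate of the discharge mass flux through a break nozzle is the incompressible Bernoulli form G = Cd·sqrt(2ρΔP). This is a rough illustration only, not the modified correlation of Park cited above; real critical-flow models (e.g., Henry-Fauske) must also account for flashing and thermal non-equilibrium, and the numbers below are illustrative:

```python
import math

def bernoulli_mass_flux(cd, rho, dp):
    """Crude incompressible estimate G = Cd * sqrt(2 * rho * dP) [kg/m^2/s].
    cd: discharge coefficient, rho: liquid density [kg/m^3], dp: pressure drop [Pa].
    Valid only for sub-cooled liquid far from the saturation line."""
    return cd * math.sqrt(2.0 * rho * dp)

# Illustrative inputs: Cd = 0.61 (sharp orifice), rho = 900 kg/m^3, dP = 7.0 MPa
G = bernoulli_mass_flux(0.61, 900.0, 7.0e6)
print(f"G = {G:.0f} kg/m^2/s")  # roughly 6.8e4 for these inputs
```

The aspect ratio L/D mentioned above matters precisely because longer nozzles give the liquid residence time to flash, pushing the true critical flux below this incompressible bound.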

  11. A Study of Critical Flowrate in the Integral Effect Test Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeongsik; Ryu, Sunguk; Cho, Seok; Yi, Sungjae; Park, Hyunsik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    In earlier studies, most of the information available in the literature was either for a saturated two-phase flow or a sub-cooled water flow at medium pressure conditions, e.g., up to about 7.0 MPa. Choking is regarded as the condition of maximum possible discharge through a given orifice and/or nozzle exit area. A critical flow rate is achieved at choking under the given thermo-hydraulic conditions. Critical flow phenomena have been studied extensively in both single-phase and two-phase systems because of their importance in the LOCA analyses of light water reactors and in the design of other engineering areas. Park suggested a modified correlation for predicting the critical flow of sub-cooled water through a nozzle. Recently, Park et al. performed an experimental study on a two-phase critical flow with a noncondensable gas at high pressure conditions. Various experiments on critical flow using sub-cooled water were performed for the modeling of break simulators in thermohydraulic integral effect test facilities for light water reactors, e.g., the advanced power reactor 1400 MWe (APR1400) and the system-integrated modular advanced reactor (SMART). For the design of break simulators for SBLOCA scenarios, the aspect ratio (L/D) is considered a key parameter in determining the shape of a break simulator. In this paper, an investigation of critical flow phenomena was performed, especially on break simulators for LOCA scenarios in the integral effect test facilities of KAERI, such as ATLAS and FESTA. In this study, various studies on critical flow models for sub-cooled and/or saturated water were reviewed. For a comparison among the models for the selected test data, discussions of the effect of the diameters, the predictions of the critical flow models, and the break simulators for SBLOCA in the integral effect test facilities were presented.

  12. Integrated monitoring and reviewing systems for the Rokkasho Spent Fuel Receipt and Storage Facility

    International Nuclear Information System (INIS)

    Yokota, Yasuhiro; Ishikawa, Masayuki; Matsuda, Yuji

    1998-01-01

    The Rokkasho Spent Fuel Receipt and Storage (RSFS) Facility at the Rokkasho Reprocessing Plant (RRP) in Japan is expected to begin operations in 1998. Effective safeguarding by International Atomic Energy Agency (IAEA) and Japan Atomic Energy Bureau (JAEB) inspectors requires monitoring the time of transfer, direction of movement, and number of spent fuel assemblies transferred. At peak throughput, up to 1,000 spent fuel assemblies will be accepted by the facility in a 90-day period. In order for the safeguards inspector to efficiently review the resulting large amounts of inspection information, an unattended monitoring system was developed that integrates containment and surveillance (C/S) video with radiation monitors. This allows for an integrated review of the facility's radiation data, C/S video, and operator declaration data. This paper presents an outline of the integrated unattended monitoring hardware and associated data reviewing software. The hardware consists of a multicamera optical surveillance (MOS) system, radiation monitoring gamma-ray and neutron detector (GRAND) electronics, and an intelligent local operating network (ILON). The ILON was used for time synchronization and MOS video triggers. The new software consists of a suite of tools, each one specific to a single data type: radiation data, surveillance video, and operator declarations. Each tool can be used in a stand-alone mode as a separate application or configured to communicate and match time-synchronized data with any of the other tools. A data summary and comparison application (Integrated Review System [IRS]) coordinates the use of all of the data-specific review tools under a single-user interface. It therefore automates and simplifies the importation of data and the data-specific analyses

  13. Modular multiple sensors information management for computer-integrated surgery.

    Science.gov (United States)

    Vaccarella, Alberto; Enquobahrie, Andinet; Ferrigno, Giancarlo; Momi, Elena De

    2012-09-01

    In the past 20 years, technological advancements have modified the concept of modern operating rooms (ORs) with the introduction of computer-integrated surgery (CIS) systems, which promise to enhance the outcomes, safety and standardization of surgical procedures. With CIS, different types of sensor (mainly position-sensing devices, force sensors and intra-operative imaging devices) are widely used. Recently, the need for a combined use of different sensors raised issues related to synchronization and spatial consistency of data from different sources of information. In this study, we propose a centralized, multi-sensor management software architecture for a distributed CIS system, which addresses sensor information consistency in both space and time. The software was developed as a data server module in a client-server architecture, using two open-source software libraries: the Image-Guided Surgery Toolkit (IGSTK) and OpenCV. The ROBOCAST project (FP7 ICT 215190), which aims at integrating robotic and navigation devices and technologies in order to improve the outcome of the surgical intervention, was used as the benchmark. An experimental protocol was designed in order to prove the feasibility of a centralized module for data acquisition and to test the application latency when dealing with optical and electromagnetic tracking systems and ultrasound (US) imaging devices. Our results show that a centralized approach is suitable for minimizing synchronization errors; latency in the client-server communication was estimated to be 2 ms (median value) for tracking systems and 40 ms (median value) for US images. The proposed centralized approach proved to be adequate for neurosurgery requirements. Latency introduced by the proposed architecture does not affect tracking system performance in terms of frame rate, and limits the US image frame rate to 25 fps, which is acceptable for providing visual feedback to the surgeon in the OR. Copyright © 2012 John Wiley & Sons, Ltd.
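The time-consistency task this abstract addresses (pairing each imaging frame with the nearest tracking sample) can be sketched as a nearest-timestamp match over sorted tracker data. The data layout and function names are illustrative, not the ROBOCAST implementation:

```python
import bisect

def match_nearest(image_ts, tracker_ts):
    """For each image timestamp, return the closest tracker timestamp.
    tracker_ts must be sorted; binary search keeps this O(n log m)."""
    matched = []
    for t in image_ts:
        i = bisect.bisect_left(tracker_ts, t)
        # The nearest sample is either just before or just after position i.
        candidates = tracker_ts[max(0, i - 1):i + 1]
        matched.append(min(candidates, key=lambda x: abs(x - t)))
    return matched

# Tracker at ~100 Hz, US images at 25 fps (timestamps in ms, illustrative):
trk = [0, 10, 20, 30, 40, 50, 60, 70, 80]
imgs = [3, 42, 78]
print(match_nearest(imgs, trk))  # [0, 40, 80]
```

In a real centralized server, the residual |t_image - t_tracker| would be bounded by the latencies the paper reports (about 2 ms for tracking, 40 ms for US images).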

  14. Report of the Workshop on Petascale Systems Integration for Large Scale Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate, and stabilize a large scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large scale system integration.

  15. Enhanced computational infrastructure for data analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; McHarg, B.B.; Meyer, W.H.; Parker, C.T.

    2000-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from nine national laboratories, 19 foreign laboratories, 16 universities, and five industrial partnerships. As a result of this work, DIII-D data is available on a 24x7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a web based data and code documentation system has been created to aid the novice and expert user alike

  16. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.; McCharg, B.B.

    1999-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Building on these hardware improvements, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load-balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator, since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24x7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a Web-based data and code documentation system has been created to aid the novice and expert user alike.

  17. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
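    The throughput-versus-clients effect noted above can be probed with a very simple harness. The sketch below is plain Python, not the paper's actual benchmark suite (which used IOZone and physics-analysis applications); it measures aggregate read throughput for a varying number of concurrent readers of one scratch file.

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    """Read the whole file once, returning (bytes_read, seconds_elapsed)."""
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        n = len(f.read())
    return n, time.perf_counter() - t0

def aggregate_throughput(path, n_clients):
    """Total MB/s achieved by n_clients concurrent readers of the same file."""
    with ThreadPoolExecutor(max_workers=n_clients) as ex:
        results = list(ex.map(read_file, [path] * n_clients))
    total_bytes = sum(n for n, _ in results)
    wall = max(dt for _, dt in results)  # slowest reader bounds the run
    return total_bytes / wall / 1e6

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(4 * 1024 * 1024))  # 4 MiB scratch file
        path = f.name
    try:
        for n in (1, 2, 4, 8):
            print(f"{n:2d} clients: {aggregate_throughput(path, n):8.1f} MB/s")
    finally:
        os.remove(path)
```

    On a real test bed the scratch file would live on the file system under study (Lustre, Hadoop, OrangeFS, or BlueArc) and the clients would run in separate virtual machines; the shape of the throughput curve as the client count grows is what distinguished the implementations in the study.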

  18. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code
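    The lumped-parameter idea can be illustrated with a toy far-field network: rooms and ducts become nodes and branches, and steady flows follow from a mass balance at each node. The sketch below is not FIRAC itself; for simplicity it assumes linear branch conductances, whereas real ventilation codes use quadratic duct-resistance laws, and all numerical values are invented.

```python
def solve_linear(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Chain: fire room (120 Pa) -> corridor -> filter plenum -> stack (0 Pa).
# Branch conductances g = 1/R in m^3/s per Pa (illustrative values).
g = [0.5, 0.4, 0.8]
P_fire, P_stack = 120.0, 0.0
# Mass balance at each interior node: sum over branches of g*(P_neighbor - P_node) = 0
A = [[-(g[0] + g[1]), g[1]],
     [g[1], -(g[1] + g[2])]]
b = [-g[0] * P_fire, -g[2] * P_stack]
P_corridor, P_plenum = solve_linear(A, b)
Q = g[0] * (P_fire - P_corridor)  # steady volumetric flow through the chain
```

    Because the fire is represented only by its boundary pressure, the far-field rooms never need a combustion model, which is exactly what makes the lumped-parameter formulation cheap.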

  19. Application of personal computer to development of entrance management system for radiating facilities

    International Nuclear Information System (INIS)

    Suzuki, Shogo; Hirai, Shouji

    1989-01-01

    The report describes a system, developed on a personal computer, for managing the entrance and exit of personnel at radiation facilities. Major features of the system are outlined first. The computer is connected to the gate and to two magnetic card readers installed at the gate. The gate, located at the entrance to a controlled room, opens only for those who carry a valid card. The entrance-exit management program developed is described next. Three files, stored on floppy disks, are used: an ID master file (a random file holding the magnetic card number, name, qualification, etc., of each card carrier), an entrance-exit management file (a random file of entrance/exit times, etc., updated daily), and an entrance-exit record file (a sequential file of card number, name, date, etc.). A display shows various lists, including a list of workers currently in the room and a list of workers who left the room earlier in the day. The system is useful for entrance management at a relatively small facility; though inexpensive, it enables a small number of operators to perform effective personnel management. (N.K.)
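    The record layout described above maps naturally onto a few in-memory structures. The sketch below is a hypothetical reconstruction of the gate logic and the "currently in the room" list; the field names and classes are illustrative, not taken from the original program.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Worker:
    """One entry of the ID master file (illustrative fields)."""
    card_no: str
    name: str
    qualified: bool

@dataclass
class Gate:
    master: dict                              # card_no -> Worker (ID master file)
    inside: set = field(default_factory=set)  # workers currently in the room
    log: list = field(default_factory=list)   # entrance-exit record file

    def swipe(self, card_no, when=None):
        """Open the gate only for a valid, qualified card; log every passage."""
        w = self.master.get(card_no)
        if w is None or not w.qualified:
            return False                      # gate stays closed
        when = when or datetime.now()
        if card_no in self.inside:
            self.inside.remove(card_no)
            self.log.append((card_no, w.name, when, "exit"))
        else:
            self.inside.add(card_no)
            self.log.append((card_no, w.name, when, "enter"))
        return True

    def currently_inside(self):
        """Names of workers now in the controlled room, ordered by card number."""
        return [self.master[c].name for c in sorted(self.inside)]
```

    Persisting `master` and `log` to disk (floppy disks, in the original) and replaying the log would reconstruct the daily management file.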

  20. Conjunctive operation of river facilities for integrated water resources management in Korea

    Directory of Open Access Journals (Sweden)

    H. Kim

    2016-10-01

    With the increasing trend of water-related disasters such as floods and droughts resulting from climate change, the integrated management of water resources is gaining importance. Korea has worked towards preventing disasters caused by floods and droughts, managing water resources efficiently through the coordinated operation of river facilities such as dams, weirs, and agricultural reservoirs. This has been pursued to enable everyone to enjoy the benefits inherent in the utilization of water resources, by preserving functional rivers, improving their utility, and reducing the degradation of water quality caused by floods and droughts. At the same time, coordinated activities are being conducted at multi-purpose dams, hydro-power dams, weirs, agricultural reservoirs, and water-use facilities (those with a daily water intake of over 100 000 m3 day−1) with the purpose of monitoring the management of such facilities. This is being done to ensure the protection of the public interest without obstructing sound water management practices. During the flood season, each facility reserves flood-control capacity by operating below a restricted water level determined in advance by the Regulation Council. Dam flood-discharge decisions are approved through the flood forecasting and management of the Flood Control Office in order to minimize flood damage both upstream and downstream. During the dry season, the operational plan predetermined by the council is implemented to secure an adequate quantity and distribution of water.
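    A restricted operating level translates directly into reserved flood storage. A minimal sketch of that arithmetic, using an invented piecewise-linear level-storage curve rather than data for any real Korean reservoir:

```python
def interp(curve, level):
    """Piecewise-linear interpolation on a sorted list of (level_m, volume_Mm3)."""
    for (l0, v0), (l1, v1) in zip(curve, curve[1:]):
        if l0 <= level <= l1:
            return v0 + (v1 - v0) * (level - l0) / (l1 - l0)
    raise ValueError("level outside curve")

def flood_control_capacity(curve, normal_level, restricted_level):
    """Storage (million m^3) freed by drawing the reservoir down to the
    restricted operating level before the flood season."""
    return interp(curve, normal_level) - interp(curve, restricted_level)

# Invented level-storage curve, for illustration only
curve = [(110.0, 200.0), (115.0, 350.0), (120.0, 560.0)]
reserve = flood_control_capacity(curve, normal_level=118.0, restricted_level=113.0)
```

    Summing this quantity over the basin's dams, weirs, and agricultural reservoirs is what gives the coordinated operation its total buffer against an incoming flood.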

  1. Current state of the construction of an integrated test facility for hydrogen risk

    Energy Technology Data Exchange (ETDEWEB)

    Na, Young Su; Hong, Seong-Ho; Hong, Seong-Wan [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Experimental research on hydrogen as a combustible gas is important for assessing the integrity of a containment building under a severe accident. The Korea Atomic Energy Research Institute (KAERI) is preparing a large-scale test facility, called SPARC (SPray-Aerosol-Recombiner-Combustion test facility), to study hydrogen behavior such as distribution, combustion, and mitigation, and thereby to assess the hydrogen risk. This paper introduces the experimental research activity on hydrogen risk, which was presented at the International Congress on Advances in Nuclear Power Plants (ICAPP) this year. In the SPARC, hydrogen behavior in the containment atmosphere, such as mixing with steam and air, distribution, and combustion, will be observed. The SPARC consists of a pressure vessel 9.5 m in height and 3.4 m in diameter, together with an operating system to control the thermal-hydraulic conditions up to 1.5 MPa at 453 K in the vessel. The temperature, pressure, and gas concentration at various locations will be measured to estimate the atmospheric behavior in the vessel. To house the SPARC, an experimental building, called LIFE (Laboratory for Innovative mitigation of threats from Fission products and Explosion), was constructed at the KAERI site. LIFE has an area of 480 m2 and a height of 18.6 m, and it was designed considering experimental safety and the specifications of a large-sized test facility.

  2. Integration of smart wearable mobile devices and cloud computing in South African healthcare

    CSIR Research Space (South Africa)

    Mvelase, PS

    2015-11-01

    Integration of Smart Wearable Mobile Devices and Cloud Computing in South African Healthcare Promise MVELASE, Zama DLAMINI, Angeline DLUDLA, Happy SITHOLE Abstract: The acceptance of cloud computing is increasing at a fast pace in distributed...

  3. The Integrated Computational Environment for Airbreathing Hypersonic Flight Vehicle Modeling and Design Evaluation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — An integrated computational environment for multidisciplinary, physics-based simulation and analyses of airbreathing hypersonic flight vehicles will be developed....

  4. The Virtual Brain Integrates Computational Modeling and Multimodal Neuroimaging

    Science.gov (United States)

    Schirner, Michael; McIntosh, Anthony R.; Jirsa, Viktor K.

    2013-01-01

    Brain function is thought to emerge from the interactions among neuronal populations. Apart from traditional efforts to reproduce brain dynamics from the micro- to macroscopic scales, complementary approaches develop phenomenological models of lower complexity. Such macroscopic models typically generate only a few selected—ideally functionally relevant—aspects of the brain dynamics. Importantly, they often allow an understanding of the underlying mechanisms beyond computational reproduction. Adding detail to these models will widen their ability to reproduce a broader range of dynamic features of the brain. For instance, such models allow for the exploration of consequences of focal and distributed pathological changes in the system, enabling us to identify and develop approaches to counteract those unfavorable processes. Toward this end, The Virtual Brain (TVB) (www.thevirtualbrain.org), a neuroinformatics platform with a brain simulator that incorporates a range of neuronal models and dynamics at its core, has been developed. This integrated framework allows the model-based simulation, analysis, and inference of neurophysiological mechanisms over several brain scales that underlie the generation of macroscopic neuroimaging signals. In this article, we describe how TVB works, and we present the first proof of concept. PMID:23442172
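    The flavor of such low-complexity macroscopic models can be conveyed by the simplest possible case: two coupled phase oscillators standing in for interacting neuronal populations. The sketch below is not part of TVB's actual model repertoire; it only illustrates how a phenomenological model yields an analyzable collective behavior, here phase locking.

```python
import math

def simulate(omega, K, steps=20000, dt=0.001):
    """Euler-integrate two coupled phase oscillators (Kuramoto form):
    dtheta_i/dt = omega_i + K * sin(theta_j - theta_i)."""
    th = [0.0, 1.0]                       # initial phases (rad)
    for _ in range(steps):
        d0 = omega[0] + K * math.sin(th[1] - th[0])
        d1 = omega[1] + K * math.sin(th[0] - th[1])
        th[0] += dt * d0
        th[1] += dt * d1
    return th

# Natural frequencies differ by 1 rad/s; coupling K = 2 exceeds the
# locking threshold |domega| / 2, so the pair settles into a fixed lag.
th = simulate(omega=(10.0, 11.0), K=2.0)
```

    With |Δω| ≤ 2K the pair locks at a constant phase lag satisfying sin(Δ) = Δω/(2K); above that threshold the phases drift apart — a qualitative transition of exactly the kind such reduced models make explorable before adding biophysical detail.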

  5. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  6. An Integrated Review of Emoticons in Computer-Mediated Communication.

    Science.gov (United States)

    Aldunate, Nerea; González-Ibáñez, Roberto

    2016-01-01

    Facial expressions constitute a rich source of non-verbal cues in face-to-face communication. They provide interlocutors with resources to express and interpret verbal messages, which may affect their cognitive and emotional processing. In contrast, computer-mediated communication (CMC), particularly text-based communication, is limited to the use of symbols to convey a message, where facial expressions cannot be transmitted naturally. In this scenario, people use emoticons as paralinguistic cues to convey emotional meaning. Research has shown that emoticons contribute to a greater social presence as a result of the enrichment of text-based communication channels. Additionally, emoticons constitute a valuable resource for language comprehension by providing expressivity to text messages. The latter findings have been supported by studies in neuroscience showing that particular brain regions involved in emotional processing are also activated when people are exposed to emoticons. To reach an integrated understanding of the influence of emoticons in human communication on both socio-cognitive and neural levels, we review the literature on emoticons in three different areas. First, we present relevant literature on emoticons in CMC. Second, we study the influence of emoticons in language comprehension. Finally, we show the incipient research in neuroscience on this topic. This mini review reveals that, while there are plenty of studies on the influence of emoticons in communication from a social psychology perspective, little is known about the neurocognitive basis of the effects of emoticons on communication dynamics.

  7. Secondary Waste Cementitious Waste Form Data Package for the Integrated Disposal Facility Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cantrell, Kirk J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Westsik, Joseph H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Serne, R Jeffrey [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Um, Wooyong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cozzi, Alex D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-05-16

    A review of the most up-to-date and relevant data currently available was conducted to develop a set of recommended values for use in the Integrated Disposal Facility (IDF) performance assessment (PA) to model contaminant release from a cementitious waste form for aqueous wastes treated at the Hanford Effluent Treatment Facility (ETF). This data package relies primarily upon recent data collected on Cast Stone formulations fabricated with simulants of low-activity waste (LAW) and liquid secondary wastes expected to be produced at Hanford. These data were supplemented, when necessary, with data developed for saltstone (a similar grout waste form used at the Savannah River Site). Work is currently underway to collect data on cementitious waste forms that are similar to Cast Stone and saltstone but are tailored to the characteristics of ETF-treated liquid secondary wastes. Recommended values for key parameters to conduct PA modeling of contaminant release from ETF-treated liquid waste are provided.

  8. Structural Integrity Program for the Calcined Solids Storage Facilities at the Idaho Nuclear Technology and Engineering Center

    International Nuclear Information System (INIS)

    Bryant, J.W.; Nenni, J.A.

    2003-01-01

    This report documents the activities of the structural integrity program at the Idaho Nuclear Technology and Engineering Center relevant to the high-level waste Calcined Solids Storage Facilities and associated equipment, as required by DOE M 435.1-1, ''Radioactive Waste Management Manual.'' Based on the evaluation documented in this report, the Calcined Solids Storage Facilities are not leaking and are structurally sound for continued service. Recommendations are provided for continued monitoring of the Calcined Solids Storage Facilities

  9. Double crystal monochromator controlled by integrated computing on BL07A in New SUBARU, Japan

    Energy Technology Data Exchange (ETDEWEB)

    Okui, Masato, E-mail: okui@kohzu.co.jp [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); Yato, Naoki; Watanabe, Akinobu; Lin, Baiming; Murayama, Norio [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Fukushima, Sei, E-mail: FUKUSHIMA.Sei@nims.go.jp [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); National Institute for Material Sciences (Japan); Kanda, Kazuhiro [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan)

    2016-07-27

    The BL07A beamline at New SUBARU, University of Hyogo, has been used for many studies of new materials. A new double-crystal monochromator controlled by integrated computing was designed and installed in the beamline in 2014. In this report we discuss the unique features of this new monochromator, MKZ-7NS. The monochromator was not designed exclusively for use in BL07A; on the contrary, it was designed to be installed at low cost in various beamlines to facilitate the industrial application of medium-scale synchrotron radiation facilities. Thus, the design utilized common packages that can satisfy the wide variety of specifications required at different synchrotron radiation facilities. The monochromator can be easily optimized for any beamline because a few control parameters can be suitably customized. The beam offset can be fixed precisely even if one of the two slave axes is omitted, a design that reduces the convolution of mechanical errors. Moreover, the monochromator’s control mechanism is very compact, making it possible to reduce the size of the vacuum chamber.

  10. Structural Integrity Program for the Calcined Solids Storage Facilities at the Idaho Nuclear Technology and Engineering Center

    International Nuclear Information System (INIS)

    Jeffrey Bryant

    2008-01-01

    This report documents the activities of the structural integrity program at the Idaho Nuclear Technology and Engineering Center relevant to the high-level waste Calcined Solids Storage Facilities and associated equipment, as required by DOE M 435.1-1, 'Radioactive Waste Management Manual'. Based on the evaluation documented in this report, the Calcined Solids Storage Facilities are not leaking and are structurally sound for continued service. Recommendations are provided for continued monitoring of the Calcined Solids Storage Facilities

  11. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    International Nuclear Information System (INIS)

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data

  12. Integrated disposal Facility Sagebrush Habitat Mitigation Project: FY2007 Compensation Area Monitoring Report

    Energy Technology Data Exchange (ETDEWEB)

    Durham, Robin E.; Sackschewsky, Michael R.

    2007-09-01

    This report summarizes the first year survival of sagebrush seedlings planted as compensatory mitigation for the Integrated Disposal Facility Project. Approximately 42,600 bare root seedlings and 26,000 pluglings were planted at a mitigation site along Army Loop Road in February 2007. Initial baseline monitoring occurred in March 2007, and first summer survival was assessed in September 2007. Overall survival was 19%, with bare root survival being marginally better than pluglings (21% versus 14%). Likely major factors contributing to low survival were late season planting and insufficient soil moisture during seedling establishment.

  13. HPCAT: an integrated high-pressure synchrotron facility at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shen, Guoyin; Chow, Paul; Xiao, Yuming; Sinogeikin, Stanislav; Meng, Yue; Yang, Wenge; Liermann, Hans-Peter; Shebanova, Olga; Rod, Eric; Bommannavar, Arunkumar; Mao, Ho-Kwang

    2008-01-01

    The high pressure collaborative access team (HPCAT) was established to advance cutting edge, multidisciplinary, high-pressure (HP) science and technology using synchrotron radiation at sector 16 of the Advanced Photon Source of Argonne National Laboratory. The integrated HPCAT facility has established four operating beamlines in nine hutches. Two beamlines are split in energy space from the insertion device (16ID) line, whereas the other two are spatially divided into two fans from the bending magnet (16BM) line. An array of novel X-ray diffraction and spectroscopic techniques has been integrated with HP and extreme temperature instrumentation at HPCAT. With a multidisciplinary approach and multi-institution collaborations, the HP program at the HPCAT has been enabling myriad scientific breakthroughs in HP physics, chemistry, materials, and Earth and planetary sciences.

  14. IOTA (Integrable Optics Test Accelerator): facility and experimental beam physics program

    Science.gov (United States)

    Antipov, S.; Broemmelsiek, D.; Bruhwiler, D.; Edstrom, D.; Harms, E.; Lebedev, V.; Leibfritz, J.; Nagaitsev, S.; Park, C. S.; Piekarz, H.; Piot, P.; Prebys, E.; Romanov, A.; Ruan, J.; Sen, T.; Stancari, G.; Thangaraj, C.; Thurman-Keup, R.; Valishev, A.; Shiltsev, V.

    2017-03-01

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. The physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  15. IOTA (Integrable Optics Test Accelerator): Facility and experimental beam physics program

    International Nuclear Information System (INIS)

    Antipov, Sergei; Broemmelsiek, Daniel; Bruhwiler, David; Edstrom, Dean; Harms, Elvin

    2017-01-01

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. Finally, the physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  16. A study on development of Pyro process integrated inactive demonstration facility

    International Nuclear Information System (INIS)

    Cho, I.; Lee, E.; Choung, W.; You, G.; Kim, H.

    2010-10-01

    Since 2007, PRIDE (the Pyro process integrated inactive demonstration facility) has been developed to demonstrate integrated engineering-scale pyro processing using natural uranium with surrogate materials. In this paper, a safety evaluation of a hypothetical accident case is carried out to confirm that the release of radioactivity to the environment would be negligible, and the performance of the indoor argon flow in the argon cell is investigated by means of CFD analysis. Even in the worst accident case, the burning of all the uranium metal in the argon cell, the resulting dose rates are negligible compared with 0.25 Sv of effective dose to the whole body or 3 Sv of equivalent dose to the thyroid. Preliminary CFD analyses show the temperature and velocity distributions in the argon cell and provide the information needed to adjust the argon exchange rate and to relocate the argon supply or exhaust ducts. CFD will allow design changes and improvements in ventilation systems at lower cost. (Author)

  17. Integrated computer control system CORBA-based simulator FY98 LDRD project final summary report

    International Nuclear Information System (INIS)

    Bryant, R M; Holloway, F W; Van Arsdall, P J.

    1999-01-01

    The CORBA-based Simulator was a Laboratory Directed Research and Development (LDRD) project that applied simulation techniques to explore critical questions about distributed control architecture. The simulator project used a three-prong approach comprised of a study of object-oriented distribution tools, computer network modeling, and simulation of key control system scenarios. This summary report highlights the findings of the team and provides the architectural context of the study. For the last several years LLNL has been developing the Integrated Computer Control System (ICCS), which is an abstract object-oriented software framework for constructing distributed systems. The framework is capable of implementing large event-driven control systems for mission-critical facilities such as the National Ignition Facility (NIF). Tools developed in this project were applied to the NIF example architecture in order to gain experience with a complex system and derive immediate benefits from this LDRD. The ICCS integrates data acquisition and control hardware with a supervisory system, and reduces the amount of new coding and testing necessary by providing prebuilt components that can be reused and extended to accommodate specific additional requirements. The framework integrates control point hardware with a supervisory system by providing the services needed for distributed control such as database persistence, system start-up and configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. The design is interoperable among computers of different kinds and provides plug-in software connections by leveraging a common object request brokering architecture (CORBA) to transparently distribute software objects across the network of computers. Because object broker distribution applied to control systems is relatively new and its inherent performance is roughly threefold less than traditional point

  18. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    Science.gov (United States)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches, their wealth of resources allow us to take on geocomputation tasks which exceed the limitations of smaller systems. To harness these capabilities requires a Geographic Information System (GIS), able to utilize the available cluster configuration/architecture and a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. The interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer to facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v 6.4, 6.5 and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems, requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times
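    Dispatching scripted geocomputation tasks to LSF queues amounts to composing `bsub` invocations around a non-interactive GRASS run. The helper below is a hedged sketch: the queue name, script paths, binary name, and output-file convention are illustrative, not GFZ's actual configuration, and it relies on the `GRASS_BATCH_JOB` environment variable that GRASS 6.x used for batch execution.

```python
import shlex

def bsub_command(queue, job_name, batch_script, mapset_path, grass_cmd="grass64"):
    """Compose (but do not run) an LSF submission for one non-interactive
    GRASS job. GRASS_BATCH_JOB makes GRASS execute the script and exit."""
    inner = (f"GRASS_BATCH_JOB={shlex.quote(batch_script)} "
             f"{grass_cmd} {shlex.quote(mapset_path)}")
    return ["bsub", "-q", queue, "-J", job_name,
            "-o", f"{job_name}.%J.out", inner]

# Fan 500 tsunami-scenario tiles out over a (hypothetical) queue
jobs = [bsub_command("normal", f"tsunami_{i:03d}",
                     f"/work/scripts/scenario_{i:03d}.sh",
                     "/work/grassdata/med/PERMANENT")
        for i in range(500)]
```

    Each returned list could be handed to `subprocess.run` on a submission host; keeping command composition separate from submission makes the dispatch logic testable without a cluster, and the queue argument is where the 19 differently-prioritized GFZ queues would come into play.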

  19. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  20. Airborne gravimetry used in precise geoid computations by ring integration

    DEFF Research Database (Denmark)

    Kearsley, A.H.W.; Forsberg, René; Olesen, Arne Vestergaard

    1998-01-01

    Two detailed geoids have been computed in the region of North Jutland. The first computation used marine data in the offshore areas. For the second computation the marine data set was replaced by the sparser airborne gravity data resulting from the AG-MASCO campaign of September 1996. The results...... of comparisons of the geoid heights at on-shore geometric control showed that the geoid heights computed from the airborne gravity data matched in precision those computed using the marine data, supporting the view that airborne techniques have enormous potential for mapping those unsurveyed areas between...

  1. Coal-fired MHD test progress at the Component Development and Integration Facility

    International Nuclear Information System (INIS)

    Hart, A.T.; Rivers, T.J.; Alsberg, C.M.; Filius, K.D.

    1992-01-01

    The Component Development and Integration Facility (CDIF) is a Department of Energy test facility operated by MSE, Inc. In the fall of 1984, a 50-MW(t), pressurized, slag-rejecting coal-fired combustor (CFC) replaced the oil-fired combustor in the test train. In the spring of 1989, a coal-fired precombustor was added to the test hardware, and current controls were installed in the spring of 1990. In the fall of 1990, the slag rejector was installed. MSE test hardware activities included installing the final workhorse channel and modifying the coal-fired combustor by installing improved-design and proof-of-concept (POC) test pieces. This paper discusses the involvement of this hardware in test progress during the past year. Testing during the last year emphasized the final workhorse hardware; this testing will be discussed, as will facility modifications and system upgrades for improved operation and duration testing. In addition, this paper addresses long-term testing plans

  2. Integrated doses calculation in evacuation scenarios of the neutron generator facility at Missouri S&T

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Manish K.; Alajo, Ayodeji B., E-mail: alajoa@mst.edu

    2016-08-11

    Any source of ionizing radiation can lead to considerable dose acquisition by individuals in a nuclear facility. Evacuation may be required when elevated levels of radiation are detected within a facility. In this situation, individuals are likely to take the closest exit. This may not be the most expedient decision, as it may lead to higher dose acquisition. The strategy for preventing large dose acquisition should be predicated on the path that offers the least dose acquisition. In this work, the neutron generator facility at Missouri University of Science and Technology was analyzed. The Monte Carlo N-Particle (MCNP) radiation transport code was used to model the entire floor of the generator's building. The simulated dose rates in the hallways were used to estimate the integrated doses for different paths leading to exits. It was shown that the shortest path did not always lead to minimum dose acquisition, and the approach was successful in predicting the expedient path as opposed to the approach of taking the nearest exit.
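The central idea of the record above, choosing the exit path that minimizes integrated dose rather than distance, can be illustrated with a dose-weighted graph search. The corridor graph and the dose values below are invented for illustration and are not the facility data from the paper.

```python
import heapq

def min_dose_path(graph, start, exits):
    """Dijkstra over a corridor graph whose edge weights are integrated
    doses (dose rate x transit time); returns (dose, path) to the best exit."""
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        dose, node = heapq.heappop(heap)
        if dose > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, d in graph[node]:
            nd = dose + d
            if nd < best.get(nbr, float("inf")):
                best[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    exit_node = min(exits, key=lambda e: best.get(e, float("inf")))
    path, n = [exit_node], exit_node
    while n != start:          # walk predecessors back to the start
        n = prev[n]
        path.append(n)
    return best[exit_node], path[::-1]

# Hypothetical floor plan: the corridor to the near exit passes close to the
# source (high integrated dose); the longer route is better shielded.
graph = {
    "lab":       [("hall_A", 5.0), ("hall_B", 1.0)],
    "hall_A":    [("exit_near", 1.0)],
    "hall_B":    [("hall_C", 1.0)],
    "hall_C":    [("exit_far", 1.0)],
    "exit_near": [], "exit_far": [],
}
dose, path = min_dose_path(graph, "lab", ["exit_near", "exit_far"])
# exit_far wins with 3.0 even though exit_near is fewer hops away (6.0)
```

With MCNP-simulated hallway dose rates as edge weights, the same search reproduces the paper's observation that the nearest exit is not always the expedient one.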

  3. PWR station blackout transient simulation in the INER integral system test facility

    International Nuclear Information System (INIS)

    Liu, T.J.; Lee, C.H.; Hong, W.T.; Chang, Y.H.

    2004-01-01

    A station blackout transient (or TMLB' scenario) in a pressurized water reactor (PWR) was simulated using the INER Integral System Test Facility (IIST), which is a 1/400 volumetrically-scaled reduced-height and reduced-pressure (RHRP) simulator of a Westinghouse three-loop PWR. Long-term thermal-hydraulic responses, including the secondary boil-off and the subsequent primary saturation, pressurization and core uncovery, were simulated based on the assumptions of no offsite and onsite power, no feedwater and no operator actions. The results indicate that two-phase discharge is the major depletion mode, since it covers 81.3% of the total coolant inventory loss. The primary coolant inventory experienced significant re-distribution during the station blackout transient. The deciding parameter for avoiding core overheating is not the total amount of coolant inventory remaining in the primary core cooling system but only the part of the coolant left in the pressure vessel. The sequence of significant events during the transient for the IIST was also compared with that of the ROSA-IV large-scale test facility (LSTF), which is a 1/48 volumetrically-scaled full-height and full-pressure (FHFP) simulator of a PWR. The comparison indicates that the sequence and timing of these events during the TMLB' transient studied in the RHRP IIST facility are generally consistent with those of the FHFP LSTF. (author)

  4. [Elderlies in street situation or social vulnerability: facilities and difficulties in the use of computational tools].

    Science.gov (United States)

    Frias, Marcos Antonio da Eira; Peres, Heloisa Helena Ciqueto; Pereira, Valclei Aparecida Gandolpho; Negreiros, Maria Célia de; Paranhos, Wana Yeda; Leite, Maria Madalena Januário

    2014-01-01

    This study aimed to identify the facilities and difficulties encountered by older people living on the streets or in social vulnerability in using the computer or the internet. It is an exploratory qualitative study in which five elderly people attended by a non-governmental organization located in the city of São Paulo participated. The discourses were analyzed by the content analysis technique and revealed, as facilities, among others, clarifying doubts with the monitors, the stimulus for new discoveries coupled with proactivity and curiosity, and the development of new skills. The difficulties mentioned were related to physical or cognitive issues, lack of an instructor, and lack of knowledge of how to interact with the machine. Studies focusing on the elderly population living on the streets or in social vulnerability may contribute evidence to guide the formulation of public policies for this population.

  5. Development of a personal computer based facility-level SSAC component and inspector support system

    International Nuclear Information System (INIS)

    Markov, A.

    1989-08-01

    Research Contract No. 4658/RB was conducted between the IAEA and the Bulgarian Committee on Use of Atomic Energy for Peaceful Purposes. The contract required the Committee to develop and program a personal computer based software package to be used as a facility-level computerized State System of Accounting and Control (SSAC) at an off-load power reactor. The software delivered, called the National Safeguards System (NSS), keeps track of all fuel assembly activity at a power reactor and generates all ledgers, MBA material balances and any required reports to national or international authorities. The NSS is designed to operate on a PC/AT or compatible equipment with a 20 MB hard disk, a color graphics monitor or adaptor and at least one 360 KB floppy disk drive. The programs are written in Basic (compiler 2.0) and are executed under MS DOS 3.1 or later.

  6. Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, James S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Leverman, Dustin B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Hanley, Jesse A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Oral, Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences

    2016-08-22

    This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project funded by OpenSFS to improve Lustre metadata performance and scalability. The development effort was split into two parts: the first phase (DNE P1) provided support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addressed split directories over multiple remote MDS nodes and MDT devices. The OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. For these tests, an internal OLCF testbed was used. Results are promising, and OLCF is planning a full DNE deployment on production systems in the mid-2016 timeframe.

  7. Multilevel examination of facility characteristics, social integration, and health for older adults living in nursing homes.

    Science.gov (United States)

    Leedahl, Skye N; Chapin, Rosemary K; Little, Todd D

    2015-01-01

    Testing a model based on past research and theory, this study assessed relationships between facility characteristics (i.e., culture change efforts, social workers) and residents' social networks and social support across nursing homes; and examined relationships between multiple aspects of social integration (i.e., social networks, social capital, social engagement, social support) and mental and functional health for older adults in nursing homes. Data were collected at nursing homes using a planned missing data design with random sampling techniques. Data collection occurred at the individual level through in-person structured interviews with older adult nursing home residents (N = 140) and at the facility level (N = 30) with nursing home staff. The best-fitting multilevel structural equation model indicated that the culture change subscale for relationships significantly predicted differences in residents' social networks. Additionally, social networks had a positive indirect relationship with mental and functional health among residents, primarily via social engagement. Social capital had a positive direct relationship with both health outcomes. To predict better social integration and mental and functional health outcomes for nursing home residents, the study findings support prioritizing close relationships among staff, residents, and the community, as well as increased resident social engagement and social trust.

  8. Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility

    Science.gov (United States)

    Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.

    2017-01-01

    The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration-probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparing the CFD results and test data indicates an effectively fully-catalytic copper surface on the heat flux probe of about 10% efficiency and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section prior to the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment and test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.

  9. Integrating supervision, control and data acquisition—The ITER Neutral Beam Test Facility experience

    Energy Technology Data Exchange (ETDEWEB)

    Luchetta, A., E-mail: adriano.luchetta@igi.cnr.it; Manduchi, G.; Taliercio, C.; Breda, M.; Capobianco, R.; Molon, F.; Moressa, M.; Simionato, P.; Zampiva, E.

    2016-11-15

    Highlights: • The paper describes the experience gained in integrating different systems for the control and data acquisition system of the ITER Neutral Beam Test Facility. • It describes the way the different frameworks have been integrated. • It reports some lessons learnt during system integration. • It reports some of the authors' considerations about the development of the ITER CODAC. - Abstract: The ITER Neutral Beam (NBI) Test Facility, under construction in Padova, Italy, consists of the ITER full-scale ion source for the heating neutral beam injector, referred to as SPIDER, and the full-size prototype injector, referred to as MITICA. The Control and Data Acquisition System (CODAS) for SPIDER has been developed and is going to be in operation in 2016. The system is composed of four main components: Supervision, Slow Control, Fast Control and Data Acquisition. These components interact with each other to carry out the system operation and, since they represent a common pattern in fusion experiments, software frameworks have been used for each (set of) components. In order to reuse as far as possible the architecture developed for SPIDER, it is important to clearly define the boundaries and the interfaces among the system components so that the implementation of any component can be replaced without affecting the overall architecture. This work reports the experience gained in the development of the SPIDER components, highlighting the importance of the definition of generic interfaces among components, showing how the specific solutions have been adapted to such interfaces, and suggesting possible approaches for the development of other ITER subsystems.
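The interface discipline this record argues for, generic boundaries so that a component implementation can be replaced without affecting the architecture, can be sketched in miniature with abstract base classes. All class and method names below are hypothetical illustrations, not the actual CODAS API.

```python
from abc import ABC, abstractmethod

class DataAcquisition(ABC):
    """Generic boundary for a data-acquisition component. Any concrete
    framework could sit behind it (names here are invented for the sketch)."""
    @abstractmethod
    def arm(self, shot_id: int) -> None: ...
    @abstractmethod
    def store(self, channel: str, samples: list) -> None: ...

class InMemoryDAQ(DataAcquisition):
    """Stand-in implementation; a framework-backed one could replace it
    without touching the supervisory code that uses the interface."""
    def __init__(self):
        self.shots = {}
        self.current = None
    def arm(self, shot_id):
        self.current = shot_id
        self.shots[shot_id] = {}
    def store(self, channel, samples):
        self.shots[self.current][channel] = list(samples)

def run_pulse(daq: DataAcquisition, shot_id: int):
    # Supervisory logic depends only on the abstract interface.
    daq.arm(shot_id)
    daq.store("beam_current", [0.0, 1.2, 1.1])

daq = InMemoryDAQ()
run_pulse(daq, 42)
```

The supervisory function never names the concrete class, which is the property that lets one component implementation be swapped for another.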

  10. Integrating supervision, control and data acquisition—The ITER Neutral Beam Test Facility experience

    International Nuclear Information System (INIS)

    Luchetta, A.; Manduchi, G.; Taliercio, C.; Breda, M.; Capobianco, R.; Molon, F.; Moressa, M.; Simionato, P.; Zampiva, E.

    2016-01-01

    Highlights: • The paper describes the experience gained in integrating different systems for the control and data acquisition system of the ITER Neutral Beam Test Facility. • It describes the way the different frameworks have been integrated. • It reports some lessons learnt during system integration. • It reports some of the authors' considerations about the development of the ITER CODAC. - Abstract: The ITER Neutral Beam (NBI) Test Facility, under construction in Padova, Italy, consists of the ITER full-scale ion source for the heating neutral beam injector, referred to as SPIDER, and the full-size prototype injector, referred to as MITICA. The Control and Data Acquisition System (CODAS) for SPIDER has been developed and is going to be in operation in 2016. The system is composed of four main components: Supervision, Slow Control, Fast Control and Data Acquisition. These components interact with each other to carry out the system operation and, since they represent a common pattern in fusion experiments, software frameworks have been used for each (set of) components. In order to reuse as far as possible the architecture developed for SPIDER, it is important to clearly define the boundaries and the interfaces among the system components so that the implementation of any component can be replaced without affecting the overall architecture. This work reports the experience gained in the development of the SPIDER components, highlighting the importance of the definition of generic interfaces among components, showing how the specific solutions have been adapted to such interfaces, and suggesting possible approaches for the development of other ITER subsystems.

  11. Unstructured Computational Aerodynamics on Many Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2016-06-08

    Shared memory parallelization of the flux kernel of PETSc-FUN3D, an unstructured tetrahedral mesh Euler flow code previously studied for distributed memory and multi-core shared memory, is evaluated on up to 61 cores per node and up to 4 threads per core. We explore several thread-level optimizations to improve flux kernel performance on the state-of-the-art many integrated core (MIC) Intel processor Xeon Phi “Knights Corner,” with a focus on strong thread scaling. While the linear algebraic kernel is bottlenecked by memory bandwidth for even modest numbers of cores sharing a common memory, the flux kernel, which arises in the control volume discretization of the conservation law residuals and in the formation of the preconditioner for the Jacobian by finite-differencing the conservation law residuals, is compute-intensive and is known to exploit contemporary multi-core hardware effectively. We extend the study of the performance of the flux kernel to the Xeon Phi in three thread affinity modes, namely scatter, compact, and balanced, in both offload and native mode, with and without various code optimizations to improve alignment and reduce cache coherency penalties. Relative to baseline “out-of-the-box” optimized compilation, code restructuring optimizations provide about 3.8x speedup using the offload mode and about 5x speedup using the native mode. Even with these gains for the flux kernel, with respect to execution time the MIC merely achieves par with optimized compilation on a contemporary multi-core Intel CPU, the 16-core Sandy Bridge E5 2670. Nevertheless, the optimizations employed to reduce the data motion and cache coherency protocol penalties of the MIC are expected to be of value for CFD and many other unstructured applications as many-core architecture evolves. We explore large-scale distributed-shared memory performance on the Cray XC40 supercomputer, to demonstrate that optimizations employed on Phi hybridize to this context, where each of

  12. Unstructured Computational Aerodynamics on Many Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.; Kaushik, Dinesh K.; Keyes, David E.

    2016-01-01

    Shared memory parallelization of the flux kernel of PETSc-FUN3D, an unstructured tetrahedral mesh Euler flow code previously studied for distributed memory and multi-core shared memory, is evaluated on up to 61 cores per node and up to 4 threads per core. We explore several thread-level optimizations to improve flux kernel performance on the state-of-the-art many integrated core (MIC) Intel processor Xeon Phi “Knights Corner,” with a focus on strong thread scaling. While the linear algebraic kernel is bottlenecked by memory bandwidth for even modest numbers of cores sharing a common memory, the flux kernel, which arises in the control volume discretization of the conservation law residuals and in the formation of the preconditioner for the Jacobian by finite-differencing the conservation law residuals, is compute-intensive and is known to exploit contemporary multi-core hardware effectively. We extend the study of the performance of the flux kernel to the Xeon Phi in three thread affinity modes, namely scatter, compact, and balanced, in both offload and native mode, with and without various code optimizations to improve alignment and reduce cache coherency penalties. Relative to baseline “out-of-the-box” optimized compilation, code restructuring optimizations provide about 3.8x speedup using the offload mode and about 5x speedup using the native mode. Even with these gains for the flux kernel, with respect to execution time the MIC merely achieves par with optimized compilation on a contemporary multi-core Intel CPU, the 16-core Sandy Bridge E5 2670. Nevertheless, the optimizations employed to reduce the data motion and cache coherency protocol penalties of the MIC are expected to be of value for CFD and many other unstructured applications as many-core architecture evolves. We explore large-scale distributed-shared memory performance on the Cray XC40 supercomputer, to demonstrate that optimizations employed on Phi hybridize to this context, where each of

  13. On a new method to compute photon skyshine doses around radiotherapy facilities

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, R.; Facure, A. [Comissao Nacional de Energia Nuclear, Rio de Janeiro (Brazil); Xavier, A. [PEN/Coppe -UFRJ, Rio de Janeiro (Brazil)

    2006-07-01

    Full text of publication follows: Nowadays, in a great number of situations, buildings are raised around radiotherapy facilities. In cases where the constructions would not be in the primary x-ray beam, 'skyshine' radiation is normally accounted for. The skyshine method is commonly used to calculate the dose contribution from scattered radiation in such circumstances, when the roof shielding is designed considering that there will be no occupancy upstairs. In these cases, there is no need for the usual 1.5-2.0 m thick ceiling, and construction costs can be considerably reduced. The existing expressions for computing these doses fail to explain mathematically the existence of a shadow area just outside the outer room walls, and its growth as one moves away from these walls. In this paper we propose a new method to compute photon skyshine doses, using geometrical considerations to find the maximum dose point. An empirical equation is derived, and its validity is tested using MCNP5 Monte Carlo calculations to simulate radiotherapy room configurations. (authors)

  14. Computer-guided facility for the study of single crystals at the gamma diffractometer GADI

    International Nuclear Information System (INIS)

    Heer, H.; Bleichert, H.; Gruhn, W.; Moeller, R.

    1984-10-01

    In the study of solid-state properties it is in many cases necessary to work with single crystals. The increased demand from industry and research, as well as the desire for better characterization by means of γ-diffractometry, made it necessary to improve and modernize the existing instrument. The advantages of a computer-guided facility over conventional, semiautomatic operation are manifold. Not only the process control, but also the data acquisition and evaluation, are performed by the computer. Using a remote control, the operator is able to quickly find a reflection and drive the crystal to any desired measuring position. The complete logging of all important measuring parameters, the convenient data storage, and the automatic evaluation are of great use to the user. Finally, the measuring time can be increased to practically 24 hours per day. This puts characterization by means of γ-diffractometry on a completely new level. (orig.)

  15. A guide for the selection of computer assisted mapping (CAM) and facilities information systems

    Energy Technology Data Exchange (ETDEWEB)

    Haslin, S.; Baxter, P.; Jarvis, L.

    1980-12-01

    Many distribution engineers are now aware that computer assisted mapping (CAM) and facilities information systems are probably the most significant breakthrough to date in computer applications for distribution engineering. The Canadian Electrical Association (CEA) recognized this and requested that engineers of B.C. Hydro study the state of the art in Canadian utilities and the progress of CAM systems on an international basis. The purpose was to provide a guide to assist Canadian utility distribution engineers faced with the problem of studying the application of CAM systems as an alternative to present methods, consideration being given to the long-term and other benefits that were perhaps not apparent to those approaching this field for the first time. It soon became apparent that the technology was developing at a high rate and competition in the market was very strong. In addition, a number of publications produced by other sources adequately covered the scope of this study. This report is thus a collection of references to reports, manuals, and other documents, with a few considerations provided for those companies interested in exploring further the use of interactive graphics. 24 refs.

  16. Computer programs for capital cost estimation, lifetime economic performance simulation, and computation of cost indexes for laser fusion and other advanced technology facilities

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1978-01-01

    Three FORTRAN programs, CAPITAL, VENTURE, and INDEXER, have been developed to automate computations used in assessing the economic viability of proposed or conceptual laser fusion and other advanced-technology facilities, as well as conventional projects. The types of calculations performed by these programs are, respectively, capital cost estimation, lifetime economic performance simulation, and computation of cost indexes. The codes permit these three topics to be addressed with considerable sophistication commensurate with user requirements and available data
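The kinds of calculations such programs automate can be illustrated with two standard engineering-economics formulas: cost-index escalation of a historical estimate and present-value discounting of a cash-flow stream. The functions and numbers below are illustrative sketches, not reconstructions of CAPITAL, VENTURE, or INDEXER.

```python
def present_value(cash_flows, rate):
    """Discount a list of year-end cash flows at a fixed annual rate."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def escalate(base_cost, index_base, index_now):
    """Update a historical cost estimate by a cost-index ratio."""
    return base_cost * index_now / index_base

# Illustrative only: escalate an old $10M estimate to a year in which the
# cost index has risen from 100 to 180, then discount 3 years of revenue.
capital = escalate(10.0e6, 100.0, 180.0)       # 10e6 * 180/100 = 18.0e6
pv = present_value([5e6, 5e6, 5e6], 0.10)      # sum of 5e6/1.1**t, t = 1..3
```

Lifetime economic performance simulation amounts to chaining such present-value sums over a facility's projected construction and operating cash flows.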

  17. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are the dense curriculum, which makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to examine how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with another course. The results of this research complement studies on how to integrate numerical computation into physics learning using Excel spreadsheets.
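The spreadsheet approach described above works because many numerical methods reduce to a single formula dragged down a column, one row per time step. The sketch below writes out that row-by-row iteration explicitly (Euler's method for radioactive decay, an example chosen here for illustration, not taken from the paper).

```python
def euler_decay(n0, lam, dt, steps):
    """Euler's method for dN/dt = -lam * N: each new 'row' is
    N_next = N * (1 - lam * dt), exactly the drag-down formula a
    spreadsheet column would hold."""
    rows = [n0]
    for _ in range(steps):
        rows.append(rows[-1] * (1.0 - lam * dt))
    return rows

# N0 = 1000, lam = 0.1 per unit time, dt = 0.5: each row is 95% of the last.
rows = euler_decay(1000.0, 0.1, 0.5, 4)
```

In Excel the same computation is one cell formula such as `=B2*(1-$D$1*$D$2)` filled downward, which is what makes the method accessible to students without programming experience.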

  18. Hanford Site Composite Analysis Technical Approach Description: Integrated Computational Framework.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K. J. [CH2M HILL Plateau Remediation Company, Richland, WA (United States)

    2017-09-14

    The U.S. Department of Energy (DOE), in DOE O 435.1 Chg. 1, Radioactive Waste Management, requires the preparation and maintenance of a composite analysis (CA). The primary purpose of the CA is to provide a reasonable expectation that the primary public dose limit is not likely to be exceeded by multiple source terms that may significantly interact with plumes originating at a low-level waste disposal facility. The CA is used to facilitate planning and land use decisions that help assure disposal facility authorization will not result in long-term compliance problems, or to determine management alternatives, corrective actions, or assessment needs if potential problems are identified.

  19. Cloud Computing: Should It Be Integrated into the Curriculum?

    Science.gov (United States)

    Changchit, Chuleeporn

    2015-01-01

    Cloud computing has become increasingly popular among users and businesses around the world, and education is no exception. Cloud computing can bring an increased number of benefits to an educational setting, not only for its cost effectiveness, but also for the thirst for technology that college students have today, which allows learning and…

  20. Integrating Human and Computer Intelligence. Technical Report No. 32.

    Science.gov (United States)

    Pea, Roy D.

    This paper explores the thesis that advances in computer applications and artificial intelligence have important implications for the study of development and learning in psychology. Current approaches to the use of computers as devices for problem solving, reasoning, and thinking--i.e., expert systems and intelligent tutoring systems--are…

  1. Gesture Recognition by Computer Vision: An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  2. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction to the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is seamless embedding of the grid into the discovery process. User-friendly access to powerful algorithms without restrictions, such as a limited number of licenses, has to be the goal of grid computing in drug discovery.

  3. The management of mechanical integrity inspections at small-sized 'Seveso' facilities

    International Nuclear Information System (INIS)

    Bragatto, Paolo A.; Pittiglio, Paolo; Ansaldi, Silvia

    2009-01-01

    The mechanical integrity (MI) of equipment has been controlled at all industrial facilities for many decades. Control methods and intervals are regulated by laws or by codes and best practices. In European countries, the legislation implementing the Seveso Directives on the control of major accident hazards requires the owners of establishments where hazardous chemicals are handled to implement a safety management system (SMS). MI controls should be an integral part of the SMS. At large establishments this goal is achieved by adopting the RBI method, but in small-sized establishments with a limited budget and scarce personnel, a heuristic approach is more suitable. This paper demonstrates the feasibility and advantages of integrating the SMS and MI by means of a simple method that includes a few basic concepts of RBI without additional costs for the operator. This method, supported by a software tool, is resilient in that it functions effectively in spite of possible budget reductions and personnel turnover. The results of MI controls can also be exploited to monitor equipment condition and demonstrate the adequacy of technical systems to the Competent Authorities (CA). Furthermore, the SMS can 'capture' knowledge resulting from MI experience and exploit it for a better understanding of risk

  4. Computation of Hopkins' 3-circle integrals using Zernike expansions

    NARCIS (Netherlands)

    Janssen, A.J.E.M.

    2011-01-01

    The integrals occurring in optical diffraction theory under conditions of partial coherence have the form of an incomplete autocorrelation integral of the pupil function of the optical system. The incompleteness is embodied by a spatial coherence function of limited extent. In the case of circular

  5. Integrated Visible Photonics for Trapped-Ion Quantum Computing

    Science.gov (United States)

    2017-06-10

    etch to provide a smooth oxide facet, and clearance for fiber positioning for edge input coupling. Integrated Visible Photonics for Trapped-Ion...capability to optically address individual ions at several wavelengths. We demonstrate a dual-layered silicon nitride photonic platform for integration...coherence times, strong coulomb interactions, and optical addressability, hold great promise for implementation of practical quantum information

  6. Fuel cycle facility control system for the Integral Fast Reactor Program

    International Nuclear Information System (INIS)

    Benedict, R.W.; Tate, D.A.

    1993-01-01

    As part of the Integral Fast Reactor (IFR) Fuel Demonstration, a new distributed control system was designed, implemented and installed. The fuel processes are a combination of chemical and machining processes operated remotely. To meet this special requirement, the new control system provides complete sequential logic control, motion and positioning control, and continuous PID loop control. Also, a centralized computer system provides near-real-time nuclear material tracking, product quality control data archiving and a centralized reporting function. The control system was configured to use programmable logic controllers, small logic controllers, personal computers with touch screens, engineering work stations and interconnecting networks. By following a structured software development method the operator interface was standardized. The system has been installed and is presently being tested for operations

  7. Integrating user studies into computer graphics-related courses.

    Science.gov (United States)

    Santos, B S; Dias, P; Silva, S; Ferreira, C; Madeira, J

    2011-01-01

    This paper discusses the integration of user studies into computer graphics-related courses. Computer graphics and visualization are essentially about producing images for a target audience, be it the millions watching a new CG-animated movie or the small group of researchers trying to gain insight into the large amount of numerical data resulting from a scientific experiment. To ascertain the final images' effectiveness for their intended audience or the designed visualizations' accuracy and expressiveness, formal user studies are often essential. In human-computer interaction (HCI), such user studies play a similar fundamental role in evaluating the usability and applicability of interaction methods and metaphors for the various devices and software systems we use.

  8. A Scheme for Verification on Data Integrity in Mobile Multicloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Laicheng Cao

    2016-01-01

    Full Text Available In order to verify the data integrity in mobile multicloud computing environment, a MMCDIV (mobile multicloud data integrity verification scheme is proposed. First, the computability and nondegeneracy of verification can be obtained by adopting BLS (Boneh-Lynn-Shacham short signature scheme. Second, communication overhead is reduced based on HVR (Homomorphic Verifiable Response with random masking and sMHT (sequence-enforced Merkle hash tree construction. Finally, considering the resource constraints of mobile devices, data integrity is verified by lightweight computing and low data transmission. The scheme compensates for the limited communication and computing power of mobile devices, supports dynamic data operations in a mobile multicloud environment, and verifies data integrity without direct access to the source file blocks. Experimental results also demonstrate that this scheme can achieve a lower cost of computing and communications.
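    The sMHT construction mentioned above builds on a standard Merkle hash tree. As an illustration of the underlying idea (not the paper's full scheme, which adds sequence enforcement, BLS signatures and random masking), a verifier can check a single block against a published root hash using only a logarithmic-size sibling path:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root of a Merkle hash tree over file blocks (the last node is
    duplicated on odd-sized levels)."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(blocks, index):
    """Sibling path proving that block `index` belongs to the tree.
    Each entry is (sibling_hash, our_node_is_left)."""
    level = [h(b) for b in blocks]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(block, path, root):
    """Recompute the root from one block and its sibling path."""
    node = h(block)
    for sibling, node_is_left in path:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)
proof = merkle_proof(blocks, 2)
print(verify(b"block2", proof, root))    # True
print(verify(b"tampered", proof, root))  # False
```

    This is why such constructions suit resource-constrained mobile devices: verification needs one block plus O(log n) hashes rather than the whole file.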

  9. Oregon state university's advanced plant experiment (APEX) AP1000 integral facility test program

    International Nuclear Information System (INIS)

    Reyes, J.N.; Groome, J.T.; Woods, B.G.; Young, E.; Abel, K.; Wu, Q.

    2005-01-01

    Oregon State University (OSU) has recently completed a three-year study of the thermal hydraulic behavior of the Westinghouse AP1000 passive safety systems. Eleven Design Basis Accident (DBA) scenarios, sponsored by the U.S. Department of Energy (DOE) with technical support from Westinghouse Electric, were simulated in OSU's Advanced Plant Experiment (APEX)-1000. The OSU test program was conducted within the purview of the requirements of 10 CFR 50 Appendix B, NQA-1 and 10 CFR 21, and the test data were used to provide benchmarks for computer codes used in the final design approval of the AP1000. In addition to the DOE certification testing, OSU conducted eleven confirmatory tests for the U.S. Nuclear Regulatory Commission. This paper presents the test program objectives, a description of the APEX-1000 test facility and an overview of the test matrix that was conducted in support of plant certification. (authors)

  10. VISA: a method for evaluating the performance of integrated safeguards systems at nuclear facilities

    International Nuclear Information System (INIS)

    Donnelly, H.; Fullwood, R.; Glancy, J.

    1977-06-01

    This is the second volume of a two volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapters 3.2.1 and 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3

  11. Recharge Data Package for the 2005 Integrated Disposal Facility Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Fayer, Michael J.; Szecsody, Jim E.

    2004-06-30

    Pacific Northwest National Laboratory assisted CH2M Hill Hanford Group, Inc., (CHG) by providing estimates of recharge rates for current conditions and long-term scenarios involving disposal in the Integrated Disposal Facility (IDF). The IDF will be located in the 200 East Area at the Hanford Site and will receive several types of waste including immobilized low-activity waste. The recharge estimates for each scenario were derived from lysimeter and tracer data collected by the IDF PA Project and from modeling studies conducted for the project. Recharge estimates were provided for three specific site features (the surface barrier; possible barrier side slopes; and the surrounding soil) and four specific time periods (pre-Hanford; Hanford operations; surface barrier design life; post-barrier design life). CHG plans to conduct a performance assessment of the latest IDF design and call it the IDF 2005 PA; this recharge data package supports the upcoming IDF 2005 PA.

  12. Solid secondary waste testing for maintenance of the Hanford Integrated Disposal Facility Performance Assessment - FY 2017

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, Ralph L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Seitz, Roger R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Dixon, Kenneth L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-08-01

    The Waste Treatment and Immobilization Plant (WTP) at Hanford is being constructed to treat 56 million gallons of radioactive waste currently stored in underground tanks at the Hanford site. Operation of the WTP will generate several solid secondary waste (SSW) streams including used process equipment, contaminated tools and instruments, decontamination wastes, high-efficiency particulate air filters (HEPA), carbon adsorption beds, silver mordenite iodine sorbent beds, and spent ion exchange resins (IXr) all of which are to be disposed in the Integrated Disposal Facility (IDF). An applied research and development program was developed using a phased approach to incrementally develop the information necessary to support the IDF PA with each phase of the testing building on results from the previous set of tests and considering new information from the IDF PA calculations. This report contains the results from the exploratory phase, Phase 1 and preliminary results from Phase 2. Phase 3 is expected to begin in the fourth quarter of FY17.

  13. Design and first integral test of MUSE facility in ALPHA program

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun-sun; Yamano, Norihiro; Maruyama, Yu; Moriyama, Kiyofumi; Kudo, Tamotsu; Yang, Yanhua; Sugimoto, Jun [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    Vapor explosion (Steam explosion or energetic Fuel-Coolant Interaction) is a phenomenon in which a hot liquid rapidly releases its internal energy into a surrounding colder and more volatile liquid when these liquids come into a sudden contact. This rapid energy release leads to rapid vapor production on a timescale short compared to that of vapor expansion, causing local pressurization similar to an explosion that eventually threatens the surroundings through dynamic pressures and the subsequent expansion. It has been recognized that the energetics of vapor explosions strongly depend on the initial mixing geometry established by the contact of hot and cold liquids. Therefore, a new program has been initiated to investigate the energetics of vapor explosions in various contact geometries; i.e., pouring, stratified, coolant and melt injection modes in a facility which is able to measure the energy conversion ratio and eventually to provide data to evaluate the mechanistic analytical models. In the report, this new facility, called MUSE (MUlti-configuration in Steam Explosions), and the results of the first integral test are described in detail. (author)

  14. Transition from depressurization to long term cooling in AP600 scaled integral test facilities

    International Nuclear Information System (INIS)

    Bessette, D.E.; Marzo, M. di

    1999-01-01

    A novel light water reactor design called the AP600 has been proposed by the Westinghouse Electric Corporation. In the evaluation of this plant's behavior during a small break loss of coolant accident (LOCA), the crucial transition to low pressure, long-term cooling is marked by the injection of the gravitationally driven flow from the in-containment refueling water storage tank (IRWST). The onset of this injection is characterized by intermittency in the IRWST flow. This happens at a time when the reactor vessel reaches its minimum inventory. Therefore, it is important to understand and scale the behavior of the integral experimental test facilities during this portion of the transient. Periodic draining and refilling of the pressurizer is shown to cause the intermittent behavior. The momentum balance for the surge line yields the nondimensional parameter controlling this process. Data from one of the three experimental facilities represent the phenomena well at the prototypical scale. The impact of the intermittent IRWST injection on safe plant operation is assessed and its implications are successfully resolved. The oscillation is found to result from, in effect, excess water in the primary system and it is not of safety significance. (orig.)

  15. Development of a Remote Handling System in an Integrated Pyroprocessing Facility

    Directory of Open Access Journals (Sweden)

    Hyo Jik Lee

    2013-10-01

    Full Text Available Over the course of a decade-long research programme, the Korea Atomic Energy Research Institute (KAERI has developed several remote handling systems for use in pyroprocessing research facilities. These systems are now used successfully for the operation and maintenance of processing equipment. The most recent remote handling system is the bridge-transported dual arm servo-manipulator system (BDSM, which is used for remote operation at the world's largest pyroprocess integrated inactive demonstration facility (PRIDE. Accurate and reliable servo-control is the basic requirement for the BDSM to accomplish any given tasks successfully in a hotcell environment. To achieve this end, the hardware and software of a digital signal processor-based remote control system were fully custom-developed and implemented to control the BDSM. To reduce the residual vibration of the BDSM, several input profiles, including input shaping, were carefully chosen and evaluated. Furthermore, a time delay controller was employed to achieve good tracking performance and systematic gain tuning. The experimental results demonstrate that the applied control algorithms are more effective than conventional approaches. The BDSM successfully completed its performance tests at a mock-up and was installed at PRIDE for real-world operation. The remote handling system at KAERI is expected to advance the actualization of pyroprocessing.
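    Input shaping, one of the vibration-reduction techniques mentioned above, convolves the commanded motion with a short impulse sequence tuned to the manipulator's dominant vibratory mode. A minimal zero-vibration (ZV) shaper sketch; the BDSM's actual modal frequency and damping are not given in the abstract, so the numbers here are purely illustrative:

```python
import math

def zv_shaper(freq_hz: float, zeta: float):
    """Two-impulse zero-vibration (ZV) input shaper for a mode with
    natural frequency freq_hz and damping ratio zeta. Returns impulse
    amplitudes (summing to 1) and impulse times; convolving a motion
    command with these impulses cancels residual vibration at the mode."""
    wd = 2 * math.pi * freq_hz * math.sqrt(1 - zeta ** 2)   # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    amps = [1 / (1 + K), K / (1 + K)]
    times = [0.0, math.pi / wd]  # second impulse half a damped period later
    return amps, times

# Illustrative mode: 2 Hz, 5% damping
amps, times = zv_shaper(freq_hz=2.0, zeta=0.05)
print(amps, times)
```

    The trade-off, as the abstract's comparison of input profiles suggests, is added command duration (here half a damped period) in exchange for suppressed residual vibration.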

  16. Biomedical data integration in computational drug design and bioinformatics.

    Science.gov (United States)

    Seoane, Jose A; Aguiar-Pulido, Vanessa; Munteanu, Cristian R; Rivero, Daniel; Rabunal, Juan R; Dorado, Julian; Pazos, Alejandro

    2013-03-01

    In recent years, in the post genomic era, more and more data is being generated by biological high throughput technologies, such as proteomics and transcriptomics. This omics data can be very useful, but the real challenge is to analyze all this data, as a whole, after integrating it. Biomedical data integration enables making queries to different, heterogeneous and distributed biomedical data sources. Data integration solutions can be very useful not only in the context of drug design, but also in biomedical information retrieval, clinical diagnosis, system biology, etc. In this review, we analyze the most common approaches to biomedical data integration, such as federated databases, data warehousing, multi-agent systems and semantic technology, as well as the solutions developed using these approaches in the past few years.

  17. Advanced computer algebra algorithms for the expansion of Feynman integrals

    International Nuclear Information System (INIS)

    Ablinger, Jakob; Round, Mark; Schneider, Carsten

    2012-10-01

    Two-point Feynman parameter integrals, with at most one mass and containing local operator insertions in 4+ε-dimensional Minkowski space, can be transformed to multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions depending on a discrete parameter n. Given such a specific representation, we utilize an enhanced version of the multivariate Almkvist-Zeilberger algorithm (for multi-integrals) and a common summation framework of the holonomic and difference field approach (for multi-sums) to calculate recurrence relations in n. Finally, solving the recurrence we can decide efficiently if the first coefficients of the Laurent series expansion of a given Feynman integral can be expressed in terms of indefinite nested sums and products; if yes, the all n solution is returned in compact representations, i.e., no algebraic relations exist among the occurring sums and products.

  18. Advanced computer algebra algorithms for the expansion of Feynman integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Round, Mark; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-10-15

    Two-point Feynman parameter integrals, with at most one mass and containing local operator insertions in 4+{epsilon}-dimensional Minkowski space, can be transformed to multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions depending on a discrete parameter n. Given such a specific representation, we utilize an enhanced version of the multivariate Almkvist-Zeilberger algorithm (for multi-integrals) and a common summation framework of the holonomic and difference field approach (for multi-sums) to calculate recurrence relations in n. Finally, solving the recurrence we can decide efficiently if the first coefficients of the Laurent series expansion of a given Feynman integral can be expressed in terms of indefinite nested sums and products; if yes, the all n solution is returned in compact representations, i.e., no algebraic relations exist among the occurring sums and products.
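    The final step described above, deciding whether a recurrence admits a closed form in indefinite nested sums and products, can at least be sanity-checked numerically: any candidate closed form must reproduce the recurrence term by term. A toy illustration (the recurrence below is made up for the example, not one of the paper's Feynman-integral recurrences):

```python
def satisfies(recurrence_step, closed_form, init, n_max=20):
    """Check that closed_form(n) matches the sequence defined by
    f(0) = init and f(n+1) = recurrence_step(f(n)), for n < n_max."""
    val = init
    for k in range(n_max):
        if closed_form(k) != val:
            return False
        val = recurrence_step(val)
    return True

# Toy recurrence f(n+1) = 2 f(n) + 1 with f(0) = 1,
# whose closed form is 2**(n+1) - 1.
print(satisfies(lambda x: 2 * x + 1, lambda k: 2 ** (k + 1) - 1, init=1))  # True
```

    The symbolic machinery in the paper goes much further, returning the closed form itself when one exists, but a term-by-term check like this is a cheap guard against algebraic slips.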

  19. Projects at the component development and integration facility. Quarterly technical progress report, April 1, 1994--June 30, 1994

    International Nuclear Information System (INIS)

    1994-01-01

    This quarterly technical progress report presents progress on the projects at the Component Development and Integration Facility (CDIF) during the third quarter of FY94. The CDIF is a major Department of Energy test facility in Butte, Montana, operated by MSE, Inc. Projects in progress include: Biomass Remediation Project; Heavy Metal-Contaminated Soil Project; MHD Shutdown; Mine Waste Technology Pilot Program; Plasma Projects; Resource Recovery Project; and Spray Casting Project

  20. A Performance Measurement and Implementation Methodology in a Department of Defense CIM (Computer Integrated Manufacturing) Environment

    Science.gov (United States)

    1988-01-24

    vanes.-The new facility is currently being called the Engine Blade/ Vane Facility (EB/VF). There are three primary goals in automating this proc..e...earlier, the search led primarily into the areas of CIM Justification, Automation Strategies , Performance Measurement, and Integration issues. Of...of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets , says Rohan [33], has been for U.S

  1. Using a qualitative approach for understanding hospital-affiliated integrated clinical and fitness facilities: characteristics and members' experiences.

    Science.gov (United States)

    Yang, Jingzhen; Kingsbury, Diana; Nichols, Matthew; Grimm, Kristin; Ding, Kele; Hallam, Jeffrey

    2015-06-19

    With health care shifting away from the traditional sick care model, many hospitals are integrating fitness facilities and programs into their clinical services in order to support health promotion and disease prevention at the community level. Through a series of focus groups, the present study assessed characteristics of hospital-affiliated integrated facilities located in Northeast Ohio, United States and members' experiences with respect to these facilities. Adult members were invited to participate in a focus group using a recruitment flyer. A total of 6 focus groups were conducted in 2013, each lasting one hour, ranging from 5 to 12 participants per group. The responses and discussions were recorded and transcribed verbatim, then analyzed independently by research team members. Major themes were identified after consensus was reached. The participants' average age was 57, with 56.8% currently under a doctor's care. Four major themes associated with integrated facilities and members' experiences emerged across the six focus groups: 1) facility/program, 2) social atmosphere, 3) provider, and 4) member. Within each theme, several sub-themes were also identified. A key feature of integrated facilities is the availability of clinical and fitness services "under one roof". Many participants remarked that they initially attended physical therapy, becoming members of the fitness facility afterwards, or vice versa. The participants had favorable views of and experiences with the superior physical environment and atmosphere, personal attention, tailored programs, and knowledgeable, friendly, and attentive staff. In particular, participants favored the emphasis on preventive care and the promotion of holistic health and wellness. These results support the integration of wellness promotion and programming with traditional medical care and call for the further evaluation of such a model with regard to participants' health outcomes.

  2. New challenges for HEP computing: RHIC [Relativistic Heavy Ion Collider] and CEBAF [Continuous Electron Beam Accelerator Facility

    International Nuclear Information System (INIS)

    LeVine, M.J.

    1990-01-01

    We will look at two facilities: RHIC and CEBAF. CEBAF is in the construction phase; RHIC is about to begin construction. For each of them, we examine the kinds of physics measurements that motivated their construction, and the implications of these experiments for computing. Emphasis will be on on-line requirements, driven by the data rates produced by these experiments

  3. Final deactivation project report on the Integrated Process Demonstration Facility, Building 7602 Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    1997-09-01

    The purpose of this report is to document the condition of the Integrated Process Demonstration Facility (Building 7602) at Oak Ridge National Laboratory (ORNL) after completion of deactivation activities by the High Ranking Facilities Deactivation Project (HRFDP). This report identifies the activities conducted to place the facility in a safe and environmentally sound condition prior to transfer to the U.S. Department of Energy (DOE) Environmental Restoration EM-40 Program. This report provides a history and description of the facility prior to commencing deactivation activities and documents the condition of the building after completion of all deactivation activities. Turnover items, such as the Post-Deactivation Surveillance and Maintenance (S&M) Plan, remaining hazardous and radioactive materials inventory, radiological controls, Safeguards and Security, and supporting documentation provided in the Office of Nuclear Material and Facility Stabilization Program (EM-60) Turnover package are discussed

  4. Validation of RETRAN-03 by simulating a peach bottom turbine trip and boiloff at the full integral simulation test facility

    International Nuclear Information System (INIS)

    Westacott, J.L.; Peterson, C.E.

    1992-01-01

    This paper reports that the RETRAN-03 computer code is validated by simulating two tests that were performed at the Full Integral Simulation Test (FIST) facility. The RETRAN-03 results of a turbine trip (test 4PTT1) and failure to maintain water level at decay power (test T1QUV) are compared with the FIST test data. The RETRAN-03 analysis of test 4PTT1 is compared with a previous TRAC-BWR analysis of the test. Sensitivity to various model nodalizations and RETRAN-03 slip options are studied by comparing results of test T1QUV. The predicted thermal-hydraulic responses of both tests agree well with the test data. The pressure response of test 4PTT1 and the boiloff rate for test T1QUV are accurately predicted. Core uncovery time is found to be sensitive to the upper downcomer and upper plenum nodalization. The RETRAN-03 algebraic and dynamic slip options produce similar results for test T1QUV

  5. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of profusions of entities (such as notes) and of tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current researches cover particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis; the computational model, by virtue of its generality, extensiveness and operationality, is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological uses.

  6. Surface Water Modeling Using an EPA Computer Code for Tritiated Waste Water Discharge from the heavy Water Facility

    International Nuclear Information System (INIS)

    Chen, K.F.

    1998-06-01

    Tritium releases from the D-Area Heavy Water Facilities to the Savannah River have been analyzed. The U.S. EPA WASP5 computer code was used to simulate surface water transport for tritium releases from the D-Area Drum Wash, Rework, and DW facilities. The WASP5 model was qualified with the 1993 tritium measurements at U.S. Highway 301. At the maximum tritiated waste water concentrations, the calculated tritium concentration in the Savannah River at U.S. Highway 301 due to concurrent releases from D-Area Heavy Water Facilities varies from 5.9 to 18.0 pCi/ml as a function of the operation conditions of these facilities. The calculated concentration is lowest when the batch release method for the Drum Wash Waste Tanks is adopted
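    The concentrations quoted above come from a full transient transport simulation, but their order of magnitude can be checked with a fully mixed, steady-state mass balance: concentration is release rate divided by river flow. The numbers below are hypothetical inputs for illustration, not the actual D-Area source term or Savannah River flow:

```python
def river_concentration(release_ci_per_day: float,
                        river_flow_m3_per_s: float) -> float:
    """Fully mixed steady-state concentration (pCi/mL) of a conservative
    tracer such as tritium: release rate over river flow. Illustrative
    mass balance only; WASP5 solves the full transient
    advection-dispersion problem."""
    pci_per_day = release_ci_per_day * 1e12           # Ci -> pCi
    ml_per_day = river_flow_m3_per_s * 86400 * 1e6    # m3/s -> mL/day
    return pci_per_day / ml_per_day

# Hypothetical release and flow values:
print(round(river_concentration(release_ci_per_day=150,
                                river_flow_m3_per_s=280), 2))  # 6.2
```

    A one-line estimate like this is a useful plausibility check on a transport-code result before digging into nodalization or boundary-condition details.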

  7. Integration of the PHIN RF Gun into the CLIC Test Facility

    CERN Document Server

    Döbert, Steffen

    2006-01-01

    CERN is a collaborator within the European PHIN project, a joint research activity for Photo injectors within the CARE program. A deliverable of this project is an rf Gun equipped with high quantum efficiency Cs2Te cathodes and a laser to produce the nominal beam for the CLIC Test Facility (CTF3). The nominal beam for CTF3 has an average current of 3.5 A, 1.5 GHz bunch repetition frequency and a pulse length of 1.5 µs (2332 bunches) with quite tight stability requirements. In addition a phase shift of 180 deg is needed after each train of 140 ns for the special CLIC combination scheme. This rf Gun will be tested at CERN in fall 2006 and shall be integrated as a new injector into the CTF3 linac, replacing the existing injector consisting of a thermionic gun and a subharmonic bunching system. The paper studies the optimal integration into the machine trying to optimize transverse and longitudinal phase space of the beam while respecting the numerous constraints of the existing accelerator. The presented scheme...

  8. Integrated, long term, sustainable, cost effective biosolids management at a large Canadian wastewater treatment facility.

    Science.gov (United States)

    Leblanc, R J; Allain, C J; Laughton, P J; Henry, J G

    2004-01-01

    The Greater Moncton Sewerage Commission's 115,000 m3/d advanced, chemically assisted primary wastewater treatment facility located in New Brunswick, Canada, has developed an integrated, long term, sustainable, cost effective programme for the management and beneficial utilization of biosolids from lime stabilized raw sludge. The paper overviews biosolids production, lime stabilization, conveyance, and odour control followed by an in-depth discussion of the wastewater sludge as a resource programme, namely: composting, mine site reclamation, landfill cover, land application for agricultural use, tree farming, sod farm base as a soil enrichment, topsoil manufacturing. The paper also addresses the issues of metals, pathogens, organic compounds, the quality control program along with the regulatory requirements. Biosolids capital and operating costs are presented. Research results on removal of metals from primary sludge using a unique biological process known as BIOSOL as developed by the University of Toronto, Canada to remove metals and destroy pathogens are presented. The paper also discusses an ongoing cooperative research project with the Université de Moncton where various mixtures of plant biosolids are composted with low quality soil. Integration, approach to sustainability and "cumulative effects" as part of the overall biosolids management strategy are also discussed.

  9. Integrated, long term, sustainable, cost effective biosolids management at a large Canadian wastewater treatment facility

    Energy Technology Data Exchange (ETDEWEB)

    LeBlance, R.J.; Allain, C.J.; Laughton, P.J.; Henry, J.G.

    2003-07-01

    The Greater Moncton Sewerage Commission's 115,000 m3/d advanced, chemically assisted primary wastewater treatment facility located in New Brunswick, Canada, has developed an integrated, long term, sustainable, cost effective programme for the management and beneficial utilization of biosolids from lime stabilized raw sludge. The paper overviews biosolids production, lime stabilization, conveyance, and odour control followed by an in-depth discussion of the wastewater sludge as a resource programme, namely: composting, mine site reclamation, landfill cover, land application for agricultural use, tree farming, sod farm base as a soil enrichment, topsoil manufacturing. The paper also addresses the issues of metals, pathogens, organic compounds, the quality control program along with the regulatory requirements. Biosolids capital and operating costs are presented. Research results on removal of metals from primary sludge using a unique biological process known as BIOSOL as developed by the University of Toronto, Canada to remove metals and destroy pathogens are presented. The paper also discusses an ongoing cooperative research project with the Universite de Moncton where various mixtures of plant biosolids are composted with low quality soil. Integration, approach to sustainability and "cumulative effects" as part of the overall biosolids management strategy is also discussed. (author)

  10. NOMINATION FOR THE PROJECT MANAGEMENT INSTITUTE (PMI) PROJECT OF THE YEAR AWARD. INTEGRATED DISPOSAL FACILITY (IDF)

    International Nuclear Information System (INIS)

    MCLELLAN, G.W.

    2007-01-01

    CH2M HILL Hanford Group, Inc. (CH2M HILL) is pleased to nominate the Integrated Disposal Facility (IDF) project for the Project Management Institute's consideration as 2007 Project of the Year. Built for the U.S. Department of Energy's (DOE) Office of River Protection (ORP) at the Hanford Site, the IDF is the site's first Resource Conservation and Recovery Act (RCRA)-compliant disposal facility. The IDF is important to DOE's waste management strategy for the site. Effective management of the IDF project contributed to the project's success. The project was carefully managed to meet three Tri-Party Agreement (TPA) milestones. The completed facility fully satisfied the needs and expectations of the client, regulators and stakeholders. Ultimately, the project, initially estimated to require 48 months and $33.9 million to build, was completed four months ahead of schedule and $11.1 million under budget. DOE directed construction of the IDF to provide additional capacity for disposing of low-level radioactive and mixed (i.e., radioactive and hazardous) solid waste. The facility needed to comply with federal and Washington State environmental laws and meet TPA milestones. The facility had to accommodate over one million cubic yards of the waste material, including immobilized low-activity waste packages from the Waste Treatment Plant (WTP), low-level and mixed low-level waste from WTP failed melters, and alternative immobilized low-activity waste forms, such as bulk-vitrified waste. CH2M HILL designed and constructed a disposal facility with a redundant system of containment barriers and a sophisticated leak-detection system. Built on a 168-acre area, the facility's construction met all regulatory requirements. The facility's containment system actually exceeds the state's environmental requirements for a hazardous waste landfill. Effective management of the IDF construction project required working through highly political and legal issues as well as challenges with

  11. NOMINATION FOR THE PROJECT MANAGEMENT INSTITUTE (PMI) PROJECT OF THE YEAR AWARD INTEGRATED DISPOSAL FACILITY (IDF)

    Energy Technology Data Exchange (ETDEWEB)

    MCLELLAN, G.W.

    2007-02-07

CH2M HILL Hanford Group, Inc. (CH2M HILL) is pleased to nominate the Integrated Disposal Facility (IDF) project for the Project Management Institute's consideration as 2007 Project of the Year. Built for the U.S. Department of Energy's (DOE) Office of River Protection (ORP) at the Hanford Site, the IDF is the site's first Resource Conservation and Recovery Act (RCRA)-compliant disposal facility. The IDF is important to DOE's waste management strategy for the site. Effective management of the IDF project contributed to the project's success. The project was carefully managed to meet three Tri-Party Agreement (TPA) milestones. The completed facility fully satisfied the needs and expectations of the client, regulators and stakeholders. Ultimately, the project, initially estimated to require 48 months and $33.9 million to build, was completed four months ahead of schedule and $11.1 million under budget. DOE directed construction of the IDF to provide additional capacity for disposing of low-level radioactive and mixed (i.e., radioactive and hazardous) solid waste. The facility needed to comply with federal and Washington State environmental laws and meet TPA milestones. The facility had to accommodate over one million cubic yards of waste material, including immobilized low-activity waste packages from the Waste Treatment Plant (WTP), low-level and mixed low-level waste from WTP failed melters, and alternative immobilized low-activity waste forms, such as bulk-vitrified waste. CH2M HILL designed and constructed a disposal facility with a redundant system of containment barriers and a sophisticated leak-detection system. Built on a 168-acre area, the facility's construction met all regulatory requirements. The facility's containment system actually exceeds the state's environmental requirements for a hazardous waste landfill. Effective management of the IDF construction project required working through highly political and legal

  12. Development of a computer code for shielding calculation in X-ray facilities

    International Nuclear Information System (INIS)

    Borges, Diogo da S.; Lava, Deise D.; Affonso, Renato R.W.; Moreira, Maria de L.; Guimaraes, Antonio C.F.

    2014-01-01

The construction of an effective barrier against the ionizing radiation present in X-ray rooms requires consideration of many variables. The methodology used for specifying the thickness of the primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the receptor. These data made it possible to develop a computer program that identifies and uses these variables in functions obtained through regressions of the graphical data in NCRP Report No. 147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls as well as of the darkroom wall and adjacent areas. The program was validated by comparing its results with a base case provided by that report. The computed thicknesses cover various materials, such as steel, wood and concrete. After validation, the program was applied to a real radiographic room, whose visual model was built with the help of software for modeling interiors and exteriors. The resulting barrier-calculation program is a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3.01, published in September 2011
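The workflow described above can be sketched in a few lines. This is only an illustration, not the authors' code: the required transmission factor follows the standard NCRP-147 relation B = P·d²/(W·U·T), and the barrier thickness inverts the Archer transmission model; the fit parameters below are hypothetical placeholders, and real values must be taken from the NCRP Report No. 147 tables for the material and tube voltage in question.

```python
import math

def required_transmission(P, d, W, U, T):
    """Weekly transmission factor B needed so the shielded dose behind
    the barrier stays at or below the design limit P.
    P: design limit (mGy/week), d: source-to-barrier distance (m),
    W: workload (mGy*m^2/week at 1 m), U: use factor, T: occupancy factor."""
    return P * d**2 / (W * U * T)

def barrier_thickness(B, alpha, beta, gamma):
    """Invert the Archer transmission model
    B(x) = [(1 + beta/alpha)*exp(alpha*gamma*x) - beta/alpha]^(-1/gamma)
    to obtain the barrier thickness x giving transmission B."""
    r = beta / alpha
    return math.log((B**-gamma + r) / (1 + r)) / (alpha * gamma)

# Hypothetical fit parameters (placeholders, NOT NCRP-147 values):
alpha, beta, gamma = 2.346, 15.9, 0.547
B = required_transmission(P=0.02, d=2.0, W=100.0, U=0.25, T=1.0)
x_mm = barrier_thickness(B, alpha, beta, gamma)
```

The same two-step structure (required transmission, then thickness from a regression of the report's curves) is what the abstract's program automates for each wall of the room.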

  13. Computational investigation of reshock strength in hydrodynamic instability growth at the National Ignition Facility

    Science.gov (United States)

    Bender, Jason; Raman, Kumar; Huntington, Channing; Nagel, Sabrina; Morgan, Brandon; Prisbrey, Shon; MacLaren, Stephan

    2017-10-01

Experiments at the National Ignition Facility (NIF) are studying Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities in multiply-shocked plasmas. Targets feature two different-density fluids with a multimode initial perturbation at the interface, which is struck by two X-ray-driven shock waves. Here we discuss computational hydrodynamics simulations investigating the effect of second-shock (``reshock'') strength on instability growth, and how these simulations are informing target design for the ongoing experimental campaign. A Reynolds-Averaged Navier-Stokes (RANS) model was used to predict the motion of the spike and bubble fronts and the mixing-layer width. In addition to reshock strength, the reshock ablator thickness and the total length of the target were varied; all three parameters were found to be important for target design, particularly for ameliorating undesirable reflected shocks. The RANS data are compared to theoretical models that predict multimode instability growth proportional to the shock-induced change in interface velocity, and to currently-available data from the NIF experiments. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. LLNL-ABS-734611.
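The theoretical scaling mentioned above (growth proportional to the shock-induced change in interface velocity) is captured in its simplest form by Richtmyer's impulsive model for a single-mode perturbation. A minimal sketch, with illustrative numbers that are not NIF parameters:

```python
import math

def atwood(rho_heavy, rho_light):
    """Atwood number of a two-fluid interface."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def impulsive_growth_rate(k, a0, A_post, delta_u):
    """Richtmyer's impulsive model for the initial linear growth rate of a
    single-mode Richtmyer-Meshkov perturbation: da/dt = k * a0 * A+ * du.
    k: perturbation wavenumber (1/m), a0: post-shock amplitude (m),
    A_post: post-shock Atwood number, delta_u: interface velocity jump (m/s)."""
    return k * a0 * A_post * delta_u

# Illustrative (hypothetical) values: 100-micron wavelength, 1-micron
# amplitude, density ratio 4:1, 10 km/s velocity jump.
rate = impulsive_growth_rate(k=2 * math.pi / 1e-4, a0=1e-6,
                             A_post=atwood(4.0, 1.0), delta_u=1e4)
```

A stronger reshock increases delta_u and hence the post-reshock growth rate, which is the qualitative trend the RANS study quantifies for multimode mixing layers.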

  14. EXPERIMENTAL AND COMPUTATIONAL ACTIVITIES AT THE OREGON STATE UNIVERSITY NEES TSUNAMI RESEARCH FACILITY

    Directory of Open Access Journals (Sweden)

    S.C. Yim

    2009-01-01

Full Text Available A diverse series of research projects has taken place or is underway at the NEES Tsunami Research Facility at Oregon State University. Projects range from the simulation of the processes and effects of tsunamis generated by sub-aerial and submarine landslides (NEESR, Georgia Tech), model comparisons of tsunami wave effects on bottom profiles and scouring (NEESR, Princeton University), model comparisons of wave-induced motions on rigid and free bodies (Shared-Use, Cornell), numerical model simulations and testing of breaking waves and inundation over topography (NEESR, TAMU), structural testing and development of standards for tsunami engineering and design (NEESR, University of Hawaii), and wave loads on coastal bridge structures (non-NEES), to upgrading the two-dimensional wave generator of the Large Wave Flume. A NEESR payload project (Colorado State University) was undertaken that seeks to improve the understanding of the stresses from wave loading and run-up on residential structures. Advanced computational tools for coupling fluid-structure interaction, including turbulence, contact and impact, are being developed to assist with the design of experiments and to complement parametric studies. These projects will contribute towards understanding the physical processes that occur during earthquake-generated tsunamis, including structural stress, debris flow and scour, inundation and overland flow, and landslide-generated tsunamis. Analytical and numerical model development and comparisons with the experimental results give engineers additional predictive tools to assist in the development of robust structures as well as the identification of hazard zones and formulation of hazard plans.

  15. Development of a computational code for calculations of shielding in dental facilities

    International Nuclear Information System (INIS)

    Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Guimaraes, Antonio C.F.; Moreira, Maria de L.

    2014-01-01

This paper addresses shielding calculations that minimize the exposure of patients and/or personnel to ionizing radiation. The work is based on NCRP Report No. 145 (Radiation Protection in Dentistry), which establishes calculations and standards to be adopted to ensure the safety of those who may be exposed to ionizing radiation in dental facilities, according to the dose limits established by the CNEN-NN-3.01 standard, published in September 2011. The methodology comprises the use of a computer language to process the data provided by that report, together with a commercial application used for creating residential and decoration projects. The FORTRAN language was adopted, and the program was applied to a real case. The result is a program capable of returning the required thickness of materials such as steel, lead, wood, glass, plaster, acrylic and leaded glass, which can be used for effective shielding against single-pulse or continuous beams. Several variables are used to calculate the thickness of the shield, such as: the number of films used per week, film load, use factor, occupancy factor, distance between the wall and the source, transmission factor, workload, area definition, beam intensity, and intraoral versus panoramic examination type. Before applying the methodology, the results were validated against examples provided by NCRP-145; the recalculated examples agree with the report
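The variables listed above combine into a required weekly transmission factor for each wall. As a minimal sketch (in Python rather than the authors' FORTRAN, with hypothetical numbers throughout), the workload can be built up from the weekly film count, and a crude thickness estimate obtained from a half-value-layer approximation rather than the report's full transmission curves:

```python
import math

def transmission_needed(P, d, W, U, T):
    """Weekly transmission factor B so the dose behind the wall meets the
    design limit P: B = P * d^2 / (W * U * T).
    P: weekly design limit, d: source-to-wall distance (m),
    W: weekly workload, U: use factor, T: occupancy factor."""
    return P * d**2 / (W * U * T)

def thickness_from_hvl(B, hvl_mm):
    """Barrier thickness via half-value layers: each HVL halves the beam,
    so x = HVL * log2(1/B). A rough first estimate only; the NCRP
    transmission curves additionally account for beam hardening."""
    return hvl_mm * math.log2(1.0 / B)

# Hypothetical intraoral-room numbers (illustrative units, not NCRP data):
films_per_week, load_per_film = 100, 0.25
W = films_per_week * load_per_film            # weekly workload
B = transmission_needed(P=0.02, d=2.0, W=W, U=1.0, T=0.2)
x_mm = thickness_from_hvl(B, hvl_mm=0.3)      # hypothetical lead HVL
```

The actual program replaces the half-value-layer shortcut with the material-specific transmission data of NCRP-145, which is what allows it to quote thicknesses for wood, plaster, glass and the other materials listed.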

  16. Optimizing Computation of Repairs from Active Integrity Constraints

    DEFF Research Database (Denmark)

    Cruz-Filipe, Luís

    2014-01-01

Active integrity constraints (AICs) are a form of integrity constraints for databases that not only identify inconsistencies, but also suggest how these can be overcome. The semantics for AICs defines different types of repairs, but deciding whether an inconsistent database can be repaired...... and finding possible repairs is an NP- or Σ2p-complete problem, depending on the type of repairs one has in mind. In this paper, we introduce two different relations on AICs: an equivalence relation of independence, allowing the search to be parallelized among the equivalence classes, and a precedence relation...
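The independence relation described above can be approximated with a simple syntactic criterion: constraints that share no atoms cannot interact, so grouping constraints by shared atoms (via union-find) yields classes that can be repaired in parallel. A minimal sketch, in which each AIC is modelled just as the set of atoms it mentions, a deliberate simplification of the rule syntax in the paper:

```python
from collections import defaultdict

def independence_classes(aics):
    """Partition AICs (each given as a frozenset of the atoms it mentions)
    into classes such that constraints in different classes share no atom.
    Each class can then be searched for repairs independently."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry

    atom_owner = {}  # first constraint seen mentioning each atom
    for i, atoms in enumerate(aics):
        parent[i] = i
        for a in atoms:
            if a in atom_owner:
                union(i, atom_owner[a])
            else:
                atom_owner[a] = i

    classes = defaultdict(list)
    for i in range(len(aics)):
        classes[find(i)].append(i)
    return list(classes.values())

# Constraints 0 and 1 share atom "q", so they form one class; 2 is alone.
aics = [frozenset({"p", "q"}), frozenset({"q", "r"}), frozenset({"s"})]
classes = independence_classes(aics)
```

Because repairing is NP- or Σ2p-complete, even this coarse partitioning can pay off: the exponential search runs over each (smaller) class rather than the whole constraint set.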

  17. Geochemical Data Package for the 2005 Hanford Integrated Disposal Facility Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

Krupka, Kenneth M.; Serne, R. Jeffrey; Kaplan, D. I.

    2004-09-30

CH2M HILL Hanford Group, Inc. (CH2M HILL) is designing and assessing the performance of an integrated disposal facility (IDF) to receive low-level waste (LLW), mixed low-level waste (MLLW), immobilized low-activity waste (ILAW), and failed or decommissioned melters. The CH2M HILL project to assess the performance of this disposal facility is the Hanford IDF Performance Assessment (PA) activity. The goal of the Hanford IDF PA activity is to provide a reasonable expectation that the disposal of the waste is protective of the general public, groundwater resources, air resources, surface-water resources, and inadvertent intruders. Achieving this goal will require prediction of contaminant migration from the facilities. This migration is expected to occur primarily via the movement of water through the facilities, and the consequent transport of dissolved contaminants in the vadose zone to groundwater, where contaminants may be re-introduced to receptors via drinking water wells or mixing in the Columbia River. Pacific Northwest National Laboratory (PNNL) assists CH2M HILL in their performance assessment activities. One of the PNNL tasks is to provide estimates of the geochemical properties of the materials comprising the IDF, the disturbed region around the facility, and the physically undisturbed sediments below the facility (including the vadose zone sediments and the aquifer sediments in the upper unconfined aquifer). The geochemical properties are expressed as parameters that quantify the adsorption of contaminants and the solubility constraints that might apply to those contaminants. The common parameters used to quantify adsorption and solubility are the distribution coefficient (Kd) and the thermodynamic solubility product (Ksp), respectively. In this data package, we approximate the solubility of contaminants using a more simplified construct, called the solution concentration limit, a constant value. The Kd values and
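The role the Kd parameter plays in transport predictions can be made concrete with the standard retardation-factor relation from contaminant hydrology. A minimal sketch with illustrative numbers that are not the Hanford data-package values:

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_cm3, porosity):
    """Retardation factor R = 1 + (rho_b / theta) * Kd: the factor by which
    a sorbing contaminant's velocity lags the pore water in the sediment.
    kd_ml_per_g: distribution coefficient (mL/g),
    bulk_density_g_cm3: dry bulk density (g/cm^3),
    porosity: water-filled porosity (dimensionless)."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_per_g

# Illustrative values only: a non-sorbing tracer (Kd = 0) moves with the
# water (R = 1), while even a modest Kd of 1 mL/g in a sediment with
# bulk density 1.6 g/cm^3 and porosity 0.4 slows transport fivefold.
r_tracer = retardation_factor(0.0, 1.6, 0.4)
r_sorber = retardation_factor(1.0, 1.6, 0.4)
```

This is why the data package's Kd estimates, together with the solution concentration limits that cap dissolved concentrations, drive the predicted arrival of contaminants at the groundwater.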

  18. Thermal studies of the canister staging pit in a hypothetical Yucca Mountain canister handling facility using computational fluid dynamics

    International Nuclear Information System (INIS)

Soltani, Mehdi; Barringer, Chris; de Bues, Timothy T.

    2007-01-01

The proposed Yucca Mountain nuclear waste storage site will contain facilities for preparing the radioactive waste canisters for burial. A previous facility design considered was the Canister Handling Facility Staging Pit. This design is no longer used, but its thermal evaluation is typical of such facilities. Structural concrete can be adversely affected by the heat from radioactive decay. Consequently, facilities must have heating, ventilation and air conditioning (HVAC) systems for cooling. Concrete temperatures are a function of conductive, convective and radiative heat transfer. The prediction of concrete temperatures under such complex conditions can only be adequately handled by computational fluid dynamics (CFD). The objective of the CFD analysis was to predict concrete temperatures under normal and off-normal conditions. Normal operation assumed steady-state conditions with constant HVAC flow and temperatures. However, off-normal operation was an unsteady scenario that assumed a total HVAC failure for a period of 30 days. This scenario was particularly complex in that the concrete temperatures would gradually rise and air flows would be buoyancy-driven. The CFD analysis concluded that concrete wall temperatures would be at or below the maximum temperature limits in both the normal and off-normal scenarios. While this analysis was specific to a facility design that is no longer used, it demonstrates that such facilities can reasonably be expected to have satisfactory thermal performance. (author)
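A zeroth-order sanity check on the off-normal scenario is a lumped, adiabatic bound on the temperature rise: assume all decay heat is absorbed by the concrete with no convective or conductive losses. The heat load and concrete mass below are hypothetical; the abstract's CFD analysis resolves the buoyancy-driven flows and radiative exchange that this estimate deliberately ignores.

```python
def adiabatic_temperature_rise(q_watts, mass_kg, cp_j_per_kg_k, seconds):
    """Upper-bound temperature rise if a constant heat load q is absorbed
    by a lumped thermal mass with no losses: dT = q * t / (m * cp)."""
    return q_watts * seconds / (mass_kg * cp_j_per_kg_k)

# Hypothetical numbers: 50 kW of decay heat into 5e6 kg of concrete
# (cp ~ 880 J/(kg*K)) over the 30-day HVAC outage considered in the study.
dT = adiabatic_temperature_rise(5.0e4, 5.0e6, 880.0, 30 * 24 * 3600)
```

If even this no-loss bound stays under the concrete temperature limit, the CFD result is plausible; if it does not, the margin depends on the convective and radiative pathways that only the CFD model can quantify.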

  19. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

relevant European Research Infrastructures in the fields of Earth Science (EPOS and ICOS), Bioinformatics (BBMRI and ELIXIR) and Space Physics (EISCAT-3D). The first outcome of this activity has been the definition of a generic use case that captures the typical user scenario with respect to the integrated use of the EGI and EUDAT infrastructures. This generic use case allows a user to instantiate a set of Virtual Machine images on the EGI Federated Cloud to perform computational jobs that analyse data previously stored on EUDAT long-term storage systems. The results of such analysis can be staged back to EUDAT storage and, if needed, assigned permanent identifiers (PIDs) for future use. The implementation of this generic use case requires the following integration activities between EGI and EUDAT: (1) harmonisation of the user authentication and authorisation models, and (2) implementation of interface connectors between the relevant EGI and EUDAT services, particularly the EGI cloud compute facilities and the EUDAT long-term storage and PID systems. In the presentation, the collected user requirements and the implementation status of the generic use case will be shown. Furthermore, how the generic use case is currently applied to satisfy EPOS and ICOS needs will be described.
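The generic use case above reduces to a simple pipeline: stage data in from EUDAT storage, compute on an EGI cloud VM, stage results out, and mint a PID. The sketch below makes that control flow explicit; every helper is a purely hypothetical stub standing in for the real EGI/EUDAT service clients, and none of these names corresponds to an actual API.

```python
# Hypothetical stubs for the EGI/EUDAT service interactions (not real APIs).

def stage_in(pid):
    """Fetch a dataset from EUDAT long-term storage by its PID (stub)."""
    return f"data({pid})"

def launch_vm(image):
    """Instantiate a Virtual Machine image on the EGI Federated Cloud (stub)."""
    return {"image": image}

def run_analysis(vm, datasets):
    """Run the computational job on the VM (stub)."""
    return [f"analysed:{d}" for d in datasets]

def stage_out(results, name):
    """Stage results back to EUDAT storage; returns their location (stub)."""
    return f"store/{name}"

def assign_pid(location):
    """Allocate a permanent identifier (PID) for the stored results (stub)."""
    return f"pid:{location}"

def run_generic_use_case(vm_image, input_pids, results_name):
    """The generic EGI-EUDAT use case: stage in, compute, stage out, mint PID."""
    datasets = [stage_in(p) for p in input_pids]
    vm = launch_vm(vm_image)
    results = run_analysis(vm, datasets)
    location = stage_out(results, results_name)
    return assign_pid(location)
```

The two integration activities named in the abstract map directly onto this sketch: harmonised authentication would sit in front of every call, and the interface connectors are exactly the seams between `stage_in`/`stage_out` (EUDAT side) and `launch_vm`/`run_analysis` (EGI side).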

  20. All for One: Integrating Budgetary Methods by Computer.

    Science.gov (United States)

    Herman, Jerry J.

    1994-01-01

With the advent of high-speed and sophisticated computer programs, all budgetary systems can be combined into one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)