Energy Technology Data Exchange (ETDEWEB)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Network coding for computing: Linear codes
Appuswamy, Rathinakumar; Karamchandani, Nikhil; Zeger, Kenneth
2011-01-01
In network coding it is known that linear codes are sufficient to achieve the coding capacity in multicast networks and that they are not sufficient in general to achieve the coding capacity in non-multicast networks. In network computing, Rai, Dey, and Shenvi have recently shown that linear codes are not sufficient in general for solvability of multi-receiver networks with scalar linear target functions. We study single-receiver networks where the receiver node demands a target function of the source messages. We show that linear codes may provide a computing capacity advantage over routing only when the receiver demands a 'linearly-reducible' target function. Many known target functions, including the arithmetic sum, minimum, and maximum, are not linearly-reducible. Thus, the use of non-linear codes is essential in order to obtain a computing capacity advantage over routing if the receiver demands a target function that is not linearly-reducible. We also show that if a target function is linearly-reducible,...
Computer access security code system
Collins, Earl R., Jr. (Inventor)
1990-01-01
A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
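The challenge-response scheme described above can be sketched as follows. The matrix size, subset length, and character pool here are illustrative assumptions, not the patented design, and tracking of previously used subsets is omitted for brevity:

```python
import random
import string

def make_matrix(rows, cols, subset_len=3, seed=0):
    """Build a matrix of random alpha-numeric character subsets (illustrative)."""
    rng = random.Random(seed)
    pool = string.ascii_uppercase + string.digits
    return [["".join(rng.sample(pool, subset_len)) for _ in range(cols)]
            for _ in range(rows)]

def challenge(matrix, rng):
    """Pick two cells in different rows AND different columns (usage tracking omitted)."""
    r1, r2 = rng.sample(range(len(matrix)), 2)
    c1, c2 = rng.sample(range(len(matrix[0])), 2)
    return (r1, c1), (r2, c2)

def expected_response(matrix, corner_a, corner_b):
    """The subsets at the two opposite corners completing the rectangle."""
    (r1, c1), (r2, c2) = corner_a, corner_b
    return {matrix[r1][c2], matrix[r2][c1]}

m = make_matrix(4, 4)
a, b = challenge(m, random.Random(1))
# Access is granted only if the user's reply completes the rectangle.
print(a, b, expected_response(m, a, b))
```

A real deployment would also mark both the challenge and response subsets as used, so that an eavesdropper who records one exchange learns nothing reusable.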
Festival "Ariel" begins this weekend
2008-01-01
On 26-27 October, the 5th festival of Jewish culture "Ariel" takes place in Tallinn, headlined by clarinetist David Krakauer with his ensemble Klezmer Madness! from the USA. In addition, soloist Sofia Rubina and the ensemble Vox Clamantis will give a concert.
Characterizing Video Coding Computing in Conference Systems
Tuquerres, G.
2000-01-01
In this paper, a number of coding operations are provided for computing continuous data streams, in particular video streams. The coding capability of the operations is expressed by a pyramidal structure in which the coding processes and requirements of a distributed information system are represented...
Simplified computer codes for cask impact analysis
International Nuclear Information System (INIS)
In regard to the evaluation of cask acceleration and deformation, the simplified computer codes make analyses economical and decrease input-preparation and calculation time. The results obtained by the simplified computer codes are sufficiently accurate for practical use. (J.P.N.)
Computer Code for Nanostructure Simulation
Filikhin, Igor; Vlahovic, Branislav
2009-01-01
Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relationship among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.
International Nuclear Information System (INIS)
SEURBNUK-2 has been designed to model the hydrodynamic development in time of a hypothetical core disruptive accident in a fast breeder reactor. SEURBNUK-2 is a two-dimensional, axisymmetric, Eulerian, finite difference containment code. The numerical procedure adopted in SEURBNUK to solve the hydrodynamic equations is based on the semi-implicit ICE method. SEURBNUK has a full thin shell treatment for tanks of arbitrary shape and includes the effects of the compressibility of the fluid. Fluid flow through porous media and porous structures can also be accommodated. An important feature of SEURBNUK is that the thin shell equations are solved quite separately from those of the fluid, and the time step for the fluid flow calculation can be an integer multiple of that for calculating the shell motion. The interaction of the shell with the fluid is then considered as a modification to the coefficients in the implicit pressure equations, the modifications naturally depending on the behaviour of the thin shell section within the fluid cell. The code is limited to dealing with a single fluid, the coolant, whereas the bubble and the cover gas are treated as cavities of uniform pressure calculated via appropriate pressure-volume-energy relationships. This manual describes the input data specifications needed for the execution of SEURBNUK-2 calculations, and nine sample problems of varying degrees of complexity highlight the code capabilities. After explaining the output facilities, information is included to help those unfamiliar with SEURBNUK-2 avoid the common pitfalls experienced by novices.
ANACROM - A computer code for chromatogram analysis
International Nuclear Information System (INIS)
The computer code was developed for the automatic location of peaks and the evaluation of chromatogram parameters such as the center, height, area, full width at half maximum (FWHM), and the FWHM/center ratio of each peak. (Author)
Topological Code Architectures for Quantum Computation
Cesare, Christopher Anthony
This dissertation is concerned with quantum computation using many-body quantum systems encoded in topological codes. The interest in these topological systems has increased in recent years as devices in the lab begin to reach the fidelities required for performing arbitrarily long quantum algorithms. The most well-studied system, Kitaev's toric code, provides both a physical substrate for performing universal fault-tolerant quantum computations and a useful pedagogical tool for explaining the way other topological codes work. In this dissertation, I first review the necessary formalism for quantum information and quantum stabilizer codes, and then I introduce two families of topological codes: Kitaev's toric code and Bombin's color codes. I then present three chapters of original work. First, I explore the distinctness of encoding schemes in the color codes. Second, I introduce a model of quantum computation based on the toric code that uses adiabatic interpolations between static Hamiltonians with gaps constant in the system size. Lastly, I describe novel state distillation protocols that are naturally suited for topological architectures and show that they provide resource savings in terms of the number of required ancilla states when compared to more traditional approaches to quantum gate approximation.
Computer codes in particle transport physics
International Nuclear Information System (INIS)
Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and an appropriate solving tool - a computer code and data library. A brief overview of computer codes based on Monte Carlo techniques for simulation of transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. First, attention is paid to the approach to the solution of the problem - a process in nature - by selection of the appropriate 3D model and the corresponding tools - computer codes and cross-section data libraries. The process of data collection and evaluation from experimental measurements, and the theoretical approach to establishing reliable libraries of evaluated cross-section data, is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH), together with codes for data processing (e.g., NJOY, PREPRO and GRUCON), are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous and various computer codes developed for particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic applications of these codes, the physical models implemented with their limitations, and the energy ranges of particles and types of interactions covered, is given. General information about the codes also covers the programming language, operating system, calculation speed and code availability. An example of increasing the computation speed of the MCNPX code by running on an MPI cluster, compared to the sequential option of the code, is also given.
Gender codes why women are leaving computing
Misa, Thomas J
2010-01-01
The computing profession is facing a serious gender crisis. Women are abandoning the computing field at an alarming rate. Fewer are entering the profession than at any time in the past twenty-five years, while too many are leaving the field in mid-career. With a maximum of insight and a minimum of jargon, Gender Codes explains the complex social and cultural processes at work in gender and computing today. Edited by Thomas Misa and featuring a Foreword by Linda Shafer, Chair of the IEEE Computer Society Press, this insightful collection of essays explores the persisting gender imbalance in computing and presents a clear course of action for turning things around.
Development of probabilistic multimedia multipathway computer codes.
Energy Technology Data Exchange (ETDEWEB)
Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
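The probabilistic enhancement described above amounts to sampling the uncertain input parameters from their distributions and propagating each sample through the deterministic dose calculation. A minimal sketch of that idea follows; the toy dose model and all distribution choices below are hypothetical stand-ins, not RESRAD's actual pathways or default parameter distributions:

```python
import random
import statistics

def dose_model(soil_conc, ingestion_rate, dose_factor):
    """Toy dose model (hypothetical): dose proportional to conc x intake x factor."""
    return soil_conc * ingestion_rate * dose_factor

def sample_dose(n=10_000, seed=42):
    """Propagate sampled input parameters through the deterministic model."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n):
        conc = rng.lognormvariate(0.0, 0.5)       # uncertain soil concentration
        intake = rng.triangular(0.5, 1.5, 1.0)    # uncertain ingestion rate
        factor = rng.uniform(0.8, 1.2)            # uncertain dose conversion factor
        doses.append(dose_model(conc, intake, factor))
    return doses

doses = sample_dose()
doses.sort()
p95 = doses[int(0.95 * len(doses))]   # upper-percentile dose often drives the decision
print(f"mean={statistics.mean(doses):.3f}, p95={p95:.3f}")
```

The output is a distribution of doses rather than a single number, which is exactly what makes the uncertainty in the assessment visible.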
COLD-SAT Dynamic Model Computer Code
Bollenbacher, G.; Adams, N. S.
1995-01-01
COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
Computer Security: is your code sane?
Stefan Lueders, Computer Security Team
2015-01-01
How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities*? In other words: are our codes sane? Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pitfalls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. “Static Code Analysers” are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies including: employing undeclared variables; expressions resu...
Cluster Computing: A Mobile Code Approach
Directory of Open Access Journals (Sweden)
R. B. Patel
2006-01-01
Cluster computing harnesses the combined computing power of multiple processors in a parallel configuration. Cluster computing environments built from commodity hardware have provided a cost-effective solution for many scientific and high-performance applications. In this paper we present the design and implementation of a cluster-based framework using mobile code. The cluster implementation involves the design of a server named MCLUSTER, which manages the configuring and resetting of the cluster. It allows a user to provide the necessary information regarding the application to be executed via a graphical user interface (GUI). The framework handles the generation of application mobile code and its distribution to appropriate client nodes, the efficient handling of results generated and communicated by a number of client nodes, and the recording of the execution time of the application. A client node receives and executes the mobile code that defines the distributed job submitted by the MCLUSTER server and sends the results back. We have also analyzed the performance of the developed system, emphasizing the tradeoff between communication and computation overhead.
Present state of the SOURCES computer code
Energy Technology Data Exchange (ETDEWEB)
Shores, E. F. (Erik F.)
2002-01-01
In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.
ABINIT: a computer code for matter
International Nuclear Information System (INIS)
The PAW (Projector Augmented Wave) method has been implemented in the ABINIT code, which computes electronic structures. This method relies on the simultaneous use of a set of auxiliary functions (in plane waves) and a sphere around each atom. It allows the computation of systems including many atoms and gives the expression of energy, forces, stress... in terms of the auxiliary functions only. We have generated atomic data for iron at very high pressure (over 200 GPa). We find a bcc-hcp transition around 10 GPa, and the magnetic order disappears around 50 GPa. The method has been validated on a series of metals. The development of the PAW method has required a great effort in the massive parallelization of the ABINIT code. (A.C.)
Probabilistic structural analysis computer code (NESSUS)
Shiao, Michael C.
1988-01-01
Probabilistic structural analysis has been developed to analyze the effects of fluctuating loads, variable material properties, and uncertain analytical models, especially for high-performance structures such as SSME turbopump blades. The computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed to serve as a primary computational tool for characterizing, by statistical description, the probabilistic structural response to stochastic environments. The code consists of three major modules: NESSUS/PRE, NESSUS/FEM, and NESSUS/FPI. NESSUS/PRE is a preprocessor which decomposes the spatially correlated random variables into a set of uncorrelated random variables using a modal analysis method. NESSUS/FEM is a finite element module which provides structural sensitivities to all the random variables considered. NESSUS/FPI is a Fast Probability Integration module by which a cumulative distribution function or a probability density function is calculated.
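The role of a module like NESSUS/FPI, turning random inputs into a cumulative distribution function of the structural response, can be illustrated with crude Monte Carlo sampling in place of fast probability integration; the toy stress model and input distributions below are assumptions for illustration only:

```python
import random

def stress(load, area):
    """Toy structural response (hypothetical): axial stress = load / area."""
    return load / area

def response_cdf(threshold, n=50_000, seed=7):
    """Estimate P(response <= threshold) by crude Monte Carlo sampling."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        load = rng.gauss(100.0, 10.0)   # fluctuating applied load
        area = rng.gauss(2.0, 0.05)     # variable section property
        if stress(load, area) <= threshold:
            hits += 1
    return hits / n

p = response_cdf(60.0)
print(f"P(stress <= 60) ~ {p:.3f}")    # one point on the response CDF
```

Fast probability integration methods reach the same CDF point with far fewer function evaluations, which matters when each response evaluation is a full finite element solve rather than a one-line formula.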
Poisson/Superfish codes for personal computers
International Nuclear Information System (INIS)
The Poisson/Superfish codes calculate static E or B fields in two dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed those of the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author)
Quality assurance of the computer code INDAR
International Nuclear Information System (INIS)
Detailed aquatic dispersion and radiation exposure models are required in order to assess the radiological impact of routine aquatic discharges from nuclear power stations in the United Kingdom. Such models have been developed and incorporated in the computer program INDAR. This report describes the quality assurance procedures adopted in producing and testing the first release of the code, which was compiled in November 1988 and is currently stored in the production load module PROD.INDAR.V10. (author)
Validation Report for ISAAC Computer Code
International Nuclear Information System (INIS)
A fully integrated severe accident code, ISAAC, was developed to simulate the accident scenarios that could lead to severe core damage and eventually to containment failure in CANDU reactors. Three approaches to validation were adopted in this report. The first is to show the ISAAC results for typical severe core damage sequences. In general, the ISAAC computer code shows reasonable results in terms of the thermal-hydraulic behavior as well as fission product transport from the PHTS to the containment. As the second step, the ISAAC results are compared against those from CATHENA and MAAP4-CANDU. In spite of the modeling differences, the overall trends are similar. In particular, the major severe accident phenomena and the accident progression are similar to MAAP4-CANDU, though ISAAC predicts a faster accident progression. Finally, ISAAC results are compared with experimental data, and the ISAAC models show good agreement with the measured data. Still more effort is needed to validate the code through code-to-code comparison and comparison against the available experimental data.
Computing Challenges in Coded Mask Imaging
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope (i.e., when there are wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution is required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern, and the correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
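The correlation-based image recovery covered in the slides can be demonstrated in one dimension; the random binary mask below is an illustrative stand-in for the URA-style patterns actually flown:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 101
mask = rng.integers(0, 2, n)           # open (1) / closed (0) mask elements
sky = np.zeros(n)
sky[30] = 100.0                        # a single point source

# Detector shadowgram: each sky direction projects a shifted copy of the mask.
shadow = np.zeros(n)
for k in range(n):
    shadow += sky[k] * np.roll(mask, k)

# Decode by correlating with the balanced pattern G = 2*mask - 1, which
# cancels the flat term contributed by the mask's open fraction.
g = 2 * mask - 1
image = np.array([np.dot(shadow, np.roll(g, k)) for k in range(n)])

print(int(np.argmax(image)))   # → 30, the source position is recovered
```

The sidelobes around the peak come from the imperfect autocorrelation of a random mask; purpose-built patterns such as uniformly redundant arrays are chosen precisely to flatten them.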
New developments in the Saphire computer codes
Energy Technology Data Exchange (ETDEWEB)
Russell, K.D.; Wood, S.T.; Kvarfordt, K.J. [Idaho Engineering Lab., Idaho Falls, ID (United States)] [and others]
1996-03-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides a current up-to-date status of these important PRA analysis tools.
SALE: Safeguards Analytical Laboratory Evaluation computer code
Energy Technology Data Exchange (ETDEWEB)
Carroll, D.J.; Bush, W.J.; Dolan, C.A.
1976-09-01
The Safeguards Analytical Laboratory Evaluation (SALE) program implements an industry-wide quality control and evaluation system aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically-evaluated, and each participant is informed of the accuracy and precision of his results in a timely manner. The SALE computer code which produces the report is designed to facilitate rapid transmission of this information in order that meaningful quality control will be provided. Various statistical techniques comprise the output of the SALE computer code. Assuming an unbalanced nested design, an analysis of variance is performed in subroutine NEST resulting in a test of significance for time and analyst effects. A trend test is performed in subroutine TREND. Microfilm plots are obtained from subroutine CUMPLT. Within-laboratory standard deviations are calculated in the main program or subroutine VAREST, and between-laboratory standard deviations are calculated in SBLV. Other statistical tests are also performed. Up to 1,500 pieces of data for each nuclear material sampled by 75 (or fewer) laboratories may be analyzed with this code. The input deck necessary to run the program is shown, and input parameters are discussed in detail. Printed output and microfilm plot output are described. Output from a typical SALE run is included as a sample problem.
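The within- and between-laboratory standard deviations computed by routines such as VAREST and SBLV can be illustrated with a one-way random-effects decomposition on made-up replicate data (the concentration values and lab count below are hypothetical):

```python
import statistics

# Hypothetical uranium-concentration results (wt%): 3 labs x 4 replicates each.
results = {
    "lab_A": [10.02, 10.05, 9.98, 10.01],
    "lab_B": [10.11, 10.09, 10.14, 10.12],
    "lab_C": [9.95, 9.97, 9.93, 9.96],
}

n = len(next(iter(results.values())))          # replicates per lab
lab_means = [statistics.mean(v) for v in results.values()]
grand_mean = statistics.mean(lab_means)        # basis for bias/trend checks

# Within-lab variance: pooled variance of replicates about their own lab mean.
within_var = statistics.mean(statistics.variance(v) for v in results.values())

# Between-lab variance from the one-way ANOVA identity:
# Var(lab means) estimates sigma_between^2 + sigma_within^2 / n.
between_var = max(statistics.variance(lab_means) - within_var / n, 0.0)

print(f"s_within={within_var**0.5:.4f}, s_between={between_var**0.5:.4f}")
```

A between-lab component much larger than the within-lab one, as in this fabricated data, is the signature of systematic inter-laboratory biases rather than random measurement scatter.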
Neutron spectrum unfolding using computer code SAIPS
Karim, S
1999-01-01
The main objective of this project was to study the neutron energy spectrum at rabbit station-1 in the Pakistan Research Reactor (PARR-I). To do so, the multiple foil activation method was used to obtain the saturated activities. The computer code SAIPS was used to unfold the neutron spectra from the measured reaction rates. Of the three built-in codes in SAIPS, only SANDII and WINDOWS were used. The contribution of the thermal part of the spectra was observed to be higher than that of the fast part. It was found that WINDOWS gave smooth spectra while the SANDII spectra had violent oscillations in the resonance region. The uncertainties in the WINDOWS results are higher than those of SANDII. The results show reasonable agreement with published results.
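The SANDII component adjusts a trial spectrum iteratively and multiplicatively until the calculated reaction rates match the measured ones. A toy two-foil, three-group version of that idea is sketched below; the response matrix and "true" spectrum are invented for illustration, whereas the real code works with many energy groups and evaluated cross-section libraries:

```python
import numpy as np

# Toy problem: 2 foils x 3 energy groups (hypothetical response matrix).
R = np.array([[5.0, 1.0, 0.1],      # thermal-sensitive foil
              [0.1, 1.0, 4.0]])     # fast-sensitive foil
phi_true = np.array([8.0, 2.0, 1.0])
A_meas = R @ phi_true               # "measured" saturated activities

# SAND-II-style multiplicative update, starting from a flat trial spectrum.
phi = np.ones(3)
for _ in range(200):
    A_calc = R @ phi
    W = R * phi / A_calc[:, None]                         # fractional contributions
    log_corr = (W * np.log(A_meas / A_calc)[:, None]).sum(0) / W.sum(0)
    phi *= np.exp(log_corr)

# Final relative activity residual: drops toward zero as the iteration converges.
print(np.max(np.abs(R @ phi - A_meas) / A_meas))
```

Because there are fewer foils than groups, the unfolded spectrum depends on the initial guess, which is why unfolding codes are run with a physically motivated trial spectrum rather than the flat one used here.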
ARIEL: an ESA M4 mission candidate
Puig, L.; Pilbratt, G. L.; Heske, A.; Escudero Sanz, I.; Crouzet, P.-E.
2016-07-01
The Atmospheric Remote sensing Infrared Exoplanet Large survey (ARIEL) mission is an M-class mission candidate within the Cosmic Vision science programme of the European Space Agency (ESA). It was selected in June 2015 as one of three candidates to enter an assessment phase (phase 0/A). This process involves the definition of science and mission requirements as well as a preliminary model payload, and an internal Concurrent Design Facility (CDF) study providing the input to parallel industrial studies (in progress since 2016). After this process, the three candidates will be reviewed, and in mid-2017 one of them will be selected as the M4 mission for launch in 2026. ARIEL is a survey-type mission dedicated to the characterisation of exoplanetary atmospheres. Using the differential technique of transit spectroscopy, ARIEL will obtain transmission and/or emission spectra of the atmospheres of a large and diverse sample of known exoplanets (~500) covering a wide range of masses, densities, equilibrium temperatures, orbital properties and host-star characteristics, from hot Jupiters to warm super-Earths orbiting M5 to F0 stars. This paper describes the critical requirements and reports on the results of the CDF study conducted in June/July 2015, providing a description of the resulting spacecraft design. ARIEL will employ a 0.7 m × 1.1 m off-axis three-mirror telescope, feeding four photometric channels in the VNIR range (0.5-1.95 μm) and an IR spectrometer covering 1.95-7.8 μm.
Computer code for quantitative ALARA evaluations
International Nuclear Information System (INIS)
A FORTRAN computer code has been developed to simplify the determination of whether dose reduction actions meet the as low as is reasonably achievable (ALARA) criterion. The calculations are based on the methodology developed for the Atomic Industrial Forum. The code is used for analyses of eight types of dose reduction actions, characterized as follows: reduce dose rate, reduce job frequency, reduce productive working time, reduce crew size, increase administrative dose limit for the task, and increase the workers' time utilization and dose utilization through (a) improved working conditions, (b) basic skill training, or (c) refresher training for special skills. For each type of action, two analysis modes are available. The first is a generic analysis in which the program computes potential benefits (in dollars) for a range of possible improvements, e.g., for a range of lower dose rates. Generic analyses are most useful in the planning stage and for evaluating the general feasibility of alternative approaches. The second is a specific analysis in which the potential annual benefits of a specific level of improvement and the annual implementation cost are compared. The potential benefits reflect savings in operational and societal costs that can be realized if occupational radiation doses are reduced. Because the potential benefits depend upon many variables which characterize the job, the workplace, and the workers, there is no unique relationship between the potential dollar savings and the dose savings. The computer code permits rapid quantitative analyses of alternatives and is a tool that supplements the health physicist's professional judgment. The program output provides a rational basis for decision-making and a record of the assumptions employed
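The generic analysis mode described above amounts to comparing monetized dose savings against implementation cost. A minimal sketch of that comparison follows; all numbers, including the monetary value per person-rem, are hypothetical, since the valuation varies by programme:

```python
def annual_benefit(dose_rate_old, dose_rate_new, hours_per_job,
                   jobs_per_year, crew_size, usd_per_person_rem=1000.0):
    """Monetized annual collective-dose saving for a dose-rate reduction.

    Dose rates in rem/h; usd_per_person_rem is a hypothetical valuation.
    """
    saved = (dose_rate_old - dose_rate_new) * hours_per_job * jobs_per_year * crew_size
    return saved * usd_per_person_rem   # person-rem/yr -> $/yr

# Hypothetical shielding upgrade: 0.5 -> 0.2 rem/h, 2 h jobs, 12 jobs/yr, crew of 3.
benefit = annual_benefit(0.5, 0.2, 2.0, 12, 3)
cost = 15_000.0                        # hypothetical annual implementation cost
verdict = "ALARA-justified" if benefit > cost else "not justified"
print(f"benefit=${benefit:,.0f}/yr, cost=${cost:,.0f}/yr -> {verdict}")
```

In the generic mode the code would sweep `dose_rate_new` over a range of feasible improvements and report the benefit curve, leaving the cost comparison to the specific analysis.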
Spiking network simulation code for petascale computers
Directory of Open Access Journals (Sweden)
Susanne Kunkel
2014-10-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
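The distributed storage scheme described above can be illustrated with a toy sketch (an illustration of the idea only, not NEST's actual data structure): each compute node keeps only the synapses whose target neurons it hosts, so for any given source neuron a node typically stores very few entries:

```python
from collections import defaultdict

N_NODES = 4

def host_node(neuron_id):
    """Round-robin distribution of neurons over compute nodes."""
    return neuron_id % N_NODES

class ComputeNode:
    def __init__(self, rank):
        self.rank = rank
        self.targets = defaultdict(list)   # source id -> [(target id, weight)]

    def add_synapse(self, source, target, weight):
        # A synapse consumes memory only on the node hosting its target.
        if host_node(target) == self.rank:
            self.targets[source].append((target, weight))

nodes = [ComputeNode(r) for r in range(N_NODES)]
for src, tgt, w in [(0, 1, 0.5), (0, 2, 0.3), (0, 5, 0.1), (7, 2, 0.9)]:
    for node in nodes:
        node.add_synapse(src, tgt, w)

# Source neuron 0 occupies memory on nodes 1 and 2 only; node 3 stores nothing.
```

At brain scale the effect is much more pronounced: with ~10,000 targets per neuron spread over ~100,000 nodes, most nodes hold no synapse at all for a given source, which is the collapse the paper's metaprogrammed containers exploit.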
ICAN Computer Code Adapted for Building Materials
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
A surface code quantum computer in silicon.
Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L
2015-10-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310
Energy Technology Data Exchange (ETDEWEB)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.
2004-09-14
This document contains detailed user instructions for the suite of utility codes developed for Rev. 1 of the Systems Assessment Capability.
Computer codes validation for conditions of core voiding
International Nuclear Information System (INIS)
Void generation during a Loss of Coolant Accident (LOCA) in the core of a CANDU reactor is of specific importance because of its strong coupling with reactor neutronics. The capability of computer codes to predict this dynamic behaviour, i.e. void generation, accurately in the temporal and spatial domains of the reactor core is therefore fundamental to the determination of CANDU safety. The Canadian industry has used the RD-14M test facility for its code validation. The validation exercises for the Canadian computer codes TUF and CATHENA were performed some years ago. Recently, the CNSC has gained access to the USNRC computer code TRACE. This has provided an opportunity to explore the use of this code in CANDU-related applications. As a part of regulatory assessment and resolving identified Generic Issues (GI), and in an effort to build independent thermal-hydraulic computer code assessment capability within the CNSC, preliminary validation exercises were performed using the TRACE computer code for an evaluation of the void generation phenomena. The paper presents a preliminary assessment of the TRACE computer code for an RD-14M channel voiding test. It is also a validation exercise of void generation for the TRACE computer code. The accuracy of the obtained results is discussed and compared with previous validation assessments that were done using the CATHENA and TUF codes. (author)
40 CFR 194.23 - Models and computer codes.
2010-07-01
... executing the computer codes, including hardware and software requirements, input and output formats with explanations of each input and output variable and parameter (e.g., parameter name and units); listings of input and output files from a sample computer run; and reports on code verification,...
The SEDA computer code and its utilization for Angra 1
International Nuclear Information System (INIS)
The implementation of the SEDA 2.0 computer code, developed at the Ezeiza Atomic Center, Argentina, for the Angra 1 reactor is described. The SEDA code estimates the radiological consequences of nuclear accidents that release radioactive materials to the environment. This code is now available for an IBM PC-XT. The computer environment, the files used, the data, the programming structure and the models used are presented. The input data and results for two sample cases are described. (author)
Panel-Method Computer Code For Potential Flow
Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.
1992-01-01
Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates flow field around or through complex three-dimensional bodies such as complete aircraft models or wind tunnels. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.
Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying
DEFF Research Database (Denmark)
Heide, Janus; Zhang, Qi; Pedersen, Morten V.;
This paper investigated the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules. This information can subsequently be retrieved by any receiver. The starting point is RLNC (Random Linear Network Coding) and the goal is to reduce the amount of coding operations both at the coding and decoding node, and at the same time remove the need for dedicated signaling messages. In a traditional RLNC system, coding operations take up significant computational resources and add...
Computer codes for birds of North America
US Fish and Wildlife Service, Department of the Interior — Purpose of paper was to provide a more useful way to provide codes for all North American species, thus making the list useful for virtually all projects concerning...
Low rank approximations for the DEPOSIT computer code
Litsarev, Mikhail; Oseledets, Ivan
2014-01-01
We present an efficient technique based on low-rank separated approximations for the computation of three-dimensional integrals in the computer code DEPOSIT that describes ion-atomic collision processes. Implementation of this technique decreases the total computational time by a factor of 1000. The general concept can be applied to more complicated models.
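The separated-representation idea can be sketched as follows: if the integrand factors as f(x,y,z) = Σ_k u_k(x) v_k(y) w_k(z), a three-dimensional integral collapses into a sum of products of one-dimensional quadratures, turning O(n³) work into O(r·n) for rank r. The rank-2 integrand below is invented for illustration and is unrelated to DEPOSIT's actual ion-atomic collision integrals:

```python
import numpy as np

def quad1d(f, dx):
    """Trapezoidal rule along the last axis of f, on a uniform grid."""
    return dx * (f.sum(axis=-1) - 0.5 * (f[..., 0] + f[..., -1]))

n = 80
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Rank-2 separated integrand: f(x,y,z) = e^(-x) cos(y) + x^2 sin(z)
u = [np.exp(-x), x**2]          # u_k(x)
v = [np.cos(x),  np.ones(n)]    # v_k(y)
w = [np.ones(n), np.sin(x)]     # w_k(z)

# Sum of products of one-dimensional quadratures: O(r*n) work.
separated = sum(quad1d(u[k], dx) * quad1d(v[k], dx) * quad1d(w[k], dx)
                for k in range(2))

# Brute-force check on the full n^3 tensor (feasible only for small n):
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
F = np.exp(-X) * np.cos(Y) + X**2 * np.sin(Z)
full = quad1d(quad1d(quad1d(F, dx), dx), dx)
```

Because the trapezoidal rule is a tensor-product quadrature, the two results agree to rounding error while the separated evaluation never touches the n³ tensor, which is the source of the speed-up reported for DEPOSIT.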
Optimization of KINETICS Chemical Computation Code
Donastorg, Cristina
2012-01-01
NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables that are used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
Continuous Materiality: Through a Hierarchy of Computational Codes
Directory of Open Access Journals (Sweden)
Jichen Zhu
2008-01-01
The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuable analog for approaching computer code in emergent digital systems. After commenting on ways that Cartesian dualism continues to haunt discussions of code, we turn our attention to diagrams and design morphology. Finally, we notice the implications a material understanding of code bears for further research on the relation between human cognition and digital code. Our discussion concludes by noticing several areas that we have projected for ongoing research.
Study of nuclear computer code maintenance and management system
International Nuclear Information System (INIS)
Software maintenance is one of the most important problems since the late 1970s. We wish to develop a nuclear computer code system to maintain and manage KAERI's nuclear software. As a part of this system, we have developed three code management programs for use on CYBER and PC systems. They are used in the systematic management of computer codes at KAERI. The first program is embodied on the CYBER system to rapidly provide information on nuclear codes to the users. The second and the third programs were embodied on the PC system for the code manager and for the management of data in the Korean language, respectively. In the requirement analysis, we defined each code, magnetic tape, manual and abstract information data. In the conceptual design, we designed retrieval, update, and output functions. In the implementation design, we described the technical considerations of database programs, utilities, and directions for the use of databases. As a result of this research, we compiled the status of the nuclear computer codes which belonged to KAERI as of September 1988. Thus, by using these three database programs, we could provide nuclear computer code information to the users more rapidly. (Author)
Computer codes for level 1 probabilistic safety assessment
International Nuclear Information System (INIS)
Probabilistic Safety Assessment (PSA) entails several laborious tasks suitable for computer code assistance. This guide identifies these tasks, presents guidelines for selecting and utilizing computer codes in the conduct of the PSA tasks and for the use of PSA results in safety management, and provides information on available codes suggested or applied in performing PSA in nuclear power plants. The guidance is intended for use by nuclear power plant system engineers, safety and operating personnel, and regulators. Large efforts are made today to provide PC-based software systems and PSA-processed information in a way that enables their use as a safety management tool by the overall management of the nuclear power plant. Guidelines are also provided on the characteristics of the software needed for management to prepare a system that meets their specific needs. Most of these computer codes are also applicable to PSA of other industrial facilities. The scope of this document is limited to computer codes used for the treatment of internal events. It does not address other codes available mainly for the analysis of external events (e.g. seismic, flood and fire analysis). Codes discussed in the document are those used for probabilistic rather than for phenomenological modelling. It should also be appreciated that these guidelines are not intended to lead the user to the selection of one specific code; they simply provide criteria for the selection. Refs and tabs
Code 672 observational science branch computer networks
Hancock, D. W.; Shirk, H. G.
1988-01-01
In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.
Tuning complex computer code to data
Energy Technology Data Exchange (ETDEWEB)
Cox, D.; Park, J.S.; Sacks, J.; Singer, C.
1992-01-01
The problem of estimating parameters in a complex computer simulator of a nuclear fusion reactor from an experimental database is treated. Practical limitations do not permit a standard statistical analysis using nonlinear regression methodology. The assumption that the function giving the true theoretical predictions is a realization of a Gaussian stochastic process provides a statistical method for combining information from relatively few computer runs with information from the experimental database and making inferences on the parameters.
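The idea can be sketched numerically: fit a Gaussian-process surrogate to a handful of simulator runs, then infer the parameter by matching the surrogate to the experimental observation. The toy "simulator", kernel settings, and all numbers below are invented stand-ins, not the fusion code or the statistical model of the paper:

```python
import numpy as np

def simulator(theta):
    """Stand-in for an expensive computer code, run only a few times."""
    return np.exp(-theta) + 0.5 * theta

train_theta = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
train_y = simulator(train_theta)

def gp_posterior_mean(theta_star, X, y, length=1.0, noise=1e-8):
    """GP posterior mean with a squared-exponential kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(theta_star, X) @ np.linalg.solve(K, y)

# "Experimental" observation generated at the true parameter value:
theta_true = 2.5
y_obs = simulator(theta_true)

# Calibrate against the cheap surrogate instead of rerunning the simulator:
grid = np.linspace(1.0, 4.0, 301)
mu = gp_posterior_mean(grid, train_theta, train_y)
theta_hat = grid[np.argmin((mu - y_obs)**2)]
```

The surrogate interpolates the five training runs exactly and stands in for the simulator everywhere else, which is how a Gaussian-process assumption lets relatively few computer runs be combined with experimental data.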
Computer aided power flow software engineering and code generation
Energy Technology Data Exchange (ETDEWEB)
Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)
1996-02-01
In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution process of the power flow equations where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be automatically generated in a conventional language such as Fortran or C. Thereby one starts from a high level, symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process an efficient, very high quality Newton-Raphson solution code is generated which allows easier implementation of network equation model enhancements and easier code maintenance as compared to hand-coded Fortran or C code.
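The core of the generated solver is a Newton-Raphson iteration over the mismatch vector and its Jacobian. Below is a hand-written toy version for a two-bus system with a single unknown voltage angle; all network data are made up, and in the paper both routines are emitted automatically from a symbolic description rather than coded by hand:

```python
import math

V1, V2, B = 1.0, 1.0, 10.0   # per-unit voltages and line susceptance (assumed)
P_target = 5.0               # scheduled active power transfer (assumed)

def mismatch(delta):
    """Active power mismatch for the unknown angle delta."""
    return V1 * V2 * B * math.sin(delta) - P_target

def jacobian(delta):
    """d(mismatch)/d(delta) -- the 1x1 Jacobian of this toy system."""
    return V1 * V2 * B * math.cos(delta)

delta = 0.0
for _ in range(20):
    dm = mismatch(delta)
    if abs(dm) < 1e-10:
        break
    delta -= dm / jacobian(delta)
# delta converges to asin(0.5) ~ 0.5236 rad
```

In a real power flow both routines become large sparse vector/matrix computations, which is exactly why generating them automatically from a symbolic model pays off.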
Computer aided power flow software engineering and code generation
Energy Technology Data Exchange (ETDEWEB)
Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)
1995-12-31
In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution process of the power flow equations where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be automatically generated in a conventional language such as Fortran or C. Thereby one starts from a high level, symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process an efficient, very high quality Newton-Raphson solution code is generated which allows easier implementation of network equation model enhancements and easier code maintenance as compared to hand-coded Fortran or C code.
APC: A New Code for Atmospheric Polarization Computations
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
Two-phase computer codes for zero-gravity applications
Energy Technology Data Exchange (ETDEWEB)
Krotiuk, W.J.
1986-10-01
This paper discusses the problems existing in the development of computer codes which can analyze the thermal-hydraulic behavior of two-phase fluids, especially in low-gravity nuclear reactors. The important phenomena affecting fluid flow and heat transfer in reduced gravity are discussed. The applicability of using existing computer codes for space applications is assessed. Recommendations regarding the use of existing earth-based fluid flow and heat transfer correlations are made, and deficiencies in these correlations are identified.
Adaptation of HAMMER computer code to CYBER 170/750 computer
International Nuclear Information System (INIS)
The adaptation of HAMMER computer code to CYBER 170/750 computer is presented. The HAMMER code calculates cell parameters by multigroup transport theory and reactor parameters by few group diffusion theory. The auxiliary programs, the carried out modifications and the use of HAMMER system adapted to CYBER 170/750 computer are described. (M.C.K.)
A restructuring of CF package for MIDAS computer code
International Nuclear Information System (INIS)
The CF package, which evaluates user-specified 'control functions' and applies them to define or control various aspects of the computation, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and modernized data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the CF package. The data structure of the current MELCOR code, written in FORTRAN77, makes the meaning of the variables difficult to grasp and wastes memory; the difficulty is compounded because, owing to the characteristics of the CF package, its data consist of location information for other packages' data. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to an efficient memory treatment and an easy understanding of the code. The restructuring of the CF package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which processes the calculation. The verification has been done by comparing the results of the modified code with those from the existing code. As the trends are similar to each other, this suggests that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models.
Code system to compute radiation dose in human phantoms
International Nuclear Information System (INIS)
Monte Carlo photon transport code and a code using Monte Carlo integration of a point kernel have been revised to incorporate human phantom models for an adult female, juveniles of various ages, and a pregnant female at the end of the first trimester of pregnancy, in addition to the adult male used earlier. An analysis code has been developed for deriving recommended values of specific absorbed fractions of photon energy. The computer code system and calculational method are described, emphasizing recent improvements in methods
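The point-kernel Monte Carlo integration mentioned above can be illustrated with a toy calculation: estimate the integral of the uncollided photon kernel exp(-μr)/(4πr²) over a sphere of material around a point source. The attenuation coefficient and geometry are illustrative assumptions, not values from the code or any phantom:

```python
import math, random

random.seed(1)
mu = 0.1    # linear attenuation coefficient, 1/cm (assumed)
R = 10.0    # radius of the target sphere, cm (assumed)

# In spherical coordinates the volume element 4*pi*r^2 dr cancels the
# kernel's 1/(4*pi*r^2) factor, so the integral reduces to
# int_0^R exp(-mu*r) dr, estimated here by uniform sampling of the radius:
n = 50_000
total = sum(math.exp(-mu * random.uniform(0.0, R)) for _ in range(n))
estimate = R * total / n

exact = (1.0 - math.exp(-mu * R)) / mu   # analytic value for comparison
```

A real phantom calculation integrates a kernel like this over irregularly shaped source and target organs, where no analytic answer exists and Monte Carlo sampling of positions is the practical option.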
An algorithm for computing the distance spectrum of trellis codes
Rouanne, Marc; Costello, Daniel J., Jr.
1989-01-01
A class of quasiregular codes is defined for which the distance spectrum can be calculated from the codeword corresponding to the all-zero information sequence. Convolutional codes and regular codes are both quasiregular, as well as most of the best known trellis codes. An algorithm to compute the distance spectrum of linear, regular, and quasiregular trellis codes is presented. In particular, it can calculate the weight spectrum of convolutional (linear trellis) codes and the distance spectrum of most of the best known trellis codes. The codes do not have to be linear or regular, and the signals do not have to be used with equal probabilities. The algorithm is derived from a bidirectional stack algorithm, although it could also be based on the Viterbi algorithm. The algorithm is used to calculate the beginning of the distance spectrum of some of the best known trellis codes and to compute tight estimates on the first-event-error probability and on the bit-error probability.
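As a simple illustration of computing the beginning of a distance spectrum (by exhaustive trellis search, not the paper's bidirectional stack algorithm), the sketch below enumerates detour paths of the standard rate-1/2, constraint-length-3 convolutional code with octal generators (7,5) and tallies their Hamming weights:

```python
from collections import Counter, deque

def step(state, u):
    """One trellis transition: returns (next_state, output Hamming weight)."""
    s1, s2 = state
    out = (u ^ s1 ^ s2) + (u ^ s2)   # generators 111 (octal 7) and 101 (octal 5)
    return (u, s1), out

def spectrum(max_weight=8, max_depth=20):
    """Count detour paths that leave and re-merge with the zero state, by weight."""
    counts = Counter()
    start, w0 = step((0, 0), 1)          # diverge from the all-zero path
    queue = deque([(start, w0, 1)])
    while queue:
        state, w, depth = queue.popleft()
        if w > max_weight or depth > max_depth:
            continue
        for u in (0, 1):
            nxt, dw = step(state, u)
            if nxt == (0, 0):            # re-merged: record the detour weight
                if w + dw <= max_weight:
                    counts[w + dw] += 1
            else:
                queue.append((nxt, w + dw, depth + 1))
    return counts

spec = spectrum()
# For the (7,5) code the spectrum starts 1, 2, 4, 8 at weights 5, 6, 7, 8
# (free distance 5), matching the known transfer function D^5 / (1 - 2D).
```

This brute-force search is exponential in the weight cap; the appeal of the stack-based algorithm in the paper is that it extends to nonlinear, non-regular trellis codes and unequal signal probabilities where such simple enumeration from the all-zero path is no longer valid.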
Computer vision cracks the leaf code.
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas
2016-03-22
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
A restructuring of COR package for MIDAS computer code
International Nuclear Information System (INIS)
The COR package, which calculates the thermal response of the core and the lower plenum internal structures and models the relocation of the core and lower plenum structural materials, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the COR package. The data structure of the current MELCOR code, written in FORTRAN77, makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to an efficient memory treatment and an easy understanding of the code. The restructuring of the COR package addressed in this paper includes module development and subroutine modification. The verification has been done by comparing the results of the modified code with those of the existing code. As the trends are similar to each other, this implies that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code's domestication thanks to a direct understanding of each variable and an easy implementation of the modified or newly developed models. (author)
A restructuring of RN2 package for MIDAS computer code
International Nuclear Information System (INIS)
The RN2 package, which is one of the two fission-product-related packages in MELCOR, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the RN2 package. The data structure of the current MELCOR code, written in FORTRAN77, makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to an efficient memory treatment and an easy understanding of the code. The restructuring of the RN2 package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which processes the calculation. The validation has been done by comparing the results of the modified code with those from the existing code. As the trends are similar to each other, this suggests that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models.
Theory Manual for ISAAC Computer Code
International Nuclear Information System (INIS)
Major models adopted in ISAAC are introduced briefly. The primary heat transport system (PHTS) of two independent figure-of-eight loops is represented in ISAAC. All four steam generators and four pumps are also modeled individually. The PHTS model tracks the masses and energy in one gas space and in multiple water pools. The pressurizer model is similar to the PWR model in MAAP4, except for the two independent surge lines, which are connected to each PHTS loop. The two-region steam generator model was newly implemented into the ISAAC code. In each region, the masses and energies of water, steam, and non-condensable gases are tracked. Then, the pressure, water temperature, and gas temperature in each region are calculated from the masses and energies, using a non-equilibrium thermodynamic model. The core heatup module calculates the thermal-hydraulic response and fission product transport, including the rates of change of dynamic variables, within the core region. The key quantities of the calandria tank model include the thermal-hydraulic variables, the modeling of corium, water, and calandria tank wall heat transfer, the corium debris bed in the bottom of the tank, failure mechanisms, and fission product transport. ISAAC also models the main safety features of the Wolsong plants as well as fission product behavior. As described, ISAAC has the fundamental models needed to capture the main phenomena during a severe accident in the Wolsong plants.
HUDU: The Hanford Unified Dose Utility computer code
International Nuclear Information System (INIS)
The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs
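On the plume centerline at ground level, the straight-line Gaussian model that HUDU uses reduces to χ(x) = Q / (π u σy σz) · exp(−H² / 2σz²). A minimal sketch of that relation, with illustrative numbers rather than HUDU's Hanford-specific dispersion parameterization:

```python
import math

def centerline_concentration(Q, u, sigma_y, sigma_z, H):
    """Ground-level, plume-centerline air concentration (e.g. Bq/m^3) for a
    straight-line Gaussian plume: release rate Q (Bq/s), wind speed u (m/s),
    dispersion parameters sigma_y, sigma_z (m), effective release height H (m)."""
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-H**2 / (2.0 * sigma_z**2))

# Illustrative numbers only; a code like HUDU derives sigma_y and sigma_z
# from atmospheric stability class and downwind distance.
chi = centerline_concentration(Q=1.0e9, u=3.0, sigma_y=80.0, sigma_z=40.0, H=30.0)
```

Doses then follow by multiplying χ by exposure time and the appropriate inhalation or external dose factors.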
HUDU: The Hanford Unified Dose Utility computer code
Energy Technology Data Exchange (ETDEWEB)
Scherpelz, R.I.
1991-02-01
The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs.
Computer Security: better code, fewer problems
Stefan Lueders, Computer Security Team
2016-01-01
The origin of many security incidents is negligence or unintentional mistakes made by web developers or programmers. In the rush to complete the work, due to skewed priorities, or simply through ignorance, basic security principles can be omitted or forgotten. The resulting vulnerabilities lie dormant until the evil side spots them and decides to hit hard. Computer security incidents in the past have put CERN’s reputation at risk due to websites being defaced with negative messages about the Organization, hash files of passwords being extracted, restricted data exposed… And it all started with a little bit of negligence! If you check out the Top 10 web development blunders, you will see that the most prevalent mistakes are: Not filtering input, e.g. accepting “<” or “>” in input fields even if only a number is expected. Not validating that input: you expect a birth date? So why accept letters? &...
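The two blunders quoted above both come down to the same rule: reject anything outside the expected grammar before using it. A minimal sketch of that idea (these checks are illustrative, not CERN's actual validation code):

```python
import re

def validate_birth_date(text):
    """Accept only ISO-style dates: digits and hyphens, nothing else."""
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", text):
        raise ValueError("invalid date")
    return text

def validate_quantity(text):
    """A field that expects a number should reject '<', '>' and friends."""
    if not text.isdigit():
        raise ValueError("not a number")
    return int(text)
```

Allow-listing the expected format, rather than block-listing known-bad characters, is what keeps unexpected input such as markup or script fragments out entirely.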
Preliminary blade design using integrated computer codes
Ryan, Arve
1988-12-01
Loads on the root of a horizontal-axis wind turbine (HAWT) rotor blade were analyzed, and a design solution for the root area is presented. The loads on the blades are given by different specified load cases. To get a clear picture of the influence of different parameters, the whole blade is designed from scratch. This is only a preliminary design study, and the blade should not be taken as a construction reference. Computer programs are used extensively for the design and optimization. After the external geometry is set and the aerodynamic loads are calculated, parameters such as design stresses and laminate thicknesses are run through the available programs, and a blade design optimized on the basis of the facts and estimates used is shown.
Low Computational Complexity Network Coding For Mobile Networks
DEFF Research Database (Denmark)
Heide, Janus
2012-01-01
Network Coding (NC) is a technique that can provide benefits in many types of networks. Some examples from wireless networks are: in relay networks, at either the physical or the data link layer, to reduce the number of transmissions; in reliable multicast, to reduce the amount of signaling and enable cooperation among receivers; in meshed networks, to simplify routing schemes and to increase robustness toward node failures. This thesis deals with implementation issues of one NC technique, namely Random Linear Network Coding (RLNC), which can be described as a highly decentralized, non-deterministic intra-flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
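The RLNC idea the thesis builds on can be sketched in a few lines over GF(2): a sender transmits random XOR combinations of the original packets, and a receiver recovers them by Gaussian elimination once the received coefficient vectors reach full rank. A toy sketch, with packet payloads as integers (real implementations work over larger fields and on byte buffers, which is where the computational cost arises):

```python
import random

def encode(packets, rng):
    """One coded packet: a random GF(2) combination (XOR) of the originals."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):          # avoid the useless all-zero combination
        coeffs[rng.randrange(len(coeffs))] = 1
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, n):
    """Gaussian elimination over GF(2); returns the originals once the
    received coefficient vectors have rank n, else None."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None          # not yet full rank
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(n)]
```

Because any full-rank set of coded packets suffices, no coordination about *which* packets to retransmit is needed — the decentralization benefit the abstract describes.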
A three-dimensional magnetostatics computer code for insertion devices.
Chubar, O; Elleaume, P; Chavanne, J
1998-05-01
RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.
A restructuring of TF package for MIDAS computer code
International Nuclear Information System (INIS)
The TF package, which defines interpolation and extrapolation conditions through user-defined tables, has been restructured in the MIDAS computer code. To do this, the data transfer methods of the current MELCOR code were modified and adopted into the TF package. The data structure of the current MELCOR code, written in FORTRAN77, makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory handling and an easier understanding of the code. The restructuring of the TF package addressed in this paper covers module development and subroutine modification, and treats MELGEN, which generates the restart file, as well as MELCOR, which performs the calculation. Validation was done by comparing the results of the modified code with those from the existing code, and it was confirmed that the results are the same. This suggests that a similar approach could be extended to the entire code package. It is expected that code restructuring will accelerate the code's domestication thanks to direct understanding of each variable and easy implementation of modified or newly developed models
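A table function of the kind the TF package provides comes down to piecewise-linear interpolation inside the table and linear extrapolation from the end segments outside it. A sketch of that behaviour (the function name and call form are illustrative, not the MELCOR TF input format):

```python
import bisect

def table_eval(xs, ys, x):
    """Piecewise-linear interpolation on a user-defined table (xs ascending),
    with linear extrapolation from the first/last segment outside the range."""
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need at least two table points")
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))       # clamp to an existing segment
    slope = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
    return ys[i] + slope * (x - xs[i])
```

Clamping the segment index is what turns interpolation into extrapolation beyond the table ends, rather than raising an out-of-range error.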
A restructuring of DCH package for MIDAS computer code
International Nuclear Information System (INIS)
The DCH package, which is one of the thermal-hydraulic packages in MELCOR, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. The data structure of the current MELCOR code, written in FORTRAN77, makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory handling and an easier understanding of the code. The restructuring of the DCH package addressed in this paper includes module development and subroutine modification, and treats MELGEN, which generates an initial data file, as well as MELCOR, which performs the calculation. The results of the modified code were verified against those from the existing code. Since the trends are similar, the same approach could be extended to the entire code package. It is expected that code restructuring will accelerate the code's domestication thanks to direct understanding of each variable and easy implementation of modified or newly developed models
Continuous Materiality: Through a Hierarchy of Computational Codes
Jichen Zhu; Kenneth J. Knoespe
2008-01-01
The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuabl...
Sample test cases using the environmental computer code NECTAR
International Nuclear Information System (INIS)
This note demonstrates a few of the many different ways in which the environmental computer code NECTAR may be used. Four sample test cases are presented and described to show how NECTAR input data are structured. Edited output is also presented to illustrate the format of the results. Two test cases demonstrate how NECTAR may be used to study radio-isotopes not explicitly included in the code. (U.K.)
RADTRAN: a computer code to analyze transportation of radioactive material
International Nuclear Information System (INIS)
A computer code is presented which predicts the environmental impact of any specific scheme of radioactive material transportation. Results are presented in terms of annual latent cancer fatalities and annual early fatality probability resulting from exposure during normal transportation or transport accidents. The code is developed in a generalized format to permit wide application, including normal transportation analysis, consideration of alternatives, and detailed consideration of specific sectors of industry
FLASH: A finite element computer code for variably saturated flow
Energy Technology Data Exchange (ETDEWEB)
Baca, R.G.; Magnuson, S.O.
1992-05-01
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.
User's manual for the NEFTRAN II computer code
International Nuclear Information System (INIS)
This document describes the NEFTRAN II (NEtwork Flow and TRANsport in Time-Dependent Velocity Fields) computer code and is intended to provide the reader with sufficient information to use the code. NEFTRAN II was developed as part of a performance assessment methodology for storage of high-level nuclear waste in unsaturated, welded tuff. NEFTRAN II is a successor to the NEFTRAN and NWFT/DVM computer codes and contains several new capabilities. These capabilities include: (1) the ability to input pore velocities directly to the transport model and bypass the network fluid flow model, (2) the ability to transport radionuclides in time-dependent velocity fields, (3) the ability to account for the effect of time-dependent saturation changes on the retardation factor, and (4) the ability to account for time-dependent flow rates through the source regime. In addition to these changes, the input to NEFTRAN II has been modified to be more convenient for the user. This document is divided into four main sections consisting of (1) a description of all the models contained in the code, (2) a description of the program and subprograms in the code, (3) a data input guide and (4) verification and sample problems. Although NEFTRAN II is the fourth generation code, this document is a complete description of the code and reference to past user's manuals should not be necessary. 19 refs., 33 figs., 25 tabs
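Capability (3) above, a saturation-dependent retardation factor, builds on the standard linear-sorption form R = 1 + ρb·Kd/θ, where θ is the moisture content: as saturation falls, θ drops and R rises. A sketch of that relation (the standard textbook form, not necessarily NEFTRAN II's exact expression):

```python
def retardation_factor(bulk_density, k_d, moisture_content):
    """Linear-sorption retardation: R = 1 + rho_b * Kd / theta.
    bulk_density in g/cm^3, k_d in mL/g, moisture_content (theta) as a
    volume fraction. Larger R means the nuclide moves more slowly
    relative to the water."""
    if moisture_content <= 0:
        raise ValueError("moisture content must be positive")
    return 1.0 + bulk_density * k_d / moisture_content

# e.g. rho_b = 1.5 g/cm^3, Kd = 2 mL/g, theta = 0.3  ->  R = 11
```

Making θ a function of time is what couples transient saturation changes to transport speed, as described for NEFTRAN II.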
Prodeto, a computer code for probabilistic fatigue design
Energy Technology Data Exchange (ETDEWEB)
Braam, H. [ECN-Solar and Wind Energy, Petten (Netherlands); Christensen, C.J.; Thoegersen, M.L. [Risoe National Lab., Roskilde (Denmark); Ronold, K.O. [Det Norske Veritas, Hoevik (Norway)
1999-03-01
A computer code for structural reliability analyses of wind turbine rotor blades subjected to fatigue loading is presented. With pre-processors that can transform measured and theoretically predicted load series into load range distributions by rain-flow counting, and with a family of generic distribution models for parametric representation of these distributions, this computer program is available for carrying out probabilistic fatigue analyses of rotor blades. (au)
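Rain-flow counting, the pre-processing step mentioned above, can be sketched with the classic three-point algorithm in the spirit of ASTM E1049. This is a simplified sketch, not Prodeto's implementation:

```python
def rainflow(series):
    """Reduce a load series to (range, count) pairs; count is 1.0 for a
    closed cycle and 0.5 for a residual half cycle (three-point method)."""
    # Keep only turning points (local peaks and valleys).
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x            # still rising/falling: extend the excursion
        elif x != tp[-1]:
            tp.append(x)
    stack, cycles = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            if len(stack) == 3:   # range involves the first point: half cycle
                cycles.append((y_rng, 0.5))
                stack.pop(0)
            else:                 # interior range closes a full cycle
                cycles.append((y_rng, 1.0))
                last = stack.pop()
                stack.pop()
                stack.pop()
                stack.append(last)
    for a, b in zip(stack, stack[1:]):  # leftover ranges count as half cycles
        cycles.append((abs(b - a), 0.5))
    return cycles
```

Binning the resulting ranges gives the load range distribution to which the parametric distribution models are then fitted.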
Benchmarking Severe Accident Computer Codes for Heavy Water Reactor Applications
International Nuclear Information System (INIS)
Requests for severe accident investigations and assurance of mitigation measures have increased for operating nuclear power plants and the design of advanced nuclear power plants. Severe accident analysis investigations necessitate the analysis of the very complex physical phenomena that occur sequentially during various stages of accident progression. Computer codes are essential tools for understanding how the reactor and its containment might respond under severe accident conditions. The IAEA organizes coordinated research projects (CRPs) to facilitate technology development through international collaboration among Member States. The CRP on Benchmarking Severe Accident Computer Codes for HWR Applications was planned on the advice and with the support of the IAEA Nuclear Energy Department's Technical Working Group on Advanced Technologies for HWRs (the TWG-HWR). This publication summarizes the results from the CRP participants. The CRP promoted international collaboration among Member States to improve the phenomenological understanding of severe core damage accidents and the capability to analyse them. The CRP scope included the identification and selection of a severe accident sequence, selection of appropriate geometrical and boundary conditions, conduct of benchmark analyses, comparison of the results of all code outputs, evaluation of the capabilities of computer codes to predict important severe accident phenomena, and the proposal of necessary code improvements and/or new experiments to reduce uncertainties. Seven institutes from five countries with HWRs participated in this CRP
Methods and computer codes for nuclear systems calculations
Indian Academy of Sciences (India)
B P Kochurov; A P Knyazev; A Yu Kwaretzkheli
2007-02-01
Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space-time calculations. The computer code TRIFON solves the space-energy problem in systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space-time neutron processes. A modification of TRIFON was developed for the simulation of space-time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is becoming more necessary in program design courses in college education. However, the trick of plagiarizing with a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…
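A common baseline for this kind of detector compares token n-gram fingerprints of two submissions, so that renaming variables alone cannot hide a copy's structure. A minimal sketch using Jaccard similarity over token trigrams (a simplification for illustration, not the algorithm proposed in the paper):

```python
import re

def fingerprints(code, n=3):
    """Normalize source text into tokens and collect overlapping n-grams."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", code.lower())
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of token n-gram sets; 1.0 means identical streams."""
    fa, fb = fingerprints(a, n), fingerprints(b, n)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)
```

A renamed-variable copy shares most of its trigrams with the original, while genuinely independent code shares almost none, which gives a usable score threshold.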
Method for quantitative assessment of nuclear safety computer codes
International Nuclear Information System (INIS)
A procedure has been developed for the quantitative assessment of nuclear safety computer codes and tested by comparison of RELAP4/MOD6 predictions with results from two Semiscale tests. This paper describes the developed procedure, the application of the procedure to the Semiscale tests, and the results obtained from the comparison
Connecting Neural Coding to Number Cognition: A Computational Account
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
User's manual for the ORIGEN2 computer code
International Nuclear Information System (INIS)
This report describes how to use a revised version of the ORIGEN computer code, designated ORIGEN2. Included are a description of the input data, input deck organization, and sample input and output. ORIGEN2 can be obtained from the Radiation Shielding Information Center at ORNL
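At its core, an ORIGEN-class code solves coupled radioactive decay and buildup equations. For a two-member chain A → B → C, the closed-form Bateman solution is enough to show the shape of the problem (a textbook special case, not ORIGEN2's general matrix method):

```python
import math

def bateman_two_member(n0, lam1, lam2, t):
    """Atoms of parent A and daughter B at time t for the chain A -> B -> C,
    starting from n0 atoms of pure A; lam1, lam2 are decay constants (1/s),
    assumed unequal."""
    n1 = n0 * math.exp(-lam1 * t)
    n2 = n0 * (lam1 / (lam2 - lam1)) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2
```

ORIGEN2 extends this to chains of hundreds of nuclides with neutron-induced transmutation included, which is why a matrix-exponential solution replaces the closed form.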
Multilevel Coding Schemes for Compute-and-Forward
Hern, Brett
2010-01-01
We investigate techniques for designing modulation/coding schemes for the wireless two-way relaying channel. The relay is assumed to have perfect channel state information, but the transmitters are assumed to have no channel state information. We consider physical layer network coding based on multilevel coding techniques. Our multilevel coding framework is inspired by the compute-and-forward relaying protocol. Indeed, we show that the framework developed here naturally facilitates decoding of linear combinations of codewords for forwarding by the relay node. We develop our framework with general modulation formats in mind, but numerical results are presented for the case where each node transmits using the QPSK constellation with Gray labeling. We focus our discussion on the rates at which the relay may reliably decode linear combinations of codewords transmitted from the end nodes.
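In its simplest noiseless, binary form, decoding a linear combination at the relay is the classic XOR two-way relay: the relay recovers only the GF(2) sum of the two messages, and each end node cancels its own contribution. A toy sketch of that idea (the paper itself works with multilevel codes and QPSK, not this binary abstraction):

```python
def relay_combine(msg_a, msg_b):
    """The relay decodes the GF(2) sum (bitwise XOR) of the two messages,
    rather than each message separately -- compute-and-forward in its
    simplest noiseless form."""
    return msg_a ^ msg_b

def end_node_recover(broadcast, own_msg):
    """Each end node cancels its own known message from the relay broadcast."""
    return broadcast ^ own_msg

a, b = 0b10110010, 0b01101100
s = relay_combine(a, b)
```

One relay broadcast thus serves both directions at once, which is where the rate gain over plain store-and-forward relaying comes from.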
Some numerical results with the COMMIX-2 computer code
International Nuclear Information System (INIS)
The computer code COMMIX-2 has been developed for analyzing and designing thermal-hydraulic aspects of nuclear reactor components. The code employs a two-fluid model for solving transient, three-dimensional, two-phase (or single-phase) nonhomogeneous and nonequilibrium flow conditions. The report presents numerical results for four problems selected to demonstrate the capabilities of COMMIX-2: (1) transient single-phase flow with a heat source; (2) two-phase flow in a vertical tube, where the surface heat flux is sufficiently high that single-phase liquid emerges as a mixture of liquid and vapor; (3) separation of vapor and liquid; and (4) a high-pressure jet impinging on a vertical plate. The third and fourth problems were selected to demonstrate, respectively, that the code can handle computational difficulties usually encountered in problems with sharp interfaces, and the important role of interfacial mass and momentum exchange. The numerical results obtained with the COMMIX-2 code are very encouraging: they demonstrate not only its computational capability but also its ability to model the complex phenomena of jet impingement problems with very simple interfacial drag and evaporation models
New Parallel computing framework for radiation transport codes
Energy Technology Data Exchange (ETDEWEB)
Kostin, M.A.; /Michigan State U., NSCL; Mokhov, N.V.; /Fermilab; Niita, K.; /JAERI, Tokai
2010-09-01
A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
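Merging several checkpoint files into one, as described above, amounts to combining per-tally statistics as history-weighted sums. A sketch under an assumed structure — tallies stored as (histories, mean) pairs keyed by name — which is an illustration only, not the MARS15 or PHITS checkpoint format:

```python
def merge_checkpoints(checkpoints):
    """Combine per-tally (histories, mean) pairs from several runs into one,
    weighting each mean by its number of histories. The tally names and the
    (n, mean) layout are hypothetical, chosen for illustration."""
    merged = {}
    for ckpt in checkpoints:
        for name, (n, mean) in ckpt.items():
            n0, m0 = merged.get(name, (0, 0.0))
            total = n0 + n
            merged[name] = (total, (n0 * m0 + n * mean) / total)
    return merged
```

Because Monte Carlo tallies combine this way, several independent runs can be merged after the fact exactly as if they had been one longer run.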
LMFBR models for the ORIGEN2 computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1981-10-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-{sup 238}U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
LMFBR models for the ORIGEN2 computer code
International Nuclear Information System (INIS)
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-238U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given
Computer codes for evaluation of control room habitability (HABIT)
Energy Technology Data Exchange (ETDEWEB)
Stage, S.A. [Pacific Northwest Lab., Richland, WA (United States)
1996-06-01
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.
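The radiological part of such an estimate chains three factors: the air concentration reaching the control room, the occupant's breathing rate and stay time, and a dose conversion factor for the inhaled nuclide. A sketch of that chain (the variable names and the single-filter treatment are illustrative assumptions, not HABIT's models):

```python
def inhalation_dose(air_concentration, breathing_rate, exposure_time, dcf,
                    filter_efficiency=0.0):
    """Committed dose (Sv) to an occupant from inhaling contaminated air.
    air_concentration in Bq/m^3, breathing_rate in m^3/s, exposure_time in s,
    dcf (dose conversion factor) in Sv/Bq. An intake filter removes a fraction
    filter_efficiency of the activity before it reaches the control room."""
    intake = (air_concentration * (1.0 - filter_efficiency)
              * breathing_rate * exposure_time)
    return intake * dcf
```

A package like HABIT supplies each factor from its own module (release, transport, control-room air flows) rather than as a single hand-entered number.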
Computer codes for evaluation of control room habitability (HABIT)
International Nuclear Information System (INIS)
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs
Evaluation of the FRAPTRAN -1.3 Computer Code
Energy Technology Data Exchange (ETDEWEB)
Manngaard, Tero [Quantum Technologies AB, Uppsala Science Park, SE-751 83 Uppsala (Sweden)
2007-03-15
The FRAPTRAN-1.3 computer code has been evaluated regarding its applicability, modelling capability, user friendliness, source code structure and supporting experimental database. The code is intended for thermo-mechanical analyses of light water reactor nuclear fuel rods under reactor power and coolant transients, such as overpower transients, reactivity initiated accidents (RIA), boiling-water reactor power oscillations without scram, and loss of coolant accidents (LOCA). Its experimental database covers boiling- and pressurized water reactor fuel rods with UO{sub 2} fuel up to rod burnups around 64 MWd/kgU. In FRAPTRAN-1.3, the fundamental equations for heat transfer and structural analysis are solved in one-dimensional (in the radial direction) and transient (time-dependent) form, and interaction between axial segments of the rod is confined to calculations of coolant axial flow, rod internal gas pressure and optionally axial flow of fission gases. The clad-to-coolant heat transfer conditions can either be specified as pre-calculated data or can be determined by a coolant channel model in the code. The code provides different clad rupture models depending on cladding temperature and amount of cladding plastic hoop strain. For LOCA analysis, a model calculating local clad shape (ballooning) and associated local stresses is available to predict clad burst. A strain based failure model is present for cladding rupture driven by pellet-cladding mechanical interaction. Two models exist for computation of high-temperature clad oxidation under LOCA (i) the Baker-Just model for licensing calculations and (ii) the Cathcart-Pawel model for best-estimate calculations. The code appears to be fairly easy to use, however, the applicability of the current version as a self-standing analysis tool for LOCA and RIA analyses depends highly on the numerical robustness of the coolant channel model for generation of clad-to-coolant heat transfer boundary conditions. The main
Benchmarking of computer codes and approaches for modeling exposure scenarios
International Nuclear Information System (INIS)
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided
Benchmarking of computer codes and approaches for modeling exposure scenarios
Energy Technology Data Exchange (ETDEWEB)
Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
Verification of structural analysis computer codes in nuclear engineering
International Nuclear Information System (INIS)
Sources of potential errors that can arise during the use of finite-element-method-based computer programs are described in the paper. Acceptable error magnitudes were defined as acceptance criteria for those programs. Error sources are described as they are treated by the National Agency for Finite Element Methods and Standards (NAFEMS). Specific verification examples are taken from the literature of the Nuclear Regulatory Commission (NRC). An example verification is made of the PAFEC-FE computer code for seismic response analyses of piping systems by the response spectrum method. (author)
Bragg optics computer codes for neutron scattering instrument design
Energy Technology Data Exchange (ETDEWEB)
Popovici, M.; Yelon, W.B.; Berliner, R.R. [Missouri Univ. Research Reactor, Columbia, MO (United States); Stoica, A.D. [Institute of Physics and Technology of Materials, Bucharest (Romania)
1997-09-01
Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.
War of ontology worlds: mathematics, computer code, or Esperanto?
Andrey Rzhetsky; Evans, James A.
2011-01-01
The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor an...
Methods for the development of large computer codes under LTSS
International Nuclear Information System (INIS)
TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset
Computer codes for the analysis of flask impact problems
International Nuclear Information System (INIS)
This review identifies typical features of the design of transportation flasks and considers some of the analytical tools required for the analysis of impact events. Because of the complexity of the physical problem, it is unlikely that a single code will adequately deal with all aspects of the impact incident. Candidate codes are identified on the basis of current understanding of their strengths and limitations. It is concluded that the HONDO-II, DYNA3D and ABAQUS codes, which are already mounted on UKAEA computers, will be suitable tools for use in the analysis of experiments conducted in the proposed AEEW programme and of general flask impact problems. Initial attention should be directed at the DYNA3D and ABAQUS codes, with HONDO-II being reserved for situations where the three-dimensional elements of DYNA3D may provide uneconomic simulations in planar or axisymmetric geometries. Attention is drawn to the importance of access to suitable mesh generators to create the nodal coordinate and element topology data required by these structural analysis codes. (author)
Methodology for computational fluid dynamics code verification/validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.
1995-07-01
The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into one that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions has proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations are discussed. Contrasts are drawn between the assumptions and the actual use of numerical-method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.
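A standard way to estimate the discretization errors the abstract discusses is to compute a quantity of interest on systematically refined grids and infer the observed order of accuracy. A minimal sketch (the solution values and refinement ratio are hypothetical, not taken from the report):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from one quantity of interest computed
    on three grids with a constant refinement ratio r (Richardson-style
    estimate: |e_coarse/e_fine| ~ r**p for an order-p scheme)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Hypothetical values on grids refined by a factor of 2; a nominally
# second-order scheme should recover p close to 2.
p = observed_order(1.0492, 1.0124, 1.0031, 2.0)
```

If the observed order falls well below the formal order of the scheme, that is evidence of an implementation or boundary-condition error of the kind the paper's methodology is meant to detect.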
A study on the nuclear computer code maintenance and management system
International Nuclear Information System (INIS)
In line with current software development and quality assurance trends, it is necessary to develop a computer code management system for nuclear programs. For this reason, the project started in 1987. The main objectives of the project are to establish a nuclear computer code management system, to secure software reliability, and to develop nuclear computer code packages. Work performed on the project this year comprised operating and maintaining the computer code information system for KAERI computer codes, developing the application tool AUTO-i for computing the first and second moments of inertia of polygonal or circular sections, and researching nuclear computer code conversion between different machines. To better support code availability and reliability, assistance from the users of the codes is required. Lastly, for easy reference, we present a list of code names and information on the codes that were introduced or developed during the year. (Author)
ABINIT: a computer code for matter; Abinit: un code au service de la matiere
Energy Technology Data Exchange (ETDEWEB)
Amadon, B.; Bottin, F.; Bouchet, J.; Dewaele, A.; Jollet, F.; Jomard, G.; Loubeyre, P.; Mazevet, S.; Recoules, V.; Torrent, M.; Zerah, G. [CEA Bruyeres-le-Chatel, 91 (France)
2008-07-01
The PAW (Projector Augmented Wave) method has been implemented in the ABINIT code, which computes electronic structures. The method relies on the simultaneous use of a set of auxiliary functions (in plane waves) and a sphere around each atom. It allows the computation of systems including many atoms and gives the expressions for the energy, forces, stress, etc. in terms of the auxiliary functions only. We have generated atomic data for iron at very high pressure (over 200 GPa). We find a bcc-hcp transition around 10 GPa, and the magnetic order disappears around 50 GPa. The method has been validated on a series of metals. The development of the PAW method has required a great effort for the massive parallelization of the ABINIT code. (A.C.)
A computer code for computing the beam profiles in the NBI beam line 'BEMPROF'
International Nuclear Information System (INIS)
A computer code was developed which can compute the beam profiles and the percentage heat loadings on the various components in the NBI beam line such as the beam target, the beam limiters and the calorimeter. The geometrical injection efficiency of NBI and the heat input pattern on the counter surface of the injection port of the torus can also be computed. The major feature of this code is that the effects of the beamlet intensity distribution, the beamlet deflection, the beam screening by the upstream limiters and also the plasma density distribution and the divergence angle distribution over the beam extraction area can be taken into account. (author)
International Nuclear Information System (INIS)
The computer code system GSRW (Generic Safety assessment code for geologic disposal of Radioactive Waste) was developed as an interim version of a safety assessment methodology for geologic disposal of high-level radioactive waste. The scenarios used here are normal evolution scenarios, which assume that the performance of the disposal system is not affected by probabilistic events. The code consists of three parts. The first part evaluates the source term from a disposal facility, which consists mainly of vitrified waste, a metallic container and a buffer zone. Two kinds of source term models are provided: Model 1, which simulates the dissolution of the silicate component of the glass and the diffusive transport of radionuclides in the buffer zone, and Model 2, which assumes that the concentration of a radionuclide is limited by the solubility of its specific chemical form at the interface between the buffer and the vitrified waste. The second part analyses the transport of radionuclides in the geosphere, based on analytical or numerical solutions of a mass transport equation involving advection, dispersion, linear sorption and decay chains. The third part assesses the transport of radionuclides in the biosphere and the resulting radiological consequences to man, based on a dynamic compartment model for the biosphere and a dose factor method for dose calculations. This report describes the mathematical models used, the structure of the code system, and user information and instructions for execution of the code. (author)
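The geosphere transport equation the second part solves (advection, dispersion, linear sorption, decay) has a well-known closed-form steady-state solution for a single nuclide with a continuous source; a minimal sketch of that textbook solution, with all parameter values hypothetical:

```python
import math

def steady_conc_ratio(x, v, D, lam, R=1.0):
    """Steady-state C(x)/C0 for 1-D advection-dispersion transport with
    linear sorption (retardation factor R) and first-order decay constant
    lam, for a continuous source held at C0 at x = 0 (standard analytical
    solution of the mass transport equation described in the abstract)."""
    return math.exp((x * v / (2.0 * D))
                    * (1.0 - math.sqrt(1.0 + 4.0 * lam * R * D / v ** 2)))

# Hypothetical aquifer: v = 1 m/yr, D = 10 m^2/yr, 30-yr half-life nuclide
lam = math.log(2.0) / 30.0
ratio = steady_conc_ratio(100.0, 1.0, 10.0, lam)
```

With zero decay the ratio reduces to 1, which is a convenient sanity check when verifying a numerical solver against the analytical form.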
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
International Nuclear Information System (INIS)
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
Energy Technology Data Exchange (ETDEWEB)
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
WSRC approach to validation of criticality safety computer codes
International Nuclear Information System (INIS)
Recent hardware and operating system changes at the Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for the JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (keff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be (1) repeatable, (2) demonstrated with defined confidence, and (3) valid over an identified range of neutronic conditions (area of applicability). The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems arise in validating computational methods when very few experiments are available (such as for enriched uranium systems whose principal second isotope is 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.
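The correlation step described above typically reduces to computing the bias and spread of calculated keff values against critical benchmarks. A minimal sketch of that statistic (the benchmark keff values are hypothetical, not from the WSRC validation):

```python
import statistics

def keff_bias(calculated_keffs, expected=1.0):
    """Mean bias and sample standard deviation of calculated k_eff values
    over a set of benchmark experiments with known k_eff (here, critical
    configurations with expected k_eff = 1.0)."""
    diffs = [k - expected for k in calculated_keffs]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical benchmark results spanning the area of applicability:
bias, spread = keff_bias([0.995, 1.002, 0.998, 1.001])
```

In practice the bias and spread feed a statistical tolerance limit that defines the maximum keff accepted for safety evaluations within the validated area of applicability.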
MAGNUM-2D computer code: user's guide
Energy Technology Data Exchange (ETDEWEB)
England, R.L.; Kline, N.W.; Ekblad, K.J.; Baca, R.G.
1985-01-01
Information relevant to the general use of the MAGNUM-2D computer code is presented. This computer code was developed for the purpose of modeling (i.e., simulating) the thermal and hydraulic conditions in the vicinity of a waste package emplaced in a deep geologic repository. The MAGNUM-2D computer code computes (1) the temperature field surrounding the waste package as a function of the heat generation rate of the nuclear waste and the thermal properties of the basalt and (2) the hydraulic head distribution and associated groundwater flow fields as a function of the temperature gradients and hydraulic properties of the basalt. MAGNUM-2D is a two-dimensional numerical model for transient or steady-state analysis of coupled heat transfer and groundwater flow in a fractured porous medium. The governing equations consist of a set of coupled, quasi-linear partial differential equations that are solved using a Galerkin finite-element technique. A Newton-Raphson algorithm is embedded in the Galerkin functional to formulate the problem in terms of the incremental changes in the dependent variables. Both triangular and quadrilateral finite elements are used to represent the continuum portions of the spatial domain. Line elements may be used to represent discrete conduits. 18 refs., 4 figs., 1 tab.
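The Newton-Raphson linearization described above iterates on incremental changes in the unknowns until the residual vanishes. A scalar sketch of the idea (the residual is a hypothetical stand-in, not a MAGNUM-2D equation):

```python
def newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration formulated in terms of incremental
    changes in the unknown, as in the embedded algorithm described
    above (scalar version for illustration; the code solves systems)."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            break
        u -= r / jacobian(u)   # incremental update: du = -r / J
    return u

# Hypothetical nonlinear residual standing in for one Galerkin equation:
root = newton(lambda u: u ** 3 - 2.0, lambda u: 3.0 * u ** 2, 1.0)
```

In the finite-element setting the scalar division becomes a linear solve with the Jacobian matrix assembled from the Galerkin functional, but the convergence loop is the same.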
Interactive computer code for dynamic and soil structure interaction analysis
Energy Technology Data Exchange (ETDEWEB)
Mulliken, J.S.
1995-12-01
A new interactive computer code is presented in this paper for dynamic and soil-structure interaction (SSI) analyses. The computer program FETA (Finite Element Transient Analysis) is a self-contained interactive graphics environment for IBM PCs that is used for the development of structural and soil models as well as post-processing dynamic analysis output. Full 3-D isometric views of the soil-structure system, animation of displacements, frequency and time domain responses at nodes, and response spectra are all graphically available simply by pointing and clicking with a mouse. FETA's finite element solver performs 2-D and 3-D frequency and time domain soil-structure interaction analyses. The solver can be directly accessed from the graphical interface on a PC, or run on a number of other computer platforms.
Internal pump reactor stability: Qualification of TRACG computer code
International Nuclear Information System (INIS)
Stability tests of the Forsmark Unit 1 reactor were conducted in January 1989 during power and flow conditions which put the plant at the onset of limit cycle instabilities. Studies of the LPRM recordings indicate the predominant mode to be a global in-phase oscillation. (The core made a few attempts to break away from the in-phase pattern; the out-of-phase mode lasted only two oscillation periods.) Sufficient data were taken to quantify the decay ratio and measure the effect of both small and large power and flow changes. The axial power distribution also varied during the tests due to xenon transients. A computer model of Forsmark Unit 1 has been constructed for use with TRACG, the GE Nuclear Energy version of the TRAC-BD1 computer code. TRACG has been thoroughly validated for application to loss-of-coolant, anticipated-transient and stability analyses. The stability qualification database has previously consisted of jet pump BWRs. The additional flow restriction of internal recirculation pumps generally causes such reactors to have more limiting stability; therefore, separate qualification of the TRACG code is desirable, although not strictly necessary. TRACG is shown to be a valuable tool in determining the range of stable operating conditions for BWRs. The TRACG-calculated decay ratio and the measured decay ratio are reported for five tests. At the limit cycle condition the TRACG decay ratio matches the measurement within 15%, with the code calculating a higher than actual decay ratio.
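The decay ratio quantified in these tests is the amplitude ratio of successive oscillation peaks: below 1 the oscillation damps out, at 1 it is a limit cycle. A minimal sketch on a synthetic damped oscillation (the signal parameters are hypothetical, not Forsmark data):

```python
import math

def peak_amplitudes(y):
    """Local maxima of a sampled signal (simple three-point test)."""
    return [y[i] for i in range(1, len(y) - 1) if y[i - 1] < y[i] > y[i + 1]]

# Hypothetical damped LPRM-like oscillation x(t) = exp(-a t) cos(w t);
# successive peaks are a factor exp(-a T) apart, T = oscillation period.
a, freq, dt = 0.1, 0.5, 0.001            # 0.5 Hz, light damping
w = 2.0 * math.pi * freq
y = [math.exp(-a * k * dt) * math.cos(w * k * dt) for k in range(5000)]
peaks = peak_amplitudes(y)
dr = peaks[1] / peaks[0]                  # decay ratio
```

Real LPRM traces are noisy, so production analyses usually fit an autocorrelation or autoregressive model rather than reading raw peaks, but the quantity estimated is the same.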
Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate
Guruswami, Venkatesan
2010-01-01
In this paper, we consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process which adds the errors can be described by a sufficiently "simple" circuit. For three classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. (1) Unique decoding for additive errors: We give the first construction of poly-time encodable/decodable codes for additive (a.k.a. oblivious) channels that achieve the Shannon capacity 1-H(p). Such channels capture binary symmetric errors and burst errors as special cases. (2) List-decoding for log-space channels: A space-S(n) channel reads and modifies the transmitted codeword as a stream, using at most S(n) bits of workspace on transmissions of n bi...
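The optimal rate achieved for additive channels in the abstract is the Shannon capacity 1 - H(p), where H is the binary entropy function. A small sketch of that quantity:

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def additive_channel_rate(p):
    """Optimal achievable rate 1 - H(p) for additive (oblivious)
    channels whose error fraction is bounded by p, per the abstract."""
    return 1.0 - binary_entropy(p)

rate = additive_channel_rate(0.11)
```

At p = 0.5 the rate drops to zero (the channel can randomize every bit), and H(0.11) is very close to 1/2, so roughly half the transmitted bits carry information at that error fraction.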
CARP: a computer code and albedo data library for use by BREESE, the MORSE albedo package
International Nuclear Information System (INIS)
The CARP computer code was written to allow processing of DOT angular flux tapes to produce albedo data for use in the MORSE computer code. An albedo data library was produced containing several materials. 3 tables
Compilation of the abstracts of nuclear computer codes available at CPD/IPEN
International Nuclear Information System (INIS)
A compilation of the abstracts of all computer codes available at IPEN in São Paulo is presented. The computer codes are classified according to the Argonne National Laboratory and Nuclear Energy Agency scheme. (E.G.)
Multicode comparison of selected source-term computer codes
Energy Technology Data Exchange (ETDEWEB)
Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.
1989-04-01
This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.
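The agreement statistics quoted above (45% within 1%, a further 17% within 1 to 5%, about 8% beyond 30%) come from binning pairwise relative differences between code results. A minimal sketch of that bookkeeping (the nuclide values are hypothetical, not from the report):

```python
def agreement_bins(values_a, values_b):
    """Bin pairwise code results by relative disagreement, mirroring
    the report's comparative-ratio tables."""
    bins = {"<=1%": 0, "1-5%": 0, "5-30%": 0, ">30%": 0}
    for a, b in zip(values_a, values_b):
        rel = abs(a - b) / max(abs(a), abs(b))
        if rel <= 0.01:
            bins["<=1%"] += 1
        elif rel <= 0.05:
            bins["1-5%"] += 1
        elif rel <= 0.30:
            bins["5-30%"] += 1
        else:
            bins[">30%"] += 1
    return bins

# Hypothetical decay-heat values (W) from two codes for four nuclides:
result = agreement_bins([1.00, 2.00, 5.00, 0.10], [1.005, 2.06, 6.00, 0.20])
```

Normalizing by the larger of the two values keeps the comparison symmetric in the pair of codes, which matters when the codes are taken two at a time as in the report.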
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
Energy Technology Data Exchange (ETDEWEB)
Van Buren, Kendra L. [Los Alamos National Laboratory; Canfield, Jesse M. [Los Alamos National Laboratory; Hemez, Francois M. [Los Alamos National Laboratory; Sauer, Jeremy A. [Los Alamos National Laboratory
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
Fault-tolerance for MPI Codes on Computational Clusters
Hagen, Knut Imar
2007-01-01
This thesis focuses on fault-tolerance for MPI codes on computational clusters. When an application runs on a very large cluster with thousands of processors, it is likely that a process will crash due to a hardware or software failure. Fault-tolerance is the ability of a system to respond gracefully to an unexpected hardware or software failure. A test application that is meant to run for several weeks on several nodes is used in this thesis. The application is a seismic MPI application, w...
Knowlton, Marie; Wetzel, Robin
2006-01-01
This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…
Comparison of computer code calculations with FEBA test data
International Nuclear Information System (INIS)
The FEBA forced feed reflood experiments included base line tests with unblocked geometry. The experiments consisted of separate effect tests on a full-length 5x5 rod bundle. Experimental cladding temperatures and heat transfer coefficients of FEBA test No. 216 are compared with the analytical data postcalculated utilizing the SSYST-3 computer code. The comparison indicates a satisfactory matching of the peak cladding temperatures, quench times and heat transfer coefficients for nearly all axial positions. This agreement was made possible by the use of an artificially adjusted value of the empirical code input parameter in the heat transfer for the dispersed flow regime. A limited comparison of test data and calculations using the RELAP4/MOD6 transient analysis code are also included. In this case the input data for the water entrainment fraction and the liquid weighting factor in the heat transfer for the dispersed flow regime were adjusted to match the experimental data. On the other hand, no fitting of the input parameters was made for the COBRA-TF calculations which are included in the data comparison. (orig.)
A computer code for seismic qualification of nuclear service valves
International Nuclear Information System (INIS)
The computer code, CERTIVALVE, has been developed for detailed frequency, seismic stress, and deformation analysis of nuclear service valves. It is an expedient means of analyzing, designing, and qualifying nuclear service valves for dynamic loads. The program is designed to be applicable to virtually all types of ASME Class 1, 2, and 3 valves, including gate, globe, disk, ball, safety-relief, diaphragm, and butterfly valves. It can evaluate valves of nearly all major manufacturers, both foreign and domestic. CERTIVALVE computes all natural frequencies of any valve up to 40 Hertz (or higher if the user desires). The program constructs a multi-degree-of-freedom (MDOF) finite element model which is used in the eigenvalue (natural frequency) solution. CERTIVALVE also computes the maximum allowable acceleration capacity of any valve on the basis of user-supplied allowable stress limits and the geometric and material properties of the valve components. The worst spatial orientation of the valve is assumed. The maximum allowable resultant is reported in terms of a multiple of weight (that is, in values of g) for each valve component. The valve components evaluated include the non-pressure-retaining yoke structure for prismatic, nonprismatic, and curved sections; the yoke-bonnet junction for bolted, clamped and bossed connection details; the bonnet and body sections for various geometric cross sections; and the body-bonnet junction for both bolted-gasket and threaded connections. Finally, the program also performs an operability deformation analysis to check that stem binding is precluded. The evaluation procedures are performed in accordance with the included codes and standards. The theoretical development of the program is provided and examples of the program options are presented.
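The 40 Hz screen above reflects the usual seismic practice of treating components with first natural frequencies above the spectrum's rigid range as rigid. As a rough hand check of the kind of eigenvalue result the MDOF model produces, the classical first-mode formula for a uniform cantilever can approximate a prismatic yoke section (all property values here are hypothetical, and a real valve yoke is not a uniform cantilever):

```python
import math

def cantilever_first_frequency(E, I, m_per_len, L):
    """First natural frequency (Hz) of a uniform cantilever beam,
    f1 = (1.875^2 / (2*pi)) * sqrt(E*I / (m*L^4)), used here as a crude
    stand-in for a prismatic yoke section screened against 40 Hz."""
    return (1.875 ** 2 / (2.0 * math.pi)) * math.sqrt(E * I / (m_per_len * L ** 4))

# Hypothetical steel yoke: E = 200 GPa, I = 1e-6 m^4, 30 kg/m, 0.5 m long
f1 = cantilever_first_frequency(200e9, 1.0e-6, 30.0, 0.5)
```

A section that screens well above 40 Hz by this estimate would still be confirmed by the full MDOF eigenvalue solution, which also captures the bonnet, junctions, and attached masses.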
A computer code to simulate X-ray imaging techniques
Energy Technology Data Exchange (ETDEWEB)
Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel
2000-09-01
A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
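The ray-tracing simulation described above rests on the X-ray attenuation law: along each ray, the transmitted intensity is the source intensity reduced by the exponential of the summed attenuation over the materials crossed. A minimal sketch (the attenuation coefficient and geometry are hypothetical, not from the paper):

```python
import math

def transmitted_intensity(i0, segments):
    """Monochromatic attenuation along one ray: I = I0 * exp(-sum(mu*x))
    over the (mu, thickness) segments the ray crosses."""
    return i0 * math.exp(-sum(mu * x for mu, x in segments))

# Hypothetical 10 mm aluminium part (mu ~ 0.1 /mm) with a 1 mm void defect:
i_sound = transmitted_intensity(1.0, [(0.1, 10.0)])
i_defect = transmitted_intensity(1.0, [(0.1, 9.0)])
contrast = i_defect - i_sound   # the defect transmits more, so it appears brighter
```

Dividing such a contrast by the pixel noise standard deviation gives the contrast-to-noise ratio (CNR) map the paper uses to predict defect detectability; for polychromatic spectra the exponential is integrated over energy, which is how beam hardening enters.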
International Nuclear Information System (INIS)
The implementation of the CP1 computer code on the Honeywell Bull computer at the Brazilian Nuclear Energy Commission is presented. CP1 is a computer code used to solve the point kinetics equations with Doppler feedback from the system temperature variation, based on Newton's law of cooling. (E.G.)
Interface design of VSOP'94 computer code for safety analysis
International Nuclear Information System (INIS)
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integrals, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, and it presents several difficulties in use: it runs only on Dec Alpha mainframe platforms, provides text-based output, and is hard to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates preparing data, running the VSOP code, and reading the results in a more user-friendly way, and that is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in tabular and graphic form. GUI-VSOP is expected to simplify and speed up the processing and analysis of safety aspects
Interface design of VSOP'94 computer code for safety analysis
Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi
2014-09-01
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integrals, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, and it presents several difficulties in use: it runs only on Dec Alpha mainframe platforms, provides text-based output, and is hard to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates preparing data, running the VSOP code, and reading the results in a more user-friendly way, and that is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in tabular and graphic form. GUI-VSOP is expected to simplify and speed up the processing and analysis of safety aspects.
Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation
Scott, James R.
2004-01-01
NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single-airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and compares them with classical flat-plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.
Compute-and-Forward: Harnessing Interference through Structured Codes
Nazer, Bobak
2009-01-01
Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information. Its potential is demonstrated through examples drawn ...
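The finite-field end of the scheme can be illustrated without lattices: each relay forwards one linear equation in the messages, and a destination holding enough independent equations inverts the system. The coefficients and messages below are arbitrary toy values.

```python
# Two transmitters send messages w1, w2 in the field F_p. Each relay
# decodes one integer linear combination (its "equation") instead of the
# individual messages; a destination with two independent equations
# solves the 2x2 system over F_p.
p = 257                                   # prime field size
w1, w2 = 123, 201                         # transmitted messages
a11, a12 = 2, 3                           # relay 1's decoded coefficients
a21, a22 = 1, 1                           # relay 2's decoded coefficients
u1 = (a11 * w1 + a12 * w2) % p            # equation forwarded by relay 1
u2 = (a21 * w1 + a22 * w2) % p            # equation forwarded by relay 2

# Destination: Cramer's rule over F_p (pow(x, -1, p) needs Python >= 3.8).
det = (a11 * a22 - a12 * a21) % p
det_inv = pow(det, -1, p)
r1 = (u1 * a22 - a12 * u2) * det_inv % p
r2 = (a11 * u2 - a21 * u1) * det_inv % p
print(r1, r2)                             # both messages recovered
```

The nested-lattice machinery in the paper is what lets the relays decode such integer combinations reliably from noisy real-valued channel outputs; the algebra at the destination is exactly this finite-field solve.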
The deep-culture festival "Ariel" begins this weekend
2008-01-01
On 26-27 October, the 5th Jewish deep-culture festival "Ariel" takes place in Tallinn; its headliner is clarinetist David Krakauer with the ensemble Klezmer Madness! from the USA. In addition, soloist Sofia Rubina and the ensemble Vox Clamantis will give a concert
Computer Tensor Codes to Design the Warp Drive
Maccone, C.
To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.
Application of the RESRAD computer code to VAMP scenario S
International Nuclear Information System (INIS)
The RESRAD computer code developed at Argonne National Laboratory was among 11 models from 11 countries participating in the international Scenario S validation of radiological assessment models with Chernobyl fallout data from southern Finland. The validation test was conducted by the Multiple Pathways Assessment Working Group of the Validation of Environmental Model Predictions (VAMP) program coordinated by the International Atomic Energy Agency. RESRAD was enhanced to provide an output of contaminant concentrations in environmental media and in food products to compare with measured data from southern Finland. Probability distributions for inputs that were judged to be most uncertain were obtained from the literature and from information provided in the scenario description prepared by the Finnish Centre for Radiation and Nuclear Safety. The deterministic version of RESRAD was run repeatedly to generate probability distributions for the required predictions. These predictions were used later to verify the probabilistic RESRAD code. The RESRAD predictions of radionuclide concentrations are compared with measured concentrations in selected food products. The radiological doses predicted by RESRAD are also compared with those estimated by the Finnish Centre for Radiation and Nuclear Safety
Ariel Williams, the magnanimous "escriturador"
Domine, Marcela
2015-01-01
This article proposes a critical reading of the work of Ariel Williams, a contemporary Patagonian writer. It reflects on the work with language present in his texts and on the creation of a literary language of his own. It dwells on a new relationship with genres, one that implies a link between discursive and literary genres and genre theory, since it breaks the generic references of language in the representation of the feminine and the masculine. Finally, it analyzes a represent...
Comparison of computer codes for calculating dynamic loads in wind turbines
Spera, D. A.
1978-01-01
The development of computer codes for calculating dynamic loads in horizontal-axis wind turbines was examined, and a brief overview of each code is given. The performance of individual codes was compared against two sets of test data measured on a 100 kW Mod-0 wind turbine. All codes are aeroelastic and include loads which are gravitational, inertial, and aerodynamic in origin.
Final technical position on documentation of computer codes for high-level waste management
International Nuclear Information System (INIS)
Guidance is given for the content of documentation of computer codes which are used in support of a license application for high-level waste disposal. The guidelines cover theoretical basis, programming, and instructions for use of the code
RADTRAN II: revised computer code to analyze transportation of radioactive material
International Nuclear Information System (INIS)
A revised and updated version of the RADTRAN computer code is presented. This code has the capability to predict the radiological impacts associated with specific schemes of radioactive material shipments and mode specific transport variables
An efficient methodology for modeling complex computer codes with Gaussian processes
Marrel, Amandine; Iooss, Bertrand; Van Dorpe, Francois; Volkova, Elena
2008-01-01
Complex computer codes are often too time-consuming to be used directly for uncertainty propagation studies, global sensitivity analysis, or optimization problems. A well-known and widely used method to circumvent this inconvenience consists in replacing the complex computer code by a reduced model, called a metamodel, or a response surface, that represents the computer code and requires acceptable calculation time. One particular class of metamodels is...
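A bare-bones illustration of the metamodel idea, assuming a squared-exponential kernel and a tiny one-dimensional design; the "expensive code" is stood in for by a cheap analytic function, and the pure-Python linear solver is included only so the sketch is self-contained.

```python
import math

# Gaussian-process metamodel of an "expensive" code y = f(x): run the
# code at a small design of experiments, fit the GP, then use the cheap
# posterior mean as a surrogate for further studies.
def true_code(x):
    return math.sin(3 * x)               # stand-in for the costly code

def kern(a, b, length=0.3):              # squared-exponential kernel
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (A is n x n)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 0.25, 0.5, 0.75, 1.0]         # design of experiments
ys = [true_code(x) for x in xs]          # the costly runs
K = [[kern(xi, xj) + (1e-9 if i == j else 0.0)
      for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
alpha = solve(K, ys)                     # K^-1 y

def predict(x):                          # GP posterior mean
    return sum(kern(x, xi) * ai for xi, ai in zip(xs, alpha))

print(predict(0.4), true_code(0.4))      # metamodel vs. direct run
```

A real application would also estimate the kernel length scale from the data and use the posterior variance to quantify the metamodel's own error.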
The TESS [Tandem Experiment Simulation Studies] computer code user's manual
International Nuclear Information System (INIS)
TESS (Tandem Experiment Simulation Studies) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the confinement and transport of plasma in a magnetic mirror device, including tandem mirror configurations. Mirror plasmas may be modeled in a system which includes an applied magnetic field and/or a self-consistent or applied electrostatic potential. The PIC code TESS is similar to the PIC code DIPSI (Direct Implicit Plasma Surface Interactions) which is designed to study plasma transport to and interaction with a solid surface. The codes TESS and DIPSI are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 10 refs., 2 tabs
The science of ARIEL (Atmospheric Remote-sensing Infrared Exoplanet Large-survey)
Tinetti, G.; Drossart, P.; Eccleston, P.; Hartogh, P.; Heske, A.; Leconte, J.; Micela, G.; Ollivier, M.; Pilbratt, G.; Puig, L.; Turrini, D.; Vandenbussche, B.; Wolkenberg, P.; Pascale, E.; Beaulieu, J.-P.; Güdel, M.; Min, M.; Rataj, M.; Ray, T.; Ribas, I.; Barstow, J.; Bowles, N.; Coustenis, A.; Coudé du Foresto, V.; Decin, L.; Encrenaz, T.; Forget, F.; Friswell, M.; Griffin, M.; Lagage, P. O.; Malaguti, P.; Moneti, A.; Morales, J. C.; Pace, E.; Rocchetto, M.; Sarkar, S.; Selsis, F.; Taylor, W.; Tennyson, J.; Venot, O.; Waldmann, I. P.; Wright, G.; Zingales, T.; Zapatero-Osorio, M. R.
2016-07-01
The Atmospheric Remote-Sensing Infrared Exoplanet Large-survey (ARIEL) is one of the three candidate missions selected by the European Space Agency (ESA) for its next medium-class science mission, due for launch in 2026. The goal of the ARIEL mission is to investigate the atmospheres of several hundred planets orbiting distant stars in order to address the fundamental questions of how planetary systems form and evolve. During its four-year mission (with a potential extension to six years) ARIEL will observe 500+ exoplanets in the visible and the infrared with its meter-class telescope in L2. ARIEL targets will include gaseous and rocky planets down to Earth-size around different types of stars. The main focus of the mission will be on hot and warm planets orbiting close to their star, as they represent a natural laboratory in which to study the chemistry and formation of exoplanets. The ARIEL mission concept has been developed by a consortium of more than 50 institutes from 12 countries, including the UK, France, Italy, Germany, the Netherlands, Poland, Spain, Belgium, Austria, Denmark, Ireland and Portugal. The analysis of the ARIEL spectra and photometric data in the 0.5-7.8 micron range will allow the chemical fingerprints of gases and condensates in the planets' atmospheres to be extracted, including the elemental composition for the most favorable targets. It will also enable the study of the thermal and scattering properties of the atmosphere as the planet orbits its star. ARIEL will have an open data policy, enabling rapid access by the general community to the high-quality exoplanet spectra that the core survey will deliver.
ARIEL – Atmospheric Remote-Sensing Infrared Exoplanet Large-survey
Tinetti, Giovanna; Drossart, Pierre; Eccleston, Paul; Hartogh, Paul; Leconte, Jérémy; Micela, Giusi; Ollivier, Marc; Pilbratt, Göran; Puig, Ludovic; Turrini, Diego; Vandenbussche, Bart; Wolkenberg, Paulina; ARIEL consortium, ARIEL ESA Study Team
2016-10-01
The Atmospheric Remote-Sensing Infrared Exoplanet Large-survey (ARIEL) is one of the three candidate missions selected by the European Space Agency (ESA) for its next medium-class science mission, due for launch in 2026. The goal of the ARIEL mission is to investigate the atmospheres of several hundred planets orbiting distant stars in order to address the fundamental questions of how planetary systems form and evolve. During its four-year mission (with a potential extension to six years) ARIEL will observe 500+ exoplanets in the visible and the infrared with its meter-class telescope in L2. ARIEL targets will range from Jupiter- and Neptune-size down to super-Earth and Earth-size planets around different types of stars. The main focus of the mission will be on hot and warm planets orbiting very close to their star, as they represent a natural laboratory in which to study the chemistry and formation of exoplanets. In cooler planets, different gases separate out through condensation and sinking into distinct cloud layers. The scorching heat experienced by hot exoplanets overrides these processes and keeps all molecular species circulating throughout the atmosphere. The ARIEL mission concept has been developed by a consortium of more than 50 institutes from 12 countries, including the UK, France, Italy, Germany, the Netherlands, Poland, Spain, Belgium, Austria, Denmark, Ireland and Portugal. The analysis of ARIEL spectra and photometric data will allow the chemical fingerprints of gases and condensates in the planets' atmospheres to be extracted, including the elemental composition for the most favorable targets. It will also enable the study of the thermal and scattering properties of the atmosphere as the planet orbits its star. ARIEL will have an open data policy, enabling rapid access by the general community to the high-quality exoplanet spectra that the core survey will deliver.
Assessment of the computer code COBRA/CFTL
Energy Technology Data Exchange (ETDEWEB)
Baxi, C. B.; Burhop, C. J.
1981-07-01
The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications and compares the results of the two codes for three cases spanning conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented the laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output and the code listing are included in the Appendices.
International Nuclear Information System (INIS)
The computer code SUPERFISH has been implemented on the CYBER computer system at IEAv. This code locates electromagnetic modes in rf resonant cavities. The handling of the boundary conditions and of the driving point was optimized. A computer program (ARRUELA) was developed to simplify SUPERFISH analysis of the rf properties of disc-and-washer cavities. This version of SUPERFISH showed satisfactory performance under tests. (Author)
MMA, A Computer Code for Multi-Model Analysis
Energy Technology Data Exchange (ETDEWEB)
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
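The information-criterion arithmetic that MMA automates can be shown on toy numbers: compute AIC (or BIC) for each calibrated model from its sum of squared weighted residuals, then convert criterion differences into model weights for ranking and model averaging. The residual sums, observation counts, and parameter counts below are invented.

```python
import math

# Least-squares forms of AIC and BIC (up to an additive constant) from a
# model's sum of squared weighted residuals (ssr), n observations, and k
# estimated parameters; then Akaike-type weights w_i ~ exp(-delta_i / 2).
def aic(ssr, n, k):
    return n * math.log(ssr / n) + 2 * k

def bic(ssr, n, k):
    return n * math.log(ssr / n) + k * math.log(n)

def weights(criteria):
    cmin = min(criteria)
    raw = [math.exp(-(c - cmin) / 2) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

# Two hypothetical calibrated models of the same system: (ssr, n, k).
models = {"steady-state": (4.2, 30, 3), "transient": (3.1, 30, 6)}
a = [aic(ssr, n, k) for ssr, n, k in models.values()]
w = weights(a)
print(dict(zip(models, [round(x, 3) for x in w])))
```

The weights can then multiply each model's parameter estimates and predictions to produce the model-averaged quantities the report describes.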
New design studies for TRIUMF's ARIEL High Resolution Separator
Maloney, J. A.; Baartman, R.; Marchetto, M.
2016-06-01
As part of its new Advanced Rare IsotopE Laboratory (ARIEL), TRIUMF is designing a novel High Resolution Separator (HRS) (Maloney et al., 2015) to separate rare isotopes. The HRS has a 180° bend, separated into two 90° magnetic dipoles, bend radius 1.2 m, with an electrostatic multipole corrector between them. Second order correction comes mainly from the dipole edge curvatures, but is intended to be fine-tuned with a sextupole component and a small octupole component in the multipole. This combination is designed to achieve 1:20,000 resolution for a 3 μm (horizontal) and 6 μm (vertical) emittance. A design for the HRS dipole magnets achieves both radial and integral flatness goals of separation, matching and aberration correction. Field simulations from the OPERA-3D (OPERA) [2] models of the dipole magnets are used in COSY Infinity (COSY) (Berz and Makino, 2005) [3] to find and optimize the transfer maps to 3rd order and study residual nonlinearities to 8th order.
International Nuclear Information System (INIS)
This report is a user's manual for MLSOIL (Multiple Layer SOIL model) and DFSOIL (Dose Factors for MLSOIL) and a documentation of the computational methods used in those two computer codes. MLSOIL calculates an effective ground surface concentration to be used in computations of external doses. This effective ground surface concentration is equal to (the computed dose in air from the concentration in the soil layers)/(the dose factor for computing dose in air from a plane). MLSOIL implements a five compartment linear-transfer model to calculate the concentrations of radionuclides in the soil following deposition on the ground surface from the atmosphere. The model considers leaching through the soil as well as radioactive decay and buildup. The element-specific transfer coefficients used in this model are a function of the K_d and environmental parameters. DFSOIL calculates the dose in air per unit concentration at 1 m above the ground from each of the five soil layers used in MLSOIL and the dose per unit concentration from an infinite plane source. MLSOIL and DFSOIL have been written to be part of the Computerized Radiological Risk Investigation System (CRRIS) which is designed for assessments of the health effects of airborne releases of radionuclides. 31 references, 3 figures, 4 tables
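The five-compartment linear-transfer idea can be sketched as a chain of soil layers with first-order leaching and radioactive decay; the transfer and decay rates below are placeholders, not MLSOIL's element-specific coefficients.

```python
# Five soil layers in series: first-order leaching carries activity
# downward while radioactive decay removes it everywhere. Rates are
# hypothetical placeholders.
lam = 0.025                      # decay constant, 1/yr
k = 0.2                          # leaching rate between layers, 1/yr
c = [1.0, 0.0, 0.0, 0.0, 0.0]    # initial surface deposit, all in layer 0

dt = 0.001
for _ in range(int(10 / dt)):    # simulate 10 years by explicit Euler
    flux = [k * ci for ci in c]  # downward leaching out of each layer
    c = [c[i] + dt * ((flux[i - 1] if i > 0 else 0.0) - flux[i] - lam * c[i])
         for i in range(5)]
print(c, sum(c))                 # activity remaining in the soil column
```

The resulting layer concentrations are what DFSOIL's per-layer dose factors would then convert into the effective ground-surface concentration.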
Numerical computation of phase distribution in two fluid flow using the two-dimensional TOFFEA code
International Nuclear Information System (INIS)
A new iterative approach has been developed for multidimensional computational analysis of two fluid flow. It has been implemented and tested in a two-dimensional computer code. Parametric surveys are described to illustrate that this code rationally predicts separation of two fluid flows under gravitational and centrifugal influences. Comparisons are made between behaviour computed by the code, and results reported in experimental studies of air and water flowing in elbows and pipes. Plans for extending the code to three dimensions are discussed, as are methods for incorporating an improved model of turbulence
TRANS4: a computer code calculation of solid fuel penetration of a concrete barrier
International Nuclear Information System (INIS)
The computer code, TRANS4, models the melting and penetration of a solid barrier by a solid disc of fuel following a core disruptive accident. This computer code has been used to model fuel debris penetration of basalt, limestone concrete, basaltic concrete, and magnetite concrete. Sensitivity studies were performed to assess the importance of various properties on the rate of penetration. Comparisons were made with results from the GROWS II code
Compendium of computer codes for the safety analysis of fast breeder reactors
International Nuclear Information System (INIS)
The objective of the compendium is to provide the reader with a guide which briefly describes many of the computer codes used for liquid metal fast breeder reactor safety analyses, since it is for this system that most of the codes have been developed. The compendium is designed to address the following frequently asked questions from individuals in licensing and research and development activities: (1) What does the code do? (2) To what safety problems has it been applied? (3) What are the code's limitations? (4) What is being done to remove these limitations? (5) How does the code compare with experimental observations and other code predictions? (6) What reference documents are available?
Benchmark and partial validation testing of the FLASH computer code, Version 3.0
Energy Technology Data Exchange (ETDEWEB)
Martian, P.; Smith, C.S.
1993-09-01
This document presents methods and results of benchmark testing (i.e., code-to-code comparisons) and partial validation testing (i.e., tests which compare field data to the computer-generated solutions) of the FLASH computer code, Version 3.0, which were conducted to determine if the code is ready for performance assessment studies of the Radioactive Waste Management Complex. Three test problems are presented that were designed to check computational efficiency, the accuracy of the numerical algorithms, and the capability of the code to simulate diverse hydrological conditions. These test problems were designed specifically to test the code's ability to simulate (a) seasonal infiltration in response to meteorological conditions, (b) changing water-table elevations due to a transient areal source of water (i.e., influx from spreading basins), and (c) infiltration into fractured basalt as a result of seasonal water in drainage ditches. The FLASH simulations generally compared well with the benchmark codes, indicating good stability and acceptable computational efficiency while simulating a wide range of conditions. The code appears operational for modeling both unsaturated and saturated flow in fractured, heterogeneous porous media. However, the code failed to converge when an unsaturated-to-saturated transition occurred. Consequently, the code should not be used when this condition occurs or is expected to occur, i.e., when perched water is present or when infiltration rates exceed the saturated conductivity of the soil.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
Optimizing Computational Efficiency and User Convenience in Plasma Simulation Codes
Barnes, Christopher W.
1985-10-01
It is usually important to write plasma simulation codes in such a way that they execute efficiently and are convenient to use. I discuss here practical techniques to achieve this goal. Numerical algorithms must be well formulated and advantage taken of machine architecture in casting the algorithm into a high level language such as Fortran. The advantages of writing critical routines in Assembler are discussed. For large simulation codes, disks must often be used as a temporary store for working data. Efficient methods for doing this are presented. Codes must not only be well organized for ease of implementation and maintenance, but also for ease of use. Ways are suggested for packaging codes such that setup, batch production, restarting and diagnostic postprocessing is facilitated. Particular emphasis is placed on graphics postprocessors, since they must be used in real time with graphics terminals as well as with hardcopy devices.
Quantum error correcting codes and one-way quantum computing: Towards a quantum memory
Schlingemann, D
2003-01-01
For realizing a quantum memory we suggest to first encode quantum information via a quantum error correcting code and then concatenate combined decoding and re-encoding operations. This requires that the encoding and the decoding operation can be performed faster than the typical decoherence time of the underlying system. The computational model underlying the one-way quantum computer, which was introduced by Hans Briegel and Robert Raussendorf, provides a suitable concept for a fast implementation of quantum error correcting codes. It is shown explicitly in this article how encoding and decoding operations for stabilizer codes can be realized on a one-way quantum computer. This is based on the graph code representation for stabilizer codes, on the one hand, and the relation between cluster states and graph codes, on the other hand.
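The classical skeleton underneath the simplest stabilizer code, the three-bit repetition code, already shows the key move: parity checks (the classical analogue of stabilizer measurements) locate an error without reading the encoded value. A quantum version would act on state vectors; this sketch only tracks classical bits.

```python
# Three-bit repetition code: the parity checks c0^c1 and c1^c2 play the
# role of the stabilizer measurements Z1Z2 and Z2Z3. The syndrome locates
# a single bit-flip without revealing the encoded data bit.
def encode(bit):
    return [bit, bit, bit]

def syndrome(c):
    return (c[0] ^ c[1], c[1] ^ c[2])

def correct(c):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(c))
    if flip is not None:
        c[flip] ^= 1
    return c

def decode(c):
    return max(set(c), key=c.count)   # majority vote

word = encode(1)
word[2] ^= 1                          # inject a single bit-flip error
print(decode(correct(word)))          # -> 1, the error is corrected
```

Stabilizer codes generalize exactly this pattern: commuting check operators whose measurement outcomes identify the error while leaving the encoded state untouched.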
Application of computational fluid dynamics methods to improve thermal hydraulic code analysis
Sentell, Dennis Shannon, Jr.
A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.
Research on parallel computing of MCNP code based on MPI
International Nuclear Information System (INIS)
This paper introduces a method for building a parallel computing platform from ordinary PCs and the mpi.nt.1.2.5 software, based on the MPI (Message Passing Interface) standard specification, on the Windows operating system. The parallel computing of MCNP on this platform is realized, and the parallel computing performance of MCNP is analyzed in this paper. (authors)
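The parallelization strategy rests on the statistical independence of Monte Carlo histories; a toy sketch follows, with a plain Python loop standing in for MPI ranks and a hypothetical pi-estimation tally in place of MCNP transport physics:

```python
import random

# Illustrative sketch (not MCNP itself): Monte Carlo particle histories are
# statistically independent, so a run of N histories can be split across
# P MPI ranks, each rank tallying its share with an independent random
# stream, and the partial tallies combined (an MPI_Reduce in practice).
def run_histories(n, seed):
    """Toy tally: count samples falling inside the unit quarter-circle."""
    rng = random.Random(seed)
    return sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))

total_histories, ranks = 100_000, 4
per_rank = total_histories // ranks
# Each "rank" runs independently; here the loop stands in for MPI processes.
partial = [run_histories(per_rank, seed=rank) for rank in range(ranks)]
pi_estimate = 4.0 * sum(partial) / total_histories   # the reduce step
assert abs(pi_estimate - 3.14159) < 0.05
```

Because the ranks only communicate at the final reduction, speedup is close to linear in the number of processes, which is what makes PC clusters attractive for MCNP-style codes.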
International Nuclear Information System (INIS)
The CITHAN computer code was developed at IPEN (Instituto de Pesquisas Energeticas e Nucleares) to link the HAMMER computer code with a fuel depletion routine and to provide neutron cross sections in the format required by the CITATION code. The problem arose from the efforts to adapt the new version, denominated HAMMER-TECHION, to the routine referred to. The HAMMER-TECHION computer code was elaborated by the Haifa Institute, Israel, within a project with EPRI. This version is at CNEN to be used in multigroup constant generation for neutron diffusion calculations within the scope of the new methodology to be adopted by CNEN. The theoretical formulation of the CITHAN computer code, tests and modifications are described. (Author)
X-ray binary systems - Ariel V SSI observations
International Nuclear Information System (INIS)
The basis of our current theoretical understanding of galactic x-ray sources is reviewed. Models are outlined involving close binary systems containing a compact object accreting mass which has been lost from the nondegenerate star by a variety of mechanisms. The present status of galactic x-ray astronomy is discussed, with emphasis on the links between established observational categories and the characteristics of the proposed models. Observational results, consisting primarily of extended x-ray light curves derived from analysis of Ariel V SSI data, are presented for two main classes of galactic x-ray source: (i) high-mass x-ray binaries containing an early-type giant or supergiant star; (ii) low-mass x-ray binaries in which the nondegenerate star is a late-type dwarf. For the high-mass binaries emphasis is placed on the determination and improvement of the orbital parameters; for the low-mass binaries, where a less complete picture is available, the discussion centres on the type of system involved, taking into account the optical observations of the source. Finally, the properties of two further categories - the sources in the galactic bulge and those associated with dwarf novae - are discussed as examples of rather different types of galactic x-ray emitter. In the case of the galactic bulge sources, current observations have so far not led to a clear picture of the nature of the systems involved; indeed, their binary membership is not established. X-ray emission from dwarf novae and related objects is a relatively recent discovery and represents the opening up of a new field of galactic x-ray astronomy. (author)
Qualification of the new version of HAMMER computer code
International Nuclear Information System (INIS)
(HTEC) code were tested with a great number of different types of experiments. These experiments cover the most important parameters in neutronic calculations, such as the cell geometry and composition. The HTEC code results have been analysed and compared with experimental data, with results given in the literature, and with results simulated by the HAMMER and LEOPARD codes. The quantities used for the analysis were Keff and the following integral parameters: R28 - ratio of epicadmium-to-subcadmium 238U captures; D25 - ratio of epicadmium-to-subcadmium 235U fissions; D28 - ratio of 238U fissions to 235U fissions; C - ratio of 238U captures to 235U fissions; RC02 - ratio of epicadmium-to-subcadmium 232Th captures. The analysis shows that the results given by the code are in good agreement with the experimental data and with the results given by the other codes. The calculations performed with the detailed resonance profile tabulations of the plutonium isotopes show worse results than those obtained with the resonance parameters. In almost all the simulated cases, the HTEC results are closer to the experimental data than the HAMMER results when the detailed resonance profile tabulations of the plutonium isotopes are not used. (Author)
Development of a graphical interface computer code for reactor fuel reloading optimization
International Nuclear Information System (INIS)
This report presents the results of a project performed in 2007. The aim of the project was to develop a graphical interface computer code that allows refueling engineers to design fuel reloading patterns for a research reactor using a simulated graphical model of the reactor core. In addition, the code can perform refueling optimization calculations based on genetic algorithms as well as simulated annealing. The computer code was verified on a sample problem based on operational and experimental data of the Dalat research reactor. This code can play a significant role in in-core fuel management practice at nuclear research reactor centers and in training. (author)
Quantum computation with topological codes from qubit to topological fault-tolerance
Fujii, Keisuke
2015-01-01
This book presents a self-consistent review of quantum computation with topological quantum codes. The book covers everything required to understand topological fault-tolerant quantum computation, ranging from the definition of the surface code to topological quantum error correction and topological fault-tolerant operations. The underlying basic concepts and powerful tools, such as universal quantum computation, quantum algorithms, stabilizer formalism, and measurement-based quantum computation, are also introduced in a self-consistent way. The interdisciplinary fields between quantum information and other fields of physics such as condensed matter physics and statistical physics are also explored in terms of the topological quantum codes. This book thus provides the first comprehensive description of the whole picture of topological quantum codes and quantum computation with them.
Hartenstein, Richard G., Jr.
1985-08-01
Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and a traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.
Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide
International Nuclear Information System (INIS)
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Second Generation Integrated Composite Analyzer (ICAN) Computer Code
Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.
1993-01-01
This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.
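As a flavor of the micromechanics step (the actual ICAN equations are given in the manual's appendix), here is a minimal rule-of-mixtures sketch with hypothetical constituent properties:

```python
# Illustrative micromechanics sketch (not ICAN's actual FORTRAN 77
# routines): the simplest rule-of-mixtures estimates for the longitudinal
# and transverse moduli of a unidirectional ply, from constituent
# fiber/matrix moduli and the fiber volume fraction.
def ply_moduli(e_fiber, e_matrix, v_fiber):
    """Return (E11, E22): moduli along and across the fiber direction."""
    e11 = e_fiber * v_fiber + e_matrix * (1.0 - v_fiber)          # parallel
    e22 = 1.0 / (v_fiber / e_fiber + (1.0 - v_fiber) / e_matrix)  # series
    return e11, e22

# Hypothetical graphite/epoxy values: Ef = 230 GPa, Em = 3.5 GPa, Vf = 0.6.
e11, e22 = ply_moduli(230.0, 3.5, 0.6)
assert e11 > e22   # the fiber-dominated direction is always stiffer
```

ICAN's models go well beyond this (hygrothermal effects, fabrication factors, full laminate theory), but the ply-level homogenization has this general shape.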
Computational Participation: Understanding Coding as an Extension of Literacy Instruction
Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.
2016-01-01
Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…
International Nuclear Information System (INIS)
As a result of a request from Commissioner V. Gilinsky to investigate in detail the causes of an error discovered in a vendor Emergency Core Cooling System (ECCS) computer code in March, 1978, the staff undertook an extensive investigation of the vendor quality assurance practices applied to safety analysis computer code development and use. This investigation included inspections of code development and use practices of the four major Light Water Reactor Nuclear Steam Supply System vendors and a major reload fuel supplier. The conclusion reached by the staff as a result of the investigation is that vendor practices for code development and use are basically sound. A number of areas were identified, however, where improvements to existing vendor procedures should be made. In addition, the investigation also addressed the quality assurance (QA) review and inspection process for computer codes and identified areas for improvement
DCHAIN 2: a computer code for calculation of transmutation of nuclides
International Nuclear Information System (INIS)
DCHAIN2 is a one-point depletion code which solves the coupled equations of radioactive growth and decay for a large number of nuclides by the Bateman method. A library of nuclear data for 1170 fission products has been prepared for providing input data to this code. The Bateman method surpasses the matrix exponential method in computational accuracy and in saving computer storage for the code. However, most existing computer codes based on the Bateman method have shown serious drawbacks in treating cyclic chains and more than a few specific types of decay chains. The present code has surmounted the above drawbacks by improving the code FP-S, and has the following characteristics: (1) The code can treat any type of transmutation through decays or neutron induced reactions. Multiple decays and reactions are allowed for a nuclide. (2) Unknown decay energy in the nuclear data library can be estimated. (3) The code constructs the decay scheme of each nuclide internally and breaks it up into linear chains. Nuclide names, decay types and branching ratios of mother nuclides are necessary as the input data for each nuclide. The order of nuclides in the library is arbitrary because each nuclide is distinguished by its nuclide name. (4) The code can treat cyclic chains by an approximation. A library of nuclear data has been prepared for 1170 fission products, including the data for half-lives, decay schemes, neutron absorption cross sections, fission yields, and disintegration energies. While DCHAIN2 is used to compute the compositions, radioactivity and decay heat of fission products, the gamma-ray spectrum of fission products can also be computed by a separate code, FPGAM, using the composition obtained from DCHAIN2. (J.P.N.)
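The Bateman method at the heart of DCHAIN2 has a closed form for a linear decay chain; a minimal sketch (hypothetical decay constants, not DCHAIN2's implementation):

```python
import math

# Illustrative sketch of the Bateman solution for a linear decay chain
# A -> B -> ... (decay constants lam[i], all distinct); DCHAIN2 breaks
# general decay schemes into such linear chains and sums their
# contributions.
def bateman(n0, lam, t):
    """Amount of the last nuclide of a linear chain at time t."""
    coeff = n0 * math.prod(lam[:-1])
    total = 0.0
    for i, li in enumerate(lam):
        denom = math.prod(lj - li for j, lj in enumerate(lam) if j != i)
        total += math.exp(-li * t) / denom
    return coeff * total

lam = [0.5, 0.1]          # hypothetical decay constants (1/s) for A and B
t = 3.0
n_b = bateman(1.0, lam, t)
# Cross-check against the closed-form two-member solution.
expected = lam[0] / (lam[1] - lam[0]) * (math.exp(-lam[0]*t) - math.exp(-lam[1]*t))
assert abs(n_b - expected) < 1e-12
```

The cyclic chains mentioned in the abstract break this closed form (a nuclide reappears in its own ancestry), which is why DCHAIN2 handles them by an approximation.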
Computer codes used during upgrading activities at MINT TRIGA reactor
Energy Technology Data Exchange (ETDEWEB)
Mohammad Suhaimi Kassim; Adnan Bokhari; Mohd. Idris Taib [Malaysian Institute for Nuclear Technology Research, Kajang (Malaysia)
1999-10-01
MINT TRIGA Reactor is a 1-MW swimming pool nuclear research reactor commissioned in 1982. In 1993, a project was initiated to upgrade the thermal power to 2 MW. The IAEA assistance was sought to assist the various activities relevant to an upgrading exercise. For neutronics calculations, the IAEA has provided expert assistance to introduce the WIMS code, TRIGAP, and EXTERMINATOR2. For thermal-hydraulics calculations, PARET and RELAP5 were introduced. Shielding codes include ANISN and MERCURE. However, in the middle of 1997, MINT has decided to change the scope of the project to safety upgrading of the MINT Reactor. This paper describes some of the activities carried out during the upgrading process. (author)
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for the electron, H, He, C, O, N system of elements. In all, 24 chemical species are included.
International Nuclear Information System (INIS)
In this paper two new numerical methods for computing two-phase flow transients are presented. One method uses a semi-implicit technique and the other a fully implicit technique. A computer module, Triton, has been developed using these two methods. The physical modeling of this code is the two-fluid flow model with six partial differential equations; the various transfer terms between wall and fluid or between liquid and steam are given by correlations implemented in the Poseidon Nuclear Safety Analysis Computer code system.
GATE: computation code for medical imagery, radiotherapy and dosimetry
International Nuclear Information System (INIS)
The authors present the GATE code, simulation software based on the Geant4 development environment developed by CERN (the European Organization for Nuclear Research), which enables Monte Carlo simulations to be developed for tomography imagery using ionizing radiation, and radiotherapy examinations (conventional and hadron therapy) to be simulated. The authors concentrate on the use of medical imagery in carcinology. They comment on some results obtained in nuclear imagery and in radiotherapy.
GATO Code Modification to Compute Plasma Response to External Perturbations
Turnbull, A. D.; Chu, M. S.; Ng, E.; Li, X. S.; James, A.
2006-10-01
It has become increasingly clear that the plasma response to an external nonaxisymmetric magnetic perturbation cannot be neglected in many situations of interest. This response can be described as a linear combination of the eigenmodes of the ideal MHD operator. The eigenmodes of the system can be obtained numerically with the GATO ideal MHD stability code, which has been modified for this purpose. A key requirement is the removal of inadmissible continuum modes. For finite hybrid element codes such as GATO, a prerequisite for this is their numerical restabilization by the addition of small numerical terms to δW, to cancel the analytic numerical destabilization. In addition, the robustness of the code was improved and the solution method sped up by use of the SuperLU package to facilitate calculation of the full set of eigenmodes in a reasonable time. To treat resonant plasma responses, the finite element basis has been extended to include eigenfunctions with finite jumps at rational surfaces. Some preliminary numerical results for DIII-D equilibria will be given.
A user's guide to GENEX, SDR, and related computer codes
International Nuclear Information System (INIS)
This series of codes will be of use in a variety of fields connected with reactor physics, examples of which are: (a) in the evaluation of nuclear data, in which the RESP-GENEX part of the system would be used to examine and produce a cross-section set based on the theories and experiments of the nuclear physicists. The approximations in GENEX must however be kept in mind, the chief one being the diagonal expansion approximation of the inverse level matrix, originally due to Bethe, which precludes a correct representation of strong interference effects (the Lynn effect); (b) in the calculation of Doppler effects or other resonance effects such as establishing equivalence relationships, approximate resonance treatments, etc. A given set of tapes generated by GENEX (or by some other means in the GENEX format) would be used to run the SDR code. The SDR code produces cross-sections and reaction rates over any group structure within its working range. In situations with complex geometries the spatial representation of SDR is liable to be inadequate, and in these circumstances it is recommended that the reaction rates are not used directly but instead the cross-sections are used in a more accurate spatial calculation to produce revised reaction rates; (c) finally, the system may be used for a variety of special investigations such as an analysis of the variance of the Doppler coefficient in fast reactors or the accurate assessment of ideal integral measurements (for instance, the Aldermaston sphere experiment).
International Nuclear Information System (INIS)
This report presents the NRC staff with a tool for assessing the potential effects of accidental releases of radioactive materials and toxic substances on habitability of nuclear facility control rooms. The tool is a computer code that estimates concentrations at nuclear facility control room air intakes given information about the release and the environmental conditions. The name of the computer code is EXTRAN. EXTRAN combines procedures for estimating the amount of airborne material, a Gaussian puff dispersion model, and the most recent algorithms for estimating diffusion coefficients in building wakes. It is a modular computer code, written in FORTRAN-77, that runs on personal computers. It uses a math coprocessor, if present, but does not require one. Code output may be directed to a printer or disk files. 25 refs., 8 figs., 4 tabs
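A minimal sketch of the Gaussian puff idea EXTRAN is built around (not its actual FORTRAN-77 routines; the diffusion coefficients, which EXTRAN derives from building-wake algorithms, are assumed as inputs here):

```python
import math

# Illustrative Gaussian puff sketch: the instantaneous concentration from
# a puff of Q grams whose centre is displaced (dx, y) horizontally from
# the receptor and released at height h, with a ground-reflection image
# term. sigma_x/y/z are the diffusion coefficients (assumed inputs).
def puff_concentration(q, dx, y, z, h, sx, sy, sz):
    """Concentration (g/m^3) at height z, offsets (dx, y) from the puff centre."""
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    horiz = math.exp(-dx**2 / (2*sx**2)) * math.exp(-y**2 / (2*sy**2))
    vert = (math.exp(-(z - h)**2 / (2*sz**2))
            + math.exp(-(z + h)**2 / (2*sz**2)))   # image source at -h
    return norm * horiz * vert

# Hypothetical intake 2 m above grade, directly under the puff centre.
c_axis = puff_concentration(q=1.0, dx=0.0, y=0.0, z=2.0, h=2.0,
                            sx=20.0, sy=20.0, sz=10.0)
c_off = puff_concentration(q=1.0, dx=0.0, y=50.0, z=2.0, h=2.0,
                           sx=20.0, sy=20.0, sz=10.0)
assert c_axis > c_off > 0.0   # concentration falls off away from the axis
```

In a puff model the release is tracked as a sequence of such puffs advected by the wind, with the sigmas growing in time; the code's wake algorithms matter because near buildings the sigmas grow much faster than open-terrain correlations predict.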
A Computer Code For Evaluation of Design Parameters of Concrete Piercing Earth Shock Missile Warhead
Roy, P. K.; Ramarao, K.
1985-01-01
A simple and reliable computer code has been devised for evaluating various design parameters and predicting the penetration performance of a concrete-piercing earth shock missile warhead; it will be useful to designers of earth penetrating weapon systems.
Structural dynamics in LMFBR containment analysis: a brief survey of computational methods and codes
Energy Technology Data Exchange (ETDEWEB)
Chang, Y.W.; Gvildys, J.
1977-01-01
In recent years, the use of computer codes to study the response of the primary containment of large liquid-metal fast breeder reactors (LMFBR) under postulated accident conditions has been adopted by most fast reactor projects. Since the first introduction of the REXCO-H containment code in 1969, a number of containment codes have evolved and been reported in the literature. The paper briefly summarizes the various numerical methods commonly used in containment analysis computer programs. They are compared on the basis of the truncation errors resulting from the numerical approximation, the method of integration, the resolution of the computed results, and the ease of programming in computer codes. The aim of the paper is to provide enough information that an analyst can suitably define his choice of method, and hence his choice of programs.
LWR-WIMS, a computer code for light water reactor lattice calculations
International Nuclear Information System (INIS)
LWR-WIMS is a comprehensive scheme of computation for studying the reactor physics aspects and burnup behaviour of typical lattices of light water reactors. This report describes the physics methods that have been incorporated in the code, and the modifications that have been made since the code was issued in 1972. (U.K.)
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
HADES. A computer code for fast neutron cross section from the Optical Model
International Nuclear Information System (INIS)
A FORTRAN V computer code for UNIVAC 1108/6 using a local Optical Model with spin-orbit interaction is described. The code calculates fast neutron cross sections, angular distribution, and Legendre moments for heavy and intermediate spherical nuclei. It allows for the possibility of automatic variation of potential parameters for experimental data fitting. (Author) 55 refs
Holbrook, M. Cay; MacCuspie, P. Ann
2010-01-01
Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…
PAD: a one-dimensional, coupled neutronic-thermodynamic-hydrodynamic computer code
International Nuclear Information System (INIS)
Theoretical and numerical foundations, utilization guide, sample problems, and program listing and glossary are given for the PAD computer code which describes dynamic systems with interactive neutronics, thermodynamics, and hydrodynamics in one-dimensional spherical, cylindrical, and planar geometries. The code has been applied to prompt critical excursions in various fissioning systems (solution, metal, LMFBR, etc.) as well as to nonfissioning systems
Performance analysis of VVER-type fuel rods with the STOFFEL-1 computer code
International Nuclear Information System (INIS)
The main features of the fuel rod performance modelling computer code STOFFEL-1 are described. Submodels of the code are briefly characterized, and some results of comparisons between model predictions and experiments are presented. Examples of modelling calculations are given for some thermo-mechanical values of VVER-1000 fuel rods. (author)
Comparison of different computer platforms for running the Versatile Advection Code
Toth, G.; Keppens, R.; Sloot, P.; Bubak, M.; Hertzberger, B.
1998-01-01
The Versatile Advection Code is a general tool for solving hydrodynamical and magnetohydrodynamical problems arising in astrophysics. We compare the performance of the code on different computer platforms, including work stations and vector and parallel supercomputers. Good parallel scaling can be a
Proposed standards for peer-reviewed publication of computer code
Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...
Code and papers: computing publication patterns in the LHC era
CERN. Geneva
2012-01-01
Publications in scholarly journals establish the body of knowledge deriving from scientific research; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies. This presentation reviews the evolution of computing-oriented publications in HEP following the start of operation of LHC. Quantitative analyses are illustrated, which document the production of scholarly papers on computing-related topics by HEP experiments and core tools projects (including distributed computing R&D), and the citations they receive. Several scientometric indicators are analyzed to characterize the role of computing in HEP literature. Distinctive features of scholarly publication production in the software-oriented and hardware-oriented experimental HEP communities are highlighted. Current patterns and trends are compared to the situation in previous generations' HEP experiments at LEP, Tevatron and B-factories. The results of this scientometric analysis document objec...
ARIEL: a program for the electronic transmission of data
Pérez, Carmen
1999-01-01
In June of last year, the Library Coordination Unit proposed to the Network the possibility of acquiring licenses for the Ariel program collectively, in order to negotiate a cheaper price.
Distribution of absorbed dose in human eye simulated by SRNA-2KG computer code
International Nuclear Information System (INIS)
Rapidly increasing performance of personal computers and the development of codes for proton transport based on Monte Carlo methods will very soon allow the introduction of computer-planned proton therapy as a normal activity in regular hospital procedures. A description of the SRNA code used for such applications and results of calculated distributions of proton absorbed dose in the human eye are given in this paper. (author)
A FORTRAN computer code for calculating flows in multiple-blade-element cascades
Mcfarland, E. R.
1985-01-01
A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.
Computation of Grobner basis for systematic encoding of generalized quasi-cyclic codes
Van, Vo Tam; Mita, Seiichi
2008-01-01
Generalized quasi-cyclic (GQC) codes form a wide and useful class of linear codes that includes quasi-cyclic codes, finite geometry (FG) low density parity check (LDPC) codes, and Hermitian codes. Although it is known that the systematic encoding of GQC codes is equivalent to the division algorithm in the theory of Grobner bases of modules, there has been no algorithm that computes a Grobner basis for all types of GQC codes. In this paper, we propose two algorithms to compute Grobner bases for GQC codes from their parity check matrices: an echelon canonical form algorithm and a transpose algorithm. Both algorithms require a sufficiently small number of finite-field operations, on the order of the third power of the code length. Each algorithm has its own characteristics: the first is composed of elementary methods, while the second is based on a novel formula and is faster than the first for high-rate codes. Moreover, we show that a serial-in serial-out encoder architecture for FG LDPC cod...
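The division-algorithm view of systematic encoding can be sketched in the simplest cyclic special case, the (7,4) Hamming code; this is illustrative only, not the paper's Grobner-basis algorithms for general GQC codes:

```python
# Illustrative sketch: systematic encoding by polynomial division in the
# simplest (cyclic) special case of a GQC code. The message polynomial is
# shifted by x^3 and divided by the generator g(x) = x^3 + x + 1 of the
# (7,4) Hamming code; the remainder becomes the parity part, so the
# message bits appear unchanged at the top of the codeword.
G = 0b1011  # g(x) = x^3 + x + 1, as a bit mask

def systematic_encode(msg4):
    """Encode 4 message bits (as an int) into a 7-bit systematic codeword."""
    rem = msg4 << 3                   # multiply the message by x^3
    for shift in range(6, 2, -1):     # polynomial long division over GF(2)
        if rem & (1 << shift):
            rem ^= G << (shift - 3)
    return (msg4 << 3) | rem          # message bits + parity remainder

cw = systematic_encode(0b1101)
assert cw == 0b1101001                # parity bits 001 appended
assert cw >> 3 == 0b1101              # systematic: message bits preserved
# Every codeword is divisible by g(x): re-dividing leaves zero remainder.
check = cw
for shift in range(6, 2, -1):
    if check & (1 << shift):
        check ^= G << (shift - 3)
assert check == 0
```

For general GQC codes the single generator polynomial is replaced by a Grobner basis of a module, but the encoding step is still this kind of division with remainder.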
Development of a one-dimensional computer code describing the convection in arbitrary flow networks
International Nuclear Information System (INIS)
The subject of this paper is, in the first place, the computer code LOOPY and the application of this code to the mathematical investigation of the flow behavior of the HHT demonstration plant during afterheat removal operation and upon failure of all afterheat removal circulators. Moreover, comparative calculations assuming failure of the afterheat removal were performed for the THTR in order to verify the code. The present code may be used to solve the problems mentioned above. In combination with the two-dimensional flow and heat code THERMIX and other code units, a modular code system may be established by means of which the flow and temperature behavior in HTR plants may be described with justifiable computing effort. Within such a code system the one-dimensional code takes over the duty of describing the loops adjacent to the core. The results obtained using the one-dimensional code must, however, be judged critically with respect to natural convection, where in part only small mass flows will be formed. (orig.)
Plutonium explosive dispersal modeling using the MACCS2 computer code
Energy Technology Data Exchange (ETDEWEB)
Steele, C.M.; Wald, T.L.; Chanin, D.I.
1998-11-01
The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, ``Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants``. A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
The computer code SEURBNUK/EURDYN. Pt. 2
International Nuclear Information System (INIS)
SEURBNUK-2 is a two-dimensional, axisymmetric, Eulerian, finite difference containment code. The numerical procedure adopted in SEURBNUK to solve the hydrodynamic equations is based on the semi-implicit ICE method, which is itself an extension of the MAC algorithm. SEURBNUK has a finite difference thin shell treatment for vessels and internal structures of arbitrary shape and includes the effects of the compressibility of the fluid. Fluid flow through porous media and porous structures can also be accommodated. SEURBNUK/EURDYN is an extension of SEURBNUK-2 in which the finite difference thin shell treatment is replaced by a finite element calculation for both thin and thick structures. This has been achieved by coupling the shell elements and axisymmetric triangular elements. Within the code, the equations of motion for the structures are solved quite separately from those for the fluid, and the timestep for the fluid can be an integer multiple of that for the structures. The interaction of the structures with the fluid is then considered as a modification to the coefficients in the pressure equations, the modifications naturally depending on the behaviour of the structures within the fluid cell. The code is limited to dealing with a single fluid, the coolant; the bubble and the cover gas are treated as cavities of uniform pressure calculated via appropriate pressure-volume-energy relationships. This manual describes the input data specifications needed for the execution of SEURBNUK/EURDYN calculations. After explaining the output facilities, information is included to aid users in avoiding some common pitfalls.
Plutonium explosive dispersal modeling using the MACCS2 computer code
International Nuclear Information System (INIS)
The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of Project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
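A Gaussian plume model of the kind MACCS2 builds on can be sketched in a few lines. The dispersion-coefficient fits below (the constants `a`, `b` and the distance scaling) are illustrative assumptions for a roughly neutral stability class, not the MACCS2 or Regulatory Guide 1.145 parameterization:

```python
import math

def gaussian_plume_centerline(Q, u, x, H=0.0, a=0.22, b=0.20):
    """Ground-level centerline concentration (g/m^3) of a Gaussian plume.

    Q : source term (g/s), u : wind speed (m/s), x : downwind distance (m),
    H : effective release height (m).  sigma_y and sigma_z use illustrative
    power-law fits; a real assessment would use site stability-class data.
    """
    sigma_y = a * x / math.sqrt(1.0 + 0.0001 * x)
    sigma_z = b * x / math.sqrt(1.0 + 0.0003 * x)
    # reflected Gaussian plume, receptor on the plume centerline at ground level
    return (Q / (math.pi * sigma_y * sigma_z * u)) * math.exp(-H**2 / (2.0 * sigma_z**2))
```

Concentration falls off with downwind distance and with release height, which is the behavior the sector-dose search in Regulatory Guide 1.145 exploits.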
Windtalking Computers: Frequency Normalization, Binary Coding Systems and Encryption
Zirkind, Givon
2009-01-01
The goal of this paper is to discuss the application of known techniques, knowledge and technology in a novel way, to encrypt computer and non-computer data. To date most computers use base 2, and most encryption systems use ciphering and/or an encryption algorithm to convert data into a secret message. The method of having the computer "speak another secret language", as used in human military secret communications, has never been imitated. The author presents the theory and several possible implementations of a method for computers for secret communications analogous to human beings using a secret language or speaking multiple languages. The kind of encryption scheme proposed significantly increases the complexity of, and the effort needed for, decryption. As every methodology has its drawbacks, so too does the proposed system: its data is not as compressed as base 2 would be. However, this is manageable and acceptable if the goal is very strong encryption: At least two methods and their ...
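The core idea, re-expressing data in a different radix over a secret symbol alphabet, can be sketched as follows. This is a toy illustration of the concept, not the author's actual scheme, and a bare substitution of this kind is of course not secure on its own:

```python
def encode(data: bytes, alphabet: str) -> str:
    """Re-express a byte string in base len(alphabet), using a secret,
    arbitrarily ordered symbol set as the 'language' (toy example).
    Note: leading zero bytes are lost in the integer conversion."""
    base = len(alphabet)
    n = int.from_bytes(data, "big")
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(alphabet[d])
    return "".join(reversed(digits)) or alphabet[0]

def decode(text: str, alphabet: str, length: int) -> bytes:
    """Invert encode(), given the shared secret alphabet and original length."""
    base = len(alphabet)
    n = 0
    for ch in text:
        n = n * base + alphabet.index(ch)
    return n.to_bytes(length, "big")
```

The alphabet (its symbols and their order) plays the role of the shared key; as the abstract notes, the base-7 representation here is longer than the base-2 original.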
Status Report on Hydrogen Management and Related Computer Codes
International Nuclear Information System (INIS)
In follow-up to the Fukushima Daiichi NPP accident, the Committee on the Safety of Nuclear Installations (CSNI) decided to launch several high-priority activities. At the 14th plenary meeting of the Working Group on Analysis and Management of Accidents (WGAMA), a proposal for a status paper on hydrogen generation, transport and mitigation under severe accident conditions was approved. The proposed activity is in line with the WGAMA mandate, and it was considered necessary to revisit the hydrogen issue. The report is broken down into five chapters and two appendixes. Chapter 1 provides background information for this activity and the expected topics defined by the WGAMA members. A general understanding of hydrogen behavior and control in severe accidents is discussed. A brief literature review is included in this chapter to summarize the progress obtained from the early US NRC-sponsored research on hydrogen and from recent international OECD- or EC-sponsored projects on hydrogen-related topics (generation, distribution, combustion and mitigation). Chapter 2 provides a general overview of the various reactor designs of Western PWRs, BWRs, Eastern European VVERs and PHWRs (CANDUs). The purpose is to understand the containment design features in relation to hydrogen management measures. Chapter 3 provides a detailed description of national requirements on hydrogen management and hydrogen mitigation measures inside the containment and in other places (e.g., annulus space, secondary buildings, spent fuel pool, etc.). This is followed by discussions of hydrogen analysis approaches, the application of safety systems (e.g., spray, containment ventilation, local air coolers, suppression pool, and latch systems), and hydrogen measurement strategies, as well as lessons learnt from the Fukushima Daiichi nuclear power accident. Chapter 4 provides an overview of various codes that are being used for hydrogen risk assessment, and the codes' capabilities and validation status in terms of hydrogen related
Focardi, M.; Pace, E.; Colomé, J.; Ribas, I.; Rataj, M.; Ottensamer, R.; Farina, M.; Di Giorgio, A. M.; Wawer, P.; Pancrazzi, M.; Noce, V.; Pezzuto, S.; Morgante, G.; Artigues, B.; Sierra-Roig, C.; Gesa, L.; Eccleston, P.; Crook, M.; Micela, G.
2016-07-01
The ARIEL mission has been proposed to ESA by a European Consortium as the first space mission to extensively perform remote sensing of the atmospheres of a well-defined set of warm and hot transiting gas giant exoplanets, whose temperatures range between ~600 K and 3000 K. ARIEL will observe a large number (~500) of warm and hot transiting gas giants, Neptunes and super-Earths around a range of host star types, using transit spectroscopy in the ~2-8 μm spectral range and broad-band photometry in the NIR and optical. ARIEL will target planets hotter than 600 K to take advantage of their well-mixed atmospheres, which should show minimal condensation and sequestration of high-Z materials and thus reveal their bulk and elemental composition. One of the major motivations for exoplanet characterisation is to understand the probability of occurrence of habitable worlds, i.e. worlds suitable for surface liquid water. While ARIEL will not study habitable planets, its major contribution to this topic will result from its capability to detect the presence of atmospheres on many terrestrial planets outside the habitable zone and, in many cases, to characterise them. This represents a fundamental breakthrough in understanding the physical and chemical processes of a large sample of exoplanet atmospheres as well as their bulk properties, and an opportunity to probe in-space technology. The ARIEL infrared spectrometer (AIRS) provides data on the atmospheric composition; these data are acquired and processed by an On-Board Data Handling (OBDH) system including the Cold Front End Electronics (CFEE) and the Instrument Control Unit (ICU). The Telescope Control Unit (TCU) is also included inside the ICU. The latter is directly connected to the Control and Data Management Unit (CDMU) on board the Service Module (SVM). The general hardware architecture and the application software of the ICU are described. The Fine Guidance Sensor (FGS) electronics and the Cooler Control Electronics are also presented.
TPASS: a gamma-ray spectrum analysis and isotope identification computer code
Energy Technology Data Exchange (ETDEWEB)
Dickens, J.K.
1981-03-01
The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.
Development of a system of computer codes for severe accident analyses and its applications
Energy Technology Data Exchange (ETDEWEB)
Chang, Soon Hong; Cheon, Moon Heon; Cho, Nam jin; No, Hui Cheon; Chang, Hyeon Seop; Moon, Sang Kee; Park, Seok Jeong; Chung, Jee Hwan [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1991-12-15
The objective of this study is to develop a system of computer codes for postulated severe accident analyses in Nuclear Power Plants. This system of codes is necessary to conduct individual plant examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, extract the plant-specific vulnerabilities to severe accidents, and at the same time identify ideas for enhancing overall accident resistance. The scope and contents of this study are as follows: development of a system of computer codes for severe accident analyses, and development of a severe accident management strategy.
Development of a system of computer codes for severe accident analyses and its applications
International Nuclear Information System (INIS)
The objective of this study is to develop a system of computer codes for postulated severe accident analyses in Nuclear Power Plants. This system of codes is necessary to conduct individual plant examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, extract the plant-specific vulnerabilities to severe accidents, and at the same time identify ideas for enhancing overall accident resistance. The scope and contents of this study are as follows: development of a system of computer codes for severe accident analyses, and development of a severe accident management strategy.
International Nuclear Information System (INIS)
A fast-running computer code, SHETEMP, has been developed for the analysis of reactivity-initiated accidents under constant core cooling conditions, i.e., fixed coolant temperature and heat transfer coefficient on the fuel rods. The code can predict core power and fuel temperature behaviours, and a control rod movement can be taken into account in the power control system. The objective of the code is to provide the fast-running capability, with easy handling, required for audit and design calculations in which a large number of parameter-survey calculations are performed within a short time. The fast-running capability was realized by neglecting the fluid flow calculation. SHETEMP was constructed by extracting and combining the reactor kinetics and heat conduction routines of the transient reactor thermal-hydraulic analysis code ALARM-P1, and by adding newly developed routines for the reactor power control system. Like ALARM-P1, SHETEMP solves the point reactor kinetics equations by the modified Runge-Kutta method and the one-dimensional transient heat conduction equations for slab and cylindrical geometries by the Crank-Nicolson method. The model for the reactor power control system takes into account the effects of a PID regulator and the control rod drive mechanism. To check for programming errors, results calculated by SHETEMP were compared with analytic solutions; based on these comparisons, the correctness of the programming was verified. A sample calculation for a typical model also confirmed that the code satisfies the fast-running capability required for audit and design calculations. This report is the code manual of SHETEMP. It contains descriptions of a sample problem, the code structure, the input data specifications and the usage of the code, in addition to the analytical models and the results of the code verification calculations. (author)
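The numerical core described above, point reactor kinetics advanced by a Runge-Kutta scheme, can be sketched with a single delayed-neutron group. Classic RK4 stands in here for SHETEMP's modified Runge-Kutta, and the kinetics constants are illustrative, not plant data:

```python
def point_kinetics(rho, t_end, dt=1e-4, beta=0.0065, Lam=1e-4, lam=0.08):
    """Integrate one-delayed-group point kinetics with classic RK4.

    dn/dt = ((rho - beta)/Lam) * n + lam * C
    dC/dt = (beta/Lam) * n - lam * C
    Starts from the critical equilibrium n = 1, C = beta/(lam*Lam),
    and returns the relative power n(t_end).
    """
    def f(n, C):
        return ((rho - beta) / Lam * n + lam * C,
                beta / Lam * n - lam * C)

    n, C = 1.0, beta / (lam * Lam)
    for _ in range(int(round(t_end / dt))):
        k1n, k1c = f(n, C)
        k2n, k2c = f(n + 0.5 * dt * k1n, C + 0.5 * dt * k1c)
        k3n, k3c = f(n + 0.5 * dt * k2n, C + 0.5 * dt * k2c)
        k4n, k4c = f(n + dt * k3n, C + dt * k3c)
        n += dt * (k1n + 2 * k2n + 2 * k3n + k4n) / 6.0
        C += dt * (k1c + 2 * k2c + 2 * k3c + k4c) / 6.0
    return n
```

With zero reactivity the power stays at its equilibrium value; a positive reactivity insertion produces the familiar prompt jump followed by a slow delayed-neutron rise.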
Proceedings of the conference on computer codes and the linear accelerator community
Energy Technology Data Exchange (ETDEWEB)
Cooper, R.K. (comp.)
1990-07-01
The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.
Proceedings of the conference on computer codes and the linear accelerator community
International Nuclear Information System (INIS)
The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned
Exact Gap Computation for Code Coverage Metrics in ISO-C
Richter, Dirk; 10.4204/EPTCS.80.4
2012-01-01
Test generation and test data selection are difficult tasks in model-based testing. Tests for a program can be combined into a test suite, and much research aims to quantify and improve test-suite quality. Code coverage metrics estimate the quality of a test suite: the quality is considered good if the code coverage value is high, ideally 100%. Unfortunately, 100% code coverage may be impossible to achieve, for example because of dead code. There is thus a gap between the feasible and the theoretical maximal code coverage value. Our review of the literature indicates that no current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly in ISO-C-compatible semantics and similar languages, and describes an efficient approximation of the gap in all other cases. Thus, a tester can decide whether additional tests are possible or necessary to achieve better coverage.
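The notion of a coverage gap can be illustrated on a control-flow graph: blocks unreachable from the entry can never be covered by any test, which bounds feasible coverage below 100%. This reachability check is only a crude under-approximation of the exact, semantics-aware gap computation the paper describes:

```python
def coverage_gap(cfg, entry):
    """Fraction of basic blocks that no test can ever cover (a lower bound
    on the 'gap' between feasible and theoretical maximal coverage).

    cfg: dict mapping each basic block to its list of successor blocks.
    Blocks not reachable from `entry` are dead code.
    """
    seen = {entry}
    stack = [entry]
    while stack:
        for nxt in cfg.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return (len(cfg) - len(seen)) / len(cfg)
```

Graph reachability misses blocks that are syntactically reachable but semantically infeasible (e.g. guarded by contradictory conditions), which is exactly where the paper's exact ISO-C analysis goes further.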
Visualization of elastic wavefields computed with a finite difference code
Energy Technology Data Exchange (ETDEWEB)
Larsen, S. [Lawrence Livermore National Lab., CA (United States); Harris, D.
1994-11-15
The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.
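A minimal version of such a finite-difference propagation-and-snapshot loop, reduced to the 1D scalar wave equation with fixed boundaries (the code described above is 2D elastic and renders its snapshots through an X Windows interface rather than returning arrays):

```python
import math

def wave_snapshots(nx=200, nt=400, courant=0.5, every=100):
    """Explicit second-order finite differences for u_tt = c^2 u_xx.

    Starts a Gaussian pulse at rest (it splits into two traveling waves)
    and records a snapshot of the wavefield every `every` time steps.
    """
    r2 = courant ** 2                  # (c*dt/dx)^2; must be <= 1 for stability
    u = [math.exp(-((i - nx // 2) ** 2) / 20.0) for i in range(nx)]
    u_prev = u[:]                      # zero initial velocity
    snaps = []
    for step in range(1, nt + 1):
        u_next = [0.0] * nx            # fixed (u = 0) boundaries
        for i in range(1, nx - 1):
            u_next[i] = (2 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
        if step % every == 0:
            snaps.append(u[:])
    return snaps
```

Each snapshot corresponds to one frame of the temporal animation the X Windows interface would display.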
Computer code for general analysis of radon risks (GARR)
International Nuclear Information System (INIS)
This document presents a computer model for general analysis of radon risks that allows the user to specify a large number of possible models with a small number of simple commands. The model is written in a version of BASIC which conforms closely to the American National Standards Institute (ANSI) definition for minimal BASIC and thus is readily modified for use on a wide variety of computers and, in particular, microcomputers. Model capabilities include generation of single-year life tables from 5-year abridged data; calculation of multiple-decrement life tables for lung cancer for the general population, smokers, and nonsmokers; and a cohort lung cancer risk calculation that allows specification of the level and duration of radon exposure, the form of the risk model, and the specific population assumed at risk. 36 references, 8 figures, 7 tables
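The first capability listed, generating single-year life tables from 5-year abridged data, amounts to an interpolation step. A constant-hazard assumption within each interval gives the simplest version; the actual interpolation used in the GARR code may differ:

```python
def expand_abridged(five_year_survival):
    """Expand 5-year interval survival probabilities into single-year ones.

    Assumes a constant hazard within each interval, so the single-year
    probability p1 satisfies p1**5 == p5 (a common demographic simplification).
    """
    single = []
    for p5 in five_year_survival:
        p1 = p5 ** (1.0 / 5.0)
        single.extend([p1] * 5)
    return single
```

By construction, the product of the five single-year probabilities reproduces the original interval survival, so the expanded table is consistent with the abridged input.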
Introduction to error correcting codes in quantum computers
Salas, P J
2006-01-01
The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, gate implementation and quantum error correction will be considered error free. Finally we relax this unrealistic assumption by introducing the concept of fault tolerance. The existence of an error threshold permits the conclusion that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage (memory) computation error threshold: < 5.2 × 10^-5. The encoding is made by means of the [[7,1,3]] code.
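The practical meaning of an error threshold can be sketched with the standard concatenation estimate: below threshold, each added level of encoding squares the rescaled error rate. The formula and sample numbers are illustrative of threshold behavior in general, not the paper's detailed model:

```python
def logical_error(p, p_th=5.2e-5, levels=1):
    """Concatenation estimate  p_L = p_th * (p / p_th) ** (2 ** levels).

    A physical error rate p below the threshold p_th is suppressed doubly
    exponentially in the number of concatenation levels; above the
    threshold, adding levels of encoding only makes things worse.
    """
    return p_th * (p / p_th) ** (2 ** levels)
```

This is why the existence of any finite threshold, however small, implies that arbitrarily long quantum computations are possible in principle.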
Compute-and-Forward: Harnessing Interference through Structured Codes
Nazer, Bobak; Gastpar, Michael C.
2009-01-01
Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them t...
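A noise-free, finite-field toy version of the idea: each relay decodes and forwards one linear combination of the transmitted messages, and a destination holding enough independent combinations inverts the coefficient matrix. The field size and coefficient vectors below are arbitrary choices for illustration, not the lattice-code construction of the paper:

```python
def relay_combine(coeff, msgs, q=257):
    """What a relay forwards: the coefficient-weighted sum of the
    transmitted message vectors, symbol-wise mod q (channel noise omitted)."""
    return [sum(c * m[i] for c, m in zip(coeff, msgs)) % q
            for i in range(len(msgs[0]))]

def recover_two(eq1, eq2, ca, cb, q=257):
    """Destination inverts the 2x2 coefficient matrix mod q to get both messages."""
    (a0, a1), (b0, b1) = ca, cb
    det_inv = pow((a0 * b1 - a1 * b0) % q, -1, q)   # modular inverse of det
    m1 = [((b1 * x - a1 * y) * det_inv) % q for x, y in zip(eq1, eq2)]
    m2 = [((a0 * y - b0 * x) * det_inv) % q for x, y in zip(eq1, eq2)]
    return m1, m2
```

The point of the real scheme is that nested lattice codes let a relay decode such integer combinations directly from the noisy superposition of signals, rather than decoding each message separately.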
Computing the Feng-Rao distances for codes from order domains
DEFF Research Database (Denmark)
Ruano Benito, Diego
2007-01-01
We compute the Feng–Rao distance of a code coming from an order domain with a simplicial value semigroup. The main tool is the Apéry set of a semigroup that can be computed using a Gröbner basis....
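The Apéry set mentioned as the main tool can be computed by brute-force enumeration for a numerical semigroup given by its generators. The paper obtains it via a Gröbner-basis computation; this sketch assumes coprime generators:

```python
def apery_set(gens):
    """Apéry set of the numerical semigroup generated by `gens` (assumed
    coprime), with respect to the smallest generator m: for each residue
    class mod m, the least semigroup element in that class."""
    m = min(gens)
    bound = m * max(gens)              # large enough to contain the Apéry set
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for n in range(1, bound + 1):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    least = {}
    for n in range(bound + 1):         # ascending, so first hit per class is least
        if reachable[n] and n % m not in least:
            least[n % m] = n
    return sorted(least.values())
```

For the semigroup generated by 3 and 5, the Apéry set with respect to 3 is {0, 5, 10}, from which quantities such as the Frobenius number (here 10 - 3 = 7) follow directly.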
A finite element option for the MARC transport/ diffusion theory computer code
International Nuclear Information System (INIS)
The MARC multigroup transport/diffusion theory computer code has been extended to include a finite element option. The facility is available in two-dimensional geometry and has a novel feature in allowing high order polynomial approximations to the flux using an automated computer procedure. (U.K.)
Compendium of computer codes for the safety analysis of fast breeder reactors
Energy Technology Data Exchange (ETDEWEB)
1977-10-01
The objective of the compendium is to provide the reader with a guide which briefly describes many of the computer codes used for liquid metal fast breeder reactor safety analyses, since it is for this system that most of the codes have been developed. The compendium is designed to address the following frequently asked questions from individuals in licensing and research and development activities: (1) What does the code do? (2) To what safety problems has it been applied? (3) What are the code's limitations? (4) What is being done to remove these limitations? (5) How does the code compare with experimental observations and other code predictions? (6) What reference documents are available?
The extensive international use of commercial computational fluid dynamics (CFD) codes
International Nuclear Information System (INIS)
What are the main reasons for the extensive international success of commercial CFD codes? They owe it to their ability to calculate the fine structures of the investigated processes, to their versatility and numerical stability, and to the fact that they deliver a proper solution in most cases. This was made possible by constantly increasing computer power at an ever more affordable price. Furthermore, it is much more efficient to have researchers use a CFD code rather than develop a similar code system, given the time-consuming nature of this activity and the high probability of hidden coding errors. Centralized development and upgrading make these reliable CFD codes possible and affordable. However, the CFD companies' developments are naturally concentrated on the most profitable areas, and thus, if one works in a 'non-priority' field one cannot use them. Moreover, the price of renting CFD codes, the difficulty of applying them to complex systems such as whole nuclear reactors, and the need to teach students still leave plenty of room for the development of self-made codes. But CFD codes can model detailed aspects of large systems, and subroutines generated by users can be added. Since there are only a few heavily used CFD codes, such as FLUENT, STAR-CD and ANSYS CFX, these are used in many countries. International training courses are given, and the news bulletins of these codes help to spread the news on further developments. A larger number of international codes would increase the competition but would at the same time make it harder to select the most appropriate CFD code for a given problem. Examples will be presented of uses of CFD codes as more detailed system codes for the decay heat removal from reactors, the application to aerosol physics and the application to heavy metal fluids using different turbulence models. (author)
High pressure humidification columns: Design equations, algorithm, and computer code
Energy Technology Data Exchange (ETDEWEB)
Enick, R.M. [Pittsburgh Univ., PA (United States). Dept. of Chemical and Petroleum Engineering; Klara, S.M. [USDOE Pittsburgh Energy Technology Center, PA (United States); Marano, J.J. [Burns and Roe Services Corp., Pittsburgh, PA (United States)
1994-07-01
This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed-tower humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple-stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.
The Uncertainty Test for the MAAP Computer Code
Energy Technology Data Exchange (ETDEWEB)
Park, S. H.; Song, Y. M.; Park, S. Y.; Ahn, K. I.; Kim, K. R.; Lee, Y. J. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2008-10-15
After the Three Mile Island Unit 2 (TMI-2) and Chernobyl accidents, the safety issues of severe accidents have been treated in various respects. A major topic of our research is the Level 2 PSA. The main difficulty in extending the Level 2 PSA into a risk-informed activity is the uncertainty involved. Past efforts gave weight to improving the quality of the internal-events PSA, but the effort to reduce the phenomenological uncertainty in the Level 2 PSA has been insufficient. In Korea, the degree of uncertainty in Level 2 PSA models is high, and it is necessary to secure a model that reduces this uncertainty. We have not yet mastered the uncertainty assessment technology; the assessment systems themselves come from abroad. In other countries, severe accident simulators have been implemented at the hardware level, whereas in our case the basic functions can be implemented at the software level. Under these circumstances, similar instances at home and abroad, such as UQM and MELCOR, were surveyed. Based on these instances, the SAUNA (Severe Accident UNcertainty Analysis) system is being developed in our project to assess and reduce the uncertainty in the Level 2 PSA; the MAAP code was selected to analyze the uncertainty in a severe accident.
Development of a model and computer code to describe solar grade silicon production processes
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
Issues in computational fluid dynamics code verification and validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Blottner, F.G.
1997-09-01
A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.
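One standard way to detect and quantify the discretization errors discussed above is a grid-convergence study: solve on three systematically refined grids and extract the observed order of accuracy, which should match the scheme's formal order. A minimal sketch, assuming a smooth solution and a uniform refinement ratio:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy p from solutions on three grids with a
    uniform refinement ratio r, assuming error ~ C * h**p:
        p = log(|f_coarse - f_medium| / |f_medium - f_fine|) / log(r)
    """
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson(f_medium, f_fine, p, r=2.0):
    """Richardson-extrapolated estimate of the grid-converged value."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)
```

If the observed order falls well below the formal order of the scheme, the grids are not yet in the asymptotic range and the discretization-error estimate is unreliable.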
Algorithms and computer codes for atomic and molecular quantum scattering theory
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
This workshop has succeeded in bringing up 11 different coupled equation codes on the NRCC computer, testing them against a set of 24 different test problems and making them available to the user community. These codes span a wide variety of methodologies, and factors of up to 300 were observed in the spread of computer times on specific problems. A very effective method was devised for examining the performance of the individual codes in the different regions of the integration range. Many of the strengths and weaknesses of the codes have been identified. Based on these observations, a hybrid code has been developed which is significantly superior to any single code tested. Thus, not only have the original goals been fully met, the workshop has resulted directly in an advancement of the field. All of the computer programs except VIVS are available upon request from the NRCC. Since an improved version of VIVS is contained in the hybrid program, VIVAS, it was not made available for distribution. The individual program LOGD is, however, available. In addition, programs which compute the potential energy matrices of the test problems are also available. The software library names for Tests 1, 2 and 4 are HEH2, LICO, and EN2, respectively.
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2010-01-01
The eukaryotic genome contains varying numbers of non-coding RNA (ncRNA) genes. "Computational RNomics" takes a multidisciplinary approach, drawing on fields such as information science, to resolve the structure and function of ncRNAs. Here, we review the main issues in Computational RNomics, namely data storage and management, ncRNA gene identification and characterization, and ncRNA target identification and functional prediction, and we summarize the main methods and current content of the field.
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache - CEA, France)
2011-06-01
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the
Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.
2015-08-01
Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient, scale well to large numbers of CPU cores, allow a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes, the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
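The dynamically adapted time step mentioned above follows the usual CFL logic: the step must not exceed the grid-crossing time of the fastest MHD signal. A minimal sketch, with illustrative variable names and a hypothetical safety factor, not GOEMHD3's actual implementation:

```python
import numpy as np

def cfl_timestep(v, B, rho, dx, gamma=5.0 / 3.0, p=None, safety=0.4):
    """Largest stable explicit time step from the CFL criterion.

    The signal speed is estimated as |v| plus the fast magnetosonic
    speed sqrt(c_s^2 + v_A^2). All names and the safety factor are
    illustrative assumptions, not GOEMHD3's actual variables.
    """
    if p is None:
        p = np.ones_like(rho)          # placeholder pressure field
    c_s2 = gamma * p / rho             # sound speed squared
    v_A2 = B**2 / rho                  # Alfven speed squared (mu_0 = 1 units)
    signal = np.abs(v) + np.sqrt(c_s2 + v_A2)
    return safety * dx / signal.max()

# Example: a uniform grid where one fast cell limits the global step
v = np.array([0.0, 0.0, 2.0])
B = np.array([1.0, 1.0, 1.0])
rho = np.ones(3)
dt = cfl_timestep(v, B, rho, dx=0.1)
```

The step is set by the single worst cell, which is why non-equidistant grids (finer `dx` only where needed) pay off for explicit schemes like this.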
ISP-46 analysis with RELAP5/SCDAPSIM computer code
International Nuclear Information System (INIS)
The thermal-hydraulic and severe accident analysis code RELAP5/SCDAPSIM was used in the calculation of the Phebus FPT1 in-pile experiment. This experiment, carried out on 26 July 1996 in the Phebus facility, Cadarache, France, was chosen as the basis for the OECD International Standard Problem (ISP-46) exercise. Investigation of severe accident phenomena such as fuel degradation and hydrogen production was the objective of the ISP and of the presented analysis. The ISP was an open exercise, that is, all the relevant experimental results were available to the participants from the start. The FPT1 test bundle included 18 PWR fuel rods previously irradiated to a mean burnup of 23.4 GWd/tU, two instrumented fresh fuel rods and one silver-indium-cadmium control rod. The bundle was housed in an insulating shroud and introduced into the Phebus driver core, which supplied the nuclear power. The fuel degradation phase of the test lasted about 5 hours, during which the bundle was cooled by steam at a pressure of about 2 bar with the mass flow rate varying between 0.5 g/s and 2.2 g/s, while the bundle nuclear power was progressively increased from zero up to 36.5 kW. RELAP5/SCDAPSIM modelling of the Phebus facility and the main results, such as the temperature response of all rods and the shroud and the oxidation and resulting hydrogen production, are discussed and presented in this paper. The analysis of fuel rod degradation and of SCDAPSIM's underprediction of the amount of relocated fuel and cladding is also covered. (author)
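The hydrogen production discussed above is tied to clad oxidation through the fixed stoichiometry of the zirconium-steam reaction, Zr + 2H2O -> ZrO2 + 2H2. A minimal sketch of that mass balance (molar masses only; no oxidation kinetics, which is where codes like SCDAPSIM actually differ):

```python
# Stoichiometry of Zr + 2 H2O -> ZrO2 + 2 H2: two moles of H2 are
# released per mole of zirconium oxidised.
M_ZR = 91.224   # g/mol, zirconium
M_H2 = 2.016    # g/mol, molecular hydrogen

def hydrogen_mass(zr_oxidised_kg):
    """Hydrogen (kg) produced by fully oxidising the given Zr mass (kg)."""
    return zr_oxidised_kg * 2.0 * M_H2 / M_ZR

# Roughly 44 g of H2 are produced per kg of Zr oxidised
h2 = hydrogen_mass(1.0)
```

This is why an underprediction of the oxidised (and relocated) clad mass translates directly into an underprediction of the hydrogen source term.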
Development of computing code system for level 3 PSA
Energy Technology Data Exchange (ETDEWEB)
Jeong, Jong Tae; Yu, Dong Han; Kim, Seung Hwan
1997-07-01
Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated through wind tunnel experiments. These results give physical insight for the development of a new dispersion model. Because there are some discrepancies between the results from the Gaussian plume model and those from field tests, the effect of terrain on atmospheric dispersion was investigated using the CTDMPLUS code. Through this study we find that a model which can treat terrain effects is essential for the atmospheric dispersion of radioactive materials and that the CTDMPLUS model can be used as a useful tool. Continued effort on model refinement and experimental study is suggested. A health effect assessment near the Yonggwang site was performed using IPE (individual plant examination) results and site data. The health effect assessment is an important part of the consequence analysis of a nuclear power plant site. The MACCS code was used in the assessment. Based on the calculation of CCDFs for each risk measure, it is shown that the CCDF has a slow slope, and thus a wide probability distribution, for early fatality, early injury, total early fatality risk, and total weighted early fatality risk. For cancer fatality and population dose within 48 km and 80 km, the CCDF curve has a steep slope and thus a narrow probability distribution. Methodologies for the models needed for consequence analysis of a severe accident in a nuclear power plant were established, and a program for consequence analysis was developed. The models include atmospheric transport and diffusion, calculation of exposure doses for various pathways, and assessment of health effects and associated risks. Finally, the economic impact resulting from an accident in a nuclear power plant was investigated. In this study, estimation models for each cost term considered in economic
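The flat-terrain Gaussian plume model whose discrepancies motivated the CTDMPLUS comparison has a standard closed form with ground reflection. A sketch with illustrative parameter values; the terrain corrections themselves are not modelled here:

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
    """Ground-reflected Gaussian plume concentration.

    Q: source strength (g/s), u: wind speed (m/s), H: effective release
    height (m); sigma_y, sigma_z (m) are the dispersion parameters at
    the downwind distance of interest. Flat-terrain formula only.
    """
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline, ground-level concentration with illustrative numbers
c = plume_concentration(Q=1.0, u=5.0, sigma_y=80.0, sigma_z=40.0,
                        y=0.0, z=0.0, H=50.0)
```

The second exponential is the "image source" below ground that enforces zero flux through the surface; terrain-following models such as CTDMPLUS replace exactly this flat-ground assumption.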
Universal holonomic quantum computing with cat-codes
Albert, Victor V.; Shu, Chi; Krastanov, Stefan; Shen, Chao; Liu, Ren-Bao; Yang, Zhen-Biao; Schoelkopf, Robert J.; Mirrahimi, Mazyar; Devoret, Michel H.; Jiang, Liang
2016-05-01
Universal computation of a quantum system consisting of superpositions of well-separated coherent states of multiple harmonic oscillators can be achieved by three families of adiabatic holonomic gates. The first gate consists of moving a coherent state around a closed path in phase space, resulting in a relative Berry phase between that state and the other states. The second gate consists of ``colliding'' two coherent states of the same oscillator, resulting in coherent population transfer between them. The third gate is an effective controlled-phase gate on coherent states of two different oscillators. Such gates should be realizable via reservoir engineering of systems which support tunable nonlinearities, such as trapped ions and circuit QED.
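The Berry phase of the first gate has a purely geometric character: composing small displacements around a closed loop in phase space accumulates a phase proportional to the enclosed area (twice the area in the convention below, with hbar = 1). A toy numerical check of that geometric fact, not the paper's gate construction:

```python
import cmath
import math

def loop_phase(path):
    """Phase accumulated when a coherent state is dragged through the
    displacements path[0] -> path[1] -> ... -> path[-1] (a closed loop).

    Composing displacement operators, D(delta) D(alpha) picks up the
    phase Im(delta * conj(alpha)); summed around a closed loop this
    equals twice the enclosed phase-space area (one common convention).
    """
    total = 0.0
    acc = path[0]
    for k in range(1, len(path)):
        delta = path[k] - path[k - 1]
        total += (delta * acc.conjugate()).imag
        acc = acc + delta
    return total

# Counter-clockwise circle of radius r: the phase tends to 2 * pi * r^2
r, n = 1.5, 20000
circle = [r * cmath.exp(2j * math.pi * k / n) for k in range(n + 1)]
phi = loop_phase(circle)
```

Because the phase depends only on the enclosed area, not on the speed or detailed shape of the traversal, the resulting gate is holonomic in the sense the abstract describes.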
Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach.
Eslami-Mossallam, Behrouz; Schram, Raoul D; Tompitak, Marco; van Noort, John; Schiessel, Helmut
2016-01-01
Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176
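A toy illustration of how mechanical information can ride on a sequence: many nucleosome-positioning models key on an approximately 10-bp periodicity of flexible dinucleotides (AA/TT/TA), which matches the helical repeat of DNA bent around the histone core. The score below is purely illustrative and is not the computational nucleosome model used in the paper:

```python
# Toy phasing score for the ~10-bp periodicity of flexible dinucleotides
# (AA/TT/TA). Purely illustrative; NOT the paper's nucleosome model.
PERIOD = 10

def periodicity_score(seq, step=PERIOD):
    """Fraction of flexible dinucleotides lying within 2 bp of the
    chosen phase: 1.0 means perfectly phased, 0.0 means none in phase."""
    seq = seq.upper()
    hits = [i for i in range(len(seq) - 1)
            if seq[i:i + 2] in ("AA", "TT", "TA")]
    if not hits:
        return 0.0
    in_phase = sum(1 for i in hits if i % step < 2)
    return in_phase / len(hits)

# AA repeating every 10 bp scores higher than the same dinucleotide
# placed off-phase, even though both sequences have identical composition.
phased = ("AA" + "G" * 8) * 7
off_phase = ("G" * 5 + "AA" + "G" * 3) * 7
```

Because the phasing signal lives in the spacing rather than the composition, a sequence can in principle carry it alongside a protein-coding message, which is the multiplexing idea of the abstract.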
Benchmark Problems Used to Assess Computational Aeroacoustics Codes
Dahl, Milo D.; Envia, Edmane
2005-01-01
The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.
International Nuclear Information System (INIS)
The objective of this paper is to present a compilation of computer codes for the assessment of accidental or routine releases of radioactivity to the environment from nuclear power facilities. The capabilities of 83 computer codes in the areas of environmental transport and radiation dosimetry are summarized in tabular form. This preliminary analysis clearly indicates that the initial efforts in assessment methodology development have concentrated on atmospheric dispersion, external dosimetry, and internal dosimetry via inhalation. The incorporation of terrestrial and aquatic food chain pathways has been a more recent development and reflects the current requirements of environmental legislation and the needs of regulatory agencies. The characteristics of the conceptual models employed by these codes are reviewed. The appendixes include abstracts of the codes and indexes by author, key words, publication description, and title
KC-A Kinectic computer code for investigation of parametric plasma instabilities
International Nuclear Information System (INIS)
In the frame of a joint research program of the Institute of Plasma Physics of the National Science Center 'Kharkov Institute of Physics and Technology' (KhIPT), Ukraine, and the plasma physics group of the Austrian Research Center Seibersdorf (FZS), a kinetic computer code with the acronym KC for the investigation of parametric plasma instabilities has been implemented at the computer facilities of FZS as a starting point for further research in this field. The code, based on a macroparticle technique, is appropriate for studying the evolution of instabilities in a turbulent plasma, including saturation. The results can be of interest for the heating of tokamaks of the next generation, e.g. ITER. The present report describes the underlying physical models and numerical methods as well as the code structure and how to use the code, as a reference for forthcoming joint papers. (author)
Vectorization of nuclear codes on FACOM 230-75 APU computer
International Nuclear Information System (INIS)
To prepare for the future use of supercomputers, we have investigated the vector processing efficiency of nuclear codes which are being used at JAERI. The investigation was performed using the FACOM 230-75 APU computer. The codes are CITATION (3D neutron diffusion), SAP5 (structural analysis), CASCMARL (irradiation damage simulation), FEM-BABEL (3D neutron diffusion by FEM), GMSCOPE (microscope simulation), and DWBA (cross section calculation for molecular collisions). A new type of cell density calculation for the particle-in-cell method is also investigated. For each code we have obtained a significant speedup, ranging from 1.8 (CASCMARL) to 7.5 (GMSCOPE). This report describes the running-time dynamic profile analysis of the codes, the numerical algorithms used, the program restructuring for vectorization, numerical experiments on the iterative processes, vectorized ratios, and speedup ratios on the FACOM 230-75 APU computer, with some views on vectorization. (author)
Energy Technology Data Exchange (ETDEWEB)
Hoffman, F. O.; Miller, C. W.; Shaeffer, D. L.; Garten, Jr., C. T.; Shor, R. W.; Ensminger, J. T.
1977-04-01
The objective of this paper is to present a compilation of computer codes for the assessment of accidental or routine releases of radioactivity to the environment from nuclear power facilities. The capabilities of 83 computer codes in the areas of environmental transport and radiation dosimetry are summarized in tabular form. This preliminary analysis clearly indicates that the initial efforts in assessment methodology development have concentrated on atmospheric dispersion, external dosimetry, and internal dosimetry via inhalation. The incorporation of terrestrial and aquatic food chain pathways has been a more recent development and reflects the current requirements of environmental legislation and the needs of regulatory agencies. The characteristics of the conceptual models employed by these codes are reviewed. The appendixes include abstracts of the codes and indexes by author, key words, publication description, and title.
Energy Technology Data Exchange (ETDEWEB)
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport or radiation source assessment or dosimetry. The presentations have been parted into 2 sessions: 1) methodology and 2) uses in industrial or medical or research domains. It appears that 2 different calculation strategies are prevailing, both are based on preliminary Monte-Carlo calculations with data storage. First, quick simulations made from a database of particle histories built though a previous Monte-Carlo simulation and secondly, a neuronal approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
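Both calculation strategies described above rest on an underlying Monte Carlo transport kernel, whose central step is sampling a particle's free path from the exponential attenuation law, s = -ln(U)/mu. A minimal sketch with illustrative values:

```python
import math
import random

def sample_free_paths(mu, n, seed=0):
    """Sample n free path lengths from s = -ln(U)/mu, the exponential
    attenuation law for a medium with total attenuation coefficient mu
    (illustrative units of 1/cm)."""
    rng = random.Random(seed)
    # 1.0 - random() lies in (0, 1], so the logarithm is always defined
    return [-math.log(1.0 - rng.random()) / mu for _ in range(n)]

# The sample mean approaches the mean free path 1/mu
mu = 0.2
paths = sample_free_paths(mu, 100_000)
mean_path = sum(paths) / len(paths)
```

The database and neural-network strategies discussed at the conference both amortize exactly this kind of repeated sampling by precomputing particle histories once.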
Compendium of computer codes for the researcher in magnetic fusion energy
Energy Technology Data Exchange (ETDEWEB)
Porter, G.D. (ed.)
1989-03-10
This is a compendium of computer codes available to the fusion researcher. It is intended to be a document that permits a quick evaluation of the tools available to the experimenter who wants both to analyze his data and to compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone ''down the hall'' when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index, and I encourage additional criticism of this document. 137 refs.
SAMDIST: A computer code for calculating statistical distributions for R-matrix resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.; Larson, N.M.
1995-09-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
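Level-spacing statistics of the kind SAMDIST computes are conventionally compared against the GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4), which exhibits level repulsion (P(0) = 0). A sketch of that comparison on synthetic data; SAMDIST's actual estimators for Reich-Moore resonance parameters are more elaborate:

```python
import numpy as np

def spacing_distribution(levels):
    """Nearest-neighbour spacings normalised to unit mean (a crude
    'unfolding'); a sketch only."""
    d = np.diff(np.sort(levels))
    return d / d.mean()

def wigner(s):
    """GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4)."""
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s**2)

# Synthetic data: eigenvalues of a random GOE matrix repel each other,
# unlike uncorrelated (Poisson-like) levels.
rng = np.random.default_rng(1)
a = rng.standard_normal((400, 400))
levels = np.linalg.eigvalsh((a + a.T) / 2.0)
s = spacing_distribution(levels[100:300])        # central, ~constant density
poisson = spacing_distribution(np.sort(rng.uniform(0.0, 1.0, 200)))
frac_small_goe = (s < 0.1).mean()
frac_small_poisson = (poisson < 0.1).mean()
```

A deficit of very small spacings relative to the uncorrelated case is the level-repulsion signature that distinguishes genuine compound-nucleus resonance sequences from sequences with missed or spurious levels.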
Suitability of the present computer codes for the future nuclear power plants
International Nuclear Information System (INIS)
The report presents evaluations of calculation results and modelling abilities of three thermal hydraulic computer codes for analysis of gravity driven emergency core cooling (ECC) systems in advanced reactor concepts. The work is based on PACTEL (PArallel Channel TEst Loop) experiments modelled with RELAP5/mod3.1, APROS(V2.11) and CATHARE 2 V1.3 code. (8 refs.)
International Nuclear Information System (INIS)
Safety analysis is an important tool for justifying the safety of nuclear power plants. Typically, this type of analysis is performed by means of system computer codes with one dimensional approximation for modelling real plant systems. However, in the nuclear area there are issues for which traditional treatment using one dimensional system codes is considered inadequate for modelling local flow and heat transfer phenomena. There is therefore increasing interest in the application of three dimensional computational fluid dynamics (CFD) codes as a supplement to or in combination with system codes. There are a number of both commercial (general purpose) CFD codes as well as special codes for nuclear safety applications available. With further progress in safety analysis techniques, the increasing use of CFD codes for nuclear applications is expected. At present, the main objective with respect to CFD codes is generally to improve confidence in the available analysis tools and to achieve a more reliable approach to safety relevant issues. An exchange of views and experience can facilitate and speed up progress in the implementation of this objective. Both the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA) believed that it would be advantageous to provide a forum for such an exchange. Therefore, within the framework of the Working Group on the Analysis and Management of Accidents of the NEA's Committee on the Safety of Nuclear Installations, the IAEA and the NEA agreed to jointly organize the Technical Meeting on the Use of Computational Fluid Dynamics Codes for Safety Analysis of Reactor Systems, including Containment. The meeting was held in Pisa, Italy, from 11 to 14 November 2002. The entire collection of papers is provided in this report
Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center
International Nuclear Information System (INIS)
The term ''code package'' is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a ''code package'' consists of written material--reports, instructions, flow charts, listings of data, and other useful material--and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package
Development of a system of computer codes for severe accident analysis and its applications
Energy Technology Data Exchange (ETDEWEB)
Jang, S. H.; Chun, S. W.; Jang, H. S. and others [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1993-01-15
Continuing the development of a system of computer codes to analyze severe accidents performed last year, this study focused on the application of the developed code systems. As a first step, the two code packages most commonly used other than STCP, i.e., MELCOR of the NRC and MAAP of IDCOR, were reviewed to compare the models they use. Next, the important heat transfer phenomena arising as a severe accident progresses were surveyed. In particular, debris bed coolability and molten core-concrete interaction were selected as sample models and studied extensively. Recent theoretical work and experiments on these phenomena were surveyed, and the relevant models adopted by the major code packages were compared and assessed. Based on the results obtained in this study, these phenomenological uncertainties can be taken into account when the severe accident code packages are used for probabilistic safety assessments or accident management programs.
LEADS-DC: A computer code for intense dc beam nonlinear transport simulation
Institute of Scientific and Technical Information of China (English)
NONE
2011-01-01
An intense dc beam nonlinear transport code has been developed. The code is written in Visual Fortran 6.6 and has ~13000 lines. The particle distribution in the transverse cross section is uniform or Gaussian. The space charge forces are calculated by the PIC (particle-in-cell) scheme, and the effects of the applied fields on the particle motion are calculated with the Lie algebraic method through the third-order approximation, so that the solutions to the equations of particle motion are self-consistent. The results obtained from the theoretical analysis have been put into the computer code. Many optical beam elements are included, so the code can simulate intense dc particle motion in beam transport lines, high-voltage dc accelerators and ion implanters.
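The PIC space-charge step the abstract mentions starts with depositing particle charge onto a grid before solving for the fields. A minimal 1D cloud-in-cell sketch; LEADS-DC itself works on the 2D transverse cross section, and all names here are illustrative:

```python
import numpy as np

def deposit_charge(x, q, dx, n_cells):
    """Cloud-in-cell (linear) charge deposition onto a 1D grid.

    Each particle at position x shares its charge q between the two
    nearest grid nodes in proportion to proximity. A sketch of the
    first PIC step, not LEADS-DC's actual solver.
    """
    rho = np.zeros(n_cells + 1)
    cell = np.floor(x / dx).astype(int)
    w = x / dx - cell                   # fractional distance past left node
    np.add.at(rho, cell, q * (1.0 - w))
    np.add.at(rho, cell + 1, q * w)
    return rho / dx                     # charge density per unit length

# Two unit charges; total charge is conserved on the grid
x = np.array([0.25, 0.9])
rho = deposit_charge(x, q=1.0, dx=0.5, n_cells=4)
```

Linear weighting conserves total charge exactly, which is what makes the subsequent field solve and force interpolation self-consistent in the sense the abstract claims.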
A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters
Mackowski, D. W.; Mishchenko, M. I.
2011-01-01
A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface instructions to enable use on distributed-memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.
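Full Mie or multiple-sphere T-matrix results are commonly spot-checked against closed-form limits. One such check case is the Rayleigh limit for a single small sphere; this is a verification aid, not part of the code described above:

```python
def rayleigh_qsca(x, m):
    """Scattering efficiency of a single small sphere in the Rayleigh
    limit (size parameter x = 2*pi*a/lambda << 1), a closed-form check
    case for full Mie / T-matrix codes:
        Q_sca = (8/3) x^4 |(m^2 - 1) / (m^2 + 2)|^2
    where m is the (possibly complex) relative refractive index.
    """
    pol = (m * m - 1.0) / (m * m + 2.0)
    return (8.0 / 3.0) * x**4 * abs(pol) ** 2

# Non-absorbing sphere, x = 0.1, refractive index 1.5
q = rayleigh_qsca(0.1, complex(1.5, 0.0))
```

The steep x^4 dependence is the classic Rayleigh scaling; a full code's single-sphere output should converge to this value as the size parameter shrinks.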
Window-based computer code package CONPAS for an integrated level 2 PSA
International Nuclear Information System (INIS)
A PC window-based computer code, CONPAS (CONtainment Performance Analysis System), has been developed to integrate the numerical, graphical, and results-operation aspects of Level 2 probabilistic safety assessments (PSA) for nuclear power plants automatically. As its main logic for accident progression analysis, it employs the concept of a small containment phenomenological event tree (CPET), helpful for tracing individual accident progressions visually, and of a large supporting event tree (LSET) for detailed quantification. Compared with other existing computer codes for Level 2 PSA, the CONPAS code provides several advanced features: computational aspects including systematic uncertainty analysis, importance analysis, and sensitivity analysis; reporting aspects including tables and graphics; and a user-friendly interface
A first accident simulation for Angra-1 power plant using the ALMOD computer code
International Nuclear Information System (INIS)
The acquisition of the ALMOD computer code from GRS-Munich by CNEN has made it possible to calculate transients in PWR nuclear power plants in which no loss of coolant occurs. The implementation of the German computer code ALMOD and its application to the calculation of Angra-1, a nuclear power plant different from the KWU power plants, demanded study and model adaptation; for economic reasons, simplifications and optimizations were also necessary. The first results define the analytical potential of the computer code, confirm the adequacy of the adaptations made, and provide relevant conclusions for the Angra-1 safety analysis, showing at the same time areas in which the model can be applied or further improved. (Author)
FLAME: A finite element computer code for contaminant transport in variably-saturated media
International Nuclear Information System (INIS)
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous medium with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A
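The transport equation FLAME solves by finite elements can be illustrated with a much simpler 1D finite-difference sketch of advection plus dispersion (explicit upwind scheme, illustrative parameters; not FLAME's formulation):

```python
import numpy as np

def advect_disperse(c, v, D, dx, dt, steps):
    """Explicit upwind advection + central-difference dispersion for
    dc/dt + v dc/dx = D d2c/dx2 on a 1D grid (v >= 0 assumed).

    A toy finite-difference stand-in for the transport equation FLAME
    solves by finite elements; boundary cells are held at zero.
    """
    c = c.copy()
    for _ in range(steps):
        adv = -v * (c[1:-1] - c[:-2]) / dx                       # upwind
        disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2      # central
        c[1:-1] += dt * (adv + disp)
    return c

# A pulse of contaminant moves downstream and spreads
c0 = np.zeros(101)
c0[10] = 1.0
c1 = advect_disperse(c0, v=1.0, D=0.01, dx=1.0, dt=0.2, steps=50)
```

Real vadose-zone problems add variably-saturated flow (velocity and moisture content varying in space and time) and fracture pathways, which is what motivates the finite element treatment in the code itself.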
SCALE: A modular code system for performing standardized computer analyses for licensing evaluation
International Nuclear Information System (INIS)
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files
FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces
Energy Technology Data Exchange (ETDEWEB)
Ahluwalia, R.K.; Im, K.H.
1992-08-01
A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces, fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique for solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S{sub 4}), the modified differential approximation (MDA) and the P{sub 1} approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of the participating gases CO{sub 2}, H{sub 2}O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and the evolution of the particle size distribution. The code is written in Fortran 77, has modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.
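The optical thickness kappa*L that governs the choice between the S4, MDA and P1 methods above also controls the simplest gray-gas picture of a participating medium. A toy Beer-Lambert sketch, far simpler than the code's spectral hybrid:

```python
import math

def beam_transmissivity(kappa, L):
    """Beer-Lambert transmissivity exp(-kappa*L) of a homogeneous
    participating gas layer (absorption coefficient kappa in 1/m,
    path length L in m). Gray-basis toy only."""
    return math.exp(-kappa * L)

def emissivity(kappa, L):
    """Corresponding gray-gas emissivity 1 - exp(-kappa*L)."""
    return 1.0 - beam_transmissivity(kappa, L)

# Optically thin vs optically thick layers of the same gas
thin = emissivity(0.05, 1.0)      # kappa*L = 0.05, nearly transparent
thick = emissivity(0.05, 100.0)   # kappa*L = 5, nearly black
```

The thin and thick regimes behave very differently (emissivity grows linearly with kappa*L when thin, saturates toward 1 when thick), which is precisely why a hybrid solver that switches methods by optical-thickness range pays off.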
SCALE: A modular code system for performing standardized computer analyses for licensing evaluation
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.
Shift-scale invariance based computer code for wiggler radiation simulation
Smolyakov, N V
2001-01-01
A new package of computer codes for calculating incoherent electromagnetic radiation from a relativistic electron beam moving in an arbitrary three-dimensional magnetic field has been developed at Hiroshima University. The codes accept either an experimentally measured magnetic field or a numerically simulated field map (with field errors, if necessary). Near-field effects as well as electron beam emittance effects are also included in the simulation. The codes are based on the shift-scale invariance property of radiation spectra, which considerably reduces the number of individual single-electron radiation calculations required.
Development of computer code packages for molten salt reactor core analysis
International Nuclear Information System (INIS)
This paper presents implementations of the Oak Ridge National Laboratory (ORNL) approach for Molten Salt Reactor (MSR) core analysis with two nuclear reactor core analysis computer code systems. The first code system has been set up with the MCNP6 Monte Carlo code, its depletion module CINDER90, and the PYTHON script language. The second code system has been set up with the NEWT transport calculation module and the ORIGEN depletion module, connected by the TRITON sequence in the SCALE code, and the PYTHON script language. The PYTHON script language is used for implementing the online reprocessing of molten-salt fuel and the feeding of new fertile material in the computer code simulations. In this paper, simplified nuclear reactor core models of a Molten Salt Breeder Reactor (MSBR), designed by ORNL in the 1960s, and of FUJI-U3, designed by Toyohashi University of Technology (TUT) in the 2000s, were analyzed with the two code systems. Various reactor design parameters of the MSRs were then compared, such as the multiplication factor, breeding ratio, amount of material, total feeding, neutron flux distribution, and temperature coefficient. (author)
Helios-2 Vela-Ariel-5 gamma-ray burst source position
Cline, T. L.; Trainor, J. H.; Desai, U. D.; Klebesadel, R. W.; Ricketts, M.; Heluken, H.
1979-01-01
The gamma-ray burst of 28 January 1976, one of 18 events thus far detected in interplanetary space with Helios-2, was also observed with the Vela-5A, -6A and Ariel-5 satellites. A small source field is obtained from the intersection of the region derived from the observed time delays between Helios-2 and Vela-5A and -6A with the source region independently found with the Ariel-5 X-ray detector. This area contains neither any steady X-ray source as scanned by HEAO-A nor any previously catalogued X-ray, radio or infrared sources, X-ray transients, quasars, Seyfert galaxies, globular clusters, flare stars, pulsars, white dwarfs or high-energy gamma-ray sources. The region is, however, within the source field of a gamma-ray transient observed in 1974, which exhibited nuclear gamma-ray line structure.
NSR-77: a computer code for transient analysis of a light water reactor fuel rod
International Nuclear Information System (INIS)
This report describes in detail the computer code NSR-77, written in FORTRAN IV for the FACOM-M 200 computer. It has been developed for transient response analysis of a light water reactor fuel rod during an accident such as a reactivity initiated accident, a loss-of-coolant accident or a power-cooling-mismatch accident. The code consists of subcodes which calculate heat conduction in a fuel rod, gas gap conductance between fuel and cladding, heat transfer from cladding to coolant, fluid hydrodynamics, elastic-plastic fuel and cladding deformation, and material properties. (author)
Design geometry and design/off-design performance computer codes for compressors and turbines
Glassman, Arthur J.
1995-01-01
This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.
Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD
Glassman, Arthur J.
1994-01-01
An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.
Elastic thickness and heat flux estimates for the Uranian satellite Ariel
Peterson, G.; Nimmo, F.; Schenk, P.
2013-12-01
The exterior of Ariel, an icy satellite orbiting Uranus, shows tectonic features suggesting an episode of endogenic heating in the satellite's past [1]. Using topography derived from stereo images, we identified flexural uplift at two different rift zones. The elastic thickness is estimated using the wavelength of the deformation [2], yielding elastic thickness values of 2-4 km for the first region and 5-8 km for the second region. Using creep parameters for ice [3] and the approach of [4], we estimate the temperature at the base of the lithosphere to be in the range 110 to 140 K, depending on the strain rate assumed. The corresponding heat fluxes are 40-120 mW/m^2 and 20-50 mW/m^2, respectively. Neither tidal heating assuming Ariel's current eccentricity nor radiogenic heat production from the silicate core is enough to cause the inferred heat flux. Unstable resonant configurations of the Uranian satellites may have occurred in the past [5], including a 2:1 mean-motion resonance between Ariel and Umbriel. This resonance would have generated a higher eccentricity, possibly explaining the endogenic heat source. However, the maximum equilibrium heating rate in Ariel due to this resonance [1] is 2.9 GW (0.6 mW/m^2), inadequate to cause the inferred heat flux. The origin of the inferred high heat fluxes is thus currently mysterious. [1] Peale 1999 [2] Turcotte and Schubert 2002 [3] Goldsby and Kohlstedt 2001 [4] Nimmo et al. 2002 [5] Dermott et al. 1988
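Heat fluxes of the magnitude quoted above can be sanity-checked with Fourier's law and the commonly used k(T) ≈ 567/T conductivity law for water ice. The layer thickness and surface temperature below are assumed illustration values, not parameters taken from the abstract:

```python
import math

def conductive_heat_flux(d_m, T_surf, T_base, k0=567.0):
    """Heat flux (W/m^2) conducted through an ice layer of thickness d_m
    with fixed top and bottom temperatures, using k(T) = k0/T for water
    ice (k0 ~ 567 W/m). Integrating Fourier's law across the layer gives
    F = (k0 / d) * ln(T_base / T_surf)."""
    return (k0 / d_m) * math.log(T_base / T_surf)

# e.g. a 5 km conductive layer, ~60 K surface, 120 K base temperature:
flux_mw = 1e3 * conductive_heat_flux(5000.0, 60.0, 120.0)  # in mW/m^2
```

With these assumed numbers the result is a few tens of mW/m^2, the same order as the 20-120 mW/m^2 range inferred in the abstract.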
Users manual and modeling improvements for axial turbine design and performance computer code TD2-2
Glassman, Arthur J.
1992-01-01
Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.
Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.
Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.
International Nuclear Information System (INIS)
TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (> 100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location
Energy Technology Data Exchange (ETDEWEB)
Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.; Hermann, O.W.
1984-11-01
International Nuclear Information System (INIS)
The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results
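The simplest instance of the Monte Carlo transport technique mentioned above is estimating uncollided transmission through a slab by sampling exponential free paths; a self-contained sketch (a textbook illustration, not code from any RSIC package):

```python
import random

def mc_transmission(mu, thickness, n=100_000, seed=1):
    """Monte Carlo estimate of uncollided transmission through a slab:
    for each particle, sample an exponential free path with total
    interaction coefficient mu (1/cm) and count those whose first
    collision site lies beyond the slab thickness (cm)."""
    rng = random.Random(seed)
    crossed = sum(1 for _ in range(n) if rng.expovariate(mu) > thickness)
    return crossed / n
```

For mu = 1 and a 1 cm slab the estimate converges on the analytic value exp(-1) ≈ 0.368, the check usually used when testing such samplers.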
International Nuclear Information System (INIS)
User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user-oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.
An improved UO2 thermal conductivity model in the ELESTRES computer code
International Nuclear Information System (INIS)
This paper describes the improved UO2 thermal conductivity model for use in the ELESTRES (ELEment Simulation and sTRESses) computer code. The ELESTRES computer code models the thermal, mechanical and microstructural behaviour of a CANDU® fuel element under normal operating conditions. The main purpose of the code is to calculate fuel temperatures, fission gas release, internal gas pressure, fuel pellet deformation, and fuel sheath strains for fuel element design and assessment. It is also used to provide initial conditions for evaluating fuel behaviour during high temperature transients. The thermal conductivity of UO2 fuel is one of the key parameters that affect ELESTRES calculations. The existing ELESTRES thermal conductivity model has been assessed and improved based on a large amount of thermal conductivity data from measurements of irradiated and un-irradiated UO2 fuel with different densities. The UO2 thermal conductivity data cover 90% to 99% theoretical density of UO2, temperature up to 3027 K, and burnup up to 1224 MW·h/kg U. The improved thermal conductivity model, which is recommended for a full implementation in the ELESTRES computer code, has reduced the ELESTRES code prediction biases of temperature, fission gas release, and fuel sheath strains when compared with the available experimental data. This improved thermal conductivity model has also been checked with a test version of ELESTRES over the full ranges of fuel temperature, fuel burnup, and fuel density expected in CANDU fuel. (author)
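The 1/(A + B·T) phonon-scattering form with a burnup degradation term is the generic shape such UO2 conductivity models take. The sketch below uses representative literature magnitudes for the coefficients; they are illustrative assumptions, NOT the improved ELESTRES correlation described in the paper:

```python
def uo2_conductivity(T_K, burnup=0.0):
    """Illustrative UO2 thermal conductivity (W/m/K) in the common
    phonon-scattering form 1/(A + B*T), with a simple linear burnup
    degradation added to the thermal resistivity. Coefficients are
    representative magnitudes only, not the ELESTRES model."""
    A = 0.0452        # m*K/W, lattice (defect) resistivity term
    B = 2.46e-4       # m/W, phonon-phonon resistivity term
    C = 1.9e-5        # hypothetical added resistivity per MWh/kgU burnup
    return 1.0 / (A + B * T_K + C * burnup)
```

The form captures the two trends the abstract relies on: conductivity falls with temperature and degrades further with accumulated burnup.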
Computer code for space-time diagnostics of nuclear safety parameters
Energy Technology Data Exchange (ETDEWEB)
Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A. [National Research Nuclear Univ. MEPhI, Kashirskoe, 31, 115409, Moscow (Russian Federation)
2012-07-01
The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of RBMK-1000 reactor cores and their databases, on the basis of analytical methods for interrelating nuclear safety parameters. The code algorithms are based on the analysis of deviations between the physically measured values and the results of neutron-physical and thermal-hydraulic calculations. A discrepancy between the measured and calculated signals indicates an inconsistency between the performance of the physical device and its simulator. The diagnostics system can solve the following problems: identifying whether and when results become inconsistent, localizing failures, and identifying and quantifying the causes of the inconsistencies. These problems can be effectively solved only when the computer code works in real time, which places higher demands on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)
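The deviation analysis described above reduces, in its most basic form, to screening normalized residuals between measured and calculated channel signals. A generic sketch of that idea (not the actual ECRAN 3D algorithm; channel names and noise levels are made up):

```python
def flag_inconsistencies(measured, calculated, sigma, threshold=3.0):
    """Residual screening of measured vs. calculated signals: flag every
    channel whose deviation, normalized by the expected noise sigma,
    exceeds the threshold. Dicts map channel name -> value."""
    flags = []
    for name, m in measured.items():
        r = abs(m - calculated[name]) / sigma[name]
        if r > threshold:
            flags.append((name, round(r, 2)))
    return flags

# hypothetical usage: channel "ch2" deviates by 10 sigma and is flagged
flags = flag_inconsistencies({"ch1": 10.0, "ch2": 5.0},
                             {"ch1": 10.1, "ch2": 6.0},
                             {"ch1": 0.1, "ch2": 0.1})
```

Running such a test on every scan is what makes real-time operation, and hence code performance, the binding constraint the abstract mentions.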
Development of a computer code for thermal hydraulics of reactors (THOR). [BWR and PWR
Energy Technology Data Exchange (ETDEWEB)
Wulff, W
1975-01-01
The purpose of the advanced code development work is to construct a computer code for the prediction of thermohydraulic transients in water-cooled nuclear reactor systems. The fundamental formulation of fluid dynamics is to be based on the one-dimensional drift flux model for non-homogeneous, non-equilibrium flows of two-phase mixtures. Particular emphasis is placed on component modeling, automatic prediction of initial steady state conditions, inclusion of one-dimensional transient neutron kinetics, freedom in the selection of computed spatial detail, development of reliable constitutive descriptions, and modular code structure. Numerical solution schemes have been implemented to integrate simultaneously the one-dimensional transient drift flux equations. The lumped-parameter modeling analyses of thermohydraulic transients in the reactor core and in the pressurizer have been completed. The code development for the prediction of the initial steady state has been completed with preliminary representation of individual reactor system components. A program has been developed to predict critical flow expanding from a dead-ended pipe; the computed results have been compared and found in good agreement with idealized flow solutions. Transport properties for liquid water and water vapor have been coded and verified.
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy consumption related to data movement by using more and more cores on each compute node (''fat nodes'') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for Multicore/Manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
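One standard memory-locality measure in PIC codes of this kind is periodically sorting particles by the grid cell that contains them, so the gather/deposit loops walk grid memory sequentially instead of jumping around. A 1D sketch of the idea (an illustration of the technique, not PICSAR source code):

```python
def sort_particles_by_cell(x, dx, nx):
    """Reorder 1D particle positions by the index of the grid cell that
    contains them (cell width dx, nx cells). After sorting, consecutive
    particles touch consecutive grid nodes, improving cache reuse in the
    field-gather and current-deposit loops."""
    cell = [min(int(xi / dx), nx - 1) for xi in x]
    order = sorted(range(len(x)), key=lambda i: cell[i])
    return [x[i] for i in order]
```

In production codes the same reordering is applied to all particle arrays at once and amortized over many time steps, since particles drift only slowly between cells.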
Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I
International Nuclear Information System (INIS)
The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code
International Nuclear Information System (INIS)
This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing
User's manual for the vertical axis wind turbine performance computer code DARTER
Energy Technology Data Exchange (ETDEWEB)
Klimas, P. C.; French, R. E.
1980-05-01
The computer code DARTER (DARrieus Turbine, Elemental Reynolds number) is an aerodynamic performance/loads prediction scheme based upon the conservation of momentum principle. It is the latest evolution in a sequence which began with a model developed by Templin of NRC, Canada and progressed through the Sandia National Laboratories-developed SIMOSS (SImple MOmentum, Single Streamtube) and DART (DARrieus Turbine) to DARTER.
CPS: a continuous-point-source computer code for plume dispersion and deposition calculations
Energy Technology Data Exchange (ETDEWEB)
Peterson, K.R.; Crawford, T.V.; Lawson, L.A.
1976-05-21
The continuous-point-source computer code calculates concentrations and surface deposition of radioactive and chemical pollutants at distances from 0.1 to 100 km, assuming a Gaussian plume. The basic input is atmospheric stability category and wind speed, but a number of refinements are also included.
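The Gaussian plume assumed by the code is the standard ground-reflecting formula; the concentration at a receptor depends on source strength, wind speed, effective release height, and the dispersion parameters set by the stability category. A minimal sketch of that formula (generic textbook form, not the CPS implementation):

```python
import math

def plume_concentration(Q, u, y, z, H, sig_y, sig_z):
    """Ground-reflecting Gaussian plume concentration (e.g. g/m^3 for Q
    in g/s): receptor at crosswind offset y and height z (m), wind speed
    u (m/s), effective release height H (m), and dispersion parameters
    sig_y, sig_z (m) taken from the atmospheric stability category."""
    crosswind = math.exp(-y**2 / (2.0 * sig_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sig_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sig_z**2)))  # image term = ground reflection
    return Q * crosswind * vertical / (2.0 * math.pi * u * sig_y * sig_z)
```

The second exponential in the vertical term is the image source that enforces zero deposition flux through the ground in the simplest (non-depositing) version of the model.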
The community project COSA: comparison of geo-mechanical computer codes for salt
International Nuclear Information System (INIS)
Two benchmark problems related to waste disposal in salt were tackled by ten European organisations using twelve rock-mechanics finite element computer codes. The two problems were of increasing complexity: first a hypothetical verification case and then the simulation of a laboratory experiment. The project provided a snapshot of the current combined expertise of European organisations in the modelling of salt behaviour.
SIVAR - Computer code for simulation of fuel rod behavior in PWR during fast transients
International Nuclear Information System (INIS)
Fuel rod behavior during steady-state and transient operation is studied. A computer code was developed for simulating PWR-type rods; however, it can be adapted to simulate other rod types. A finite difference method was used. (E.G.)
SACRD: a data base for fast reactor safety computer codes, general description
Energy Technology Data Exchange (ETDEWEB)
Greene, N.M.; Forsberg, V.M.; Raiford, G.B.; Arwood, J.W.; Simpson, D.B.; Flanagan, G.F.
1979-01-01
SACRD is a data base of material properties and other handbook data needed in computer codes used for fast reactor safety studies. Data are available in the thermodynamics, heat transfer, fluid mechanics, structural mechanics, aerosol transport, meteorology, neutronics, and dosimetry areas. Tabular, graphical and parameterized data are provided in many cases. A general description of the SACRD system is presented in the report.
A Computer Code For Evaluation of Design Parameters of Concrete Piercing Earth Shock Missile Warhead
Directory of Open Access Journals (Sweden)
P. K. Roy
1985-10-01
A simple and reliable computer code has been devised for evaluating various design parameters and predicting the penetration performance of a concrete-piercing earth shock missile warhead; it will be useful to designers of earth-penetrating weapon systems.
Application of Multiple Description Coding for Adaptive QoS Mechanism for Mobile Cloud Computing
Directory of Open Access Journals (Sweden)
Ilan Sadeh
2014-02-01
Multimedia transmission over cloud infrastructure is a hot research topic worldwide. It is strongly related to video streaming, VoIP, mobile networks, and computer networks. The goal is a reliable integration of telephony, video and audio transmission, computing, and broadband transmission based on cloud computing. One promising approach to pave the way for mobile multimedia and cloud computing is Multiple Description Coding (MDC): TCP/IP and similar protocols would be used for the transmission of text files, while the MDC “Send and Forget” algorithm would serve as the transmission method for multimedia over the cloud. Multiple Description Coding would improve the quality of service and would provide a new service of rate-adaptive streaming. This paper presents a new approach for improving the quality of multimedia and other services in the cloud by using Multiple Description Coding (MDC). First, the MDC Send and Forget algorithm is compared with existing protocols such as TCP/IP, UDP, RTP, etc. Then the achievable rate region for the MDC system is evaluated. Finally, a new subset of quality of service that considers blocking and fidelity losses in a multi-terminal multimedia network is considered.
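The core MDC idea is that each description alone yields a usable (degraded) reconstruction, while all descriptions together yield the full signal. A toy two-description coder (even/odd sample split with interpolation of a lost description; an illustration of the principle, not the paper's coder):

```python
def split_descriptions(samples):
    """Make two descriptions from one stream: even- and odd-indexed samples."""
    return samples[0::2], samples[1::2]

def reconstruct(even, odd=None):
    """If both descriptions arrive, interleave them exactly. If only the
    even-sample description arrives, estimate each missing odd sample by
    averaging its neighbouring even samples."""
    out = []
    if odd is None:
        for i, e in enumerate(even):
            out.append(e)
            nxt = even[i + 1] if i + 1 < len(even) else e
            out.append((e + nxt) / 2.0)   # interpolated stand-in
        return out
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Losing one description therefore degrades fidelity gracefully instead of stalling the stream, which is exactly the property that suits a "send and forget" transport with no retransmission.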
International Nuclear Information System (INIS)
The computer code SSICC (Safety and Stability of Internally Cooled Conductors) has successfully simulated the multiple stability regions observed experimentally by Lue, Miller, and Dresner of Oak Ridge National Laboratory. The simulation requires asymmetrical boundary conditions and a heating pulse duration short compared to the time for reflection of the transient pressure wave back into the heated region of the conductor
TEMP: a computer code to calculate fuel pin temperatures during a transient
International Nuclear Information System (INIS)
The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method
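The Crank-Nicolson method named at the end of the abstract averages the explicit and implicit schemes, leaving a tridiagonal system each step. A slab-geometry sketch with fixed-temperature ends and a Thomas-algorithm solve (TEMP itself works in axisymmetric concentric cylinders; this simpler geometry just shows the implicit-explicit blend):

```python
def crank_nicolson_step(T, r):
    """One Crank-Nicolson step of 1-D conduction dT/dt = alpha d2T/dx2
    on a slab with fixed-temperature ends; r = alpha*dt/dx**2."""
    n = len(T)
    # right-hand side: explicit half of the scheme
    b = [0.0] * n
    b[0], b[n - 1] = T[0], T[n - 1]               # Dirichlet ends held fixed
    for i in range(1, n - 1):
        b[i] = T[i] + 0.5 * r * (T[i - 1] - 2.0 * T[i] + T[i + 1])
    # implicit half: -r/2*T[i-1] + (1+r)*T[i] - r/2*T[i+1] = b[i]
    lo, di, up = -0.5 * r, 1.0 + r, -0.5 * r
    cp = [0.0] * n                                 # Thomas forward sweep
    bp = [0.0] * n
    bp[0] = b[0]                                   # boundary row is identity
    for i in range(1, n - 1):
        m = di - lo * cp[i - 1]
        cp[i] = up / m
        bp[i] = (b[i] - lo * bp[i - 1]) / m
    bp[n - 1] = b[n - 1]                           # boundary row is identity
    out = [0.0] * n                                # back substitution
    out[n - 1] = bp[n - 1]
    for i in range(n - 2, -1, -1):
        out[i] = bp[i] - cp[i] * out[i + 1]
    return out
```

Starting from cold interior nodes between two hot boundaries, one step pulls the interior symmetrically toward the boundary temperatures while the ends stay fixed.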
The computer code CONDIF-01 (release 2) for transient convective-conductive heat transfer
International Nuclear Information System (INIS)
CONDIF-01 is a finite element computer code developed at J.R.C. Ispra to solve natural and forced convection problems, for use in Post Accident Heat Removal studies following a hypothetical fast-reactor core meltdown. The new version of the code is capable of analysing problems in which there exists initially a liquid (solid) region which may change phase to solid (liquid), as time proceeds. A variant of the enthalpy method is employed to model the phase change process. The presence of structures enclosing the liquid (solid) region is accounted for, but such structures are assumed to remain in the solid phase. Plane and axisymmetric situations may be analysed. The essential characteristics of the code are outlined here. This report gives instructions for preparing input data to CONDIF-01, release 2, and describes two test problems in order to illustrate both the input and the output of the code
International Nuclear Information System (INIS)
This document is intended as a user/programmer manual for the TRANSENERGY-S computer code. The code represents an extension of the steady state ENERGY model, originally developed by E. Khan, to predict coolant and fuel pin temperatures in a single LMFBR core assembly during transient events. Effects which may be modelled in the analysis include temporal variation in gamma heating in the coolant and duct wall, rod power production, coolant inlet temperature, coolant flow rate, and thermal boundary conditions around the single assembly. Numerical formulations of energy equations in the fuel and coolant are presented, and the solution schemes and stability criteria are discussed. A detailed description of the input deck preparation is presented, as well as code logic flowcharts, and a complete program listing. TRANSENERGY-S code predictions are compared with those of two different versions of COBRA, and partial results of a 61 pin bundle test case are presented
Methods, algorithms and computer codes for calculation of electron-impact excitation parameters
Bogdanovich, P; Stonys, D
2015-01-01
We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize the multireference atomic wavefunctions which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of electron scattering codes are considered in the present work, both of them employing the configuration interaction method for inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. These versions differ only in the one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...
Computer code to interchange CDS and wave-drag geometry formats
Johnson, V. S.; Turnock, D. L.
1986-01-01
A computer program has been developed on the PRIME minicomputer to provide an interface for the passage of aircraft configuration geometry data between the Rockwell Configuration Development System (CDS) and a wireframe geometry format used by aerodynamic design and analysis codes. The interface program allows aircraft geometry which has been developed in CDS to be directly converted to the wireframe geometry format for analysis. Geometry which has been modified in the analysis codes can be transformed back to a CDS geometry file and examined for physical viability. Previously created wireframe geometry files may also be converted into CDS geometry files. The program provides a useful link between a geometry creation and manipulation code and analysis codes by providing rapid and accurate geometry conversion.
Recommendations for computer modeling codes to support the UMTRA groundwater restoration project
International Nuclear Information System (INIS)
The Uranium Mill Tailings Remedial Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended
A proposed framework for computational fluid dynamics code calibration/validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.
1993-12-31
The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.
Validation of computer codes used in safety analyses of CANDU power plants
International Nuclear Information System (INIS)
Since the 1960s, the CANDU industry has been developing and using scientific computer codes for designing and analysing CANDU power plants. In this endeavour, the industry has been following nuclear quality-assurance practices of the day, including verification and validation of design and analysis methodologies. These practices have resulted in a large body of experience and expertise in the development and application of computer codes and their associated documentation. Major computer codes used in safety analyses of operating plants and those under development have been, and continue to be, subjected to rigorous processes of development and application. To provide a systematic framework for the validation work done to date and planned for the future, the industry has decided to adopt the methodology of validation matrices for computer-code validation, similar to that developed by the Nuclear Energy Agency of the Organization for Economic Co-operation and Development and focused on thermalhydraulic phenomena in Light Water Reactors (LWR). To manage the development of validation matrices for CANDU power plants and to engage experts who can work in parallel on several topics, the CANDU task has been divided into six scientific disciplines. Teams of specialists in each discipline are developing the matrices. A review of each matrix will show if there are gaps or insufficient data for validation purposes and will thus help to focus future research and development, if needed. Also, the industry is examining its suite of computer codes, and their specific, additional validation needs, if any, will follow from the work on the validation matrices. The team in System Thermalhydraulics is the furthest advanced, since it had the earliest start and the international precedent on LWRs, and has developed its validation matrix. The other teams are at various stages in this multiphase, multi-year program, and their progress to date is presented. (author)
Joint Compute and Forward for the Two Way Relay Channel with Spatially Coupled LDPC Codes
Hern, Brett
2012-01-01
We consider the design and analysis of coding schemes for the binary input two way relay channel with erasure noise. We are particularly interested in reliable physical layer network coding in which the relay performs perfect error correction prior to forwarding messages. The best known achievable rates for this problem can be achieved through either decode and forward or compute and forward relaying. We consider a decoding paradigm called joint compute and forward which we numerically show can achieve the best of these rates with a single encoder and decoder. This is accomplished by deriving the exact performance of a message passing decoder based on joint compute and forward for spatially coupled LDPC ensembles.
Modeling And Simulation Of Bar Code Scanners Using Computer Aided Design Software
Hellekson, Ron; Campbell, Scott
1988-06-01
Many optical systems have demanding requirements to package the system in a small three-dimensional space. The use of computer graphic tools can be a tremendous aid to the designer in analyzing the optical problems created by smaller and less costly systems. The Spectra Physics grocery store bar code scanner employs an especially complex three-dimensional scan pattern to read bar code labels. By using a specially written program which interfaces with a computer aided design system, we have simulated many of the functions of this complex optical system. In this paper we will illustrate how a recent version of the scanner has been designed. We will discuss the use of computer graphics in the design process, including interactive tweaking of the scan pattern, analysis of collected light, analysis of the scan pattern density, and analysis of the manufacturing tolerances used to build the scanner.
ANIGAM: a computer code for the automatic calculation of nuclear group data
International Nuclear Information System (INIS)
The computer code ANIGAM consists mainly of the well-known programmes GAM-I and ANISN, as well as of a subroutine which reads the THERMOS cross section library and prepares it for ANISN. ANIGAM has been written for the automatic calculation of microscopic and macroscopic cross sections of light water reactor fuel assemblies. In a single computer run, both the cross sections representative of fuel assemblies in reactor core calculations and the cross sections of each cell type of a fuel assembly are calculated. The calculated data are delivered by an auxiliary programme to EXTERMINATOR and CITATION for subsequent diffusion or burnup calculations. This report contains a detailed description of the computer codes and methods used in ANIGAM, a description of the subroutines and of the OVERLAY structure, and an input and output description. (orig.)
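The few-group constants such lattice codes pass on to diffusion codes are, at heart, flux-weighted averages over fine energy groups. A minimal, generic illustration of that collapsing step (not ANIGAM's actual routines):

```python
import numpy as np

def collapse(sigma, phi, groups):
    """Flux-weighted collapse of fine-group cross sections into coarse
    groups; `groups` lists the [lo, hi) fine-group range of each coarse
    group, and `phi` is the weighting spectrum."""
    out = []
    for lo, hi in groups:
        w = phi[lo:hi]
        out.append(np.dot(sigma[lo:hi], w) / w.sum())
    return np.array(out)
```

For example, fine-group cross sections [2, 4] weighted by fluxes [1, 3] collapse to a single-group value of 3.5.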
Speedup of MCACE, a Monte Carlo code for evaluation of shielding safety, by parallel computer, 1
International Nuclear Information System (INIS)
In order to improve the accuracy of shielding analysis, we have modified MCACE, a Monte Carlo code for shielding analysis, to execute on a parallel computer. Algorithms suitable for efficient parallelization have been investigated by static and dynamic analyses of the code. This includes a strategy in which new units of batches are assigned to the idling cells dynamically during the execution. The efficiency of parallelization has been measured by using a simulator of a parallel computer. It is found that the load factor of all cells reached nearly 100%, and consequently, it can be said that the most effective parallelization has been achieved. The simulator has estimated the effect of parallelization as a speedup of 7.13 times when a sample problem of 8 batches, 400 particles per batch, is loaded on a parallel computer equipped with 8 cells. (author)
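The dynamic assignment of batches to idling cells described above is essentially greedy list scheduling, and its predicted speedup can be estimated with a small scheduler model. This sketch is our own illustration of the idea, not the MCACE simulator:

```python
import heapq

def simulate_speedup(batch_times, n_cells):
    """Greedy dynamic scheduling: each batch, in order, goes to the cell
    that becomes idle first.  Returns serial_time / parallel makespan."""
    cells = [0.0] * n_cells          # finish time of each cell
    heapq.heapify(cells)
    for t in batch_times:
        heapq.heappush(cells, heapq.heappop(cells) + t)
    return sum(batch_times) / max(cells)
```

With equal-length batches and as many cells as batches, the model gives the ideal speedup equal to the cell count; uneven batch lengths pull the speedup below that bound, as the abstract's measured 7.13 on 8 cells suggests.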
Multiple frequencies sequential coding for SSVEP-based brain-computer interface.
Directory of Open Access Journals (Sweden)
Yangsong Zhang
BACKGROUND: Steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) has become one of the most promising modalities for a practical noninvasive BCI system. Owing both to the limited refresh rate of liquid crystal display (LCD) or cathode ray tube (CRT) monitors, and to the specific physiological response property that only a very small number of stimuli at certain frequencies can evoke strong SSVEPs, the available frequencies for SSVEP stimuli are limited. Therefore, it may not be enough to code multiple targets with the traditional frequency coding protocols, which poses a big challenge for the design of a practical SSVEP-based BCI. This study aimed to provide an innovative coding method to tackle this problem. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present a novel protocol termed multiple frequencies sequential coding (MFSC) for SSVEP-based BCI. In MFSC, multiple frequencies are sequentially used in each cycle to code the targets. To fulfill the sequential coding, each cycle is divided into several coding epochs, and during each epoch, a certain frequency is used. Obviously, different frequencies or the same frequency can be presented in the coding epochs, and different epoch sequences correspond to different targets. To show the feasibility of MFSC, we used two frequencies to realize four targets and carried out an offline experiment. The current study shows that: (1) MFSC is feasible and efficient; (2) the performance of SSVEP-based BCI based on MFSC can be comparable to some existing systems. CONCLUSIONS/SIGNIFICANCE: The proposed protocol could potentially implement many more targets with the limited available frequencies compared with the traditional frequency coding protocol. The efficiency of the new protocol was confirmed by a real-data experiment. We propose that SSVEP-based BCI under MFSC might be a promising choice in the future.
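With F available frequencies and E coding epochs per cycle, MFSC can distinguish F^E targets, which is how two frequencies over two epochs yield the paper's four targets. A small sketch of the codebook enumeration (function name and example frequencies are our own, illustrative choices):

```python
from itertools import product

def mfsc_codebook(freqs, n_epochs):
    """Enumerate MFSC stimulus sequences: each target is one ordered
    sequence of per-epoch flicker frequencies (repeats allowed)."""
    return list(product(freqs, repeat=n_epochs))

# Two frequencies, two epochs -> four distinguishable targets.
codes = mfsc_codebook([10.0, 15.0], 2)
```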
ELESTRES 2.1 computer code for high burnup CANDU fuel performance analysis
International Nuclear Information System (INIS)
The ELESTRES (ELEment Simulation and sTRESses) computer code models the thermal, mechanical and micro structural behaviours of CANDU® fuel element under normal operating conditions. The main purpose of the code is to calculate fuel temperatures, fission gas release, internal gas pressure, fuel pellet deformation, and fuel sheath strains in fuel element design analysis and assessments. It is also used to provide initial conditions for evaluating fuel behaviour during high temperature transients. ELESTRES 2.1 was developed for high burnup fuel application, based on an industry standard tool version of the code, through the implementation or modification to code models such as fission gas release, fuel pellet densification, flux depression (radial power distribution in the fuel pellet), fuel pellet thermal conductivity, fuel sheath creep, fuel sheath yield strength, fuel sheath oxidation, two dimensional heat transfer between the fuel pellet and the fuel sheath; and an automatic finite element meshing capability to handle various fuel pellet shapes. The ELESTRES 2.1 code design and development was planned, implemented, verified, validated, and documented in accordance with the AECL software quality assurance program, which meets the requirements of the Canadian Standards Association standard for software quality assurance CSA N286.7-99. This paper presents an overview of the ELESTRES 2.1 code with descriptions of the code's theoretical background, solution methodologies, application range, input data, and interface with other analytical tools. Code verification and validation results, which are also discussed in the paper, have confirmed that ELESTRES 2.1 is capable of modelling important fuel phenomena and the code can be used in the design assessment and the verification of high burnup fuels. (author)
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) techniques, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, constitute a secret key shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
MOLOCH computer code for molecular-dynamics simulation of processes in condensed matter
Directory of Open Access Journals (Sweden)
Derbenev I.V.
2011-01-01
Theoretical and experimental investigation into the properties of condensed matter is one of the mainstreams of RFNC-VNIITF scientific activity. The method of molecular dynamics (MD) is an innovative method of theoretical materials science. Modern supercomputers allow the direct simulation of collective effects in multibillion-atom samples, making it possible to model physical processes on the atomistic level, including material response to dynamic load, radiation damage, the influence of defects and alloying additions upon material mechanical properties, and the aging of actinides. During the past ten years, the computer code MOLOCH has been developed at RFNC-VNIITF. It is a parallel code suitable for massively parallel computing. Modern programming techniques were used to make the code almost 100% efficient. Practically all instruments required for modelling were implemented in the code: a potential builder for different materials, simulation of physical processes in arbitrary 3D geometry, and calculated-data processing. A set of tests was developed to analyse the efficiency of the algorithms. It can be used to compare codes with different MD implementations against each other.
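The core time stepper of virtually every MD code is a symplectic integrator such as velocity Verlet, whose good long-term energy conservation is what makes billion-step simulations meaningful. A generic single-particle sketch (our illustration, not MOLOCH's parallel implementation):

```python
def velocity_verlet(x, v, force, dt, steps, m=1.0):
    """Velocity-Verlet integration: update position with the current
    force, then velocity with the average of old and new forces."""
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * f / m * dt**2
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
    return x, v
```

For a harmonic force f = -x, a thousand steps of dt = 0.01 track the exact trajectory closely while the total energy stays constant to within the scheme's O(dt^2) fluctuation.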
Radiometry of UVB: Comparisons with results of Lowtran 7 and Premar computer codes
International Nuclear Information System (INIS)
A prototype UVMFR instrument (Ultra Violet MultiFilter Radiometer) with 4 different wavelength sensors in the UVB band has been thoroughly tested in Brasimone (44.11 deg N; 11.11 deg E) prior to its installation for long-term measurement campaigns of the UVB flux at the ground (890 m local height) with separation of the direct and diffuse components of radiation. It has also been used for the first validation of a new computer code (PREMAR) developed by ENEA which solves the radiation transfer equation in the atmosphere by a Monte Carlo approach. The new code considers a multilayer geometry, allows for the computation of albedo effects and exploits the rich library and potentialities of the LOWTRAN-7 U.S. computer code. With reference to the best available data, in days with optimal meteorology to avoid significant cloud effects, an intercomparison of the instrument and code results has been performed at different times (varying solar zenith angles). A good agreement has been obtained between experiment and calculations as to the diffuse/total radiation ratio, and the deduced local albedo has been found to correspond rather well to theoretical estimates
International Nuclear Information System (INIS)
The REBO computer code has been written for the automatic generation, with relatively simple input data, of cylindrical coordinates for the three dimensional finite element grid of thick-walled nozzle-cylindrical vessel junctions with curved transitions. REBO is a FORTRAN IV code written for the IBM System 370. The main features of the code are presented and a user's manual is given
Energy Technology Data Exchange (ETDEWEB)
Strenge, D.L.; Peloquin, R.A.
1981-04-01
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one and fifty year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contribution to dose by radionuclide and exposure mode are also printed if requested.
Experimental assessment of computer codes used for safety analysis of integral reactors
Energy Technology Data Exchange (ETDEWEB)
Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)
1995-09-01
Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation, and passive RHRS through in-reactor HXs. These features defined the main trends in experimental investigations and verification efforts for the computer codes applied. The paper briefly reviews the performed experimental investigation of the thermohydraulics of AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. The assessment of RELAP5/mod3 applicability for accident analysis in integral reactors is presented.
International Nuclear Information System (INIS)
The ASSERT subchannel code has been developed specifically to model flow and phase distributions within CANDU fuel bundles. ASSERT uses a drift-flux model which permits the phases to have unequal velocities, and can thus model phase separation tendencies which may occur in horizontal flow. The basic principles of ASSERT are outlined, and computed results are compared against data from various experiments for validation purposes. The paper concludes with an example of the use of the code to predict critical heat flux in CANDU geometries
Aquelarre. A computer code for fast neutron cross sections from the statistical model
International Nuclear Information System (INIS)
A Fortran V computer code for the Univac 1108/6, using the partial statistical (or compound nucleus) model, is described. The code calculates fast neutron cross sections for the (n, n'), (n, p), (n, d) and (n, α) reactions, and the angular distributions and Legendre moments for the (n, n) and (n, n') processes, in heavy and intermediate spherical nuclei. A local Optical Model with spin-orbit interaction for each level is employed, allowing for the width fluctuation and Moldauer corrections, as well as the inclusion of discrete and continuous levels. (Author) 67 refs
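The Legendre moments tabulated by such codes are the projections a_l = (2l+1)/2 * ∫ f(μ) P_l(μ) dμ of the angular distribution f(μ) onto the Legendre polynomials. A generic numerical sketch of that projection (ours, not Aquelarre's Fortran implementation):

```python
import numpy as np

def legendre_moments(f, lmax, npts=32):
    """Moments a_l = (2l+1)/2 * integral_{-1}^{1} f(mu)*P_l(mu) dmu,
    evaluated by Gauss-Legendre quadrature."""
    mu, w = np.polynomial.legendre.leggauss(npts)
    vals = f(mu)
    moments = []
    for l in range(lmax + 1):
        # Coefficient vector [0,...,0,1] selects P_l in the Legendre basis.
        P_l = np.polynomial.legendre.legval(mu, [0.0] * l + [1.0])
        moments.append((2 * l + 1) / 2.0 * np.sum(w * vals * P_l))
    return moments
```

For the mildly anisotropic distribution f(μ) = 1 + μ, the moments are a_0 = 1, a_1 = 1, and a_l = 0 for l ≥ 2, which the quadrature reproduces to rounding error.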
International Nuclear Information System (INIS)
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one and fifty year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contribution to dose by radionuclide and exposure mode are also printed if requested
Development of a new generation solid rocket motor ignition computer code
Foster, Winfred A., Jr.; Jenkins, Rhonald M.; Ciucci, Alessandro; Johnson, Shelby D.
1994-01-01
This report presents the results of experimental and numerical investigations of the flow field in the head-end star grain slots of the Space Shuttle Solid Rocket Motor. This work provided the basis for the development of an improved solid rocket motor ignition transient code which is also described in this report. The correlation between the experimental and numerical results is excellent and provides a firm basis for the development of a fully three-dimensional solid rocket motor ignition transient computer code.
Once-through CANDU reactor models for the ORIGEN2 computer code
International Nuclear Information System (INIS)
Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % 235U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given
Once-through CANDU reactor models for the ORIGEN2 computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A.G.; Bjerke, M.A.
1980-11-01
Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % 235U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.
International Nuclear Information System (INIS)
Information on the status of the work on development of a system of nuclear safety codes for fast liquid-metal reactors is presented in this paper. The purpose of the work is to create an instrument for NPP neutronic, thermohydraulic and strength justification, including human and environmental radiation safety. The main task to be solved by the code system being developed is the analysis of the broad spectrum of phenomena taking place at the NPP (including the reactor itself, NPP components, containment rooms, the industrial site and the surrounding area) and analysis of the impact of regular and accidental releases on the environment. The code system is oriented toward fully integrated modeling of NPP behavior in a coupled formulation, accounting for the wide range of significant phenomena taking place at the NPP under normal and accident conditions. It is based on models that meet the state-of-the-art knowledge level. The codes incorporate advanced numerical methods and modern programming technologies oriented toward high-performance computing systems. Information on the status of the work on verification of the separate codes of the code system is also presented. (author)
V.S.O.P. (99/05) computer code system
Energy Technology Data Exchange (ETDEWEB)
Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.
2005-11-01
V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate the reactor operation from the initial core towards the equilibrium core. V.S.O.P. (99/05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions have been included concerning the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebble-bed cores. This latest code version was developed and tested under the Windows XP operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required - if desired - for storage of the source code (approx. 65000 Fortran statements). (orig.)
Mai, Zhiming; Samuelson, John
1998-01-01
A family of genes, called ariel, is named for and encodes asparagine-rich Entamoeba histolytica antigens containing 2 to 16 octapeptide repeats. Ariel proteins, which are constitutively expressed by trophozoites, belong to a large antigen family that includes the serine-rich E. histolytica protein (SREHP), an amebic vaccine candidate.
High-performance computational fluid dynamics: a custom-code approach
Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.
2016-07-01
We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.
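The steady pressure-driven channel flow used as a benchmark above has the exact Poiseuille profile u(y) = (-dp/dx)/(2μ) · y(h - y), which makes it a natural validation case. A serial finite-difference sketch of that case (our own, with no MPI, not the TPLS Fortran code); the second-order stencil is exact for quadratics, so the discrete solution matches the analytic profile to rounding error:

```python
import numpy as np

def channel_flow(n, h=1.0, dpdx=-1.0, mu=1.0):
    """Steady Poiseuille flow: solve mu * u'' = dp/dx with u = 0 at both
    walls, using second-order central differences on n interior nodes."""
    dy = h / (n + 1)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dy**2
    b = np.full(n, dpdx / mu)
    return np.linalg.solve(A, b)   # interior-node velocities
```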
Computer code for the thermal-hydraulic analysis of ITU TRIGA Mark-II reactor
International Nuclear Information System (INIS)
Istanbul Technical University (ITU) TRIGA Mark-II reactor core consists of ninety vertical cylindrical elements located in five rings. Sixty-nine of them are fuel elements. The reactor is operated and cooled with natural convection by pool water, which is also cooled and purified in external coolant circuits by forced convection. This characteristic leads to consideration of both natural and forced convection heat transfer in a 'porous-medium analysis'. The safety analysis of the reactor requires a thermal-hydraulic model of the reactor to determine the thermal-hydraulic parameters in each mode of operation. In this study, a computer code called TRIGA-PM (TRIGA - Porous Medium) for the thermal-hydraulic analysis of the ITU TRIGA Mark-II reactor is considered. The code has been developed to obtain velocity, pressure and temperature distributions in the reactor pool as a function of core design parameters and pool configuration. The code is a transient, thermal-hydraulic code and requires geometric and physical modelling parameters. In the model, although the reactor core is treated as a porous medium, the rest of the reactor pool is considered partly as a continuum and partly as a porous medium. The COMMIX-1C code is used for benchmarking TRIGA-PM. For normal operating conditions of the reactor, estimations of TRIGA-PM are in good agreement with those of COMMIX-1C. After some further improvements, the code will be employed for the analysis of a LOCA scenario, which cannot be analysed by COMMIX-1C and the other multi-purpose codes, considering a break at one of the beam tubes of the reactor
International Nuclear Information System (INIS)
The RALLY computer code package (RALLY pack) is a set of computer codes intended for the reliability analysis of complex systems, with a view to risk analysis. Three of the six codes are discussed, presenting their purpose, input description, calculation methods and results obtained with each of them. The computer codes are: TREBIL, to obtain the logical equivalent of the fault tree; CRESSEX, to obtain the minimal cut sets and the point values of system unreliability and unavailability; and STREUSL, to calculate the dispersion of those values around the mean. Although CRESSEX, in the version available at CNEN, uses a rather lengthy method to obtain the minimal cut sets on an HB-CNEN system, the three computer programs show good results, especially STREUSL, which permits the simulation of various components. (E.G.)
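The minimal cut sets that a code like CRESSEX computes can, for small trees, be obtained by top-down expansion of the gates followed by absorption of non-minimal sets. A compact sketch (the tree representation and names are hypothetical, not the RALLY algorithms):

```python
from itertools import product

def minimal_cut_sets(gates, top):
    """Top-down expansion of a coherent fault tree into minimal cut sets.

    gates maps a gate name to ('AND'|'OR', [children]); anything not in
    gates is a basic event.
    """
    def expand(node):
        if node not in gates:                     # basic event: singleton cut set
            return [frozenset([node])]
        kind, children = gates[node]
        child_sets = [expand(c) for c in children]
        if kind == 'OR':                          # union of the children's cut sets
            return [cs for sets in child_sets for cs in sets]
        # AND gate: every combination of one cut set per child, merged
        return [frozenset().union(*combo) for combo in product(*child_sets)]
    cuts = expand(top)
    # absorption: drop any cut set that strictly contains another cut set
    return {c for c in cuts if not any(o < c for o in cuts)}
```

For TOP = (A OR B) AND (A OR C), expansion gives {A}, {A,B}, {A,C}, {B,C}, and absorption leaves the minimal sets {A} and {B,C}. Real PSA codes use far more scalable algorithms; this only illustrates the definition.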
Laser bar code applied in computer aided design of power fittings
Yang, Xiaohong; Yang, Fan
2010-10-01
A computer-aided process planning system based on laser bar code technology is developed to automate and standardize the preparation of process documents. The system sorts fittings by analyzing their types, structures, dimensions, materials, and process characteristics, then groups and encodes fittings with similar technological characteristics based on the theory of Group Technology (GT). The system produces standard technology procedures using the integrative-parts method and stores them in process databases. To obtain the technology procedure for a fitting, users need only scan its bar code with a laser code reader. The system then generates the process documents using a decision-tree method and prints the process cards automatically. The software has already been applied in some power stations and is praised by its users.
International Nuclear Information System (INIS)
A computer code (MIGSTEM-FIT) has been developed to determine the prediction parameters (retardation factor, water flow velocity, dispersion coefficient, etc.) of radionuclide migration in a soil layer from the concentration distribution of the radionuclide in the soil layer or in the effluent. In this code, the solution of the predictive equation for radionuclide migration is compared with the measured concentration distribution, and the most appropriate parameter values are determined by the flexible tolerance method. The validity of the finite difference method, one of the methods used to solve the predictive equation, was confirmed by comparison with the analytical solution, and the validity of the fitting method was confirmed by fitting concentration distributions calculated from known parameters. Examination of the errors showed that the error in the parameters obtained with this code was smaller than that in the measured concentration distribution. (author)
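The fitting idea, adjusting migration parameters until the predicted profile matches the measured one, can be illustrated with a brute-force search standing in for the flexible tolerance method. The instantaneous point-source solution and search grids below are illustrative, not MIGSTEM-FIT's:

```python
import numpy as np

def fit_migration_parameters(x, t, conc):
    """Recover flow velocity v and dispersion coefficient D from a measured
    concentration profile, assuming the 1-D unit-mass pulse solution
    C(x,t) = exp(-(x - v t)^2 / (4 D t)) / sqrt(4 pi D t).
    Brute-force SSE minimisation stands in for the flexible tolerance method.
    """
    def model(v, D):
        return np.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)
    best = None
    for v in np.linspace(0.1, 1.0, 91):        # candidate velocities, step 0.01
        for D in np.linspace(0.02, 0.5, 97):   # candidate dispersion coeffs, step 0.005
            sse = np.sum((model(v, D) - conc) ** 2)
            if best is None or sse < best[0]:
                best = (sse, v, D)
    return best[1], best[2]
```

With noiseless synthetic data whose true parameters lie on the grid, the search recovers them exactly, mirroring the code's self-check of fitting distributions calculated from known parameters.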
Enhancement of the Probabilistic CEramic Matrix Composite ANalyzer (PCEMCAN) Computer Code
Shah, Ashwin
2000-01-01
This report is the final technical report for Order No. C-78019-J, entitled "Enhancement of the Probabilistic Ceramic Matrix Composite Analyzer (PCEMCAN) Computer Code." The scope of the enhancement is the probabilistic evaluation of the D-matrix terms in the MAT2 and MAT9 material property cards (available in the CEMCAN code) for MSC/NASTRAN. Technical activities performed during the period June 1, 1999 through September 3, 1999 are summarized, and the final version of the enhanced PCEMCAN code and revisions to the User's Manual are delivered with this report. The activities performed were discussed with the NASA Project Manager during the performance period. The enhanced capabilities have been demonstrated using sample problems.
Moreno, Maggie; Baggio, Giosuè
2015-07-01
In signaling games, a sender has private access to a state of affairs and uses a signal to inform a receiver about that state. If no common association of signals and states is initially available, sender and receiver must coordinate to develop one. How do players divide coordination labor? We show experimentally that, if players switch roles at each communication round, coordination labor is shared. However, in games with fixed roles, coordination labor is divided: Receivers adjust their mappings more frequently, whereas senders maintain the initial code, which is transmitted to receivers and becomes the common code. In a series of computer simulations, player and role asymmetry as observed experimentally were accounted for by a model in which the receiver in the first signaling round has a higher chance of adjusting its code than its partner. From this basic division of labor among players, certain properties of role asymmetry, in particular correlations with game complexity, are seen to follow.
WOLF: a computer code package for the calculation of ion beam trajectories
Energy Technology Data Exchange (ETDEWEB)
Vogel, D.L.
1985-10-01
The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles are then traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram PISA forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results.
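The vacuum-field stage, solving Poisson's (here Laplace's) equation inside a boundary, can be sketched with a Jacobi iteration on a Cartesian grid. WOLF itself works on a triangular lattice, so this is only an analogy, with illustrative names and grid sizes:

```python
import numpy as np

def solve_laplace(n=24, iterations=6000):
    """Jacobi solution of Laplace's equation on the unit square, with
    Dirichlet boundary values taken from the harmonic function
    phi = x^2 - y^2; returns the maximum error against that function.
    """
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing='ij')
    exact = X**2 - Y**2
    phi = np.zeros((n, n))
    phi[0, :], phi[-1, :] = exact[0, :], exact[-1, :]   # Dirichlet boundary
    phi[:, 0], phi[:, -1] = exact[:, 0], exact[:, -1]
    for _ in range(iterations):                         # Jacobi sweeps
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2])
    return np.abs(phi - exact).max()
```

Since the discrete Laplacian of x² − y² vanishes exactly, the iteration converges to the analytic field to round-off, the same kind of self-check a field solver's test suite would use.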
Validation of the transportation computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND
International Nuclear Information System (INIS)
The computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND were used to estimate radiation doses from the transportation of radioactive material in the Department of Energy Programmatic Spent Nuclear Fuel Management and Idaho National Engineering Laboratory Environmental Restoration and Waste Management Programs Environmental Impact Statement. HIGHWAY and INTERLINE were used to estimate transportation routes for truck and rail shipments, respectively. RADTRAN 4 was used to estimate collective doses from incident-free transportation and the risk (probability x consequence) from transportation accidents. RISKIND was used to estimate incident-free radiation doses for maximally exposed individuals and the consequences from reasonably foreseeable transportation accidents. The purpose of this analysis is to validate the estimates made by these computer codes; critiques of the conceptual models used in RADTRAN 4 are also discussed. Validation is defined as ''the test and evaluation of the completed software to ensure compliance with software requirements.'' In this analysis, validation means that the differences between the estimates generated by these codes and independent observations are small (i.e., within the acceptance criterion established for the validation analysis). In some cases, the independent observations used in the validation were measurements; in other cases, the independent observations used in the validation analysis were generated using hand calculations. The results of the validation analyses performed for HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND show that the differences between the estimates generated using the computer codes and independent observations were small. Based on the acceptance criterion established for the validation analyses, the codes yielded acceptable results; in all cases the estimates met the requirements for successful validation
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
International Nuclear Information System (INIS)
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
International Nuclear Information System (INIS)
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
Energy Technology Data Exchange (ETDEWEB)
McCann, R.A.; Lowery, P.S.; Lessor, D.L.
1987-09-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.
Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes
Piro, Markus Hans Alexander
Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. In effect, this undertaking collects the experience and observations of a past generation of nuclear engineers and scientists in a form useful for future design work. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
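The core idea, that equilibrium follows from minimisation of the Gibbs energy expressed through chemical potentials, can be shown for the simplest possible system, an ideal A ↔ B mixture, where the minimum also has a closed form. This is a toy sketch with illustrative dimensionless potentials, not the thesis solver:

```python
import numpy as np

def equilibrium_fraction(g_a=0.0, g_b=1.0):
    """Locate the equilibrium extent x of an ideal A <-> B reaction by direct
    minimisation of the dimensionless Gibbs energy
        G/RT = (1-x) g_a + x g_b + (1-x) ln(1-x) + x ln x,
    and compare with the closed form x = 1 / (1 + exp(g_b - g_a)).
    """
    x = np.linspace(1e-6, 1.0 - 1e-6, 200001)   # avoid log(0) at the ends
    gibbs = (1 - x) * g_a + x * g_b + (1 - x) * np.log(1 - x) + x * np.log(x)
    x_num = x[np.argmin(gibbs)]                  # grid minimiser of G/RT
    x_exact = 1.0 / (1.0 + np.exp(g_b - g_a))    # from dG/dx = 0
    return x_num, x_exact
```

Setting dG/dx = 0 gives ln(x/(1−x)) = g_a − g_b, i.e. equal chemical potentials of A and B at equilibrium; production solvers do the same minimisation for many species and phases under element-balance constraints.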
Manual of a suite of computer codes, EXPRESS (EXact PREparedness Supporting System)
International Nuclear Information System (INIS)
The emergency response support system EXPRESS (EXact PREparedness Supporting System) has been constructed at JAERI for low-cost engineering workstations running under the UNIX operating system. The purpose of this system is to provide real-time predictions of the areas affected by radioactivity discharged into the atmosphere from nuclear facilities. The computational models in EXPRESS are the mass-consistent wind field model EXPRESS-I and the particle dispersion model EXPRESS-II for atmospheric dispersion. To attain quick response even when the codes are run on a small-scale computer, a high-speed iteration method, MILUCR (Modified Incomplete Linear Unitary Conjugate Residual), is applied in EXPRESS-I and a kernel density method in EXPRESS-II. This manual describes the model configurations, code structures, related files, namelists and sample outputs of EXPRESS-I and -II. (author)
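The kernel density method used in EXPRESS-II turns a cloud of discrete Lagrangian particles into a smooth concentration field. A one-dimensional sketch (the particle cloud, bandwidth, and grid are illustrative):

```python
import numpy as np

def kde_concentration(seed=0, n_particles=2000, bandwidth=0.2):
    """Gaussian kernel density estimate of concentration from particle
    positions: each particle contributes a unit-mass Gaussian kernel.
    """
    rng = np.random.default_rng(seed)
    xp = rng.normal(loc=5.0, scale=1.0, size=n_particles)  # particle positions
    grid = np.linspace(0.0, 10.0, 501)
    # (501, n_particles) matrix of kernel values, summed over particles
    kernels = np.exp(-0.5 * ((grid[:, None] - xp[None, :]) / bandwidth) ** 2)
    density = kernels.sum(axis=1) / (n_particles * bandwidth * np.sqrt(2 * np.pi))
    return grid, density
```

The estimate integrates to the total released mass (here normalised to one) and peaks near the plume centre, which is why the method gives smooth fields from modest particle counts.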
Computing element evolution towards Exascale and its impact on legacy simulation codes
International Nuclear Information System (INIS)
In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis, underlying this work, shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and will lead to a vision of what should be an exascale system. To survive, programming languages had to respond to the hardware evolutions either by evolving or with the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reach the exascale, implying that it is time to start rewriting codes. (orig.)
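Of the techniques the article names, vectorization is the easiest to illustrate: express the kernel as whole-array operations instead of an element-by-element loop, so the compiler or runtime can map it onto SIMD units. A generic example (not drawn from any particular legacy code):

```python
import numpy as np

def saxpy_loop(a, x, y):
    """Scalar loop: one multiply-add per iteration, the style many
    legacy codes use and that vectorising rewrites target."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorised(a, x, y):
    """The same kernel as a whole-array expression, which maps onto
    vector hardware (and, in numpy, onto optimised compiled loops)."""
    return a * x + y
```

The two produce identical results; the rewrite changes only how the work is expressed, which is precisely the kind of code modernisation the article argues is now unavoidable.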
Computing element evolution towards Exascale and its impact on legacy simulation codes
Colin de Verdière, Guillaume J. L.
2015-12-01
In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis, underlying this work, shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and will lead to a vision of what should be an exascale system. To survive, programming languages had to respond to the hardware evolutions either by evolving or with the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reach the exascale, implying that it is time to start rewriting codes.
Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center
International Nuclear Information System (INIS)
This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and their CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art
Chen, Y. S.
1986-03-01
In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
Computing element evolution towards Exascale and its impact on legacy simulation codes
Energy Technology Data Exchange (ETDEWEB)
Colin de Verdiere, Guillaume J.L. [CEA, DAM, DIF, Arpajon (France)
2015-12-15
In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis, underlying this work, shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and will lead to a vision of what should be an exascale system. To survive, programming languages had to respond to the hardware evolutions either by evolving or with the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reach the exascale, implying that it is time to start rewriting codes. (orig.)
COSA II Further benchmark exercises to compare geomechanical computer codes for salt
International Nuclear Information System (INIS)
Project COSA (COmputer COdes COmparison for SAlt) was a benchmarking exercise involving the numerical modelling of the geomechanical behaviour of heated rock salt. Its main objective was to assess the current European capability to predict the geomechanical behaviour of salt, in the context of the disposal of heat-producing radioactive waste in salt formations. Twelve organisations participated in the exercise, in which their solutions to a number of benchmark problems were compared. The project was organised in two distinct phases: the first, from 1984 to 1986, concentrated on the verification of the computer codes; the second, from 1986 to 1988, progressed to validation, using three in-situ experiments at the Asse research facility in West Germany as a basis for comparison. This document reports the activities of the second phase of the project and presents the results, assessments and conclusions
Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-
International Nuclear Information System (INIS)
This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA have been converted from KOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop the dynamic risk management tool PEPSI and the determination of inspection and test priorities for motor-operated valves based on risk importance worth have been carried out. (Author)
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology. However, the recent development is due to the emergence of multi-core high-performance computers, and parallel computing has therefore become a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
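The shared-memory side of such parallelization can be sketched in miniature: worker threads process independent batches of Monte Carlo histories, each with its own seeded random stream, and only the tallies are combined at the end. A toy π estimate illustrating the pattern (not the PHITS implementation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _mc_batch(seed, n):
    """One worker's batch of n histories with an independent random stream."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))
    # count points falling inside the unit quarter-circle
    return int(np.count_nonzero(pts[:, 0] ** 2 + pts[:, 1] ** 2 < 1.0))

def parallel_pi(n_workers=4, n_per_worker=10000):
    """Shared-memory-style parallel Monte Carlo: workers share nothing but
    the final reduction of their tallies."""
    with ThreadPoolExecutor(n_workers) as ex:
        hits = ex.map(_mc_batch, range(n_workers), [n_per_worker] * n_workers)
    return 4.0 * sum(hits) / (n_workers * n_per_worker)
```

Giving each worker its own seeded generator keeps the run reproducible regardless of scheduling, the same concern MPI- and OpenMP-parallel Monte Carlo codes must address.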
PyCUDA: GPU Run-Time Code Generation for High-Performance Computing
Klöckner, Andreas; Lee, Yunsup; Catanzaro, Bryan; Ivanov, Paul; Fasih, Ahmed
2009-01-01
High-performance scientific computing has recently seen a surge of interest in heterogeneous systems, with an emphasis on modern Graphics Processing Units (GPUs). These devices offer tremendous potential for performance and efficiency in important large-scale applications of computational science. However, exploiting this potential can be challenging, as one must adapt to the specialized and rapidly evolving computing environment currently exhibited by GPUs. One way of addressing this challenge is to embrace better techniques and develop tools tailored to their needs. This article presents one simple technique, GPU run-time code generation (RTCG), and PyCUDA, an open-source toolkit that supports this technique. In introducing PyCUDA, this article proposes the combination of a dynamic, high-level scripting language with the massive performance of a GPU as a compelling two-tiered computing platform, potentially offering significant performance and productivity advantages over conventional single-tier, static sy...
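The essence of run-time code generation is building source text for a kernel specialised to values known only at run time, then compiling it on the spot. PyCUDA does this with CUDA C compiled for the GPU; the idea itself can be shown in a few lines of plain Python (illustrative names, no GPU required):

```python
def make_kernel(op, factor):
    """Run-time code generation in miniature: build source text for a
    kernel specialised to an operator and constant chosen at run time,
    then compile and return it as a callable.
    """
    src = (
        "def kernel(xs):\n"
        f"    return [x {op} {factor} for x in xs]\n"
    )
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["kernel"]
```

Baking the constant into the generated source is what lets a real RTCG system unroll loops, fix array strides, and let the compiler optimise aggressively for the case at hand.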
ANCON: A code for the evaluation of complex fault trees in personal computers
International Nuclear Information System (INIS)
Performing probabilistic safety analysis has been recognized worldwide as one of the more effective ways of further enhancing the safety of nuclear power plants. The evaluation of fault trees plays a fundamental role in these analyses. Existing limitations in the RAM and execution speed of personal computers (PCs) have so far restricted their use in the analysis of complex fault trees. Starting from new approaches to the data structure, among other possibilities, the ANCON code can evaluate complex fault trees on a PC, allowing the user to perform a more comprehensive analysis of the considered system in reduced computing time
International Nuclear Information System (INIS)
A computer code, WHAMS, for calculating pressure and velocity transients in liquid-filled piping networks is described. Eleven different boundary conditions are available, of which two specific ones are described in some detail. Several example calculations are presented and the results are compared with those of other programs. WHAMS is capable of analyzing a network of 75 pipes which can be coupled in an arbitrary way. The program is written in Fortran IV on a UNIVAC 1108 computer and its size is approximately 60 kwords. (author)
Method for computing self-consistent solution in a gun code
Nelson, Eric M
2014-09-23
Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal;
2013-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d = 2, then w = Θ(n (lg n / lg lg n)^2); 2) if d = 3, then w = Θ(n...
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;
2012-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d = 2 then w = Θ(n (log n / log log n)^2). (2) If ...
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
Energy Technology Data Exchange (ETDEWEB)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.
Hardman, Robert R.
1990-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 μm wavelength band as well as conventional pitot tube and thermocouple methods. These infrared i...
DEFF Research Database (Denmark)
Johansen, Peter Meincke
1996-01-01
New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes...
International Nuclear Information System (INIS)
Computer codes were developed to evaluate the internal radiation dose received when radioactive isotopes released from nuclear facilities are taken in through ingestion and inhalation pathways. Food chain models and a relevant database representing the agricultural and social environment of Korea are set up. Two models are provided: an equilibrium model, KFOOD, which deals with routine releases from a nuclear facility, and a dynamic model, ECOREA, which is suited to describing acute radioactivity releases following a nuclear accident. (Author)
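The dynamic-model approach can be sketched as a pair of first-order compartments integrated in time, activity flowing between them at fixed rates. The rate constants and compartment names below are illustrative, not ECOREA's:

```python
import numpy as np

def food_chain_transfer(steps=1000, dt=0.01):
    """Explicit-Euler integration of a two-compartment (soil <-> plant)
    first-order transfer chain, with decay omitted so that total activity
    is conserved and the check below is exact.
    """
    k_sp, k_ps = 0.8, 0.1          # soil->plant and plant->soil rates (1/d)
    soil, plant = 1.0, 0.0         # initial activity inventory (normalised)
    for _ in range(steps):
        flux = (k_sp * soil - k_ps * plant) * dt
        soil, plant = soil - flux, plant + flux
    return soil, plant
```

The system relaxes toward the equilibrium partition soil = k_ps/(k_sp + k_ps); an equilibrium model like KFOOD evaluates that endpoint directly, while the dynamic model resolves the transient after an acute release.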
NCT-ART - a neutron computer tomography code based on the algebraic reconstruction technique
International Nuclear Information System (INIS)
A computer code is presented, which calculates two-dimensional cuts of material assemblies from a number of neutron radiographic projections. Mathematically, the reconstruction is performed by an iterative solution of a system of linear equations. If the system is fully determined, clear pictures are obtained. Even for an underdetermined system (low number of projections) reasonable pictures are reconstructed, but then picture artefacts and convergence problems occur increasingly. (orig.) With 37 figs
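The iterative solution of the projection equations described above is, in essence, a Kaczmarz-style update. The sketch below illustrates the idea on a toy 2x2 image observed through row-sum and column-sum rays; the function and test system are illustrative, not taken from NCT-ART.

```python
import numpy as np

def art_reconstruct(A, b, n_iters=50, relax=1.0):
    """Kaczmarz-style ART: cyclically project the image estimate x onto
    the hyperplane of each projection equation A[i] @ x = b[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            row = A[i]
            norm2 = row @ row
            if norm2 > 0.0:
                x = x + relax * (b[i] - row @ x) / norm2 * row
    return x

# Toy problem: a 2x2 "image" observed through row-sum and column-sum rays.
A = np.array([[1., 1., 0., 0.],   # ray through image row 0
              [0., 0., 1., 1.],   # ray through image row 1
              [1., 0., 1., 0.],   # ray through image column 0
              [0., 1., 0., 1.]])  # ray through image column 1
x_true = np.array([1., 2., 3., 4.])
b = A @ x_true                    # simulated projection data
x = art_reconstruct(A, b, n_iters=50)
```

For a consistent system the iteration converges to the minimum-norm solution compatible with the data; with few projections (an underdetermined system) it still produces a reasonable picture, matching the abstract's observation about convergence and artefacts.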
A Fast New Public Code for Computing Photon Orbits in a Kerr Spacetime
Dexter, Jason; Agol, Eric
2009-05-01
Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.
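The practical payoff of reducing the equations of motion to Carlson's symmetric integrals is that these have fast, quadratically convergent duplication algorithms. Below is a minimal Python sketch of the standard duplication iteration for R_F(x, y, z); it is an illustration of the technique, not code from the authors' FORTRAN package.

```python
import math

def carlson_rf(x, y, z, tol=1e-10):
    """Carlson's symmetric elliptic integral R_F(x, y, z) via the
    standard duplication algorithm: repeatedly average the arguments
    until they are nearly equal, then apply a fifth-order series."""
    while True:
        lam = math.sqrt(x * y) + math.sqrt(y * z) + math.sqrt(z * x)
        x, y, z = (x + lam) / 4.0, (y + lam) / 4.0, (z + lam) / 4.0
        mu = (x + y + z) / 3.0
        X, Y, Z = 1.0 - x / mu, 1.0 - y / mu, 1.0 - z / mu
        if max(abs(X), abs(Y), abs(Z)) < tol:
            break
    # series expansion about the nearly equal arguments
    E2 = X * Y + Y * Z + Z * X
    E3 = X * Y * Z
    s = 1.0 - E2 / 10.0 + E3 / 14.0 + E2 * E2 / 24.0 - 3.0 * E2 * E3 / 44.0
    return s / math.sqrt(mu)
```

Sanity checks follow from the definition: R_F(x, x, x) = 1/sqrt(x), and R_F(0, 1, 1) = pi/2.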
RACETRACK - a computer code for the simulation of nonlinear particle motion in accelerators
International Nuclear Information System (INIS)
RACETRACK is a computer code to simulate transverse nonlinear particle motion in accelerators. Transverse magnetic fields of higher order are treated in thin magnet approximation. Multipoles up to 20 poles are included. Energy oscillations due to the nonlinear synchrotron motion are taken into account. Several additional features, as linear optics calculations, chromaticity adjustment, tune variation, orbit adjustment and others are available to guarantee a fast treatment of nonlinear dynamical problems. (orig.)
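In the thin-magnet approximation each multipole acts as an instantaneous kick to the transverse slopes while the positions are unchanged. The sketch below uses a MAD-X-like convention for the integrated normal (kn) and skew (ks) strengths; the function name and sign conventions are illustrative, not RACETRACK's.

```python
def thin_multipole_kick(x, px, y, py, kn, ks=None):
    """Thin-lens multipole kick: positions (x, y) are unchanged, slopes
    receive  Delta(px) - i*Delta(py) = -sum_n (kn[n] + i*ks[n]) * (x + i*y)^n / n!
    where kn[n]/ks[n] are integrated normal/skew strengths of order n."""
    if ks is None:
        ks = [0.0] * len(kn)
    z = complex(x, y)
    kick = 0.0 + 0.0j
    zpow = 1.0 + 0.0j   # z**n
    fact = 1.0          # n!
    for n, (b, a) in enumerate(zip(kn, ks)):
        if n > 0:
            fact *= n
        kick += (b + 1j * a) * zpow / fact
        zpow *= z
    return x, px - kick.real, y, py + kick.imag
```

For a pure normal quadrupole (kn = [0, k]) this reproduces the familiar focusing kick Delta(px) = -k*x, Delta(py) = +k*y.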
The Ariel V (SSI) catalogue of high galactic latitude (|b| > 10°) X-ray sources
International Nuclear Information System (INIS)
The 2A catalogue is the result of 10 000 orbits of observation by the Leicester University Sky Survey Instrument on the Ariel V satellite, and it contains 105 X-ray sources with |b| > 10°. The procedures and criteria used in establishing these sources and measuring their intensities and positions are described. As a consequence of the comparatively small error boxes (0.1 to 0.5 square degrees) and the sensitivity limit of the survey (90 per cent of the sky to better than 1.2 Ariel count/s, approximately 3.2 Uhuru count/s), new optical identifications are suggested. (author)
RODSWELL: a computer code for the thermomechanical analysis of fuel rods under LOCA conditions
International Nuclear Information System (INIS)
The present report is the user's manual for the computer code RODSWELL, developed at the JRC-Ispra for the thermomechanical analysis of LWR fuel rods under simulated loss-of-coolant accident (LOCA) conditions. The code calculates the variation in space and time of all significant fuel rod variables, including fuel, gap and cladding temperature, fuel and cladding deformation, cladding oxidation and rod internal pressure. The essential characteristics of the code are briefly outlined here. The model is particularly designed to perform a full thermal and mechanical analysis in both the azimuthal and radial directions. Thus, azimuthal temperature gradients arising from pellet eccentricity, flux tilt, arbitrary distributions of heat sources in the fuel and the cladding, and azimuthal variation of coolant conditions can be treated. The code combines a transient two-dimensional heat conduction model with a one-dimensional mechanical model for the cladding deformation. The fuel rod is divided into a number of axial sections, and a detailed thermomechanical analysis is performed within each section in the radial and azimuthal directions. In the following sections, instructions are given for the definition of the data files and the semi-variable dimensions. Then follows a complete description of the input data. Finally, the restart option is described
Multilevel Coding Schemes for Compute-and-Forward with Flexible Decoding
Hern, Brett
2011-01-01
We consider the design of coding schemes for the wireless two-way relaying channel when there is no channel state information at the transmitter. In the spirit of the compute-and-forward paradigm, we present a multilevel coding scheme that permits computation (or, decoding) of a class of functions at the relay. The function to be computed (or, decoded) is then chosen depending on the channel realization. We define such a class of functions which can be decoded at the relay using the proposed coding scheme and derive rates that are universally achievable over a set of channel gains when this class of functions is used at the relay. We develop our framework with general modulation formats in mind, but numerical results are presented for the case where each node transmits using the QPSK constellation. Numerical results with QPSK show that the flexibility afforded by our proposed scheme results in substantially higher rates than those achievable by always using a fixed function or by adapting the function at the ...
Experiments to validate computer codes used in the safety assessment of concrete containments
International Nuclear Information System (INIS)
The safety analysis for hazardous plants with reinforced and prestressed concrete containments includes assessment of the containment performance under severe accident loading. Such assessment is normally based on predictions from computer codes, supported by the measured evidence of small-scale experiments. A programme of small-scale experiments is in progress at AEE Winfrith. The first series included five tests on simple concrete frame specimens to provide basic response data under static loading. The second series included tests on reinforced concrete slab specimens with a geometry representative of steel-lined containment walls. The experiments, the properties of the materials used and the measurements are reported, and a selection of the measured results is presented. Validation of the finite element computer codes ABAQUS and DYNA 3D against the measured results is in progress. Two requirements of the safety analysis for concrete containments under severe accident loading are that computer codes predict accurately the structural response at re-entrant corners and the integrity of liners. (K.I.)
International Nuclear Information System (INIS)
A compilation of technical computer codes related to ongoing work under the cognizance of the US Department of Energy's Office of Civilian Radioactive Waste Management (DOE/OCRWM) is presented. Much of the information was obtained from responses to a questionnaire distributed by DOE/OCRWM to all DOE offices associated with the radioactive waste management program. The codes are arranged alphabetically by name. In addition to the code description, each sheet includes other data such as computer hardware and software requirements, document references, name of respondent, and code variants. The codes are categorized into seventeen subject areas plus a miscellaneous category. Some of the subject areas covered are atmospheric dispersion, biosphere transport, geochemistry, nuclear radiation transport, nuclide inventory, and risk assessment. Three appendixes are included which list the names of the contributors, a list of the literature reviewed, and a glossary of computer code terminology and definitions. 50 refs., 3 tabs
Directory of Open Access Journals (Sweden)
Karfopoulos Konstantinos L.
2014-01-01
The determination of 235U in environmental samples from its 185.72 keV photons may require the deconvolution of the multiplet photopeak at ~186 keV, due to the co-existence of the 186.25 keV photons of 226Ra in the spectrum. Successful deconvolution depends on many parameters, such as the detector characteristics, the activity concentrations of 235U and 226Ra in the sample, the background continuum in the 186 keV energy region, and the gamma-spectrometry computer code used. In this work two sets of experimental test spectra were constructed for examining the deconvolution of the multiplet photopeak performed by different codes. For the construction of the test spectra, a high-resolution low energy germanium detector was used. The first series consists of 140 spectra and simulates environmental samples containing various activity concentration levels of 235U and 226Ra. The second series consists of 280 spectra and has been derived by adding 137Cs, corresponding to various activity concentration levels, to specific first-series test spectra. As the 137Cs backscatter edge is detected in the energy region of the multiplet photopeak at ~186 keV, this second series tests the analysis of the multiplet photopeak in high background continuum conditions. The analysis of the test spectra is performed by two different gamma-spectrometry analysis codes: (a) a spectrum unix analysis code developed in-house, and (b) an analysis program for germanium detector spectra freely available from the IAEA. The results obtained by the two programs are compared in terms of photopeak detection and photopeak area determination.
Multiphase integral reacting flow computer code (ICOMFLO): User's guide
Energy Technology Data Exchange (ETDEWEB)
Chang, S.L.; Lottes, S.A.; Petrick, M.
1997-11-01
A copyrighted computational fluid dynamics computer code, ICOMFLO, has been developed for the simulation of multiphase reacting flows. The code solves conservation equations for gaseous species and for droplets (or solid particles) of various sizes. General conservation laws, expressed as elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation. Associated phenomenological submodels of the code include integral combustion, two-parameter turbulence, particle evaporation, and interfacial submodels. A newly developed integral combustion submodel, replacing an Arrhenius-type differential reaction submodel, has been implemented to improve numerical convergence and enhance numerical stability. The two-parameter turbulence submodel is modified for both gas and solid phases. The evaporation submodel treats not only droplet evaporation but also size dispersion. Interfacial submodels use correlations to model interfacial momentum and energy transfer. The ICOMFLO code solves the governing equations in three steps. First, a staggered grid system is constructed in the flow domain; it defines gas velocity components on the surfaces of a control volume, while the other flow properties are defined at the volume center. A blocked-cell technique is used to handle complex geometry. Then, the partial differential equations are integrated over each control volume and transformed into discrete difference equations. Finally, the difference equations are solved iteratively by using a modified SIMPLER algorithm. The results of the solution include gas flow properties (pressure, temperature, density, species concentration, velocity, and turbulence parameters) and particle flow properties (number density, temperature, velocity, and void fraction). The code has been used in many engineering applications, such as coal-fired combustors, air
Directory of Open Access Journals (Sweden)
Sapan eAgarwal
2016-01-01
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read, or a vector-matrix multiplication, as well as a parallel write, or a rank-1 update, with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H; DeBenedictis, Erik P; James, Conrad D; Marinella, Matthew J; Aimone, James B
2015-01-01
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946
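The two kernels named in these abstracts, the parallel read (a vector-matrix multiplication) and the parallel write (a rank-1 update), can be modeled in a few lines. The functions below are an illustrative software model of an ideal crossbar, not code from the papers: each column current is the dot product of the row voltages with that column's conductances.

```python
import numpy as np

def crossbar_read(G, v):
    """Model the 'parallel read' of an analog resistive crossbar:
    applying voltage vector v to the rows yields, on each column, a
    current equal to the dot product of v with that column's
    conductances (Ohm's law summed by Kirchhoff's current law).
    In hardware all N^2 multiply-accumulates happen simultaneously."""
    return v @ G

def crossbar_rank1_update(G, v_row, v_col, lr=1.0):
    """Model the 'parallel write': an outer-product (rank-1) update of
    the conductance matrix, as used in learning rules."""
    return G + lr * np.outer(v_row, v_col)
```

This idealized model ignores device nonidealities (wire resistance, write nonlinearity, read noise), which are exactly what limit the O(N) energy advantage discussed in the abstracts.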
Error threshold in topological quantum-computing models with color codes
Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.
2009-03-01
Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D by mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbations of the plaquette interaction terms, thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.
Computation of thermodynamic equilibria of nuclear materials in multi-physics codes
International Nuclear Information System (INIS)
A new equilibrium thermodynamic solver is being developed with the primary impetus of direct integration into nuclear fuel performance and safety codes to provide improved predictions of fuel behavior. This solver is intended to provide boundary conditions and material properties for continuum transport calculations. There are several legitimate concerns with the use of existing commercial thermodynamic codes: 1) licensing entanglements associated with code distribution, 2) computational performance, and 3) limited capabilities of handling large multi-component systems of interest to the nuclear industry. The development of this solver is specifically aimed at addressing these concerns. In support of this goal, a new numerical algorithm for computing chemical equilibria is presented which is not based on the traditional steepest descent method or 'Gibbs energy minimization' technique. This new approach exploits fundamental principles of equilibrium thermodynamics, which simplifies the optimization equations. The chemical potentials of all species and phases in the system are constrained by the system chemical potentials, and the objective is to minimize the residuals of the mass balance equations. Several numerical advantages are achieved through this simplification, as described in this paper. (author)
Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang, Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1995-07-01
This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this sub-project, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low power situations, (2) computer code package development for level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for an over-draining event during RCS low water inventory, i.e., mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is use of the containment spray system as a backup to the shutdown cooling system, and the other is installation of two independent level indication systems. Procedure change is identified as preferable to hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC-Windows environment. For improved efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author).
SEACC: the systems engineering and analysis computer code for small wind systems
Energy Technology Data Exchange (ETDEWEB)
Tu, P.K.C.; Kertesz, V.
1983-03-01
The systems engineering and analysis (SEA) computer program (code) evaluates complete horizontal-axis SWECS performance. Rotor power output as a function of wind speed, and energy production in various wind regions, are predicted by the code. Efficiencies of components such as the gearbox, electric generators, rectifiers, electronic inverters, and batteries can be included in the evaluation process to reflect the complete system performance. Parametric studies can be carried out for blade design characteristics such as airfoil series, taper rate, twist degrees and pitch setting, and for geometry such as rotor radius, hub radius, number of blades, coning angle, rotor rpm, etc. Design tradeoffs can also be performed to optimize system configurations for constant-rpm, constant-tip-speed-ratio and rpm-specific rotors. SWECS energy supply, as compared to the load demand for each hour of the day and during each season of the year, can be assessed by the code if the diurnal wind and load distributions are known. Also available during each run of the code is blade aerodynamic loading information.
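The rotor-power-versus-wind-speed prediction described above follows the standard wind-power relation P = η·Cp·(1/2)·ρ·A·v³. The sketch below is an illustrative model only; the default power coefficient Cp, drivetrain efficiency η and air density are assumptions, not SEACC values.

```python
import math

def rotor_power(v, radius, cp=0.4, rho=1.225, eta=0.9):
    """Idealized horizontal-axis rotor output in watts:
    P = eta * Cp * 0.5 * rho * A * v^3, with swept area A = pi * r^2.
    v in m/s, radius in m; cp (power coefficient) and eta (drivetrain
    efficiency) are illustrative defaults, not SEACC parameters."""
    area = math.pi * radius ** 2
    return eta * cp * 0.5 * rho * area * v ** 3
```

A real performance code would cap Cp below the Betz limit (16/27) and make it a function of tip speed ratio and pitch, which is what the parametric blade studies in the abstract amount to.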
Analyses of pool swell tests by two-dimensional hydrodynamic computer code
Energy Technology Data Exchange (ETDEWEB)
Shimegi, Nobuo; Suzuki, Kenichi
1988-10-01
A two-dimensional hydrodynamic computer code, SOLA-VOF, was examined for its capability to analyze the dynamic loads caused by pool swell in the MARK-I type BWR suppression chamber under LBLOCA (Large Break Loss of Coolant Accident) conditions. Two pool swell tests (the LLL 1/5-scale and EPRI 1/12-scale tests) were selected for this purpose and analyzed with the SOLA-VOF code, modified to incorporate a simple downcomer flow model. In these analyses it was necessary to account for the three-dimensional character of the pool swell behavior along the chamber axis, for example by means of an experimentally determined spatial weighting function, because a simple two-dimensional SOLA-VOF calculation gave an overly conservative evaluation of the impact load on the ring header. Application of this method gave good agreement between calculation and measurement. The vertical loads on the suppression chamber wall were well analyzed by the code, presumably because the local pressure differences caused by the nonuniform pool swelling cancel out when the pressure is integrated over the surface of the suppression chamber wall.
Analyses of pool swell tests by two-dimensional hydrodynamic computer code
International Nuclear Information System (INIS)
A two-dimensional hydrodynamic computer code, SOLA-VOF, was examined for its capability to analyze the dynamic loads caused by pool swell in the MARK-I type BWR suppression chamber under LBLOCA (Large Break Loss of Coolant Accident) conditions. Two pool swell tests (the LLL 1/5-scale and EPRI 1/12-scale tests) were selected for this purpose and analyzed with the SOLA-VOF code, modified to incorporate a simple downcomer flow model. In these analyses it was necessary to account for the three-dimensional character of the pool swell behavior along the chamber axis, for example by means of an experimentally determined spatial weighting function, because a simple two-dimensional SOLA-VOF calculation gave an overly conservative evaluation of the impact load on the ring header. Application of this method gave good agreement between calculation and measurement. The vertical loads on the suppression chamber wall were well analyzed by the code, presumably because the local pressure differences caused by the nonuniform pool swelling cancel out when the pressure is integrated over the surface of the suppression chamber wall. (author)
A fully parallel, high precision, N-body code running on hybrid computing platforms
Capuzzo-Dolcetta, R; Punzo, D
2012-01-01
We present a new implementation of the numerical integration of the classical, gravitational N-body problem based on a high-order Hermite integration scheme with block time steps, with a direct evaluation of the particle-particle forces. The main innovation of this code (called HiGPUs) is its full parallelization, exploiting both OpenMP and MPI in the use of multicore Central Processing Units as well as either Compute Unified Device Architecture (CUDA) or OpenCL for the hosted Graphics Processing Units. We tested both performance and accuracy of the code using up to 256 GPUs in the supercomputer IBM iDataPlex DX360M3 Linux Infiniband Cluster provided by the Italian supercomputing consortium CINECA, for values of N up to 8 million. We were able to follow the evolution of a system of 8 million bodies for a few crossing times, a task previously unreached by direct summation codes. The code is freely available to the scientific community.
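The particle-particle force evaluation at the core of such direct-summation codes is an O(N²) pairwise sum. A minimal, unoptimized CPU sketch with Plummer softening is shown below; it illustrates the kernel the GPUs accelerate and is not HiGPUs code.

```python
import numpy as np

def direct_accelerations(pos, mass, eps=1e-4):
    """O(N^2) direct-summation gravitational accelerations (G = 1).
    pos is an (N, 3) array, mass an (N,) array; eps is a Plummer
    softening length that regularizes close encounters."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                         # vectors from i to every j
        r2 = np.einsum('ij,ij->i', dr, dr) + eps ** 2
        r2[i] = 1.0                               # placeholder to avoid 0**-1.5
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                           # remove the self-interaction
        acc[i] = (mass[:, None] * dr * inv_r3[:, None]).sum(axis=0)
    return acc
```

In a Hermite scheme the same loop is repeated for the force derivative (the "jerk") so that a fourth-order predictor-corrector step can be built, with block time steps grouping particles that share an update time.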
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
Development of a dose assessment computer code for the NPP severe accident
International Nuclear Information System (INIS)
A real-time emergency dose assessment computer code called KEDA (KAIST NPP Emergency Dose Assessment) has been developed for NPP severe accidents. A new mathematical model which can calculate cloud shine has been developed and implemented in the code. KEDA considers specific Korean conditions (complex topography, the thyroid metabolism of the local population, continuous washout, etc.), and provides dose-monitoring and automatic decision-making functions. To verify the code results, KEDA has been compared with an NRC officially certified code, RASCAL, for eight hypothetical accident scenarios. Through the comparison, KEDA has been shown to provide reasonable results. Qualitative sensitivity analysis has also been performed for six potentially important input parameters, and the trends of the dose versus down-wind distance curves have been analyzed by comparison with the physical phenomena that occur in the real atmosphere. The source term and meteorological conditions turned out to be the most important input parameters. KEDA has also been applied to the Kori site, and a hypothetical accident with semi-real meteorological data has been simulated and analyzed
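Emergency dose assessment codes of this kind typically build on a Gaussian plume dispersion model for airborne concentration. The sketch below is a generic ground-reflected Gaussian plume, shown for illustration only; KEDA's actual models (including its cloud shine model) are not reproduced here, and in practice the dispersion parameters sigma_y and sigma_z are evaluated from downwind distance and atmospheric stability class.

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Generic ground-reflected Gaussian plume concentration (e.g. Bq/m^3)
    for release rate q (Bq/s), wind speed u (m/s), crosswind offset y,
    height z, effective release height h, and dispersion parameters
    sigma_y, sigma_z (all in meters). Illustrative model, not KEDA's."""
    cross = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vert = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
            + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))  # ground reflection
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * cross * vert
```

Dose is then obtained by folding such concentrations with dose conversion factors per pathway (cloud shine, inhalation, ground shine), which is where accident-specific models like KEDA's cloud shine treatment enter.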
Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)
2014-05-15
In this paper, the characteristics of a parallel algorithm for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI are presented, and the parallel performance is estimated in terms of scalability, i.e., the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated, as is the effect of the preconditioner for the matrix solver. Finally, a hybrid (OpenMP+MPI) parallel algorithm for the pressure solver is introduced and discussed in detail. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model, and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow simulations. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate the information at the neighboring domain boundaries. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effects of the matrix size and of preconditioning were also investigated. The fine-mesh calculation shows better scalability than the coarse mesh, because the coarse mesh cannot be decomposed into many subdomains without communication costs dominating; the fine mesh can exhibit good scalability when the geometry is divided with consideration of the ratio between computation and communication time. For a given mesh, single-phase flow
Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment
International Nuclear Information System (INIS)
In this paper, the characteristics of a parallel algorithm for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI are presented, and the parallel performance is estimated in terms of scalability, i.e., the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated, as is the effect of the preconditioner for the matrix solver. Finally, a hybrid (OpenMP+MPI) parallel algorithm for the pressure solver is introduced and discussed in detail. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model, and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow simulations. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate the information at the neighboring domain boundaries. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effects of the matrix size and of preconditioning were also investigated. The fine-mesh calculation shows better scalability than the coarse mesh, because the coarse mesh cannot be decomposed into many subdomains without communication costs dominating; the fine mesh can exhibit good scalability when the geometry is divided with consideration of the ratio between computation and communication time. For a given mesh, single-phase flow
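The CSR (compressed sparse row) format mentioned in both records stores, per matrix row, a contiguous slice of nonzero values and their column indices, so the matrix-vector product inside the pressure solver reduces to one dot product per row. A minimal sketch (illustrative, not CUPID code):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Sparse matrix-vector product y = A @ x with A in CSR storage:
    row i's nonzeros are data[indptr[i]:indptr[i+1]], located at columns
    indices[indptr[i]:indptr[i+1]]."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

# CSR encoding of  A = [[1, 0, 2],
#                       [0, 3, 0],
#                       [4, 0, 5]]
data = np.array([1., 2., 3., 4., 5.])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
y = csr_matvec(data, indices, indptr, np.array([1., 2., 3.]))
```

In a domain-decomposed MPI solver, each rank holds the CSR rows of its subdomain and exchanges only the halo entries of x with neighboring ranks before the local product, which is why the computation-to-communication ratio governs the scalability observed in the study.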
Development of system of computer codes for severe accident analysis and its applications
Energy Technology Data Exchange (ETDEWEB)
Jang, H. S.; Jeon, M. H.; Cho, N. J. and others [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1992-01-15
The objective of this study is to develop a system of computer codes for postulated severe accident analyses in nuclear power plants. This system of codes is necessary to conduct Individual Plant Examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, and can extract the plant-specific vulnerabilities for severe accidents and, at the same time, ideas for enhancing overall accident resistance. Severe accidents can be mitigated by proper accident management strategies, but some operator actions intended for mitigation can lead to a more disastrous result, and thus the uncertain severe accident phenomena must be well recognized. Further research is needed to develop severe accident management strategies utilizing existing plant resources as well as new design concepts.
A computational theory for the classification of natural biosonar targets based on a spike code
Müller, R
2003-01-01
A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of 84,800 echoes from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution levels, and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential pr...
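The transcription chain named in the abstract (half-wave rectification, thresholding, then interspike intervals between threshold crossings) can be sketched in a few lines. The filter stage is omitted, and the toy signal, threshold, and sampling rate below are hypothetical choices of ours, not values from the study.

```python
import math

def halfwave(x):
    """Half-wave rectification: keep positive samples, zero the rest."""
    return [v if v > 0.0 else 0.0 for v in x]

def threshold_crossings(x, thresh):
    """Sample indices where the signal crosses `thresh` from below."""
    return [i for i in range(1, len(x)) if x[i - 1] < thresh <= x[i]]

def interspike_intervals(x, thresh):
    """Time differences (in samples) between successive crossings."""
    t = threshold_crossings(x, thresh)
    return [b - a for a, b in zip(t, t[1:])]

# Toy "echo": a rectified 1 kHz carrier sampled at 10 kHz
fs, f = 10_000.0, 1_000.0
sig = halfwave([math.sin(2 * math.pi * f * n / fs) for n in range(100)])
isis = interspike_intervals(sig, 0.5)
print(isis)  # one interval per carrier cycle: [10, 10, ..., 10]
```

For a pure carrier every interval equals one carrier period; the study's point is that real foliage echoes perturb these intervals in a way that is statistically informative about the target.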
Assessment of computer codes for VVER-440/213-type nuclear power plants
Energy Technology Data Exchange (ETDEWEB)
Szabados, L.; Ezsol, Gy.; Perneczky [Atomic Energy Research Institute, Budapest (Hungary)
1995-09-01
Nuclear power plants of the VVER-440/213 type, designed in the former USSR, have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of a PWR. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of computer codes.
Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes
Tsoy, A. S.; Snegirev, A. Yu.
2015-09-01
The model and the computer code FDS, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of the numerical resolution of the large-scale turbulent pulsations on the accuracy of the predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, and thereby excessively focuses the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase was shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.
Revised uranium--plutonium cycle PWR and BWR models for the ORIGEN computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A. G.; Bjerke, M. A.; Morrison, G. W.; Petrie, L. M.
1978-09-01
Reactor physics calculations and literature searches have been conducted, leading to the creation of revised enriched-uranium and enriched-uranium/mixed-oxide-fueled PWR and BWR reactor models for the ORIGEN computer code. These ORIGEN reactor models are based on cross sections that have been taken directly from the reactor physics codes and eliminate the need to make adjustments in uncorrected cross sections in order to obtain correct depletion results. Revised values of the ORIGEN flux parameters THERM, RES, and FAST were calculated along with new parameters related to the activation of fuel-assembly structural materials not located in the active fuel zone. Recommended fuel and structural material masses and compositions are presented. A summary of the new ORIGEN reactor models is given.
Overlapping Communication with Computation Using OpenMP Tasks on the GTS Magnetic Fusion Code
Directory of Open Access Journals (Sweden)
Robert Preissl
2010-01-01
Application codes in a variety of areas are being updated for performance on the latest architectures. In this paper we examine an application from magnetic fusion for performance acceleration, with a particular emphasis on methods applicable to many/multicore and future architectural designs. We take an important magnetic fusion particle code that already includes several levels of parallelism, including hybrid MPI combined with OpenMP. We study how to include new advanced hybrid models that extend the applicability of OpenMP tasks and exploit multi-threaded MPI support to overlap communication and computation. Experiments carried out on Cray XT4 and XT5 machines, resulting in a speed-up of up to 35% of the investigated GTS particle shifter kernel, show the benefits and applicability of this approach.
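The overlap idea can be shown schematically in Python, with a worker thread standing in for the multi-threaded MPI communication. The function names and the simulated delay are invented for illustration; the real GTS kernel uses OpenMP tasks and MPI, not Python threads.

```python
import threading
import time

def exchange_halo(result):
    """Stand-in for a blocking MPI send/recv of ghost-cell data."""
    time.sleep(0.05)                 # simulated network latency
    result["ghost"] = [0.0, 0.0]

def local_compute(interior):
    """Work that needs no ghost data and can proceed during the exchange."""
    return [2.0 * v for v in interior]

result = {}
comm = threading.Thread(target=exchange_halo, args=(result,))
comm.start()                          # communication proceeds in the background...
interior = local_compute([1.0, 2.0, 3.0])  # ...while we compute on interior cells
comm.join()                           # synchronize before touching ghost cells
print(interior, result["ghost"])
```

The payoff is the same as in the paper: the communication latency is hidden behind useful computation instead of being serialized after it.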
Institute of Scientific and Technical Information of China (English)
CHEN Ying; LU Chuan-jing
2008-01-01
A computer code, ELANEX, including several Homogeneous-Equilibrium-Model (HEM) type cavitation models, was developed to numerically simulate natural cavitation phenomena. The effectiveness of the code was checked on cavitating flows around a disk and a cylindrical body over a wide range of cavitation numbers. Cavity profiles were compared with the analytic solution for the disk and with empirical formulae fitted to experimental data, and comparisons between the different cavitation models were carried out as well. The cavity length and maximal cavity diameter were found to agree well with the analytic solutions, and the detailed cavity profiles were in accordance with the experimental formula. For the hemisphere-headed cylindrical body, the pressure coefficient agreed well with the experimental data, and reasonable drag coefficient variation and drag reduction effects were obtained.
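The cavitation number that parameterizes these runs is a one-line formula. The numerical values below (water at roughly 20 °C under one atmosphere) are illustrative assumptions, not data from the paper.

```python
def cavitation_number(p_inf, p_vapor, rho, v_inf):
    """sigma = (p_inf - p_v) / (0.5 * rho * U**2): the free-stream pressure
    margin above vapor pressure, normalized by the dynamic pressure."""
    return (p_inf - p_vapor) / (0.5 * rho * v_inf ** 2)

# Water at ~20 C (p_v ~ 2.34 kPa) flowing at 20 m/s under 1 atm:
sigma = cavitation_number(101_325.0, 2_340.0, 998.0, 20.0)
print(round(sigma, 3))  # ~0.496
```

Lower sigma means the local pressure more easily drops below vapor pressure, so decreasing sigma lengthens the cavity, which is the trend the disk and cylinder comparisons trace out.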
Validation of a Computer Code for Use in the Mechanical Design of Spallation Neutron Targets
Montanez, P A
2000-01-01
The present work concentrates on comparing a numerical code and a closed-form analytic solution for determining the transient stress waves generated by an impinging, high-intensity proton pulse onto a perfectly elastic solid cylindrical target. The comparison of the two methods serves both to benchmark the physics and numerical methods of the codes and to verify them against analytic expressions that can be established for the response of the target in simple cases of loading and geometry. Additionally, the comparison elucidates the effects of the approximations used in the computation of the analytic results. Two load cases have been investigated: (1) an instantaneously applied, uniform thermal load along the central core, and (2) a ramped, uniform thermal load applied along the central core. By validating these analytical results, the closed-form solution may be confidently used to "bound" the sol...
International Nuclear Information System (INIS)
The objective of this work is to develop a computer code, ASFRE, which analyzes the three-dimensional thermal-hydraulic behavior of the coolant and fuel pins in an LMFBR subassembly under accident conditions such as local blockage, loss of flow, and transient overpower. Analytical models, calculation procedures, and sample calculations for typical experiments are described. The ASFRE code consists of two parts, namely a coolant calculation part and a fuel pin calculation part. The coolant thermal-hydraulic analysis basically employs a subchannel analysis approach, and the program solves the transient mass, momentum, and energy conservation equations. The fuel pin thermal analysis program solves the transient heat conduction equations by the finite difference method in a cylindrical coordinate system. The fuel temperature distribution and thermal expansion are calculated taking into account intra/inter-pin flux depression and fuel restructuring. Wire-wrap spacer effects on coolant behavior and heat loss through the wrapper tube are also simulated. (author)
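The fuel-pin part, transient heat conduction solved by finite differences in cylindrical coordinates, can be sketched with a minimal explicit scheme for the radial direction. The mesh, time step, diffusivity, and fixed-surface-temperature boundary below are illustrative assumptions of ours, not the ASFRE implementation.

```python
def step_radial_conduction(T, dr, dt, alpha, t_surface):
    """One explicit step of dT/dt = alpha*(d2T/dr2 + (1/r)*dT/dr) on a
    solid cylinder; node 0 is the axis, the last node is the surface,
    held at a fixed temperature."""
    n = len(T)
    Tn = T[:]
    # Axis node: by symmetry the cylindrical Laplacian tends to 4*(T1 - T0)/dr^2
    Tn[0] = T[0] + dt * alpha * 4.0 * (T[1] - T[0]) / dr ** 2
    for i in range(1, n - 1):
        r = i * dr
        d2 = (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dr ** 2   # d2T/dr2
        d1 = (T[i + 1] - T[i - 1]) / (2.0 * dr * r)          # (1/r)*dT/dr
        Tn[i] = T[i] + dt * alpha * (d2 + d1)
    Tn[-1] = t_surface
    return Tn

# Illustrative numbers: 10 mm radius pin, surface suddenly cooled to 600 K.
dr, alpha = 1.0e-3, 1.0e-5
dt = 0.2 * dr ** 2 / alpha         # respects the explicit stability limit
T = [900.0] * 10 + [600.0]
for _ in range(200):
    T = step_radial_conduction(T, dr, dt, alpha, 600.0)
```

After a couple of seconds of simulated time the profile decreases monotonically from the axis to the surface, the qualitative behavior a subchannel code couples to the coolant energy equation.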
SASSYS-1 computer code verification with EBR-II test data
International Nuclear Information System (INIS)
The EBR-II natural circulation experiment, XX08 Test 8A, is simulated with the SASSYS-1 computer code, and the results are compared with published data taken during the transient at selected points in the core. SASSYS-1 provides transient temperature and flow responses for all points of interest simultaneously in a single run, once such basic parameters as pipe sizes, initial core flows, and elevations are specified. The SASSYS-1 simulation results for EBR-II experiment XX08 Test 8A, conducted in March 1979, are within the published plant data uncertainties and thereby serve as a partial verification/validation of the SASSYS-1 code
International Nuclear Information System (INIS)
Standard Indian PHWRs are provided with a Primary Containment Filtration and Pump-Back (PCFPB) system incorporating charcoal filters in the ventilation circuit to remove radioactive iodine that may be released from the reactor core into the containment during a LOCA with ECCS failure, which is a design basis accident for the containment of radioactive releases. This system is provided with two identical air circulation loops, each having two full-capacity fans (one operating and one standby) for a bank of four combined charcoal and High-Efficiency Particulate Air (HEPA) filters, in addition to other filters. While the filtration circuit is designed to operate under forced-flow conditions, it is of interest to understand the performance of the charcoal filters in the event of failure of the fans after operating for some time, i.e., when the radio-iodine inventory is at its peak value, and to check whether the buoyancy-driven natural circulation occurring in the filtration circuit is sufficient to keep the temperature in the charcoal under safe limits. A computer code, TRAFIC (Transient Analysis of Filters in Containment), was developed using a conservative one-dimensional model to analyze the system. Suitable parametric studies were carried out to understand the problem and to assess the safety of the existing system. The TRAFIC code has two important components: the first estimates the heat generation in the charcoal filter based on the source term, while the other performs thermal-hydraulic computations. In an attempt to validate the code, experimental studies have been carried out. For this purpose, an experimental setup was constructed comprising a scaled-down model of the filtration circuit with heating coils embedded in the charcoal to simulate the heating effect due to radio-iodine. The present validation work consists of utilizing the results obtained from experiments conducted for different heat loads, elevations and adsorbent
Development of a computer code for dynamic analysis of the primary circuit of advanced reactors
International Nuclear Information System (INIS)
Currently, advanced reactors are being developed, seeking enhanced safety, better performance, and lower environmental impact. Reactor designs must go through several steps and numerous tests before a conceptual project can be certified. In this sense, computational tools become indispensable in the preparation of such projects. This study therefore aimed at developing a computational tool for thermal-hydraulic analysis by coupling two computer codes to evaluate the influence of transients caused by pressure variations and flow surges in the region of the IRIS reactor primary circuit between the core and the pressurizer. For the simulation, an 'insurge' situation was used, characterized by the entry of water into the pressurizer due to the expansion of the coolant in the primary circuit. This expansion was represented by a pressure disturbance in step form, through the 'step' block of SIMULINK, thus enabling the transient startup. The results showed that the dynamic tool obtained through the coupling of the codes generated very satisfactory responses within the model limitations, preserving the most important phenomena in the process. (author)
Development of a computer code for dynamic analysis of the primary circuit of advanced reactors
Energy Technology Data Exchange (ETDEWEB)
Rocha, Jussie Soares da; Lira, Carlos A.B.O.; Magalhaes, Mardson A. de Sa, E-mail: cabol@ufpe.b [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear
2011-07-01
Currently, advanced reactors are being developed, seeking enhanced safety, better performance, and lower environmental impact. Reactor designs must go through several steps and numerous tests before a conceptual project can be certified. In this sense, computational tools become indispensable in the preparation of such projects. This study therefore aimed at developing a computational tool for thermal-hydraulic analysis by coupling two computer codes to evaluate the influence of transients caused by pressure variations and flow surges in the region of the IRIS reactor primary circuit between the core and the pressurizer. For the simulation, an 'insurge' situation was used, characterized by the entry of water into the pressurizer due to the expansion of the coolant in the primary circuit. This expansion was represented by a pressure disturbance in step form, through the 'step' block of SIMULINK, thus enabling the transient startup. The results showed that the dynamic tool obtained through the coupling of the codes generated very satisfactory responses within the model limitations, preserving the most important phenomena in the process. (author)
Energy Technology Data Exchange (ETDEWEB)
Müller, C.; Hughes, E. D.; Niederauer, G. F.; Wilkening, H.; Travis, J. R.; Spore, J. W.; Royl, P.; Baumann, W.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion-dominated flows, and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume
Evaluation of nuclear safety from the outputs of computer codes in the presence of uncertainties
International Nuclear Information System (INIS)
We apply methods from order statistics to the problem of satisfying regulations that specify individual criteria to be met by each of a number of outputs, k, from a computer code simulating nuclear accidents. The regulations are assumed to apply to an 'extent', γk (such as 95%), of the cumulative probability distribution of each output, k, that is obtained by randomly varying the inputs to the code over their ranges of uncertainty. We use a 'bracketing' approach to obtain expressions for the confidence, β, or probability that these desired extents will be covered in N runs of the code. Detailed results are obtained for k=1,2,3 with equal extents, γ, and are shown to depend on the degree of correlation of the outputs. They reduce to the proper expressions in limiting cases. These limiting cases are also analyzed for an arbitrary number of outputs, k. The bracketing methodology is contrasted with the traditional 'coverage' approach, in which the objective is to obtain a range of outputs that encloses a total fraction, γ, of all possible outputs, without regard to the extent of individual outputs. For the case of two outputs we develop an alternate formulation and show that the confidence, β, depends on the degree of correlation between the outputs. The alternate formulation reduces to the single-output case when the outputs are so well correlated that the coverage criterion is always met in a single run of the code if either output lies beyond an extent γ; it reduces to Wilks' expression for uncorrelated variables when the outputs are independent; and it reduces to Wald's result when the outputs are so negatively correlated that the coverage criterion could never be met by the two outputs of a single run of the code. The predictions of both formulations are validated by comparison with Monte Carlo simulations
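For the single-output limiting case mentioned above, the first-order Wilks relation β = 1 − γ^N can be computed directly. The helper names below are ours, but the 95/95 result of 59 runs is the standard one.

```python
def wilks_confidence(gamma, n):
    """First-order Wilks: probability that the largest of n independent
    runs exceeds the gamma-quantile of the output distribution."""
    return 1.0 - gamma ** n

def wilks_runs(gamma, beta):
    """Smallest number of runs n with 1 - gamma**n >= beta."""
    n = 1
    while wilks_confidence(gamma, n) < beta:
        n += 1
    return n

print(wilks_runs(0.95, 0.95))  # 59: the familiar 95/95 run count
```

The bracketing formulas in the paper generalize exactly this kind of expression to several outputs with individual extents, which is where the correlation between outputs enters.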
International Nuclear Information System (INIS)
This report describes the methods and computer codes for fuel management and nuclear design of reload cycles in PWRs, developed at JEN by adapting previous codes (LEOPARD, NUTRIX, CITATION, FUELCOST) and implementing original codes (TEMP, SOTHIS, CICLON, NUDO, MELON, ROLLO, LIBRA, PENELOPE), and their application to the fuel management and reload cycle design project of a 510 MWt PWR, including comparison with experimental operating results and other calculations for validation of the methods. (author)
International Nuclear Information System (INIS)
A computer code system for fast calculation of activation and transmutation has been developed. The system consists of a driver code, cross-section libraries, flux libraries, a material library, and a decay library. The code is used to predict transmutations in a Ti-modified 316 stainless steel, a commercial ferritic alloy (HT9), and a V-15%Cr-5%Ti alloy in various magnetic fusion energy (MFE) test facilities and conceptual reactors
DEFF Research Database (Denmark)
Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan;
2015-01-01
In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...
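A Gold code family can be generated by XOR-ing a pair of maximal-length LFSR sequences at different relative shifts. The sketch below is illustrative, not the stimulus generator from the study: the register length (n = 5) and the tap sets are assumptions of ours, chosen so that both feedback polynomials are primitive.

```python
def lfsr_sequence(taps, n, length):
    """Fibonacci LFSR over GF(2): `taps` are 1-indexed stage numbers;
    returns `length` output bits from an all-ones seed."""
    state = [1] * n
    out = []
    for _ in range(length):
        out.append(state[-1])                 # oldest bit is the output
        fb = 0
        for t in taps:
            fb ^= state[t - 1]                # feedback = XOR of tapped stages
        state = [fb] + state[:-1]             # shift, inserting feedback
    return out

def gold_code(shift, length=31):
    """XOR two period-31 m-sequences (a candidate pair of primitive
    polynomials), one cyclically shifted, to get one code of the family."""
    a = lfsr_sequence((5, 3), 5, length)
    b = lfsr_sequence((5, 4, 3, 2), 5, length)
    return [a[i] ^ b[(i + shift) % length] for i in range(length)]

print(gold_code(3))  # one binary code of length 31
```

Each shift of the second sequence yields a different code with low cross-correlation against the others, which is what makes such families attractive for distinguishing simultaneously flickering c-VEP stimuli.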
Adaptation of OCA-P, a probabilistic fracture-mechanics code, to a personal computer
International Nuclear Information System (INIS)
The OCA-P probabilistic fracture-mechanics code can now be executed on a personal computer with 512 kilobytes of memory, a math coprocessor, and a hard disk. A user's guide for this adaptation has been prepared, and additional importance-sampling techniques for OCA-P have been developed that allow sampling of only the tails of selected distributions. Features have also been added to OCA-P that permit RTNDT to be used as an 'independent' variable in the calculation of P
Phase code multiplexed ROM type holographic memory using the computer generated hologram
Ohuchi, Yasuhiro; Takahata, Yosuke; Yoshida, Shuhei; Yamamoto, Manabu
2009-05-01
For holographic memory, write-once data recording has been studied using photopolymer materials. Considering that optical disks have been developed in both ROM and recordable types, there appears to be a need to develop a ROM-type disk for holographic memory as well. For this ROM-type disk, the desired manufacturing method would be the one used for DVD disk production. Also, from the viewpoint of data transfer speed, the ability to reproduce data from a disk continuously rotating at high speed seems necessary. This paper describes a phase-code multiplexed ROM-type holographic memory using computer-generated holograms as the recorded data.
Generic validation of computer codes used in safety analyses of CANDU power plants
International Nuclear Information System (INIS)
Since the 1960s, the CANDU industry has been developing and using scientific computer codes, validated according to the quality-assurance practices of the day, for designing and analyzing CANDU power plants. To provide a systematic framework for the validation work done to date and planned for the future, the industry has decided to adopt the methodology of validation matrices, similar to that developed by the Nuclear Energy Agency of the Organization for Economic Co-operation and Development for Light Water Reactors (LWR). Specialists in six scientific disciplines are developing the matrices for CANDU plants, and their progress to date is presented. (author)
Resin Matrix/Fiber Reinforced Composite Material, II: Method of Solution and Computer Code
Institute of Scientific and Technical Information of China (English)
Li Chensha(李辰砂); Jiao Caishan; Liu Ying; Wang Zhengping; Wang Hongjie; Cao Maosheng
2003-01-01
Based on a mathematical model that describes the curing process of composites constructed from continuous fiber-reinforced, thermosetting resin matrix prepreg materials, and the consolidation of the composites, a solution method for the model is developed and a computer code is written. For flat-plate composites cured by a specified cure cycle, the code provides the variation of the temperature distribution, the cure reaction process in the resin, the resin flow and fiber stress inside the composite, the void variation, and the residual stress distribution.
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The fundamental algorithm of light beam propagation in high-power laser systems is investigated and the corresponding computational codes are given. It is shown that the number of modulation rings due to diffraction is related to the size of the pinhole in the spatial filter (in terms of the times of the diffraction limit, i.e., TDL) and the Fresnel number of the laser system; for a complex laser system with multiple spatial filters and free space, the system can be investigated by the reciprocal rule of operators.
A Fast New Public Code for Computing Photon Orbits in a Kerr Spacetime
Dexter, Jason
2009-01-01
Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semi-analytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.
International Nuclear Information System (INIS)
This report presents the results of the COBRA-SFS (Spent Fuel Storage) computer code validation effort. COBRA-SFS, while refined and specialized for spent fuel storage system analyses, is a lumped-volume thermal-hydraulic analysis computer code that predicts temperature and velocity distributions in a wide variety of systems. Through comparisons of code predictions with spent fuel storage system test data, the code's mathematical, physical, and mechanistic models are assessed, and empirical relations defined. The six test cases used to validate the code and code models include single-assembly and multiassembly storage systems under a variety of fill media and system orientations and include unconsolidated and consolidated spent fuel. In its entirety, the test matrix investigates the contributions of convection, conduction, and radiation heat transfer in spent fuel storage systems. To demonstrate the code's performance for a wide variety of storage systems and conditions, comparisons of code predictions with data are made for 14 runs from the experimental data base. The cases selected exercise the important code models and code logic pathways and are representative of the types of simulations required for spent fuel storage system design and licensing safety analyses. For each test, a test description, a summary of the COBRA-SFS computational model, assumptions, and correlations employed are presented. For the cases selected, axial and radial temperature profile comparisons of code predictions with test data are provided, and conclusions drawn concerning the code models and the ability to predict the data and data trends. Comparisons of code predictions with test data demonstrate the ability of COBRA-SFS to successfully predict temperature distributions in unconsolidated or consolidated single and multiassembly spent fuel storage systems
Directory of Open Access Journals (Sweden)
Christopher B. Ndome
2013-07-01
The acute toxicity of two local detergents, Omo (Unilever Nigeria Plc.) and Ariel (Procter & Gamble Nigeria Limited), was compared using fingerlings of the Clarias gariepinus ♀ x Heterobranchus longifilis ♂ hybrid (Heteroclarias) in a 96-hour bioassay. After a series of range-finding tests, the fish were exposed to concentrations of 0.00 ppm, 20 ppm, 30 ppm, 40 ppm, 45 ppm and 50 ppm of each detergent for 96 hours. The median lethal concentration (LC50) values ranged between 33.03–35.19 ppm and 37.43–39.79 ppm for Omo and Ariel, respectively. Manifestation times decreased from 62–14 and 70–14 hours; overturning times decreased from 80–16 and 92–20 hours, while survival times decreased from 96–17 and 96–23 hours for Omo and Ariel, respectively, with increasing concentrations of the toxicants. Respiratory disturbances, loss of righting balance, lethargy and sudden death were observed in the exposed fish. There was a strong concentration-mortality relationship for both toxicants, yielding strong positive correlation coefficients, r2, of 0.9925 and 0.9882 for Omo and Ariel, respectively. The t-test analysis showed a significant difference (p < 0.05) in some concentrations, with no significant difference (p > 0.05) recorded in the other concentrations. There were no significant differences (p > 0.05) in the overturning and survival times of Omo and Ariel detergents at any of the concentrations. The present study shows that Omo detergent, with a lower mean LC50 value of 34.11 ± 1.08, could be more toxic than Ariel, with a mean LC50 value of 36.66 ± 1.1. Although there was no statistically significant difference between their LC50s (p > 0.05), it was concluded that effluents containing these detergents must not be discharged indiscriminately into water bodies in order to avoid harm to fish and other aquatic life.
Program POD. A computer code to calculate cross sections for neutron-induced nuclear reactions
International Nuclear Information System (INIS)
A computer code, POD, was developed for neutron-induced nuclear data evaluations. This program is based on four theoretical models, (1) the optical model to calculate shape-elastic scattering and reaction cross sections, (2) the distorted wave Born approximation to calculate neutron inelastic scattering cross sections, (3) the preequilibrium model, and (4) the multi-step statistical model. With this program, cross sections can be calculated for reactions (n, γ), (n, n'), (n, p), (n, α), (n, d), (n, t), (n, 3He), (n, 2n), (n, np), (n, nα), (n, nd), and (n, 3n) in the neutron energy range above the resonance region to 20 MeV. The computational methods and input parameters are explained in this report, with sample inputs and outputs. (author)
Digital Poetry: A Narrow Relation between Poetics and the Codes of the Computational Logic
Laurentiz, Silvia
The project "Percorrendo Escrituras" (Walking Through Writings) has been developed at the ECA-USP Fine Arts Department. In summary, it intends to study different structures of digital information that share the same universe and generate a new aesthetic condition. The aim is to explore the expressive possibilities of the computer through algorithmic functions and other of its specific properties. It is a practical, theoretical, and interdisciplinary project in which the study of evolutionary programming languages, logic, and mathematics leads to poetic experimentation. The focus of this research is digital poetry, which begins with the poetics of permutational combinations and culminates in dynamic, complex systems that are autonomous, multi-user, and interactive, through agent generation, derivation, filtering, and emergent patterns. This lecture will present artworks that use mechanisms introduced by cybernetics and the notion of system in digital poetry, demonstrating the close relationship between poetics and the codes of computational logic.
OPT13B and OPTIM4 - computer codes for optical model calculations
International Nuclear Information System (INIS)
OPT13B is a FORTRAN computer code for optical model calculations with automatic search. A summary of the different formulae used in the computation is given. Numerical methods are discussed, as is the 'search' technique followed to obtain the set of optical model parameters that produces the best fit to experimental data in a least-squares sense. The different subroutines of the program are briefly described, and input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)
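The least-squares 'search' idea can be illustrated with a toy chi-square minimization over a parameter grid. The model, synthetic data, and single 'depth' parameter below are invented stand-ins; OPT13B's actual search varies several optical-model parameters simultaneously.

```python
import math

def chi_square(model, p, xs, ys, sigmas):
    """Weighted least-squares misfit between model(x, p) and the data."""
    return sum(((model(x, p) - y) / s) ** 2 for x, y, s in zip(xs, ys, sigmas))

def grid_search(model, grid, xs, ys, sigmas):
    """Return the grid point with the smallest chi-square."""
    return min(grid, key=lambda p: chi_square(model, p, xs, ys, sigmas))

# Toy one-parameter "potential depth" fit to synthetic data with V = 50:
model = lambda x, v: v * math.exp(-x)
xs = [0.0, 0.5, 1.0, 2.0]
ys = [model(x, 50.0) for x in xs]
best = grid_search(model, [40.0, 45.0, 50.0, 55.0, 60.0], xs, ys, [1.0] * 4)
print(best)  # 50.0
```

Real search routines replace the grid with gradient or simplex steps, but the objective, a chi-square over measured cross sections, is the same.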
An implementation of a tree code on a SIMD, parallel computer
Olson, Kevin M.; Dorband, John E.
1994-01-01
We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) type computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) type parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
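The sort-and-split construction described above can be sketched as a recursive median split over alternating coordinates. This serial Python version only illustrates the balanced-tree idea; it is not the parallel-sort Maspar implementation, and the dictionary node layout is our own choice.

```python
def build_tree(points, axis=0):
    """Recursively split a point list at the median of the current axis,
    cycling x -> y -> z, to get a completely balanced binary tree."""
    if len(points) <= 1:
        return {"leaf": points}
    pts = sorted(points, key=lambda p: p[axis])   # the "parallel sort" step
    mid = len(pts) // 2
    nxt = (axis + 1) % 3
    return {"axis": axis,
            "split": pts[mid][axis],
            "left": build_tree(pts[:mid], nxt),
            "right": build_tree(pts[mid:], nxt)}

def count_leaves(node):
    if "leaf" in node:
        return len(node["leaf"])
    return count_leaves(node["left"]) + count_leaves(node["right"])

pts = [(0.1, 0.2, 0.3), (0.9, 0.1, 0.5), (0.4, 0.8, 0.2), (0.7, 0.6, 0.9)]
tree = build_tree(pts)
```

Because every split is at the median, the two subtrees always hold equal particle counts (to within one), which is what gives the balanced structure and good spatial locality the paper exploits.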
Modeling developments for the SAS4A and SASSYS computer codes
International Nuclear Information System (INIS)
The SAS4A and SASSYS computer codes are being developed at Argonne National Laboratory for transient analysis of liquid metal cooled reactors. The SAS4A code is designed to analyze severe loss-of-coolant-flow and overpower accidents involving coolant boiling, cladding failures, and fuel melting and relocation. Recent SAS4A modeling developments include extension of the coolant boiling model to treat sudden fission gas release upon pin failure, expansion of the DEFORM fuel behavior model to handle advanced cladding materials and metallic fuel, and addition of metallic fuel modeling capability to the PINACLE and LEVITATE fuel relocation models. The SASSYS code is intended for the analysis of operational and beyond-design-basis transients, and provides a detailed transient thermal and hydraulic simulation of the core, the primary and secondary coolant circuits, and the balance of plant, in addition to a detailed model of the plant control and protection systems. Recent SASSYS modeling developments have resulted in detailed representations of the balance-of-plant piping network and components, including steam generators, feedwater heaters and pumps, and the turbine. 12 refs., 2 tabs
International Nuclear Information System (INIS)
One aspect of fast reactor safety analysis consists of calculating the strongly coupled system of physical phenomena which contribute to the reactivity balance in hypothetical whole-core accidents: these phenomena are neutronics, fuel behaviour and heat transfer together with coolant thermohydraulics in single- and two-phase flow. Temperature variations in fuel, coolant and neighbouring structures induce, in fact, thermal reactivity feedbacks which are added up and put in the neutronics calculation to predict the neutron flux and the subsequent heat generation in the reactor. At this point a whole-core analysis code is necessary to examine for any hypothetical transient whether the various feedbacks result effectively in a negative balance, which is the basis condition to ensure stability and safety. The European Accident Code (EAC), developed at the Joint Research Centre of the CEC at Ispra (Italy), fulfills this objective. It is a modular informatics structure (quasi 2-D multichannel approach) aimed at collecting stand-alone computer codes of neutronics, fuel pin mechanics and hydrodynamics, developed both in national laboratories and in the JRC itself. EAC makes these modules interact with each other and produces results for these hypothetical accidents in terms of core damage and total energy release. 10 refs
Analysis of the Behavior of CAREM-25 Fuel Rods Using Computer Code BACO
International Nuclear Information System (INIS)
The thermo-mechanical behavior of a fuel rod subjected to irradiation is a complex process in which a great number of interrelated physical-chemical phenomena are coupled. The BACO code simulates the thermo-mechanical behavior and the evolution of fission gases of a cylindrical rod in operation. The power history of the fuel rods, arising from neutronic calculations, is the program input. The code calculates, among other quantities, the temperature distribution and the principal stresses in the pellet and cladding, changes in the porosity and restructuring of the pellet, the fission gas release, and the evolution of the internal gas pressure. In this work some of the design limits of CAREM-25's fuel rods are analyzed by means of the computer code BACO. The main variables directly related to the integrity of the fuel rod are: maximum pellet temperature; cladding hoop stresses; gas pressure in the fuel rod; cladding axial and radial strains, etc. The analysis of the results indicates that, under normal operation conditions, the maximum fuel pellet temperature, cladding stresses, gas pressure at end of life, etc., are below the design limits considered for the fuel rod of the CAREM-25 reactor
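One of the design limits named above, the cladding hoop stress, has a simple closed form in the thin-wall approximation: sigma = (p_in - p_out) * r / t. The sketch below is a generic illustration of such a limit check; the pressures and geometry are invented round numbers, not CAREM-25 design data, and BACO itself evaluates the stresses with a far more detailed model.

```python
def hoop_stress(p_internal, p_external, r_mean, thickness):
    """Mean hoop stress in a thin-walled cylindrical cladding tube.
    Positive = tensile, negative = compressive; units follow the inputs."""
    return (p_internal - p_external) * r_mean / thickness

# Illustrative numbers only: 10 MPa internal gas pressure, 12.25 MPa coolant
# pressure, 4.0 mm mean cladding radius, 0.6 mm wall thickness.
sigma = hoop_stress(10.0e6, 12.25e6, 4.0e-3, 0.6e-3)  # negative: cladding in compression
```

A limit check then reduces to comparing abs(sigma) against the allowable stress for the cladding material at temperature.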
GARDEC: a computer code for estimating dose-rate reduction by garden decontamination
International Nuclear Information System (INIS)
Based on studies after the Chernobyl accident, it was found that the greatest contribution to the long-term external dose in the urban environment came from isotopes of radiocaesium deposited onto open areas such as gardens and parks. Cost-benefit analysis on the clean-up of nuclear contaminated urban areas also showed that decontamination of gardens would be the most cost-effective procedure and should be given the highest priority. A computer code, GARDEC, has been developed for estimating the reduction of dose rates by garden decontamination. This code takes account of three methods of decontamination: (i) digging a garden in a special way, (ii) removal of the upper layer of soil, and (iii) covering with a shielding layer of soil. Sample calculations were carried out to test the performance of the code. There were differences between model predictions and observations for the dose-rate reduction; these might result from differences between the conditions assumed in the calculations and those of the measurements. In spite of the differences, the calculations also confirmed that garden decontamination has a large effect in reducing the dose rate. (author)
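The third countermeasure above (covering with a shielding soil layer) can be illustrated with simple exponential attenuation. This is a sketch only: GARDEC's actual shielding model is not given in the abstract, and the linear attenuation coefficient below is a rough assumed value for Cs-137 gammas (662 keV) in soil, with buildup neglected.

```python
import math

def dose_reduction_factor(cover_cm, mu_per_cm=0.12):
    """Ratio of shielded to unshielded dose rate for a uniform soil cover
    of thickness cover_cm, using narrow-beam attenuation exp(-mu * d).
    mu_per_cm = 0.12 /cm is an assumed, order-of-magnitude value."""
    return math.exp(-mu_per_cm * cover_cm)

f = dose_reduction_factor(10.0)  # a 10 cm cover: roughly a 70% dose-rate reduction
```

Even this crude estimate shows why covering is effective: the reduction grows exponentially with cover thickness.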
IAMBUS, a computer code for the design and performance prediction of fast breeder fuel rods
International Nuclear Information System (INIS)
IAMBUS is a computer code for the thermal and mechanical design, in-pile performance prediction and post-irradiation analysis of fast breeder fuel rods. The code deals with steady, non-steady and transient operating conditions and enables prediction of the in-pile behavior of fuel rods in power reactors as well as in experimental rigs. Great effort went into the development of a realistic account of non-steady fuel rod operating conditions. The main emphasis is placed on characterizing the mechanical interaction taking place between the cladding tube and the fuel as a result of contact pressure and friction forces, with due consideration of the axial and radial crack configuration within the fuel as well as the gradual transition at the elastic/plastic interface with respect to fuel behavior. IAMBUS can be readily adapted to various fuel and cladding materials. The specific models and material correlations of the reference version deal with the actual in-pile behavior and physical properties of the KNK II and SNR 300 related fuel rod designs, confirmed by comparison of the fuel performance model with post-irradiation data. The comparison comprises steady, non-steady and transient irradiation experiments within the German/Belgian fuel rod irradiation program. The code is further validated by comparison of model predictions with post-irradiation data of standard fuel and breeder rods of Phenix and PFR as well as selected LWR fuel rods under non-steady operating conditions
Development of computer code models for analysis of subassembly voiding in the LMFBR
International Nuclear Information System (INIS)
The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered
Hierarchical surface code for network quantum computing with modules of arbitrary size
Li, Ying; Benjamin, Simon C.
2016-10-01
The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ~10% even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)
International Nuclear Information System (INIS)
The computer program, PRIMUS, reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. These codes obtain air concentrations and deposition rates from the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
International Nuclear Information System (INIS)
Safety evaluation of radioactive material shipping containers entails determining their response to severe impact conditions. In this paper a computer code for obtaining the nonlinear response of axisymmetric shipping containers to end-on impact is described. The nonlinear equations of motion are derived with the finite element method. Large displacements and nonlinear strain and material properties are considered. The resulting computer code, CRASHC, is then used to simulate several impact tests. Results from these analyses indicate that the code can be successfully used for simulating these impact conditions. Based on the simulations, several recommendations are made for improving these kinds of analyses and for interpreting the results
Computer code to predict the heat of explosion of high energy materials.
Muthurajan, H; Sivabalan, R; Pon Saravanan, N; Talawar, M B
2009-01-30
The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-à-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code LOTUSES developed by the authors predicts various characteristics of HEMs such as explosion products including balanced explosion reactions, density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, and the results are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. The linear regression analysis of all data points yields the correlation coefficient R^2 = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with the experimental values and useful for rapid hazard assessment of energetic materials. PMID:18513863
Computer code to predict the heat of explosion of high energy materials
Energy Technology Data Exchange (ETDEWEB)
Muthurajan, H. [Armament Research and Development Establishment, Pashan, Pune 411021 (India)], E-mail: muthurajan_h@rediffmail.com; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B. [High Energy Materials Research Laboratory, Sutarwadi, Pune 411 021 (India)
2009-01-30
The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-à-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code LOTUSES developed by the authors predicts various characteristics of HEMs such as explosion products including balanced explosion reactions, density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, and the results are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. The linear regression analysis of all data points yields the correlation coefficient R^2 = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with the experimental values and useful for rapid hazard assessment of energetic materials.
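The validation step reported above (regressing computed against experimental heats of explosion to get y = 0.9262x + 101.45 with R^2 = 0.9721) is ordinary least squares. The sketch below shows only the method; the data points are synthetic, not the paper's computed/experimental values.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y ~ slope*x + intercept.
    Returns (slope, intercept, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

An R^2 near 1 (as the 0.9721 quoted above) means the residual sum of squares is small relative to the total variance, i.e. the computed values track the experimental ones closely.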
Two-terminal Distributed Source Coding with Alternating Messages for Function Computation
Ma, Nan
2008-01-01
A two-terminal interactive distributed source coding problem with alternating messages is studied. The focus is on function computation at both locations with a probability which tends to one as the blocklength tends to infinity. A single-letter characterization of the rate region is provided. It is observed that interaction is useless (in terms of the minimum sum-rate) if the goal is pure source reproduction at one or both locations but the gains can be arbitrarily large for (general) function computation. For doubly symmetric binary sources and any Boolean function, interaction is useless with even infinite messages, when computation is desired at only one location, but is useful, when desired at both locations. For independent Bernoulli sources and the Boolean AND function computation at both locations, an interesting achievable infinite-message sum-rate is derived. This sum-rate is expressed, in analytic closed-form, in terms of a two-dimensional definite integral with an infinitesimal rate for each message.
International Nuclear Information System (INIS)
During the development process of a thermal-hydraulic system code, a non-regression test (NRT) must be performed repeatedly in order to prevent software regression. The NRT process, however, is time-consuming and labor-intensive. Thus, automation of this process is an ideal solution. In this study, we have developed a program to support an efficient NRT for the SPACE code and demonstrated its usability. This results in a high degree of efficiency for code development. The program was developed using the Visual Basic for Applications and designed so that it can be easily customized for the NRT of other computer codes.
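The core of any such non-regression harness is comparing a fresh run's outputs against a stored baseline within a tolerance. The sketch below is a generic illustration of that comparison step; the data layout and tolerance are assumptions, not those of the actual SPACE NRT program (which, as noted, is built in Visual Basic for Applications).

```python
def compare_runs(baseline, current, rel_tol=1e-6):
    """baseline/current: dicts mapping output-variable name -> list of values
    (e.g. a time history).  Returns the names of variables that regressed:
    missing, different length, or outside the relative tolerance."""
    failed = []
    for name, ref in baseline.items():
        new = current.get(name)
        if new is None or len(new) != len(ref):
            failed.append(name)
            continue
        for r, c in zip(ref, new):
            if abs(c - r) > rel_tol * max(abs(r), 1.0):
                failed.append(name)
                break
    return failed
```

Automating exactly this loop over a suite of input decks is what turns the time-consuming manual NRT into a one-command check after every code change.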
International Nuclear Information System (INIS)
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are MORSE-SGC for the SCALE system, HEATING 7.2, and KENO V.a. The manual describes the latest released versions of the codes
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are MORSE-SGC for the SCALE system, HEATING 7.2, and KENO V.a. The manual describes the latest released versions of the codes.
International Nuclear Information System (INIS)
The Fortran IV computer code, REY (REsolution and Identification), has been developed for the automatic resolution of gamma-ray spectra from high resolution Ge-Li detectors. The code searches for the full energy peaks in the spectrum, takes the spectrum background as the base line under each peak, and calculates the energy of the statistically significant peaks. The code also assigns each peak to the most probable isotope and makes a selection of all the possible radioisotopes in the spectrum, according to the relative intensities of all the peaks in the whole spectrum. Finally, it obtains the activities, in microcuries, of each isotope according to the geometry used in the measurement. Although the code is a general purpose one, its current library of nuclear data is adapted for the analysis of liquid effluents from nuclear power plants. A computer with a 16 K core memory and a hard disk are sufficient for this code. (author)
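The peak-search step described above can be sketched as: flag a channel as a full-energy peak when it is a local maximum and its counts exceed the local base line by more than a few standard deviations of the counting statistics. The window size and significance threshold below are illustrative assumptions, not REY's actual values.

```python
def find_peaks(counts, half_window=3, n_sigma=3.0):
    """counts: gamma-ray spectrum as counts per channel.
    Returns the channel indices of statistically significant local maxima.
    The base line under a candidate peak is estimated from the channels at
    the edges of a small window; Poisson statistics give sigma = sqrt(base)."""
    peaks = []
    for i in range(half_window, len(counts) - half_window):
        base = (counts[i - half_window] + counts[i + half_window]) / 2.0
        net = counts[i] - base
        sigma = max(base, 1.0) ** 0.5
        is_max = counts[i] == max(counts[i - half_window:i + half_window + 1])
        if is_max and net > n_sigma * sigma:
            peaks.append(i)
    return peaks
```

Once the significant peak channels are known, an energy calibration maps channel to keV, which is what allows the subsequent isotope assignment by energy and relative intensity.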
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
ARIEL e-linac. Electron linear accelerator for photo-fission
Koscielniak, Shane
2014-01-01
The design and implementation of a 1/2 MW beam power electron linear accelerator (e-linac) for the production of rare isotope beams (RIB) via photo-fission in the context of the Advanced Rare IsotopE Laboratory, ARIEL (Koscielniak et al. 2008; Merminga et al. 2011; Dilling et al., Hyperfine Interact, 2013), is described. The 100 % duty factor e-linac is based on super-conducting radiofrequency (SRF) technology at 1.3 GHz and has a nominal energy of 50 MeV. This paper provides an overview of the accelerator major components including the gun, cryomodules and cryoplant, high power RF sources, and machine layout including beam lines. Design features to facilitate operation of the linac as a Recirculating Linear Accelerator (RLA) for various applications, including Free Electron Lasers, are also noted.
Energy Technology Data Exchange (ETDEWEB)
Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N. [Moscow Power Engineering Institute (Technical University), Moscow (Russian Federation)
2007-07-01
A new special module for neutron-physics and thermal-hydraulics computer codes has been developed to calculate coolant acoustical characteristics. The Russian computer code Rainbow has been selected for joint use with the developed module. This code system provides the possibility of EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) calculations in any coolant acoustical element of the primary circuits of NPPs. EFOCP values have been calculated for transient and for stationary operation. The calculated results for nominal operation were compared with measured EFOCP values. For example, this comparison was provided for the system 'pressurizer + surge line' of a WWER-1000 reactor. The calculated result of 0.58 Hz practically coincides with the result of measurement (0.6 Hz). The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.
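One way to picture an eigenfrequency of the kind quoted above is to model the 'pressurizer + surge line' system as a Helmholtz resonator, with the surge line as the neck and the pressurizer as the cavity: f = (c / 2*pi) * sqrt(A / (L*V)). This is only an illustrative analogy with invented round numbers for geometry and sound speed, not WWER-1000 data, and it is not claimed to reproduce the 0.58 Hz result, which comes from the full acoustical model.

```python
import math

def helmholtz_frequency(c, area, length, volume):
    """Lowest acoustic eigenfrequency of a neck-plus-cavity system:
    f = (c / 2*pi) * sqrt(A / (L * V)), with neck cross-section A,
    neck length L, cavity volume V, and sound speed c (SI units)."""
    return (c / (2.0 * math.pi)) * math.sqrt(area / (length * volume))

# Illustrative values: c = 1000 m/s in hot pressurized water, a surge line of
# 0.07 m^2 cross-section and 30 m length, a 40 m^3 pressurizer volume.
f = helmholtz_frequency(c=1000.0, area=0.07, length=30.0, volume=40.0)  # order of 1 Hz
```

Even this crude model lands in the right frequency range (of order 1 Hz), showing why such low EFOCP values arise from the large compliant volume fed by a long narrow line.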
ARIEL - The Atmospheric Remote-sensing Infrared Exoplanet Large-survey
Eccleston, P.; Tinetti, G.
2015-10-01
More than 1,000 extrasolar systems have been discovered, hosting nearly 2,000 exoplanets. Ongoing and planned ESA and NASA missions from space such as GAIA, Cheops, PLATO, K2 and TESS, plus ground based surveys, will increase the number of known systems to tens of thousands. Of all these exoplanets we know very little; i.e. their orbital data and, for some of these, their physical parameters such as their size and mass. In the past decade, pioneering results have been obtained using transit spectroscopy with Hubble, Spitzer and ground-based facilities, enabling the detection of a few of the most abundant ionic, atomic and molecular species and to constrain the planet's thermal structure. Future general purpose facilities with large collecting areas will allow the acquisition of better exoplanet spectra, compared to the currently available, especially from fainter targets. A few tens of planets will be observed with JWST and E-ELT in great detail. A breakthrough in our understanding of planet formation and evolution mechanisms will only happen through the observation of the planetary bulk and atmospheric composition of a statistically large sample of planets. This requires conducting spectroscopic observations covering simultaneously a broad spectral region from the visible to the mid-IR. It also requires a dedicated space mission with the necessary photometric stability to perform these challenging measurements and sufficient agility to observe multiple times ~500 exoplanets over 3.5 years. The ESA Cosmic Vision M4 mission candidate ARIEL is designed to accomplish this goal and will provide a complete, statistically significant sample of gas-giants, Neptunes and super-Earths with temperatures hotter than 600K, as these types of planets will allow direct observation of their bulk properties, enabling us to constrain models of planet formation and evolution. The ARIEL consortium currently includes academic institutes and industry from eleven countries in Europe; the
The HARWELL version of the computer code E-DEP-1
International Nuclear Information System (INIS)
This document describes the modified HARWELL version of the computer program EDEP-1 which has been in use on the IBM Central Computer for some years. The program can be used to calculate heavy ion ranges and/or profiles of energy deposited into nuclear processes for a wide variety of ion/target combinations. The initial setting up of this program on the IBM Central Computer has been described in an earlier report. A second report was later issued to bring the first report up to date following changes to this code required to suit the needs of workers at HARWELL. This later report described in particular the provision of new electronic stopping powers and an alternative method for calculating the energy straggle of beam ions with depth in a target. This new report describes further extensions to the electronic stopping powers available in the HARWELL version of this program and, for the first time, gives details of alternative nuclear stopping powers now available. This new document is intended as a reference manual for the use of the HARWELL version of EDEP-1. In this respect this document should be the final report on the status of this program. (author)
International Nuclear Information System (INIS)
The paper emphasizes the computational aspects of combined analytical-experimental investigations concerning dynamic elastoplastic deformation of a single fuel subassembly under transverse pressure loading. To simulate the situation within a fast reactor core during a postulated local vapor explosion the cushioning effect of the thin sodium layers between subassemblies has to be included. Therefore a new computer code CORTRAN was developed which combines both a one-dimensional axial squeeze flow model as well as a variable cross-section Timoshenko beam model. The paper outlines the salient features of these computational schemes. In particular, different types of dynamic plastic material behavior formulations like the implementation of various hardening rules (e.g. isotropic, kinematic) and a newly proposed catastrophe-theoretic interpretation of strain-rate dependent dynamic unloading are discussed. As applications to fast reactor safety analysis typical results of parametric studies which show the influence of some relevant energy sources (load histories), fluid support conditions or material data on structural damage are compared with experimental findings
Energy Technology Data Exchange (ETDEWEB)
Berna, G. A; Bohn, M. P.; Rausch, W. N.; Williford, R. E.; Lanning, D. D.
1981-01-01
FRAPCON-2 is a FORTRAN IV computer code that calculates the steady state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, deformation, and failure histories of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (a) heat conduction through the fuel and cladding, (b) cladding elastic and plastic deformation, (c) fuel-cladding mechanical interaction, (d) fission gas release, (e) fuel rod internal gas pressure, (f) heat transfer between fuel and cladding, (g) cladding oxidation, and (h) heat transfer from cladding to coolant. The code contains the necessary material properties, water properties, and heat transfer correlations. FRAPCON-2 is programmed for use on the CDC Cyber 175 and 176 computers. The FRAPCON-2 code is designed to generate initial conditions for transient fuel rod analysis by either the FRAP-T6 computer code or the thermal-hydraulic code RELAP4/MOD7 Version 2.
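Item (a) above, heat conduction through the fuel, has a well-known closed form in the simplest case: for steady, uniform volumetric heating in a solid cylindrical pellet with constant conductivity, the centre-to-surface temperature rise is dT = q' / (4*pi*k), with q' the linear heat rate. The numbers below are generic LWR magnitudes for illustration; FRAPCON-2 itself uses temperature-dependent conductivity and a full radial solution.

```python
import math

def pellet_delta_t(linear_power_w_per_m, k_w_per_mk):
    """Centre-line minus surface temperature [K] of a solid cylindrical fuel
    pellet with uniform heating and constant thermal conductivity k:
    dT = q' / (4 * pi * k).  Note the pellet radius cancels out."""
    return linear_power_w_per_m / (4.0 * math.pi * k_w_per_mk)

# Illustrative: 20 kW/m linear power, UO2 conductivity ~3 W/m-K.
dT = pellet_delta_t(20000.0, 3.0)  # roughly 530 K centre-to-surface rise
```

The several-hundred-kelvin rise this simple formula gives is why fuel centre-line temperature, rather than surface temperature, is the limiting thermal quantity in steady-state fuel rod analysis.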
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.
International Nuclear Information System (INIS)
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package
A computational model of cellular mechanisms of temporal coding in the medial geniculate body (MGB).
Directory of Open Access Journals (Sweden)
Cal F Rabang
Full Text Available Acoustic stimuli are often represented in the early auditory pathway as patterns of neural activity synchronized to time-varying features. This phase-locking predominates until the level of the medial geniculate body (MGB), where previous studies have identified two main, largely segregated response types: stimulus-synchronized responses, which faithfully preserve the temporal coding of their afferent inputs, and non-synchronized responses, which are not phase-locked to the inputs and represent changes in temporal modulation by a rate code. The cellular mechanisms underlying this transformation from phase-locked to rate code are not well understood. We use a computational model of a MGB thalamocortical neuron to test the hypothesis that these response classes arise from inferior colliculus (IC) excitatory afferents with divergent properties similar to those observed in brain slice studies. Large-conductance inputs exhibiting synaptic depression preserved input synchrony at interclick intervals as short as 12.5 ms, while maintaining low firing rates and low-pass filtering responses. By contrast, small-conductance inputs with mixed plasticity (depression of the AMPA-receptor component and facilitation of the NMDA-receptor component) desynchronized afferent inputs, generated a click-rate dependent increase in firing rate, and high-pass filtered the inputs. Synaptic inputs with facilitation often permitted band-pass synchrony along with band-pass rate tuning. These responses could be tuned by changes in membrane potential, strength of the NMDA component, and characteristics of synaptic plasticity. These results demonstrate how the same synchronized input spike trains from the inferior colliculus can be transformed into different representations of temporal modulation by divergent synaptic properties.
Finite Element Simulation Code for Computing Thermal Radiation from a Plasma
Nguyen, C. N.; Rappaport, H. L.
2004-11-01
A finite element code, ``THERMRAD,'' for computing thermal radiation from a plasma is under development. Radiation from plasma test particles is found in cylindrical geometry. Although the plasma equilibrium is assumed axisymmetric, individual test particle excitation produces a non-axisymmetric electromagnetic response. Specially designed Whitney class basis functions are to be used to allow the problem to be solved on a two-dimensional grid. The basis functions enforce both a vanishing of the divergence of the electric field within grid elements, where the complex index of refraction is assumed constant, and continuity of tangential electric field across grid elements, while allowing the normal component of the electric field to be discontinuous. An appropriate variational principle, which incorporates the Sommerfeld radiation condition on the simulation boundary, as well as its discretization by the Rayleigh-Ritz technique, is given. 1. ``Finite Element Method for Electromagnetics Problems,'' Volakis et al., Wiley, 1998.
Bousquet, Nicolas
2010-01-01
This article deals with the estimation of a probability p of an undesirable event. Its occurrence is formalized as the exceedance of a reliability threshold by the one-dimensional output of a time-consuming computer code G with multivariate probabilistic input X. When G is assumed monotonic with respect to X, the Monotonous Reliability Method was proposed by de Rocquigny (2009) in an engineering context to provide sequentially narrowing 100%-confidence bounds and a crude estimate of p, via deterministic or stochastic designs of experiments. The present article consists of a formalization and technical deepening of this idea, as a broad basis for future theoretical and applied studies. Three kinds of results are especially emphasized. First, the bounds themselves remain too crude and conservative as estimators of p when the dimension of X is greater than 2. Second, a maximum-likelihood estimator of p can easily be built, presenting a high variance reduction with respect to a standard Monte Carlo case, but suffering ...
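For orientation, the crude Monte Carlo baseline that such structure-exploiting methods improve on can be sketched as follows (the toy monotone code G and its input distribution are invented for illustration; this is not the article's estimator):

```python
import random

def crude_mc_probability(G, sample_input, threshold, n=100_000, seed=1):
    """Crude Monte Carlo estimate of p = P(G(X) > threshold)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if G(sample_input(rng)) > threshold)
    return hits / n

# Toy monotone "computer code": G increases in both of its inputs.
toy_G = lambda x: x[0] + x[1]
uniform_pair = lambda rng: (rng.random(), rng.random())

# True exceedance probability for threshold 1.8 is 0.02
# (the corner triangle above the line x1 + x2 = 1.8).
p_hat = crude_mc_probability(toy_G, uniform_pair, threshold=1.8)
```

Each call to the expensive code G costs one sample, which is exactly why monotonicity-based bounds that discard whole regions of the input space are attractive.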
ACUTRI a computer code for assessing doses to the general public due to acute tritium releases
Yokoyama, S; Noguchi, H; Ryufuku, S; Sasaki, T
2002-01-01
Tritium, which is used as a fuel in a D-T burning fusion reactor, is the most important radionuclide for the safety assessment of a nuclear fusion experimental reactor such as ITER. Thus, a computer code, ACUTRI, which calculates the radiological impact of tritium released accidentally to the atmosphere, has been developed, aiming to be of use in discussions of the licensing of a fusion experimental reactor and of an environmental safety evaluation method in Japan. ACUTRI calculates an individual tritium dose based on transfer models specific to tritium in the environment and on ICRP dose models. In this calculation it is also possible to perform a statistical analysis of meteorology in the same way as conventional dose assessment methods, according to the meteorological guide of the Nuclear Safety Commission of Japan. A Gaussian plume model is used for calculating the atmospheric dispersion of tritium gas (HT) and/or tritiated water (HTO). The environmental pathway model in ACUTRI considers the following internal exposures: i...
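The Gaussian plume dispersion step mentioned above has a standard closed form; a minimal sketch of the generic textbook formula with ground reflection (not ACUTRI's actual implementation, nor its tritium-specific transfer pathways):

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Air concentration (Bq/m^3) from a continuous point release:
    Q release rate (Bq/s), u wind speed (m/s), y crosswind and z
    vertical coordinates (m), H effective release height (m),
    sigma_y / sigma_z dispersion parameters (m) at the downwind
    distance of interest. Includes the usual ground-reflection
    image term."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q * lateral * vertical / (2.0 * math.pi * u * sigma_y * sigma_z)
```

The concentration falls off as a Gaussian in the crosswind direction, so a receptor on the plume centerline bounds the off-axis doses at the same downwind distance.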
Discrete logarithm computations over finite fields using Reed-Solomon codes
Augot, Daniel
2012-01-01
Cheng and Wan have related the decoding of Reed-Solomon codes to the computation of discrete logarithms over finite fields, with the aim of proving the hardness of their decoding. In this work, we experiment with solving the discrete logarithm over GF(q^h) using Reed-Solomon decoding. For fixed h and q going to infinity, we introduce an algorithm (RSDL) needing O (h! q^2) operations over GF(q), operating on a q x q matrix with (h+2) q non-zero coefficients. We give faster variants including an incremental version and another one that uses auxiliary finite fields that need not be subfields of GF(q^h); this variant is very practical for moderate values of q and h. We include some numerical results of our first implementations.
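For contrast with the structured Reed-Solomon approach, the generic square-root-time baseline for discrete logarithms can be sketched as follows (textbook baby-step giant-step; this is not the RSDL algorithm of the paper):

```python
import math

def bsgs_dlog(g, h, p):
    """Baby-step giant-step: return x with g**x == h (mod p), where p is
    prime and h lies in the subgroup generated by g. O(sqrt(p)) time
    and space; generic, unlike the Reed-Solomon-based approach, which
    exploits field structure."""
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    step = pow(g, -m, p)                         # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):                           # giant steps h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None
```

The assumed hardness of exactly this problem underlies the reduction: an efficient Reed-Solomon decoder in the cited regime would yield discrete logarithms far faster than such generic methods.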
Evaluation of MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines
Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.
1979-01-01
Calculated dynamic blade loads are compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multiblade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The results obtained with this approximate analysis do not agree with dynamic blade load amplifications at or close to resonance conditions. The results of the second version, which accounts for periodic coefficients while solving the equations by a time history integration, compare well with the measured data.
Evaluation of MOSTAS computer code for predicting dynamic loads in two bladed wind turbines
Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.
1979-01-01
Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for periodic coefficients while solving the equations by a time history integration. A hypothetical three-degree-of-freedom dynamic model was investigated. The exact equations of motion of this model were solved using the Floquet-Liapunov method. The equations with time-averaged coefficients were solved by standard eigenanalysis.
RADTRAN 4.0: Advanced computer code for transportation risk assessment
International Nuclear Information System (INIS)
RADTRAN 4.0 is a computer code for transportation risk assessment developed by Sandia National Laboratories for the US Department of Energy. While retaining the most useful and time-proven features of its predecessors, RADTRAN 4.0 incorporates significant advances over the earlier versions. The most useful new features are: improved route-specific analysis capability, internal radionuclide data library, improved logic for analysis of multiple-radionuclide packages such as spent fuel, separate treatment of gamma and neutron components of Transport Index (TI), and increased number of accident-severity categories. In this paper, each of these features will be described, and, where appropriate, potential applications will be discussed. 11 refs
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
International Nuclear Information System (INIS)
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
Bade, W. L.; Yos, J. M.
1975-01-01
A computer program for calculating quasi-one-dimensional gas flow in axisymmetric and two-dimensional nozzles and rectangular channels is presented. Flow is assumed to start from a state of thermochemical equilibrium at a high temperature in an upstream reservoir. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. Electronic nonequilibrium effects can be included using a two-temperature model. An approximate laminar boundary layer calculation is given for the shear and heat flux on the nozzle wall. Boundary layer displacement effects on the inviscid flow are considered also. Chemical equilibrium and transport property calculations are provided by subroutines. The code contains precoded thermochemical, chemical kinetic, and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It provides calculations of the stagnation conditions on axisymmetric or two-dimensional models, and of the conditions on the flat surface of a blunt wedge. The primary purpose of the code is to describe the flow conditions and test conditions in electric arc heated wind tunnels.
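In the frozen-chemistry limit for a calorically perfect gas, such a quasi-one-dimensional nozzle calculation reduces to the textbook isentropic area-Mach relation; a minimal sketch (not the program's equilibrium or finite-rate chemistry solutions):

```python
def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* at Mach number M for a calorically
    perfect gas (frozen chemistry); the algebraic relation a quasi-1D
    nozzle solution inverts to find M at each station:
        A/A* = (1/M) * [(2/(g+1)) * (1 + (g-1)/2 * M^2)]^((g+1)/(2(g-1)))"""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M
```

At the throat area_ratio(1.0) is exactly 1; at M = 2 with gamma = 1.4 it evaluates to 1.6875. Since each area ratio corresponds to one subsonic and one supersonic Mach number, the inversion must pick the branch appropriate to the nozzle section.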
International Nuclear Information System (INIS)
The current doctoral research is focused on the development and validation of a coupled computational tool, to combine the advantages of computational fluid dynamics (CFD) in analyzing complex flow fields and of state-of-the-art system codes employed for nuclear power plant (NPP) simulations. Such a tool can considerably enhance the analysis of NPP transient behavior, e.g. in the case of pressurized water reactor (PWR) accident scenarios such as Main Steam Line Break (MSLB) and boron dilution, in which strong coolant flow asymmetries and multi-dimensional mixing effects strongly influence the reactivity of the reactor core, as described in Chap. 1. To start with, a literature review on code coupling is presented in Chap. 2, together with the corresponding ongoing projects in the international community. Special reference is made to the framework in which this research has been carried out, i.e. the Paul Scherrer Institute's (PSI) project STARS (Steady-state and Transient Analysis Research for the Swiss reactors). In particular, the codes chosen for the coupling, i.e. the CFD code ANSYS CFX V11.0 and the system code US-NRC TRACE V5.0, are part of the STARS codes system. Their main features are also described in Chap. 2. The development of the coupled tool, named CFX/TRACE from the names of the two constitutive codes, has proven to be a complex and broad-based task, and therefore constraints had to be put on the target requirements, while keeping in mind a certain modularity to allow future extensions to be made with minimal efforts. After careful consideration, the coupling was defined to be on-line, parallel and with non-overlapping domains connected by an interface, which was developed through the Parallel Virtual Machines (PVM) software, as described in Chap. 3. Moreover, two numerical coupling schemes were implemented and tested: a sequential explicit scheme and a sequential semi-implicit scheme. Finally, it was decided that the coupling would be single
Computational Code to Determine the Optical Constants of Materials with Astrophysical Importance
Robson Rocha, Will; Pilling, Sergio
Several environments in the interstellar medium (ISM) are composed of dust grains (e.g. silicates), which in some regions can be covered by astrophysical ices (frozen molecular species). The presence of these materials inside dense and cold regions in space, such as molecular clouds and circumstellar disks around young stars, has been proven by space telescopes (e.g. Herschel, Spitzer, ISO) using infrared spectroscopy. In such environments, molecules such as H2O, CO, CO2, NH3, and CH3OH, among others, may exist in the solid phase and constitute what we call interstellar ices. In this work we present a code called NKABS (acronym for “N and K determination from ABSorbance data”) to calculate the optical constants of materials with astrophysical importance directly from absorbance data in the infrared. It is a free code, developed in the Python programming language, available for the Windows operating system. The parameters obtained using the NKABS code are essential for studies involving computational modeling of star-forming regions in the infrared. The experimental data have been obtained using a high-vacuum portable chamber from the Laboratorio de Astroquímica e Astrobiologia (LASA/UNIVAP). The samples used to calculate the optical constants presented here were obtained from the condensation of pure gases (e.g. CO, CO2, NH3, SO2), from the sublimation in vacuum of pure liquids (e.g. water, acetone, acetonitrile, acetic acid, formic acid, ethanol and methanol) and from mixtures of different species (e.g. H2O:CO2, H2O:CO:NH3, H2O:CO2:NH3:CH4). Additionally, films of solid biomolecule samples of astrochemistry/astrobiology interest (e.g. glycine, adenine) were probed. The NKABS code may also calculate the optical constants of materials processed by radiation, a scenario very common in star-forming regions. The authors would like to thank the agencies FAPESP (JP#2009/18304-0 and PHD#2013/07657-5), FVE
Energy Technology Data Exchange (ETDEWEB)
Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)
2000-07-01
The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on five analytical methods which are the most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in time and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Fortran-77 Microsoft language for the MS-DOS environment using structured programming techniques. It runs on most IBM compatible personal computers. The source code and its computational architecture as well as the input and output files are described in detail. Validation and application examples on the use of this computer code with wellbore temperature data (obtained from specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
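One of the classic conductive methods such a tool typically bundles is the Horner-plot extrapolation; a minimal sketch (variable names and the synthetic usage are ours, not STATIC_TEMP's interface):

```python
import math

def horner_static_temperature(t_circ, shutin_times, temperatures):
    """Least-squares fit of bottomhole temperatures against Horner time
    ln((t_circ + dt) / dt), where t_circ is the circulation time and dt
    the shut-in times; the intercept (Horner time -> 0, i.e. infinite
    shut-in) estimates the static formation temperature."""
    x = [math.log((t_circ + dt) / dt) for dt in shutin_times]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(temperatures) / n
    slope = (sum((xi - xbar) * (yi - ybar)
                 for xi, yi in zip(x, temperatures))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar  # intercept as dt -> infinity
```

On synthetic data generated exactly by the Horner model the fit recovers the formation temperature; with field data the several analytical methods generally disagree, which is why a code reporting all of them side by side is useful.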
International Nuclear Information System (INIS)
This manual describes the technical bases and use of the computer code FIRIN. This code was developed to estimate the source term release of smoke and radioactive particles from potential fires in nuclear fuel cycle facilities. FIRIN is a product of a broader study, Fuel Cycle Accident Analysis, which Pacific Northwest Laboratory conducted for the US Nuclear Regulatory Commission. The technical bases of FIRIN consist of a nonradioactive fire source term model, compartment effects modeling, and radioactive source term models. These three elements interact with each other in the code affecting the course of the fire. This report also serves as a complete FIRIN user's manual. Included are the FIRIN code description with methods/algorithms of calculation and subroutines, code operating instructions with input requirements, and output descriptions. 40 refs., 5 figs., 31 tabs
International Nuclear Information System (INIS)
A short description of the TOPRA-s computer code is presented. The code is developed to calculate the thermophysical cross-section characteristics of the WWER fuel rods: fuel temperature distributions and fuel-to-cladding gap conductance. The TOPRA-s input does not require the fuel rod irradiation pre-history (time dependent distributions of linear power, fast neutron flux and coolant temperature along the rod). The required input consists of the considered cross-section data (coolant temperature, burnup, linear power) and the overall fuel rod data (burnup and linear power). TOPRA-s is included into the KASKAD code package. Some results of the TOPRA-s code validation using the SOFIT-1 and IFA-503.1 experimental data, are shown. A short description of the TRANSURANUS code for thermal and mechanical predictions of the LWR fuel rod behavior at various irradiation conditions and its version for WWER reactors, are presented. (Authors)
International Nuclear Information System (INIS)
Computer codes for modelling the dispersion and transfer of tritium released to the atmosphere were compared. The codes originated from Canada, the United States, Sweden and Japan. The comparisons include acute and chronic emissions of tritiated water vapour or elemental tritium from a hypothetical nuclear facility. Individual and collective doses to the population within 100 km of the site were calculated. The discrepancies among the code predictions were about one order of magnitude for the HTO emissions but were significantly more varied for the HT emissions. Codes that did not account for HT to HTO conversion and cycling of tritium in the environment predicted doses that were several orders of magnitude less than codes that incorporate this feature into the model
Detected jump-error correcting quantum codes, quantum error designs and quantum computation
Alber, G.; Beth, Th.; Charnes, Ch.; Delgado, A; Grassl, M.; Mussinger, M.
2002-01-01
The recently introduced detected-jump correcting quantum codes are capable of stabilizing qubit-systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one detected jump-error correcting quantum codes is shown and t...
Kuvychko, Igor
2000-05-01
Human vision involves higher-level knowledge and top-down processes for resolving ambiguity and uncertainty in real images. Even very advanced low-level image processing cannot provide any advantage without a highly effective knowledge-representation and reasoning system, which is the heart of the image understanding problem. Methods of image analysis and coding are directly based on the methods of knowledge representation and processing. This article suggests such models and mechanisms in the form of a Spatial Turing Machine that, in place of symbols and tapes, works with hierarchical networks represented dually as discrete and continuous structures. Such networks are able to perform both graph and diagrammatic operations, which are the basis of intelligence. Computational intelligence methods provide transformation of continuous image information into discrete structures, making it available for analysis. The article shows that symbols naturally emerge in such networks, giving the opportunity to use symbolic operations. Such a framework naturally combines methods of machine learning, classification, and analogy with induction, deduction, and other methods of higher-level reasoning. Based on these principles, an image understanding system provides more flexible ways of handling ambiguity and uncertainty in real images and does not require supercomputers. That opens the way to new technologies in computer vision and image databases.
A simulation of a pebble bed reactor core by the MCNP-4C computer code
Directory of Open Access Journals (Sweden)
Bakhshayesh Moshkbar Khalil
2009-01-01
Full Text Available Lack of energy is a major crisis of our century; the irregular increase of fossil fuel costs has forced us to search for novel, cheaper, and safer sources of energy. Pebble bed reactors, an advanced new generation of reactors with specific advantages in safety and cost, might turn out to be the desired candidate for the role. The calculation of the critical height of a pebble bed reactor at room temperature, using the MCNP-4C computer code, is the main goal of this paper. In order to reduce the MCNP computing time compared to the previously proposed schemes, we have devised a new simulation scheme. Different arrangements of kernels in fuel pebble simulations were investigated and the best arrangement for decreasing the MCNP execution time (while keeping the accuracy of the results) was chosen. The neutron flux distribution and control rod worth, as well as their shadowing effects, have also been considered in this paper. All calculations done for the HTR-10 reactor core are in good agreement with experimental results.
POSTCON: A postprocessor and unit conversion program for the contain computer code
International Nuclear Information System (INIS)
The numerical predictions from use of the CONTAIN severe reactor accident containment analysis computer code normally take the form of massive quantities of output data. The purpose of the POSTCON computer program is to provide an easy-to-use and efficient method for examining such results. In this report the capabilities of POSTCON are described and instructions for the use of the program are given. In order to clarify the discussion of the input options and output format, several examples are presented including actual input, output, and vector files. The summary sections of this document serve as a user's manual and can be consulted for the construction of simple POSTCON input files. The detailed sections and Appendix A serve as a comprehensive reference manual that can be consulted for advanced POSTCON applications. The overall capabilities of POSTCON include extraction of all transient data from CONTAIN binary plot files, multiple file handling, a flexible unit conversion system, snapshot and histogram options, automatic pagination and labeling of tables, and vector output files for plot program interfacing
Energy Technology Data Exchange (ETDEWEB)
Kostin, Mikhail [FRIB, MSU; Mokhov, Nikolai [FNAL; Niita, Koji [RIST, Japan
2013-09-25
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
International Nuclear Information System (INIS)
This report is intended as a user's manual for a general-purpose computer program BAYES to solve Bayes equations for updating parameter values, uncertainties, and correlations. Bayes equations are derived from Bayes theorem, using linearity and normality assumptions. The method of solution is described, and details are given for adapting the code for a specific purpose. Numerous examples are given, including problem description and solution method, FORTRAN coding, and sample input and output. A companion code LEAST, which solves the usual least-squares equations rather than Bayes equations but which encourages nondiagonal data weighting, is also described
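Under the stated linearity and normality assumptions, the update such a code performs has the familiar linear-Gaussian closed form; a sketch in matrix notation (our notation, not the report's):

```python
import numpy as np

def bayes_update(x, P, G, V, y):
    """Update parameter vector x (prior covariance P) given measurements
    y = G @ x + noise, with (possibly nondiagonal) noise covariance V.
    Returns the posterior mean and covariance; this is equivalent to
    generalized least squares combined with the prior information."""
    S = G @ P @ G.T + V               # innovation covariance
    K = P @ G.T @ np.linalg.inv(S)    # gain matrix
    x_post = x + K @ (y - G @ x)      # updated parameter values
    P_post = P - K @ G @ P            # updated uncertainties/correlations
    return x_post, P_post
```

In the scalar case with unit prior and measurement variances, a measurement of 2 pulls a prior mean of 0 halfway to 1 and halves the variance, which matches the usual precision-weighted average.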
Test results of a 40 kW Stirling engine and comparison with the NASA-Lewis computer code predictions
Allen, D.; Cairelli, J.
1985-12-01
A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The measured data tended to be lower than the computer code predictions. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix tested severely reduced performance.
Grid cells generate an analog error-correcting code for singularly precise neural computation.
Sreenivasan, Sameet; Fiete, Ila
2011-09-11
Entorhinal grid cells in mammals fire as a function of animal location, with spatially periodic response patterns. This nonlocal periodic representation of location, a local variable, is unlike other neural codes. There is no theoretical explanation for why such a code should exist. We examined how accurately the grid code with noisy neurons allows an ideal observer to estimate location and found this code to be a previously unknown type of population code with unprecedented robustness to noise. In particular, the representational accuracy attained by grid cells over the coding range was in a qualitatively different class from what is possible with observed sensory and motor population codes. We found that a simple neural network can effectively correct the grid code. To the best of our knowledge, these results are the first demonstration that the brain contains, and may exploit, powerful error-correcting codes for analog variables.
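The robustness claim rests on the modular structure of the code: several grid modules with different spatial periods jointly pin down position over a range far exceeding any single period. A noiseless toy decoder makes the idea concrete (our construction, far simpler than the paper's noisy-population analysis):

```python
from math import lcm  # Python 3.9+

def decode_position(phases, periods):
    """Recover an integer position from its residues (position mod
    period) across grid modules by brute-force search; any position
    below lcm(periods) is uniquely determined when the noiseless
    residues are mutually consistent."""
    for x in range(lcm(*periods)):
        if all(x % T == r for r, T in zip(phases, periods)):
            return x
    return None  # inconsistent residues (e.g. due to noise)
```

With periods 3 and 4, the residue pair (1, 3) decodes uniquely to position 7 out of a range of 12; with noise, most corrupted residue combinations become inconsistent, which is the redundancy an error-correcting readout can exploit.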
Lin, J. W.; Erickson, T. A.
2011-12-01
Historically, the application of high-performance computing (HPC) to the atmospheric sciences has focused on using increases in processor speed, storage, and parallelization to run longer simulations of larger and more complex models. Such a focus, however, has led to a user culture in which code robustness and reusability are ignored or discouraged. Additionally, such a culture works against nurturing and growing connections between high-performance computational earth sciences and scientific users outside of that community. Given the explosion in computational power available to researchers unconnected with the traditional HPC centers, as well as in the number of quality tools available for analysis and visualization, the programming insularity of the earth science modeling and analysis community acts as a formidable barrier to increasing the usefulness and robustness of computational earth science products. In this talk, we suggest that adopting best practices from the software engineering community, and in particular the open-source community, has the potential to improve the quality of code and increase the impact of earth sciences HPC. In particular, we discuss the impact of practices such as unit testing and code review, the need for and preconditions of code reusability, and the importance of APIs and open frameworks in enabling scientific discovery across sub-disciplines. We present examples of the cross-disciplinary fertilization possible with open APIs. Finally, we discuss ways funding agencies and the computational earth sciences community can help encourage the adoption of such best practices.
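To make the unit-testing practice advocated above concrete, here is a small hedged sketch: the function and its Magnus-type coefficients stand in for the kind of physical helper routine found in atmospheric analysis code, and the test case checks a reference value, a qualitative property, and error handling.

```python
import math
import unittest

def saturation_vapor_pressure(t_celsius):
    """Magnus-type approximation for saturation vapor pressure in hPa.

    A hypothetical helper of the kind found in atmospheric analysis code;
    the coefficients are commonly used Magnus values.
    """
    if t_celsius < -80.0 or t_celsius > 60.0:
        raise ValueError("temperature outside validity range")
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

class TestSVP(unittest.TestCase):
    def test_reference_value(self):
        # ~6.11 hPa at 0 degC is a standard sanity check
        self.assertAlmostEqual(saturation_vapor_pressure(0.0), 6.1094, places=3)

    def test_monotonic(self):
        # warmer air must have a higher saturation vapor pressure
        self.assertGreater(saturation_vapor_pressure(20.0),
                           saturation_vapor_pressure(10.0))

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            saturation_vapor_pressure(200.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSVP)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Tests of this style run unchanged on a laptop or an HPC login node, which is precisely the portability argument the talk makes.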
International Nuclear Information System (INIS)
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969, when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. These are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; and ICE.
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969, when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. These are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; and ICE.
Directory of Open Access Journals (Sweden)
Kumar Parijat Tripathi
Full Text Available RNA-seq is a new tool for measuring RNA transcript counts, using high-throughput sequencing with extraordinary accuracy. It provides a quantitative means to explore the transcriptome of an organism of interest. However, interpreting these extremely large data sets into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery). It offers a report on the statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction related information. It clusters the transcripts based on functional annotations and generates a tabular report of functional and gene ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps users to characterize the de novo assembled reads obtained from NGS experiments for non-referenced organisms, and it also performs functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. Web application is
DIST: a computer code system for calculation of distribution ratios of solutes in the Purex system
Energy Technology Data Exchange (ETDEWEB)
Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-05-01
Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data for U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DISTEX) and for Zr(IV) and Tc(VII) (DISTEXFP), and of calculation programs that compute distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DIST1), and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters of the empirical equations, using the DISTEX data that fulfill the assigned conditions, and then apply these equations to calculate distribution ratios of the respective solutes. Approximately 5,000 data points are stored in DISTEX and DISTEXFP. The present report describes the following items: 1) specific features of the DIST1 and DIST2 codes and examples of calculation; 2) the databases DISTEX and DISTEXFP and the program DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and delete functions; and, in the annex, 3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G; 4) a user manual for DISTIN; 5) source programs of DIST1 and DIST2; and 6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.
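The best-fit step described above can be sketched in miniature. This is not the DIST1/DIST2 code: the empirical law, parameter values and data below are hypothetical, but the closed-form least-squares fit of a log-linear distribution-ratio equation is the same kind of operation.

```python
import math

# Hypothetical measurements: distribution ratio D of a solute versus nitric
# acid molarity, assumed to follow log10(D) = a + b*log10([HNO3]).
acid = [0.5, 1.0, 2.0, 4.0]
dist = [10 ** (-0.30 + 1.5 * math.log10(c)) for c in acid]  # synthetic, noise-free

def fit_loglog(xs, ys):
    """Closed-form least squares for y' = a + b*x' in log10 space."""
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = my - b * mx
    return a, b

a, b = fit_loglog(acid, dist)
print(round(a, 3), round(b, 3))  # recovers the assumed parameters (-0.3, 1.5)
```

With real data the fit would be restricted, as in DIST, to the database entries that satisfy the assigned experimental conditions.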
International Nuclear Information System (INIS)
Modelling of fuel-rod behavior during reactor power cycling and ramping (including power-cooling mismatch experiments) with the computer code FRAPCON-2 is discussed. FRAPCON-2 computer calculations using different mechanical models (the Rigid Pellet, Deformable Pellet and Finite Element mechanical models) are compared with experimental results. The range of conditions over which FRAPCON-2 may be applied to PWR fuel rod behavior modelling during reactor power cycling and ramping is illustrated.
International Nuclear Information System (INIS)
In this manual we describe the use of the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG runs on a Cray-1 computer and accepts more extensive activation cross sections than ORIGEN2 from which it was adapted. This report is an updated and a combined version of the previous ORIGEN2 and FORIG manuals. 7 refs., 15 figs., 13 tabs
A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.
Bobka, Marilyn E.; Subramaniam, J.B.
The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…
Computational promoter analysis of mouse, rat and human antimicrobial peptide-coding genes
Directory of Open Access Journals (Sweden)
Kai Chikatoshi
2006-12-01
Full Text Available Abstract Background Mammalian antimicrobial peptides (AMPs) are effectors of the innate immune response. A multitude of signals coming from pathways of mammalian pathogen/pattern recognition receptors and other proteins affect the expression of AMP-coding genes (AMPcgs). For many AMPcgs the promoter elements and transcription factors that control their tissue cell-specific expression have yet to be fully identified and characterized. Results Based upon the RIKEN full-length cDNA and public sequence data derived from human, mouse and rat, we identified 178 candidate AMP transcripts derived from 61 genes belonging to 29 AMP families. However, only for 31 mouse genes belonging to 22 AMP families were we able to determine true orthologous relationships with 30 human and 15 rat sequences. We screened the promoter regions of AMPcgs in the three species for motifs by an ab initio motif-finding method and analyzed the derived promoter characteristics. Promoter models were developed for the alpha-defensin, penk and zap AMP families. The results suggest a core set of transcription factors (TFs) that regulate the transcription of AMPcg families in mouse, rat and human. The three most frequent core TF groups include liver-specific and nervous system-specific TFs and nuclear hormone receptors (NHRs). Out of 440 motifs analyzed, we found that three represent potentially novel TF-binding motifs enriched in promoters of AMPcgs, while another four motifs appear to be species-specific. Conclusion Our large-scale computational analysis of promoters of 22 families of AMPcgs across three mammalian species suggests that their key transcriptional regulators are likely to be TFs of the liver-specific, nervous system-specific and NHR groups. The computationally inferred promoter elements and potential TF-binding motifs provide a rich resource for targeted experimental validation of TF binding and signaling studies that aim at the regulation of mouse, rat or human AMPcgs.
International Nuclear Information System (INIS)
A computer code, ALMOD 3W3, has been developed to analyze transients in which reverse flow may occur in the primary loop of a nuclear power plant. The method used to calculate the fluid dynamics in the NRC system is presented. The locked-rotor accident in one coolant loop is analyzed. (author)
International Nuclear Information System (INIS)
This report describes a computer code for the Systematic Unification of Multiple Input Tables of data (SUMIT). This code is designed to be an integral part of the Computerized Radiological Risk Investigation System (CRRIS) for assessing the health impacts of airborne releases of radioactive pollutants. SUMIT reads radionuclide air concentrations and ground deposition rates for different release points and combines them over a specified master grid. The resulting SUMIT grid may be circular, rectangular, or consist of irregularly spaced points. SUMIT can apply a different scaling factor to all data from each source. This program is designed to sum data written by the CRRIS code ANEMOS; of course, SUMIT could read any data organized in the same manner as ANEMOS output. Descriptions of the necessary user input and data files are provided along with a complete listing of the SUMIT code. 10 references, 4 figures, 2 tables
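The combining step above amounts to a scaled accumulation of per-source fields onto one master grid. The sketch below is illustrative only, with hypothetical grids and scale factors, not the SUMIT code itself:

```python
# Hypothetical per-source air-concentration fields on a shared master grid,
# stored as {(ix, iy): concentration}; one scale factor per source.
source_a = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 0.5}
source_b = {(0, 0): 0.2, (1, 1): 4.0}

def combine(grids_with_scales):
    """Sum scaled source grids onto a single master grid."""
    master = {}
    for grid, scale in grids_with_scales:
        for cell, value in grid.items():
            master[cell] = master.get(cell, 0.0) + scale * value
    return master

total = combine([(source_a, 1.0), (source_b, 2.5)])
print(total[(0, 0)])  # 1.0 + 2.5*0.2 = 1.5
```

A dictionary keyed by grid cell handles circular, rectangular, or irregularly spaced master grids uniformly, since nothing in the accumulation depends on the grid geometry.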
Computation of an ideal gas nozzle flow with basically different codes
International Nuclear Information System (INIS)
A finite difference code and a code based on the method of characteristics, applied to the calculation of a stationary flow of an ideal gas through a convergent-divergent nozzle, are compared. The stationary profiles of the flow variables are obtained as asymptotic solutions of the transient calculation. An analytical solution serves as the basis for assessing the two codes: while the code based on characteristics agrees fairly well with the analytical solution, the finite difference code yields strongly smoothed, unrealistic profiles due to numerical damping. (orig.)
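The numerical damping blamed above can be reproduced with a textbook example. This is not the nozzle code: it is a generic sketch of first-order upwind differencing applied to 1D linear advection, whose truncation error acts like artificial diffusion and smears the profile even though the exact solution merely shifts it.

```python
import math

# 1D linear advection u_t + c u_x = 0 with first-order upwind differencing.
nx, c, dx, dt, steps = 100, 1.0, 1.0, 0.5, 40   # CFL = c*dt/dx = 0.5
u = [math.exp(-0.5 * ((i - 20) / 3.0) ** 2) for i in range(nx)]  # Gaussian pulse
peak0 = max(u)
for _ in range(steps):
    # u[i-1] wraps at i = 0 (Python negative index), acting as a periodic
    # boundary; the pulse stays far from the boundary during the run.
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(nx)]
print(max(u) < peak0)  # exact advection preserves the peak; upwind damps it
```

Each upwind step is a convex combination of neighboring values, so extrema are necessarily flattened; a characteristics-based scheme follows the wave paths directly and avoids this smearing.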
Elfer, N.; Meibaum, R.; Olsen, G.
1995-01-01
A unique collection of computer codes, Space Debris Surfaces (SD_SURF), have been developed to assist in the design and analysis of space debris protection systems. SD_SURF calculates and summarizes a vehicle's vulnerability to space debris as a function of impact velocity and obliquity. An SD_SURF analysis will show which velocities and obliquities are the most probable to cause a penetration. This determination can help the analyst select a shield design that is best suited to the predominant penetration mechanism. The analysis also suggests the most suitable parameters for development or verification testing. The SD_SURF programs offer the option of either FORTRAN programs or Microsoft-EXCEL spreadsheets and macros. The FORTRAN programs work with BUMPERII. The EXCEL spreadsheets and macros can be used independently or with selected output from the SD_SURF FORTRAN programs. Examples will be presented of the interaction between space vehicle geometry, the space debris environment, and the penetration and critical damage ballistic limit surfaces of the shield under consideration.
Stimulus Specificity of Brain-Computer Interfaces Based on Code Modulation Visual Evoked Potentials.
Directory of Open Access Journals (Sweden)
Qingguo Wei
Full Text Available A brain-computer interface (BCI) based on code modulated visual evoked potentials (c-VEP) is among the fastest BCIs that have ever been reported, but it has not yet been given a thorough study. In this study, a pseudorandom binary M-sequence and its time-lagged copies are utilized for modulation of the different stimuli, and template matching is adopted as the method for target recognition. Five experiments were devised to investigate the effect of stimulus specificity on target recognition, and we made an effort to find the optimal stimulus parameters for the size, color and proximity of the stimuli, the length of the modulation sequence and its lag between two adjacent stimuli. By changing the values of these parameters and measuring the classification accuracy of the c-VEP BCI, an optimal value of each parameter can be attained. Experimental results from ten subjects showed that a stimulus size of 3.8° of visual angle, white color, a spatial proximity of 4.8° of visual angle center to center, a modulation sequence of length 63 bits and a lag of 4 bits between adjacent stimuli yield individually superior performance. These findings provide a basis for determining stimulus presentation in a high-performance c-VEP based BCI system.
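The modulation-and-matching scheme described above can be sketched as follows. This is an idealized, noiseless illustration rather than the authors' pipeline: a 63-bit m-sequence is generated by a 6-stage LFSR (taps 6 and 5 are one commonly listed maximal-length choice), each stimulus gets a 4-bit circular lag, and the attended target is identified by correlation against the shifted templates.

```python
def m_sequence(taps=(6, 5), n_bits=6):
    """63-bit maximal-length sequence from a Fibonacci LFSR."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return seq

def circ_corr(a, b):
    """Correlation of the +/-1 versions of two binary sequences."""
    return sum((2 * x - 1) * (2 * y - 1) for x, y in zip(a, b))

base = m_sequence()
lags = [0, 4, 8, 12]                 # 4-bit lag between adjacent stimuli
templates = [base[l:] + base[:l] for l in lags]

observed = templates[2]              # noiseless stand-in for the evoked response
scores = [circ_corr(observed, t) for t in templates]
print(scores.index(max(scores)))     # identifies stimulus 2
```

The m-sequence's two-valued autocorrelation (63 at zero lag, -1 elsewhere) is what makes the lagged templates nearly orthogonal and the matching reliable.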
Computer code validation study of PWR core design system, CASMO-3/MASTER-{alpha}
Energy Technology Data Exchange (ETDEWEB)
Lee, K. H.; Kim, M. H. [Kyounghee Univ., Taejon (Korea, Republic of); Woo, S. W. [KINS, Taejon (Korea, Republic of)
1999-05-01
In this paper, the feasibility of the CASMO-3/MASTER-{alpha} nuclear design system was investigated for a commercial PWR core. Validation calculations were performed as follows. Firstly, the accuracy of cross-section generation from the table set using the linear feedback model was estimated. Secondly, the results of CASMO-3/MASTER-{alpha} were compared with those of CASMO-3/NESTLE 5.02 for a few benchmark problems. Microscopic cross sections computed from the table set were almost the same as those from CASMO-3, and there were only small differences between the calculated results of the two code systems. Thirdly, the CASMO-3/MASTER-{alpha} calculation was repeated for the Younggwang Unit-3, Cycle-1 core, and the results were compared with the nuclear design report (NDR) and with uncertainty analysis results from KAERI. The uncertainty analysis results were found to be reliable because the results agreed with each other. It was concluded that the use of the nuclear design system CASMO-3/MASTER-{alpha} is validated for commercial PWR cores.
BWR core stability prediction on-line with the computer code MATSTAB
International Nuclear Information System (INIS)
MATSTAB is a computer program for three-dimensional prediction of BWR core stability in the frequency domain. This tool has been developed, and is currently used, to perform core design and optimisation with regard to core stability. The requirement regarding the predicted decay ratio of the new core is one of the limiting factors, or key parameters, in core design. To be useful, the tool should be fast and simple to apply. The results must be delivered promptly and experts should not be required to interpret them. Alternatively, the area of application for MATSTAB can be described as on-line monitoring using predictive tools. Core stability properties can be calculated for a number of presumptive reactor states, planned or unplanned. A 3-D code operating in the frequency domain may be the best tool to use for the purposes just mentioned. Some strong advantages are that the results are given promptly, they require no post-processing and are directly amenable to graphic presentation of eigenvectors, etc. (authors)
International Nuclear Information System (INIS)
DIADEME is a computer code developed within the framework of R and D cooperation between the French Atomic Energy Commission (CEA), Electricite de France (EdF) and FRAMATOME-ANP. Its aim is to assess, in operation, defective fuel characteristics and primary circuit contamination for actinides and long half-life fission products involved in health physics problems as well as in waste and decommissioning studies. DIADEME has been developed and qualified for the EDF nuclear power plants. For many years, both theoretical and experimental studies have been carried out at the CEA on the release of fission products and actinides from defective fuel rods in operation, and on their migration and deposition in PWR primary circuits. These studies have allowed methods for diagnosing defect characteristics to be developed, based on radiochemical measurements of the primary coolant. These methods are generally used along with gamma spectrometry measurements on primary water samples. To be fully effective, they can also be used in connection with an on-line primary water gamma spectrometry device. This makes it possible to obtain the most comprehensive data on fission product activity evolution at steady state and during operating transients, and allows the on-line characterization of defective fuel assemblies. For long half-life fission products and for actinides, DIADEME is also able to assess the activities of soluble and insoluble forms in the primary water and in the chemical and volume control system (CVCS) filters and resins, as well as the activities deposited on primary circuit surfaces. (author)
A computational code for resolution of general compartment models applied to internal dosimetry
International Nuclear Information System (INIS)
The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. The ICRP publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and systemic distribution for an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in C#, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP publication 78. In all cases the system is in accordance with the values published by the ICRP. (author)
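A minimal sketch of the kind of compartment system such codes resolve, not the software described above: two compartments in series with hypothetical transfer coefficients, solved both semi-analytically (Bateman equations) and numerically (explicit Euler), mirroring the two solution families the abstract mentions.

```python
import math

# Compartment A transfers to B at rate k1; B eliminates at rate k2:
#   dA/dt = -k1*A ;  dB/dt = k1*A - k2*B   (Bateman equations)
k1, k2 = 0.3, 0.1          # 1/day transfer coefficients (illustrative values)
A0 = 1.0                   # initial activity in compartment A

def analytic(t):
    """Semi-analytical Bateman solution (valid for k1 != k2)."""
    a = A0 * math.exp(-k1 * t)
    b = A0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return a, b

def euler(t_end, dt=1e-4):
    """Numerical solution by explicit Euler integration."""
    a, b = A0, 0.0
    for _ in range(int(round(t_end / dt))):
        a, b = a + dt * (-k1 * a), b + dt * (k1 * a - k2 * b)
    return a, b

aa, ba = analytic(10.0)
an, bn = euler(10.0)
print(abs(aa - an) < 1e-3 and abs(ba - bn) < 1e-3)  # the two methods agree
```

Real biokinetic models chain many such compartments; the same comparison between a semi-analytical and a numerical solver is a standard validation step.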
A computational code for resolution of general compartment models applied to internal dosimetry
Energy Technology Data Exchange (ETDEWEB)
Claro, Thiago R.; Todo, Alberto S., E-mail: claro@usp.br, E-mail: astodo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. The ICRP publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and systemic distribution for an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in C#, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP publication 78. In all cases the system is in accordance with the values published by the ICRP. (author)
Energy Technology Data Exchange (ETDEWEB)
Stewart, S.D.; Sampson, R.J. Jr.; Stonemetz, R.E.; Rouse, S.L.
1980-07-01
A computer program, TAPFIL, has been developed by MSFC to read data from an IBM 360 tape for use on the PDP 11/70. The information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites is stored on the tapes. Two other programs, CHPLOT and WRTCNL, have been developed to plot and tabulate the data. These data will be used in the evaluation of collector efficiency and solar system performance. This report describes the methodology of the programs, their inputs, and their outputs.
Becoming-Ariel: Viewing Julie Taymor’s The Tempest through an Ecocritical Lens
Directory of Open Access Journals (Sweden)
Clare Sibley-Esposito
2012-10-01
Full Text Available The burgeoning field of ecocriticism takes what Cheryll Glotfelty has referred to as an “earth-centered approach” to cultural productions, with ecocritics sharing a concern with the interconnectedness of human and non-human spheres. This article presents a brief overview of some ecocritical readings of The Tempest, before interpreting Julie Taymor’s cinematographic adaptation in the light of such considerations. Taymor attributes a rather unproblematic power of manipulation of the natural world to her Prospera, yet some of her directorial choices, operating within the specificity of the film medium, tend nonetheless to highlight certain ecocritically-relevant dimensions of Shakespeare’s text. By viewing the film through an ecocritical lens, whilst borrowing terminology from Gilles Deleuze and Félix Guattari, the figure of Ariel can be seen as operating on a “molecular plane” of ontological interconnectivity, in contrast to the “molar mode” of binary oppositions inherent in Prospera’s desire to dominate natural forces.
V.S.O.P. (99/09) computer code system for reactor physics and fuel cycle simulation. Version 2009
Energy Technology Data Exchange (ETDEWEB)
Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Ohlig, U.; Pohl, C.; Scherer, W.
2010-07-15
V.S.O.P. (99/09) represents the further development of V.S.O.P. (99/05). Compared to its precursor, the code system has again been improved in many details. The main motivation for this new code version was to update the basic nuclear libraries used by the code system; thus, all cross-section libraries involved in the code are now based on ENDF/B-VII. V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It comprises the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady-state and time-dependent) is restricted to gas-cooled reactors and to two spatial dimensions. The code can simulate reactor operation from the initial core to the equilibrium core. This latest code version was developed and tested under the Windows XP operating system. (orig.)
International Nuclear Information System (INIS)
This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code and auxiliary codes, with instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the corresponding plotting procedures. Three appendices contain important user and programmer information: a list of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs
The DIPSI [Direct Implicit Plasma Surface Interactions] computer code user's manual
International Nuclear Information System (INIS)
DIPSI (Direct Implicit Plasma Surface Interactions) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the interaction of plasma with a solid surface, such as a limiter or divertor plate in a tokamak fusion device. Plasma confinement and transport may be studied in a system which includes an applied magnetic field (oriented normal to the solid surface) and/or a self-consistent electrostatic potential. The PIC code DIPSI is an offshoot of the PIC code TESS (Tandem Experiment Simulation Studies) which was developed to study plasma confinement in mirror devices. The codes DIPSI and TESS are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 11 refs., 2 tabs
Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.
2010-12-01
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
Energy Technology Data Exchange (ETDEWEB)
Ball, J.; Glowa, G.; Wren, J. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Ewig, F. [GRS Koln (Germany); Dickenson, S. [AEAT, (United Kingdom); Billarand, Y.; Cantrel, L. [IPSN (France); Rydl, A. [NRIR (Czech Republic); Royen, J. [OECD/NEA (France)
2001-11-01
This report describes the results of the second phase of International Standard Problem (ISP) 41, an iodine behaviour code comparison exercise. The first phase of the study, which was based on a simple Radioiodine Test Facility (RTF) experiment, demonstrated that all of the iodine behaviour codes had the capability to reproduce iodine behaviour for a narrow range of conditions (single temperature, no organic impurities, controlled pH steps). The current phase, a parametric study, was designed to evaluate the sensitivity of iodine behaviour codes to boundary conditions such as pH, dose rate, temperature and initial I{sup -} concentration. The codes used in this exercise were IODE (IPSN), IODE (NRIR), IMPAIR (GRS), INSPECT (AEAT), IMOD (AECL) and LIRIC (AECL). The parametric study described in this report identified several areas of discrepancy between the various codes. In general, the codes agree regarding qualitative trends, but their predictions regarding the actual amount of volatile iodine varied considerably. The largest source of the discrepancies between code predictions appears to be their different approaches to modelling the formation and destruction of organic iodides. A recommendation arising from this exercise is that an additional code comparison exercise be performed on organic iodide formation, against data obtained from intermediate-scale studies (two RTF (AECL, Canada) and two CAIMAN facility (IPSN, France) experiments have been chosen). This comparison will allow each of the code users to realistically evaluate and improve the organic iodide behaviour sub-models within their codes. (author)
Energy Technology Data Exchange (ETDEWEB)
Ikushima, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-05-01
A computer code system CASKET (CASK thermal and structural analyses and Evaluation code system) has been developed for the thermal and structural analyses that are indispensable to radioactive material transport and/or storage cask designs. CASKET is a simplified computer code system for performing parametric analyses and sensitivity evaluations when designing a cask and conducting its safety analysis. The main features of CASKET are as follows: (1) it can perform impact analysis of casks with shock absorbers; (2) it can perform impact analysis of casks with fins; (3) it can perform puncture analysis of casks; (4) it can perform rocking analysis of casks under seismic loads; (5) a material property data library is provided for impact analysis of casks; (6) a material property data library is provided for thermal analysis of casks; (7) a fin energy absorption data library is provided for impact analysis of casks with fins; and (8) the system runs not only on mainframe computers (OS MSP) but also on workstations (OS UNIX) and personal computers (OS Windows 3.1). In the paper, brief illustrations of the calculation methods are presented. Some calculation results are compared with experimental ones to confirm that the computer programs are useful for thermal and structural analyses. (author)
A restructuring of the FL package for the MIDAS computer code
International Nuclear Information System (INIS)
The developmental need for a localized severe accident analysis code is on the rise, and KAERI is developing the severe accident code MIDAS, based on MELCOR. The existing data saving method uses pointer variables for fixed-size storage management, which deteriorates the readability, maintainability and portability of the code. New FORTRAN90 features such as dynamic allocation have therefore been used for the restructuring. The restructuring of the data saving and transferring method of the existing code makes the code easier to understand. Before an entire restructuring of the code, a restructuring template for a simple package was developed and tested. The target for the restructuring was the FL package, which, together with the CVH package, is responsible for modeling the thermal-hydraulic behavior of liquid water, water vapor, and gases in MELCOR. The verification was done by comparing the results before and after the restructuring
Computer simulation of a klystron using a field-charge interaction code (FCI)
International Nuclear Information System (INIS)
The field-charge interaction code (FCI), based on a particle-in-cell simulation, has been used to analyze and develop high power klystrons at KEK as well as at industry labs since 1989. Several new high-power klystrons have been developed by using the FCI code. This lecture describes the code, provides examples of its application, and evaluates performance of the newly developed tubes. Operational details are given in the users manual. (author)
International Nuclear Information System (INIS)
The major aim of this work is a sensitivity analysis related to the influence of the different nuclear data libraries on the k-infinity values and on the void coefficient estimations performed for various CANDU fuel projects, and on the simulations related to the replacement of the original stainless steel adjuster rods by cobalt assemblies in the CANDU reactor core. The computations are performed using the Monte Carlo transport codes MCNP5 and MONTEBURNS 1.0 for the actual, detailed geometry and material composition of the fuel bundles and reactivity devices. Some comparisons with deterministic and probabilistic codes involving the WIMS library are also presented
Directory of Open Access Journals (Sweden)
Chia-Chang Hu
2005-04-01
A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, O(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is a function of O((JNS)^3). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.
Estimation of staff doses in complex radiological examinations using a Monte Carlo computer code
International Nuclear Information System (INIS)
The protection of medical personnel in interventional radiology is an important issue of radiological protection. The irradiation of the worker is largely non-uniform, and a large part of his body is shielded by a lead apron. The estimation of effective dose (E) under these conditions is difficult, and several approaches are used to estimate effective dose when such a protective apron is involved. This study presents a summary from an extensive series of simulations to determine the scatter-dose distribution around the patient and staff effective dose from personal dosimeter readings. The influence of different parameters (such as beam energy and size, patient size, irradiated region, worker position and orientation) on the staff doses has been determined. Published algorithms that combine readings of an unshielded and a shielded dosimeter to estimate effective dose have been applied, and a new algorithm that gives more accurate dose estimates for a wide range of situations is proposed. A computational approach was used to determine the dose distribution in the worker's body. The radiation transport and energy deposition were simulated using the MCNP4B code. The human bodies of the patient and radiologist were generated with the Body Builder anthropomorphic model-generating tool. The radiologist is protected with a lead apron (0.5 mm lead equivalent in the front and 0.25 mm lead equivalent in the back and sides) and a thyroid collar (0.35 mm lead equivalent). The lower arms of the worker were folded to simulate the arm position during clinical examinations. This realistic situation of the folded arms affects the effective dose to the worker. Depending on the worker position and orientation (and, of course, the beam energy), the difference can reach 25 percent. A total of 12 Hp(10) dosimeters were positioned above and under the lead apron at the neck, chest and waist levels. Extra dosimeters for the skin dose were positioned at the forehead, the forearms and the front surface of
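The double-dosimetry idea described above can be sketched as a linear combination of the two Hp(10) readings. The weights below are illustrative placeholders, not the coefficients proposed in the study:

```python
# Hedged sketch of a generic two-dosimeter algorithm: effective dose E is
# estimated from the reading under the apron (shielded) and the reading over
# it (unshielded).  ALPHA_UNDER and BETA_OVER are made-up illustrative weights.
ALPHA_UNDER = 1.0   # weight for the shielded (under-apron) dosimeter
BETA_OVER = 0.07    # weight for the unshielded (over-apron) dosimeter

def effective_dose_estimate(h_under_msv: float, h_over_msv: float) -> float:
    """Estimate effective dose (mSv) from under- and over-apron Hp(10) readings."""
    return ALPHA_UNDER * h_under_msv + BETA_OVER * h_over_msv

# Example: 0.2 mSv under the apron, 5.0 mSv over it.
print(effective_dose_estimate(0.2, 5.0))  # approximately 0.55 mSv
```

Published algorithms differ mainly in these two weights and in which dosimeter positions feed them.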
A Restructuring of the CAV and FDI Package for the MIDAS Computer Code
International Nuclear Information System (INIS)
As part of the development of a localized severe accident analysis code, KAERI is developing the severe accident code MIDAS, based on MELCOR. The existing data saving method of MELCOR uses pointer variables for fixed-size storage management, which deteriorates the readability, maintainability and portability of the code. A more convenient method for data handling is needed, so new FORTRAN90 features such as dynamic allocation have been used for the restructuring. The restructuring of the data saving and transferring method of the existing code makes the code easier to understand. Before an entire restructuring of the code, a restructuring template for a simple package was developed and tested. The targets for the restructuring in this paper were the CAV and FDI packages. The CAV (cavity) package is responsible for modeling the attack on the basemat concrete by hot core materials. The FDI (Fuel Dispersal Interactions) package is responsible for modeling both low- and high-pressure molten fuel ejection from the RPV into the reactor cavity, control volumes and surfaces. The verification was done by comparing the results before and after the restructuring
On the performance of a 2D unstructured computational rheology code on a GPU
Pereira, S.P.; Vuik, K.; Pinho, F.T.; Nobrega, J.M.
2013-01-01
The present work explores the massively parallel capabilities of the most advanced architecture of graphics processing units (GPUs) code named “Fermi”, on a two-dimensional unstructured cell-centred finite volume code. We use the SIMPLE algorithm to solve the continuity and momentum equations that w
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T A
2016-01-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This, in particular, obviates the use of coefficients of fractional parentage. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}, where q_M are the model's quadrupole moments, and pi_N are the corresponding conjugate momenta (-2 <= M, N <= 2). The code also provides ready access to SO(3)-reduced SO(5) Clebsch-Gordan coefficients through data files provided with the code.
International Nuclear Information System (INIS)
A three year undergraduate program (B.Sc.) in Medical Radiation Physics was established in the Ariel University Center of Samaria. The program was submitted to the Council of Higher Education (MALAG) in 2003 and was finally approved by the Council on October 2005. Registration for the first class was announced in January 2006. Studies started on October 2006. Of 24 candidates who applied, 16 were admitted. 12 of the 16 students completed their study duties in the first year. All of them started their second year studies in October 2007
2008-01-01
On 26 Oct., concerts by folk musicians from Tartu and Tartu County take place in the hall of the Tartu German Culture Institute. On 26-27 Oct., the 5th festival of Jewish culture "Ariel" takes place in Tallinn, with clarinetist David Krakauer and his ensemble Klezmer Madness! from the USA as the main performers. At the Corelli Music salon evenings, within the festival "Fiesta de la Guitarra", a trio of Stewart McCoy (lute), Robert Staak (lute) and Maria Staak (singer) performs: concerts on 5 Nov. in the series "Maardu mõisa muusikasalong", on 8 Nov. at a Pärnu hotel in the series "Café Grandi muusikasalong", and on 9 Nov. at Mooste Manor in the series "Mõisaromantika"
E.l.f./v.l.f. emissions observed on Ariel 4. [wave-particle phenomena in magnetosphere
Bullough, K.; Denby, M.; Gibbons, W.; Hughes, A. R. W.; Kaiser, T. R.; Tatnall, A. R. L.
1975-01-01
The Ariel 4 satellite was designed to study wave-particle phenomena in the magnetosphere by measuring the electromagnetic wave fields over a wide frequency range and the fluxes and pitch angle distributions of energetic particles. We describe here the results of a preliminary study of the various v.l.f./e.l.f. electromagnetic wave phenomena which are observed. These include man-made signals from v.l.f. transmitters, impulsive noise originating in thunderstorms and emissions arising from magnetospheric energetic charged particles.
International Nuclear Information System (INIS)
COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of these methods
International Nuclear Information System (INIS)
After a description of the context of radiological accidents (definition, history, context, exposure types, associated clinical symptoms of irradiation and contamination, medical treatment, return on experience) and a presentation of dose assessment in the case of external exposure (clinical, biological and physical dosimetry), this research thesis describes the principles of numerical reconstruction of a radiological accident, presents some computation codes (Monte Carlo code, MCNPX code) and the SESAME tool, and reports an application to an actual case (an accident which occurred in Ecuador in April 2009). The next part reports the developments performed to modify the posture of voxelized phantoms and the experimental and numerical validations. The last part reports a feasibility study for the reconstruction of radiological accidents occurring in external radiotherapy. This work is based on a Monte Carlo simulation of a linear accelerator, with the aim of identifying the most relevant parameters to be implemented in SESAME in the case of external radiotherapy
International Nuclear Information System (INIS)
This report describes the calculation procedure of the TRANCS code, which deals with fission product transport in the fuel rods of a high temperature gas-cooled reactor (HTGR). The fundamental equation modeled in the code is a cylindrical one-dimensional diffusion equation with generation and decay terms, and the non-stationary solution of the equation is obtained numerically by a finite difference method. The generation terms consist of the diffusional release from coated fuel particles, recoil release from the outermost coating layer of the fuel particle, and generation due to contaminating uranium in the graphite matrix of the fuel compact. The decay term deals with neutron capture as well as beta decay. Factors affecting the computation error have been examined, and further extension of the code is discussed in the fields of radial transport of fission products from the graphite sleeve into the coolant helium gas and axial transport in the fuel rod. (author)
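An equation of the kind the abstract describes can be sketched with an explicit finite-difference scheme. This is an illustrative toy model with arbitrary nondimensional parameters, not the TRANCS implementation:

```python
import numpy as np

# Toy explicit finite-difference solution of a 1-D cylindrical diffusion
# equation with a uniform generation term S and a decay term lam*C:
#     dC/dt = D * (1/r) d/dr (r dC/dr) + S - lam*C,
# with symmetry at r = 0 and C = 0 at the outer surface r = R.
# All parameters are arbitrary nondimensional placeholders (not TRANCS data).
D, S, lam = 1.0, 1.0, 0.1        # diffusivity, generation rate, decay constant
R, N = 1.0, 20                   # outer radius, number of radial intervals
dr = R / N
dt = 0.2 * dr**2 / D             # within the explicit stability limit
r = np.linspace(0.0, R, N + 1)
C = np.zeros(N + 1)              # start from zero concentration

for _ in range(200):
    lap = np.zeros(N + 1)
    lap[0] = 4.0 * (C[1] - C[0]) / dr**2              # axis (r = 0) limit
    for i in range(1, N):
        rp, rm = r[i] + dr / 2.0, r[i] - dr / 2.0     # half-node radii
        lap[i] = (rp * (C[i + 1] - C[i])
                  - rm * (C[i] - C[i - 1])) / (r[i] * dr**2)
    C = C + dt * (D * lap + S - lam * C)
    C[N] = 0.0                                        # fixed surface condition

print(C[0])  # centerline concentration after 200 steps
```

The real code adds spatially varying sources (recoil, contamination) and an effective decay constant including neutron capture.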
International Nuclear Information System (INIS)
An arrangement of programs is described providing interactive computer graphics assistance to improve the operation of an existing fitting program. The particular example is the well-known 'Atta-Harvey' code used in resonance area analysis of neutron transmission data. This report is intended both as a manual for running the area analysis programs and as a practical example of program improvement by interactive programming methods. (U.K.)
A neutron spectrum unfolding computer code based on artificial neural networks
International Nuclear Information System (INIS)
The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to irradiate the spheres sequentially, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural network technology is presented. This code, called the Neutron Spectrometry and Dosimetry with Artificial Neural networks unfolding code, was designed with a graphical interface. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code is easy to use, and friendly and intuitive to the user. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A notable feature of the code is that, as input data for unfolding the neutron spectrum, only seven count rates measured with seven Bonner spheres are required; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. The code generates a full report with all information of the unfolding in
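The input/output structure described above (seven sphere count rates in, a 60-bin spectrum out) can be sketched as a small feed-forward network. The hidden-layer size and the random, untrained weights below are placeholders, not the pre-optimized network the code embeds:

```python
import numpy as np

# Structural sketch only: a feed-forward net mapping seven Bonner-sphere
# count rates to a 60-bin neutron spectrum.  Weights are random and untrained;
# the real code ships a network trained and optimized beforehand.
rng = np.random.default_rng(0)

class UnfoldingNet:
    def __init__(self, n_in=7, n_hidden=16, n_out=60):
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # input -> hidden
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))  # hidden -> output
        self.b2 = np.zeros(n_out)

    def unfold(self, counts):
        h = np.tanh(self.W1 @ counts + self.b1)            # hidden activations
        return np.maximum(self.W2 @ h + self.b2, 0.0)      # non-negative bins

net = UnfoldingNet()
spectrum = net.unfold(np.ones(7))   # seven count rates in
print(spectrum.shape)               # sixty spectrum bins out
```

Dosimetric quantities would then follow by folding the unfolded spectrum with fluence-to-dose conversion coefficients.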
BCG: a computer code for calculating neutron spectra and criticality in cells of fast reactors
International Nuclear Information System (INIS)
The BCG code for determining the space and energy neutron flux distribution and criticality of fast reactor cylindrical cells is discussed. The code solves the one-dimensional neutron transport equation together with interface current relations at each energy point in a unionized energy grid prepared for the cell and at an arbitrary number of spatial zones. While the spatial resolution is user specified, the energy dependence of the flux distribution is resolved according to the degree of variation in the reconstructed total microscopic cross sections of the atomic species in the cell. Results for a simplified fuel cell illustrate the high resolution and accuracy that can be obtained with the code. (author)
Energy Technology Data Exchange (ETDEWEB)
Srinath Vadlamani; Scott Kruger; Travis Austin
2008-06-19
Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc of the DOE SciDAC TOPS project, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully implemented the multigrid solvers on a fusion test problem that allows for real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
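The multigrid idea the report applies (through HYPRE preconditioners) can be illustrated in the simplest possible setting: a two-grid cycle for the 1-D Poisson equation. This sketch is a generic textbook illustration, unrelated to the NIMROD implementation itself:

```python
import numpy as np

# Two-grid cycle for -u'' = f on [0, 1] with homogeneous Dirichlet boundaries:
# smooth with damped Jacobi, restrict the residual to a coarse grid, solve the
# coarse problem exactly, prolong the correction, and smooth again.

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps=3, omega=2/3):
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2 * u[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                       # pre-smooth
    r = residual(u, f, h)
    rc = r[::2].copy()                        # restrict (injection) to coarse grid
    n_c = rc.size
    # Solve the coarse problem (spacing 2h) directly: tridiagonal matrix.
    A = (np.diag(np.full(n_c - 2, 2.0)) - np.diag(np.ones(n_c - 3), 1)
         - np.diag(np.ones(n_c - 3), -1)) / (2 * h)**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    u += e                                    # coarse-grid correction
    return jacobi(u, f, h)                    # post-smooth

n = 65                                        # fine-grid points (2^6 + 1)
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.abs(residual(u, f, h)).max())        # residual shrinks each cycle
```

Production multigrid (as in HYPRE) recurses this idea over many levels and uses stronger transfer operators, but the smooth/restrict/correct/prolong structure is the same.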
International Nuclear Information System (INIS)
SPLASH-ALE is a three-dimensional finite-element fluid-dynamics computer code employing the Arbitrary Lagrangian Eulerian (ALE) method. Because the ALE method enables computational nodes to move with arbitrary velocity, the code can deal with moving boundaries such as a free surface. In the reactor vessel of a liquid metal cooled Fast Breeder Reactor (FBR), the upward coolant flow impinges on the free surface and makes the surface swell. In an experiment with a simplified flow geometry of this situation, an upward plane jet impinging on a free surface oscillates without any external force. In this study, the oscillation was numerically analyzed with the SPLASH-ALE code. The numerical results corresponded well with the experimental ones, e.g., the streakline of the oscillating jet. It was confirmed that the SPLASH-ALE code can analyze unstable phenomena caused by the interaction between flow and a free surface, and that the numerical results can help clarify the mechanism of the oscillation. (author)
International Nuclear Information System (INIS)
The computer program TRANCS has been developed for evaluating the fractional release of long-lived fission products from coated fuel particles. This code numerically gives the non-stationary solution of the diffusion equation with birth and decay terms. The birth term deals with the fissile material in the fuel kernel, the contamination in the coating layers and the fission-recoil transfer from the kernel into the buffer layer; the decay term deals with effective decay due not only to beta decay but also to neutron capture, if appropriate input data are given. The code calculates the concentration profile, the release-to-birth rate ratios (R/B), and the release and residual fractions in the coated fuel particle. Results obtained numerically have been in good agreement with the corresponding analytical solutions based on the Booth model. Thus, the validity of the present code was confirmed, and a further update of the code has been discussed to extend its computation scopes and models. (author)
International Nuclear Information System (INIS)
The computer code ORION has been developed to evaluate the environmental concentrations and the dose equivalent to human organs or tissue from air-borne radionuclides released from multiple nuclear installations. The modified Gaussian plume model is applied to calculate the dispersion of the radionuclide. Gravitational settling, dry deposition, precipitation scavenging and radioactive decay are considered to be the causes of depletion and deposition on the ground or on vegetation. ORION is written in the FORTRAN IV language and can be run on IBM 360, 370, 303X, 43XX and FACOM M-series computers. 8 references, 6 tables
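The textbook Gaussian plume expression that models like ORION's build on gives the ground-level concentration from an elevated point source. The sketch below omits the depletion and deposition mechanisms listed above, and all numerical values are illustrative assumptions:

```python
import numpy as np

# Ground-level (z = 0) concentration from an elevated continuous point source,
# using the standard Gaussian plume formula with ground reflection.  sigma_y
# and sigma_z would normally come from stability-class correlations at the
# downwind distance of interest; here they, like all other numbers, are
# assumed values.
def ground_concentration(Q, u, y, H, sigma_y, sigma_z):
    """chi(y) in Bq/m^3 for release rate Q (Bq/s), wind speed u (m/s),
    crosswind offset y (m), effective stack height H (m)."""
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Plume centerline with assumed dispersion parameters.
chi = ground_concentration(Q=1.0e9, u=3.0, y=0.0, H=50.0,
                           sigma_y=80.0, sigma_z=40.0)
print(chi)  # Bq/m^3
```

A "modified" plume model such as ORION's multiplies this by depletion factors for settling, dry deposition, washout and radioactive decay along the travel distance.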
Herbert, H. E.; Lamar, J. E.
1982-01-01
The source code for the latest production version, MARK IV, of the NASA-Langley Vortex Lattice Computer Program is presented. All viable subcritical aerodynamic features of previous versions were retained. This version extends the previously documented program capabilities to four planforms, 400 panels, and enables the user to obtain vortex-flow aerodynamics on cambered planforms, flowfield properties off the configuration in attached flow, and planform longitudinal load distributions.
International Nuclear Information System (INIS)
The BODYFIT-2PE (Boundary-Fitted Coordinate, 2-Phase Flow with Partially Elliptic) computer code has been developed and was employed to simulate the critical heat flux experiment in a General Electric 3 x 3 rod bundle by using a CHF correlation recently developed at Columbia University under EPRI sponsorship. CHF predictions are important in analyzing rod bundle performance in nuclear reactor operation. The results of the BODYFIT calculations compared favorably with the experimental measurements
MAT-FLX: a simplified code for computing material balances in fuel cycle
International Nuclear Information System (INIS)
This work illustrates a calculation code designed to provide a materials balance for the electro nuclear fuel cycle. The calculation method is simplified but relatively precise and employs a progressive tabulated data approach
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
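The core SPH operation such a code performs for every particle is a kernel-weighted density summation. A brute-force CPU sketch (the real code uses an octree neighbour search on the GPU) might look like:

```python
import numpy as np

# Illustrative SPH density estimate with the standard 3-D cubic-spline kernel
# (support radius 2h).  Brute-force O(N^2) neighbour search; particle counts
# and the smoothing length h are arbitrary demonstration values.
def w_cubic(r, h):
    """3-D cubic-spline kernel, normalized so its volume integral is 1."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def density(positions, masses, h):
    """rho_i = sum_j m_j W(|r_i - r_j|, h), including the self term."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            rho[i] += masses[j] * w_cubic(
                np.linalg.norm(positions[i] - positions[j]), h)
    return rho

pos = np.random.default_rng(1).uniform(0.0, 1.0, (50, 3))
rho = density(pos, np.full(50, 1.0 / 50), h=0.3)
print(rho.mean())  # mean density of unit mass spread over a unit box
```

An octree reduces the inner loop to the particles within 2h of particle i, which is what makes GPU-scale particle counts tractable.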
Implementation of the TRAC-PF1 computer code on the VAX-11/750
International Nuclear Information System (INIS)
The implementation of the TRAC-PF1 code, IBM version, on the VAX-11/750 is described. This work provides the Reactor Department with an advanced best-estimate tool for performing loss-of-coolant accident analyses. (Author)
Sato, Jun-Ichi; Washizawa, Yoshikazu
2015-08-01
We propose two methods to improve code modulation visual evoked potential brain computer interfaces (cVEP BCIs). Most BCIs average brain signals from several trials in order to improve the classification performance. The number of averaged trials defines the trade-off between input speed and accuracy, and the optimal number depends on the individual, the signal acquisition system, and so forth. First, we propose a novel dynamic method to estimate the averaging number for cVEP BCIs. The proposed method is based on the automatic repeat request (ARQ) scheme used in communication systems. Existing cVEP BCIs employ rather long codes, such as the 63-bit M-sequence. The code length also defines the trade-off between input speed and accuracy. Since the reliability of the proposed BCI can be controlled by the proposed ARQ method, we introduce shorter codes: a 32-bit M-sequence and the Kasami-sequence. By combining the dynamic averaging number estimation method with the shorter codes, the proposed system exhibited a higher information transfer rate than existing cVEP BCIs.
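The two ingredients, an m-sequence stimulation code and ARQ-style early stopping of the trial average, can be sketched as follows. The LFSR taps, threshold and synthetic signals are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

# Sketch: m-sequence generation (Fibonacci LFSR, degree-6 primitive
# polynomial, period 63) plus an ARQ-style loop that keeps averaging trials
# until a confidence score passes a threshold.
def m_sequence(taps=(6, 1), n=6, length=63):
    state = [1] * n                   # any nonzero seed works
    seq = []
    for _ in range(length):
        seq.append(state[-1])         # output bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]        # feedback from the tap positions
        state = [fb] + state[:-1]     # shift the register
    return np.array(seq)

def arq_average(trials, template, threshold=0.9):
    """Running mean over trials; stop early once the correlation with the
    stimulation template exceeds the threshold (the ARQ-style acceptance)."""
    avg = np.zeros_like(template)
    for k, trial in enumerate(trials, start=1):
        avg += (trial - avg) / k      # incremental average
        rho = np.corrcoef(avg, template)[0, 1]
        if rho >= threshold:
            return k, rho             # accept: enough trials averaged
    return len(trials), rho           # give up after the last trial

code = m_sequence()                   # 63-bit m-sequence (32 ones, 31 zeros)
template = 2.0 * code - 1.0           # bipolar stimulation template
rng = np.random.default_rng(7)
trials = [template + rng.normal(0.0, 1.0, code.size) for _ in range(8)]
k_used, rho = arq_average(trials, template)
print(k_used, rho)                    # trials consumed, final correlation
```

The dynamic averaging number is exactly `k_used`: noisy sessions consume more trials, clean sessions stop early, which is how the speed/accuracy trade-off becomes adaptive.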
Schmidt, James F.
1995-01-01
An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed, and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. The code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to the off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also included is a comparison of the off-design code predictions with experimental data, which generally shows good agreement.
User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code
International Nuclear Information System (INIS)
This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronics calculations (fissile medium, criticality or sub-criticality basis). This makes it possible to calculate keff (for criticality), flux, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows for a point-wise description of cross-sections in energy as well as multi-group homogenized cross-sections, and features two modes of geometrical representation: surface and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, cross-sections in APOTRIM format (from the APOLLO2 code), or a format specific to TRIPOLI-4 for the multi-group description. (authors)
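The Monte Carlo principle behind such a transport code can be illustrated with a deliberately minimal example: estimating slab transmission under pure absorption, where the analytic answer exp(-Σ_t t) is known. This toy is in no way representative of TRIPOLI-4's physics:

```python
import numpy as np

# Estimate the fraction of particles penetrating a slab of thickness t when
# the total macroscopic cross-section is sigma_t and every collision absorbs
# the particle.  Free paths are sampled from the exponential distribution.
rng = np.random.default_rng(42)

def transmitted_fraction(sigma_t, thickness, n_particles=200_000):
    paths = rng.exponential(1.0 / sigma_t, n_particles)  # sampled free paths
    return np.mean(paths > thickness)                    # survivors reach the far side

est = transmitted_fraction(sigma_t=1.0, thickness=2.0)
print(est, np.exp(-2.0))  # the estimate converges to exp(-sigma_t * t)
```

A production code replaces the single absorbing collision with sampled scattering, fission and secondary-particle physics over a full 3-D geometry, plus variance-reduction techniques.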
Tracking studies in PEP and description of the computer code PATRICIA
International Nuclear Information System (INIS)
The code PATRICIA (particle tracking in circular accelerators) is designed mainly for tracking particles in an electron storage ring. A modification of this program is part of the system of codes PAQUASEX, which is designed for configuration surveys over a grid of points in the space of the main configuration parameters ν_x, ν_y, ν′_x, ν′_y, and η_x
International Nuclear Information System (INIS)
The activities of the Radiation Shielding Information Center (RSIC) of the Oak Ridge National Laboratory are being utilized in support of fusion reactor technology. The major activities of RSIC include the operation of a computer-based information storage and retrieval system, the collection, packaging, and distribution of large computer codes, and the compilation and dissemination of processed and evaluated data libraries, with particular emphasis on neutron and gamma-ray cross-section data. The Center has acquired thirteen years of experience in serving fission reactor, weapons, and accelerator shielding research communities, and the extension of its technical base to fusion reactor research represents a logical progression. RSIC is currently working with fusion reactor researchers and contractors in computer code development to provide tested radiation transport and shielding codes and data library packages. Of significant interest to the CTR community are the 100 energy group neutron and 21 energy group gamma-ray coupled cross-section data package (DLC-37) for neutronics studies, a comprehensive 171 energy group neutron and 36 energy group gamma-ray coupled cross-section data base with retrieval programs, including resonance self-shielding, that are tailored to CTR application, and a data base for the generation of energy-dependent atomic displacement and gas production cross sections and heavy-particle-recoil spectra for estimating radiation damage to CTR structural components
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp edged gust. This result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a users manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA simulated results of a sequence of one-minus-cosine gusts is shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_{LM}. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
Energy Technology Data Exchange (ETDEWEB)
Watson, S.B.; Ford, M.R.
1980-02-01
A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure of radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules of which: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations in computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit of intake, and derived air concentration.
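As an illustration of how the third module's quantities relate, here is a minimal sketch of the annual limit on intake (ALI) and derived air concentration (DAC) conventions of that era (0.05 Sv stochastic annual limit; 2000 h working year at a 1.2 m³/h breathing rate); the dose coefficient used is illustrative, not a tabulated ICRP value:

```python
def annual_limit_on_intake(e50_sv_per_bq, stochastic_limit_sv=0.05):
    """ALI (Bq): the intake whose committed effective dose equals the
    annual stochastic dose limit."""
    return stochastic_limit_sv / e50_sv_per_bq

def derived_air_concentration(ali_bq, hours=2000.0, breathing_m3_per_h=1.2):
    """DAC (Bq/m^3): the ALI divided by the volume of air breathed in a
    2000-hour working year (2400 m^3)."""
    return ali_bq / (hours * breathing_m3_per_h)

e50 = 2.5e-8          # Sv/Bq -- illustrative, not a tabulated coefficient
ali = annual_limit_on_intake(e50)
dac = derived_air_concentration(ali)
```

The committed dose per unit intake itself is what the first two modules (specific effective energy and cumulated activity) exist to supply.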
International Nuclear Information System (INIS)
Computer codes are widely used in Member States for the analysis of safety at nuclear power plants (NPPs). Coupling of computer codes provides a further tool that is especially beneficial to safety analysis. The significantly increased capacity of new computation technology has made it possible to switch to a newer generation of computer codes, which are capable of representing physical phenomena in detail and include a more precise consideration of multidimensional effects. The coupling of advanced, best estimate computer codes is an efficient method of addressing the multidisciplinary nature of reactor accidents with complex interfaces between disciplines. Coupling of computer codes is very advantageous for studies which relate to licensing of new NPPs, safety upgrading programmes for existing plants, periodic safety reviews, renewal of operating licences, use of safety margins for reactor power uprating, better utilization of nuclear fuel and higher operational flexibility, justification for lifetime extensions, development of new emergency operating procedures, analysis of operational events and development of accident management programmes. In this connection, the OECD/NEA Working Group on the Analysis and Management of Accidents (GAMA) recently highlighted the application of coupled computer codes as an area of 'high collective interest'. Coupled computer codes are being developed in many Member States independently or within small groups composed of several technical organizations. These developments revealed that there are many types and methods of code coupling. In this context, it was believed that an exchange of views and experience while addressing these problems at an international meeting could contribute to the more efficient and reliable use of advanced computer codes in nuclear safety applications. The present publication constitutes the report on the Technical Meeting on Progress in the Development and Use of Coupled Codes for Accident
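The explicit (loose) coupling strategy typical of such coupled-code systems can be sketched with two toy single-physics surrogates exchanging data once per time step; all models and coefficients below are invented for illustration, not taken from any real code:

```python
def neutronics_power(t_fuel, p0=1.0, alpha=-2.0e-3, t_ref=600.0):
    """Toy neutronics surrogate: relative power with linear
    fuel-temperature reactivity feedback."""
    return p0 * (1.0 + alpha * (t_fuel - t_ref))

def thermal_hydraulics(t_fuel, power, dt=0.1, tau=5.0, gain=300.0):
    """Toy lumped thermal-hydraulics surrogate: fuel heat balance
    between heating by power and cooling toward a 300 K sink."""
    return t_fuel + dt * (gain * power - (t_fuel - 300.0)) / tau

# Explicit (loose) coupling: each "code" advances in turn, exchanging
# power and fuel temperature once per time step.
t_fuel = 650.0
history = []
for _ in range(200):
    power = neutronics_power(t_fuel)
    t_fuel = thermal_hydraulics(t_fuel, power)
    history.append((power, t_fuel))
```

Real coupled systems differ mainly in the exchange mechanism (shared memory, message passing, or iterated implicit coupling within a step), but the data-exchange loop has this shape.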
Houston, Johnny L.
1989-01-01
Program EAGLE (Eglin Arbitrary Geometry Implicit Euler) Numerical Grid Generation System is a composite (multi-block) algebraic or elliptic grid generation system designed to discretize the domain in and/or around any arbitrarily shaped three-dimensional region. This system combines a boundary-conforming surface generation scheme and includes plotting routines designed to take full advantage of the DISSPLA Graphics Package (Version 9.0). Program EAGLE is written to compile and execute efficiently on any Cray machine with or without solid state disk (SSD) devices. Also, the code uses namelist inputs, which are supported by all Cray machines using the FORTRAN compiler CFT77. The namelist inputs make it easier for the user to understand the inputs and operation of Program EAGLE. EAGLE's numerical grid generator is constructed in the following form: main program, EGG (executive routine); subroutine SURFAC (surface generation routine); subroutine GRID (grid generation routine); and subroutine GRDPLOT (grid plotting routines). The EAGLE code was modified for use on the NASA-LaRC SNS computer (Cray 2S) system. During the modification, a conversion program was developed for the output data of EAGLE's subroutine GRID to permit the data to be graphically displayed by IRIS workstations using Plot3D. The code of Program EAGLE was modified to make subroutine GRDPLOT (using DI-3000 Graphics Software Packages) operational on the NASA-LaRC SNS Computer System, and it was determined how to graphically display the output data of subroutine GRID on any NASA-LaRC graphics terminal with access to the SNS Computer System DI-3000 Graphics Software Packages. A Quick Reference User Guide was developed for the use of Program EAGLE on the NASA-LaRC SNS Computer System, and application programs were demonstrated using Program EAGLE on that system, with emphasis on graphics illustrations.
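The algebraic branch of such a multi-block grid generator is typically built on transfinite interpolation from boundary curves; a minimal two-dimensional sketch (not EAGLE's actual implementation):

```python
import numpy as np

def transfinite_interpolation(bottom, top, left, right):
    """Algebraic (transfinite) interpolation of interior grid points
    from four discrete boundary curves, given as (n,2) and (m,2)
    arrays of (x, y) points whose corners match."""
    ni, nj = bottom.shape[0], left.shape[0]
    xi = np.linspace(0.0, 1.0, ni)
    eta = np.linspace(0.0, 1.0, nj)
    grid = np.zeros((ni, nj, 2))
    for i in range(ni):
        for j in range(nj):
            s, t = xi[i], eta[j]
            # Boolean sum of the two 1-D interpolants minus the
            # doubly counted corner contributions.
            grid[i, j] = ((1 - t) * bottom[i] + t * top[i]
                          + (1 - s) * left[j] + s * right[j]
                          - (1 - s) * (1 - t) * bottom[0]
                          - s * (1 - t) * bottom[-1]
                          - (1 - s) * t * top[0]
                          - s * t * top[-1])
    return grid

# Sanity check on the unit square, where the result is a uniform grid.
n = 5
e = np.linspace(0.0, 1.0, n)
bottom = np.stack([e, np.zeros(n)], axis=1)
top = np.stack([e, np.ones(n)], axis=1)
left = np.stack([np.zeros(n), e], axis=1)
right = np.stack([np.ones(n), e], axis=1)
g = transfinite_interpolation(bottom, top, left, right)
```

An elliptic generator would then take such an algebraic grid as the initial guess and smooth it by solving Poisson-type equations for the interior points.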
Computer codes for particle accelerator design and analysis: A compendium. Second edition
International Nuclear Information System (INIS)
The design of the next generation of high-energy accelerators will probably be done as an international collaborative effort, and it would make sense to establish, either formally or informally, an international center for accelerator codes with branches for maintenance, distribution, and consultation at strategically located accelerator centers around the world. This arrangement could have at least three beneficial effects. It would cut down duplication of effort, provide long-term support for the best codes, and provide a stimulating atmosphere for the evolution of new codes. It does not take much foresight to see that the natural evolution of accelerator design codes is toward the development of so-called Expert Systems, systems capable of taking design specifications of future accelerators and producing specifications for optimized magnetic transport and acceleration components, making a layout, and giving a fairly impartial cost estimate. Such an expert program would use present-day programs such as TRANSPORT, POISSON, and SUPERFISH as tools in the optimization process. Such a program would also serve to codify the experience of two generations of accelerator designers before it is lost as these designers reach retirement age. This document describes 203 codes that originate from 10 countries and are currently in use. The authors feel that this compendium will contribute to the dialogue supporting the international collaborative effort that is taking place in the field of accelerator physics today
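Codes such as TRANSPORT operate on first-order transfer matrices of beamline elements; a minimal sketch of the idea:

```python
import numpy as np

def drift(length):
    """First-order transfer matrix of a field-free drift in the
    (x, x') phase plane."""
    return np.array([[1.0, length],
                     [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (focusing plane
    for f > 0)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A drift of one focal length after a thin lens focuses a parallel
# beam to a point: the (0, 0) element of the combined matrix vanishes,
# so the output position no longer depends on the input position.
M = drift(2.0) @ thin_quad(2.0)
```

Chaining such matrices along a lattice, and optimizing element strengths against constraints on the combined matrix, is precisely the kind of task an "expert" design layer would automate.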
Taewan Kim; Victor Petrov; Annalisa Manera; Simon Lo
2012-01-01
In order to assess the accuracy and validity of subchannel, system, and computational fluid dynamics codes, the Paul Scherrer Institut has participated in the OECD/NRC PSBT benchmark with the thermal-hydraulic system code TRACE5.0 developed by US NRC, the subchannel code FLICA4 developed by CEA, and the computational fluid dynamic code STAR-CD developed by CD-adapco. The PSBT benchmark consists of a series of void distribution exercises and departure from nucleate boiling exercises. The resul...
International Nuclear Information System (INIS)
Highlights: ► A novel method is introduced for desk calculation of toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results of 58 compounds are more reliable than those predicted by QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculations of the toxicity of benzoic acid compounds in mice via oral LD50, with more reliance on their answers than one could attach to the more complex outputs. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms, which can be improved by several molecular fragments in the second model. For 57 benzoic compounds, where the computed results of quantitative structure–toxicity relationship (QSTR) were recently reported, the predicted results of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested with a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.
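The first model's structure, a correlation in elemental composition only, can be sketched as a linear form in the carbon and hydrogen counts; the coefficients below are placeholders, not the fitted values from the paper:

```python
def log_ld50_estimate(n_carbon, n_hydrogen, a=2.0, b=0.05, c=-0.02):
    """Sketch of the elemental-composition model:
    log10(LD50) ~ a + b*nC + c*nH.
    a, b, c are placeholder coefficients, not the paper's fitted values."""
    return a + b * n_carbon + c * n_hydrogen

# Benzoic acid itself is C7H6O2:
estimate = log_ld50_estimate(7, 6)
```

The second model would add correction terms keyed to specific molecular fragments (e.g. nitro or halogen substituents), each with its own fitted increment.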
Energy Technology Data Exchange (ETDEWEB)
Keshavarz, Mohammad Hossein, E-mail: mhkeshavarz@mut-es.ac.ir [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran (Iran, Islamic Republic of); Gharagheizi, Farhad [Department of Chemical Engineering, Buinzahra Branch, Islamic Azad University, Buinzahra, Islamic Republic of Iran (Iran, Islamic Republic of); Shokrolahi, Arash; Zakinejad, Sajjad [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran (Iran, Islamic Republic of)
2012-10-30
GOBLIN computer code. Comparison between calculations and TLTA small break test
International Nuclear Information System (INIS)
GOBLIN calculations have been performed for two simulation tests of boiling water reactor (BWR) small break loss-of-coolant accidents (LOCAs) which were conducted in the two loop test apparatus (TLTA). The first test investigated the small break with nondegraded emergency core coolant (ECC) systems, and the second test studied the same small break but with degraded ECC systems in which the high pressure core spray (HPCS) was assumed unavailable. Very good agreement between test data and calculations is achieved. The second test is the most challenging from a code-comparison point of view, and the code prediction of the complicated mass distribution pattern, which changes with time, is very satisfactory. In the first test, and to some extent late in the second test, multidimensional subchannel effects are evident in the core bundle region. These are not and cannot be reproduced by the code, since the bundle model of GOBLIN is strictly one-dimensional. (Author)
A Code to Compute High Energy Cosmic Ray Effects on Terrestrial Atmospheric Chemistry
Krejci, Alex J; Thomas, Brian C
2008-01-01
A variety of events such as gamma-ray bursts may expose the Earth to an increased flux of high-energy cosmic rays, with potentially important effects on the biosphere. An atmospheric code, the NASA-Goddard Space Flight Center two-dimensional (latitude, altitude) time-dependent atmospheric model (NGSFC), can be used to study atmospheric chemistry changes. The effect on atmospheric chemistry from astrophysically created high energy cosmic rays can now be studied using the NGSFC code. A table has been created that, with the use of the NGSFC code, can be used to simulate the effects of high energy cosmic rays (10 GeV to 1 PeV) ionizing the atmosphere. We discuss the table, its use, and its strengths and weaknesses.
V.S.O.P.('94) computer code system for reactor physics and fuel cycle simulation
International Nuclear Information System (INIS)
V.S.O.P. (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories and temporary in-depth research. It comprises neutron cross section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation with depletion and shut-down features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to HTR's). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. The storage requirement is confined to 17 M-Bytes. The code system has been used extensively for comparison studies of reactors, their fuel cycles, simulation of safety features, developmental research, and reactor assessments. Besides its use in research and development work for the gas cooled High Temperature Reactor, the code has successfully been applied to Light Water Reactors, Heavy Water Reactors, and hybrid systems with different moderators. (orig.)
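The depletion step at the heart of such a fuel-cycle simulation can be sketched, in its simplest single-nuclide form, as exponential burnout under a constant flux; the cross section and flux values below are illustrative only:

```python
import math

def burnup_step(n0, sigma_a_barns, flux, dt_seconds):
    """Single-nuclide depletion under constant flux:
    dN/dt = -sigma_a * phi * N  =>  N(t) = N0 * exp(-sigma_a * phi * t)."""
    sigma_cm2 = sigma_a_barns * 1.0e-24   # barn -> cm^2
    return n0 * math.exp(-sigma_cm2 * flux * dt_seconds)

# Fraction of an initial fissile inventory remaining after one year
# at a thermal flux of 3e13 n/cm^2/s (illustrative values).
remaining = burnup_step(1.0, sigma_a_barns=680.0, flux=3.0e13,
                        dt_seconds=3.156e7)
```

A full code couples many such equations into a chain (including production terms from capture and decay) and re-evaluates the spectrum-averaged cross sections between burnup steps, which is exactly the iteration V.S.O.P. organizes.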
International Nuclear Information System (INIS)
This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C
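The straight-line Gaussian plume model underlying ANEMOS can be sketched as follows (point-source form with ground reflection; the dispersion parameters sigma_y and sigma_z would normally be evaluated from stability-class correlations at the downwind distance):

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Straight-line Gaussian plume concentration with ground reflection.
    q: release rate, u: wind speed at release height, h: effective
    release height, y/z: crosswind and vertical coordinates,
    sigma_y/sigma_z: dispersion parameters at the downwind distance."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q * lateral * vertical / (2.0 * math.pi * u * sigma_y * sigma_z)

# Ground-level centreline concentration for a ground-level release,
# with illustrative dispersion parameters:
c = gaussian_plume(q=1.0e6, u=5.0, y=0.0, z=0.0, h=0.0,
                   sigma_y=80.0, sigma_z=40.0)
```

Sector averaging, deposition depletion, plume rise, and daughter in-growth are then layered on top of this basic kernel, which is the structure the report describes for ANEMOS.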
Energy Technology Data Exchange (ETDEWEB)
Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.
1986-11-01
Simultaneous fluid-flow, heat-transfer and solid-stress computation in a single computer code
Energy Technology Data Exchange (ETDEWEB)
Spalding, D.B. [Concentration Heat and Momentum Ltd, London (United Kingdom)
1997-12-31
Computer simulation of flow- and thermally-induced stresses in mechanical-equipment assemblies has, in the past, required the use of two distinct software packages, one to determine the forces and the temperatures, and the other to compute the resultant stresses. The present paper describes how a single computer program can perform both tasks at the same time. The technique relies on the similarity of the equations governing velocity distributions in fluids to those governing displacements in solids. The same SIMPLE-like algorithm is used for solving both. Applications to 1-, 2- and 3-dimensional situations are presented. It is further suggested that Solid-Fluid-Thermal, i.e. SFT, analysis may come to replace CFD on the one hand and the analysis of stresses in solids on the other, by performing the functions of both. (author) 7 refs.
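The similarity the paper exploits can be seen in one dimension: steady heat conduction and steady elastic displacement satisfy the same discrete equation, so a single solver serves both fields. A minimal sketch of that shared machinery (not Spalding's actual SIMPLE-based implementation):

```python
import numpy as np

def solve_1d_poisson(coeff, left_bc, right_bc, n=11):
    """Solve d/dx(c * d(phi)/dx) = 0 on a uniform grid with Dirichlet
    end conditions. The same discrete operator serves temperature
    (c = thermal conductivity) and displacement (c = Young's modulus):
    the solid-stress field is solved by the same machinery as the
    thermal one."""
    a = np.zeros((n, n))
    b = np.zeros(n)
    a[0, 0] = a[-1, -1] = 1.0        # Dirichlet rows
    b[0], b[-1] = left_bc, right_bc
    for i in range(1, n - 1):
        a[i, i - 1] = a[i, i + 1] = coeff
        a[i, i] = -2.0 * coeff
    return np.linalg.solve(a, b)

# Same solver, two physics (illustrative property values):
temperature = solve_1d_poisson(coeff=15.0, left_bc=300.0, right_bc=400.0)
displacement = solve_1d_poisson(coeff=2.0e11, left_bc=0.0, right_bc=1.0e-3)
```

In the full SFT approach the displacement field is carried as an additional solved-for variable alongside velocity and temperature, with thermal strain entering the displacement equation as a source term.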