Energy Technology Data Exchange (ETDEWEB)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States)]; Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)]
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Network coding for computing: Linear codes
Appuswamy, Rathinakumar; Karamchandani, Nikhil; Zeger, Kenneth
2011-01-01
In network coding it is known that linear codes are sufficient to achieve the coding capacity in multicast networks and that they are not sufficient in general to achieve the coding capacity in non-multicast networks. In network computing, Rai, Dey, and Shenvi have recently shown that linear codes are not sufficient in general for solvability of multi-receiver networks with scalar linear target functions. We study single receiver networks where the receiver node demands a target function of the source messages. We show that linear codes may provide a computing capacity advantage over routing only when the receiver demands a 'linearly-reducible' target function. Many known target functions including the arithmetic sum, minimum, and maximum are not linearly-reducible. Thus, the use of non-linear codes is essential in order to obtain a computing capacity advantage over routing if the receiver demands a target function that is not linearly-reducible. We also show that if a target function is linearly-reducible,...
Shapiro, Wilbur
1996-01-01
This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife to knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow groove theory. The KTK labyrinth seal code handles straight or stepped seals. DYSEAL provides dynamics for the seal geometry.
Characterizing Video Coding Computing in Conference Systems
Tuquerres, G.
2000-01-01
In this paper, a number of coding operations are provided for computing continuous data streams, in particular video streams. The coding capability of the operations is expressed by a pyramidal structure in which coding processes and requirements of a distributed information system are represented. Th
Computer Code for Nanostructure Simulation
Filikhin, Igor; Vlahovic, Branislav
2009-01-01
Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.
Cloud Computing for Complex Performance Codes.
Energy Technology Data Exchange (ETDEWEB)
Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Gender codes why women are leaving computing
Misa, Thomas J
2010-01-01
The computing profession is facing a serious gender crisis. Women are abandoning the computing field at an alarming rate. Fewer are entering the profession than anytime in the past twenty-five years, while too many are leaving the field in mid-career. With a maximum of insight and a minimum of jargon, Gender Codes explains the complex social and cultural processes at work in gender and computing today. Edited by Thomas Misa and featuring a Foreword by Linda Shafer, Chair of the IEEE Computer Society Press, this insightful collection of essays explores the persisting gender imbalance in computing and presents a clear course of action for turning things around.
Computer Security: is your code sane?
Stefan Lueders, Computer Security Team
2015-01-01
How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities*? In other words: are our codes sane? Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pitfalls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. “Static Code Analysers” are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies including: employing undeclared variables; expressions resu...
Incompressible face seals: Computer code IFACE
Artiles, Antonio
1994-01-01
Capabilities of the computer code IFACE are given in viewgraph format. These include: two dimensional, incompressible, isoviscous flow; rotation of both rotor and housing; roughness in both rotor and housing; arbitrary film thickness distribution, including steps, pockets, and tapers; three degrees of freedom; dynamic coefficients; prescribed force and moments; pocket pressures or orifice size; turbulence, Couette and Poiseuille flow; cavitation; and inertia pressure drops at inlets to film.
Computing Challenges in Coded Mask Imaging
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., when there are wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution requirements. The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
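The correlation-based image recovery described in the slides can be illustrated with a tiny sketch (entirely hypothetical, not from the presentation): a length-7 quadratic-residue mask has a two-valued cyclic autocorrelation, so cross-correlating the detector pattern with a balanced decoding array recovers a point source exactly, up to a flat background.

```python
import numpy as np

# Length-7 quadratic-residue mask: open (1) where i is a nonzero square
# mod 7 (the squares mod 7 are {1, 2, 4}).
n = 7
mask = np.array([1 if i in {1, 2, 4} else 0 for i in range(n)])
decoder = 2 * mask - 1   # balanced decoding array: +1 open, -1 closed

def shadowgram(source):
    """Detector counts: each source pixel casts a shifted copy of the mask."""
    d = np.zeros(n)
    for k, flux in enumerate(source):
        d += flux * np.roll(mask, k)
    return d

def reconstruct(d):
    """Cross-correlate the detector pattern with the decoding array."""
    return np.array([np.dot(d, np.roll(decoder, j)) for j in range(n)])

source = np.zeros(n)
source[3] = 1.0                     # single point source at position 3
sky = reconstruct(shadowgram(source))
print(sky)                          # peak of 3 at index 3, flat -1 elsewhere
```

Real instruments generalize the same idea to two-dimensional uniformly redundant arrays.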
New developments in the Saphire computer codes
Energy Technology Data Exchange (ETDEWEB)
Russell, K.D.; Wood, S.T.; Kvarfordt, K.J. [Idaho Engineering Lab., Idaho Falls, ID (United States)] [and others]
1996-03-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and “scroll” through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides an up-to-date status of these important PRA analysis tools.
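As background to the Level 1 capabilities mentioned above, the core fault-tree operation of a PRA tool of this kind can be sketched as follows (an illustrative toy with made-up event names, not SAPHIRE code): expand the tree into cut sets by distributing OR over AND, then keep only the minimal ones.

```python
from itertools import product

# Toy fault tree (hypothetical names): TOP fails if the pump train AND the
# valve train both fail; each train fails if either of two basic events occurs.
tree = ("AND",
        ("OR", "PUMP-A", "PUMP-B"),
        ("OR", "VALVE-A", "VALVE-B"))

def cut_sets(node):
    """Return the cut sets of a gate as a list of frozensets of basic events."""
    if isinstance(node, str):                       # basic event
        return [frozenset([node])]
    gate, *children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                                # union of children's sets
        return [s for sets in child_sets for s in sets]
    if gate == "AND":                               # cross-product of sets
        return [frozenset().union(*combo) for combo in product(*child_sets)]
    raise ValueError(gate)

def minimize(sets):
    """Drop any cut set that is a proper superset of another."""
    return sorted({s for s in sets if not any(t < s for t in sets)},
                  key=sorted)

mcs = minimize(cut_sets(tree))
for s in mcs:
    print(sorted(s))   # four two-event cut sets, e.g. ['PUMP-A', 'VALVE-A']
```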
Neutron spectrum unfolding using computer code SAIPS
Karim, S
1999-01-01
The main objective of this project was to study the neutron energy spectrum at rabbit station-1 in Pakistan Research Reactor (PARR-I). To do so, the multiple foils activation method was used to get the saturated activities. The computer code SAIPS was used to unfold the neutron spectra from the measured reaction rates. Of the three built-in codes in SAIPS, only SANDII and WINDOWS were used. The contribution of the thermal part of the spectra was observed to be higher than that of the fast one. It was found that WINDOWS gave smooth spectra while the SANDII spectra had violent oscillations in the resonance region. The uncertainties in the WINDOWS results are higher than those of SANDII. The results show reasonable agreement with the published results.
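The unfolding step can be sketched with a SAND-II-style multiplicative iteration (a hedged illustration with invented cross sections and fluxes, not the SAIPS implementation): measured reaction rates A_i = Σ_j σ_ij φ_j are matched by repeatedly rescaling each group flux with a sensitivity-weighted logarithmic correction.

```python
import numpy as np

# Two foil reaction rates, three energy groups; all numbers are made up.
sigma = np.array([[1.0, 0.5, 0.1],    # cross sections, foil x group
                  [0.1, 0.5, 1.0]])
phi_true = np.array([5.0, 2.0, 1.0])  # "unknown" spectrum used to fake data
A_meas = sigma @ phi_true             # saturated activities from foil method

phi = np.ones(3)                      # flat initial guess
for _ in range(200):
    A_calc = sigma @ phi
    W = sigma * phi / A_calc[:, None]          # sensitivity weights
    corr = (W * np.log(A_meas / A_calc)[:, None]).sum(axis=0) / W.sum(axis=0)
    phi *= np.exp(corr)                        # multiplicative update

print(phi, sigma @ phi)  # unfolded spectrum reproduces the measured rates
```

The problem is underdetermined (more groups than foils), so the iteration converges to one spectrum consistent with the measurements, not necessarily to phi_true.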
From Coding to Computational Thinking and Back
DePryck, K.
2016-01-01
Presentation of Dr. Koen DePryck in the Computational Thinking Session in TEEM 2016 Conference, held in the University of Salamanca (Spain), Nov 2-4, 2016. Introducing coding in the curriculum at an early age is considered a long-term investment in bridging the skills gap between the technology demands of the labour market and the availability of people to fill them. The keys to success include moving from mere literacy to active control – not only at the level of learners but also ...
Spiking network simulation code for petascale computers
Directory of Open Access Journals (Sweden)
Susanne Kunkel
2014-10-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses, and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing
2010-03-01
A. Macula, et al., “Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences”, 2008 IEEE International Symposium on Information Theory, pp. 2292... ...combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences represented in DNA. ComDMem is a
Development of probabilistic internal dosimetry computer code
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was designed. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of
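The Monte Carlo propagation described above can be sketched in a few lines (all distributions and numbers below are invented for illustration and are not from the paper): a bioassay measurement M is divided by a sampled intake retention fraction m(t) to give the intake, multiplied by a sampled dose coefficient e, and percentiles are read off the resulting dose distribution.

```python
import numpy as np

# Hypothetical one-measurement uncertainty propagation with lognormal
# uncertainties on each factor: intake I = M / m(t), dose E = I * e.
rng = np.random.default_rng(0)
n = 100_000

M   = rng.lognormal(mean=np.log(50.0), sigma=0.2, size=n)   # measured, Bq
m_t = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n)   # retention fraction
e   = rng.lognormal(mean=np.log(2e-8), sigma=0.25, size=n)  # Sv/Bq

dose = (M / m_t) * e                                        # committed dose, Sv
pcts = np.percentile(dose, [2.5, 5, 50, 95, 97.5])
print(pcts)   # the five statistics quoted in the abstract
```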
ICAN Computer Code Adapted for Building Materials
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
A surface code quantum computer in silicon.
Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L
2015-10-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.
Energy Technology Data Exchange (ETDEWEB)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; the suite performs many functions.
40 CFR 194.23 - Models and computer codes.
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...
Hanford Meteorological Station computer codes: Volume 6, The SFC computer code
Energy Technology Data Exchange (ETDEWEB)
Andrews, G.L.; Buck, J.W.
1987-11-01
Each hour the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, records and archives weather observations. Hourly surface weather observations consist of weather phenomena such as cloud type and coverage; dry bulb, wet bulb, and dew point temperatures; relative humidity; atmospheric pressure; and wind speed and direction. The SFC computer code is used to archive those weather observations and apply quality assurance checks to the data. This code accesses an input file, which contains the previous archive's date and hour, and an output file, which contains surface observations for the current day. As part of the program, a data entry form consisting of 24 fields must be filled in. The information on the form is appended to the daily file, which provides an archive for the hourly surface observations.
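A minimal sketch of the kind of quality assurance range check such a code might apply to an hourly observation record (field names and limits are assumptions for illustration, not the actual SFC checks):

```python
# Plausible climatological limits for a surface observation record
# (hypothetical values, not the SFC code's).
LIMITS = {
    "dry_bulb_f":   (-30.0, 120.0),   # deg F
    "rel_humidity": (0.0, 100.0),     # percent
    "pressure_mb":  (950.0, 1060.0),  # millibars
    "wind_speed":   (0.0, 120.0),     # mph
    "wind_dir":     (0.0, 360.0),     # degrees
}

def qa_check(obs):
    """Return the names of fields whose values fall outside their limits."""
    return [name for name, (lo, hi) in LIMITS.items()
            if name in obs and not lo <= obs[name] <= hi]

obs = {"dry_bulb_f": 72.0, "rel_humidity": 135.0,
       "pressure_mb": 1013.2, "wind_speed": 8.0, "wind_dir": 225.0}
print(qa_check(obs))   # ['rel_humidity'] is flagged before archiving
```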
Private Computing and Mobile Code Systems
Cartrysse, K.
2005-01-01
This thesis' objective is to provide privacy to mobile code. A practical example of mobile code is a mobile software agent that performs a task on behalf of its user. The agent travels over the network and is executed at different locations of which beforehand it is not known whether or not these ca
Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying
DEFF Research Database (Denmark)
Heide, Janus; Zhang, Qi; Pedersen, Morten V.;
This paper investigated the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules. This information can subsequently be retrieved by any receiver. The starting point is RLNC (Random Linear Network Coding), and the goal is to reduce the amount of coding operations both at the coding and decoding node, and at the same time remove the need for dedicated signaling messages. In a traditional RLNC system, coding operations take up significant computational resources and add...
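For context, a minimal RLNC-style encode/decode cycle over GF(2) looks like the following sketch (illustrative only; the paper's contribution concerns embedding information in the coding vectors themselves, which is not shown here):

```python
import numpy as np

# Three source packets of four bits each.
packets = np.array([[1, 0, 1, 1],
                    [0, 1, 1, 0],
                    [1, 1, 0, 1]], dtype=np.uint8)

# A fixed coefficient matrix that is invertible over GF(2); a real RLNC
# node would draw these coefficients at random.
C = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)

coded = C @ packets % 2          # each row is one coded packet

def gf2_solve(C, coded):
    """Recover the source packets by Gauss-Jordan elimination mod 2."""
    A = np.concatenate([C.copy(), coded.copy()], axis=1) % 2
    n = C.shape[0]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]      # bring a 1 onto the diagonal
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]                 # XOR-eliminate the column
    return A[:, n:]

decoded = gf2_solve(C, coded)
print(decoded)   # identical to the original packets
```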
Computer codes for birds of North America
US Fish and Wildlife Service, Department of the Interior — The purpose of the paper was to provide a more useful set of codes for all North American species, thus making the list useful for virtually all projects concerning...
PORPST: A statistical postprocessor for the PORMC computer code
Energy Technology Data Exchange (ETDEWEB)
Eslinger, P.W.; Didier, B.T. (Pacific Northwest Lab., Richland, WA (United States))
1991-06-01
This report describes the theory underlying the PORPST code and gives details for using the code. The PORPST code is designed to do statistical postprocessing on files written by the PORMC computer code. The data written by PORMC are summarized in terms of means, variances, standard deviations, or statistical distributions. In addition, the PORPST code provides for plotting of the results, either internal to the code or through use of the CONTOUR3 postprocessor. Section 2.0 discusses the mathematical basis of the code, and Section 3.0 discusses the code structure. Section 4.0 describes the free-format point command language. Section 5.0 describes in detail the commands to run the program. Section 6.0 provides an example program run, and Section 7.0 provides the references. 11 refs., 1 fig., 17 tabs.
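The postprocessing summaries named above (means, variances, and standard deviations per output variable) amount to the following sketch (invented data; this illustrates the statistics, not PORPST's file handling or plotting):

```python
import numpy as np

# Toy stand-in for a file of Monte Carlo realizations: one row per
# realization, one column per output variable.
realizations = np.array([[1.0, 10.0],
                         [2.0, 12.0],
                         [3.0, 14.0],
                         [4.0, 16.0]])

summary = {
    "mean":     realizations.mean(axis=0),
    "variance": realizations.var(axis=0, ddof=1),   # sample variance
    "std_dev":  realizations.std(axis=0, ddof=1),
}
for name, vals in summary.items():
    print(f"{name:9s} {vals}")
```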
Optimization of KINETICS Chemical Computation Code
Donastorg, Cristina
2012-01-01
NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables that are used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
Codes of Ethics for Computing at Russian Institutions and Universities.
Pourciau, Lester J.; Spain, Victoria, Ed.
1997-01-01
To determine the degree to which Russian institutions and universities have formulated and promulgated codes of ethics or policies for acceptable computer use, the author examined Russian institution and university home pages. Lists home pages examined, 10 commandments for computer ethics from the Computer Ethics Institute, and a policy statement…
Continuous Materiality: Through a Hierarchy of Computational Codes
Directory of Open Access Journals (Sweden)
Jichen Zhu
2008-01-01
The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuable analog for approaching computer code in emergent digital systems. After commenting on ways that Cartesian dualism continues to haunt discussions of code, we turn our attention to diagrams and design morphology. Finally we notice the implications a material understanding of code bears for further research on the relation between human cognition and digital code. Our discussion concludes by noticing several areas that we have projected for ongoing research.
Structural Computer Code Evaluation. Volume I
1976-11-01
Rivlin model for large strains. Other examples are given in Reference 5. Hypoelasticity: A hypoelastic material is one in which the components of...remains is the application of these codes to specific rocket nozzle problems and the evaluation of their capabilities to model modern nozzle material behavior. Further work may also require the development of appropriate material property data or new material models to adequately characterize these
Quantum computation with Turaev-Viro codes
Koenig, Robert; Reichardt, Ben W
2010-01-01
The Turaev-Viro invariant for a closed 3-manifold is defined as the contraction of a certain tensor network. The tensors correspond to tetrahedra in a triangulation of the manifold, with values determined by a fixed spherical category. For a manifold with boundary, the tensor network has free indices that can be associated to qudits, and its contraction gives the coefficients of a quantum error-correcting code. The code has local stabilizers determined by Levin and Wen. For example, applied to the genus-one handlebody using the Z_2 category, this construction yields the well-known toric code. For other categories, such as the Fibonacci category, the construction realizes a non-abelian anyon model over a discrete lattice. By studying braid group representations acting on equivalence classes of colored ribbon graphs embedded in a punctured sphere, we identify the anyons, and give a simple recipe for mapping fusion basis states of the doubled category to ribbon graphs. We explain how suitable initial states can ...
Development of an MCNP-tally based burnup code and validation through PWR benchmark exercises
Energy Technology Data Exchange (ETDEWEB)
El Bakkari, B. [ERSN-LMR, Department of physics, Faculty of Sciences P.O.Box 2121, Tetuan (Morocco)], E-mail: bakkari@gmail.com; El Bardouni, T.; Merroun, O.; El Younoussi, Ch.; Boulaich, Y. [ERSN-LMR, Department of physics, Faculty of Sciences P.O.Box 2121, Tetuan (Morocco); Chakir, E. [EPTN-LPMR, Faculty of Sciences Kenitra (Morocco)
2009-05-15
The aim of this study is to evaluate the capabilities of a newly developed burnup code called BUCAL1. The code provides the full capabilities of the Monte Carlo code MCNP5, through the use of the MCNP tally information. BUCAL1 uses the fourth-order Runge-Kutta method with the predictor-corrector approach as the integration method to determine the fuel composition at a desired burnup step. Validation of BUCAL1 was done by code-to-code comparison. Results of two different kinds of codes are employed. The first one is CASMO-4, a deterministic multi-group two-dimensional transport code. The second kind comprises MCODE and MOCUP, which link the MCNP and ORIGEN codes. These codes use different burnup algorithms to solve the depletion equations system. Eigenvalue and isotope concentrations were compared for two PWR uranium and thorium benchmark exercises at cold (300 K) and hot (900 K) conditions, respectively. The eigenvalue comparison between BUCAL1 and the aforementioned two kinds of codes shows a good prediction of the systems' k-inf values during the entire burnup history, and the maximum difference is within 2%. The differences between the BUCAL1 isotope concentrations and the predictions of CASMO-4, MCODE and MOCUP are generally small, and only for a few sets of isotopes do these differences exceed 10%.
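The integration scheme named above can be illustrated on a single-nuclide depletion equation dN/dt = -λN, where a plain fourth-order Runge-Kutta step (the predictor-corrector refinement is omitted) already tracks the exponential solution closely. This is a hedged sketch with invented numbers, not BUCAL1 code:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

lam = 0.1                       # effective removal constant, 1/day (made up)
f = lambda t, N: -lam * N       # decay plus burnup lumped into one term

N, t, h = 1.0e24, 0.0, 1.0      # atoms, days
for _ in range(50):             # deplete over 50 burnup days
    N = rk4_step(f, t, N, h)
    t += h

exact = 1.0e24 * np.exp(-lam * 50)
print(N, exact)                 # agree to about six significant figures
```

A multi-nuclide depletion system replaces the scalar N by a vector and f by the Bateman matrix product; the stepping logic is unchanged.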
Computer aided power flow software engineering and code generation
Energy Technology Data Exchange (ETDEWEB)
Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)
1996-02-01
In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution of the power flow equations, where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be generated automatically in a conventional language such as Fortran or C, starting from a high-level symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process, an efficient, very high quality Newton-Raphson solution code is generated which allows easier implementation of network equation model enhancements and easier code maintenance compared to hand-coded Fortran or C code.
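The idea of never hand-coding the Jacobian can be mimicked with forward-mode automatic differentiation. The sketch below is a stand-in for the paper's symbolic code generator, using an invented two-equation "network": dual numbers carry partial derivatives through the mismatch function, so the Newton-Raphson loop obtains both the mismatch and the Jacobian mechanically.

```python
class Dual:
    """Forward-mode AD: a value together with its vector of partial derivatives."""
    def __init__(self, val, grad):
        self.val, self.grad = val, list(grad)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o, [0.0] * len(self.grad))
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, [a + b for a, b in zip(self.grad, o.grad)])
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, [a - b for a, b in zip(self.grad, o.grad)])
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val,
                    [self.val * b + o.val * a for a, b in zip(self.grad, o.grad)])

def mismatch(x):
    """Toy nonlinear 'network equations': both residuals must vanish at the solution."""
    v, w = x
    return [v * v + w - 3.0, v * w - 1.0]

def solve2(J, F):
    """Solve the 2x2 linear system J * dx = F by Cramer's rule."""
    (a, b), (c, d) = J
    det = a * d - b * c
    return [(F[0] * d - b * F[1]) / det, (a * F[1] - F[0] * c) / det]

def newton(f, x0, tol=1e-12, itmax=50):
    """Newton-Raphson with mismatch and Jacobian obtained by automatic differentiation."""
    x = list(x0)
    n = len(x)
    for _ in range(itmax):
        seeds = [Dual(x[i], [1.0 if j == i else 0.0 for j in range(n)])
                 for i in range(n)]
        r = f(seeds)
        F = [ri.val for ri in r]      # numeric mismatch
        J = [ri.grad for ri in r]     # numeric Jacobian, row by row
        if max(abs(v) for v in F) < tol:
            break
        dx = solve2(J, F)
        x = [x[i] - dx[i] for i in range(n)]
    return x
```

A production power flow solver differs in scale (sparse Jacobians, thousands of buses) but not in structure; only the residual function has to be written by hand.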
Tuning complex computer code to data
Energy Technology Data Exchange (ETDEWEB)
Cox, D.; Park, J.S.; Sacks, J.; Singer, C.
1992-01-01
The problem of estimating parameters in a complex computer simulator of a nuclear fusion reactor from an experimental database is treated. Practical limitations do not permit a standard statistical analysis using nonlinear regression methodology. The assumption that the function giving the true theoretical predictions is a realization of a Gaussian stochastic process provides a statistical method for combining information from relatively few computer runs with information from the experimental database and making inferences on the parameters.
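The approach can be sketched on a one-parameter toy problem: a handful of "expensive simulator" runs are interpolated by a Gaussian-process mean, which is then searched for the parameter value that best reproduces a measurement. Everything here (the quadratic simulator, the kernel length scale, the measured value) is invented for illustration and is not the paper's fusion-code setup.

```python
import math

def sq_exp(a, b, ell=1.0):
    """Squared-exponential covariance between two parameter values."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Four runs of a hypothetical expensive simulator at chosen design points.
simulator = lambda theta: (theta - 1.7) ** 2 + 0.5
design = [0.0, 1.0, 2.0, 3.0]
runs = [simulator(t) for t in design]

# GP mean that interpolates the runs (a tiny jitter keeps K well-conditioned).
K = [[sq_exp(a, b) + (1e-9 if i == j else 0.0) for j, b in enumerate(design)]
     for i, a in enumerate(design)]
alpha = solve(K, runs)
gp_mean = lambda t: sum(a * sq_exp(t, d) for a, d in zip(alpha, design))

# Infer the parameter whose predicted output best matches the 'experiment'.
measured = 0.5
grid = [i * 0.01 for i in range(301)]
theta_hat = min(grid, key=lambda t: (gp_mean(t) - measured) ** 2)
```

The point of the Gaussian-process assumption is exactly this: the cheap surrogate mean replaces the simulator in the parameter search, so only a few expensive runs are needed.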
APC: A New Code for Atmospheric Polarization Computations
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code, Atmospheric Polarization Computations (APC), is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection, and scattering by spherical particles or spheroids are included. Particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the wavy ocean surface.
Neutron noise computation using panda deterministic code
Energy Technology Data Exchange (ETDEWEB)
Humbert, Ph. [CEA Bruyeres le Chatel (France)
2003-07-01
PANDA is a general-purpose discrete ordinates neutron transport code with deterministic and non-deterministic applications. In this paper we consider the adaptation of PANDA to stochastic neutron counting problems, specifically the first two moments of the count number probability distribution. In the first part we recall the equations for the single-neutron and source-induced count number moments, with the corresponding expression for the excess of relative variance, or Feynman function. In the second part we discuss the numerical solution of these inhomogeneous adjoint time-dependent coupled transport equations with discrete ordinates methods. Finally, numerical applications are presented in the third part. (author)
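For a fixed gate width, the Feynman function reduces to the excess of relative variance of gated counts, Y = var/mean - 1, which vanishes for a Poisson source and is positive for correlated (fission-chain) counting. A minimal numerical illustration with synthetic counts, not PANDA output:

```python
import math
import random

def feynman_y(counts):
    """Excess of relative variance: ~0 for Poisson statistics, >0 for
    correlated counting."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean - 1.0

def poisson_sample(lam, rng):
    """Knuth's multiplicative method, adequate for small means."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(2)
counts = [poisson_sample(10.0, rng) for _ in range(20000)]
y_poisson = feynman_y(counts)                   # close to zero
y_doubled = feynman_y([2 * c for c in counts])  # 'pair' events inflate variance
```

Doubling every count mimics perfectly correlated pair detections: the mean doubles but the variance quadruples, so Y jumps to about 1, the signature of multiplicity that the moment equations in the abstract describe via transport theory.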
Computer code for intraply hybrid composite design
Chamis, C. C.; Sinclair, J. H.
1981-01-01
A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.
Computer vision cracks the leaf code.
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas
2016-03-22
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.
Computer code applicability assessment for the advanced Candu reactor
Energy Technology Data Exchange (ETDEWEB)
Wren, D.J.; Langman, V.J.; Popov, N.; Snell, V.G. [Atomic Energy of Canada Ltd (Canada)
2004-07-01
AECL Technologies, the 100%-owned US subsidiary of Atomic Energy of Canada Ltd. (AECL), is currently the proponent of a pre-licensing review of the Advanced Candu Reactor (ACR) with the United States Nuclear Regulatory Commission (NRC). A key focus topic for this pre-application review is NRC acceptance of the computer codes used in the safety analysis of the ACR. These codes have been developed, and their predictions compared against experimental results, over extended periods of time in Canada. The codes also underwent formal validation in the 1990s. In support of this formal validation effort, AECL has developed, implemented and currently maintains a Software Quality Assurance (SQA) program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. This paper discusses the SQA program used to develop, qualify and maintain the computer codes used in ACR safety analysis, including the program currently underway to confirm the applicability of these computer codes for use in ACR safety analyses. (authors)
Experimental methodology for computational fluid dynamics code validation
Energy Technology Data Exchange (ETDEWEB)
Aeschliman, D.P.; Oberkampf, W.L.
1997-09-01
Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.
Computer Security: better code, fewer problems
Stefan Lueders, Computer Security Team
2016-01-01
The origin of many security incidents is negligence or unintentional mistakes made by web developers or programmers. In the rush to complete the work, because of skewed priorities, or out of simple ignorance, basic security principles can be omitted or forgotten. The resulting vulnerabilities lie dormant until the evil side spots them and decides to hit hard. Computer security incidents in the past have put CERN's reputation at risk due to websites being defaced with negative messages about the Organization, hash files of passwords being extracted, restricted data exposed... And it all started with a little bit of negligence! If you check out the Top 10 web development blunders, you will see that the most prevalent mistakes are: not filtering input, e.g. accepting "<" or ">" in input fields even if only a number is expected; not validating that input: you expect a birth date? So why accept letters? &...
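The birth-date example above takes only a few lines of strict validation. This sketch is generic (it is not CERN's actual framework): whitelist the syntax first, then check semantic plausibility, and reject everything else.

```python
import re
from datetime import date

def parse_birth_date(raw):
    """Accept only YYYY-MM-DD made of digits, and only plausible calendar dates."""
    raw = raw.strip()
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", raw):
        raise ValueError("expected YYYY-MM-DD, digits only")
    y, m, d = map(int, raw.split("-"))
    try:
        parsed = date(y, m, d)          # rejects month 13, Feb 30, etc.
    except ValueError:
        raise ValueError("not a real calendar date")
    if not (date(1900, 1, 1) <= parsed <= date.today()):
        raise ValueError("birth date out of plausible range")
    return parsed

def rejects(raw):
    """True if the validator refuses the input."""
    try:
        parse_birth_date(raw)
        return False
    except ValueError:
        return True
```

The pattern generalizes: validate against what the field *should* contain (a whitelist), never against a list of known-bad characters.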
Low Computational Complexity Network Coding For Mobile Networks
DEFF Research Database (Denmark)
Heide, Janus
2012-01-01
Network Coding (NC) is a technique that can provide benefits in many types of networks. Some examples from wireless networks are: in relay networks, at either the physical or the data link layer, to reduce the number of transmissions; in reliable multicast, to reduce the amount of signaling and enable cooperation among receivers; in meshed networks, to simplify routing schemes and to increase robustness toward node failures. This thesis deals with implementation issues of one NC technique, namely Random Linear Network Coding (RLNC), which can be described as a highly decentralized non-deterministic intra-flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
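Over GF(2), the computational core of RLNC is just XOR-ing random subsets of packets at the sender and Gaussian elimination at the receiver. A minimal single-hop sketch (packets reduced to single bytes for brevity; real implementations code over whole payloads and often over larger fields such as GF(2^8)):

```python
import random

def random_coded_packet(packets, rng):
    """One coded packet: a random nonzero GF(2) coefficient vector plus the XOR
    of the selected source packets."""
    while True:
        coeffs = [rng.randint(0, 1) for _ in packets]
        if any(coeffs):
            break
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def gf2_decode(coded, n):
    """Gaussian elimination over GF(2); returns the sources once rank reaches n."""
    pivots = {}                              # leading column -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs = coeffs[:]
        for col in range(n):
            if not coeffs[col]:
                continue
            if col in pivots:
                pc, pp = pivots[col]         # eliminate this column and keep scanning
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload ^= pp
            else:
                pivots[col] = (coeffs, payload)
                break
    if len(pivots) < n:
        return None                          # not yet enough innovative packets
    out = [0] * n
    for col in sorted(pivots, reverse=True): # back-substitution
        coeffs, payload = pivots[col]
        val = payload
        for j in range(col + 1, n):
            if coeffs[j]:
                val ^= out[j]
        out[col] = val
    return out

# The receiver keeps collecting coded packets until it can decode.
source = [0x41, 0x7F, 0x03, 0xC8]
rng = random.Random(7)
coded = []
while gf2_decode(coded, len(source)) is None:
    coded.append(random_coded_packet(source, rng))
decoded = gf2_decode(coded, len(source))
```

The elimination step is exactly the complexity bottleneck the thesis refers to: decoding cost grows with both generation size and packet length.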
A three-dimensional magnetostatics computer code for insertion devices.
Chubar, O; Elleaume, P; Chavanne, J
1998-05-01
RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.
FLASH: A finite element computer code for variably saturated flow
Energy Technology Data Exchange (ETDEWEB)
Baca, R.G.; Magnuson, S.O.
1992-05-01
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.
Highly Optimized Code Generation for Stencil Codes with Computation Reuse for GPUs
Institute of Scientific and Technical Information of China (English)
Wen-Jing Ma; Kan Gao; Guo-Ping Long
2016-01-01
Computation reuse is known as an effective optimization technique. However, due to the complexity of modern GPU architectures, there is not yet enough understanding regarding the intriguing implications of the interplay of computation reuse and hardware specifics on application performance. In this paper, we propose an automatic code generator for a class of stencil codes with inherent computation reuse on GPUs. For such applications, the proper reuse of intermediate results, combined with careful register and on-chip local memory usage, has profound implications on performance. Current state of the art does not address this problem in depth, partially due to the lack of a good program representation that can expose all potential computation reuse. In this paper, we leverage the computation overlap graph (COG), a simple representation of data dependence and data reuse with "element view", to expose potential reuse opportunities. Using COG, we propose a portable code generation and tuning framework for GPUs. Compared with current state-of-the-art code generators, our experimental results show up to 56.7% performance improvement on modern GPUs such as NVIDIA C2050.
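The reuse the abstract describes can be seen in miniature in a 1D sliding-window stencil (a far simpler setting than the paper's GPU code generator): adjacent outputs share w-1 terms, so each new output can be produced with two operations instead of w.

```python
def window_sums_naive(a, w):
    """Each output recomputes the whole window: O(w) work per element."""
    return [sum(a[i:i + w]) for i in range(len(a) - w + 1)]

def window_sums_reuse(a, w):
    """Each output reuses the previous partial sum: O(1) work per element."""
    s = sum(a[:w])
    out = [s]
    for i in range(w, len(a)):
        s += a[i] - a[i - w]     # add the entering term, drop the leaving one
        out.append(s)
    return out

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

On a GPU the same idea is applied per-thread: intermediate results are kept in registers or shared memory instead of being recomputed from global memory, which is where the register-pressure trade-offs mentioned above come in.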
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application is dominated by these sequential segments: if they make up a significant fraction of the overall code, the application will have a poor speedup measure.
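The speedup ceiling described above is Amdahl's law: with serial fraction f, speedup on n processors is 1/(f + (1-f)/n), bounded by 1/f no matter how many processors are added. A quick check (the 40% figure is illustrative, not CSTEM's measured fraction):

```python
def amdahl_speedup(f, n):
    """Ideal speedup on n processors when a fraction f of the work is serial."""
    return 1.0 / (f + (1.0 - f) / n)

# With 40% serial code, even unlimited processors cannot exceed 2.5x.
s16 = amdahl_speedup(0.4, 16)
s_inf = amdahl_speedup(0.4, 10 ** 9)
```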
Prodeto, a computer code for probabilistic fatigue design
Energy Technology Data Exchange (ETDEWEB)
Braam, H. [ECN-Solar and Wind Energy, Petten (Netherlands); Christensen, C.J.; Thoegersen, M.L. [Risoe National Lab., Roskilde (Denmark); Ronold, K.O. [Det Norske Veritas, Hoevik (Norway)
1999-03-01
A computer code for structural reliability analyses of wind turbine rotor blades subjected to fatigue loading is presented. With pre-processors that can transform measured and theoretically predicted load series into load range distributions by rain-flow counting, and with a family of generic distribution models for parametric representation of these distributions, this computer program can carry through probabilistic fatigue analyses of rotor blades. (au)
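The pre-processing step can be hinted at with a much-simplified sketch: extract turning points from a load series and tabulate the ranges between them. True rain-flow counting (e.g. the ASTM E1049 three-point rule) additionally pairs ranges into closed hysteresis cycles, which this sketch deliberately omits.

```python
def turning_points(series):
    """Keep only local extrema; merge samples that continue a rise or fall."""
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising (or falling): extend the extremum
        else:
            tp.append(x)
    return tp

def half_cycle_ranges(series):
    """Ranges between successive turning points (each a fatigue half-cycle)."""
    tp = turning_points(series)
    return [abs(b - a) for a, b in zip(tp, tp[1:])]
```

A histogram of such ranges is the raw material to which the parametric distribution models mentioned in the abstract would then be fitted.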
Methods and computer codes for nuclear systems calculations
Indian Academy of Sciences (India)
B P Kochurov; A P Knyazev; A Yu Kwaretzkheli
2007-02-01
Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space-time calculations. The computer code TRIFON solves the space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space-time neutron processes. A modification of TRIFON was developed for the simulation of space-time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.
Computer code for double beta decay QRPA based calculations
Energy Technology Data Exchange (ETDEWEB)
Barbero, C. A.; Mariano, A. [Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, La Plata, Argentina and Instituto de Física La Plata, CONICET, La Plata (Argentina); Krmpotić, F. [Instituto de Física La Plata, CONICET, La Plata, Argentina and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil); Samana, A. R.; Ferreira, V. dos Santos [Departamento de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, BA (Brazil); Bertulani, C. A. [Department of Physics, Texas A and M University-Commerce, Commerce, TX (United States)
2014-11-11
The computer code developed by our group some years ago for the evaluation of nuclear matrix elements within the QRPA and PQRPA nuclear structure models, as involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to include also the nuclear double beta decay.
Connecting Neural Coding to Number Cognition: A Computational Account
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
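One standard computational account of this kind (a generic sketch, not the specific simulations of the study) places Gaussian tuning curves on a logarithmic "mental number line"; overlap between population codes then reproduces the behavioral distance and size effects. All tuning parameters below are invented.

```python
import math

def tuning(preferred, stimulus, sigma=0.25):
    """Gaussian tuning on a logarithmic axis (Weber-Fechner compression)."""
    return math.exp(-0.5 * ((math.log(stimulus) - math.log(preferred)) / sigma) ** 2)

def population(stimulus):
    """Responses of a bank of neurons preferring numerosities 1..32."""
    return [tuning(p, stimulus) for p in range(1, 33)]

def overlap(a, b):
    """Cosine similarity of two population codes: more overlap, harder to discriminate."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Size effect: 8 vs 9 are more confusable than 2 vs 3;
# distance effect: 2 vs 8 are the easiest pair to discriminate.
o23 = overlap(population(2), population(3))
o89 = overlap(population(8), population(9))
o28 = overlap(population(2), population(8))
```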
Plagiarism Detection Algorithm for Source Code in Computer Science Education
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is becoming more central to program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…
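A common baseline for source-code plagiarism detection (not necessarily the algorithm of this paper) is to normalize identifiers, fingerprint token n-grams, and compare Jaccard similarity, so that renaming variables or reformatting does not hide copying:

```python
import re

def normalize(code):
    """Tokenize; map every identifier to a single placeholder so renaming is futile."""
    keywords = {"def", "return", "for", "in", "if", "else", "while", "range"}
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)
    return ["ID" if re.match(r"[A-Za-z_]", t) and t not in keywords else t
            for t in tokens]

def fingerprints(code, n=4):
    """Set of token n-grams used as the document's fingerprint."""
    toks = normalize(code)
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def similarity(code_a, code_b):
    """Jaccard similarity of fingerprints: 1.0 = identical token structure."""
    fa, fb = fingerprints(code_a), fingerprints(code_b)
    return len(fa & fb) / len(fa | fb)

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s"
renamed  = "def acc(vals):\n    t = 0\n    for v in vals:\n        t = t + v\n    return t"
other    = "def fact(n):\n    if n < 2:\n        return 1\n    return n * fact(n - 1)"
```

Because identifiers are collapsed, `renamed` fingerprints identically to `original`, while a genuinely different program scores much lower; production systems add winnowing and structural (AST) comparison on top of this idea.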
Computed radiography simulation using the Monte Carlo code MCNPX
Energy Technology Data Exchange (ETDEWEB)
Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)
2010-09-15
Simulating X-ray images has been of great interest in recent years, as it makes it possible to analyze how X-ray images are affected by relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector, as well as the characteristic noise of a 16-bit computed radiography system, were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it yields results comparable with experimental data.
Fault-tolerant quantum computing with color codes
Landahl, Andrew J; Rice, Patrick R
2011-01-01
We present and analyze protocols for fault-tolerant quantum computing using color codes. We present circuit-level schemes for extracting the error syndrome of these codes fault-tolerantly. We further present an integer-program-based decoding algorithm for identifying the most likely error given the syndrome. We simulated our syndrome extraction and decoding algorithms against three physically-motivated noise models using Monte Carlo methods, and used the simulations to estimate the corresponding accuracy thresholds for fault-tolerant quantum error correction. We also used a self-avoiding walk analysis to lower-bound the accuracy threshold for two of these noise models. We present and analyze two architectures for fault-tolerantly computing with these codes: one in which 2D arrays of qubits are stacked atop each other, and one in a single 2D substrate. Our analysis demonstrates that color codes perform slightly better than Kitaev's surface codes when circuit details are ignored. When these details are considered, w...
New Parallel computing framework for radiation transport codes
Energy Technology Data Exchange (ETDEWEB)
Kostin, M.A.; /Michigan State U., NSCL; Mokhov, N.V.; /Fermilab; Niita, K.; /JAERI, Tokai
2010-09-01
A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
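Merging checkpoint files is possible because Monte Carlo tallies kept as (histories, sum, sum-of-squares) triples combine exactly. A generic sketch of that bookkeeping (not the framework's actual file format):

```python
import random

def merge_tallies(tallies):
    """Combine per-run (n, sum, sum_of_squares) into overall mean and sample variance."""
    n = sum(t[0] for t in tallies)
    s = sum(t[1] for t in tallies)
    ss = sum(t[2] for t in tallies)
    mean = s / n
    var = (ss - n * mean * mean) / (n - 1)
    return n, mean, var

rng = random.Random(5)
scores = [rng.random() for _ in range(9000)]
# Three independent 'runs' over disjoint chunks of histories.
chunks = [scores[0:3000], scores[3000:6000], scores[6000:9000]]
runs = [(len(c), sum(c), sum(x * x for x in c)) for c in chunks]
n, mean, var = merge_tallies(runs)
```

The merged statistics are identical (up to round-off) to those of a single run over all histories, which is why several calculations can be combined after the fact.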
User's manual for HDR3 computer code
Energy Technology Data Exchange (ETDEWEB)
Arundale, C.J.
1982-10-01
A description of the HDR3 computer code and instructions for its use are provided. HDR3 calculates space heating costs for a hot dry rock (HDR) geothermal space heating system. The code also compares these costs to those of a specific oil heating system in use at the National Aeronautics and Space Administration Flight Center at Wallops Island, Virginia. HDR3 allows many HDR system parameters to be varied so that the user may examine various reservoir management schemes and may optimize reservoir design to suit a particular set of geophysical and economic parameters.
LMFBR models for the ORIGEN2 computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1983-06-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
War of ontology worlds: mathematics, computer code, or Esperanto?
Directory of Open Access Journals (Sweden)
Andrey Rzhetsky
2011-09-01
The use of structured knowledge representations (ontologies and terminologies) has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like the computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom-designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies.
Computer codes for evaluation of control room habitability (HABIT)
Energy Technology Data Exchange (ETDEWEB)
Stage, S.A. [Pacific Northwest Lab., Richland, WA (United States)
1996-06-01
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.
Computational Complexity of Decoding Orthogonal Space-Time Block Codes
Ayanoglu, Ender; Karipidis, Eleftherios
2009-01-01
The computational complexity of optimum decoding is quantified for an orthogonal space-time block code G, i.e., one satisfying the orthogonality property G^H G = c(|s_1|^2 + ... + |s_k|^2) I, where the s_i are the code symbols, I is the identity matrix, and c is a positive integer. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples. This paper corrects and extends [1],[2], and unifies them with the results from the literature. In addition, a number of results from the literature are extended to the case c > 1.
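For the simplest such code, Alamouti's rate-1 scheme (k = 2, c = 1), the orthogonality is exactly what makes optimum decoding cheap: linear combining decouples the two symbols, so ML detection needs 2|C| metric evaluations instead of |C|^2. A noiseless sketch with an invented channel pair:

```python
# QPSK constellation and an arbitrary (invented) flat-fading channel pair.
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
h1, h2 = 0.8 - 0.3j, 0.2 + 0.9j

def alamouti_decode(r1, r2, h1, h2, constellation):
    """Linear combining reduces joint ML detection to two scalar decisions."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    nearest = lambda z: min(constellation, key=lambda c: abs(z - c))
    return nearest(s1_hat), nearest(s2_hat)

def joint_ml(r1, r2, h1, h2, constellation):
    """Brute-force joint search over |C|^2 symbol pairs, for comparison."""
    best = None
    for a in constellation:
        for b in constellation:
            e = (abs(r1 - (h1 * a + h2 * b)) ** 2
                 + abs(r2 - (-h1 * b.conjugate() + h2 * a.conjugate())) ** 2)
            if best is None or e < best[0]:
                best = (e, a, b)
    return best[1], best[2]

# Transmit (s1, s2) in slot 1 and (-s2*, s1*) in slot 2 (noise omitted).
s1, s2 = 1 + 1j, -1 + 1j
r1 = h1 * s1 + h2 * s2
r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()
decoded = alamouti_decode(r1, r2, h1, h2, qpsk)
```

The cross terms cancel in the combining step, leaving (|h1|^2 + |h2|^2) s_i plus noise for each symbol independently, which is the decoupling the complexity analysis in the abstract exploits.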
Benchmarking of computer codes and approaches for modeling exposure scenarios
Energy Technology Data Exchange (ETDEWEB)
Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
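At the most fundamental (spreadsheet) level, the pathway doses compared in such exercises reduce to products of a unit concentration, an intake or exposure rate, and a dose conversion factor. A sketch with invented coefficient values, not the PATT subteam's actual inputs:

```python
def ingestion_dose(conc_bq_per_l, intake_l_per_yr, dcf_sv_per_bq):
    """Annual dose (Sv) from water ingestion at a given concentration."""
    return conc_bq_per_l * intake_l_per_yr * dcf_sv_per_bq

def inhalation_dose(conc_bq_per_m3, breathing_m3_per_yr, dcf_sv_per_bq):
    """Annual dose (Sv) from inhalation at a given air concentration."""
    return conc_bq_per_m3 * breathing_m3_per_yr * dcf_sv_per_bq

# Unit concentrations (1 Bq/L water, 1 Bq/m3 air) with illustrative DCFs.
d_ing = ingestion_dose(1.0, 730.0, 2.8e-8)
d_inh = inhalation_dose(1.0, 8400.0, 4.6e-9)
```

Codes such as GENII automate exactly these products (with many more pathways and transfer factors), which is why a spreadsheet comparison at unit concentration is a meaningful lowest-level benchmark.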
General review of the MOSTAS computer code for wind turbines
Energy Technology Data Exchange (ETDEWEB)
Dugundji, J.; Wendell, J.H.
1981-06-01
The MOSTAS computer code for wind turbine analysis is reviewed, and the techniques and methods used in its analyses are described in some detail. Impressions of its strengths and weaknesses are given, along with recommendations for its application, modification, and further development. Additionally, some basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed in the Appendices.
Bragg optics computer codes for neutron scattering instrument design
Energy Technology Data Exchange (ETDEWEB)
Popovici, M.; Yelon, W.B.; Berliner, R.R. [Missouri Univ. Research Reactor, Columbia, MO (United States); Stoica, A.D. [Institute of Physics and Technology of Materials, Bucharest (Romania)
1997-09-01
Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.
Methodology for computational fluid dynamics code verification/validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.
1995-07-01
The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability, and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into one that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations are discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.
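The discretization-error estimation discussed above is commonly quantified by computing an observed order of accuracy from systematic grid refinement; a minimal sketch (not from the paper):

```python
import math

# Observed order of accuracy from error norms on two grids with
# refinement ratio r: p = log(e_coarse / e_fine) / log(r).
def observed_order(err_coarse, err_fine, r=2.0):
    return math.log(err_coarse / err_fine) / math.log(r)

# A second-order scheme should roughly quarter the error when h is halved:
p = observed_order(4.0e-3, 1.0e-3, r=2.0)
assert abs(p - 2.0) < 1e-9
```

Comparing the observed order against the formal order of the scheme is one standard way to detect implementation mistakes during code verification.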
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
Energy Technology Data Exchange (ETDEWEB)
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Improvement of level-1 PSA computer code package
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.
1997-07-01
This year is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled `The improvement of level-1 PSA Computer Codes`, is divided into two main activities: (1) improvement of level-1 PSA methodology, and (2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP has been converted to the PC-Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling were developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) was developed that accounts for dependency among source data. A computer program that handles dependency among data sources was also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods were reviewed and a CCF database was established; impact vectors can be estimated from this database. A computer code, called MPRIDP, which manages the CCF database was also developed. A CCF analysis reflecting plant-specific defensive strategies against CCF events was also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique was applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOV). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies were adopted and applied to improving the reliability of the emergency diesel generators (EDG) of nuclear power plants. To support the RCA and RCM analyses, two software programs, EPIS and RAM Pro, were developed. (author). 129 refs., 20 tabs., 60 figs.
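The cutsets produced by the generation algorithm mentioned above feed a standard quantification step; as a hedged illustration (event names and probabilities are hypothetical, not from the project), the rare-event approximation sums the products of basic-event probabilities over the minimal cut sets:

```python
# Rare-event approximation used in level-1 PSA fault tree quantification:
# P(top) ~= sum over minimal cut sets of the product of basic-event probabilities.
def top_event_probability(cut_sets, p):
    total = 0.0
    for cs in cut_sets:
        prod = 1.0
        for event in cs:
            prod *= p[event]
        total += prod
    return total

# Hypothetical basic events and minimal cut sets:
p = {"pump_fails": 1e-3, "valve_fails": 2e-3, "power_loss": 5e-4}
cut_sets = [{"pump_fails", "valve_fails"}, {"power_loss"}]
prob = top_event_probability(cut_sets, p)
assert abs(prob - (1e-3 * 2e-3 + 5e-4)) < 1e-15
```

The approximation is accurate when basic-event probabilities are small, which is the usual regime for safety-grade components.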
Computationally efficient sub-band coding of ECG signals.
Husøy, J H; Gjerde, T
1996-03-01
A data compression technique is presented for discrete-time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superior computational efficiency. We conclude that the present scheme, which is suitable for real-time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information.
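The threshold, quantize, and run-length stages of such a coder can be sketched as follows (a simplified illustration, not the authors' implementation):

```python
# Threshold small coefficients to zero, then uniformly quantize the rest.
def quantize(x, step, threshold):
    return [0 if abs(v) < threshold else round(v / step) for v in x]

# Run-length encode a symbol stream as (symbol, count) pairs.
def run_length_encode(symbols):
    runs, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = s, 1
    runs.append((prev, count))
    return runs

subband = [0.01, -0.02, 0.0, 0.0, 0.8, 0.81, 0.02]
q = quantize(subband, step=0.1, threshold=0.05)
# Small values collapse into zero runs, which the run-length coder compresses.
assert q == [0, 0, 0, 0, 8, 8, 0]
assert run_length_encode(q) == [(0, 4), (8, 2), (0, 1)]
```

A Huffman coder applied to the (symbol, count) pairs would then exploit their skewed distribution, as in the described system.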
Compressing industrial computed tomography images by means of contour coding
Jiang, Haina; Zeng, Li
2013-10-01
An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has grown larger and larger. Considering that industrial CT images are approximately piece-wise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction followed by compression, which is detrimental to compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. In this way, the two steps of the traditional contour-based compression method are reduced to one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method obtains a good compression ratio while keeping satisfactory quality in the compressed images.
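The Freeman encoding idea referred to above represents a contour as a sequence of direction codes between neighboring pixels; a minimal sketch of the classic 8-direction chain code (not the paper's 2-D-IMCE algorithm):

```python
# Freeman 8-direction chain code: map each step between adjacent pixels
# to a direction index 0..7 (0 = east, counterclockwise).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain(contour):
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# A unit square traced counterclockwise:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
assert freeman_chain(square) == [0, 2, 4, 6]
```

Because each step needs only 3 bits (plus a start coordinate), chain codes are compact for piece-wise constant images, and an entropy coder such as Huffman can compress them further.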
Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate
Guruswami, Venkatesan
2010-01-01
In this paper, we consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process which adds the errors can be described by a sufficiently "simple" circuit. For three classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. (1) Unique decoding for additive errors: We give the first construction of poly-time encodable/decodable codes for additive (a.k.a. oblivious) channels that achieve the Shannon capacity 1-H(p). Such channels capture binary symmetric errors and burst errors as special cases. (2) List-decoding for log-space channels: A space-S(n) channel reads and modifies the transmitted codeword as a stream, using at most S(n) bits of workspace on transmissions of n bi...
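The Shannon capacity 1 - H(p) cited in result (1) uses the binary entropy function H; a quick numerical check:

```python
import math

# Binary entropy H(p) = -p log2 p - (1-p) log2 (1-p).
def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Capacity of an additive channel with error fraction p: 1 - H(p).
capacity = 1 - binary_entropy(0.11)
assert 0.0 < capacity < 1.0
assert abs(binary_entropy(0.5) - 1.0) < 1e-12   # H peaks at p = 1/2
```

At p = 1/2 the capacity drops to zero, which matches the intuition that a channel flipping half the bits at random conveys no information.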
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
Energy Technology Data Exchange (ETDEWEB)
Van Buren, Kendra L. [Los Alamos National Laboratory; Canfield, Jesse M. [Los Alamos National Laboratory; Hemez, Francois M. [Los Alamos National Laboratory; Sauer, Jeremy A. [Los Alamos National Laboratory
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
Multicode comparison of selected source-term computer codes
Energy Technology Data Exchange (ETDEWEB)
Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.
1989-04-01
This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to have the greatest impact on waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.
pyro: A teaching code for computational astrophysical hydrodynamics
Zingale, Michael
2013-01-01
We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
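In the spirit of pyro's advection solvers (though not pyro's actual code), a first-order upwind finite-volume update for linear advection can be sketched as:

```python
# First-order upwind finite-volume update for u_t + a u_x = 0 on a
# periodic grid, with a > 0 so the upwind cell is the left neighbor.
def advect_upwind(u, a, dx, dt, steps):
    n = len(u)
    c = a * dt / dx          # CFL number; stable for 0 <= c <= 1
    for _ in range(steps):
        u = [u[i] - c * (u[i] - u[(i - 1) % n]) for i in range(n)]
    return u

# A top-hat profile advected with c = 1 shifts exactly one cell per step.
u0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]
u1 = advect_upwind(u0, a=1.0, dx=1.0, dt=1.0, steps=4)
assert u1 == [1.0 if 8 <= i < 12 else 0.0 for i in range(16)]
```

For c < 1 the same scheme smears discontinuities through numerical diffusion, which is exactly the kind of behavior pyro is designed to let students observe and compare across methods.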
pyro: A teaching code for computational astrophysical hydrodynamics
Zingale, M.
2014-10-01
We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
Knowlton, Marie; Wetzel, Robin
2006-01-01
This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…
Evaluation of detonation energy from EXPLO5 computer code results
Energy Technology Data Exchange (ETDEWEB)
Suceska, M. [Brodarski Institute, Zagreb (Croatia). Marine Research and Special Technologies
1999-10-01
The detonation energies of several high explosives are evaluated from the results of the chemical-equilibrium computer code EXPLO5. Two methods of evaluating the detonation energy are applied: (a) direct evaluation from the internal energy of the detonation products at the CJ point and the energy of shock compression of the detonation products, i.e. by equating the detonation energy and the heat of detonation, and (b) evaluation from the expansion isentrope of the detonation products, applying the JWL model. These energies are compared to the energies computed from cylinder-test-derived JWL coefficients. It is found that the detonation energies obtained directly from the energy of the detonation products at the CJ point are uniformly too high (0.9445{+-}0.577 kJ/cm{sup 3}), while the detonation energies evaluated from the expansion isentrope are in considerable agreement (0.2072{+-}0.396 kJ/cm{sup 3}) with the energies calculated from cylinder-test-derived JWL coefficients. (orig.)
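Method (b) above integrates the pressure along the JWL expansion isentrope; a hedged numerical sketch with purely illustrative coefficients (not fitted values from the paper):

```python
import math

# JWL isentrope: p(V) = A exp(-R1 V) + B exp(-R2 V) + C / V^(1 + omega),
# with V the relative volume. Coefficients below are placeholders.
def jwl_pressure(V, A=600.0, B=10.0, R1=4.5, R2=1.2, C=1.0, omega=0.3):
    return A * math.exp(-R1 * V) + B * math.exp(-R2 * V) + C / V ** (1.0 + omega)

# Expansion energy E = integral of p dV, here by the trapezoidal rule.
def isentrope_energy(v_start=1.0, v_end=20.0, n=20000):
    h = (v_end - v_start) / n
    ps = [jwl_pressure(v_start + i * h) for i in range(n + 1)]
    return h * (0.5 * ps[0] + sum(ps[1:-1]) + 0.5 * ps[-1])

E = isentrope_energy()
assert E > 0.0
assert jwl_pressure(1.0) > jwl_pressure(5.0)   # pressure falls on expansion
```

In practice the coefficients A, B, C, R1, R2, and omega are fitted to cylinder-test data, which is why the paper uses cylinder-test-derived JWL energies as the reference.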
A computer code to simulate X-ray imaging techniques
Energy Technology Data Exchange (ETDEWEB)
Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel
2000-09-01
A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
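The X-ray attenuation law underlying such a ray tracer is the Beer-Lambert relation I = I0 exp(-sum of mu_i t_i) accumulated along each ray; a minimal sketch with illustrative attenuation coefficients (not values from the paper):

```python
import math

# Transmitted intensity along one ray crossing several materials,
# each given as (mu in 1/cm, path length in cm).
def transmitted_intensity(I0, segments):
    return I0 * math.exp(-sum(mu * t for mu, t in segments))

# Example: 1 cm of a light material, then 0.5 cm of a dense one.
I = transmitted_intensity(1000.0, [(0.2, 1.0), (1.5, 0.5)])
assert abs(I - 1000.0 * math.exp(-0.95)) < 1e-9
```

Repeating this computation for every detector pixel, with CAD geometry supplying the path lengths through each component, yields the deterministic (photon-noise-free) images the abstract describes.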
Reasoning with Computer Code: a new Mathematical Logic
Pissanetzky, Sergio
2013-01-01
A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self
Interface design of VSOP'94 computer code for safety analysis
Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi
2014-09-01
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor, devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integrals, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: for example, it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way, usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to make data preparation convenient. The processing interface provides a convenient way to configure input files and libraries and to compile the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to simplify and speed up the process and the analysis of safety aspects.
Compute-and-Forward: Harnessing Interference through Structured Codes
Nazer, Bobak
2009-01-01
Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information. Its potential is demonstrated through examples drawn ...
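The end-to-end idea can be illustrated over a finite field, leaving the lattice machinery aside: a destination that collects enough independent linear equations recovers all messages (a toy example, not the paper's construction):

```python
# Two sources send messages w1, w2 in GF(p). Each relay decodes a linear
# combination of them, and the destination solves the resulting system.
p = 7
w1, w2 = 3, 5
eq1 = (1 * w1 + 1 * w2) % p        # relay 1 decodes w1 + w2
eq2 = (1 * w1 + 2 * w2) % p        # relay 2 decodes w1 + 2*w2

# Destination: subtract the equations to isolate w2, then back-substitute.
recovered_w2 = (eq2 - eq1) % p
recovered_w1 = (eq1 - recovered_w2) % p
assert (recovered_w1, recovered_w2) == (w1, w2)
```

The contribution of compute-and-forward is showing that the relays can decode such combinations reliably over a noisy wireless channel, by matching the integer coefficients to the observed channel gains via nested lattice codes.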
Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation
Scott, James R.
2004-01-01
NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single-airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained to the benchmark problem and compares them with classical flat-plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.
Computer Tensor Codes to Design the Warp Drive
Maccone, C.
To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.
Computer code for the atomistic simulation of lattice defects and dynamics. [COMENT code
Energy Technology Data Exchange (ETDEWEB)
Schiffgens, J.O.; Graves, N.J.; Oster, C.A.
1980-04-01
This document has been prepared to satisfy the need for a detailed, up-to-date description of a computer code that can be used to simulate phenomena on an atomistic level. COMENT was written in FORTRAN IV and COMPASS (CDC assembly language) to solve the classical equations of motion for a large number of atoms interacting according to a given force law, and to perform the desired ancillary analysis of the resulting data. COMENT is a dual-purpose code intended to describe static defect configurations as well as the detailed motion of atoms in a crystal lattice. It can be used to simulate the effect of temperature, impurities, and pre-existing defects on radiation-induced defect production mechanisms, defect migration, and defect stability.
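Classical equations of motion of the kind COMENT solves are commonly advanced with a symplectic integrator; as a hedged single-particle sketch (COMENT's actual integrator and force law may differ), velocity Verlet for a harmonic force F = -kx:

```python
# Velocity Verlet for one degree of freedom with force F(x) = -k*x.
def velocity_verlet(x, v, k, m, dt, steps):
    a = -k * x / m
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = -k * x / m
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x, v = velocity_verlet(x=1.0, v=0.0, k=1.0, m=1.0, dt=0.01, steps=1000)
# The symplectic scheme keeps the total energy near its initial value 0.5.
energy = 0.5 * v * v + 0.5 * x * x
assert abs(energy - 0.5) < 1e-4
```

A molecular-dynamics code applies the same update to every atom, with the acceleration computed from the chosen interatomic force law rather than a single spring.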
Assessment of the computer code COBRA/CFTL
Energy Technology Data Exchange (ETDEWEB)
Baxi, C. B.; Burhop, C. J.
1981-07-01
The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications and compares the results of the two codes for three cases spanning conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented the laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output, and the code listing are included in the Appendices.
Superimposed Code Theoretic Analysis of Deoxyribonucleic Acid (DNA) Codes and DNA Computing
2010-01-01
A. Macula, et al., "Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences", 2008 IEEE International Symposium on Information Theory, pp. 2292-2296. A component of this innovation is the combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences.
MMA, A Computer Code for Multi-Model Analysis
Energy Technology Data Exchange (ETDEWEB)
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
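One of the default criteria can be turned into model weights as follows; this sketch uses the standard Akaike-weight formula with illustrative AIC values, not output from MMA:

```python
import math

# Akaike weights: rescale AIC differences into normalized model weights,
# usable as posterior-style model probabilities for model averaging.
def akaike_weights(aic_values):
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

weights = akaike_weights([102.3, 104.1, 110.0])
assert abs(sum(weights) - 1.0) < 1e-12
assert weights[0] == max(weights)   # the lowest-AIC model gets the largest weight
```

Model-averaged predictions are then weighted sums of the individual models' predictions, which is how criteria such as AIC, AICc, BIC, or KIC feed into the averaging step MMA supports.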
ORNL ALICE: a statistical model computer code including fission competition. [In FORTRAN]
Energy Technology Data Exchange (ETDEWEB)
Plasil, F.
1977-11-01
A listing of the computer code ORNL ALICE is given. This code is a modified version of computer codes ALICE and OVERLAID ALICE. It allows for higher excitation energies and for a greater number of evaporated particles than the earlier versions. The angular momentum removal option was made more general and more internally consistent. Certain roundoff errors are avoided by keeping a strict accounting of partial probabilities. Several output options were added.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will...
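The information criteria named in this abstract have standard closed forms for least-squares calibration. The sketch below is a generic illustration of those textbook formulas, not MMA's actual implementation; the function names and the use of the SSE-based log-likelihood are assumptions for the example.

```python
import math

def information_criteria(sse: float, n: int, k: int) -> dict:
    """Standard least-squares forms of AIC, AICc, and BIC.

    sse: sum of squared (weighted) residuals from the calibrated model
    n:   number of observations
    k:   number of estimated parameters (including the error variance)
    """
    aic = n * math.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # second-order bias correction
    bic = n * math.log(sse / n) + k * math.log(n)
    return {"AIC": aic, "AICc": aicc, "BIC": bic}

def model_weights(criterion_values):
    """Model-averaging weights from criterion differences: models with
    smaller AICc (or BIC) receive exponentially larger weight."""
    best = min(criterion_values)
    raw = [math.exp(-0.5 * (c - best)) for c in criterion_values]
    total = sum(raw)
    return [r / total for r in raw]
```

These weights are what allow the model-averaged parameter estimates and predictions mentioned in the report: each model's prediction is multiplied by its weight and summed.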
Quantum error correcting codes and one-way quantum computing: Towards a quantum memory
Schlingemann, D
2003-01-01
To realize a quantum memory, we suggest first encoding quantum information via a quantum error correcting code and then concatenating combined decoding and re-encoding operations. This requires that the encoding and decoding operations can be performed faster than the typical decoherence time of the underlying system. The computational model underlying the one-way quantum computer, introduced by Hans Briegel and Robert Raussendorf, provides a suitable concept for a fast implementation of quantum error correcting codes. It is shown explicitly in this article how encoding and decoding operations for stabilizer codes can be realized on a one-way quantum computer. This is based on the graph code representation of stabilizer codes, on the one hand, and the relation between cluster states and graph codes, on the other.
Application of computational fluid dynamics methods to improve thermal hydraulic code analysis
Sentell, Dennis Shannon, Jr.
A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.
Energy Technology Data Exchange (ETDEWEB)
VOOGD, J.A.
1999-04-19
An analysis of three software proposals is performed to recommend a computer code for immobilized low activity waste flow and transport modeling. The document uses criteria established in HNF-1839, ''Computer Code Selection Criteria for Flow and Transport Codes to be Used in Undisturbed Vadose Zone Calculation for TWRS Environmental Analyses'', as the basis for this analysis.
Quantum computation with topological codes from qubit to topological fault-tolerance
Fujii, Keisuke
2015-01-01
This book presents a self-consistent review of quantum computation with topological quantum codes. The book covers everything required to understand topological fault-tolerant quantum computation, ranging from the definition of the surface code to topological quantum error correction and topological fault-tolerant operations. The underlying basic concepts and powerful tools, such as universal quantum computation, quantum algorithms, stabilizer formalism, and measurement-based quantum computation, are also introduced in a self-consistent way. The interdisciplinary fields between quantum information and other fields of physics such as condensed matter physics and statistical physics are also explored in terms of the topological quantum codes. This book thus provides the first comprehensive description of the whole picture of topological quantum codes and quantum computation with them.
Two-Phase Flow in Geothermal Wells: Development and Uses of a Good Computer Code
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Ramirez, Jaime
1983-06-01
A computer code is developed for vertical two-phase flow in geothermal wellbores. The two-phase correlations used were developed by Orkiszewski (1967) and others and are widely applicable in the oil and gas industry. The computer code is compared to the flowing survey measurements from wells in the East Mesa, Cerro Prieto, and Roosevelt Hot Springs geothermal fields with success. Well data from the Svartsengi field in Iceland are also used. Several applications of the computer code are considered. They range from reservoir analysis to wellbore deposition studies. It is considered that accurate and workable wellbore simulators have an important role to play in geothermal reservoir engineering.
Code of Ethical Conduct for Computer-Using Educators: An ICCE Policy Statement.
Computing Teacher, 1987
1987-01-01
Prepared by the International Council for Computers in Education's Ethics and Equity Committee, this code of ethics for educators using computers covers nine main areas: curriculum issues, issues relating to computer access, privacy/confidentiality issues, teacher-related issues, student issues, the community, school organizational issues,…
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Computational Participation: Understanding Coding as an Extension of Literacy Instruction
Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.
2016-01-01
Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
Energy Technology Data Exchange (ETDEWEB)
McGrail, B.P.; Mahoney, L.A.
1995-10-01
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in each code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.
Challenges of Computational Processing of Code-Switching
Çetinoğlu, Özlem; Schulz, Sarah; Vu, Ngoc Thang
2016-01-01
This paper addresses challenges of Natural Language Processing (NLP) on non-canonical multilingual data in which two or more languages are mixed. It refers to code-switching, which has become more popular in our daily life and therefore attracts an increasing amount of attention from the research community. We report our experience that covers not only core NLP tasks such as normalisation, language identification, language modelling, part-of-speech tagging and dependency parsing but also more...
POTRE: A computer code for the assessment of dose from ingestion
Energy Technology Data Exchange (ETDEWEB)
Hanusik, V.; Mitro, A.; Niedel, S.; Grosikova, B.; Uvirova, E.; Stranai, I. (Institute of Radioecology and Applied Nuclear Techniques, Kosice (Czechoslovakia))
1991-01-01
The paper describes the computer code PORET and the auxiliary database system, which allow the user to assess the radiation exposure from ingestion of foodstuffs contaminated by radionuclides released into the atmosphere from a nuclear facility during normal operation. (orig.)
Speeding-up MADYMO 3D on serial and parallel computers using a portable coding environment
Tsiandikos, T.; Rooijackers, H.F.L.; Asperen, F.G.J. van; Lupker, H.A.
1996-01-01
This paper outlines the strategy and methodology used to create a portable coding environment for the commercial package MADYMO. The objective is to design a global data structure that efficiently utilises the memory and cache of computers, so that one source code can be used for serial, vector and...
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
Holbrook, M. Cay; MacCuspie, P. Ann
2010-01-01
Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…
Comparison of different computer platforms for running the Versatile Advection Code
Toth, G.; Keppens, R.; Sloot, P.; Bubak, M.; Hertzberger, B.
1998-01-01
The Versatile Advection Code is a general tool for solving hydrodynamical and magnetohydrodynamical problems arising in astrophysics. We compare the performance of the code on different computer platforms, including work stations and vector and parallel supercomputers. Good parallel scaling can be a...
Proposed standards for peer-reviewed publication of computer code
Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...
Code and papers: computing publication patterns in the LHC era
CERN. Geneva
2012-01-01
Publications in scholarly journals establish the body of knowledge deriving from scientific research; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies. This presentation reviews the evolution of computing-oriented publications in HEP following the start of operation of LHC. Quantitative analyses are illustrated, which document the production of scholarly papers on computing-related topics by HEP experiments and core tools projects (including distributed computing R&D), and the citations they receive. Several scientometric indicators are analyzed to characterize the role of computing in HEP literature. Distinctive features of scholarly publication production in the software-oriented and hardware-oriented experimental HEP communities are highlighted. Current patterns and trends are compared to the situation in previous generations' HEP experiments at LEP, Tevatron and B-factories. The results of this scientometric analysis document objec...
A FORTRAN computer code for calculating flows in multiple-blade-element cascades
Mcfarland, E. R.
1985-01-01
A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.
Computing the Feng-Rao distances for codes from order domains
DEFF Research Database (Denmark)
Ruano Benito, Diego
2007-01-01
We compute the Feng–Rao distance of a code coming from an order domain with a simplicial value semigroup. The main tool is the Apéry set of a semigroup, which can be computed using a Gröbner basis.
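For a plain numerical semigroup, the Apéry set mentioned in the abstract can be computed by brute-force enumeration rather than the Gröbner-basis machinery of the paper. The sketch below is such a toy illustration and is an assumption of this edit, not the paper's method; it assumes the generators are coprime as a set so the semigroup has finitely many gaps.

```python
def apery_set(generators, m):
    """Apéry set Ap(S, m) of the numerical semigroup S generated by
    `generators`: for each residue class i mod m, the smallest element
    of S congruent to i. Brute-force enumeration; assumes gcd of the
    generators is 1 so that every residue class is eventually reached."""
    bound = m * max(generators)
    while True:
        elems = {0}
        frontier = [0]
        while frontier:
            e = frontier.pop()
            for g in generators:
                s = e + g
                if s <= bound and s not in elems:
                    elems.add(s)
                    frontier.append(s)
        ap = {}
        for e in sorted(elems):
            ap.setdefault(e % m, e)  # first (smallest) hit per residue class
        if len(ap) == m:
            return [ap[i] for i in range(m)]
        bound *= 2  # enlarge the search window until every class is hit

# Example: S = <3, 5>; Ap(S, 3) = [0, 10, 5], and max(Ap) - 3 = 7
# recovers the Frobenius number (largest gap) of S.
```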
Computation of Grobner basis for systematic encoding of generalized quasi-cyclic codes
Van, Vo Tam; Mita, Seiichi
2008-01-01
Generalized quasi-cyclic (GQC) codes form a wide and useful class of linear codes that includes quasi-cyclic codes, finite geometry (FG) low density parity check (LDPC) codes, and Hermitian codes. Although it is known that the systematic encoding of GQC codes is equivalent to the division algorithm in the theory of Gröbner bases of modules, there has been no algorithm that computes a Gröbner basis for all types of GQC codes. In this paper, we propose two algorithms to compute Gröbner bases for GQC codes from their parity check matrices: the echelon canonical form algorithm and the transpose algorithm. Both algorithms require a number of finite-field operations on the order of the third power of the code length. Each algorithm has its own characteristics: the first is composed of elementary methods, and the second is based on a novel formula and is faster than the first for high-rate codes. Moreover, we show that a serial-in serial-out encoder architecture for FG LDPC cod...
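The division-algorithm view of systematic encoding is easiest to see in the special case GQC codes generalize: an ordinary cyclic code. The sketch below is not the paper's Gröbner-basis algorithms; it encodes the (7,4) cyclic Hamming code with generator g(x) = x^3 + x + 1 by appending the remainder of x^(n-k)·m(x) mod g(x), CRC-style.

```python
def poly_mod2(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are bit lists,
    most significant coefficient first."""
    rem = list(dividend)
    for i in range(len(rem) - len(divisor) + 1):
        if rem[i]:
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]

def encode_systematic(msg, gen, n):
    """Systematic cyclic encoding: shift the message by deg(g), then
    append the division remainder as parity bits."""
    k = n - (len(gen) - 1)
    assert len(msg) == k
    shifted = msg + [0] * (len(gen) - 1)  # x^(n-k) * m(x)
    return msg + poly_mod2(shifted, gen)

g = [1, 0, 1, 1]  # g(x) = x^3 + x + 1 for the (7,4) Hamming code
codeword = encode_systematic([1, 1, 0, 1], g, 7)
# The first k = 4 bits of the codeword are the message itself, which is
# what "systematic" means; the last 3 bits are the parity remainder.
```

Every resulting codeword is divisible by g(x), which is the defining property of the cyclic code.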
Plutonium explosive dispersal modeling using the MACCS2 computer code
Energy Technology Data Exchange (ETDEWEB)
Steele, C.M.; Wald, T.L.; Chanin, D.I.
1998-11-01
The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of Project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
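MACCS2's dispersion treatment is far more elaborate than a single formula, but the textbook ground-level, centerline Gaussian plume equation with total ground reflection conveys the basic calculation behind "Gaussian methods". The sketch below is a generic illustration, not MACCS2 code; the numeric inputs are invented, and in practice the dispersion coefficients come from stability-class correlations at each downwind distance.

```python
import math

def plume_centerline_conc(Q, u, sigma_y, sigma_z, H):
    """Ground-level, centerline concentration of a Gaussian plume with
    total ground reflection.

    Q: source strength (g/s)
    u: mean wind speed (m/s)
    sigma_y, sigma_z: horizontal/vertical dispersion coefficients (m)
                      evaluated at the downwind distance of interest
    H: effective release height (m)
    Returns concentration in g/m^3.
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-H * H / (2.0 * sigma_z * sigma_z)))

# Illustrative (invented) numbers: a 1 g/s ground-level release in a
# 5 m/s wind, with sigma_y = 30 m and sigma_z = 15 m at the receptor.
c = plume_centerline_conc(Q=1.0, u=5.0, sigma_y=30.0, sigma_z=15.0, H=0.0)
```

Raising the effective release height H only dilutes the ground-level centerline value, which is why elevated releases produce their peak dose farther downwind.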
Windtalking Computers: Frequency Normalization, Binary Coding Systems and Encryption
Zirkind, Givon
2009-01-01
The goal of this paper is to discuss the application of known techniques, knowledge and technology in a novel way, to encrypt computer and non-computer data. To-date most computers use base 2 and most encryption systems use ciphering and/or an encryption algorithm, to convert data into a secret message. The method of having the computer "speak another secret language" as used in human military secret communications has never been imitated. The author presents the theory and several possible implementations of a method for computers for secret communications analogous to human beings using a secret language or; speaking multiple languages. The kind of encryption scheme proposed significantly increases the complexity of and the effort needed for, decryption. As every methodology has its drawbacks, so too, the data of the proposed system has its drawbacks. It is not as compressed as base 2 would be. However, this is manageable and acceptable, if the goal is very strong encryption: At least two methods and their ...
TPASS: a gamma-ray spectrum analysis and isotope identification computer code
Energy Technology Data Exchange (ETDEWEB)
Dickens, J.K.
1981-03-01
The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.
Development of a system of computer codes for severe accident analyses and its applications
Energy Technology Data Exchange (ETDEWEB)
Chang, Soon Hong; Cheon, Moon Heon; Cho, Nam jin; No, Hui Cheon; Chang, Hyeon Seop; Moon, Sang Kee; Park, Seok Jeong; Chung, Jee Hwan [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1991-12-15
The objective of this study is to develop a system of computer codes for postulated severe accident analyses in nuclear power plants. Such a system of codes is necessary to conduct individual plant examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, and can extract plant-specific vulnerabilities to severe accidents together with ideas for enhancing overall accident resistance. The scope and contents of this study are as follows: development of a system of computer codes for severe accident analyses, and development of a severe accident management strategy.
Proceedings of the conference on computer codes and the linear accelerator community
Energy Technology Data Exchange (ETDEWEB)
Cooper, R.K. (comp.)
1990-07-01
The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.
Exact Gap Computation for Code Coverage Metrics in ISO-C
Richter, Dirk; 10.4204/EPTCS.80.4
2012-01-01
Test generation and test data selection are difficult tasks in model based testing. Tests for a program can be combined into a test suite, and much research has been done to quantify and improve the quality of a test suite. Code coverage metrics estimate this quality: the quality is considered good if the code coverage value is high, ideally 100%. Unfortunately, it might be impossible to achieve 100% code coverage, for example because of dead code. There is thus a gap between the feasible and the theoretical maximal possible code coverage value. Our review of the literature indicates that no current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly in an ISO-C compatible semantics and similar languages, and describes an efficient approximation of the gap in all other cases. Thus, a tester can decide whether more tests could be generated or are necessary to achieve better coverage.
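The feasibility gap described above can be seen in a toy example. The function below (an invented illustration, with hand-rolled instrumentation standing in for a real coverage tool) contains a branch whose guard contradicts the enclosing one, so no input can ever reach it; exhaustive testing tops out below 100% coverage, and the difference is exactly the gap.

```python
hits = set()

def classify(x: int) -> str:
    """Toy function containing an infeasible (dead) branch."""
    if x >= 0:
        hits.add("guard")
        if x < 0:              # contradicts the enclosing guard: dead code
            hits.add("dead")
            return "impossible"
        hits.add("nonneg")
        return "non-negative"
    hits.add("neg")
    return "negative"

# Even exhaustive testing over a wide input range cannot reach "dead":
for x in range(-1000, 1001):
    classify(x)

total_branches = 4
feasible = len(hits)                                  # only 3 are reachable
gap = (total_branches - feasible) / total_branches    # 0.25
```

A tester seeing 75% branch coverage here should stop generating tests: the remaining 25% is provably unreachable, which is precisely what an exact gap computation would report.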
Visualization of elastic wavefields computed with a finite difference code
Energy Technology Data Exchange (ETDEWEB)
Larsen, S. [Lawrence Livermore National Lab., CA (United States); Harris, D.
1994-11-15
The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.
Compendium of computer codes for the safety analysis of fast breeder reactors
Energy Technology Data Exchange (ETDEWEB)
1977-10-01
The objective of the compendium is to provide the reader with a guide which briefly describes many of the computer codes used for liquid metal fast breeder reactor safety analyses, since it is for this system that most of the codes have been developed. The compendium is designed to address the following frequently asked questions from individuals in licensing and research and development activities: (1) What does the code do? (2) To what safety problems has it been applied? (3) What are the code's limitations? (4) What is being done to remove these limitations? (5) How does the code compare with experimental observations and other code predictions? (6) What reference documents are available?
Introduction to error correcting codes in quantum computers
Salas, P J
2006-01-01
The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, gate implementation, and quantum error correction will be considered error free. Finally, we relax this unrealistic assumption, introducing the quantum fault-tolerant concept. The existence of an error threshold permits the conclusion that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage or memory computation error threshold: below 5.2 x 10^-5. The encoding is made by means of the [[7,1,3]] code.
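The [[7,1,3]] code referred to is the seven-qubit Steane code. As a small sketch (assuming the standard construction from the classical [7,4] Hamming code, which the abstract does not spell out), the snippet below checks the defining consistency condition: every X-type stabilizer generator must commute with every Z-type one, and for Pauli strings of opposite type this holds exactly when their supports intersect in an even number of qubits.

```python
from itertools import product

# Supports of the three [7,4] Hamming parity-check rows (qubits 1..7):
hamming_rows = [
    {1, 3, 5, 7},
    {2, 3, 6, 7},
    {4, 5, 6, 7},
]

# Steane code: three X-type and three Z-type stabilizer generators,
# each supported on one Hamming row.
x_stabs = hamming_rows
z_stabs = hamming_rows

def commute(x_support, z_support):
    """An X-string and a Z-string anticommute on each qubit where both
    act, so they commute overall iff the supports intersect evenly."""
    return len(x_support & z_support) % 2 == 0

all_commute = all(commute(x, z) for x, z in product(x_stabs, z_stabs))
# Every pair of Hamming rows overlaps in an even number of positions,
# so the six generators define a valid abelian stabilizer group.
```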
High-Performance Java Codes for Computational Fluid Dynamics
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distribution is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed algorithm for simulation includes four main steps. The first step is the modeling of neutron/gamma particle transport and their interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed computer code is applicable to both neutron and gamma sources; hence, discrimination of neutrons and gammas in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and the results obtained from similar computer codes like SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs...
Automatic Parallelization Tool: Classification of Program Code for Parallel Computing
Directory of Open Access Journals (Sweden)
Mustafa Basthikodi
2016-04-01
Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly enhanced parallelism, and compilers are being updated to meet the resulting challenges of synchronization and threading. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current approaches to the classification of algorithms into species; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures match different issues while performing a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants, and mathematical functions. With this, we can retain significant data that is not captured by the original species of algorithms. We implemented the new theory in the tool, enabling automatic characterization of program code.
Robust Coding for Lossy Computing with Observation Costs
Ahmadi, Behzad
2011-01-01
An encoder wishes to minimize the bit rate necessary to guarantee that a decoder is able to calculate a symbol-wise function of a sequence available only at the encoder and a sequence that can be measured only at the decoder. This classical problem, first studied by Yamamoto, is addressed here by including two new aspects: (i) The decoder obtains noisy measurements of its sequence, where the quality of such measurements can be controlled via a cost-constrained "action" sequence, which is taken at the decoder or at the encoder; (ii) Measurement at the decoder may fail in a way that is unpredictable to the encoder, thus requiring robust encoding. The considered scenario generalizes known settings such as the Heegard-Berger-Kaspi and the "source coding with a vending machine" problems. The rate-distortion-cost function is derived in relevant special cases, along with general upper and lower bounds. Numerical examples are also worked out to obtain further insight into the optimal system design.
The Uncertainty Test for the MAAP Computer Code
Energy Technology Data Exchange (ETDEWEB)
Park, S. H.; Song, Y. M.; Park, S. Y.; Ahn, K. I.; Kim, K. R.; Lee, Y. J. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2008-10-15
After the Three Mile Island Unit 2 (TMI-2) and Chernobyl accidents, severe-accident safety issues have been treated from various perspectives. A major issue in our research area is the Level 2 PSA. The chief difficulty in expanding the Level 2 PSA as a risk-informed activity is uncertainty. Past efforts emphasized improving the quality of the internal-event PSA, but comparatively little effort has gone into reducing the phenomenological uncertainty in the Level 2 PSA. In Korea, the degree of uncertainty in Level 2 PSA models is high, and a framework to reduce that uncertainty needs to be secured. We have not yet established uncertainty-assessment technology of our own; the assessment methodology itself depends on advanced countries. In those countries, severe-accident simulators are implemented at the hardware level, whereas in our case the basic functions can be implemented at the software level. Under these circumstances at home and abroad, similar efforts such as UQM and MELCOR were surveyed. Drawing on these examples, the SAUNA (Severe Accident UNcertainty Analysis) system is being developed in our project to assess and reduce the uncertainty in the Level 2 PSA. The MAAP code was selected to analyze the uncertainty in a severe accident.
Development of a model and computer code to describe solar grade silicon production processes
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.
2015-08-01
Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
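The dynamically adapted time step mentioned above can be sketched in a few lines. This is a simplified 1D illustration of the CFL criterion only, not GOEMHD3's actual implementation; the function and variable names are hypothetical:

```python
import numpy as np

def cfl_timestep(v, c_fast, dx, safety=0.5):
    """Largest stable explicit time step on a uniform 1D grid:
    dt <= safety * dx / max(|v| + c_fast), the CFL criterion,
    where c_fast stands in for the fastest wave speed."""
    max_speed = np.max(np.abs(v) + c_fast)
    return safety * dx / max_speed

# toy example: flow velocity plus a fast wave speed at each grid point
v = np.array([0.0, 1.0, -2.0])
c_fast = np.array([1.0, 1.0, 1.0])
dt = cfl_timestep(v, c_fast, dx=0.1)
```

Re-evaluating such a bound every step lets an explicit scheme take the largest step that remains stable as the solution evolves.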
Validation of physics and thermalhydraulic computer codes for advanced Candu reactor applications
Energy Technology Data Exchange (ETDEWEB)
Wren, D.J.; Popov, N.; Snell, V.G. [Atomic Energy of Canada Ltd, (Canada)
2004-07-01
Atomic Energy of Canada Ltd. (AECL) is developing an Advanced Candu Reactor (ACR) that is an evolutionary advancement of the currently operating Candu 6 reactors. The ACR is being designed to produce electrical power for a capital cost and at a unit-energy cost significantly less than that of the current reactor designs. The ACR retains the modular Candu concept of horizontal fuel channels surrounded by a heavy water moderator. However, ACR uses slightly enriched uranium fuel compared to the natural uranium used in Candu 6. This achieves the twin goals of improved economics (via large reductions in the heavy water moderator volume and replacement of the heavy water coolant with light water coolant) and improved safety. AECL has developed and implemented a software quality assurance program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. Since the basic design of the ACR is equivalent to that of the Candu 6, most of the key phenomena associated with the safety analyses of ACR are common, and the Candu industry standard tool-set of safety analysis codes can be applied to the analysis of the ACR. A systematic assessment of computer code applicability addressing the unique features of the ACR design was performed covering the important aspects of the computer code structure, models, constitutive correlations, and validation database. Arising from this assessment, limited additional requirements for code modifications and extensions to the validation databases have been identified. This paper provides an outline of the AECL software quality assurance program process for the validation of computer codes used to perform physics and thermal-hydraulics safety analyses of the ACR. It describes the additional validation work that has been identified for these codes and the planned, and ongoing, experimental programs to extend the code validation as required to address specific ACR design
Computer code simulations of explosions in flow networks and comparison with experiments
Gregory, W. S.; Nichols, B. D.; Moore, J. A.; Smith, P. R.; Steinke, R. G.; Idzorek, R. D.
1987-10-01
A program of experimental testing and computer code development for predicting the effects of explosions in air-cleaning systems is being carried out for the Department of Energy. This work is a combined effort by the Los Alamos National Laboratory and New Mexico State University (NMSU). Los Alamos has the lead responsibility in the project and develops the computer codes; NMSU performs the experimental testing. The emphasis in the program is on obtaining experimental data to verify the analytical work. The primary benefit of this work will be the development of a verified computer code that safety analysts can use to analyze the effects of hypothetical explosions in nuclear plant air cleaning systems. The experimental data show the combined effects of explosions in air-cleaning systems that contain all of the important air-cleaning elements (blowers, dampers, filters, ductwork, and cells). A small experimental set-up consisting of multiple rooms, ductwork, a damper, a filter, and a blower was constructed. Explosions were simulated with a shock tube, hydrogen/air-filled gas balloons, and blasting caps. Analytical predictions were made using the EVENT84 and NF85 computer codes. The EVENT84 code predictions were in good agreement with the effects of the hydrogen/air explosions, but they did not model the blasting cap explosions adequately. NF85 predicted shock entrance to and within the experimental set-up very well. The NF85 code was not used to model the hydrogen/air or blasting cap explosions.
Algorithms and computer codes for atomic and molecular quantum scattering theory
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
This workshop has succeeded in bringing up 11 different coupled equation codes on the NRCC computer, testing them against a set of 24 different test problems and making them available to the user community. These codes span a wide variety of methodologies, and factors of up to 300 were observed in the spread of computer times on specific problems. A very effective method was devised for examining the performance of the individual codes in the different regions of the integration range. Many of the strengths and weaknesses of the codes have been identified. Based on these observations, a hybrid code has been developed which is significantly superior to any single code tested. Thus, not only have the original goals been fully met, the workshop has resulted directly in an advancement of the field. All of the computer programs except VIVS are available upon request from the NRCC. Since an improved version of VIVS is contained in the hybrid program, VIVAS, it was not made available for distribution. The individual program LOGD is, however, available. In addition, programs which compute the potential energy matrices of the test problems are also available. The software library names for Tests 1, 2 and 4 are HEH2, LICO, and EN2, respectively.
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache, CEA, France)
2011-06-01
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the
Issues in computational fluid dynamics code verification and validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Blottner, F.G.
1997-09-01
A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.
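One standard technique for detecting and quantifying discretization error of the kind discussed here is grid-convergence analysis with Richardson extrapolation. A minimal sketch under the assumption of a constant grid refinement ratio; the function names are illustrative, not the paper's:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy p from solutions on three
    systematically refined grids with constant refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson(f_medium, f_fine, p, r=2.0):
    """Richardson-extrapolated estimate of the grid-converged value."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# synthetic second-order data: f(h) = 1 + 0.3*h**2, exact value 1.0
f = lambda h: 1.0 + 0.3 * h**2
p = observed_order(f(0.4), f(0.2), f(0.1))
est = richardson(f(0.2), f(0.1), p)
```

Comparing the observed order p against the scheme's formal order is a common verification check; a mismatch flags coding errors or solutions outside the asymptotic range.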
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
The eukaryotic genome contains varying numbers of non-coding RNA (ncRNA) genes. "Computational RNomics" takes a multidisciplinary approach, drawing on fields such as information science, to resolve the structure and function of ncRNAs. Here, we review the main issues in "Computational RNomics": data storage and management, ncRNA gene identification and characterization, and ncRNA target identification and functional prediction, and we summarize the main methods and current content of "Computational RNomics".
Symbolic coding for noninvertible systems: uniform approximation and numerical computation
Beyn, Wolf-Jürgen; Hüls, Thorsten; Schenke, Andre
2016-11-01
It is well known that the homoclinic theorem, which conjugates a map near a transversal homoclinic orbit to a Bernoulli subshift, extends from invertible to specific noninvertible dynamical systems. In this paper, we provide a unifying approach that combines such a result with a fully discrete analog of the conjugacy for finite but sufficiently long orbit segments. The underlying idea is to solve appropriate discrete boundary value problems in both cases, and to use the theory of exponential dichotomies to control the errors. This leads to a numerical approach that allows us to compute the conjugacy to any prescribed accuracy. The method is demonstrated for several examples where invertibility of the map fails in different ways.
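The flavor of symbolic coding on a finite orbit segment can be illustrated with a toy example: recording which side of a partition each iterate of a noninvertible map falls on. This uses the logistic map for concreteness and is not the paper's boundary-value method; the names are illustrative:

```python
def itinerary(f, x0, n, boundary=0.5):
    """Symbolic coding of a finite orbit segment: emit symbol 0 or 1
    according to which side of the partition boundary each iterate
    of the map f lies on."""
    symbols, x = [], x0
    for _ in range(n):
        symbols.append(0 if x < boundary else 1)
        x = f(x)
    return symbols

# logistic map: a standard noninvertible map with chaotic dynamics
logistic = lambda x: 4.0 * x * (1.0 - x)
code = itinerary(logistic, 0.3, 8)
```

For hyperbolic sets this assignment of symbol sequences to orbits is exactly the conjugacy to a Bernoulli subshift that the homoclinic theorem provides; the paper's contribution is controlling the error of such codings for finite orbit segments.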
Benchmark Problems Used to Assess Computational Aeroacoustics Codes
Dahl, Milo D.; Envia, Edmane
2005-01-01
The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.
Energy Technology Data Exchange (ETDEWEB)
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment, or dosimetry. The presentations were divided into two sessions: (1) methodology and (2) uses in industrial, medical, or research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte Carlo simulation; and second, a neural-network approach involving a learning platform generated through a previous Monte Carlo simulation. This document gathers the slides of the presentations.
Energy Technology Data Exchange (ETDEWEB)
Hoffman, F. O.; Miller, C. W.; Shaeffer, D. L.; Garten, Jr., C. T.; Shor, R. W.; Ensminger, J. T.
1977-04-01
The objective of this paper is to present a compilation of computer codes for the assessment of accidental or routine releases of radioactivity to the environment from nuclear power facilities. The capabilities of 83 computer codes in the areas of environmental transport and radiation dosimetry are summarized in tabular form. This preliminary analysis clearly indicates that the initial efforts in assessment methodology development have concentrated on atmospheric dispersion, external dosimetry, and internal dosimetry via inhalation. The incorporation of terrestrial and aquatic food chain pathways has been a more recent development and reflects the current requirements of environmental legislation and the needs of regulatory agencies. The characteristics of the conceptual models employed by these codes are reviewed. The appendixes include abstracts of the codes and indexes by author, key words, publication description, and title.
Compendium of computer codes for the researcher in magnetic fusion energy
Energy Technology Data Exchange (ETDEWEB)
Porter, G.D. (ed.)
1989-03-10
This is a compendium of computer codes available to the fusion researcher. It is intended to be a document that permits a quick evaluation of the tools available to the experimenter who wants both to analyze his data and to compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone "down the hall" when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index. I encourage any additional criticism of this document. 137 refs.
HIFI: a computer code for projectile fragmentation accompanied by incomplete fusion
Energy Technology Data Exchange (ETDEWEB)
Wu, J.R.
1980-07-01
A brief summary of a model proposed to describe projectile fragmentation accompanied by incomplete fusion and the instructions for the use of the computer code HIFI are given. The code HIFI calculates single inclusive spectra, coincident spectra and excitation functions resulting from particle-induced reactions. It is a multipurpose program which can calculate any type of coincident spectra as long as the reaction is assumed to take place in two steps.
SAMDIST A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters
Leal, L C
1995-01-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
Fault-tolerant quantum computation with asymmetric Bacon-Shor codes
Brooks, Peter; Preskill, John
2013-03-01
We develop a scheme for fault-tolerant quantum computation based on asymmetric Bacon-Shor codes, which works effectively against highly biased noise dominated by dephasing. We find the optimal Bacon-Shor block size as a function of the noise strength and the noise bias, and estimate the logical error rate and overhead cost achieved by this optimal code. Our fault-tolerant gadgets, based on gate teleportation, are well suited for hardware platforms with geometrically local gates in two dimensions.
The development of an intelligent interface to a computational fluid dynamics flow-solver code
Williams, Anthony D.
1988-01-01
Researchers at NASA Lewis are currently developing an 'intelligent' interface to aid in the development and use of large, computational fluid dynamics flow-solver codes for studying the internal fluid behavior of aerospace propulsion systems. This paper discusses the requirements, design, and implementation of an intelligent interface to Proteus, a general purpose, three-dimensional, Navier-Stokes flow solver. The interface is called PROTAIS to denote its introduction of artificial intelligence (AI) concepts to the Proteus code.
LEADS-DC: A computer code for intense dc beam nonlinear transport simulation
Institute of Scientific and Technical Information of China (English)
Anonymous
2011-01-01
An intense dc beam nonlinear transport code has been developed. The code is written in Visual FORTRAN 6.6 and comprises about 13000 lines. The particle distribution in the transverse cross section is uniform or Gaussian. The space-charge forces are calculated by the PIC (particle-in-cell) scheme, and the effects of the applied fields on the particle motion are calculated with the Lie algebraic method through the third-order approximation, so the solutions to the equations of particle motion are self-consistent. The results obtained from the theoretical analysis have been incorporated into the computer code. Many optical beam elements are included, so the code can simulate intense dc particle motion in beam transport lines, high-voltage dc accelerators, and ion implanters.
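The charge-deposition step at the heart of a PIC space-charge calculation can be sketched in 1D with linear (cloud-in-cell) weighting. This is an illustrative simplification, not LEADS-DC's actual routine; all names are hypothetical:

```python
import numpy as np

def deposit_charge(positions, q, grid_n, dx):
    """Cloud-in-cell (linear) charge deposition onto a 1D grid:
    each particle's charge q is shared between its two nearest grid
    nodes in proportion to proximity, conserving total charge."""
    rho = np.zeros(grid_n)
    for x in positions:
        cell = int(x / dx)
        frac = x / dx - cell
        rho[cell] += q * (1.0 - frac)
        rho[cell + 1] += q * frac
    return rho

# two particles at x = 0.25 and x = 1.5 on a 4-node grid with dx = 1
rho = deposit_charge([0.25, 1.5], q=1.0, grid_n=4, dx=1.0)
```

The gridded charge density is then fed to a field solve (e.g. Poisson's equation), and the resulting fields are interpolated back to the particles with the same weights, which is what makes the scheme self-consistent.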
A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters
Mackowski, D. W.; Mishchenko, M. I.
2011-01-01
A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface instructions to enable use on distributed-memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.
Energy Technology Data Exchange (ETDEWEB)
Mann, F.M.
1998-01-26
The Tank Waste Remediation System (TWRS) is responsible for the safe storage, retrieval, and disposal of waste currently being held in 177 underground tanks at the Hanford Site. In order to successfully carry out its mission, TWRS must perform environmental analyses describing the consequences of tank contents leaking from tanks and associated facilities during the storage, retrieval, or closure periods, and of immobilized low-activity tank waste contaminants leaving disposal facilities. Because of the large size of the facilities and the great depth of the dry zone (known as the vadose zone) underneath the facilities, sophisticated computer codes are needed to model the transport of the tank contents or contaminants. This document presents the code selection criteria for those vadose zone analyses (a subset of the above analyses) where the hydraulic properties of the vadose zone are constant in time, the geochemical behavior of the contaminant-soil interaction can be described by simple models, and the geologic or engineered structures are complicated enough to require a two- or three-dimensional model. Thus, simple analyses would not need to use the fairly sophisticated codes that would meet the selection criteria in this document. Similarly, analyses that involve complex chemical modeling (such as those involving large tank leaks or the modeling of contaminant release from glass waste forms) are excluded. The analyses covered here are those where the movement of contaminants can be relatively simply calculated from the moisture flow. These code selection criteria are based on information from the low-level waste programs of the US Department of Energy (DOE) and of the US Nuclear Regulatory Commission, as well as experience gained in the DOE Complex in applying these criteria. Appendix table A-1 provides a comparison between the criteria in these documents and those used here. This document does not define the models (that
SCALE: A modular code system for performing standardized computer analyses for licensing evaluation
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.
ASHMET: a computer code for estimating insolation incident on tilted surfaces
Energy Technology Data Exchange (ETDEWEB)
Elkin, R.F.; Toelle, R.G.
1980-05-01
A computer code, ASHMET, has been developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors have been included. Climatological data for 248 US locations are built into the code. This report describes the methodology of the code and its input and output. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationships, modified by a clearness index derived from SOLMET-measured solar radiation data for a horizontal surface.
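The ASHRAE clear-day relationship referred to above takes the form I_DN = A * exp(-B / sin(beta)), with beta the solar altitude, projected onto a tilted surface by the cosine of the incidence angle. A sketch of that relationship only, not ASHMET itself; A and B are month-dependent ASHRAE coefficients and the numbers used here are merely illustrative:

```python
import math

def clear_day_direct(A, B, solar_altitude_deg, incidence_deg):
    """ASHRAE clear-day model: direct normal irradiance
    I_DN = A * exp(-B / sin(beta)), projected onto a tilted surface
    by the cosine of the incidence angle. Returns 0 when the sun is
    at or below the horizon or behind the surface."""
    beta = math.radians(solar_altitude_deg)
    if beta <= 0.0:
        return 0.0
    i_dn = A * math.exp(-B / math.sin(beta))
    return i_dn * max(0.0, math.cos(math.radians(incidence_deg)))

# illustrative coefficient values (A in W/m^2, B dimensionless)
flux = clear_day_direct(A=1085.0, B=0.207,
                        solar_altitude_deg=60.0, incidence_deg=30.0)
```

A full insolation estimate would add diffuse and ground-reflected components and, as the abstract notes, scale the clear-day values by a location-dependent clearness index.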
Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.
Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.
Lilley, D. G.; Rhode, D. L.
1982-01-01
A primitive pressure-velocity variable finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual, dealing with the computational problem, showing how the mathematical basis and computational scheme may be translated into a computer program is presented. A flow chart, FORTRAN IV listing, notes about various subroutines and a user's guide are supplied as an aid to prospective users of the code.
Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code
Directory of Open Access Journals (Sweden)
GHULAM MUSTAFA
2017-01-01
Compute-intensive programs generally spend a significant fraction of their execution time in a small amount of repetitive code, commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots; hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system.
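The core transformation can be sketched as follows: a DOALL loop (one with no cross-iteration dependences) is split across workers. This is our illustration of the idea, not the paper's Java implementation; a JIT would emit native parallel code, whereas here worker threads merely stand in for that.

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # any pure iteration body with no cross-iteration dependences
    return i * i

def doall(n, workers=4):
    # split the iteration space across workers; each iteration is independent,
    # so the results can be gathered in order without synchronization
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(body, range(n)))
```

Because no iteration reads a value written by another, the parallel schedule is guaranteed to produce the same result as the sequential loop.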
Development of a computer code for thermal hydraulics of reactors (THOR). [BWR and PWR
Energy Technology Data Exchange (ETDEWEB)
Wulff, W
1975-01-01
The purpose of the advanced code development work is to construct a computer code for the prediction of thermohydraulic transients in water-cooled nuclear reactor systems. The fundamental formulation of fluid dynamics is to be based on the one-dimensional drift flux model for non-homogeneous, non-equilibrium flows of two-phase mixtures. Particular emphasis is placed on component modeling, automatic prediction of initial steady state conditions, inclusion of one-dimensional transient neutron kinetics, freedom in the selection of computed spatial detail, development of reliable constitutive descriptions, and modular code structure. Numerical solution schemes have been implemented to integrate simultaneously the one-dimensional transient drift flux equations. The lumped-parameter modeling analyses of thermohydraulic transients in the reactor core and in the pressurizer have been completed. The code development for the prediction of the initial steady state has been completed with preliminary representation of individual reactor system components. A program has been developed to predict critical flow expanding from a dead-ended pipe; the computed results have been compared and found in good agreement with idealized flow solutions. Transport properties for liquid water and water vapor have been coded and verified.
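The one-dimensional drift flux formulation mentioned above relates phase velocities through a distribution parameter and a drift velocity. A schematic of the Zuber-Findlay form is sketched below; the parameter values are generic illustrations, not THOR's correlations.

```python
def gas_velocity(j, c0=1.13, v_gj=0.24):
    """Drift-flux gas velocity: v_g = C0*j + v_gj, with j the mixture
    volumetric flux, C0 the distribution parameter, v_gj the drift velocity."""
    return c0 * j + v_gj

def void_fraction(j_g, j, c0=1.13, v_gj=0.24):
    """Zuber-Findlay void fraction: alpha = j_g / (C0*j + v_gj)."""
    return j_g / (c0 * j + v_gj)
```

This algebraic closure is what lets a single mixture momentum equation represent non-homogeneous, non-equilibrium two-phase flow.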
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting the way PIC codes are implemented. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy cost of data movement by using more and more cores on each compute node ('fat nodes') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that can process multiple data with one arithmetic operator in one clock cycle; SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process multiple instructions on multiple data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
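The memory-locality point above is usually realized by a structure-of-arrays (SoA) particle layout: each coordinate component is stored contiguously, which is what SIMD units and GPU memory coalescing favor. The sketch below is our illustration of the layout, not PICSAR's data structures.

```python
class ParticlesSoA:
    """Structure-of-arrays particle container (illustrative, not PICSAR's)."""

    def __init__(self, n):
        self.x = [0.0] * n    # each component stored contiguously
        self.vx = [0.0] * n

    def push(self, dt):
        # a single tight loop over contiguous arrays is the pattern that
        # compilers auto-vectorize onto SIMD registers in a Fortran/C code
        x, vx = self.x, self.vx
        for i in range(len(x)):
            x[i] += vx[i] * dt
```

The contrast is with an array-of-structures layout, where the components of one particle are interleaved and every vector load gathers strided memory.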
Energy Technology Data Exchange (ETDEWEB)
TP Clement
1999-06-24
RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe the reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in the groundwater head distribution. The RT3D code was originally developed to support contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants, including benzene-toluene-xylene mixtures (BTEX) and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other type of user-specified reactive transport system. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
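One reaction kinetics of the kind such modules handle, sequential first-order dechlorination (PCE decaying to TCE), can be integrated analytically over a transport time step. The sketch below is our illustration of that single reaction step, not RT3D's implicit solver.

```python
import math

def decay_step(pce, tce, k1, k2, dt):
    """Advance the chain PCE -k1-> TCE -k2-> ... over one step dt
    using the exact (Bateman) solution; requires k1 != k2."""
    pce_new = pce * math.exp(-k1 * dt)
    tce_new = (tce * math.exp(-k2 * dt)
               + pce * k1 / (k2 - k1) * (math.exp(-k1 * dt) - math.exp(-k2 * dt)))
    return pce_new, tce_new
```

In an operator-split transport code, a step like this is applied cell by cell after the advection-dispersion step; stiffer or nonlinear kinetics are the case that motivates an implicit ODE solver instead.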
Computational approaches towards understanding human long non-coding RNA biology.
Jalali, Saakshi; Kapoor, Shruti; Sivadas, Ambily; Bhartiya, Deeksha; Scaria, Vinod
2015-07-15
Long non-coding RNAs (lncRNAs) form the largest class of non-protein coding genes in the human genome. While a small subset of well-characterized lncRNAs has demonstrated their significant role in diverse biological functions like chromatin modifications, post-transcriptional regulation, imprinting etc., the functional significance of a vast majority of them still remains an enigma. Increasing evidence of the implications of lncRNAs in various diseases including cancer and major developmental processes has further enhanced the need to gain mechanistic insights into the lncRNA functions. Here, we present a comprehensive review of the various computational approaches and tools available for the identification and annotation of long non-coding RNAs. We also discuss a conceptual roadmap to systematically explore the functional properties of the lncRNAs using computational approaches.
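One common signal used by the computational lncRNA classifiers reviewed above is coding potential, for which the longest open reading frame is a standard (if crude) proxy: long ORFs suggest a protein-coding transcript. The toy scorer below is our illustration of that single feature; real tools combine many features and both strands.

```python
def longest_orf(seq):
    """Length in nucleotides of the longest ATG..stop open reading frame
    on the given strand, scanned in all three frames."""
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                       # first start codon in frame
            elif codon in stops and start is not None:
                best = max(best, i + 3 - start) # ORF includes the stop codon
                start = None
    return best
```

A transcript whose longest ORF stays short despite its length is the basic pattern a coding/non-coding classifier tries to detect.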
Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.
Physical implementation of a Majorana fermion surface code for fault-tolerant quantum computation
Vijay, Sagar; Fu, Liang
2016-12-01
We propose a physical realization of a commuting Hamiltonian of interacting Majorana fermions realizing Z_2 topological order, using an array of Josephson-coupled topological superconductor islands. The required multi-body interaction Hamiltonian is naturally generated by a combination of charging energy induced quantum phase-slips on the superconducting islands and electron tunneling between islands. Our setup improves on a recent proposal for implementing a Majorana fermion surface code (Vijay et al 2015 Phys. Rev. X 5 041038), a 'hybrid' approach to fault-tolerant quantum computation that combines (1) the engineering of a stabilizer Hamiltonian with a topologically ordered ground state with (2) projective stabilizer measurements to implement error correction and a universal set of logical gates. Our hybrid strategy has advantages over the traditional surface code architecture in error suppression and single-step stabilizer measurements, and is widely applicable to implementing stabilizer codes for quantum computation.
Users manual for CAFE-3D : a computational fluid dynamics fire code.
Energy Technology Data Exchange (ETDEWEB)
Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma (Alion Science and Technology, Albuquerque, NM)
2005-03-01
The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.
Computer code to interchange CDS and wave-drag geometry formats
Johnson, V. S.; Turnock, D. L.
1986-01-01
A computer program has been developed on the PRIME minicomputer to provide an interface for the passage of aircraft configuration geometry data between the Rockwell Configuration Development System (CDS) and a wireframe geometry format used by aerodynamic design and analysis codes. The interface program allows aircraft geometry which has been developed in CDS to be directly converted to the wireframe geometry format for analysis. Geometry which has been modified in the analysis codes can be transformed back to a CDS geometry file and examined for physical viability. Previously created wireframe geometry files may also be converted into CDS geometry files. The program provides a useful link between a geometry creation and manipulation code and analysis codes by providing rapid and accurate geometry conversion.
Methods, algorithms and computer codes for calculation of electron-impact excitation parameters
Bogdanovich, P; Stonys, D
2015-01-01
We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize the multireference atomic wavefunctions which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both of them employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. The versions differ only in the one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...
Application of Multiple Description Coding for Adaptive QoS Mechanism for Mobile Cloud Computing
Directory of Open Access Journals (Sweden)
Ilan Sadeh
2014-02-01
Multimedia transmission over cloud infrastructure is a hot research topic worldwide. It is strongly related to video streaming, VoIP, mobile networks, and computer networks. The goal is a reliable integration of telephony, video and audio transmission, computing, and broadband transmission based on cloud computing. One promising approach to pave the way for mobile multimedia and cloud computing is Multiple Description Coding (MDC): TCP/IP and similar protocols would be used for transmission of text files, while the MDC "Send and Forget" algorithm would serve as the transmission method for multimedia over the cloud. Multiple Description Coding would improve the Quality of Service and would provide a new service of rate-adaptive streaming. This paper presents a new approach for improving the quality of multimedia and other services in the cloud by using MDC. First, the MDC Send and Forget algorithm is compared with existing protocols such as TCP/IP, UDP, RTP, etc. Then the achievable rate region for the MDC system is evaluated. Finally, a new subset of Quality of Service that considers blocking in multi-terminal multimedia networks and fidelity losses is considered.
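The defining property of MDC can be shown with a toy two-description codec (our illustration, not the paper's scheme): samples are split into even and odd subsequences, either description alone yields a usable lower-fidelity reconstruction, and both together restore the signal exactly.

```python
def mdc_encode(samples):
    """Split a sample stream into two descriptions (even/odd interleave)."""
    return samples[0::2], samples[1::2]

def mdc_decode(d0=None, d1=None):
    """Reconstruct from whichever descriptions arrived."""
    if d0 is not None and d1 is not None:
        out = []
        for a, b in zip(d0, d1):
            out += [a, b]                     # re-interleave both descriptions
        out += d0[len(d1):] + d1[len(d0):]    # odd-length tail
        return out
    desc = d0 if d0 is not None else d1
    # one description lost: approximate missing samples by repetition
    return [s for s in desc for _ in (0, 1)]
```

This is exactly the behavior "send and forget" relies on: a lost packet degrades quality gracefully instead of forcing a retransmission.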
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;
2011-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^n -> {0,1}^{O(n)} with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d=2 then w = Θ(n (log n / log log n)^2). (2) If d=3 then w = Θ(n lg lg n). (3...
Ivanov, Anisoara; Neacsu, Andrei
2011-01-01
This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…
A proposed framework for computational fluid dynamics code calibration/validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.
1993-12-31
The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for the weight calculations. The tighter integration between NPSS and WATE greatly enhances system-level analysis and optimization capabilities. It also facilitates the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case.
Multiple frequencies sequential coding for SSVEP-based brain-computer interface.
Directory of Open Access Journals (Sweden)
Yangsong Zhang
BACKGROUND: Steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) has become one of the most promising modalities for a practical noninvasive BCI system. Owing both to the limited refresh rate of liquid crystal display (LCD) or cathode ray tube (CRT) monitors, and to the specific physiological response property that only a very small number of stimuli at certain frequencies can evoke strong SSVEPs, the available frequencies for SSVEP stimuli are limited. Therefore, it may not be enough to code multiple targets with the traditional frequency coding protocols, which poses a big challenge for the design of a practical SSVEP-based BCI. This study aimed to provide an innovative coding method to tackle this problem. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present a novel protocol termed multiple frequencies sequential coding (MFSC) for SSVEP-based BCI. In MFSC, multiple frequencies are sequentially used in each cycle to code the targets. To fulfill the sequential coding, each cycle is divided into several coding epochs, and during each epoch a certain frequency is used. Different frequencies or the same frequency can be presented in the coding epochs, and different epoch sequences correspond to different targets. To show the feasibility of MFSC, we used two frequencies to realize four targets and carried out an offline experiment. The current study shows that: (1) MFSC is feasible and efficient; (2) the performance of SSVEP-based BCI based on MFSC can be comparable to some existing systems. CONCLUSIONS/SIGNIFICANCE: The proposed protocol could potentially implement many more targets with the limited available frequencies compared with the traditional frequency coding protocol. The efficiency of the new protocol was confirmed by a real-data experiment. We propose that SSVEP-based BCI under MFSC might be a promising choice in the future.
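The combinatorics behind MFSC can be sketched directly: with f available frequencies and m coding epochs per cycle, each target is one epoch-sequence, so f**m targets can be coded. The frequency values below are placeholders, not those used in the study.

```python
from itertools import product

def mfsc_codebook(freqs, epochs):
    """Enumerate all epoch-sequences of the given frequencies, one per target."""
    return list(product(freqs, repeat=epochs))

# two frequencies, two epochs per cycle -> four targets, as in the
# offline experiment described above (frequency values are illustrative)
codes = mfsc_codebook([10.0, 12.0], 2)
```

The gain over plain frequency coding is the exponent: f frequencies code only f targets directly, but f**m targets under sequential coding.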
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
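The general computational GI principle behind such a scheme can be shown in a few lines (our sketch, not the QR-CGI-OE optics or its CS reconstruction): correlating the bucket values B with the known random patterns I recovers the object, since <B*I(x)> - <B><I(x)> is proportional to the object's transmittance at x.

```python
import random

def gi_reconstruct(patterns, buckets):
    """Correlation reconstruction: <B*I(x)> - <B><I(x)> per pixel x."""
    n, m = len(patterns[0]), len(patterns)
    mean_b = sum(buckets) / m
    mean_i = [sum(p[x] for p in patterns) / m for x in range(n)]
    return [sum(b * p[x] for p, b in zip(patterns, buckets)) / m
            - mean_b * mean_i[x] for x in range(n)]

random.seed(0)
obj = [0, 1, 1, 0]                                   # transmissive pixels
pats = [[random.random() for _ in obj] for _ in range(5000)]
# bucket detector: total light passed through the object per pattern
bucks = [sum(p[x] * obj[x] for x in range(len(obj))) for p in pats]
img = gi_reconstruct(pats, bucks)
```

Without the patterns (the key), the bucket sequence alone carries no spatial information, which is what the encryption exploits; CS then reduces the number of measurements needed.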
MOLOCH computer code for molecular-dynamics simulation of processes in condensed matter
Directory of Open Access Journals (Sweden)
Derbenev I.V.
2011-01-01
Theoretical and experimental investigation into the properties of condensed matter is one of the mainstreams of RFNC-VNIITF scientific activity. The method of molecular dynamics (MD) is an innovative method of theoretical materials science. Modern supercomputers allow the direct simulation of collective effects in multibillion-atom samples, making it possible to model physical processes at the atomistic level, including material response to dynamic loads, radiation damage, the influence of defects and alloying additions on material mechanical properties, and the aging of actinides. Over the past ten years, the computer code MOLOCH has been developed at RFNC-VNIITF. It is a parallel code suitable for massively parallel computing. Modern programming techniques were used to make the code almost 100% efficient. Practically all instruments required for modelling were implemented in the code: a potential builder for different materials, simulation of physical processes in arbitrary 3D geometry, and calculated-data processing. A set of tests was developed to analyse the efficiency of the algorithms. It can be used to compare codes with different MD implementations against each other.
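The core of any MD code of this kind is a symplectic time-stepping loop. A minimal velocity-Verlet step for one particle in a harmonic well is sketched below as a generic illustration of the method, not of MOLOCH's algorithms or potentials.

```python
def velocity_verlet(x, v, k, dt, steps):
    """Integrate dx/dt = v, dv/dt = -k*x with the velocity-Verlet scheme."""
    a = -k * x                       # force for a harmonic potential
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k * x               # force at the updated position
        v += 0.5 * (a + a_new) * dt  # velocity from averaged accelerations
        a = a_new
    return x, v
```

Velocity Verlet is the standard choice in MD because it conserves energy over long runs (no secular drift), which matters when simulating aging or damage accumulation over many time steps.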
Energy Technology Data Exchange (ETDEWEB)
Strenge, D.L.; Peloquin, R.A.
1981-04-01
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.
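The dose bookkeeping described above reduces to multiplying a time-integrated air concentration by a pathway-specific dose conversion factor and summing per nuclide and exposure mode. The sketch below illustrates that accounting; the nuclide names and factor values are placeholders, not HADOC's library data.

```python
def dose(chi, dcf):
    """Sum dose per exposure pathway.

    chi: {nuclide: time-integrated air concentration}
    dcf: {nuclide: {pathway: dose conversion factor}}
    """
    total = {}
    for nuc, conc in chi.items():
        for path, factor in dcf[nuc].items():
            total[path] = total.get(path, 0.0) + conc * factor
    return total

# hypothetical single-nuclide release, two exposure modes
d = dose({"I-131": 2.0},
         {"I-131": {"inhalation": 3.0, "submersion": 0.5}})
```

The per-pathway totals also give the fractional contributions by radionuclide and exposure mode that the code can print.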
Experimental assessment of computer codes used for safety analysis of integral reactors
Energy Technology Data Exchange (ETDEWEB)
Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)
1995-09-01
Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation, and a passive RHRS through in-reactor HXs. These features defined the main trends in the experimental investigations and verification efforts for the computer codes applied. The paper briefly reviews the experimental investigations performed on the thermohydraulics of AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. An assessment of the applicability of RELAP5/mod3 for accident analysis in integral reactors is presented.
The MELTSPREAD-1 computer code for the analysis of transient spreading in containments
Energy Technology Data Exchange (ETDEWEB)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.
1990-01-01
A one-dimensional, multicell, Eulerian finite difference computer code (MELTSPREAD-1) has been developed to provide an improved prediction of the gravity-driven spreading and thermal interactions of molten corium flowing over a concrete or steel surface. In this paper, the modeling incorporated into the code is described and the spreading models are benchmarked against a simple 'dam break' problem as well as water-simulant spreading data obtained in a scaled apparatus of the Mk I containment. Results are also presented for a scoping calculation of the spreading behavior and shell thermal response in the full-scale Mk I system following vessel meltthrough. 24 refs., 15 figs.
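The 'dam break' benchmark mentioned above has a classical analytical solution (Ritter's, in our transcription) for an ideal dam break over a dry frictionless bed: inside the rarefaction fan the depth is h(x,t) = (2*sqrt(g*h0) - x/t)^2 / (9*g), giving the well-known constant depth 4*h0/9 at the dam location x = 0.

```python
import math

def ritter_depth(x, t, h0, g=9.81):
    """Water depth after an ideal dam break at x=0, t>0 (Ritter solution)."""
    c0 = math.sqrt(g * h0)           # shallow-water wave speed of the reservoir
    if x <= -c0 * t:
        return h0                    # undisturbed reservoir
    if x >= 2.0 * c0 * t:
        return 0.0                   # dry bed ahead of the front
    return (2.0 * c0 - x / t) ** 2 / (9.0 * g)
```

Benchmarking a spreading code against this profile checks the hydrodynamic relocation model in isolation, before thermal interactions with the substrate are added.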
Once-through CANDU reactor models for the ORIGEN2 computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A.G.; Bjerke, M.A.
1980-11-01
Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % /sup 235/U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.
V.S.O.P. (99/05) computer code system
Energy Technology Data Exchange (ETDEWEB)
Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.
2005-11-01
V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It covers the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics, and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate reactor operation from the initial core towards the equilibrium core. V.S.O.P. (99/05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions have been included concerning the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebble-bed cores. This latest code version was developed and tested under the WINDOWS-XP operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required, if desired, for storage of the source code (~65,000 Fortran statements). (orig.)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters
Energy Technology Data Exchange (ETDEWEB)
Reginatto, M.; Goldhagen, P.
1998-06-01
The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.
Moreno, Maggie; Baggio, Giosuè
2015-07-01
In signaling games, a sender has private access to a state of affairs and uses a signal to inform a receiver about that state. If no common association of signals and states is initially available, sender and receiver must coordinate to develop one. How do players divide coordination labor? We show experimentally that, if players switch roles at each communication round, coordination labor is shared. However, in games with fixed roles, coordination labor is divided: Receivers adjust their mappings more frequently, whereas senders maintain the initial code, which is transmitted to receivers and becomes the common code. In a series of computer simulations, player and role asymmetry as observed experimentally were accounted for by a model in which the receiver in the first signaling round has a higher chance of adjusting its code than its partner. From this basic division of labor among players, certain properties of role asymmetry, in particular correlations with game complexity, are seen to follow.
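The asymmetric-adjustment model described above can be sketched as a small simulation. All parameter values, and the degenerate initial receiver mapping, are our assumptions for illustration; the paper's model is fit to experimental data.

```python
import random

def simulate(states, rounds, p_recv=0.8, p_send=0.1, seed=1):
    """Fixed-role signaling game: on miscoordination the receiver adjusts
    with probability p_recv, the sender only rarely (p_send)."""
    rng = random.Random(seed)
    sender = {s: s for s in states}            # sender's initial code
    receiver = {s: states[0] for s in states}  # uninformative initial guess
    recv_adjust = send_adjust = 0
    for _ in range(rounds):
        s = rng.choice(states)                 # private state of affairs
        if receiver[sender[s]] != s:           # communication failed
            if rng.random() < p_recv:
                receiver[sender[s]] = s        # receiver adapts its mapping
                recv_adjust += 1
            elif rng.random() < p_send:
                sender[s] = rng.choice(states) # sender occasionally changes
                send_adjust += 1
    return recv_adjust, send_adjust
```

Because the receiver adjusts far more often, the sender's initial code survives and becomes the common code, reproducing the division of coordination labor observed with fixed roles.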
The MELTSPREAD-1 computer code for the analysis of transient spreading in containments
Energy Technology Data Exchange (ETDEWEB)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.
1990-01-01
Transient spreading of molten core materials is important in the assessment of severe-accident sequences for Mk-I boiling water reactors (BWRs). Of interest is whether core materials are able to spread over the pedestal and drywell floors to contact the containment shell and cause thermally induced shell failure, or whether heat transfer to underlying concrete and overlying water will freeze the melt short of the shell. The development of a computational capability for the assessment of this problem was initiated by Sienicki et al. in the form of the MELTSPREAD-0 code. Development is continuing in the form of the MELTSPREAD-1 code, which contains new models for phenomena that were ignored in the earlier code. This paper summarizes these new models, provides benchmarking calculations of the relocation model against an analytical solution as well as simulant spreading data, and summarizes the results of a scoping calculation for the full Mk-I system.
WOLF: a computer code package for the calculation of ion beam trajectories
Energy Technology Data Exchange (ETDEWEB)
Vogel, D.L.
1985-10-01
The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles will then be traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram PISA forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results to be performed.
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
Energy Technology Data Exchange (ETDEWEB)
McCann, R.A.; Lowery, P.S.
1987-10-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
Energy Technology Data Exchange (ETDEWEB)
McCann, R.A.; Lowery, P.S.; Lessor, D.L.
1987-09-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.
Chen, Y. S.
1986-03-01
In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
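The hybrid convection scheme mentioned above has a compact closed form; the sketch below is the standard textbook formulation (central differencing for cell Peclet numbers up to 2, pure upwinding with diffusion dropped beyond), not code taken from the report itself:

```python
def hybrid_coefficient(D, F):
    """East-face coefficient a_E of the hybrid differencing scheme.

    D: diffusion conductance at the face, F: convective mass flux.
    Equivalent to the compact form a_E = max(-F, D - F/2, 0):
    central differencing when |Pe| <= 2, upwinding otherwise.
    """
    Pe = F / D                      # cell Peclet number
    return D * max(0.0, 1.0 - 0.5 * abs(Pe)) + max(-F, 0.0)
```

For example, with D = 1 the coefficient is D - F/2 at |Pe| < 2, drops to 0 for strong flow away from the node (Pe > 2), and reduces to -F for strong flow toward it (Pe < -2).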
Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers
Energy Technology Data Exchange (ETDEWEB)
Nataf, J.M.; Winkelmann, F.
1992-09-01
We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
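The idea of generating solution code from symbolically entered equations can be sketched with an off-the-shelf computer algebra library. SymPy is used here purely as an analogy to SPARK's symbolic interface; the equation is made up:

```python
import sympy as sp

# A SPARK-like "object" is an equation entered in symbolic form; the
# symbolic interface then generates executable solution code for each
# variable the equation can be inverted for.
x, y = sp.symbols('x y')
eq = sp.Eq(x**2 + y, 7)

# invert the equation for y and emit a callable, standing in for the
# generated solution code
y_expr = sp.solve(eq, y)[0]          # 7 - x**2
solve_for_y = sp.lambdify(x, y_expr)
```

Given values for the remaining variables, the generated callable evaluates the unknown directly, which is the essence of what a symbolic interface contributes to an equation-based solver.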
Computing element evolution towards Exascale and its impact on legacy simulation codes
Energy Technology Data Exchange (ETDEWEB)
Colin de Verdiere, Guillaume J.L. [CEA, DAM, DIF, Arpajon (France)
2015-12-15
In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes. (orig.)
Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi
The COMPASS code is designed based on the moving particle semi-implicit method to simulate various complex mesoscale phenomena relevant to core disruptive accidents of sodium-cooled fast reactors. In this study, a computational framework for fluid-solid mixture flow simulations was developed for the COMPASS code. The passively moving solid model was used to simulate hydrodynamic interactions between fluid and solids. Mechanical interactions between solids were modeled by the distinct element method. A multi-time-step algorithm was introduced to couple these two calculations. The proposed computational framework for fluid-solid mixture flow simulations was verified by comparison between experimental and numerical studies of a water dam break with multiple solid rods.
Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il [Korea Atomic Energy Res. Inst., Taejon (Korea, Republic of)
1994-07-01
This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA have been converted from DOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop a dynamic risk management tool, PEPSI, and the determination of inspection and test priorities for motor-operated valves based on risk importance worths have been carried out. (Author).
Method for computing self-consistent solution in a gun code
Nelson, Eric M
2014-09-23
Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
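A minimal numerical sketch of this idea, on a toy 2x2 system whose two error eigenvalues (1 and 3) are chosen by us for illustration: a relaxation value that is unstable on its own for one error mode can still be used when alternated with a second value tuned to the other mode.

```python
import numpy as np

# Toy linear iteration x_{k+1} = x_k + w*(b - A x_k).  With eigenvalues
# 1 and 3, w = 1 alone is unstable for the lambda = 3 mode
# (|1 - 1*3| = 2 > 1), but alternating w = 1 with w = 1/3 removes each
# error mode in turn, as the abstract describes for gun codes.
A = np.diag([1.0, 3.0])          # stand-in operator with the two eigenvalues
b = np.array([1.0, 1.0])
x = np.zeros(2)
for w in (1.0, 1.0 / 3.0):       # one sweep of alternating relaxation
    x = x + w * (b - A @ x)
residual = np.linalg.norm(b - A @ x)
```

After one alternating sweep the residual of this diagonal toy system is reduced to rounding error, whereas repeating w = 1 alone would double the second error mode every iteration.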
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
Energy Technology Data Exchange (ETDEWEB)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.
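The kind of bookkeeping PREMOR performs, following nuclide densities exposed to a common neutron flux, can be sketched for a single absorber; the values, step count, and explicit-Euler update below are illustrative and not PREMOR's actual model:

```python
import math

def deplete(N0, sigma_a, phi, days, steps=1000):
    """Explicit-Euler burnup of dN/dt = -sigma_a * phi * N."""
    dt = days * 86400.0 / steps
    N = N0
    for _ in range(steps):
        N -= sigma_a * phi * N * dt
    return N

N0 = 1.0e21          # atoms/cm^3 (illustrative)
sigma_a = 1.0e-24    # absorption cross section, cm^2 (1 barn)
phi = 1.0e14         # neutron flux, n/cm^2/s
N_end = deplete(N0, sigma_a, phi, days=365)
# analytic solution of the same equation, for comparison
N_exact = N0 * math.exp(-sigma_a * phi * 365.0 * 86400.0)
```

A survey code follows many such densities simultaneously, coupled through the flux; the single-nuclide case just shows the basic exposure step.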
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal
2013-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^{Ω(n)} → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d=2, then w = Θ(n (lg n / lg lg n)^2); 2) if d=3, then w...
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;
2012-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^{Ω(n)} -> {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d=2 then w = Θ(n (log n / log log n)^2). (2) If d...
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
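The third stage of the method, statistical processing of the case-calculation results into an uncertainty range and a distribution law, reduces to elementary statistics. A sketch with made-up sample values (not data from the KORSAR study):

```python
import statistics

# Hypothetical per-case estimates of one closing-correlation parameter,
# normalized so the nominal value is 1.0.  Real estimates would come
# from the case calculations described in the abstract.
estimates = [1.02, 0.97, 1.05, 0.97, 1.01, 0.99, 1.03, 0.96, 1.00, 1.00]

mu = statistics.mean(estimates)
sigma = statistics.stdev(estimates)
# adopt a Gaussian law and take +/- 2 sigma as the variation range
low, high = mu - 2.0 * sigma, mu + 2.0 * sigma
```

The point of the method is that the range and the distribution law are dictated by the processed sample rather than by expert judgment, which is where the reduced subjectivism comes from.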
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology. However, this recent development is due to the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good performance in software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using message passing interface (MPI) protocols and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance using a typical multi-core high-performance workstation.
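The work-splitting structure shared by both parallelization modes can be sketched with a thread pool in place of OpenMP directives. This is a structural analogy only (pure Python threads will not speed up a CPU-bound loop), and the pi-estimation problem is our stand-in for a particle-transport batch:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def batch_hits(n, seed):
    """One worker's batch: count Monte Carlo samples inside the unit circle."""
    rng = random.Random(seed)          # independent stream per worker
    return sum(rng.random() ** 2 + rng.random() ** 2 < 1.0 for _ in range(n))

def pi_estimate(total=40_000, workers=4):
    """Split the histories across workers, then reduce the partial tallies."""
    per = total // workers
    with ThreadPoolExecutor(workers) as ex:
        hits = sum(ex.map(batch_hits, [per] * workers, range(workers)))
    return 4.0 * hits / (per * workers)
```

The MPI mode of a real code distributes such batches across processes with a final reduction, while the OpenMP mode shares the tally arrays in one process; the batch-plus-reduce shape is the same.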
Walowit, Jed A.
1994-01-01
A viewgraph presentation is made showing the capabilities of the computer code SPIRALI. Overall capabilities of SPIRALI include: computes rotor dynamic coefficients, flow, and power loss for cylindrical and face seals; treats turbulent, laminar, Couette, and Poiseuille dominated flows; fluid inertia effects are included; rotor dynamic coefficients in three (face) or four (cylindrical) degrees of freedom; includes effects of spiral grooves; user definable transverse film geometry including circular steps and grooves; independent user definable friction factor models for rotor and stator; and user definable loss coefficients for sudden expansions and contractions.
Multiphase integral reacting flow computer code (ICOMFLO): User's guide
Energy Technology Data Exchange (ETDEWEB)
Chang, S.L.; Lottes, S.A.; Petrick, M.
1997-11-01
A copyrighted computational fluid dynamics computer code, ICOMFLO, has been developed for the simulation of multiphase reacting flows. The code solves conservation equations for gaseous species and droplets (or solid particles) of various sizes. General conservation laws, expressed by elliptic type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation. Associated phenomenological submodels of the code include integral combustion, two-parameter turbulence, particle evaporation, and interfacial submodels. A newly developed integral combustion submodel replacing an Arrhenius type differential reaction submodel has been implemented to improve numerical convergence and enhance numerical stability. A two-parameter turbulence submodel is modified for both gas and solid phases. An evaporation submodel treats not only droplet evaporation but also size dispersion. Interfacial submodels use correlations to model interfacial momentum and energy transfer. The ICOMFLO code solves the governing equations in three steps. First, a staggered grid system is constructed in the flow domain. The staggered grid system defines gas velocity components on the surfaces of a control volume, while the other flow properties are defined at the volume center. A blocked cell technique is used to handle complex geometry. Then, the partial differential equations are integrated over each control volume and transformed into discrete difference equations. Finally, the difference equations are solved iteratively by using a modified SIMPLER algorithm. The results of the solution include gas flow properties (pressure, temperature, density, species concentration, velocity, and turbulence parameters) and particle flow properties (number density, temperature, velocity, and void fraction). The code has been used in many engineering applications, such as coal-fired combustors, air
Error threshold in topological quantum-computing models with color codes
Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.
2009-03-01
Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D, by means of mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbation of the plaquette interaction terms thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H.; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.
2016-01-01
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946
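The two crossbar kernels named above have simple dense-algebra equivalents; a NumPy sketch, with G as a hypothetical stand-in for the conductance matrix (a real crossbar performs these operations in the analog domain in a single step):

```python
import numpy as np

# Parallel read: applying an input voltage vector and sensing column
# currents is a vector-matrix multiplication.
rng = np.random.default_rng(0)
N = 8
G = rng.random((N, N))        # conductance matrix (illustrative values)
v = rng.random(N)             # input vector
read = v @ G                  # one parallel read = one VMM

# Parallel write: pulsing rows by u and columns by w nudges every cell
# at once, i.e. a rank-1 (outer-product) update of the whole array.
u, w = rng.random(N), rng.random(N)
G_after = G + np.outer(u, w)  # one parallel write = one rank-1 update
```

Counting operations makes the O(N) claim concrete: the digital loop touches N^2 memory words for either kernel, while the crossbar does the same work in one array-wide analog step.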
Energy Technology Data Exchange (ETDEWEB)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.
1992-01-01
A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.
Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes
Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs built with compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds that of some of the commercial tools.
Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide
Energy Technology Data Exchange (ETDEWEB)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
1983-02-01
The CRAC2 computer code is a revision of CRAC, the Calculation of Reactor Accident Consequences computer code developed for the Reactor Safety Study. CRAC2 incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
On the Computational Complexity of Sphere Decoder for Lattice Space-Time Coded MIMO Channel
Abediseid, Walid
2011-01-01
The exact complexity analysis of the basic sphere decoder for general space-time codes applied to the multi-input multi-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, LAttice Space-Time (LAST) coded MIMO channel. Specifically, we derive the asymptotic tail distribution of the decoder's computational complexity in the high signal-to-noise ratio (SNR) regime. For the uncoded $M\times N$ MIMO channel (e.g., V-BLAST), the analysis in [6] revealed that the tail distribution of such a decoder is of a Pareto-type with tail exponent that is equivalent to $N-M+1$. In our analysis, we show that the tail exponent of the sphere decoder's complexity distribution is equivalent to the diversity-multiplexing tradeoff achieved by LAST coding and lattice decoding schemes. This leads to extend the channel's tradeoff to include the decoding complexity. Moreover, we show analytically how minimum-mean square-error decisio...
Energy Technology Data Exchange (ETDEWEB)
Chung, Chang Hyun; You, Young Woo; Huh, Chang Wook; Kim, Ju Yeul; Kim Do Hyung; Kim, Yoon Ik; Yang, Hui Chang [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hansung University, Seoul (Korea, Republic of)
1997-07-01
The objective of this study is to develop an appropriate procedure for evaluating human error in LP/S (low power/shutdown) operation and a computer code that calculates human error probabilities (HEPs) within this framework. The applicability of typical HRA methodologies to LP/S is assessed, and a new HRA procedure, SEPLOT (Systematic Evaluation Procedure for LP/S Operation Tasks), which reflects the characteristics of LP/S, is developed by selecting and categorizing human actions through a review of existing studies. This procedure is applied to evaluate the LOOP (loss of off-site power) sequence, and the HEPs obtained using SEPLOT are used in a quantitative evaluation of the core uncovery frequency. In this evaluation, DYLAM-3, a dynamic reliability computer code with advantages over the ET/FT approach, is used. The SEPLOT procedure developed in this study provides a basis and framework for human error evaluation. It also makes it possible to assess the dynamic aspects of accidents leading to core uncovery by applying the HEPs obtained using SEPLOT as input data to the DYLAM-3 code. Eventually, it is expected that the results of this study will contribute to improved safety in LP/S operation and reduced uncertainties in risk. 57 refs., 17 tabs., 33 figs. (author)
Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1995-07-01
This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this subproject, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and low power situations, (2) computer code package development for level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for an over-draining event during RCS low water inventory, i.e., mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is the use of the containment spray system as a backup to the shutdown cooling system, and the other is the installation of two independent level indication systems. Procedure change is identified as the more preferable option to hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC-Windows environment. To improve efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author).
A fully parallel, high precision, N-body code running on hybrid computing platforms
Capuzzo-Dolcetta, R; Punzo, D
2012-01-01
We present a new implementation of the numerical integration of the classical gravitational N-body problem, based on a high-order Hermite integration scheme with block time steps and direct evaluation of the particle-particle forces. The main innovation of this code (called HiGPUs) is its full parallelization, exploiting both OpenMP and MPI for the multicore Central Processing Units, as well as either Compute Unified Device Architecture (CUDA) or OpenCL for the hosted Graphics Processing Units. We tested both the performance and accuracy of the code using up to 256 GPUs in the supercomputer IBM iDataPlex DX360M3 Linux Infiniband Cluster provided by the Italian supercomputing consortium CINECA, for values of N up to 8 million. We were able to follow the evolution of a system of 8 million bodies for a few crossing times, a task previously unreached by direct summation codes. The code is freely available to the scientific community.
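The direct particle-particle force evaluation that HiGPUs parallelizes can be sketched serially in NumPy (an illustration only; the code itself uses a high-order Hermite scheme with block time steps on GPUs):

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct O(N^2) gravitational accelerations (G = 1) with Plummer
    softening eps -- the kernel a direct-summation code evaluates."""
    dr = pos[None, :, :] - pos[:, None, :]        # dr[i, j] = pos[j] - pos[i]
    r2 = (dr ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(r2, 1.0)                     # avoid 0/0 on the diagonal
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                 # no self-interaction
    return (dr * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

# two unit masses one length unit apart attract each other symmetrically
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
acc = accelerations(pos, np.ones(2))
```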
SEACC: the systems engineering and analysis computer code for small wind systems
Energy Technology Data Exchange (ETDEWEB)
Tu, P.K.C.; Kertesz, V.
1983-03-01
The systems engineering and analysis (SEA) computer code evaluates complete horizontal-axis SWECS performance. The code predicts rotor power output as a function of wind speed and energy production for various wind regimes. Efficiencies of components such as the gearbox, electric generator, rectifier, electronic inverter, and batteries can be included in the evaluation to reflect complete system performance. Parametric studies can be carried out for blade design characteristics such as airfoil series, taper rate, twist, and pitch setting, and for geometry such as rotor radius, hub radius, number of blades, coning angle, and rotor rpm. Design tradeoffs can also be performed to optimize system configurations for constant-rpm, constant tip-speed-ratio, and rpm-specific rotors. SWECS energy supply, as compared to the load demand for each hour of the day and each season of the year, can be assessed by the code if the diurnal wind and load distributions are known. Blade aerodynamic loading information is also available during each run of the code.
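The rotor power prediction described above rests on the standard relation P = ½·ρ·A·Cp·v³. A toy sketch, with illustrative power coefficient and air density values that are not SEACC defaults:

```python
import math

def rotor_power(v, radius, cp=0.4, rho=1.225):
    """Rotor power output in W at wind speed v (m/s): P = 0.5*rho*A*Cp*v^3.
    cp = 0.4 and rho = 1.225 kg/m^3 are illustrative assumptions."""
    area = math.pi * radius ** 2
    return 0.5 * rho * area * cp * v ** 3

def energy_kwh(speed_hours, radius):
    """Energy in kWh from a list of (wind speed m/s, hours) pairs, the kind
    of diurnal wind distribution the abstract mentions."""
    return sum(rotor_power(v, radius) * h for v, h in speed_hours) / 1000.0
```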
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)
2014-05-15
In this paper, the characteristics of a parallel algorithm for solving an elliptic-type equation in CUPID via a domain decomposition method using MPI are presented, and the parallel performance is estimated in terms of scalability (speedup ratio). In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are considered: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated, as is the effect of the preconditioner for the matrix solver. Finally, a hybrid (OpenMP+MPI) parallel algorithm for the pressure solver is introduced and discussed in detail. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to meet the recent demand for long-transient, highly resolved multi-phase flow simulations. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, using the MPI library to communicate information between neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To account for the pressure matrix becoming asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effects of matrix size and preconditioning were investigated. The fine-mesh calculation shows better scalability than the coarse mesh, because the coarse mesh should not be decomposed into an excessive number of subdomains. Good scalability requires dividing the geometry with the ratio between computation and communication time in mind. For a given mesh, single-phase flow...
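The CSR storage format mentioned above stores each row's nonzeros contiguously, indexed by a row-pointer array. A minimal sparse matrix-vector product over CSR arrays (illustrative, not CUPID source):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix in CSR format: data holds nonzero values,
    indices their column numbers, and indptr the start of each row."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 2x2 example: A = [[2, 0], [1, 3]]
data = np.array([2.0, 1.0, 3.0])
indices = np.array([0, 0, 1])
indptr = np.array([0, 1, 3])
```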
A general panel sizing computer code and its application to composite structural panels
Anderson, M. S.; Stroud, W. J.
1978-01-01
A computer code for obtaining the dimensions of optimum (least mass) stiffened composite structural panels is described. The procedure, which is based on nonlinear mathematical programming and a rigorous buckling analysis, is applicable to general cross sections under general loading conditions causing buckling. A simplified method of accounting for bow-type imperfections is also included. Design studies in the form of structural efficiency charts for axial compression loading are made with the code for blade and hat stiffened panels. The effects on panel mass of imperfections, material strength limitations, and panel stiffness requirements are also examined. Comparisons with previously published experimental data show that accounting for imperfections improves correlation between theory and experiment.
Revised uranium--plutonium cycle PWR and BWR models for the ORIGEN computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A. G.; Bjerke, M. A.; Morrison, G. W.; Petrie, L. M.
1978-09-01
Reactor physics calculations and literature searches have been conducted, leading to the creation of revised enriched-uranium and enriched-uranium/mixed-oxide-fueled PWR and BWR reactor models for the ORIGEN computer code. These ORIGEN reactor models are based on cross sections that have been taken directly from the reactor physics codes and eliminate the need to make adjustments in uncorrected cross sections in order to obtain correct depletion results. Revised values of the ORIGEN flux parameters THERM, RES, and FAST were calculated along with new parameters related to the activation of fuel-assembly structural materials not located in the active fuel zone. Recommended fuel and structural material masses and compositions are presented. A summary of the new ORIGEN reactor models is given.
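Depletion codes such as ORIGEN integrate coupled production/decay equations for thousands of nuclides against flux parameters like THERM, RES, and FAST. The simplest case, a two-member Bateman decay chain, has a closed form (an illustration of the equation type, assuming λ₂ ≠ λ₁; not ORIGEN source):

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Analytic two-member decay chain N1 -> N2 -> ... with decay constants
    lam1, lam2 (lam1 != lam2) and initial inventory n1_0 of the parent."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2
```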
Assessment of computer codes for VVER-440/213-type nuclear power plants
Energy Technology Data Exchange (ETDEWEB)
Szabados, L.; Ezsol, Gy.; Perneczky [Atomic Energy Research Institute, Budapest (Hungary)
1995-09-01
Nuclear power plants of the VVER-440/213 type, designed in the former USSR, have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of a PWR system. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for the "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of computer codes.
Development of system of computer codes for severe accident analysis and its applications
Energy Technology Data Exchange (ETDEWEB)
Jang, H. S.; Jeon, M. H.; Cho, N. J. and others [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1992-01-15
The objective of this study is to develop a system of computer codes for postulated severe accident analyses in nuclear power plants. This system of codes is necessary to conduct Individual Plant Examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, extract plant-specific vulnerabilities for severe accidents, and at the same time develop ideas for enhancing overall accident resistance. Severe accidents can be mitigated by proper accident management strategies, but some operator actions intended for mitigation can lead to more disastrous results; uncertain severe accident phenomena must therefore be well understood. Further research is needed to develop severe accident management strategies utilizing existing plant resources as well as new design concepts.
Development of a computer code for dynamic analysis of the primary circuit of advanced reactors
Energy Technology Data Exchange (ETDEWEB)
Rocha, Jussie Soares da; Lira, Carlos A.B.O.; Magalhaes, Mardson A. de Sa, E-mail: cabol@ufpe.b [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear
2011-07-01
Currently, advanced reactors are being developed, seeking enhanced safety, better performance, and low environmental impact. Reactor designs must follow several steps and pass numerous tests before a conceptual project can be certified. In this sense, computational tools become indispensable in the preparation of such projects. Thus, this study aimed at the development of a computational tool for thermal-hydraulic analysis by coupling two computer codes to evaluate the influence of transients caused by pressure variations and flow surges in the region of the IRIS reactor primary circuit between the core and the pressurizer. For the simulation, an 'insurge' situation was used, characterized by the entry of water into the pressurizer due to the expansion of the coolant in the primary circuit. This expansion was represented by a step-form pressure disturbance, through the 'step' block of SIMULINK, thus enabling the transient startup. The results showed that the dynamic tool obtained through the coupling of the codes generated very satisfactory responses within the model limitations, preserving the most important phenomena in the process. (author)
Energy Technology Data Exchange (ETDEWEB)
Müller, C.; Hughes, E. D.; Niederauer, G. F.; Wilkening, H.; Travis, J. R.; Spore, J. W.; Royl, P.; Baumann, W.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code, as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution, mixing, and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW yields a prediction of the gas composition and discrete particle distribution in space and time throughout the facility, and the resulting pressure and temperature loadings on the walls and internal structures, with or without combustion. A major application of GASFLOW is predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving the transport and distribution of combustible gas mixtures, and has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, in sonic and diffusion-dominated flows, and in chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code is written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume...
DEFF Research Database (Denmark)
Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan;
2015-01-01
In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...
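Gold codes of the kind used for c-VEP stimuli are built by XORing two maximal-length LFSR sequences. A short sketch using two primitive 5-stage polynomials (x⁵+x³+1 and x⁵+x²+1); a true Gold family requires a preferred pair of sequences, so these taps are purely illustrative:

```python
def galois_lfsr(mask, state, n):
    """Galois LFSR over GF(2): output the LSB, shift right, and XOR in the
    feedback mask whenever the output bit is 1."""
    out = []
    for _ in range(n):
        bit = state & 1
        out.append(bit)
        state >>= 1
        if bit:
            state ^= mask
    return out

def gold_code(seed1=1, seed2=1, n=31):
    """XOR of two period-31 maximal-length sequences (masks 0b10100 and
    0b10010 encode x^5+x^3+1 and x^5+x^2+1)."""
    a = galois_lfsr(0b10100, seed1, n)
    b = galois_lfsr(0b10010, seed2, n)
    return [x ^ y for x, y in zip(a, b)]
```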
DEFF Research Database (Denmark)
Sessarego, Matias; Ramos García, Néstor; Sørensen, Jens Nørkær
2017-01-01
Aerodynamic and structural dynamic performance analysis of modern wind turbines are routinely estimated in the wind energy field using computational tools known as aeroelastic codes. Most aeroelastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics and a modal...
Apparatus, Method, and Computer Program for a Resolution-Enhanced Pseudo-Noise Code Technique
Li, Steven X. (Inventor)
2015-01-01
An apparatus, method, and computer program for a resolution-enhanced pseudo-noise coding technique for 3D imaging is provided. In one embodiment, a pattern generator may generate a plurality of unique patterns for a return-to-zero signal. A plurality of laser diodes may be configured such that each laser diode transmits the return-to-zero signal to an object. Each transmitted return-to-zero signal includes one unique pattern from the plurality of unique patterns to distinguish the transmitted signals from one another.
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
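For reference, the classical Catalan numbers that the paper generalizes admit a direct closed form, C_n = C(2n, n)/(n+1):

```python
from math import comb

def catalan(n):
    """Classical Catalan numbers C_n = binom(2n, n) / (n + 1). The paper's
    new generalization (not reproduced here) arises from threshold-limited
    aperture readout combinatorics."""
    return comb(2 * n, n) // (n + 1)

print([catalan(i) for i in range(6)])  # 1, 1, 2, 5, 14, 42
```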
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
The fundamental algorithm of light beam propagation in high power laser systems is investigated and the corresponding computational codes are given. It is shown that the number of modulation rings due to diffraction is related to the size of the pinhole in the spatial filter (in terms of times of the diffraction limit, i.e. TDL) and the Fresnel number of the laser system; for a complex laser system with multiple spatial filters and free space, the system can be investigated by the reciprocal rule of operators.
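The Fresnel number invoked above is the standard dimensionless propagation parameter N_F = a²/(λL) for an aperture of radius a, wavelength λ, and distance L. A one-line helper (illustrative; not the paper's code):

```python
def fresnel_number(aperture_radius, wavelength, distance):
    """N_F = a^2 / (lambda * L): large N_F means near-field (many diffraction
    rings), small N_F means far-field behavior."""
    return aperture_radius ** 2 / (wavelength * distance)

# 1 mm aperture, 1 um wavelength, 1 m propagation -> N_F = 1
print(fresnel_number(1e-3, 1e-6, 1.0))
```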
Discrete logarithm computations over finite fields using Reed-Solomon codes
Augot, Daniel; Morain, François
2012-01-01
Cheng and Wan have related the decoding of Reed-Solomon codes to the computation of discrete logarithms over finite fields, with the aim of proving the hardness of their decoding. In this work, we experiment with solving the discrete logarithm over GF(q^h) using Reed-Solomon decoding. For fixed h and q going to infinity, we introduce an algorithm (RSDL) needing O~(h! q^2) operations over GF(q), operating on a q x q matrix with (h+2) q non-zero coefficients. We give faster variants including a...
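For contrast with the RSDL approach, a generic discrete logarithm in a prime field GF(p) can be computed with the classical baby-step giant-step method (requires Python 3.8+ for `pow(g, -m, p)`):

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p), or None.
    O(sqrt(p)) time and memory; shown only as a generic baseline, not the
    paper's Reed-Solomon-decoding-based RSDL algorithm."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}     # baby steps g^j
    inv_gm = pow(g, -m, p)                         # giant stride g^(-m)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * inv_gm % p
    return None
```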
Resin Matrix/Fiber Reinforced Composite Material, Ⅱ: Method of Solution and Computer Code
Institute of Scientific and Technical Information of China (English)
Li Chensha(李辰砂); Jiao Caishan; Liu Ying; Wang Zhengping; Wang Hongjie; Cao Maosheng
2003-01-01
Based on a mathematical model describing the curing process of composites constructed from continuous fiber-reinforced, thermosetting resin matrix prepreg materials, and the consolidation of the composites, a solution method for the model is devised and a computer code is developed. For flat-plate composites cured by a specified cure cycle, the code provides the variation of the temperature distribution, the cure reaction process in the resin, the resin flow and fiber stress inside the composite, the void variation, and the residual stress distribution.
On the application of computational fluid dynamics codes for liquefied natural gas dispersion.
Luketa-Hanlin, Anay; Koopman, Ronald P; Ermak, Donald L
2007-02-20
Computational fluid dynamics (CFD) codes are increasingly being used in the liquefied natural gas (LNG) industry to predict natural gas dispersion distances. This paper addresses several issues regarding the use of CFD for LNG dispersion such as specification of the domain, grid, boundary and initial conditions. A description of the k-epsilon model is presented, along with modifications required for atmospheric flows. Validation issues pertaining to the experimental data from the Burro, Coyote, and Falcon series of LNG dispersion experiments are also discussed. A description of the atmosphere is provided as well as discussion on the inclusion of the Coriolis force to model very large LNG spills.
Fletcher, C. D.
The capability to perform thermal-hydraulic analyses of a space reactor using the ATHENA computer code is demonstrated. The fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of the preliminary General Electric SP-100 design were modeled with ATHENA. Two demonstration transient calculations were performed simulating accident conditions. Calculated results are available for display using the Nuclear Plant Analyzer color graphics analysis tool in addition to traditional plots. ATHENA-calculated results appear reasonable, both for steady-state full power conditions and for the two transients. This analysis represents the first known transient thermal-hydraulic simulation using an integral space reactor system model incorporating heat pipes.
Capabilities of the ATHENA computer code for modeling the SP-100 space reactor concept
Fletcher, C. D.
1985-09-01
The capability to perform thermal-hydraulic analyses of an SP-100 space reactor was demonstrated using the ATHENA computer code. The preliminary General Electric SP-100 design was modeled using ATHENA. The model simulates the fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of this design. Two ATHENA demonstration calculations were performed simulating accident scenarios. A mask for the SP-100 model and an interface with the Nuclear Plant Analyzer (NPA) were developed, allowing a graphic display of the calculated results on the NPA.
Modeling of field lysimeter release data using the computer code dust
Energy Technology Data Exchange (ETDEWEB)
Sullivan, T.M.; Fitzgerald, I.T. (Brookhaven National Lab., Upton, NY (United States)); McConnell, J.W.; Rogers, R.D. (Idaho National Engineering Lab., Idaho Falls, ID (United States))
1993-01-01
This study attempted to match the experimentally measured mass release data, collected over a period of seven years by investigators from Idaho National Engineering Laboratory from the lysimeters at Oak Ridge National Laboratory and Argonne National Laboratory, using the computer code DUST. The influence of the dispersion coefficient and the distribution coefficient on mass release was investigated; both were found to significantly influence mass release over the seven-year period. It is recommended that these parameters be measured on a site-specific basis to improve understanding of the system.
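The roles of the two coefficients studied above can be illustrated with a single explicit step of one-dimensional advection-dispersion transport, where the distribution coefficient Kd enters through a retardation factor R = 1 + ρ_b·Kd/θ (a sketch of DUST-type modeling, not DUST source):

```python
import numpy as np

def transport_step(c, v, D, R, dx, dt):
    """One explicit finite-difference step of 1D advection-dispersion with
    retardation: R dc/dt = -v dc/dx + D d2c/dx2 (upwind advection).
    Boundary cells are held fixed; stability requires small dt."""
    adv = -v * (c[1:-1] - c[:-2]) / dx
    disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
    cn = c.copy()
    cn[1:-1] += dt * (adv + disp) / R
    return cn

# a unit pulse spreads under dispersion; a larger R slows every change
c = np.zeros(11)
c[5] = 1.0
```

A larger distribution coefficient (larger R) visibly retards the release, which is the sensitivity the lysimeter study reports.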
Digital Poetry: A Narrow Relation between Poetics and the Codes of the Computational Logic
Laurentiz, Silvia
The project "Percorrendo Escrituras" ("Walking Through Writings") has been developed at the ECA-USP Fine Arts Department. In summary, it studies different structures of digital information that share the same universe and generate a new aesthetic condition. The aim is to explore the expressive possibilities of the computer through algorithmic functions and other of its specific properties. It is a practical, theoretical, and interdisciplinary project in which the study of evolutionary programming languages, logic, and mathematics leads to poetic experimentation. The focus of this research is digital poetry, which starts from poetics of permutational combinations and culminates in dynamic, complex, autonomous, multi-user, and interactive systems, through agent generation, derivation, filtering, and emergent patterns. This lecture will present artworks that use mechanisms introduced by cybernetics and the notion of system in digital poetry, demonstrating the narrow relationship between poetics and the codes of computational logic.
An implementation of a tree code on a SIMD, parallel computer
Olson, Kevin M.; Dorband, John E.
1994-01-01
We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor MasPar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) type computers can be used for these simulations. The cost/performance ratio for SIMD machines like the MasPar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) type parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
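The recursive coordinate bisection used to build the balanced tree can be sketched serially as follows (unit masses, monopole center of mass per node; an illustration of the idea, not the MasPar code):

```python
def build_tree(particles, depth=0):
    """Balanced tree by recursive median split of sorted coordinates,
    cycling the x, y, z axes; each internal node stores the monopole
    (total mass and center of mass) of its subtree."""
    if len(particles) == 1:
        return {"com": particles[0], "mass": 1.0}
    axis = depth % 3
    pts = sorted(particles, key=lambda p: p[axis])
    mid = len(pts) // 2                 # median split keeps the tree balanced
    left = build_tree(pts[:mid], depth + 1)
    right = build_tree(pts[mid:], depth + 1)
    mass = left["mass"] + right["mass"]
    com = tuple((left["mass"] * l + right["mass"] * r) / mass
                for l, r in zip(left["com"], right["com"]))
    return {"com": com, "mass": mass, "left": left, "right": right}
```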
Wavelet subband coding of computer simulation output using the A++ array class library
Energy Technology Data Exchange (ETDEWEB)
Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.; Zhang, H.D. [Los Alamos National Lab., NM (United States); Nuri, V. [Washington State Univ., Pullman, WA (United States). School of EECS
1995-07-01
The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms in that work indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
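The wavelet transform/scalar quantization (WSQ) pipeline described above can be illustrated with a one-level Haar transform and a uniform scalar quantizer (a toy sketch; the actual library uses the A++ array class and FBI-WSQ-style subband quantizer banks):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar DWT: averages (low band) and
    details (high band) of adjacent sample pairs."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def quantize(band, step):
    """Uniform scalar quantization of one subband to integer indices."""
    return np.round(band / step).astype(int)

def dequantize(q, step):
    return q * step

# toy signal: piecewise-constant, so the detail band is exactly zero
x = np.array([4.0, 4.0, 8.0, 8.0])
a, d = haar_step(x)
rec_a = dequantize(quantize(a, 0.5), 0.5)  # low band survives coarse SQ
```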
Computer simulation of Angra-2 PWR nuclear reactor core using MCNPX code
Energy Technology Data Exchange (ETDEWEB)
Medeiros, Marcos P.C. de; Rebello, Wilson F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia - Secao de Engenharia Nuclear, Rio de Janeiro, RJ (Brazil); Oliveira, Claudio L. [Universidade Gama Filho, Departamento de Matematica, Rio de Janeiro, RJ (Brazil); Vellozo, Sergio O., E-mail: vellozo@cbpf.br [Centro Tecnologico do Exercito. Divisao de Defesa Quimica, Biologica e Nuclear, Rio de Janeiro, RJ (Brazil); Silva, Ademir X. da, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos Gaduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2011-07-01
In this work the MCNPX (Monte Carlo N-Particle eXtended) code was used to develop a computational model of the core of the Angra 2 PWR (Pressurized Water Reactor). The model was created without any homogenization, using the real geometric information and material composition of the reactor, obtained from the FSAR (Final Safety Analysis Report). The model is still being improved, and the version presented in this work is validated by comparing values calculated by MCNPX with results obtained by other means and presented in the FSAR. This paper shows the results already obtained for k-eff and k-infinity, general parameters of the core, considering the reactor operating under stationary conditions of initial testing and operation. Other stationary operating conditions have been simulated and, in all tested cases, there was close agreement between the values calculated with this model and the data presented in the FSAR, which were obtained with other codes. This model is expected to become a valuable tool for many future applications. (author)
Development of computer code models for analysis of subassembly voiding in the LMFBR
Energy Technology Data Exchange (ETDEWEB)
Hinkle, W [ed.
1979-12-01
The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered.
Hierarchical surface code for network quantum computing with modules of arbitrary size
Li, Ying; Benjamin, Simon C.
2016-10-01
The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ~10% even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.
Institute of Scientific and Technical Information of China (English)
M. Garbey; C. Picard
2008-01-01
The goal of this paper is to present a versatile framework for solution verification of PDEs. We first generalize the Richardson extrapolation technique to an optimized extrapolation solution procedure that constructs the best consistent solution from a set of two or three coarse-grid solutions in the discrete norm of choice. This technique generalizes the Least Square Extrapolation method introduced by one of the authors and W. Shyy. We then establish the condition number of the problem in a reduced space that approximates the main features of the numerical solution through a sensitivity analysis. Overall, our method produces an a posteriori error estimate in this reduced approximation space. The key feature of our method is that the construction requires no internal knowledge of the software nor of the source code that produces the solution to be verified; it can be applied, in principle, as a post-processing procedure to off-the-shelf commercial codes. We demonstrate the robustness of our method with two steady problems: an incompressible back-step flow test case and a heat transfer problem for a battery. Our error estimate may ultimately be verified with a nearby manufactured solution. While our procedure is systematic and requires numerous computations of residuals, one can take advantage of distributed computing to obtain the error estimate quickly.
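Classical Richardson extrapolation, the starting point that the paper generalizes, combines two solutions of a scheme of known order p at grid spacings h and h/2 to cancel the leading error term:

```python
def richardson(f_h, f_h2, p):
    """Richardson extrapolation: if f(h) = A + C*h^p + O(h^(p+1)), then
    (2^p * f(h/2) - f(h)) / (2^p - 1) removes the leading C*h^p term."""
    return (2 ** p * f_h2 - f_h) / (2 ** p - 1)

# second-order scheme with exact value A = 1: f(h) = 1 + h^2
f = lambda h: 1.0 + h ** 2
extrapolated = richardson(f(0.1), f(0.05), 2)
```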
Directory of Open Access Journals (Sweden)
JUN YEOB LEE
2014-10-01
During the development of a thermal-hydraulic system code, a non-regression test (NRT) must be performed repeatedly in order to prevent software regression. The NRT process, however, is time-consuming and labor-intensive, so automating it is an ideal solution. In this study, we have developed a program to support an efficient NRT for the SPACE code and demonstrated its usability, resulting in a high degree of efficiency for code development. The program was developed using Visual Basic for Applications and designed so that it can be easily customized for the NRT of other computer codes.
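The core of any non-regression test is a tolerance comparison of current code outputs against stored baselines. The sketch below is a hypothetical minimal version in Python; the result labels and numbers are invented for illustration, and the actual SPACE NRT tool is a Visual Basic for Applications program with far more bookkeeping (run management, input decks, report generation).

```python
def non_regression_check(baseline, current, rel_tol=1e-6):
    """Compare current code outputs against stored baseline values.

    baseline/current: dicts mapping a result label to a float.
    Returns the list of labels that regressed (outside tolerance).
    """
    failures = []
    for label, ref in baseline.items():
        new = current.get(label)
        # Missing results and out-of-tolerance results both count as regressions.
        if new is None or abs(new - ref) > rel_tol * max(abs(ref), 1e-30):
            failures.append(label)
    return failures

# Hypothetical labels and values, not SPACE output.
base = {"peak_clad_temp": 1180.4, "break_flow": 52.7}
run  = {"peak_clad_temp": 1180.4, "break_flow": 53.1}
regressions = non_regression_check(base, run)
# -> ["break_flow"]
```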
Computer code to predict the heat of explosion of high energy materials
Energy Technology Data Exchange (ETDEWEB)
Muthurajan, H. [Armament Research and Development Establishment, Pashan, Pune 411021 (India)], E-mail: muthurajan_h@rediffmail.com; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B. [High Energy Materials Research Laboratory, Sutarwadi, Pune 411 021 (India)
2009-01-30
The computational approach to the thermochemical changes involved in the explosion of high energy materials (HEMs), vis-a-vis their molecular structure, aids HEMs chemists and engineers in predicting important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) for different HEMs without any experimental data, with results comparable to experimental values reported in the literature. The new algorithm, which requires no complex input parameters, is incorporated in LOTUSES (version 1.5), and the results are presented in this paper. Linear regression over all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. This correlation coefficient reveals that the computed values are in good agreement with experimental values and are useful for rapid hazard assessment of energetic materials.
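The validation statistic quoted above — R² from a least-squares line through computed-versus-experimental points — can be reproduced generically in a few lines. This is a standard implementation, not LOTUSES code, and the sample points in the test are illustrative; the paper's actual data are not reproduced here.

```python
def linear_fit_r2(x, y):
    """Least-squares line y = a*x + b and coefficient of determination R^2.

    In the validation described above, x would be the computed heats of
    explosion and y the experimental values.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                     # slope
    b = my - a * mx                   # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Perfectly linear toy data: slope 2, intercept 0, R^2 = 1.
a, b, r2 = linear_fit_r2([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```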
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms that are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach combines the development of new algorithms with the use of Mathematica and Unix utilities to automate algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open-domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Energy Technology Data Exchange (ETDEWEB)
Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N. [Moscow Power Engineering Institute (Technical University), Moscow (Russian Federation)
2007-07-01
A new special module for neutron-physics and thermal-hydraulic computer codes, which calculates coolant acoustical characteristics, has been developed. The Russian computer code Rainbow has been selected for joint use with the developed module. This code system provides the possibility of calculating EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) in any coolant acoustical element of the primary circuits of NPPs. EFOCP values have been calculated for transient and for stationary operating conditions. The calculated results for nominal operation were compared with measured EFOCP; for example, this comparison was made for the 'pressurizer + surge line' system of a WWER-1000 reactor. The calculated result of 0.58 Hz practically coincides with the measured result (0.6 Hz). The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.
Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.
1987-01-01
Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.
Energy Technology Data Exchange (ETDEWEB)
Berna, G. A; Bohn, M. P.; Rausch, W. N.; Williford, R. E.; Lanning, D. D.
1981-01-01
FRAPCON-2 is a FORTRAN IV computer code that calculates the steady-state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, deformation, and failure histories of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (a) heat conduction through the fuel and cladding, (b) cladding elastic and plastic deformation, (c) fuel-cladding mechanical interaction, (d) fission gas release, (e) fuel rod internal gas pressure, (f) heat transfer between fuel and cladding, (g) cladding oxidation, and (h) heat transfer from cladding to coolant. The code contains the necessary material properties, water properties, and heat transfer correlations. FRAPCON-2 is programmed for use on the CDC Cyber 175 and 176 computers. The FRAPCON-2 code is designed to generate initial conditions for transient fuel rod analysis by either the FRAP-T6 computer code or the thermal-hydraulic code RELAP4/MOD7 Version 2.
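Model (a), heat conduction through the fuel, reduces in the simplest case to the textbook radial temperature profile in a cylindrical pellet with uniform volumetric heating. The sketch below is that textbook result only, not FRAPCON-2 code: it assumes a constant fuel conductivity and ignores the burnup-dependent properties and fuel-cladding gap conductance that the real code treats.

```python
import math

def fuel_radial_profile(q_lin, k_fuel, n=5):
    """Temperature rise above the pellet surface at n equispaced radii.

    Steady 1-D conduction in a cylinder with uniform volumetric heating:
        T(r) - T(surface) = q' / (4*pi*k) * (1 - (r/R)^2),
    where q' is the linear heat rate (W/m) and k the fuel thermal
    conductivity (W/m-K).  Returns values from centerline (r=0) to the
    pellet surface (r=R).
    """
    dT_center = q_lin / (4.0 * math.pi * k_fuel)
    return [dT_center * (1.0 - (i / (n - 1)) ** 2) for i in range(n)]

# Illustrative numbers: 20 kW/m linear power, k = 3 W/m-K gives a
# centerline-to-surface rise of roughly 530 K.
profile = fuel_radial_profile(20e3, 3.0)
```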
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.
A computational model of cellular mechanisms of temporal coding in the medial geniculate body (MGB).
Directory of Open Access Journals (Sweden)
Cal F Rabang
Acoustic stimuli are often represented in the early auditory pathway as patterns of neural activity synchronized to time-varying features. This phase-locking predominates until the level of the medial geniculate body (MGB), where previous studies have identified two main, largely segregated response types: stimulus-synchronized responses, which faithfully preserve the temporal coding of their afferent inputs, and non-synchronized responses, which are not phase-locked to the inputs and instead represent changes in temporal modulation by a rate code. The cellular mechanisms underlying this transformation from phase-locked to rate code are not well understood. We use a computational model of an MGB thalamocortical neuron to test the hypothesis that these response classes arise from inferior colliculus (IC) excitatory afferents with divergent properties similar to those observed in brain slice studies. Large-conductance inputs exhibiting synaptic depression preserved input synchrony for interclick intervals as short as 12.5 ms, while maintaining low firing rates and low-pass filtering responses. By contrast, small-conductance inputs with mixed plasticity (depression of the AMPA-receptor component and facilitation of the NMDA-receptor component) desynchronized afferent inputs, generated a click-rate-dependent increase in firing rate, and high-pass filtered the inputs. Synaptic inputs with facilitation often permitted band-pass synchrony along with band-pass rate tuning. These responses could be tuned by changes in membrane potential, the strength of the NMDA component, and the characteristics of synaptic plasticity. These results demonstrate how the same synchronized input spike trains from the inferior colliculus can be transformed into different representations of temporal modulation by divergent synaptic properties.
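The "large-conductance inputs exhibiting synaptic depression" can be illustrated with a standard short-term-depression update of the Tsodyks-Markram type: each spike consumes a fraction of the available synaptic resources, which then recover exponentially. This is a generic sketch with assumed parameter values, not the published MGB model, which additionally includes AMPA/NMDA receptor kinetics and membrane dynamics.

```python
import math

def depressing_synapse(spike_times, tau_rec=0.5, use_frac=0.4, g_max=1.0):
    """Per-spike conductance of a depressing synapse (Tsodyks-Markram form).

    A fraction `use_frac` of the available resources R is consumed by each
    spike; R recovers toward 1 with time constant tau_rec (seconds).
    Returns the conductance evoked by each spike in spike_times.
    """
    R, last_t, out = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # Resources recover exponentially between spikes.
            R = 1.0 - (1.0 - R) * math.exp(-(t - last_t) / tau_rec)
        out.append(g_max * use_frac * R)   # conductance of this spike
        R -= use_frac * R                  # resources consumed by the spike
        last_t = t
    return out

# A click train at 12.5 ms interclick intervals (as in the study above):
# successive responses depress toward a steady-state plateau.
g = depressing_synapse([i * 0.0125 for i in range(8)])
```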
Interface design of VSOP'94 computer code for safety analysis
Energy Technology Data Exchange (ETDEWEB)
Natsir, Khairina, E-mail: yenny@batan.go.id; Andiwijayakusuma, D.; Wahanani, Nursinta Adi [Center for Development of Nuclear Informatics - National Nuclear Energy Agency, PUSPIPTEK, Serpong, Tangerang, Banten (Indonesia); Yazid, Putranto Ilham [Center for Nuclear Technology, Material and Radiometry- National Nuclear Energy Agency, Jl. Tamansari No.71, Bandung 40132 (Indonesia)
2014-09-30
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. Among its advantages, VSOP can estimate the neutron spectrum, calculate the fuel cycle, 2-D diffusion, and resonance integrals, estimate reactor fuel costs, and perform integrated thermal hydraulics; it can also be used for comparative studies and reactor safety simulation. However, the existing VSOP is a conventional program, developed in Fortran 65, with several drawbacks: it runs only on DEC Alpha mainframe platforms and provides text-based output, making it difficult to use, especially in data preparation and interpretation of results. We have developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface provides a convenient way to prepare data; the processing interface provides convenience in configuring input files and libraries and in compiling the VSOP code; and the postprocessing interface visualizes the VSOP output in tables and graphs. GUI-VSOP is expected to simplify and speed up the process and the analysis of safety aspects.
Finite Element Simulation Code for Computing Thermal Radiation from a Plasma
Nguyen, C. N.; Rappaport, H. L.
2004-11-01
A finite element code, ``THERMRAD,'' for computing thermal radiation from a plasma is under development. Radiation from plasma test particles is found in cylindrical geometry. Although the plasma equilibrium is assumed axisymmetric, individual test particle excitation produces a non-axisymmetric electromagnetic response. Specially designed Whitney-class basis functions are to be used to allow the solution to be computed on a two-dimensional grid. The basis functions enforce both a vanishing divergence of the electric field within grid elements, where the complex index of refraction is assumed constant, and continuity of the tangential electric field across grid elements, while allowing the normal component of the electric field to be discontinuous. An appropriate variational principle, which incorporates the Sommerfeld radiation condition on the simulation boundary, as well as its discretization by the Rayleigh-Ritz technique, is given. 1. ``Finite Element Method for Electromagnetics Problems,'' Volakis et al., Wiley, 1998.
Bousquet, Nicolas
2010-01-01
This article deals with the estimation of a probability p of an undesirable event. Its occurrence is formalized as the exceedance of a threshold reliability value by the unidimensional output of a time-consuming computer code G with multivariate probabilistic input X. When G is assumed monotonous with respect to X, the Monotonous Reliability Method was proposed by de Rocquigny (2009) in an engineering context to provide sequentially narrowing 100%-confidence bounds and a crude estimate of p, via deterministic or stochastic designs of experiments. The present article consists of a formalization and technical deepening of this idea, as a broad basis for future theoretical and applied studies. Three kinds of results are especially emphasized. First, the bounds themselves remain too crude and conservative as estimators of p when the dimension of X exceeds 2. Second, a maximum-likelihood estimator of p can easily be built, presenting a large variance reduction with respect to the standard Monte Carlo case, but suffering ...
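For concreteness, the baseline against which such estimators are compared is the crude Monte Carlo estimate of p = P(G(X) > s). The sketch below uses a toy monotone code G and uniform inputs, all invented for illustration; the article's point is precisely that when G is monotonous, far better estimators with deterministic bounds are available.

```python
import random

def exceedance_probability(code, sample_x, threshold, n=100_000, seed=0):
    """Crude Monte Carlo estimate of p = P(code(X) > threshold).

    `code` stands in for the expensive computer code G and `sample_x`
    draws one input realization from the distribution of X.
    """
    rng = random.Random(seed)
    hits = sum(code(sample_x(rng)) > threshold for _ in range(n))
    return hits / n

# Toy monotone code: G(x1, x2) = x1 + x2 with independent uniform inputs
# and threshold s = 1.5.  The true p is the triangle area 0.5*0.5^2 = 0.125.
p_hat = exceedance_probability(lambda x: x[0] + x[1],
                               lambda r: (r.random(), r.random()), 1.5)
```

With n = 100,000 samples the standard error is about 0.001, so the estimate lands close to 0.125; the variance of this baseline is what the maximum-likelihood estimator in the article reduces.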
Discrete logarithm computations over finite fields using Reed-Solomon codes
Augot, Daniel
2012-01-01
Cheng and Wan have related the decoding of Reed-Solomon codes to the computation of discrete logarithms over finite fields, with the aim of proving the hardness of their decoding. In this work, we experiment with solving the discrete logarithm over GF(q^h) using Reed-Solomon decoding. For fixed h and q going to infinity, we introduce an algorithm (RSDL) needing O(h! q^2) operations over GF(q), operating on a q x q matrix with (h+2)q non-zero coefficients. We give faster variants, including an incremental version and one that uses auxiliary finite fields that need not be subfields of GF(q^h); this variant is very practical for moderate values of q and h. We include some numerical results from our first implementations.
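As background, the discrete-logarithm problem being targeted can be made concrete with the generic baby-step giant-step algorithm, which runs in O(sqrt(q)) group operations. This sketch is unrelated to the RSDL algorithm itself and is shown only to fix the problem statement: find x with g^x = h in a finite field.

```python
import math

def bsgs_dlog(g, h, q):
    """Baby-step giant-step: find x with g**x == h (mod q), q prime.

    Writes x = i*m + j with m ~ sqrt(q): baby steps tabulate g^j,
    giant steps scan h * g^(-i*m) until a collision is found.
    """
    m = math.isqrt(q) + 1
    baby = {pow(g, j, q): j for j in range(m)}   # g^j -> j
    g_inv_m = pow(g, -m, q)                      # g^(-m), Python 3.8+
    gamma = h % q
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * g_inv_m % q              # now equals h * g^(-(i+1)m)
    return None                                  # h not in the subgroup <g>

# Discrete log of 22 to the base 5 in GF(97).
x = bsgs_dlog(5, 22, 97)
```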
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Kyu; Jae, Moo Sung; Jo, Young Gyun; Park, Rae Jun; Kim, Jae Hwan; Ha, Jae Ju; Kang, Dae Il; Choi, Sun Young; Kim, Si Hwan [Korea Atomic Energy Res. Inst., Taejon (Korea, Republic of)
1994-07-01
We have surveyed new technologies and research results for the accident management of nuclear power plants. Based on the concept of using existing plant capabilities for accident management, both in-vessel and ex-vessel strategies were identified and analyzed. When assessing accident management strategies, their effectiveness, adverse effects, and feasibility must be considered, and we have developed a framework for assessing the strategies with these factors in mind. We applied the developed framework to assessing the strategies, including the likelihood that the operator correctly diagnoses the situation and successfully implements the strategies. Finally, the cavity flooding strategy was assessed by applying it to the station blackout sequence, which has been identified as one of the major contributors to risk at the reference plant. The thermohydraulic analyses, with sensitivity calculations, were performed using the MAAP 4 computer code. (Author)
Reduced gravity boiling and condensing experiments simulated with the COBRA/TRAC computer code
Cuta, Judith M.; Krotiuk, William
1988-01-01
A series of reduced-gravity two-phase flow experiments has been conducted with a boiler/condenser apparatus in the NASA KC-135 aircraft in order to obtain basic thermal-hydraulic data applicable to analytical design tools. Several test points from the KC-135 tests were selected for simulation by means of the COBRA/TRAC two-fluid, three-field thermal-hydraulic computer code; the points were chosen for a 25-90 percent void-fraction range. The possible causes for the lack of agreement noted between simulations and experiments are explored, with attention to the physical characteristics of two-phase flow in one-G and near-zero-G conditions.
Coded aperture x-ray diffraction imaging with transmission computed tomography side-information
Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.
2016-03-01
Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (a reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction that incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images than their uncorrected counterparts. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.
Bade, W. L.; Yos, J. M.
1975-01-01
A computer program for calculating quasi-one-dimensional gas flow in axisymmetric and two-dimensional nozzles and rectangular channels is presented. Flow is assumed to start from a state of thermochemical equilibrium at a high temperature in an upstream reservoir. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. Electronic nonequilibrium effects can be included using a two-temperature model. An approximate laminar boundary layer calculation is given for the shear and heat flux on the nozzle wall. Boundary layer displacement effects on the inviscid flow are considered also. Chemical equilibrium and transport property calculations are provided by subroutines. The code contains precoded thermochemical, chemical kinetic, and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It provides calculations of the stagnation conditions on axisymmetric or two-dimensional models, and of the conditions on the flat surface of a blunt wedge. The primary purpose of the code is to describe the flow conditions and test conditions in electric arc heated wind tunnels.
ACUTRI: a computer code for assessing doses to the general public due to acute tritium releases
Energy Technology Data Exchange (ETDEWEB)
Yokoyama, Sumi; Noguchi, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ryufuku, Susumu; Sasaki, Toshihisa; Kurosawa, Naohiro [Visible Information Center, Inc., Tokai, Ibaraki (Japan)
2002-11-01
Tritium, which is used as a fuel of a D-T burning fusion reactor, is the most important radionuclide for the safety assessment of a nuclear fusion experimental reactor such as ITER. Thus, a computer code, ACUTRI, which calculates the radiological impact of tritium released accidentally to the atmosphere, has been developed, aiming to be of use in discussions of the licensing of a fusion experimental reactor and of environmental safety evaluation methods in Japan. ACUTRI calculates an individual tritium dose based on transfer models specific to tritium in the environment and on ICRP dose models. The code can also perform statistical analysis of meteorological data in the same way as conventional dose assessment methods, according to the meteorological guide of the Nuclear Safety Commission of Japan. A Gaussian plume model is used for calculating the atmospheric dispersion of tritium gas (HT) and/or tritiated water (HTO). The environmental pathway model in ACUTRI considers the following internal exposures: inhalation from a primary plume (HT and/or HTO) released from the facilities and inhalation from a secondary plume (HTO) reemitted from the ground following deposition of HT and HTO. This report describes an outline of the ACUTRI code, a user guide, and the results of a test calculation. (author)
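The Gaussian plume model named above has a standard ground-reflected form. The function below is that textbook expression with assumed parameter names, not ACUTRI source; ACUTRI layers tritium-specific transfer (HT oxidation, HTO reemission, dose conversion) on top of this dispersion kernel, and the numbers in the example are arbitrary.

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume air concentration.

    Q: release rate (e.g. Bq/s), u: wind speed (m/s), y: crosswind offset
    (m), z: receptor height (m), H: effective release height (m),
    sigma_y/sigma_z: dispersion parameters (m) at the downwind distance
    of interest.  Returns concentration in Q-units per m^3.
    """
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    # Direct plume plus its mirror image reflected at the ground (z = -H).
    vertical = (math.exp(-0.5 * ((z - H) / sigma_z) ** 2) +
                math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level receptor, ground-level release: the expression
# collapses to Q / (pi * u * sigma_y * sigma_z).
c = plume_concentration(1e12, 2.0, 0.0, 0.0, 0.0, 36.0, 18.0)
```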
Energy Technology Data Exchange (ETDEWEB)
Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)
2000-07-01
The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on the five analytical methods most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial, and cylindrical/radial) were selected. The computer code is a tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in times and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Microsoft Fortran-77 for the MS-DOS environment using structured programming techniques, and runs on most IBM-compatible personal computers. The source code and its computational architecture, as well as the input and output files, are described in detail. Validation and application examples of the code with wellbore temperature data (obtained from the specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
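One classic analytical method of the kind implemented in STATIC_TEMP is the Horner-plot extrapolation: bottomhole temperatures are regressed against ln((t_c + Δt)/Δt) and extrapolated to infinite shut-in time. The sketch below is a generic implementation of that method under a conductive-recovery assumption, with synthetic data; it is not STATIC_TEMP code, and the paper does not state which of its five methods follows this exact form.

```python
import math

def horner_static_temperature(circulation_time, shutin_times, bht):
    """Horner-plot estimate of static formation temperature.

    Fits BHT against x = ln((t_c + dt)/dt) by least squares and returns
    the intercept, i.e. the temperature as dt -> infinity (x -> 0).
    circulation_time and shutin_times share the same time unit.
    """
    x = [math.log((circulation_time + dt) / dt) for dt in shutin_times]
    n = len(x)
    mx, my = sum(x) / n, sum(bht) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, bht)) /
             sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx   # intercept = static formation temperature

# Synthetic well: 10 h circulation, BHTs logged after 6/12/24 h shut-in,
# generated from an exactly Horner-linear recovery toward 250 degC.
dts = (6.0, 12.0, 24.0)
t_static = horner_static_temperature(
    10.0, list(dts),
    [250.0 - 20.0 * math.log((10.0 + dt) / dt) for dt in dts])
# -> 250.0
```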
Walitt, L.
1982-01-01
The VANS successive-approximation numerical method was extended to the computation of three-dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was generated on a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed, locally mass-flux-conservative cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triple-point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that cross-sectional computation, with a locally mass-flux-conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.
Energy Technology Data Exchange (ETDEWEB)
McGrail, B.P.; Bacon, D.H.
1998-02-01
Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report but updated to include additional processes that have been found to be important since Revision 0 was issued and to include additional codes that have been released. The highest ranked computer code was found to be the STORM code developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites.
Keshavarz, Mohammad Hossein; Motamedoshariati, Hadi; Moghayadnia, Reza; Nazari, Hamid Reza; Azarniamehraban, Jamshid
2009-12-30
In this paper a new, simple, user-friendly computer code, written in Visual Basic, is introduced to evaluate the detonation performance and thermochemical properties of high explosives. The code is based on recently developed methods for obtaining thermochemical and performance parameters of energetic materials, and can complement the output of other thermodynamic chemical-equilibrium codes. It predicts various important properties of high explosives, including detonation velocity, detonation pressure, heat of detonation, detonation temperature, Gurney velocity, adiabatic exponent, and specific impulse. It can also predict the detonation performance of aluminized explosives, which can show non-ideal behavior. The code has been validated against well-known standard explosives, and the predicted results were compared, where predictions of the desired properties were possible, with the outputs of several other computer codes. A large amount of detonation-performance data for different classes of explosives from the C-NO2, O-NO2 and N-NO2 energetic groups has also been generated and compared with the well-known complex code BKW.
Energy Technology Data Exchange (ETDEWEB)
Kostin, Mikhail [FRIB, MSU]; Mokhov, Nikolai [FNAL]; Niita, Koji [RIST, Japan]
2013-09-25
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework is implemented as a C++ module that uses MPI for message passing and is intended for use with older radiation transport codes written in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport code it serves and is connected to a code through a small number of interface functions. The framework was developed and tested in conjunction with the MARS15 code; with certain adjustments it can also be used with other codes such as PHITS, FLUKA, and MCNP. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known scheduling and load-balancing problems found in the original implementations of the parallel computing functionality in MARS15 and PHITS. It can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
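The checkpoint facility described above can be sketched in miniature. The following Python stand-in (hypothetical; the actual module is C++ with MPI, and these names are illustrative) shows the core idea: periodically serialize the accumulated tally, the history counter, and the random-number state, so a later invocation resumes mid-calculation instead of starting over:

```python
import os
import pickle
import random

CHECKPOINT = "transport.ckpt"

def save_checkpoint(state, path=CHECKPOINT):
    # Write to a temp file, then rename atomically, so a crash while
    # writing cannot leave a corrupt checkpoint behind.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path=CHECKPOINT):
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"histories_done": 0, "tally": 0.0, "rng": None}

def run(total_histories, ckpt_every=1000):
    """Run (or resume) a mock Monte Carlo tally, checkpointing periodically."""
    state = load_checkpoint()
    rng = random.Random(42)
    if state["rng"] is not None:
        rng.setstate(state["rng"])        # resume the random-number stream
    while state["histories_done"] < total_histories:
        state["tally"] += rng.random()    # stand-in for one particle history
        state["histories_done"] += 1
        if state["histories_done"] % ckpt_every == 0:
            state["rng"] = rng.getstate()
            save_checkpoint(state)
    return state["tally"] / total_histories
```

Restoring the random-number state along with the tallies is what makes a resumed run statistically identical to an uninterrupted one.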
MULTI-fs - A computer code for laser-plasma interaction in the femtosecond regime
Ramis, R.; Eidmann, K.; Meyer-ter-Vehn, J.; Hüller, S.
2012-03-01
The code MULTI-fs is a numerical tool devoted to the study of the interaction of ultrashort sub-picosecond laser pulses with matter in the intensity range from 10^11 to 10^17 W cm^-2. Hydrodynamics is solved in one-dimensional geometry together with laser energy deposition and transport by thermal conduction and radiation. In contrast to long nanosecond pulses, short pulses generate steep-gradient plasmas with typical scale lengths on the order of the laser wavelength and smaller. Under these conditions, Maxwell's equations are solved explicitly to obtain the light field. Concerning laser absorption, two different models for the electron-ion collision frequency are implemented to cover the regime of warm dense matter between high-temperature plasma and solid matter, and also interaction with short-wavelength (VUV) light. The MULTI-fs code is based on the MULTI radiation-hydrodynamic code [R. Ramis, R. Schmalz, J. Meyer-ter-Vehn, Comp. Phys. Comm. 49 (1988) 475] and most of the original features for the treatment of radiation are maintained.
Program summary
Program title: MULTI-fs
Catalogue identifier: AEKT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49 598
No. of bytes in distributed program, including test data, etc.: 443 771
Distribution format: tar.gz
Programming language: FORTRAN
Computer: PC (32-bit and 64-bit architecture)
Operating system: Linux/Unix
RAM: 1.6 MiB
Classification: 19.13, 21.2
Subprograms used: Cat Id: AECV_v1_0; Title: MULTI2D; Reference: CPC 180 (2009) 977
Nature of problem: One-dimensional interaction of intense ultrashort (sub-picosecond) and ultraintense (up to 10^17 W cm^-2) laser beams with matter.
Solution method: The hydrodynamic motion coupled to laser propagation and
A simulation of a pebble bed reactor core by the MCNP-4C computer code
Directory of Open Access Journals (Sweden)
Bakhshayesh Moshkbar Khalil
2009-01-01
Lack of energy is a major crisis of our century; the irregular increase of fossil fuel costs has forced us to search for novel, cheaper, and safer sources of energy. Pebble bed reactors, an advanced new generation of reactors with specific advantages in safety and cost, might turn out to be the desired candidate for that role. The calculation of the critical height of a pebble bed reactor at room temperature using the MCNP-4C computer code is the main goal of this paper. In order to reduce the MCNP computing time compared with previously proposed schemes, we have devised a new simulation scheme. Different arrangements of kernels in fuel pebble simulations were investigated, and the arrangement that best decreases the MCNP execution time (while keeping the accuracy of the results) was chosen. The neutron flux distribution and control rod worths, as well as their shadowing effects, are also considered in this paper. All calculations done for the HTR-10 reactor core are in good agreement with experimental results.
Directory of Open Access Journals (Sweden)
C.S. Ierotheou
2001-01-01
The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a directive-based programming standard and limited scalability have affected its take-up. Recent advances in hardware and software technologies have improved both the performance of parallel programs using compiler directives and, with the introduction of OpenMP, their portability. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message passing-directive parallelisations.
CATARACT: Computer code for improving power calculations at NREL's high-flux solar furnace
Scholl, K.; Bingham, C.; Lewandowski, A.
1994-01-01
The High-Flux Solar Furnace (HFSF), operated by the National Renewable Energy Laboratory, uses a camera-based, flux-mapping system to analyze the distribution and to determine total power at the focal point. The flux-mapping system consists of a diffusively reflecting plate with seven circular foil calorimeters, a charge-coupled device (CCD) camera, an IBM-compatible personal computer with a frame-grabber board, and commercial image analysis software. The calorimeters provide flux readings that are used to scale the image captured from the plate by the camera. The image analysis software can estimate total power incident on the plate by integrating under the 3-dimensional image. Because of the physical layout of the HFSF, the camera is positioned at a 20° angle to the flux mapping plate normal. The foreshortening of the captured images that results represents a systematic error in the power calculations because the software incorrectly assumes the image is parallel to the camera's array. We have written a FORTRAN computer program called CATARACT (camera/target angle correction) that we use to transform the original flux-mapper image to a plane that is normal to the camera's optical axis. A description of the code and the results of experiments performed to verify it are presented. Also presented are comparisons of the total power available from the HFSF as determined from the flux mapping system and theoretical considerations.
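The geometric correction CATARACT performs can be illustrated with a minimal sketch. Assuming, as a simplification of the real camera model, that the 20° viewing angle foreshortens the image along a single axis by cos θ, the transform stretches that axis by 1/cos θ and rescales intensities so that integrated power is preserved. The function name and the nearest-neighbour resampling are illustrative, not CATARACT's (which is FORTRAN):

```python
import math

def correct_foreshortening(img, theta_deg=20.0):
    """Undo foreshortening of a flux-map image viewed at angle theta.

    img: 2D list (rows x cols). Rows are assumed compressed by cos(theta)
    because the camera views the plate at angle theta to its normal.
    The row axis is stretched by 1/cos(theta) with nearest-neighbour
    resampling, and intensities are scaled by cos(theta) so that the
    integrated (total) power is approximately preserved.
    """
    c = math.cos(math.radians(theta_deg))
    nrows = len(img)
    out_rows = int(round(nrows / c))
    out = []
    for i in range(out_rows):
        src = min(nrows - 1, int(i * c))      # map back to the source row
        out.append([v * c for v in img[src]])
    return out
```

Integrating under the corrected image then gives a power estimate free of the systematic foreshortening bias described above.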
Grid cells generate an analog error-correcting code for singularly precise neural computation.
Sreenivasan, Sameet; Fiete, Ila
2011-09-11
Entorhinal grid cells in mammals fire as a function of animal location, with spatially periodic response patterns. This nonlocal periodic representation of location, a local variable, is unlike other neural codes. There is no theoretical explanation for why such a code should exist. We examined how accurately the grid code with noisy neurons allows an ideal observer to estimate location and found this code to be a previously unknown type of population code with unprecedented robustness to noise. In particular, the representational accuracy attained by grid cells over the coding range was in a qualitatively different class from what is possible with observed sensory and motor population codes. We found that a simple neural network can effectively correct the grid code. To the best of our knowledge, these results are the first demonstration that the brain contains, and may exploit, powerful error-correcting codes for analog variables.
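The robustness claim can be made concrete with a toy decoder, not taken from the paper: several "modules" with different spatial periods each report a noisy phase, and an ideal observer recovers location by brute-force maximum likelihood over candidate positions. All parameter values here are illustrative:

```python
import random

random.seed(0)  # reproducible noise

def decode_position(phases, periods, x_max=100.0, step=0.01):
    """Brute-force ML decode: find the x whose predicted phases best match."""
    best_x, best_err = 0.0, float("inf")
    x = 0.0
    while x <= x_max:
        err = 0.0
        for phi, lam in zip(phases, periods):
            d = abs((x % lam) / lam - phi)
            err += min(d, 1.0 - d) ** 2     # circular distance on the phase
        if err < best_err:
            best_x, best_err = x, err
        x += step
    return best_x

periods = [7.0, 11.0, 13.0]                 # module spatial periods (arbitrary units)
true_x = 53.7
# Each module reports its phase corrupted by Gaussian noise:
phases = [((true_x % lam) / lam + random.gauss(0, 0.02)) % 1.0 for lam in periods]
```

Even with phase noise in every module, the combined periodic code pins down location far more precisely than any single period could, which is the essence of the robustness the paper reports.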
A new class of codes for Boolean masking of cryptographic computations
Carlet, Claude; Kim, Jon-Lark; Solé, Patrick
2011-01-01
We introduce a new class of rate one-half binary codes: complementary information set codes. A binary linear code of length 2n and dimension n is called a complementary information set code (CIS code for short) if it has two disjoint information sets. This class of codes contains self-dual codes as a subclass. It is connected to graph correlation-immune Boolean functions of use in the security of hardware implementations of cryptographic primitives. Such codes make it possible to reduce the cost of masking cryptographic algorithms against side-channel attacks. In this paper we investigate this new class of codes: we give optimal or best-known CIS codes of length < 132. We derive general constructions based on cyclic codes and on double circulant codes. We derive a Varshamov-Gilbert bound for long CIS codes, and show that they can all be classified in small lengths ≤ 12 by the building-up construction. Some nonlinear S-boxes are constructed by using Z4-codes, based on the notion of dual distance of an unrestricte...
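The defining property, two disjoint information sets, is straightforward to test computationally: a set of n column indices is an information set exactly when the corresponding n x n submatrix of a generator matrix is invertible over GF(2). A small sketch, using the [8,4] extended Hamming code (self-dual, hence CIS) as the example:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a list of equal-length bit-lists."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def is_information_set(G, cols):
    """cols is an information set iff the k x k restricted submatrix is invertible."""
    sub = [[row[c] for c in cols] for row in G]
    return len(cols) == len(G) and gf2_rank(sub) == len(G)

# Generator matrix of the [8,4] extended Hamming code (self-dual):
G = [[1, 0, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1, 1, 1, 0]]
```

Here columns {0,1,2,3} and their complement {4,5,6,7} are both information sets, exhibiting the CIS property directly.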
Lin, J. W.; Erickson, T. A.
2011-12-01
Historically, the application of high-performance computing (HPC) to the atmospheric sciences has focused on using increases in processor speed, storage, and parallelization to run longer simulations of larger and more complex models. Such a focus, however, has led to a user culture in which code robustness and reusability are ignored or discouraged. Additionally, such a culture works against nurturing and growing connections between high-performance computational earth sciences and scientific users outside of that community. Given the explosion in computational power available to researchers unconnected with the traditional HPC centers, as well as the number of quality tools available for analysis and visualization, the programming insularity of the earth science modeling and analysis community acts as a formidable barrier to increasing the usefulness and robustness of computational earth science products. In this talk, we suggest that adopting best practices from the software engineering community, and in particular the open-source community, has the potential to improve the quality of code and increase the impact of earth sciences HPC. In particular, we will discuss the impact of practices such as unit testing and code review, the need for and preconditions of code reusability, and the importance of APIs and open frameworks in enabling scientific discovery across sub-disciplines. We will present examples of the cross-disciplinary fertilization possible with open APIs. Finally, we will discuss ways funding agencies and the computational earth sciences community can help encourage the adoption of such best practices.
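As a minimal illustration of the unit-testing practice advocated here, consider a small, hypothetical analysis helper (not from any specific model). Keeping units of code this small, and covering their edge cases with executable assertions, is exactly the discipline the talk argues scientific codes tend to skip:

```python
def layer_mean(values, missing=-999.0):
    """Mean of a profile, skipping missing-data sentinel values.

    A tiny, testable unit: small functions with explicit contracts are
    what make unit testing (and later reuse across analysis codes) practical.
    """
    good = [v for v in values if v != missing]
    if not good:
        raise ValueError("no valid data in profile")
    return sum(good) / len(good)

# Unit tests double as executable documentation of the edge cases:
assert layer_mean([1.0, 2.0, 3.0]) == 2.0
assert layer_mean([1.0, -999.0, 3.0]) == 2.0
```

A reviewer reading these assertions learns the missing-data convention without opening a manual, which is part of the reusability argument.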
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.
Directory of Open Access Journals (Sweden)
Marcos Antonio Klunk
Diagenetic reactions, characterized by the dissolution and precipitation of minerals at low temperatures, control the quality of sedimentary rocks as hydrocarbon reservoirs. Geochemical modeling, a tool used to understand diagenetic processes, is performed through computer codes based on thermodynamic and kinetic parameters. In a comparative study, we reproduced the diagenetic reactions observed in Snorre Field reservoir sandstones, Norwegian North Sea. These reactions had been previously modeled in the literature using the DISSOL-THERMAL code. In this study, we modeled the diagenetic reactions in the reservoirs using Geochemist's Workbench (GWB) and TOUGHREACT software, based on a convective-diffusive-reactive model and on the thermodynamic and kinetic parameters compiled for each reaction. TOUGHREACT and DISSOL-THERMAL modeling showed dissolution of quartz, K-feldspar and plagioclase in a similar temperature range, from 25 to 80°C. In contrast, GWB modeling showed dissolution of albite, plagioclase and illite, as well as precipitation of quartz, K-feldspar and kaolinite, in the same temperature range. The modeling generated by the different software packages at temperatures of 100, 120 and 140°C similarly showed the dissolution of quartz, K-feldspar, plagioclase and kaolinite, but differed in the precipitation of albite and illite. At temperatures of 150 and 160°C, GWB and TOUGHREACT produced results different from those of DISSOL-THERMAL, except for the dissolution of quartz, plagioclase and kaolinite. The comparative study allows choosing the numerical modeling software whose results are closest to the diagenetic reactions observed in the petrographic analysis of the modeled reservoirs.
Directory of Open Access Journals (Sweden)
Kumar Parijat Tripathi
RNA-seq is a new tool for measuring RNA transcript counts using high-throughput sequencing at extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting this extremely large data set into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for the BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery) tools. It offers a report on the statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features, and protein-protein interaction related information. It clusters the transcripts based on functional annotations and generates a tabular report of functional and gene ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software package for both NGS and array data. It helps users to characterize de-novo assembled reads obtained from NGS experiments on non-referenced organisms, and it also performs functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. The web application is freely available.
DIST: a computer code system for calculation of distribution ratios of solutes in the purex system
Energy Technology Data Exchange (ETDEWEB)
Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-05-01
Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DISTEX) and of Zr(IV) and Tc(VII) (DISTEXFP), together with calculation programs for the distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DIST1), and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters of the empirical equations, using the DISTEX data that fulfill the assigned conditions, and apply them to calculate the distribution ratios of the respective solutes. Approximately 5,000 data points are stored in DISTEX and DISTEXFP. The present report describes: 1) specific features of the DIST1 and DIST2 codes and examples of calculations; 2) the databases DISTEX and DISTEXFP, and the program DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and delete functions; and, in the annex, 3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G; 4) a user manual for DISTIN; 5) source listings of DIST1 and DIST2; and 6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.
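The best-fit step can be sketched generically. The following Python example (DIST itself is not Python, and its empirical equations are more elaborate) fits a hypothetical power-law correlation D = k·[HNO3]^a to distribution-ratio data by least squares in log space:

```python
import math

def fit_power_law(x, y):
    """Fit y = k * x**a by ordinary least squares on log(y) vs log(x).

    Illustrative stand-in for DIST's best-fit of empirical
    distribution-ratio equations; the functional form is hypothetical.
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    sx, sy = sum(lx), sum(ly)
    sxx = sum(v * v for v in lx)
    sxy = sum(u * v for u, v in zip(lx, ly))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope -> exponent
    k = math.exp((sy - a * sx) / n)                  # intercept -> prefactor
    return k, a

# Synthetic "measured" distribution ratios vs. nitric acid molarity:
acid = [0.5, 1.0, 2.0, 3.0, 4.0]
D = [2.0 * c ** 1.5 for c in acid]
```

Once the parameters are fitted against the database entries that satisfy the assigned conditions, the same correlation is evaluated forward to predict distribution ratios.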
Zuccaro, Antonio; Guarracino, Mario Rosario
2015-01-01
Energy Technology Data Exchange (ETDEWEB)
Blink, J.A.
1985-03-01
In this manual we describe the use of the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG runs on a Cray-1 computer and accepts more extensive activation cross sections than ORIGEN2, from which it was adapted. This report is an updated and combined version of the previous ORIGEN2 and FORIG manuals. 7 refs., 15 figs., 13 tabs.
Rathjen, K. A.
1977-01-01
A digital computer code, CAVE (Conduction Analysis Via Eigenvalues), which finds application in the analysis of two-dimensional transient heating of hypersonic vehicles, is described. CAVE is written in FORTRAN IV and is operational on both IBM 360-67 and CDC 6600 computers. The method of solution is a hybrid analytical-numerical technique that is inherently stable, permitting large time steps even with the best of conductors having the finest of mesh sizes. The aerodynamic heating boundary conditions are calculated by the code based on the input flight trajectory, or can optionally be calculated external to the code and then entered as input data. The code computes the network conduction and convection links, as well as capacitance values, given basic geometrical and mesh sizes, for four configurations (leading edges, cooled panels, X-24C structure and slabs). Input and output formats are presented and explained. Sample problems are included. A brief summary of the hybrid analytical-numerical technique, which utilizes eigenvalues (thermal frequencies) and eigenvectors (thermal mode vectors), is given, along with the aerodynamic heating equations that have been incorporated in the code, and flow charts.
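The eigenvalue idea behind a code like CAVE can be shown on a two-node conduction network. For dT/dt = A·T with a symmetric conduction matrix A, the solution is an exact sum of decaying thermal modes, so the temperature at any time is obtained without a time-step restriction. The 2x2 network below is illustrative only, not one of CAVE's built-in configurations:

```python
import math

def eig_sym2(a, b, c):
    """Eigen-decomposition of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr / 4.0 - det)
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    v1 = (b, l1 - a) if b else (1.0, 0.0)
    n1 = math.hypot(*v1)
    v1 = (v1[0] / n1, v1[1] / n1)
    v2 = (-v1[1], v1[0])            # orthogonal second eigenvector
    return (l1, v1), (l2, v2)

def temp_at(t, T0, A):
    """Modal solution T(t) = sum_i (v_i . T0) e^{lambda_i t} v_i of dT/dt = A T."""
    (l1, v1), (l2, v2) = eig_sym2(A[0][0], A[0][1], A[1][1])
    c1 = v1[0] * T0[0] + v1[1] * T0[1]
    c2 = v2[0] * T0[0] + v2[1] * T0[1]
    return [c1 * math.exp(l1 * t) * v1[0] + c2 * math.exp(l2 * t) * v2[0],
            c1 * math.exp(l1 * t) * v1[1] + c2 * math.exp(l2 * t) * v2[1]]

A = [[-2.0, 1.0], [1.0, -2.0]]      # two nodes coupled to each other and a sink
T0 = [1.0, 0.0]                     # initial temperatures
```

Because the modal solution is exact in time, evaluating it at a large t costs the same as at a small one, which is why the hybrid method tolerates time steps that would destabilize an explicit scheme.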
Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
Computational promoter analysis of mouse, rat and human antimicrobial peptide-coding genes
Directory of Open Access Journals (Sweden)
Kai Chikatoshi
2006-12-01
Background: Mammalian antimicrobial peptides (AMPs) are effectors of the innate immune response. A multitude of signals coming from pathways of mammalian pathogen/pattern recognition receptors and other proteins affect the expression of AMP-coding genes (AMPcgs). For many AMPcgs the promoter elements and transcription factors that control their tissue cell-specific expression have yet to be fully identified and characterized. Results: Based upon the RIKEN full-length cDNA and public sequence data derived from human, mouse and rat, we identified 178 candidate AMP transcripts derived from 61 genes belonging to 29 AMP families. However, only for 31 mouse genes belonging to 22 AMP families were we able to determine true orthologous relationships with 30 human and 15 rat sequences. We screened the promoter regions of AMPcgs in the three species for motifs by an ab initio motif-finding method and analyzed the derived promoter characteristics. Promoter models were developed for the alpha-defensin, penk and zap AMP families. The results suggest a core set of transcription factors (TFs) that regulate the transcription of AMPcg families in mouse, rat and human. The three most frequent core TF groups include liver-specific, nervous system-specific and nuclear hormone receptors (NHRs). Out of 440 motifs analyzed, we found that three represent potentially novel TF-binding motifs enriched in promoters of AMPcgs, while four other motifs appear to be species-specific. Conclusion: Our large-scale computational analysis of promoters of 22 families of AMPcgs across three mammalian species suggests that their key transcriptional regulators are likely to be TFs of the liver-specific, nervous system-specific and NHR groups. The computationally inferred promoter elements and potential TF-binding motifs provide a rich resource for targeted experimental validation of TF binding and signaling studies that aim at the regulation of mouse, rat or human AMPcgs.
Energy Technology Data Exchange (ETDEWEB)
Oster, C.A.
1976-02-01
DIGRTS is a computer program for calculating the concentration distribution of an inert gas diffusing through a composite solid that permits reversible trapping of the gas atoms. The program is coded entirely in FORTRAN IV. The composite solid can consist of up to ten regions, which are further subdivided so that the total number of nodes (subdivision points) is at most 200. The code can readily be modified should these limits be undesirable. The code permits only constant parameters for describing the diffusion process, but the extension to allow time-dependent parameters is clearly marked in the program. In addition to calculating the concentration distributions, the code also computes the time-dependent release fraction. (auth)
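A scheme of this kind can be sketched with an explicit finite-difference step in which each node exchanges gas between a mobile and a trapped population. The discretization below is illustrative, not DIGRTS's actual one:

```python
def step(c, trapped, D, k_trap, k_release, dx, dt):
    """One explicit time step of 1-D diffusion with reversible trapping.

    c       : mobile gas concentration at each node
    trapped : trapped gas concentration at each node
    Zero-flux (reflecting) boundaries; stability requires dt <= dx**2/(2*D).
    Illustrative scheme only.
    """
    n = len(c)
    new_c = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]            # reflecting boundary
        right = c[i + 1] if i < n - 1 else c[i]
        lap = (left - 2.0 * c[i] + right) / dx ** 2
        exchange = k_trap * c[i] - k_release * trapped[i]
        new_c[i] = c[i] + dt * (D * lap - exchange)   # diffuse and trap
        trapped[i] += dt * exchange                    # reversible trap fills
    return new_c, trapped
```

With zero-flux boundaries the step conserves total inventory (mobile plus trapped), a useful invariant for checking any implementation of this kind.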
Assessment of Turbulent Shock-Boundary Layer Interaction Computations Using the OVERFLOW Code
Oliver, A. B.; Lillard, R. P.; Schwing, A. M.; Blaisdell, G. A.; Lyrintzis, A. S.
2007-01-01
The performance of two popular turbulence models, the Spalart-Allmaras model and Menter's SST model, and one relatively new model, Olsen & Coakley's Lag model, is evaluated using the OVERFLOW code. Turbulent shock-boundary layer interaction predictions are evaluated with three different experimental datasets: a series of 2D compression ramps at Mach 2.87, a series of 2D compression ramps at Mach 2.94, and an axisymmetric cone-flare at Mach 11. The experimental datasets include flows with no separation, moderate separation, and significant separation, and use several different experimental measurement techniques, including laser Doppler velocimetry (LDV), Pitot-probe measurement, inclined hot-wire probe measurement, Preston-tube skin friction measurement, and surface pressure measurement. Additionally, the OVERFLOW solutions are compared with the solutions of a second CFD code, DPLR. The predictions for weak shock-boundary layer interactions are in reasonable agreement with the experimental data. For strong shock-boundary layer interactions, all of the turbulence models overpredict the separation size and fail to predict the correct skin friction recovery distribution. In most cases, surface pressure predictions show too much upstream influence; however, including the tunnel side-wall boundary layers in the computation improves the separation predictions.
Investigation of station blackout scenario in VVER440/v230 with RELAP5 computer code
Energy Technology Data Exchange (ETDEWEB)
Gencheva, Rositsa Veselinova, E-mail: roseh@mail.bg; Stefanova, Antoaneta Emilova, E-mail: antoanet@inrne.bas.bg; Groudev, Pavlin Petkov, E-mail: pavlinpg@inrne.bas.bg
2015-12-15
Highlights: • We have modeled SBO in VVER440. • RELAP5/MOD3 computer code has been used. • Base case calculation has been done. • Fail case calculation has been done. • Operator and alternative operator actions have been investigated. - Abstract: During the development of symptom-based emergency operating procedures (SB-EOPs) for VVER440/v230 units at Kozloduy Nuclear Power Plant (NPP), a number of analyses have been performed using the RELAP5/MOD3 computer code (Carlson et al., 1990). Some of them investigate the response of VVER440/v230 during station blackout (SBO). The main purpose of the analyses presented in this paper is to identify the behavior of important VVER440 parameters in case of total station blackout. RELAP5/MOD3 has been used to simulate the SBO in a VVER440 NPP model (Fletcher and Schultz, 1995). This model was developed at the Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences (INRNE-BAS), Sofia, for analyses of operational occurrences, abnormal events and design-basis scenarios. The model provides a significant analytical capability for specialists working in the field of NPP safety.
Stimulus Specificity of Brain-Computer Interfaces Based on Code Modulation Visual Evoked Potentials.
Directory of Open Access Journals (Sweden)
Qingguo Wei
A brain-computer interface (BCI) based on code modulated visual evoked potentials (c-VEP) is among the fastest BCIs ever reported, but it has not yet been studied thoroughly. In this study, a pseudorandom binary M-sequence and its time-lag sequences are used to modulate different stimuli, and template matching is adopted for target recognition. Five experiments were devised to investigate the effect of stimulus specificity on target recognition, and we sought the optimal values of the stimulus parameters: size, color and proximity of the stimuli, length of the modulation sequence, and its lag between two adjacent stimuli. By varying the value of each parameter and measuring the classification accuracy of the c-VEP BCI, an optimal value of each parameter can be obtained. Experimental results from ten subjects showed that a stimulus size of visual angle 3.8°, white color, spatial proximity of visual angle 4.8° center to center, a modulation sequence of length 63 bits, and a lag of 4 bits between adjacent stimuli yielded the best individual performance. These findings provide a basis for determining stimulus presentation in a high-performance c-VEP based BCI system.
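The modulation and recognition scheme described above (a 63-bit pseudorandom M-sequence, lagged copies of it for different stimuli, and template matching) can be sketched as follows. The LFSR polynomial x^6 + x + 1 and the additive-noise epoch model are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def m_sequence(length=63):
    """Binary m-sequence from the primitive polynomial x^6 + x + 1
    (recurrence a[n] = a[n-5] XOR a[n-6]), period 2^6 - 1 = 63."""
    reg = [1, 0, 0, 0, 0, 0]            # any nonzero seed works
    seq = []
    for _ in range(length):
        seq.append(reg[0])
        new = reg[1] ^ reg[0]           # a[n] = a[n-5] ^ a[n-6]
        reg = reg[1:] + [new]
    return np.array(seq)

def classify(epoch, base, lags):
    """Template matching: pick the lag whose circularly shifted template
    best correlates with the recorded epoch (sketch of the method)."""
    seq = 2*base - 1                    # map {0,1} -> {-1,+1}
    scores = [np.dot(epoch, np.roll(seq, lag)) for lag in lags]
    return lags[int(np.argmax(scores))]
```

The two-level autocorrelation of an m-sequence (63 at lag 0, -1 at every other lag) is what makes the lagged templates nearly orthogonal, and hence what makes the 4-bit lag between adjacent stimuli separable.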
A computational code for resolution of general compartment models applied to internal dosimetry
Energy Technology Data Exchange (ETDEWEB)
Claro, Thiago R.; Todo, Alberto S., E-mail: claro@usp.br, E-mail: astodo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioassay and knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and systemic distribution for an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of systems of differential equations, including semi-analytical and numerical methods. The software was developed in the C# programming language, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the results are in accordance with the values published by ICRP. (author)
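A minimal sketch of the semi-analytical approach to such compartmental models: a first-order transfer matrix is diagonalized and the solution written in closed form. The two-compartment chain and its rate constants below are hypothetical examples for illustration, not models from the CBT or SSID software:

```python
import numpy as np

def solve_compartments(A, n0, t):
    """Semi-analytical solution of dN/dt = A N as N(t) = V exp(D t) V^-1 N(0),
    valid when the transfer matrix A is diagonalizable (assumed here)."""
    w, V = np.linalg.eig(A)
    c = np.linalg.solve(V, n0)          # expansion of N(0) in eigenvectors
    return (V @ (c * np.exp(w * t))).real

# Illustrative example: parent -> daughter -> removal
lam1, lam2 = 0.3, 0.05                  # transfer rates (hypothetical)
A = np.array([[-lam1,  0.0],
              [ lam1, -lam2]])
```

For this chain the result can be checked against the closed-form Bateman solution, which is the kind of validation the abstract describes doing against ICRP Publication 78 values.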
V.S.O.P. (99/09) computer code system for reactor physics and fuel cycle simulation. Version 2009
Energy Technology Data Exchange (ETDEWEB)
Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Ohlig, U.; Pohl, C.; Scherer, W.
2010-07-15
V.S.O.P. (99/09) represents the further development of V.S.O.P. (99/05). Compared to its precursor, the code system has again been improved in many details. The main motivation for this new code version was to update the basic nuclear libraries used by the code system. Thus, all cross section libraries involved in the code are now based on ENDF/B-VII. V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It comprises the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to gas-cooled reactors and to two spatial dimensions. The code can simulate reactor operation from the initial core towards the equilibrium core. This latest code version was developed and tested under the Windows XP operating system. (orig.)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.
2010-12-01
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created using the UNK code for 3D diffusion computation of the first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity, as well as the reactivity coefficient due to the concentration of boric acid in the reactor, were also computed. Results of the computations are compared with experimental data.
Energy Technology Data Exchange (ETDEWEB)
Ball, J.; Glowa, G.; Wren, J. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Ewig, F. [GRS Koln (Germany); Dickenson, S. [AEAT, (United Kingdom); Billarand, Y.; Cantrel, L. [IPSN (France); Rydl, A. [NRIR (Czech Republic); Royen, J. [OECD/NEA (France)
2001-11-01
This report describes the results of the second phase of International Standard Problem (ISP) 41, an iodine behaviour code comparison exercise. The first phase of the study, which was based on a simple Radioiodine Test Facility (RTF) experiment, demonstrated that all of the iodine behaviour codes had the capability to reproduce iodine behaviour for a narrow range of conditions (single temperature, no organic impurities, controlled pH steps). The current phase, a parametric study, was designed to evaluate the sensitivity of iodine behaviour codes to boundary conditions such as pH, dose rate, temperature and initial I{sup -} concentration. The codes used in this exercise were IODE(IPSN), IODE(NRIR), IMPAIR(GRS), INSPECT(AEAT), IMOD(AECL) and LIRIC(AECL). The parametric study described in this report identified several areas of discrepancy between the various codes. In general, the codes agree regarding qualitative trends, but their predictions of the actual amount of volatile iodine varied considerably. The largest source of the discrepancies between code predictions appears to be their different approaches to modelling the formation and destruction of organic iodides. A recommendation arising from this exercise is that an additional code comparison exercise be performed on organic iodide formation, against data obtained from intermediate-scale studies (two RTF (AECL, Canada) and two CAIMAN (IPSN, France) experiments have been chosen). This comparison will allow each of the code users to realistically evaluate and improve the organic iodide behaviour sub-models within their codes. (author)
Energy Technology Data Exchange (ETDEWEB)
Ikushima, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-05-01
A computer code system CASKET (CASK thermal and structural analyses and Evaluation code system) has been developed for the thermal and structural analyses that are indispensable in the design of radioactive material transport and/or storage casks. CASKET is a simplified computer code system for performing parametric analyses and sensitivity evaluations when designing a cask and conducting its safety analysis. The main features of CASKET are as follows: (1) impact analysis of casks with shock absorbers; (2) impact analysis of casks with fins; (3) puncture analysis of casks; (4) rocking analysis of casks under seismic loads; (5) a material property data library for cask impact analysis; (6) a material property data library for cask thermal analysis; (7) a fin energy absorption data library for impact analysis of casks with fins; and (8) availability not only on mainframe computers (OS MSP) but also on workstations (OS UNIX) and personal computers (OS Windows 3.1). In the paper, brief illustrations of the calculation methods are presented. Some calculation results are compared with experimental ones to confirm that the computer programs are useful for thermal and structural analyses. (author)
Shershnev, Anton A.; Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Khotyanovsky, Dmitry V.
2016-10-01
The present paper describes HyCFS code, developed for numerical simulation of compressible high-speed flows on hybrid CPU/GPU (Central Processing Unit / Graphical Processing Unit) computational clusters on the basis of full unsteady Navier-Stokes equations, using modern shock capturing high-order TVD (Total Variation Diminishing) and WENO (Weighted Essentially Non-Oscillatory) schemes on general curvilinear structured grids. We discuss the specific features of hybrid architecture and details of program implementation and present the results of code verification.
Energy Technology Data Exchange (ETDEWEB)
Bandini, B.R. [Los Alamos National Lab., NM (United States)
1990-05-01
No present light water reactor accident analysis code employs both state-of-the-art neutronics and state-of-the-art thermal-hydraulics computational algorithms. Adding a modern three-dimensional neutron kinetics model to the present TRAC-PF1/MOD2 code would create a fully up-to-date pressurized water reactor accident evaluation code. After reviewing several options, it was decided that the Nodal Expansion Method would best provide the basis for this multidimensional transient neutronic analysis capability. Steady-state and transient versions of the Nodal Expansion Method were coded in both three-dimensional Cartesian and cylindrical geometries. In stand-alone form, this method of solving the few-group neutron diffusion equations was shown to yield efficient and accurate results for a variety of steady-state and transient benchmark problems. The Nodal Expansion Method was then incorporated into TRAC-PF1/MOD2. The combined NEM/TRAC code results agreed well with those of the EPRI-ARROTTA core-only transient analysis code when modelling a severe PWR control rod ejection accident.
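The Nodal Expansion Method itself is too involved for a short sketch, but the kind of eigenvalue problem a steady-state neutronics module solves can be illustrated with its simplest relative: a one-group, one-dimensional finite-difference diffusion solve by power iteration. All cross sections below are illustrative values, not data from the thesis:

```python
import numpy as np

def keff_slab(D=1.0, sig_a=0.05, nu_sig_f=0.06, L=100.0, n=200):
    """k-effective of a bare homogeneous slab, zero-flux edges:
    solve A phi = (1/k) nuSigf phi by power iteration on the
    fission source (finite differences, NOT the Nodal Expansion Method)."""
    dx = L / n
    m = n - 1                                   # interior mesh points
    A = (np.diag(np.full(m, 2.0*D/dx**2 + sig_a))
         + np.diag(np.full(m - 1, -D/dx**2), 1)
         + np.diag(np.full(m - 1, -D/dx**2), -1))
    phi = np.ones(m)
    k = 1.0
    for _ in range(500):                        # power iteration
        phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
        k *= phi_new.sum() / phi.sum()
        phi = phi_new
    return k
```

For a bare slab the analytic answer is k = nuSigf / (Siga + D B^2) with buckling B = pi/L, which gives a direct benchmark of the iteration.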
Directory of Open Access Journals (Sweden)
Chia-Chang Hu
2005-04-01
A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, O(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is a function of O((JNS)^3). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.
Comparison of AMOS computer code wakefield real part impedances with analytic results
Energy Technology Data Exchange (ETDEWEB)
Mayhall, D J; Nelson, S D
2000-11-30
We have performed eleven AMOS (Azimuthal Mode Simulator)[1] code runs with a simple, right circular cylindrical accelerating cavity inserted into a circular, cylindrical, lossless beam pipe to calculate the real part of the n = 1 (dipole) transverse wakefield impedance of this structure. We have compared this wakefield impedance in units of ohms/m ({Omega}/m) over the frequency range of 0-1 GHz to analytic predictions from Equation (2.3.8) of Briggs et al[2]. The results from Equation (2.3.8) were converted from the CGS units of statohms to the MKS units of ohms ({Omega}) and then multiplied by (2{pi}f)/c = {omega}/c = 2{pi}/{lambda}, where f is the frequency in Hz, c is the speed of light in vacuum in m/sec, {omega} is the angular frequency in radians/sec, and {lambda} is the wavelength in m. The dipole transverse wakefield impedance written to file from AMOS must be multiplied by c/{omega} to convert it from units of {Omega}/m to units of {Omega}. The agreement between the AMOS runs and the analytic predictions is excellent for computational grids with square cells (dz = dr) and good for grids with rectangular cells (dz < dr). The quantity dz is the fixed-size axial grid spacing, and dr is the fixed-size radial grid spacing. We have also performed one AMOS run for the same geometry to calculate the real part of the n = 0 (monopole) longitudinal wakefield impedance of this structure. We have compared this wakefield impedance in units of {Omega} with analytic predictions from Equation (1.4.8) of Briggs et al[2] converted to the MKS units of {Omega}. The agreement between the two results is excellent in this case. For the monopole longitudinal wakefield impedance written to file from AMOS, nothing must be done to convert the results to units of {Omega}. In each case, the computer calculations were carried out to 50 nsec of simulation time.
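The unit conversions described above are easy to get wrong, so a small helper makes them explicit. The statohm-to-ohm constant is the standard CGS-to-MKS value; the function simply applies the omega/c factor from the text (a convenience sketch, not part of AMOS):

```python
import math

C_LIGHT = 299792458.0                     # speed of light, m/s
OHM_PER_STATOHM = 8.9875517873681764e11   # 1 statohm expressed in ohms

def transverse_impedance_ohm_per_m(z_statohm, f_hz):
    """Convert a CGS transverse impedance (statohm) to MKS ohms, then
    multiply by omega/c = 2*pi*f/c to obtain ohm/m, as described above."""
    z_ohm = z_statohm * OHM_PER_STATOHM
    return z_ohm * 2.0 * math.pi * f_hz / C_LIGHT
```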
On codes, matroids and secure computation from linear secret sharing schemes
Cramer, R.J.F.; Daza, V.; Gracia, J.L.; Jimenez Urroz, J.; Leander, G.; Marti-Farre, J.; Padro, C.
2008-01-01
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are ...
On the performance of a 2D unstructured computational rheology code on a GPU
Pereira, S.P.; Vuik, K.; Pinho, F.T.; Nobrega, J.M.
2013-01-01
The present work explores the massively parallel capabilities of the most advanced architecture of graphics processing units (GPUs), code-named “Fermi”, on a two-dimensional unstructured cell-centred finite volume code. We use the SIMPLE algorithm to solve the continuity and momentum equations that ...
[Is the Furness-Moore Code applicable for computer and telex?].
Rötzscher, K
1979-12-01
The extent to which the identification of disaster victims could be improved with the code for dental findings developed by Furness and Moore (1969) was studied. This code records the most important data for a set of teeth in 12 digits.
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T A
2016-01-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This, in particular, obviates the use of coefficients of fractional parentage. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}, where q_M are the model's quadrupole moments, and pi_N are the corresponding conjugate momenta (-2 <= M,N <= 2). The code also provides ready access to SO(3)-reduced SO(5) Clebsch-Gordan coefficients through data files provided with the code.
Directory of Open Access Journals (Sweden)
Eric Psota
2010-01-01
The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.
Kumar, A.; Graeves, R. A.
1980-06-01
A user's guide is provided for the computer code COLTS (Coupled Laminar and Turbulent Solutions), which calculates laminar and turbulent hypersonic flows with radiation and coupled ablation injection past a Jovian entry probe. Time-dependent viscous-shock-layer equations are used to describe the flow field. These equations are solved by an explicit, two-step, time-asymptotic finite-difference method. Eddy viscosity in the turbulent flow is approximated by a two-layer model. In all, 19 chemical species are used to describe the injection of carbon-phenolic ablator into the hydrogen-helium gas mixture. The equilibrium composition of the mixture is determined by a free-energy minimization technique. A detailed frequency dependence of the absorption coefficient for the various species is considered to obtain the radiative flux. The code is written for a CDC CYBER 203 computer and can also provide solutions for ablated probe shapes.
Energy Technology Data Exchange (ETDEWEB)
Srinath Vadlamani; Scott Kruger; Travis Austin
2008-06-19
Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc from the DOE SciDAC TOPS project, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully implemented the multigrid solvers on a fusion test problem with real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
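The idea behind a multigrid preconditioner can be illustrated on the smallest possible example: a two-grid cycle for the 1D Poisson equation, with damped Jacobi smoothing and an exact coarse-grid solve. This is a toy stand-in for the HYPRE/PETSc machinery, not NIMROD code; injection restriction and the 3+3 smoothing sweeps are arbitrary choices for the sketch:

```python
import numpy as np

def weighted_jacobi(u, f, h, sweeps, w=2.0/3.0):
    """Damped Jacobi smoothing for -u'' = f (Dirichlet zero BCs)."""
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[2:] + u[:-2] + h*h*f[1:-1] - 2.0*u[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle on [0,1]; grid must have 2^k + 1 points."""
    u = weighted_jacobi(u.copy(), f, h, 3)          # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[2:] - 2.0*u[1:-1] + u[:-2]) / (h*h)  # residual
    rc = r[::2].copy()                              # restrict by injection
    nc = rc.size
    Ac = (2.0*np.eye(nc - 2) - np.eye(nc - 2, k=1)
          - np.eye(nc - 2, k=-1)) / (2.0*h)**2      # coarse operator
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])        # exact coarse solve
    e = np.interp(np.arange(u.size),
                  np.arange(0, u.size, 2), ec)      # linear prolongation
    return weighted_jacobi(u + e, f, h, 3)          # post-smoothing
```

The smoother kills high-frequency error, the coarse correction kills the smooth error the smoother cannot touch; that division of labor is what makes a multigrid cycle an effective preconditioner for ill-conditioned elliptic-like systems.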
Energy Technology Data Exchange (ETDEWEB)
Greene, N.M.; Forsberg, V.M.; Raiford, G.B.; Arwood, J.W.; Flanagan, G.F.
1979-01-01
SACRD is a data base of material properties and other handbook data needed in computer codes used for fast reactor safety studies. This document lists the contents of Version 1 and also serves as a glossary of terminology used in the data base. Data are available in the thermodynamics, heat transfer, fluid mechanics, structural mechanics, aerosol transport, meteorology, neutronics and dosimetry areas. Tabular, graphical and parameterized data are provided in many cases.
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
Sato, Jun-Ichi; Washizawa, Yoshikazu
2015-08-01
We propose two methods to improve code modulation visual evoked potential brain-computer interfaces (cVEP BCIs). Most BCIs average brain signals over several trials in order to improve classification performance. The number of averaged trials defines the trade-off between input speed and accuracy, and the optimal number depends on the individual, the signal acquisition system, and so forth. First, we propose a novel dynamic method to estimate the averaging number for cVEP BCIs. The proposed method is based on the automatic repeat request (ARQ) scheme used in communication systems. Existing cVEP BCIs employ rather long codes, such as the 63-bit M-sequence. The code length also defines the trade-off between input speed and accuracy. Since the reliability of the proposed BCI can be controlled by the proposed ARQ method, we introduce shorter codes: a 32-bit M-sequence and the Kasami sequence. By combining the dynamic estimation of the averaging number with the shorter codes, the proposed system exhibited a higher information transfer rate compared to existing cVEP BCIs.
Schmidt, James F.
1995-01-01
An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed, and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. The code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to the off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also included is a comparison of the off-design code predictions with experimental data, which generally shows good agreement.
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust. This result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results of a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
Energy Technology Data Exchange (ETDEWEB)
Chen, S. Y.; Yu, C.; Mo, T.; Trottier, C.
2000-10-17
In 1999, the US Nuclear Regulatory Commission (NRC) tasked Argonne National Laboratory to modify the existing RESRAD and RESRAD-BUILD codes to perform probabilistic, site-specific dose analysis for use with the NRC's Standard Review Plan for demonstrating compliance with the license termination rule. The RESRAD codes have been developed by Argonne to support the US Department of Energy's (DOE's) cleanup efforts. Through more than a decade of application, the codes have already established a large user base in the nation and rigorous QA support. The primary objectives of the NRC task are to: (1) extend the codes' capabilities to include probabilistic analysis, and (2) develop parameter distribution functions and perform probabilistic analysis with the codes. The new codes also contain user-friendly features specially designed with a graphical user interface. In October 2000, the revised RESRAD (version 6.0) and RESRAD-BUILD (version 3.0), together with the user's guide and relevant parameter information, were completed and made available to the general public via the Internet.
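The probabilistic-analysis idea reads, in sketch form, as: sample the input parameters from assigned distributions and push each sample through the deterministic dose model. The one-line "dose model" and all distributions below are hypothetical placeholders for illustration, not RESRAD's pathway equations or its parameter database:

```python
import numpy as np

def probabilistic_dose(n_samples=10000, seed=1):
    """Monte Carlo propagation of parameter distributions through a
    toy dose model; returns the 5th, 50th and 95th dose percentiles."""
    rng = np.random.default_rng(seed)
    conc = rng.lognormal(mean=0.0, sigma=0.5, size=n_samples)  # soil conc.
    transfer = rng.triangular(0.1, 0.3, 0.6, size=n_samples)   # transfer factor
    intake = rng.uniform(50.0, 150.0, size=n_samples)          # intake rate
    dcf = 1.0e-3                                               # fixed dose coeff.
    dose = conc * transfer * intake * dcf                      # toy pathway model
    return np.percentile(dose, [5, 50, 95])
```

Reporting percentiles of the resulting dose distribution, rather than a single deterministic value, is the essential deliverable of the probabilistic extension the abstract describes.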
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (-2 ≤ M,N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_{LM}. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
Energy Technology Data Exchange (ETDEWEB)
Watson, S.B.; Ford, M.R.
1980-02-01
A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations for computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit on intake, and derived air concentration.
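The arithmetic of the second and third modules can be sketched for the simplest possible retention model, a single exponential. The rate constants, dose coefficient, and dose limit below are illustrative placeholders, not ICRP values:

```python
import math

def cumulated_activity(intake_bq, lam_rad, lam_bio, t=50*365.25*86400):
    """Integral of A0*exp(-lam_eff*t') over the 50-year commitment period:
    the number of decays in the source organ (Bq*s) per intake."""
    lam_eff = lam_rad + lam_bio           # effective removal rate (1/s)
    return intake_bq * (1.0 - math.exp(-lam_eff * t)) / lam_eff

def committed_dose(intake_bq, see_sv_per_decay, lam_rad, lam_bio):
    """Committed dose equivalent = cumulated activity x specific
    effective energy (module 1 would supply the SEE value)."""
    return cumulated_activity(intake_bq, lam_rad, lam_bio) * see_sv_per_decay

def annual_limit_on_intake(limit_sv, see_sv_per_decay, lam_rad, lam_bio):
    """ALI = dose limit / committed dose per unit intake."""
    return limit_sv / committed_dose(1.0, see_sv_per_decay, lam_rad, lam_bio)
```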
A parallel code for multiprecision computations of the Lane-Emden differential equation
Geroyannis, Vassilis S
2016-01-01
We compute multiprecision solutions of the Lane-Emden equation. This differential equation arises when the well-known polytropic model is introduced into the equation of hydrostatic equilibrium for a nondistorted star. Since such multiprecision computations are time-consuming, we apply parallel programming techniques to this problem, drastically reducing the execution time of the computations.
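A double-precision (not multiprecision) sketch of the integration: classical RK4 applied to the Lane-Emden equation theta'' + (2/xi) theta' + theta^n = 0, theta(0) = 1, theta'(0) = 0, started just off the centre with the series theta ~ 1 - xi^2/6 to avoid the coordinate singularity. For polytropic index n = 1 the exact solution sin(xi)/xi gives a direct check:

```python
import math

def lane_emden(n, xi_max, h=1e-3):
    """Integrate outward to xi_max, assumed inside the first zero of theta
    (so theta stays positive and theta**n is well defined)."""
    xi = 1e-6                            # start just off the centre
    y = 1.0 - xi*xi/6.0                  # series: theta ~ 1 - xi^2/6
    dy = -xi/3.0
    def f2(x, y, dy):                    # theta'' from the ODE
        return -y**n - 2.0*dy/x
    while xi < xi_max:
        s = min(h, xi_max - xi)          # shorten the final step
        k1y, k1d = dy, f2(xi, y, dy)
        k2y, k2d = dy + 0.5*s*k1d, f2(xi + 0.5*s, y + 0.5*s*k1y, dy + 0.5*s*k1d)
        k3y, k3d = dy + 0.5*s*k2d, f2(xi + 0.5*s, y + 0.5*s*k2y, dy + 0.5*s*k2d)
        k4y, k4d = dy + s*k3d, f2(xi + s, y + s*k3y, dy + s*k3d)
        y += s*(k1y + 2*k2y + 2*k3y + k4y)/6.0
        dy += s*(k1d + 2*k2d + 2*k3d + k4d)/6.0
        xi += s
    return y
```

The paper's point is that double precision like this is insufficient for some applications of the model; the same algorithm carried out in multiprecision arithmetic is what motivates the parallelization.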
MULTI-IFE-A one-dimensional computer code for Inertial Fusion Energy (IFE) target simulations
Ramis, R.; Meyer-ter-Vehn, J.
2016-06-01
The code MULTI-IFE is a numerical tool devoted to the study of Inertial Fusion Energy (IFE) microcapsules. It includes the relevant physics for the implosion and thermonuclear ignition and burning: hydrodynamics of two-component plasmas (ions and electrons), three-dimensional laser light ray-tracing, thermal diffusion, multigroup radiation transport, deuterium-tritium burning, and alpha particle diffusion. The corresponding differential equations are discretized in spherical one-dimensional Lagrangian coordinates. Two typical application examples, a high-gain laser-driven capsule and a low-gain radiation-driven marginally igniting capsule, are discussed. In addition to phenomena relevant for IFE, the code also includes components (planar and cylindrical geometries, transport coefficients at low temperature, explicit treatment of Maxwell's equations) that extend its range of applicability to laser-matter interaction at moderate intensities (<10{sup 16} W cm{sup -2}). The source code design has been kept simple and structured with the aim of encouraging users' modifications for specialized purposes.
A Code to Compute High Energy Cosmic Ray Effects on Terrestrial Atmospheric Chemistry
Krejci, Alex J; Thomas, Brian C
2008-01-01
A variety of events, such as gamma-ray bursts, may expose the Earth to an increased flux of high-energy cosmic rays, with potentially important effects on the biosphere. An atmospheric code, the NASA-Goddard Space Flight Center two-dimensional (latitude, altitude) time-dependent atmospheric model (NGSFC), can be used to study atmospheric chemistry changes, so the effect of astrophysically created high-energy cosmic rays on atmospheric chemistry can now be studied. A table has been created that, used with the NGSFC code, allows simulation of the effects of high-energy cosmic rays (10 GeV to 1 PeV) ionizing the atmosphere. We discuss the table, its use, and its weaknesses and strengths.
Energy Technology Data Exchange (ETDEWEB)
Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.
1986-11-01
This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
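The heart of any straight-line Gaussian plume model is a closed-form concentration formula. A minimal sketch follows, with the dispersion parameters sigma_y and sigma_z passed in directly; this is the textbook formula only, while ANEMOS derives the sigmas from stability class and downwind distance and adds depletion, decay, daughter in-growth, and sector averaging not shown here:

```python
from math import exp, pi

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration at crosswind offset y and
    height z, for source strength Q, wind speed u, effective release height H,
    and given dispersion parameters sigma_y, sigma_z (all SI units)."""
    crosswind = exp(-y * y / (2.0 * sigma_y ** 2))
    # direct plume plus an image source reflected in the ground plane
    vertical = (exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2)) +
                exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * pi * u * sigma_y * sigma_z) * crosswind * vertical
```

At ground level (z = 0) the image term exactly doubles the direct term, which is the usual total-reflection assumption.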
User's manual for PRESTO: A computer code for the performance of regenerative steam turbine cycles
Fuller, L. C.; Stovall, T. K.
1979-01-01
The code models standard turbine cycles for baseload power plants, as well as cycles with additional features such as process steam extraction and induction and feedwater heating by external heat sources. Peaking and high back pressure cycles are also included. The code uses expansion-line efficiencies, exhaust loss, leakages, mechanical losses, and generator losses to calculate the heat rate and generator output. A general description of the code is given, as well as instructions for input data preparation. Two complete example cases are appended.
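The cycle bookkeeping described above can be caricatured in a few lines. This is schematic arithmetic only, with illustrative function names and numbers; PRESTO's actual expansion-line, exhaust-loss, and leakage correlations are far more detailed:

```python
def generator_output_kw(ideal_work_kw, expansion_eff, mech_loss_kw, gen_eff):
    """Shaft work from an isentropic work estimate times the expansion-line
    efficiency, minus mechanical losses, times generator efficiency."""
    return (ideal_work_kw * expansion_eff - mech_loss_kw) * gen_eff

def heat_rate_btu_per_kwh(heat_input_btu_per_hr, output_kw):
    """Heat rate: thermal input per unit of electrical output."""
    return heat_input_btu_per_hr / output_kw
```

A lower heat rate means a more efficient cycle; 3412.14 Btu/kWh would correspond to 100% conversion.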
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View
Montanari, Andrea
2007-01-01
These are the notes for a set of lectures delivered by the two authors at the Les Houches Summer School on `Complex Systems' in July 2006. They provide an introduction to the basic concepts in modern (probabilistic) coding theory, highlighting connections with statistical mechanics. We also stress common concepts with other disciplines dealing with similar problems that can be generically referred to as `large graphical models'. While most of the lectures are devoted to the classical channel coding problem over simple memoryless channels, we present a discussion of more complex channel models. We conclude with an overview of the main open challenges in the field.
MELCOR computer code manuals: Primer and user's guides, Version 1.8.3 September 1994. Volume 1
Energy Technology Data Exchange (ETDEWEB)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the US Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users' Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Directory of Open Access Journals (Sweden)
Koniges Alice
2013-11-01
The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns and transverse dimension of order 1 mm. NDCX II will be used in studies of material in the warm dense matter (WDM) regime and in ion beam/hydrodynamic coupling experiments relevant to heavy-ion-based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX II will explore the process of bubble and droplet formation (two-phase expansion of superheated metal solids) using ion beams. Experiments at higher temperatures will explore the equation of state and heavy-ion-fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. We also briefly discuss the effects of the move to exascale computing, and related computational changes, on general modeling codes in fusion.
Energy Technology Data Exchange (ETDEWEB)
Landers, N.F.; Petrie, L.M.; Knight, J.R. [Oak Ridge National Lab., TN (United States)] [and others
1995-04-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--the control module documentation; Volume 2--the functional module documentation; and Volume 3--the data libraries and subroutine libraries.
Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.
2016-10-01
Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far below the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists and applied to a wide range of physics studies, from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also provides additional physics modules that treat binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and some of the physics problems tackled with SMILEI.
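The inner loop of any electromagnetic PIC code is the particle push. The classic Boris velocity update is sketched below as a hedged illustration of the technique in its simplest, non-relativistic form; it is not SMILEI's implementation, which is relativistic and heavily vectorized:

```python
def cross(a, b):
    """3-vector cross product."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, E, B, q, m, dt):
    """One Boris velocity update: half electric kick, norm-preserving
    magnetic rotation, half electric kick. E and B are the interpolated
    fields at the particle position."""
    h = q * dt / (2.0 * m)
    v_minus = [v[i] + h * E[i] for i in range(3)]       # first half kick
    t = [h * B[i] for i in range(3)]                    # rotation vector
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    w = cross(v_minus, t)
    v_prime = [v_minus[i] + w[i] for i in range(3)]
    w2 = cross(v_prime, s)
    v_plus = [v_minus[i] + w2[i] for i in range(3)]     # rotated velocity
    return [v_plus[i] + h * E[i] for i in range(3)]     # second half kick
```

The rotation step conserves the particle's kinetic energy exactly in a pure magnetic field, which is the property that makes the Boris scheme the workhorse of PIC codes.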
Energy Technology Data Exchange (ETDEWEB)
Petrie, L.M.; Jordon, W.C. [Oak Ridge National Lab., TN (United States); Edwards, A.L. [Oak Ridge National Lab., TN (United States)]|[Lawrence Livermore National Lab., CA (United States)] [and others
1995-04-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--the control module documentation; Volume 2--the functional module documentation; and Volume 3--the data libraries and subroutine libraries.
Simultaneous fluid-flow, heat-transfer and solid-stress computation in a single computer code
Energy Technology Data Exchange (ETDEWEB)
Spalding, D.B. [Concentration Heat and Momentum Ltd, London (United Kingdom)
1997-12-31
Computer simulation of flow- and thermally-induced stresses in mechanical-equipment assemblies has, in the past, required the use of two distinct software packages: one to determine the forces and temperatures, and the other to compute the resultant stresses. The present paper describes how a single computer program can perform both tasks at the same time. The technique relies on the similarity of the equations governing velocity distributions in fluids to those governing displacements in solids; the same SIMPLE-like algorithm is used for solving both. Applications to 1-, 2- and 3-dimensional situations are presented. It is further suggested that Solid-Fluid-Thermal (SFT) analysis may come to replace both CFD and the analysis of stresses in solids by performing the functions of both. (author) 7 refs.
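The similarity the paper exploits can be seen in one dimension: steady heat conduction in a bar and axial displacement of a loaded elastic rod discretize to the same tridiagonal system, so one solver serves both. A sketch under those assumptions (uniform grid, constant properties, zero end values); this illustrates only the shared-solver idea, not Spalding's SIMPLE algorithm itself:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm."""
    n = len(rhs)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def solve_second_order(coeff, load, n=99):
    """Solve -coeff * y'' = load on (0, 1) with y(0) = y(1) = 0.
    With coeff = thermal conductivity this is steady heat conduction;
    with coeff = axial stiffness it is the displacement of a loaded rod."""
    h = 1.0 / (n + 1)
    a = [-coeff / h ** 2] * n       # sub-diagonal
    b = [2.0 * coeff / h ** 2] * n  # diagonal
    c = [-coeff / h ** 2] * n       # super-diagonal
    return thomas(a, b, c, [load] * n)
```

The same `solve_second_order` call yields a temperature field or a displacement field depending only on how the coefficient is interpreted, which is the essence of the SFT argument.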
CAVE3: A general transient heat transfer computer code utilizing eigenvectors and eigenvalues
Palmieri, J. V.; Rathjen, K. A.
1978-01-01
The method of solution is a hybrid analytical-numerical technique utilizing eigenvalues and eigenvectors. The method is inherently stable, permitting large time steps even for the best conductors with the finest mesh sizes; this can provide a factor-of-five reduction in machine time compared to conventional explicit finite difference methods when structures with small time constants are analyzed over long time periods. The code will find utility in analyzing hypersonic missile and aircraft structures, which fall naturally into this class. The code is completely general in that problems involving any geometry, boundary conditions, and materials can be analyzed. This is made possible by requiring the user to establish the thermal network conductances between nodes. Dynamic storage allocation is used to minimize core storage requirements. This report is primarily a user's manual for the CAVE3 code. Input and output formats are presented and explained. Sample problems are included that illustrate the usage of the code and establish the validity and accuracy of the method.
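The eigenvector idea can be illustrated on a two-node thermal network: expanding the initial temperatures in the eigenmodes of the conductance matrix gives an exact solution at any time in a single step, which is why the method tolerates arbitrarily large time steps. A hedged two-node sketch follows (illustrative conductances, unit capacitances; CAVE3 handles general user-defined networks):

```python
from math import exp

def modal_solution(T0, g, g_sink, t):
    """Exact temperatures of two identical unit-capacitance nodes coupled by
    conductance g, each tied to a zero-temperature sink by g_sink, at time t.
    The symmetric and antisymmetric eigenmodes decay independently, so the
    result is exact for any t -- no stability limit on the step size."""
    s = 0.5 * (T0[0] + T0[1]) * exp(-g_sink * t)              # symmetric mode
    d = 0.5 * (T0[0] - T0[1]) * exp(-(2.0 * g + g_sink) * t)  # antisymmetric mode
    return [s + d, s - d]

def explicit_euler(T0, g, g_sink, t, steps):
    """Reference: conventional explicit marching, accurate only for small steps."""
    h = t / steps
    T = list(T0)
    for _ in range(steps):
        f0 = -(g + g_sink) * T[0] + g * T[1]
        f1 = g * T[0] - (g + g_sink) * T[1]
        T = [T[0] + h * f0, T[1] + h * f1]
    return T
```

One modal evaluation replaces the many thousands of explicit steps that stability would otherwise demand for a stiff (well-conducting, finely meshed) network.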
Investigate Methods to Decrease Compilation Time-AX-Program Code Group Computer Science R& D Project
Energy Technology Data Exchange (ETDEWEB)
Cottom, T
2003-06-11
Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods that could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall-clock time a developer spends waiting for an executable. Analyzing the make system of a slow-to-build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build, and restructuring the system to maximize parallelization, can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors for building and can help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. The combined time improvements can be significant. Initial compilation time for Kull on OSF1 was approximately 3 hours; the final time on OSF1 after completion is 16 minutes. Initial compilation time for Kull on AIX was approximately 2 hours; the final time on AIX after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.
Use of numerical simulation computer codes for fire problems in nuclear power plants in Finland
Energy Technology Data Exchange (ETDEWEB)
Keski-Rahkonen, O.; Eloranta, E. (Valtion Teknillinen Tutkimuskeskus, Espoo (Finland). Fire Technology Lab.); Huhtanen, R. (Valtion Teknillinen Tutkimuskeskus, Helsinki (Finland). Nuclear Engineering Lab.)
1991-03-01
Zone and field model codes are used in Finland for fire simulations, including nuclear facilities. Two examples are described here: (a) calculation of the evaporation rate of a pool fire (8 MW) in a compartment using FIRST, and (b) calculation of an oil spill fire (180 MW) in a turbine hall using PHOENICS. (orig.).
Energy Technology Data Exchange (ETDEWEB)
Kroeger, P.G.; Kennett, R.J.; Colman, J.; Ginsberg, T. (Brookhaven National Lab., Upton, NY (United States))
1991-10-01
This report documents the THATCH code, which can be used to model general thermal and flow networks of solids and coolant channels in two-dimensional r-z geometries. The main application of THATCH is to model reactor thermo-hydraulic transients in High-Temperature Gas-Cooled Reactors (HTGRs). The available modules simulate pressurized or depressurized core heatup transients and heat transfer to general exterior sinks or to specific passive Reactor Cavity Cooling Systems, which can be air- or water-cooled. Graphite oxidation during air or water ingress can be modelled, including the effects of added combustion products on the gas flow and the additional chemical energy release. A point kinetics model is available for analyzing reactivity excursions, for instance due to water ingress, and also for hypothetical no-scram scenarios. For most HTGR transients, which generally range over hours, a user-selected nodalization of the core in r-z geometry is used. However, a separate model of heat transfer in the symmetry element of each fuel element is also available for very rapid transients. This model can be applied coupled to the traditional coarser r-z nodalization. This report describes the mathematical models used in the code and the method of solution. It describes the code and its various sub-elements. Details of the input data and file usage, with file formats, are given for the code, as well as for several preprocessing and postprocessing options. The THATCH model of the currently applicable 350 MW(th) reactor is described. Input data for four sample cases are given, with output available in fiche form. Installation requirements and code limitations, as well as the most common error indications, are listed. 31 refs., 23 figs., 32 tabs.
Gobbo, F.; Benini, M.
2015-01-01
This article analyses the knowledge needed to understand a computer program within the philosophy of information. L. Floridi's method of levels of abstraction is applied to the relation between an ideal programmer and a modern computer seen together as an informational organism. The results obtained
Energy Technology Data Exchange (ETDEWEB)
Virtanen, E.
1995-12-31
In this study, three loss-of-feedwater experiments performed with the PACTEL facility were calculated with two computer codes. The purpose of the experiments was to gain information about the behaviour of a horizontal steam generator in a situation where the water level on the secondary side of the steam generator is decreasing. At the same time, data were assembled that can be used in the assessment of thermal-hydraulic computer codes. The purpose of the work was to study the capabilities of two computer codes, APROS version 2.11 and RELAP5/MOD3.1, to calculate the phenomena in a horizontal steam generator. To make the comparison of the calculation results easier, the same kind of steam generator model was built for both codes. Only the steam generator was modelled; the rest of the facility was given to the codes as boundary conditions. (23 refs.).
Acceleration of Computational Fluid Dynamics Codes on GPU
Institute of Scientific and Technical Information of China (English)
董廷星; 李新亮; 李森; 迟学斌
2011-01-01
Computational Fluid Dynamics (CFD) codes based on incompressible Navier-Stokes, compressible Euler, and compressible Navier-Stokes solvers are ported to NVIDIA GPUs. As validation tests, we have simulated a two-dimensional cavity flow, a Riemann problem, and transonic flow over a RAE2822 airfoil. A maximum speedup of 33.2x is reported in our tests. To maximize GPU code performance, we also explore a number of GPU-specific optimization strategies. The GPU code gives the expected results when compared with the CPU code and with experiment, demonstrating that GPU computing has good compatibility and a bright future.
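The kind of kernel that benefits most from a GPU port is a data-parallel stencil update, where every grid point depends only on the previous iterate and can be computed by an independent thread. A plain-Python Jacobi iteration for the 2D Laplace equation shows the access pattern (grid size, iteration count, and boundary choice here are illustrative, not from the paper's solvers):

```python
def jacobi_laplace(n=11, iters=2000):
    """Jacobi relaxation for Laplace's equation on an n-by-n grid with
    Dirichlet boundary u = x, whose exact harmonic solution is u = x.
    Every interior update reads only the previous iterate, so all (i, j)
    updates are independent -- exactly the pattern a GPU parallelizes,
    one thread per grid point."""
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j] = j / (n - 1)  # boundary value u = x
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1])
        u = new
    return u
```

On a GPU the two inner loops disappear: each (i, j) cell becomes one thread, and the double buffering (`u` versus `new`) avoids read-write races.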
Energy Technology Data Exchange (ETDEWEB)
Nichols, B. D.; Mueller, C.; Necker, G. A.; Travis, J. R.; Spore, J. W.; Lam, K. L.; Royl, P.; Wilson, T. L.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code, as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution, mixing, and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility, and of the resulting pressure and temperature loadings on the walls and internal structures, with or without combustion. A major application of GASFLOW is predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving the transport and distribution of combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic or diffusion-dominated flows, and in chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III
MacLeod, Matthew K
2015-01-01
Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. With full internal contraction the size of first-order wave functions scales polynomially with the number of active orbitals. The CASPT2 gradient program and the code generator are both publicly available. This work enables the CASPT2 geometry optimization of molecules as complex as those investigated by respective single-point calculations.
1975-06-01
[Figures: heat-transfer distribution comparisons; Run 207 and Run 208 camphor shape change predictions.] The film coefficient approach enables the modeling of heterogeneous reaction and sublimation kinetics and of unequal species diffusion coefficients. As an exercise of the shape change numerical procedures in the EROS computer code, two camphor shape change solutions were generated.
Delta: An object-oriented finite element code architecture for massively parallel computers
Energy Technology Data Exchange (ETDEWEB)
Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.
1996-02-01
Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different engineering science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.
3D-MAPTOR Code for computation of magnetic fields in tokamaks
ChÁVez-AlarcÓN, Esteban; Herrera-VelÁZquez, Julio
2009-11-01
A three-dimensional code has been developed to determine the magnetic field in tokamaks, starting from the assumption that the toroidal and vertical field coils are all circular, as is the cross section of the plasma current distribution. It was earlier used to study the stochastization of the outer magnetic surfaces [1] and to reconstruct the evolution of the plasma column using the experimental signals of tokamak discharges; these results were compared with tomographic reconstructions of the ISTTOK tokamak [2]. We present an upgrade of the code in which rectangular toroidal field coils and D-shaped plasma current cross sections can be included. The code is particularly useful for studying the effect of the ripple along the toroidal coordinate. [1] E. Chávez, et al., "Stochastization of Magnetic Field Surfaces in Tokamaks by an Inner Coil", in Plasma and Fusion Science, AIP Conference Proceedings Series 875 (2006) pp. 347-349. [2] B.B. Carvalho, et al., "Real-time plasma control based on the ISTTOK tomography diagnostic", Rev. Sci. Instrum. 79 (2008) 10F329.
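For circular coils, the field at any point can be assembled by summing Biot-Savart contributions of short current segments. The sketch below is illustrative only (not code from 3D-MAPTOR): it checks the on-axis field of a single circular loop against the analytic result B_z = mu0 I R^2 / (2 (R^2 + z^2)^(3/2)):

```python
from math import cos, sin, pi

MU0 = 4.0e-7 * pi  # vacuum permeability (SI)

def loop_field_on_axis(I, R, z, n_seg=2000):
    """Axial field B_z of a circular loop of radius R carrying current I,
    evaluated on the axis at height z, by summing Biot-Savart contributions
    of straight segments approximating the loop."""
    Bz = 0.0
    for k in range(n_seg):
        p0 = 2.0 * pi * k / n_seg
        p1 = 2.0 * pi * (k + 1) / n_seg
        pm = 0.5 * (p0 + p1)
        # chord vector of the segment (z-component is zero for a flat loop)
        dlx = R * (cos(p1) - cos(p0))
        dly = R * (sin(p1) - sin(p0))
        # vector from the segment midpoint to the field point (0, 0, z)
        rx, ry, rz = -R * cos(pm), -R * sin(pm), z
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        # z-component of dl x r gives the axial field contribution
        Bz += MU0 / (4.0 * pi) * I * (dlx * ry - dly * rx) / r3
    return Bz
```

Summing such loops for every toroidal and vertical field coil, and for the plasma current, is the basic operation a field-line code of this kind performs at every point.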
Computational identification of human long intergenic non-coding RNAs using a GA-SVM algorithm.
Wang, Yanqiu; Li, Yang; Wang, Qi; Lv, Yingli; Wang, Shiyuan; Chen, Xi; Yu, Xuexin; Jiang, Wei; Li, Xia
2014-01-01
Long intergenic non-coding RNAs (lincRNAs) are a new type of non-coding RNA and are closely related to the occurrence and development of diseases. In previous studies, most lincRNAs have been identified through next-generation sequencing. Because lincRNAs exhibit tissue-specific expression, the reproducibility of lincRNA discovery across studies is very poor. In this study, without using lincRNA expression, we used sequence, structural, and protein-coding potential features to construct a classifier that can distinguish lincRNAs from non-lincRNAs. The GA-SVM algorithm was used to extract the optimized feature subset. Compared with several other feature subsets, five-fold cross-validation showed that this optimized subset exhibited the best performance for the identification of human lincRNAs. Moreover, the LincRNA Classifier based on Selected Features (linc-SF) was constructed with a support vector machine (SVM) using the optimized feature subset. The performance of this classifier was further evaluated by predicting lincRNAs from two independent lincRNA sets. Because the recognition rates for the two sets were 100% and 99.8%, linc-SF was found to be effective for the prediction of human lincRNAs.
Directory of Open Access Journals (Sweden)
Fabio Burderi
2007-05-01
Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
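Unique decipherability of a finite code, the baseline notion the paper weakens, is itself decidable by the classical Sardinas-Patterson test, which the following sketch implements (a standard textbook algorithm, not code from the paper):

```python
def dangling_suffixes(A, B):
    """Proper suffixes w such that a + w = b for some a in A, b in B."""
    return {b[len(a):] for a in A for b in B
            if b.startswith(a) and len(b) > len(a)}

def is_uniquely_decodable(code):
    """Sardinas-Patterson test for unique decipherability of a finite code."""
    C = set(code)
    seen = set()
    S = dangling_suffixes(C, C)  # suffixes arising from codeword pairs
    while S:
        if S & C:        # a codeword is itself a dangling suffix: ambiguous
            return False
        if S <= seen:    # no new suffixes can ever appear: decodable
            return True
        seen |= S
        S = dangling_suffixes(C, S) | dangling_suffixes(S, C)
    return True
```

For example, {0, 01, 10} is not UD because "010" parses both as 0·10 and as 01·0, while every prefix-free code passes the test immediately.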
Energy Technology Data Exchange (ETDEWEB)
Madela, Vinicius Zacarias; Pauliny, Luis F. de A.; Veras, Carlos A. Gurgel [Brasilia Univ., DF (Brazil). Dept. de Engenharia Mecanica]. E-mail: gurgel@enm.unb.br; Costa, Fernando de S. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Combustao e Propulsao]. E-mail: fernando@cptec.inpe.br
2000-07-01
This work presents results obtained from the simulation of multi-fuel micro turbine combustion chambers. In particular, predictions for methane and Diesel burning are presented. The appropriate routines of the CHEMKIN III computer code were used.
Energy Technology Data Exchange (ETDEWEB)
Guasp, J.; Navarro, C.
1973-07-01
A FORTRAN V computer code for the UNIVAC 1108/6, using a local Optical Model with spin-orbit interaction, is described. The code calculates fast neutron cross sections, angular distributions, and Legendre moments for heavy and intermediate spherical nuclei. It allows for automatic variation of the potential parameters for fitting experimental data. (Author) 55 refs.
Gudoshnikov, A. N.; Migrov, Yu. A.
2008-11-01
Calculations to verify the Russian computer code KORSAR were carried out for the B4.1 experimental conditions, in which nitrogen was supplied to the reactor coolant (primary) circuit of a reactor plant model, as simulated at the PKL III integral test facility. It is shown that dissolution of gases in the coolant has a significant effect on the thermal-hydraulic processes during long-term passive removal of heat from the primary to the secondary coolant circuit of the reactor plant model under natural circulation conditions.
Alam, Tanvir
2016-11-28
Regulation and function of protein-coding genes are increasingly well understood, but no comparable evidence exists for non-coding RNA (ncRNA) genes, which appear to be more numerous than protein-coding genes. We developed a novel machine-learning model to distinguish promoters of long ncRNA (lncRNA) genes from those of protein-coding genes. This represents the first attempt to make this distinction based on properties of the associated gene promoters. From our analyses, several transcription factors (TFs), which are known to be regulated by lncRNAs, also emerged as potential global regulators of lncRNAs, suggesting that lncRNAs and TFs may participate in a bidirectional feedback regulatory network. Our results also raise the possibility that, due to the historical dependence on protein-coding genes in defining the chromatin states of active promoters, an adjustment of these chromatin signature profiles to incorporate lncRNAs is warranted in the future. Secondly, we developed a novel method to infer functions for lncRNA and microRNA (miRNA) transcripts based on their transcriptional regulatory networks in 119 tissues and 177 primary cells of human. This method for the first time combines information on the cell/tissue-specific expression of a transcript with the TFs and transcription co-factors (TcoFs) that control activation of that transcript. Transcripts were annotated using statistically enriched GO terms, pathways, and diseases across cells/tissues, and an associated knowledgebase (FARNA) was developed. FARNA, having the most comprehensive function annotation of the considered ncRNAs across the widest spectrum of cells/tissues, has the potential to contribute to our understanding of ncRNA roles and their regulatory mechanisms in human. Thirdly, we developed a novel machine-learning model to identify the LD motif (a protein interaction motif) of paxillin, a ncRNA target that is involved in cell motility and cancer metastasis. Our recognition model identified new proteins not
Directory of Open Access Journals (Sweden)
Mohammadnia Meysam
2013-01-01
Full Text Available The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we used this method to solve for the intra-nodal flux analytically. A computer code, named MA.CODE, was then developed using the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. MA.CODE imports two-group constants from the WIMS code and calculates the effective multiplication factor, thermal and fast neutron flux in three dimensions, power density, reactivity, and the power peaking factor of each fuel assembly. Among the code's merits are its short calculation time and user-friendly interface. MA.CODE results showed good agreement with IAEA benchmarks, i.e., AER-FCM-101 and AER-FCM-001.
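The power peaking factor that MA.CODE reports for each fuel assembly is, at its simplest, the ratio of the hottest assembly's power to the core-average assembly power. A minimal sketch of that definition (the assembly powers below are hypothetical, not MA.CODE output):

```python
def power_peaking_factor(assembly_powers):
    """Ratio of the hottest assembly's power to the core-average power."""
    avg = sum(assembly_powers) / len(assembly_powers)
    return max(assembly_powers) / avg

# Hypothetical relative assembly powers for a small hexagonal core
powers = [1.00, 1.10, 0.95, 1.25, 0.90, 0.80]
ppf = power_peaking_factor(powers)
```

In a nodal code of this kind the assembly powers would come from integrating the computed power density over each hexagonal node.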
Energy Technology Data Exchange (ETDEWEB)
Dragulescu, E.; Duma, M.; Ivascu, M.; Popescu, D.; Semenescu, G.; Mihu, R.
1981-01-01
This paper presents a package of computer programs, to be used as a tool for the obtaining of spectroscopic information, such as theoretical yields, reduced transition probabilities and multipole mixing ratios from experimental Coulomb excitation data. 12 references.
A computer code for beam optics calculation--third order approximation
Institute of Scientific and Technical Information of China (English)
L(U) Jianqin; LI Jinhai
2006-01-01
To calculate beam transport in ion optical systems accurately, a beam dynamics computer program of third-order approximation was developed. Many conventional optical elements are incorporated in the program. Particle distributions of uniform or Gaussian type in (x, y, z) 3D ellipses can be selected by the user. Optimization procedures are provided to make the calculations reasonable and fast. The calculated results can be displayed graphically on the computer monitor.
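At first order, beam transport codes of this kind chain per-element transfer matrices; a third-order code adds aberration terms on top of the same structure. A sketch of the first-order backbone in the (x, x') plane (element lengths and focal length are illustrative, not from the paper):

```python
import numpy as np

def drift(L):
    """First-order transfer matrix of a drift of length L in the (x, x') plane."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (focusing in x for f > 0)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Trace a particle (x = 1 mm, x' = 0) through drift-quad-drift;
# matrices apply right-to-left, in the order the particle meets the elements.
x0 = np.array([1e-3, 0.0])
M = drift(0.5) @ thin_quad(0.25) @ drift(0.5)
x1 = M @ x0
```

Third-order codes replace the linear map x1 = M x0 with a polynomial map containing terms up to cubic in the initial coordinates.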
Emergency Doses (ED) - Revision 3: A calculator code for environmental dose computations
Energy Technology Data Exchange (ETDEWEB)
Rittmann, P.D.
1990-12-01
The calculator program ED (Emergency Doses) was developed from several HP-41CV calculator programs documented in the report Seven Health Physics Calculator Programs for the HP-41CV, RHO-HS-ST-5P (Rittmann 1984). The program was developed to enable more rapid and reliable estimates of offsite impacts than was possible with the software available for emergency response at that time. ED - Revision 3, documented in this report, revises the inhalation dose model to match that of ICRP 30 and adds simple estimates of the air concentration downwind from a chemical release. In addition, the method for calculating the Pasquill dispersion parameters was revised to match the GENII code within the limitations of a hand-held calculator (e.g., plume rise and building wake effects are not included). The summary report generator for printed output, which had been present in the code from the original version, was eliminated in Revision 3 to make room for the dispersion model, the chemical release portion, and the methods of looping back to an input menu until there are no further changes. This program runs on the Hewlett-Packard programmable calculators known as the HP-41CV and the HP-41CX. The documentation for ED - Revision 3 includes a guide for users, sample problems, detailed verification tests and results, model descriptions, a code description (with program listing), and an independent peer review. This software is intended to be used by individuals with some training in the use of air transport models. Some user inputs require intelligent application of the model to the actual conditions of the accident. The results calculated using ED - Revision 3 are only correct to the extent allowed by the mathematical models. 9 refs., 36 tabs.
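The dispersion estimates described above rest on the standard Gaussian plume relation; with plume rise and building wake effects excluded (as in ED), the centerline ground-level air concentration for a ground release reduces to a one-line formula. A sketch with hypothetical values (the sigmas would come from the Pasquill dispersion parameters at the downwind distance of interest):

```python
import math

def ground_level_concentration(Q, u, sigma_y, sigma_z):
    """Centerline ground-level concentration for a continuous ground release:
    chi = Q / (pi * u * sigma_y * sigma_z), the Gaussian plume relation with
    no plume rise or building wake -- the same simplifications ED makes."""
    return Q / (math.pi * u * sigma_y * sigma_z)

# Hypothetical values: 1 unit/s release, 2 m/s wind, sigmas at some downwind x
chi = ground_level_concentration(Q=1.0, u=2.0, sigma_y=30.0, sigma_z=15.0)
```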
Energy Technology Data Exchange (ETDEWEB)
MacLeod, Matthew K.; Shiozaki, Toru [Department of Chemistry, Northwestern University, 2145 Sheridan Rd., Evanston, Illinois 60208 (United States)
2015-02-07
Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. The implementation has been applied to the vertical and adiabatic ionization potentials of the porphin molecule to illustrate its capability.
SUPERENERGY-2: a multiassembly, steady-state computer code for LMFBR core thermal-hydraulic analysis
Energy Technology Data Exchange (ETDEWEB)
Basehore, K.L.; Todreas, N.E.
1980-08-01
Core thermal-hydraulic design and performance analyses for Liquid Metal Fast Breeder Reactors (LMFBRs) require repeated detailed multiassembly calculations to determine radial temperature profiles and subchannel outlet temperatures for various core configurations and subassembly structural analyses. At steady-state, detailed core-wide temperature profiles are required for core restraint calculations and subassembly structural analysis. In addition, sodium outlet temperatures are routinely needed for each reactor operating cycle. The SUPERENERGY-2 thermal-hydraulic code was designed specifically to meet these designer needs. It is applicable only to steady-state, forced-convection flow in LMFBR core geometries.
Directory of Open Access Journals (Sweden)
Priti Srinivas Sajja
2015-04-01
Full Text Available This paper presents an architecture for a multi-agent system in the healthcare domain. The architecture is generic and designed in the form of multiple layers. One layer contains many proactive, co-operative, and intelligent agents, such as a resource management agent, query agent, pattern detection agent, and patient management agent. Another layer is a collection of libraries to auto-generate code for agents using soft computing techniques; at this stage, code for artificial neural networks and fuzzy logic has been developed and encompassed in this layer. The agents use this code to develop neural network, fuzzy logic, or hybrid solutions such as neuro-fuzzy solutions. The third layer encompasses a knowledge base, metadata, and other local databases. The multi-layer architecture is supported by personalized user interfaces for friendly interaction with its users. The framework is generic, flexible, and designed for a distributed environment like the Web; with minor modifications it can be employed on grid or cloud platforms. The paper also discusses detailed design issues, suitable applications, and future enhancements of the work.
Energy Technology Data Exchange (ETDEWEB)
Park, Soo Yong; Kim, Ko Ryu; Kim, Dong Ha; Kim, See Darl; Song, Yong Mann; Choi, Young; Jin, Young Ho
2005-03-15
The objective of the project is to develop generic severe accident management guidance (SAMG) applicable to Korean PHWRs, and the objective of this three-year continued phase is to construct the base of the generic SAMG. Another objective is to improve a domestic computer code, ISAAC (Integrated Severe Accident Analysis code for CANDU), which still has many deficiencies that must be remedied before it can be applied to SAMG development. The scope and contents of this Phase-2 work are as follows. The characteristics of major design and operation features of the domestic Wolsong NPP are analyzed from the severe accident perspective, and on that basis preliminary strategies for SAM of PHWRs are selected. The information needed for SAM and the methods to obtain that information are analyzed. Both the individual strategies applicable for accident mitigation under PHWR severe accident conditions and the technical background for those strategies are developed. A new version, ISAAC 2.0, has been developed after analyzing and modifying the existing models of ISAAC 1.0. The generic SAMG applicable to PHWRs establishes severe accident management techniques for emergencies, provides the base for utility companies to develop plant-specific SAMGs, and ultimately contributes to public safety enhancement as an NPP safety-assurance step. The ISAAC code will be indispensable for PSA, living PSA, severe accident analysis, SAM program development, and operator training in PHWRs.
Institute of Scientific and Technical Information of China (English)
GUO Hong
2001-01-01
[1] Sacks, R. A., The PROP 92 Fourier Beam Propagation Code, UCRL-LR-105821-96-4. [2] Williams, W. H., Modeling of Self-Focusing Experiments by Beam Propagation Codes, UCRL-LR-105821-96-1. [3] User guide for FRESNEL software. [4] Hunt, J. H., Renard, P. A., Simmons, W. W., Improved performance of fusion lasers using the imaging properties of multiple spatial filters, Appl. Opt., 1977, 16: 779. [5] Deng Ximing, Guo Hong, Cao Qing, Invariant integral and statistical equations for the paraxial beam propagation in free space, Science in China (in Chinese), Ser. A, 1997, 27(1): 64. [6] Goodman, J. W., Introduction to Fourier Optics, New York: McGraw-Hill, 1968. [7] Born, M., Wolf, E., Principles of Optics, New York: Pergamon Press, 1975. [8] Siegman, A. E., Lasers, Mill Valley, CA: University Science Books, 1986. [9] Fan Dianyuan, Fresnel number of complex system, Optica Sinica (in Chinese), 1983, 3(4): 319. [10] L
An efficient code to compute nonparallel steady flows and their linear stability
Dijkstra, H.A.; Molemaker, M.J.; Van der Ploeg, A.; Botta, E.F.F.
1995-01-01
A simple, fast and efficient algorithm to compute steady non-parallel flows and their linear stability in parameter space is described. The pseudo-arclength continuation method is used to trace branches of steady states as one of the parameters is varied. To determine the linear stability of each st
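The pseudo-arclength idea can be sketched for a scalar equation f(x, lam) = 0: augment Newton's method with an arclength constraint so the branch can be traced through folds, where naive continuation in lam alone would fail. This is a toy illustration of the technique, not the paper's algorithm for discretized flow equations:

```python
import numpy as np

def continue_branch(f, dfdx, dfdl, x0, l0, ds=0.1, steps=60):
    """Trace a branch of f(x, lam) = 0 by pseudo-arclength continuation.
    Each step solves an augmented 2x2 Newton system whose second row is the
    arclength constraint, so the branch can be followed around folds."""
    pts = [(x0, l0)]
    tx, tl = 0.0, 1.0  # initial tangent: advance in lam
    for _ in range(steps):
        x, l = pts[-1]
        xp, lp = x + ds * tx, l + ds * tl  # predictor along the tangent
        for _ in range(30):  # Newton corrector on [f; arclength constraint]
            F = np.array([f(xp, lp),
                          tx * (xp - x) + tl * (lp - l) - ds])
            J = np.array([[dfdx(xp, lp), dfdl(xp, lp)],
                          [tx, tl]])
            d = np.linalg.solve(J, -F)
            xp, lp = xp + d[0], lp + d[1]
            if np.hypot(d[0], d[1]) < 1e-12:
                break
        ntx, ntl = xp - x, lp - l
        nrm = np.hypot(ntx, ntl)
        tx, tl = ntx / nrm, ntl / nrm  # secant tangent for the next step
        pts.append((xp, lp))
    return pts

# Fold example: x^2 + lam - 1 = 0 has a turning point at lam = 1
branch = continue_branch(lambda x, l: x * x + l - 1,
                         lambda x, l: 2 * x, lambda x, l: 1.0,
                         x0=-1.0, l0=0.0)
```

Continuation in lam alone would break down at the fold (df/dx = 0 there); the augmented system stays nonsingular and the trace continues onto the upper part of the branch.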
High Performance Computing of Three-Dimensional Finite Element Codes on a 64-bit Machine
Directory of Open Access Journals (Sweden)
M.P. Raju
2012-01-01
Full Text Available Three-dimensional Navier-Stokes finite element formulations require huge computational power in terms of memory and CPU time. Recent developments in sparse direct solvers have significantly reduced the memory and computational time of direct solution methods. The objective of this study is twofold. The first is to evaluate the performance of various state-of-the-art sequential sparse direct solvers in the context of finite element formulations of fluid flow problems. The second is to examine the merit of upgrading from a 32-bit machine to a 64-bit machine with larger RAM capacity, in terms of the capacity to solve larger problems. The choice of a direct solver depends on its computational time and its in-core memory requirements. Here four different solvers, UMFPACK, MUMPS, HSL_MA78, and PARDISO, are compared. The performance of these solvers with respect to computational time and memory requirements on a 64-bit Windows server machine with 16 GB RAM is evaluated.
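The factorize-once, solve-many pattern common to all four solvers can be illustrated with SciPy's built-in SuperLU interface, standing in here for UMFPACK/MUMPS/HSL_MA78/PARDISO (none of which are shown):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assemble a sparse system (1D Poisson stencil) and solve it with a direct
# factorization. splu wraps SuperLU; the solvers compared in the study
# expose the same factorize/solve split through their own APIs.
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
lu = spla.splu(A)      # factorize once (the memory-dominant step)...
x = lu.solve(b)        # ...then solve cheaply for each right-hand side
residual = np.linalg.norm(A @ x - b)
```

The in-core memory requirement discussed in the abstract is driven by the fill-in generated during the factorization step, not by the solve step.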
Sosedkin, Alexander
2015-01-01
LCODE is a freely distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, specialized mainly in resource-efficient studies of the long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating at the speed of light; the plasma can be simulated with either a kinetic or a fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enables LCODE to perform the calculations in parallel: a pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) can execute multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.
Robust Coding for Lossy Computing with Receiver-Side Observation Costs
Ahmadi, Behzad
2011-01-01
An encoder wishes to minimize the bit rate necessary to guarantee that a decoder is able to calculate a symbol-wise function of a sequence available only at the encoder and a sequence that can be measured only at the decoder. This classical problem, first studied by Yamamoto, is addressed here by including two new aspects: (i) The decoder obtains noisy measurements of its sequence, where the quality of such measurements can be controlled via a cost-constrained "action" sequence; (ii) Measurement at the decoder may fail in a way that is unpredictable to the encoder, thus requiring robust encoding. The considered scenario generalizes known settings such as the Heegard-Berger-Kaspi and the "source coding with a vending machine" problems. The rate-distortion-cost function is derived in relevant special cases, along with general upper and lower bounds. Numerical examples are also worked out to obtain further insight into the optimal system design.
Energy Technology Data Exchange (ETDEWEB)
Sosedkin, A.P.; Lotov, K.V. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation)
2016-09-01
LCODE is a freely distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, specialized mainly in resource-efficient studies of the long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating at the speed of light; the plasma can be simulated with either a kinetic or a fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enables LCODE to perform the calculations in parallel: a pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) can execute multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.
Development of computer code SAFFRON for evaluating breached pin performance in FBR's
Energy Technology Data Exchange (ETDEWEB)
Ukai, Shigeharu; Shikakura, Sakae (Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center); Sano, Yuji; Takita, Masami
1994-07-01
In order to evaluate breached pin behavior in FBRs, the breached pin performance analysis code SAFFRON was developed. Based on the results of run-beyond-cladding-breach tests in EBR-II, conducted as a collaborative program between PNC and the U.S. DOE, the following behaviors were taken into consideration: fuel-sodium reaction product (FSRP) formation, the resultant fuel expansion, breach extension of the cladding, and release of delayed neutron precursors into the coolant. Using three-dimensional elastic analysis by the finite element method, the increase in breached pin diameter is adequately predicted with a reduced Young's modulus for the breached fuel. The delayed neutron signal response in on-line diagnosis was evaluated in relation to the growth of the FSRP and breached area enlargement. (author).
Sosedkin, A. P.; Lotov, K. V.
2016-09-01
LCODE is a freely distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, specialized mainly in resource-efficient studies of the long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating at the speed of light; the plasma can be simulated with either a kinetic or a fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enables LCODE to perform the calculations in parallel: a pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) can execute multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.
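The time-step pipeline can be illustrated schematically: each stage advances the state by one step and hands it to the next, so several consecutive steps are in flight at once over a stream of beam slices. This toy uses Python threads and queues rather than MPI processes, purely to show the dataflow, and the "time step" is a stand-in function:

```python
import queue
import threading

def make_stage(step, q_in, q_out):
    """One pipeline stage: apply `step` to every item until a None sentinel."""
    def run():
        while True:
            item = q_in.get()
            if item is None:
                q_out.put(None)  # propagate shutdown downstream
                break
            q_out.put(step(item))
    return threading.Thread(target=run)

step = lambda x: x + 1                      # stand-in for one simulation step
qs = [queue.Queue() for _ in range(4)]      # queues linking 3 stages
threads = [make_stage(step, qs[i], qs[i + 1]) for i in range(3)]
for t in threads:
    t.start()
for slice_state in [0, 10, 20]:             # stream of beam slices
    qs[0].put(slice_state)
qs[0].put(None)
results = []
while True:
    r = qs[-1].get()
    if r is None:
        break
    results.append(r)
for t in threads:
    t.join()
```

Once the pipeline is full, all three stages work concurrently, which is the source of the speedup the abstract describes.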
Computer vision for detecting and quantifying gamma-ray sources in coded-aperture images
Energy Technology Data Exchange (ETDEWEB)
Schaich, P.C.; Clark, G.A.; Sengupta, S.K.; Ziock, K.P.
1994-11-02
The authors report the development of an automatic image analysis system that detects gamma-ray source regions in images obtained from a coded-aperture gamma-ray imager. The number of gamma sources in the image is not known prior to analysis. The system counts the number (K) of gamma sources detected in the image and estimates a lower bound for the probability that the number of sources in the image is K. The system consists of a two-stage pattern classification scheme in which a Probabilistic Neural Network is used in the supervised learning mode. The algorithms were developed and tested using real gamma-ray images from controlled experiments in which the number and location of depleted uranium source disks in the scene are known.
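The reconstruction step that precedes source counting can be illustrated in a periodic 1D toy: the detector records the scene circularly convolved with the mask, and circular correlation with a balanced decoder recovers the source positions. The length-7 m-sequence below (a perfect difference set) is a stand-in for the instrument's actual 2D aperture pattern:

```python
import numpy as np

# Toy 1D periodic coded-aperture imaging. The m-sequence mask has the
# property that its circular correlation with the balanced decoder
# (2*mask - 1) is a delta function, so a point source reconstructs exactly.
mask = np.array([1, 1, 1, 0, 0, 1, 0], dtype=float)  # length-7 m-sequence
decoder = 2 * mask - 1
n = len(mask)
scene = np.zeros(n)
scene[2] = 5.0                                       # one point source
detector = np.array([sum(mask[j] * scene[(i - j) % n] for j in range(n))
                     for i in range(n)])             # circular convolution
recon = np.array([sum(decoder[j] * detector[(i + j) % n] for j in range(n))
                  for i in range(n)])                # circular correlation
peak = int(np.argmax(recon))
```

Source detection and counting, as in the paper's classifier, would then operate on peaks of this reconstruction rather than on the raw detector image.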
Three computer codes to read, plot and tabulate operational test-site recorded solar data
Stewart, S. D.; Sampson, R. S., Jr.; Stonemetz, R. E.; Rouse, S. L.
1980-01-01
Computer programs used to process data that will be used in the evaluation of collector efficiency and solar system performance are described. The program, TAPFIL, reads data from an IBM 360 tape containing information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites. Two other programs, CHPLOT and WRTCNL, plot and tabulate the data from the direct access, unformatted TAPFIL file. The methodology of the programs, their inputs, and their outputs are described.
Energy Technology Data Exchange (ETDEWEB)
Yokoyama, Kenji; Hazama, Taira; Chiba, Go; Ohki, Shigeo; Ishikawa, Makoto [Japan Nuclear Cycle Development Inst., Oarai, Ibaraki (Japan). Oarai Engineering Center
2002-12-01
In the core design of fast reactors (FRs), it is very important to improve the prediction accuracy of the nuclear characteristics, both for reducing cost and for ensuring reliability of FR plants. A nuclear reactor analysis code system for FRs has been developed by the Japan Nuclear Cycle Development Institute (JNC). This paper describes the outline of the calculation models and methods in the system, which consists of several analysis codes, such as the cell calculation code CASUP, the core calculation code TRITAC, and the sensitivity analysis code SAGEP. Some examples of verification results and improvement of the design accuracy are also introduced, based on the measurement data from critical assemblies, e.g., the JUPITER experiment (USA/Japan), FCA (Japan), MASURCA (France), and BFS (Russia). Furthermore, application fields and future plans, such as the development of new-generation nuclear constants and applications to MA·FP transmutation, are described. (author)
A computer code to calculate the fast induced signals by electron swarms in gases
Energy Technology Data Exchange (ETDEWEB)
Tobias, Carmen C.B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Mangiarotti, Alessio [Universidade de Coimbra (Portugal). Dept. de Fisica. Lab. de Instrumentacao e Fisica Experimental de Particulas
2010-07-01
Full text: The study of electron transport parameters (i.e. drift velocity, diffusion coefficients, and the first Townsend coefficient) in gases is very important in several areas of applied nuclear science. For example, they are a relevant input to the design of particle detectors employing micro-structures (MSGCs, Micromegas, GEMs) and RPCs (resistive plate chambers). Moreover, if the data are accurate and complete enough, they can be used to derive a set of electron impact cross-sections with their energy dependence, which are a key ingredient in micro-dosimetry calculations. Despite the fundamental need for such data and the long history of the field, the gases of possible interest are so many, and the effort of obtaining good quality data so time-demanding, that an important contribution can still be made. As an example, the electron drift velocity at moderate field strengths (up to 50 Td) in pure isobutane (a tissue-equivalent gas) has been measured only recently by the IPEN-LIP collaboration using a dedicated setup. The transport parameters are derived from the recorded electric pulse induced by a swarm started with a pulsed laser shining on the cathode. To aid the data analysis, a special code has been developed to calculate the induced pulse by solving the electron continuity equation, including growth, drift, and diffusion. A realistic profile of the initial laser beam is taken into account, as well as the boundary conditions at the cathode and anode. The approach is either semi-analytic, based on the expression derived by P. H. Purdie and J. Fletcher, or fully numerical, using a finite difference scheme improved over the one introduced by J. de Urquijo et al. The agreement between the two will be demonstrated under typical conditions for the mentioned experimental setup. A brief discussion on the stability of the finite difference scheme will be given. The new finite difference scheme allows a detailed investigation of the importance of back diffusion to
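A minimal explicit finite-difference treatment of the 1D electron continuity equation (growth + drift + diffusion terms) conveys the fully numerical approach; the coefficients below are illustrative, not measured isobutane values, and the scheme is a plain upwind/central discretization rather than the paper's improved scheme:

```python
import numpy as np

# dn/dt = alpha*w*n - w*dn/dx + D*d2n/dx2: Townsend growth, drift toward
# the anode, and diffusion, advanced with an explicit finite-difference step.
def step(n, w, D, alpha, dx, dt):
    growth = alpha * w * n
    drift = -w * (n - np.roll(n, 1)) / dx                    # upwind (w > 0)
    diff = D * (np.roll(n, -1) - 2 * n + np.roll(n, 1)) / dx**2
    n_new = n + dt * (growth + drift + diff)
    n_new[0] = n_new[-1] = 0.0                               # absorbing electrodes
    return n_new

nx, dx = 200, 1e-4
n = np.exp(-((np.arange(nx) * dx - 5e-3) / 5e-4) ** 2)       # initial swarm
w, D, alpha = 1e4, 0.05, 100.0       # illustrative drift, diffusion, Townsend
dt = 0.2 * dx / w                    # respects the explicit stability limit
n0_total = n.sum()
for _ in range(200):
    n = step(n, w, D, alpha, dx, dt)
```

The time step is the delicate part: the explicit scheme is only stable when both the advective (CFL) and diffusive limits are respected, which is the stability question the abstract alludes to.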
Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan J [ORNL]; Qualls, A L [ORNL]
2008-01-01
The transient analysis three-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW (thermal) and 40 kW (net electrical) with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power is needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium, which is liquid at ambient temperature). This space reactor is intended to be deployed on the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed in early versions of the code. The code can also use 3-D volumes and 3-D junctions, allowing more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical HXs. The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant, at a maximum temperature of 877 K. The SE heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the flows of coolant and maximizing the heat transferred to the SE heads. The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the
Directory of Open Access Journals (Sweden)
Leonardo da Silva Boia
2014-03-01
Full Text Available Purpose: A computational system was developed for this paper in the C++ programming language to create a 125I radioactive seed entry file, based on the positioning of a virtual grid (template) in voxel geometries, with the purpose of performing prostate cancer treatment simulations using the MCNPX code. Methods: The system is fed with information from the planning system regarding each seed's location and depth, and an entry file is automatically created with all the cards (instructions) for each seed, with their cell blocks and surfaces spread out spatially in the 3D environment. The system precisely reproduces the clinical scenario in the MCNPX code's simulation environment, thereby allowing in-depth study of the technique. Results and Conclusion: In order to validate the computational system, an entry file was created with 88 125I seeds inserted in the prostate region of the MAX06 phantom, with an initial activity of 0.27 mCi per seed. Isodose curves were obtained in all the prostate slices in 5 mm steps over the 7 to 10 cm interval, totaling 7 slices. Variance reduction techniques were applied in order to optimize computational time and reduce uncertainties, such as photon and electron energy cutoffs at 4 keV and forced collisions in the cells of interest. The isodose curves obtained show that hot spots have values above 300 Gy, as anticipated in the literature, stressing the importance of correct positioning of the sources, which the computational system developed here ensures, so as not to deliver excessive doses to adjacent organs at risk. The validation process showed that the 144 Gy prescription curve covers a large percentage of the volume, at the same time that it demonstrates a large
Furlong, K. L.; Fearn, R. L.
1983-01-01
A method is proposed to combine a numerical description of a jet in a crossflow with a lifting surface panel code to calculate the jet/aerodynamic-surface interference effects on a V/STOL aircraft. An iterative technique is suggested that starts with a model for the properties of a jet/flat-plate configuration and modifies these properties based on the flow field calculated for the configuration of interest. The method would estimate the pressures, forces, and moments on an aircraft out of ground effect. A first-order approximation to the suggested method is developed and applied to two simple configurations. The first-order approximation is a noniterative procedure which does not allow for interactions between multiple jets in a crossflow and does not account for the influence of lifting surfaces on the jet properties. The jet/flat-plate model utilized in the examples presented is restricted to a uniform round jet injected perpendicularly into a uniform crossflow, for jet-to-crossflow velocity ratios from three to ten.
Einkemmer, Lukas
2016-05-01
The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
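The semi-Lagrangian principle itself can be sketched in one dimension: trace each grid point's characteristic back by a·dt and interpolate, which is why CFL numbers above one are admissible. (The paper's method uses a discontinuous Galerkin representation; the linear interpolation below is only the simplest stand-in.)

```python
import numpy as np

# One semi-Lagrangian step for the advection equation u_t + a u_x = 0 on a
# periodic grid: evaluate u at the departure points x - a*dt.
def semi_lagrangian_step(u, a, dx, dt):
    n = len(u)
    x = np.arange(n) * dx
    x_dep = (x - a * dt) % (n * dx)        # departure points (periodic wrap)
    xp = np.concatenate([x, [n * dx]])     # extend grid for periodic interp
    up = np.concatenate([u, [u[0]]])
    return np.interp(x_dep, xp, up)

n, dx = 128, 1.0 / 128
u = np.sin(2 * np.pi * np.arange(n) * dx)
a, dt = 1.0, 5 * dx                        # CFL number of 5 is no problem
u1 = semi_lagrangian_step(u, a, dx, dt)
```

Because the step only needs data near the departure points, a distributed implementation communicates only with a few neighboring subdomains per step, which is what makes the strong/weak scaling described above attainable.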
Mano, Omer
2017-01-01
Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system’s response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data. PMID:28068420
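The CPU-side computation being accelerated is, in essence, a covariance of stimulus windows weighted by spikes. A vectorized NumPy sketch with a synthetic stimulus and a toy threshold neuron (a GPU port, e.g. via CuPy, applies the same expressions to device arrays):

```python
import numpy as np

# Spike-triggered covariance: the covariance of the stimulus segments
# preceding spikes, centered on the spike-triggered average (STA).
rng = np.random.default_rng(1)
T, k = 20000, 10                        # time points, kernel length
stim = rng.standard_normal(T)
# Sliding windows of the k most recent stimulus values at each time
X = np.lib.stride_tricks.sliding_window_view(stim, k)
drive = X @ np.ones(k) / k              # toy neuron: driven by mean input
spikes = (drive > 1.0 / np.sqrt(k)).astype(float)
sta = (spikes @ X) / spikes.sum()       # spike-triggered average
Xc = X - sta                            # center windows on the STA
stc = (Xc.T * spikes) @ Xc / spikes.sum()
```

Eigenvectors of `stc` with eigenvalues departing from the raw stimulus variance identify the stimulus subspace the nonlinearity is sensitive to; it is this windowing-plus-matmul workload that maps naturally onto a GPU.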
Gertner, I.; Heber, O.; Zajfman, J.; Zajfman, D.; Rosner, B.
1989-01-01
Two different methods of analysis applicable to PIXE data are introduced and compared. In the first method, Gaussian-shaped peaks are fitted to the X-ray spectrum, and the complete analysis can be done on a microcomputer. The second is based on the Bayesian deconvolution method for simultaneous peak fitting and has to be carried out on a larger IBM computer. The advantage of the second method becomes evident in regions of poor statistics or where many overlapping peaks occur in the spectrum. Comparisons between the methods, made on PIXE measurements obtained from 55 amniotic fluid samples, gave satisfactory agreement.
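The first method's core operation is fitting a Gaussian-shaped peak (plus background) to a spectrum region. A sketch on synthetic data, using SciPy's curve_fit rather than the original microcomputer code:

```python
import numpy as np
from scipy.optimize import curve_fit

# Gaussian peak on a flat background; the data below are synthetic,
# not an actual X-ray spectrum.
def gauss(x, amp, mu, sigma, bg):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bg

x = np.arange(100, dtype=float)
rng = np.random.default_rng(2)
y = gauss(x, amp=120.0, mu=48.0, sigma=3.0, bg=10.0) + rng.normal(0, 2, x.size)
popt, pcov = curve_fit(gauss, x, y, p0=[100.0, 50.0, 4.0, 5.0])
area = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)  # peak area = total counts
```

The fitted peak area is what ultimately carries the elemental concentration information; the Bayesian deconvolution method becomes preferable once neighboring peaks overlap too strongly for independent fits like this one.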
Latorre, Jose I
2015-01-01
There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many-body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and are classified into three sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code, which is highly nonlocal, and finally we compute the topological entanglement entropy of the H-code.
Energy Technology Data Exchange (ETDEWEB)
Krylov, A.L.; Nossov, A.V.; Kisselev, V.P. [Nuclear Safety Institute of Russian Academy of Sciences, 52, B. Tulskaya, Moscow (Russian Federation)
2014-07-01
The Fukushima accident proved once more the necessity of computer codes for modelling the migration of radioactive substances in the marine environment. Radionuclides were discharged (and leaked) into the sea with contaminated waters and fell out from the atmosphere. Unfortunately, assessments of the radioactivity sources differ significantly; the uncertainty is significant both for the contamination that took place in the months following the disaster and for the leakages that took place in 2013. According to most researchers, in the spring of 2011 the most important sources of radioactive pollution of the sea were direct inflows of contaminated water. In the long term, due to contamination of river basins, the inflow of radioactivity with river waters may become the most significant source. Strontium, iodine, and cesium tend to migrate in seas in the dissolved state due to small values of K_d (the distribution factor between water and suspended sediments). However, the distribution factor of Cs in fresh water is high; thus it can be assumed that most of the cesium entering the sea with river flow will be sorbed on suspended particles, and sedimentation of the particles can lead to the development of contaminated areas of bottom sediments. Modelling the migration and transformation of radionuclides in water bodies is therefore an important radioecological problem. The three-dimensional dynamic computer code POMRad is a tool for the solution of this problem. It can be used to implement the full modelling cycle: hydrological modelling (computation of fields of currents and other important hydrological characteristics); sediment transport modelling (cohesive, non-cohesive, and 'hot particles' if necessary); and radioactivity transport modelling (taking into account decay, sorption, desorption, etc.). This article gives a brief description of the computer code and examples of its use for modelling the migration in the sea of radionuclides from the Fukushima Daiichi nuclear power plant (NPP). The base of POMRad is the
Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen
2015-11-01
Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains small, which limits the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design that increases the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI tasks were detected from EEG signals and mapped to specific commands. According to permutation theory, an sMI task of length N allows 2 × (2{sup N} - 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the average accuracy over the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control.
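The command count can be checked by direct enumeration: under self-paced conditions a sequence ends when "no motion" is detected, so every non-empty left/right sequence of length at most N is a valid code. A short sketch (the helper name `smi_commands` is illustrative, not from the paper) reproduces the 2 × (2^N - 1) formula:

```python
from itertools import product

def smi_commands(n):
    """Enumerate all non-empty left/right MI sequences of length <= n.

    Under self-paced conditions the end of a sequence is signalled by
    'no motion', so every length from 1 to n yields valid codes.
    """
    cmds = []
    for length in range(1, n + 1):
        cmds.extend(product("LR", repeat=length))
    return cmds

# The count matches the closed form 2 * (2**n - 1):
for n in range(1, 6):
    assert len(smi_commands(n)) == 2 * (2**n - 1)

# A six-class system, as in the paper, needs sequences of length up to 2:
print(len(smi_commands(2)))  # -> 6
```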
Energy Technology Data Exchange (ETDEWEB)
Lee, Y.J.; Dalpiaz, E.L. [ICF Kaiser Hanford Co., Richland, WA (United States)
1997-08-01
The computer code WTVFE (Waste Tank Ventilation Flow Evaluation) has been developed to evaluate the ventilation requirement of an underground storage tank for radioactive waste. Heat generated by the radioactive waste and the mixing pumps in the tank is removed mainly through the ventilation system. The heat removal process by the ventilation system includes the evaporation of water from the waste and heat transfer by natural convection from the waste surface. A portion of the heat is also removed through the soil and by the air circulating through the gap between the primary and secondary tanks. The heat loss caused by evaporation is modeled on recent evaporation test results obtained by the Westinghouse Hanford Company using a small-scale simulated waste tank. The other heat transfer phenomena are evaluated with well-established conduction and convection heat transfer relationships. 10 refs., 3 tabs.
Energy Technology Data Exchange (ETDEWEB)
Duda, L.E.
1985-01-01
The high temperatures of geothermal wells present severe problems for drilling, logging, and developing these reservoirs. Cooling the wellbore is perhaps the most common way to address these problems. However, it is usually not clear which wellbore cooling mechanism is most effective for a given well. In this paper, wellbore cooling by circulation or by fluid injection into the surrounding rock is investigated using a wellbore thermal simulator computer code. Short circulation times offer no prolonged cooling of fluid in the wellbore, but long circulation times (greater than ten or twenty days) greatly reduce the warming rate after shut-in. The dependence of the warming rate on the penetration distance of cooler temperatures into the rock formation (as by fluid injection) is investigated. Penetration distances of greater than 0.6 m appear to offer a substantial reduction in the warming rate. Several plots are shown which demonstrate these effects. 16 refs., 6 figs.
García-Jerez, Antonio; Piña-Flores, José; Sánchez-Sesma, Francisco J.; Luzón, Francisco; Perton, Mathieu
2016-12-01
For a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise (HVSRN) have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, several schemes for inversion of the full HVSRN curve for near-surface surveying have been developed over the last decade. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently established connection between the HVSRN and the elastodynamic Green's function which arises from ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the imaginary parts of the Green's functions by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves as well. The stability of the algorithm at high frequencies is preserved by means of an adaptation of Wang's orthonormalization method to the calculation of dispersion curves, surface-wave medium responses and contributions of body waves. This code has been combined with a variety of inversion methods to make up a powerful tool for passive seismic surveying.
Computer code for gas-liquid two-phase vortex motions: GLVM
Yeh, T. T.
1986-01-01
A computer program for studying the phase separation of gas and liquid at zero gravity induced by vortex motion has been developed. It uses an explicit solution method for a set of equations describing rotating gas-liquid flows. The vortex motion is established by tangential fluid injection. A two-step Lax-Wendroff (MacCormack's) numerical scheme is used. The program can be used to study the fluid-dynamical behavior of rotating two-phase fluids in a cylindrical tank. It provides a quick and easy sensitivity test of various parameters and thus provides guidance for the design and use of actual physical systems for handling two-phase fluids.
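The two-step (MacCormack) variant of the Lax-Wendroff scheme alternates a forward-difference predictor with a backward-difference corrector; averaging the two recovers second-order accuracy. A minimal sketch for 1-D linear advection with periodic boundaries (an illustration of the stencil only, not the actual GLVM rotating two-phase equations):

```python
def maccormack_advection(u, c, dx, dt, steps):
    """MacCormack (two-step Lax-Wendroff) scheme for u_t + c*u_x = 0.

    The predictor uses a forward difference, the corrector a backward
    difference on the predicted values; the average of the two gives
    second-order accuracy. Periodic boundaries keep the sketch
    self-contained.
    """
    n = len(u)
    r = c * dt / dx  # Courant number; |r| <= 1 for stability
    for _ in range(steps):
        # Predictor step (forward difference)
        up = [u[i] - r * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # Corrector step (backward difference on predicted values), averaged
        u = [0.5 * (u[i] + up[i] - r * (up[i] - up[(i - 1) % n]))
             for i in range(n)]
    return u

# With r = 1 the scheme shifts the profile exactly one cell per step:
out = maccormack_advection([0.0, 1.0, 0.0, 0.0], c=1.0, dx=1.0, dt=1.0, steps=1)
```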
Energy Technology Data Exchange (ETDEWEB)
Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Xu, Guang-Hua [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710054 (China)
2015-03-10
This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain-computer interface (BCI) systems. The goal of this study is to increase the number of targets using fewer high stimulation frequencies, while diminishing subject fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-based High-Frequency Steady-State Visual Evoked Potential (HFCC-SSVEP). Firstly, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the paradigm presented on an LED. The SNR (signal-to-noise ratio) of the response beyond 40 Hz is very low and cannot be distinguished by the traditional analysis method. Secondly, we investigated the HFCC-SSVEP response (beyond 25 Hz) for 3 frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n{sup n} codes with n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract the time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating 10 times) are used to overcome the shortcomings of the end effect and the stopping criterion, and generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, improving the information transfer rate (ITR) and the stability of the BCI system. What is more, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and help prevent the safety hazards linked to photo-induced epileptic seizures, ensuring that the system is both efficient and safe. This study tests three subjects in order to verify the feasibility of the proposed method.
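The n^n count arises because each target code is a sequence of n stimulation segments, every segment drawn from the same n frequencies. A minimal sketch (the helper name `hfcc_codes` is hypothetical, not from the paper):

```python
from itertools import product

def hfcc_codes(freqs):
    """Build combination codes from a small set of stimulation frequencies.

    Each target is coded by a sequence of len(freqs) stimulation segments,
    every segment taking any of the given frequencies, so n frequencies
    yield n**n distinct codes.
    """
    n = len(freqs)
    return list(product(freqs, repeat=n))

# The three frequencies used in the study give 3**3 = 27 possible targets:
codes = hfcc_codes([25.0, 33.33, 40.0])
print(len(codes))  # -> 27
```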
Energy Technology Data Exchange (ETDEWEB)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.; Neymotin, L.Y.
1998-03-01
This document is the User`s Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code`s capabilities and limitations; Chapter 2 describes the code`s structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for both. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), a listing of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
Burkett, B.; Sheridan, M. F.
2007-05-01
On November 3, 2002, El Reventador volcano, located on the eastern flank of the Ecuadorian Andes, produced a sudden, violent eruption culminating in a 17 km high column containing mostly steam and ash. Explosions in the initial phase created a summit crater while generating four lithic-rich andesitic pyroclastic flows. The longest of these flows traveled ESE out of the breached caldera, obliquely overriding the 200-400 m southern caldera wall, reaching the Quijos River 8 km distant. This flow crossed the major oil pipelines of Ecuador, displacing a pressurized crude oil pipeline more than 100 m. The flows contained mostly lithic fragments with only minor juvenile pumice. The accompanying ash-cloud surge deposited a thin layer on top of the PF deposit, indicating an abundance of gas within the flow. The eruption came with practically no warning and yet had a large socio-economic impact for Ecuador. While the flows themselves resulted in no loss of life, the lack of significant precursor activity underscores the necessity for detailed pre-eruption knowledge of the potential hazards and risk zones around a particular volcano so as to be prepared in the event of such "surprise" eruptions. In conjunction with field mapping, computer models of volcanogenic flows can be used not only to identify risk zones but to understand the evolution of these flows. A new set of computer simulations using the TITAN (www.gmfg.buffalo.edu) thin-layer code allows a more complete exploration of important flow properties associated with this type of eruption. Realizations of this code simulate the path, extent, flow thickness, velocity, and momentum of the flows given a set of initial conditions (volume, starting location, flux hydrograph, internal friction, and basal friction). The TITAN code was used to simulate the four lithic-rich pyroclastic flows generated at the beginning of the 2002 eruption. Using field estimated volumes and starting positions of the PFs, simulations of the two
Directory of Open Access Journals (Sweden)
Taewan Kim
2012-01-01
In order to assess the accuracy and validity of subchannel, system, and computational fluid dynamics codes, the Paul Scherrer Institut has participated in the OECD/NRC PSBT benchmark with the thermal-hydraulic system code TRACE5.0 developed by the US NRC, the subchannel code FLICA4 developed by CEA, and the computational fluid dynamics code STAR-CD developed by CD-adapco. The PSBT benchmark consists of a series of void distribution exercises and departure from nucleate boiling exercises. The results reveal that the predictions of the subchannel code FLICA4 agree with the experimental data reasonably well in both steady-state and transient conditions. The analyses of the single-subchannel experiments by means of the computational fluid dynamics code STAR-CD with the CD-adapco boiling model indicate that the predicted void fraction has no significant discrepancy from the experiments. The analyses with TRACE point out the necessity of performing additional assessment of the subcooled boiling model and bulk condensation model of TRACE.
Energy Technology Data Exchange (ETDEWEB)
Keney, G.S.
1981-08-01
A computer code has been written to calculate neutron induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV.
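The post-shutdown step underlying any activation-dose calculation of this kind is the exponential decay law A(t) = A0·exp(-ln 2 · t / T_half) applied per activated nuclide. A minimal sketch of that step (an illustration only, not ACDOS1's actual FORTRAN, whose internals are not given here; the inventory values are hypothetical):

```python
import math

def activity_after_shutdown(inventory, t):
    """Decay each nuclide's shutdown activity A0 (Bq) to time t (s).

    inventory maps nuclide name -> (A0, half_life_s); each activity
    follows A(t) = A0 * exp(-ln(2) * t / T_half).
    """
    return {nuc: a0 * math.exp(-math.log(2.0) * t / t_half)
            for nuc, (a0, t_half) in inventory.items()}

# Illustrative inventory: after one Co-60 half-life (~1.663e8 s),
# the Co-60 activity has dropped by half.
decayed = activity_after_shutdown({"Co-60": (1.0e9, 1.663e8)}, 1.663e8)
```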
Apolux : an innovative computer code for daylight design and analysis in architecture and urbanism
Energy Technology Data Exchange (ETDEWEB)
Claro, A.; Pereira, F.O.R.; Ledo, R.Z. [Santa Catarina Federal Univ., Florianopolis, SC (Brazil)
2005-07-01
The main capabilities of a new computer program for calculating and analyzing daylighting in architectural spaces were discussed. Apolux 1.0 was designed to use three-dimensional files generated in graphic editors in the data exchange file (DXF) format and was developed to integrate with an architect's design process. An example of its use in a design context was presented. The program offers fast and flexible manipulation of the models under different visualization conditions. The light-physics algorithm is based on the radiosity method, representing the surfaces through finite elements divided into small triangular units of area, each of which is related to all the others. The form factors of each triangle with respect to all the others are determined in the primary calculation. Visible directions of the sky are also included, according to the modular units of a subdivided globe. Following these primary calculations, different successive daylighting solutions can be determined under different sky conditions. The program can also change the properties of the materials and quickly recalculate the solutions. The program has been applied to an office building in Florianopolis, Brazil. The four stages of the design application were: initial discussion with the architects about the conceptual possibilities; development of a comparative study of two architectural designs with different conceptual elements regarding daylighting exploitation, in order to compare the internal daylighting levels and distribution of the two options under the same external conditions; study of solar shading devices for specific facades; and simulations to test the performance of different designs. The program has proven to be very flexible, with reliable results, and offers the possibility of incorporating real-sky situations through the input of luminance values on a spherical sky model. 3 refs., 14 figs.
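Once the form factors F_ij between patches are known, the radiosity method solves B_i = E_i + rho_i * sum_j F_ij * B_j for the patch radiosities B. A minimal fixed-point sketch (a hypothetical helper, not Apolux's implementation):

```python
def solve_radiosity(emission, reflectance, form_factors, iters=200):
    """Iteratively solve B_i = E_i + rho_i * sum_j F_ij * B_j.

    form_factors[i][j] is the fraction of energy leaving patch i that
    reaches patch j; rows should sum to at most 1 for a closed scene,
    which guarantees convergence of the fixed-point iteration.
    """
    n = len(emission)
    b = list(emission)
    for _ in range(iters):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two facing patches that each see only the other:
b = solve_radiosity(emission=[1.0, 0.0],
                    reflectance=[0.5, 0.5],
                    form_factors=[[0.0, 1.0], [1.0, 0.0]])
```

For this two-patch case the fixed point is B = (4/3, 2/3), which the iteration reaches to machine precision.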
Ramamoorthy, Karthikeyan
The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is DRAGON Version 4. The code has unique features such as self-shielding calculation with the capability to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), and burnup calculation with reaction-detailed energy production. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design requires a thorough analysis of all the safety parameters and of the burnup-dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance, and a compromise at any step will lead to a poor final result. The various levels include: choice of the nuclear data library and of the energy group boundaries into which the multigroup library is cast; self-shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of the geometry, keeping the error in volumes and surfaces to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information to obtain the fluxes and reaction rates in the various regions of the lattice; and depletion of fuel and other materials based on normalization with constant power or constant flux. Of the above-mentioned levels, the present research mainly focuses on two aspects, namely self-shielding and depletion. The behaviour of the system is determined by the composition of resonant
Energy Technology Data Exchange (ETDEWEB)
Saurwein, J.J.
1977-08-01
A system of computer codes has been developed to statistically reduce Peach Bottom fuel test element metrology data and to compare the material strains and fuel rod-fuel hole gaps computed from these data with HTGR design code predictions. The codes included in this system are STAT, STRAIN, GAPS, and DRWDIM. STAT statistically evaluates test element metrology data yielding fuel rod, fuel body, and sleeve irradiation-induced strains; fuel rod anisotropy; and additional data characterizing each analyzed fuel element. STRAIN compares test element fuel rod and fuel body irradiation-induced strains computed from metrology data with the corresponding design code predictions. GAPS compares test element fuel rod, fuel hole heat transfer gaps computed from metrology data with the corresponding design code predictions. DRWDIM plots the measured and predicted gaps and strains. Although specifically developed to expedite the analysis of Peach Bottom fuel test elements, this system can be applied, without extensive modification, to the analysis of Fort St. Vrain or other HTGR-type fuel test elements.
Holmes, Shawn Yvette
A simulation was created to emulate two Racial Ethical Sensitivity Test (REST) videos (Brabeck et al., 2000). The REST is a reliable assessment of ethical sensitivity to racial and gender intolerant behaviors in educational settings. Quantitative and qualitative analysis of the REST was performed using the Quick-REST survey and an interview protocol. The purpose of this study was to improve science educators' ability to recognize instances of racial and gender intolerant behaviors by leveraging the immersive qualities of simulations. The fictitious Hazelton High School virtual environment was created by the researcher and compared with the traditional REST. The study investigated whether computer simulations can influence the ethical sensitivity of preservice and inservice science teachers to racial and gender intolerant behaviors in school settings. The posttest-only research design involved 32 third-year science education students enrolled in science education classes at several southeastern universities and 31 science teachers from the same locale, some of whom were part of an NSF project. Participant samples were assigned to the video control group or the simulation experimental group. This resulted in four comparison groups: preservice video, preservice simulation, inservice video and inservice simulation. Participants experienced two REST scenarios in the appropriate format and then responded to Quick-REST survey questions for both scenarios. Additionally, the simulation groups answered in-simulation and post-simulation questions. Nonparametric analysis of the Quick-REST ascertained differences between the comparison groups. Cronbach's alpha was calculated for internal consistency. The REST interview protocol was used to analyze recognition of intolerant behaviors in the in-simulation prompts. Post-simulation prompts were analyzed for emergent themes concerning the effect of the simulation on responses. The preservice video group had a significantly higher mean rank score than
Energy Technology Data Exchange (ETDEWEB)
Bellido, Luis F.
1995-07-01
A computer code to calculate the projectile energy degradation along a target stack was developed for an IBM or compatible personal microcomputer. A comparison of protons and deuterons bombarding uranium and aluminium targets was made. The results showed that the data obtained with TRANGE were in agreement with other computer codes such as TRIM and EDP, as well as with the Williamson and Janni range and stopping-power tables. TRANGE can be used for any charged ion, for energies between 1 and 100 MeV, in metal foils and solid compound targets. (author). 8 refs., 2 tabs.
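Energy degradation along a target stack is, in essence, a numerical integration of the stopping power dE/dx through successive foils. A minimal sketch of that loop (an illustration of the principle only, not TRANGE's actual algorithm; the stopping-power function is supplied by the caller):

```python
def degrade_energy(e0, stack, stopping_power, n_steps=1000):
    """Follow the projectile energy through a stack of target foils.

    stack is a list of (material, thickness) pairs; stopping_power(material, E)
    returns dE/dx for that material (illustrative units). Each foil is
    integrated in small slices, and the particle stops if E reaches zero.
    """
    e = e0
    for material, thickness in stack:
        dx = thickness / n_steps
        for _ in range(n_steps):
            e -= stopping_power(material, e) * dx
            if e <= 0.0:
                return 0.0  # projectile stopped inside the stack
    return e
```

With a constant stopping power of 1 energy unit per length unit, a 10-unit projectile leaving a 3-unit-thick foil exits with 7 units, which the slice-by-slice loop reproduces.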
Roh, Young Sook; Kim, Sang Suk
2014-01-01
Computer-based simulation has intuitive appeal to both educators and learners, offering flexibility of time and place, immediate feedback, and a self-paced and consistent curriculum. The purpose of this study was to assess the effects of computer-based simulation on nursing students' performance, self-efficacy, post-code stress, and satisfaction, comparing a computer-based simulation plus instructor-led cardiopulmonary resuscitation training group with an instructor-led resuscitation training-only group. This study used a nonequivalent control group posttest-only design. There were 213 second-year nursing students randomly assigned to one of two groups: 109 nursing students in the computer-based simulation group and 104 in the control group. The overall performance score was higher in the computer-based simulation group than in the control group but did not reach statistical significance (t = 1.086, p = .283). There were no significant differences in resuscitation-specific self-efficacy, post-code stress, or satisfaction between the two groups. Computer-based simulation combined with hands-on practice did not affect nursing students' performance, self-efficacy, post-code stress, or satisfaction. Further study must be conducted to inform instructional design and help integrate computer-based simulation and rigorous scoring rubrics.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a
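The weighted least-squares objective and the Gauss-Newton update that UCODE-class codes rely on can be sketched for a single parameter with finite-difference sensitivities (the helper names and the one-parameter restriction are illustrative simplifications, not UCODE_2005's implementation):

```python
def weighted_ssr(b, model, obs, weights):
    """Weighted least-squares objective: sum_i w_i * (y_i - y_sim_i(b))**2."""
    sim = model(b)
    return sum(w * (y - s) ** 2 for w, y, s in zip(weights, obs, sim))

def gauss_newton_1p(b, model, obs, weights, iters=50, db=1e-6):
    """One-parameter Gauss-Newton step with finite-difference sensitivities.

    The sensitivity of each simulated equivalent to the parameter is
    approximated by a forward difference; the normal-equations update
    for a single parameter reduces to a weighted scalar quotient.
    """
    for _ in range(iters):
        sim = model(b)
        sens = [(sp - s) / db for sp, s in zip(model(b + db), sim)]
        num = sum(w * x * (y - s)
                  for w, x, y, s in zip(weights, sens, obs, sim))
        den = sum(w * x * x for w, x in zip(weights, sens))
        b += num / den
    return b
```

For a model that is linear in the parameter, such as y = b*t, a single Gauss-Newton step recovers the least-squares estimate exactly.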
Energy Technology Data Exchange (ETDEWEB)
Stevens, E.J.; McNeilly, G.S.
1994-03-01
The existing National Center for Atmospheric Research (NCAR) code in the Hamburg Oceanic Carbon Cycle Circulation Model and the Hamburg Large-Scale Geostrophic Ocean General Circulation Model was modernized and reduced in size while still producing equivalent results. The reduction from more than 50,000 lines in the existing code to approximately 7,500 lines in the new code has made the new code much easier to maintain. The existing code in the Hamburg model uses legacy NCAR graphics (including even emulated CALCOMP subroutines) to display graphical output. The new code uses only current (version 3.1) NCAR subroutines.
Litsarev, Mikhail S.
2013-02-01
A description of the DEPOSIT computer code is presented. The code is intended to calculate total and m-fold electron-loss cross-sections (m is the number of ionized electrons) and the energy T(b) deposited to the projectile (positive or negative ion) during a collision with a neutral atom at low and intermediate collision energies as a function of the impact parameter b. The deposited energy is calculated as a 3D integral over the projectile coordinate space in the classical energy-deposition model. Examples of the calculated deposited energies, ionization probabilities and electron-loss cross-sections are given, as well as a description of the input and output data. Program summary. Program title: DEPOSIT. Catalogue identifier: AENP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENP_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 8726. No. of bytes in distributed program, including test data, etc.: 126650. Distribution format: tar.gz. Programming language: C++. Computer: Any computer that can run a C++ compiler. Operating system: Any operating system that can run C++. Has the code been vectorised or parallelized?: An MPI version is included in the distribution. Classification: 2.4, 2.6, 4.10, 4.11. Nature of problem: For a given impact parameter b, to calculate the deposited energy T(b) as a 3D integral over coordinate space, and the ionization probabilities Pm(b); for a given energy, to calculate the total and m-fold electron-loss cross-sections using the T(b) values. Solution method: Direct calculation of the 3D integral T(b). A one-dimensional quadrature formula of the highest accuracy, based upon the nodes of the Jacobi polynomials, is applied for the cos θ = x ∈ [-1,1] angular variable. The Simpson rule is used for the φ ∈ [0,2π] angular variable. The Newton-Cotes pattern of the seventh order
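The Simpson rule named for the φ ∈ [0, 2π] variable is the standard composite formula; a minimal self-contained sketch (illustrative only, not DEPOSIT's C++ implementation):

```python
import math

def simpson(f, a, b, n=64):
    """Composite Simpson rule on [a, b]; n (number of slices) must be even.

    Interior odd-index nodes get weight 4, even-index nodes weight 2,
    and the endpoints weight 1, all scaled by h/3.
    """
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

# A smooth periodic integrand over the azimuthal angle:
# the exact value of the integral of cos^2 over [0, 2*pi] is pi.
val = simpson(lambda phi: math.cos(phi) ** 2, 0.0, 2.0 * math.pi)
```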
Directory of Open Access Journals (Sweden)
Pešić Milan P.
2012-01-01
A numerical simulation of the radiological consequences of the RB reactor reactivity excursion accident, which occurred on October 15, 1958, and an estimation of the total doses received by the operators, were carried out with the MCNP5 computer code. The simulation was performed under the same assumptions as those used in the 1960 IAEA-organized experimental simulation of the accident: a total fission energy of 80 MJ released in the accident and the frozen positions of the operators. The time interval of exposure to the high doses received by the operators has been estimated. Data on the RB1/1958 reactor core relevant to the accident are given. A short summary of the accident scenario has been updated. A 3-D model of the reactor room and the RB reactor tank, with all the details of the core, was created. For dose determination, simplified, homogenised, sexless and faceless 3-D phantoms, placed inside the reactor room, were developed. The code was run for a number of neutron histories giving a dose rate uncertainty of less than 2%. For the determination of the radiation spectra escaping the reactor core and of the radiation interaction in the tissue of the phantoms, the MCNP5 code was run (in the KCODE option and “mode n p e”) with 55-group neutron spectra, 35-group gamma-ray spectra and 10-group electron spectra. The doses were determined by converting the flux density in the phantoms (obtained by the F4 tally) to doses using factors taken from ICRP-74, and from the energy deposited by neutrons and gamma rays in the phantoms' tissue (obtained by the F6 tally). The time at which the operators sensed the odour of ozone is roughly estimated for the first time and given in Appendix A.1. The calculated total absorbed and equivalent doses are compared to previously reported ones, and an attempt has been made to understand and explain the reasons for the obtained differences. A Root Cause Analysis of the accident was done and
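The flux-to-dose conversion described above is, at bottom, a group-wise fold of the tallied flux density with tabulated conversion coefficients. A minimal sketch of that fold (illustrative values and helper name, not the actual MCNP5 tally processing or the ICRP-74 tables):

```python
def dose_rate(group_flux, conversion_factors):
    """Fold a multigroup flux density with flux-to-dose conversion factors.

    Both lists are ordered by energy group; the dose rate is the
    group-wise product summed over all groups, the same operation used
    to convert an F4 tally with tabulated coefficients.
    """
    assert len(group_flux) == len(conversion_factors)
    return sum(f * c for f, c in zip(group_flux, conversion_factors))

# Two-group toy example (arbitrary units):
d = dose_rate([1.0, 2.0], [0.5, 0.25])
```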
Energy Technology Data Exchange (ETDEWEB)
Vittitoe, C.N.
1981-04-01
The FORTRAN IV computer code FIDELE simulates the high-frequency electrical logging of a well in which induction and receiving coils are mounted in an instrument sonde immersed in a drilling fluid. The fluid invades layers of surrounding rock in an azimuthally symmetric pattern, superimposing radial layering upon the horizontally layered earth. Maxwell's equations are reduced to a second-order elliptic differential equation for the azimuthal electric-field intensity. The equation is solved at each spatial position where the complex dielectric constant, magnetic permeability, and electrical conductivity have been assigned. Receiver response is given as the complex open-circuit voltage on the receiver coils. The logging operation is simulated by a succession of such solutions as the sonde traverses the borehole. Test problems verify consistency with available results for simple geometries. The code's main advantage is its treatment of a two-dimensional earth; its chief disadvantage is the large computer time required for typical problems. Possible code improvements are noted. Use of the computer code is outlined, and tests of most code features are presented.
Energy Technology Data Exchange (ETDEWEB)
Pavlovichev, A.M.
2001-06-19
The report presents calculation results for the isotopic composition of irradiated fuel performed for the Quad Cities-1 reactor bundle with UO{sub 2} and MOX fuel. The MCU-REA code, developed at the Kurchatov Institute, Russia, was used for the calculations. The MCU-REA results are compared with the experimental data and with HELIOS code results.
Energy Technology Data Exchange (ETDEWEB)
Campioni, Guillaume; Mounier, Claude [Commissariat a l' Energie Atomique, CEA, 31-33, rue de la Federation, 75752 Paris cedex (France)
2006-07-01
The main goal of this thesis on cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside the reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but make geometrical description problematic. On the other hand, Monte Carlo codes can compute on precise geometry, but need computation times so large that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone within the complete reactor geometry. By recording the boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time makes it possible to carry out parametric studies with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas for condensed-matter research makes it possible to obtain the fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed-matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides, at the samples of matter. This complete calculation scheme is tested against ILL4 measurements of the flux in cold neutron guides. (authors)
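The uncoupling idea can be illustrated with a toy one-dimensional random walk: transport particles through the "full geometry" once, bank the states of those that cross the study-zone boundary, and reuse the bank for parametric variants without repeating the expensive stage. This is a schematic illustration only, not TRIPOLI-4.4's actual surface-source mechanism:

```python
import random

def record_boundary(n_histories, boundary, seed=7):
    """Stage 1: walk particles through [0, boundary) and bank crossings.

    Each walker starts at x = 0.5 and takes steps with a net drift
    toward the boundary; walkers that leak backwards (x < 0) are lost,
    and the positions of those reaching x >= boundary are banked as
    the recorded 'incoming flux' for later reuse.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    bank = []
    for _ in range(n_histories):
        x = 0.5
        while 0.0 <= x < boundary:
            x += rng.uniform(-0.4, 1.0)
        if x >= boundary:
            bank.append(x)
    return bank

# Stage 2 of a parametric study would restart histories from `bank`
# instead of re-running the full geometry for every variant.
bank = record_boundary(200, 5.0)
```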
Energy Technology Data Exchange (ETDEWEB)
Kress, T. S. [comp.
1985-04-01
The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the current status of source term modeling.
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
KINGMA, J; TENVERGERT, E; KLASEN, HJ
1994-01-01
SHOWICD is an interactive computer program designed to document severity of injury from the ICD-9-CM-coded injury diagnoses of a particular patient. Two severity-of-injury scores [the Abbreviated Injury Scale (AIS) and the Injury Severity Score (ISS)] are used. By employing the AIS scores, the severi
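The abstract is truncated before it describes the scoring, but the ISS is conventionally defined as the sum of the squares of the three highest AIS severities, each taken from a different body region, with any AIS of 6 (unsurvivable) setting the ISS to the maximum of 75. A minimal sketch of that convention (SHOWICD's own mapping from ICD-9-CM codes to AIS is not reproduced here):

```python
def injury_severity_score(region_ais):
    """Compute the ISS from per-region AIS severities.

    region_ais: dict mapping ISS body region -> highest AIS severity (0-6)
    found in that region. By convention, any AIS of 6 forces ISS = 75.
    """
    scores = sorted(region_ais.values(), reverse=True)
    if any(s == 6 for s in scores):
        return 75
    # Sum of squares of the three most severe regions.
    return sum(s * s for s in scores[:3])
```

For example, a patient with AIS 4 (head), 3 (chest), 2 (abdomen), and 1 (extremities) scores 16 + 9 + 4 = 29.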
Wai, J. C.; Blom, G.; Yoshihara, H.; Chaussee, D.
1986-01-01
The NASA/Ames parabolized Navier-Stokes computer code was used to calculate the turbulent flow over the wing/fuselage of a generic fighter at M = 2.2, 18 deg angle of attack, and 0 and 5 deg yaw. Good test/theory agreement was achieved in the zero-yaw case. No test data were available for the yaw case.
Energy Technology Data Exchange (ETDEWEB)
Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)
1996-08-01
The objective of this project is to develop a methodology for dynamic reliability analysis of nuclear power plants (NPPs). The first year's research focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objectives of the third year's research are to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of dynamic reliability analysis for nuclear power plants. The analysis of component failure data and the related research supporting the simulator must come first, to provide proper input to the simulator. This research is therefore divided into three major parts: 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: accelerated simulation, an analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)
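A simulator for series-parallel structures rests on two standard identities: a series structure survives only if all components survive, and a parallel structure fails only if all components fail. A minimal sketch (the exponential lifetime model below is illustrative, not necessarily the distribution family estimated in the project):

```python
import math

def series(reliabilities):
    # A series structure works only if every component works.
    p = 1.0
    for r in reliabilities:
        p *= r
    return p

def parallel(reliabilities):
    # A parallel structure fails only if every component fails.
    q = 1.0
    for r in reliabilities:
        q *= (1.0 - r)
    return 1.0 - q

def exp_reliability(lam, t):
    # Survival probability at mission time t for an exponential lifetime.
    return math.exp(-lam * t)
```

For example, two redundant pumps of reliability 0.9 in series with a valve of reliability 0.95 give (1 - 0.1^2) * 0.95 = 0.9405.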
Energy Technology Data Exchange (ETDEWEB)
Glaser, R.
1996-02-06
A methodology is presented that allows the calculation of the probability that any of a particular collection of structures will be hit by an aircraft in a take-off or landing related accident during a specified window of time with a velocity exceeding a given critical value. A probabilistic model is developed that incorporates the location of each structure relative to airport runways in the vicinity; the size of the structure; the sizes, types, and frequency of use of commercial, military, and general aviation aircraft which take off and land at these runways; the relative frequency of take-off and landing related accidents by aircraft type; the stochastic properties of off-runway crashes, namely impact location, impact angle, impact velocity, and the heading, deceleration, and skid distance after impact; and the stochastic properties of runway overruns and runoffs, namely the position at which the aircraft exits the runway, its exit velocity, and the heading and deceleration after exiting. Relevant probability distributions are fitted from extensive commercial, military, and general aviation accident report databases. The computer source code for implementation of the calculation is provided.
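The report fits its distributions from accident databases; the sketch below only illustrates the overall shape of such a calculation, with a Monte Carlo estimate of the per-crash hit probability folded into a Poisson model of accident occurrences. The interfaces and the Poisson assumption are illustrative, not taken from the report.

```python
import math
import random

def hit_probability(n_trials, crash_rate_per_year, window_years,
                    sample_impact, in_footprint, v_crit, seed=1):
    """Estimate P(structure is hit with impact velocity > v_crit in the window).

    sample_impact(rng) -> (x, y, v): impact point and velocity drawn from the
    fitted off-runway crash distributions (hypothetical interface).
    in_footprint(x, y) -> True if the point lies on the structure.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        x, y, v = sample_impact(rng)
        if in_footprint(x, y) and v > v_crit:
            hits += 1
    p_hit = hits / n_trials
    # Treat accidents as a Poisson process; probability of at least one
    # damaging hit during the time window:
    lam = crash_rate_per_year * window_years * p_hit
    return 1.0 - math.exp(-lam)
```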
Energy Technology Data Exchange (ETDEWEB)
Scherpelz, R. I.; Borst, F. J.; Hoenes, G. R.
1980-12-01
WRAITH is a FORTRAN computer code which calculates the doses received by a standard man exposed to an accidental release of radioactive material. The movement of the released material through the atmosphere is calculated using a bivariate straight-line Gaussian distribution model, with Pasquill values for standard deviations. The quantity of material in the released cloud is modified during its transit time to account for radioactive decay and daughter production. External doses due to exposure to the cloud can be calculated using a semi-infinite cloud approximation. In situations where the semi-infinite cloud approximation is not a good one, the external dose can be calculated by a "finite plume" three-dimensional point-kernel numerical integration technique. Internal doses due to acute inhalation are calculated using the ICRP Task Group Lung Model and a four-segmented gastro-intestinal tract model. Translocation of the material between body compartments and retention in the body compartments are calculated using multiple exponential retention functions. Internal doses to each organ are calculated as sums of cross-organ doses, with each target organ irradiated by radioactive material in a number of source organs. All doses are calculated in rads, with separate values determined for high-LET and low-LET radiation.
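The straight-line Gaussian plume model referred to here is commonly written, with ground reflection, as chi = Q / (2 pi u sigma_y sigma_z) * exp(-y^2 / 2 sigma_y^2) * [exp(-(z-H)^2 / 2 sigma_z^2) + exp(-(z+H)^2 / 2 sigma_z^2)]. A minimal sketch of that textbook form follows; the Pasquill sigma_y and sigma_z evaluated at the downwind distance of interest are inputs, and WRAITH's actual implementation details are not given in the abstract.

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume air concentration with ground reflection.

    Q: release rate, u: wind speed, H: effective release height,
    y, z: crosswind and vertical receptor coordinates,
    sigma_y, sigma_z: Pasquill dispersion parameters at the downwind distance.
    """
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

On the plume centerline at ground level for a ground release (y = z = H = 0), this reduces to Q / (pi * u * sigma_y * sigma_z).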
García-Jerez, Antonio; Sánchez-Sesma, Francisco J; Luzón, Francisco; Perton, Mathieu
2016-01-01
For a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise (HVSRN) have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, several schemes for inversion of the full HVSRN curve for near-surface surveying have been developed over the last decade. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently stated connection between the HVSRN and the elastodynamic Green's function which arises from ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the imaginary parts of the Green's functions by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves as well. The stability of the algorithm at high frequencies is preserv...
Energy Technology Data Exchange (ETDEWEB)
Jan, S. [CEA Direction des Sciences du Vivant, Institut d'Imagerie Bio-Medicale, Service Hospitalier Frederic Joliot, 4 pl. du Gn. Leclerc 91401 Orsay Cedex (France)
2010-07-01
The author presents the GATE code, a simulation toolkit based on the Geant4 development environment of CERN (the European Organization for Nuclear Research), which enables Monte Carlo simulations to be developed for tomographic imaging using ionizing radiation and for simulating radiotherapy treatments (conventional and hadron therapy). The author concentrates on the use of medical imaging in oncology and comments on some results obtained in nuclear imaging and in radiotherapy.
Abraham, Nikhil
2015-01-01
Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill
Choo, Y. K.; Staiger, P. J.
1982-01-01
The code was designed to analyze performance at valves-wide-open design flow. The code can model conventional steam cycles as well as cycles that include such special features as process steam extraction and induction and feedwater heating by external heat sources. Convenience features and extensions to the special features were incorporated into the PRESTO code. The features are described, and detailed examples illustrating the use of both the original and the special features are given.
Burge, Johannes
2017-01-01
Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and
Burscher, B.; Odijk, D.; Vliegenthart, R.; de Rijke, M.; de Vreese, C.H.
2014-01-01
We explore the application of supervised machine learning (SML) to frame coding. By automating the coding of frames in news, SML facilitates the incorporation of large-scale content analysis into framing research, even if financial resources are scarce. This furthers a more integrated investigation
SOC-DS computer code provides tool for design evaluation of homogeneous two-material nuclear shield
Disney, R. K.; Ricks, L. O.
1967-01-01
The SOC-DS code (Shield Optimization Code - Direct Search) selects a nuclear shield material of optimum volume, weight, or cost to meet the requirements of a given radiation dose rate or energy transmission constraint. It is applicable to evaluating neutron and gamma-ray shields for all nuclear reactors.
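The abstract gives only the code's purpose. The sketch below illustrates the general idea of a direct search over candidate materials, with simple exponential attenuation standing in for the real transport calculation; the attenuation model, material data, and interface are illustrative only, not SOC-DS internals.

```python
import math

def min_thickness(D0, mu, dose_limit):
    # Attenuation D(t) = D0 * exp(-mu * t); smallest t meeting the limit.
    return max(0.0, math.log(D0 / dose_limit) / mu)

def best_material(D0, dose_limit, materials, objective="weight"):
    """Direct search over candidate materials (hypothetical data layout).

    materials: {name: {"mu": attenuation coeff [1/cm],
                       "density": [g/cm^3], "cost": [$/g]}}
    Returns (name, thickness, objective_value) minimizing areal weight
    or cost while meeting the dose-rate constraint.
    """
    best = None
    for name, m in materials.items():
        t = min_thickness(D0, m["mu"], dose_limit)
        weight = t * m["density"]  # grams per cm^2 of shield area
        value = weight if objective == "weight" else weight * m["cost"]
        if best is None or value < best[2]:
            best = (name, t, value)
    return best
```

Note that the thinnest shield is not necessarily the lightest or cheapest, which is exactly why an optimization search is needed.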
Directory of Open Access Journals (Sweden)
Almeida Jonas S
2006-03-01
Full Text Available Abstract Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web
Energy Technology Data Exchange (ETDEWEB)
Kurtz, S.E.; Fields, D.E.
1983-10-01
This report describes a version of the TERPED/P computer code that is very useful for small data sets. A new algorithm for determining the Kolmogorov-Smirnov (KS) statistics is used to extend program applicability. The TERPED/P code facilitates the analysis of experimental data and assists the user in determining its probability distribution function. Graphical and numerical tests are performed interactively in accordance with the user's assumption of normally or log-normally distributed data. Statistical analysis options include computation of the chi-square statistic and the KS one-sample test statistic and the corresponding significance levels. Cumulative probability plots of the user's data are generated either via a local graphics terminal, a local line printer or character-oriented terminal, or a remote high-resolution graphics device such as the FR80 film plotter or the Calcomp paper plotter. Several useful computer methodologies suffer from limitations of their implementations of the KS nonparametric test. This test is one of the more powerful analysis tools for examining the validity of an assumption about the probability distribution of a set of data. KS algorithms are found in other analysis codes, including the Statistical Analysis Subroutine (SAS) package and earlier versions of TERPED. The inability of these algorithms to generate significance levels for sample sizes less than 50 has limited their usefulness. The release of the TERPED code described herein contains algorithms that allow computation of the KS statistic and significance level for data sets of as few as three points, if the user wishes. Values computed for the KS statistic are within 3% of the correct value for all data set sizes.
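The one-sample KS statistic itself is straightforward to compute for any sample size; the hard part handled by TERPED/P is the small-sample significance level, which is not reproduced here. A minimal sketch of the statistic D_n = sup |F_n(x) - F(x)| against a normal hypothesis:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # CDF of the normal distribution via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic D_n = sup |F_n(x) - F(x)|.

    The supremum is attained just before or at a data point, so it suffices
    to compare the hypothesized CDF with the empirical CDF steps i/n and
    (i+1)/n at each sorted observation.
    """
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d
```

This works for a data set of three points, exactly the regime the report targets; converting D_n to a significance level at such sizes requires the small-sample distribution tables or algorithms the report describes.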
Institute of Scientific and Technical Information of China (English)
2008-01-01
Quantum error correcting codes are indispensable for quantum information processing and quantum computation. In 1995 and 1996, Shor and Steane gave the first several examples of quantum codes constructed from classical error correcting codes. The construction of efficient quantum codes is now an active multi-disciplinary research field. In this paper we review several known constructions of quantum codes and present some examples.
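The Shor and Steane constructions referenced here are CSS codes: a pair of classical parity-check matrices H_X and H_Z yields commuting X- and Z-type stabilizers exactly when H_X H_Z^T = 0 over GF(2). A minimal sketch of that compatibility check, using the [7,4] Hamming parity-check matrix that generates the Steane code (taking H_X = H_Z):

```python
# Parity-check matrix of the [7,4] Hamming code; using it for both the
# X- and Z-type checks yields the 7-qubit Steane code.
HAMMING_H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def css_compatible(hx, hz):
    """True iff Hx * Hz^T = 0 over GF(2), i.e. every X-type stabilizer row
    commutes with every Z-type stabilizer row."""
    for row_x in hx:
        for row_z in hz:
            if sum(a * b for a, b in zip(row_x, row_z)) % 2 != 0:
                return False
    return True
```

The check passes for the Hamming matrix paired with itself because the [7,4] Hamming code contains its own dual, which is the algebraic condition behind the Steane code.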
Directory of Open Access Journals (Sweden)
Ching-Ching Liu
2015-03-01
Full Text Available Electronics companies throughout Asia recognize the benefits of Green Supply Chain Management (GSCM) for gaining competitive advantage. A large majority of electronics companies in Taiwan have recently adopted the Electronic Industry Citizenship Coalition (EICC) Code of Conduct for defining and managing their social and environmental responsibilities throughout their supply chains. We surveyed 106 Tier 1 suppliers to the Taiwanese computer industry to determine their environmental performance using the EICC Code of Conduct (EICC Code) and performed Analysis of Variance (ANOVA) on the 63 of 106 questionnaire responses collected. We tested the results to determine whether differences in product type, geographic area, and supplier size correlate with different levels of environmental performance. To our knowledge, this is the first study to analyze questionnaire data on supplier adoption to optimize the implementation of GSCM. The results suggest that a characteristic classification of suppliers could be employed to enhance the efficiency of GSCM.
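The one-way ANOVA applied to the questionnaire responses compares between-group variance to within-group variance. A minimal self-contained sketch of the F statistic (the study's actual grouping variables and statistical software are not specified beyond ANOVA):

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of groups of scores.

    F = (between-group sum of squares / (k - 1))
        / (within-group sum of squares / (n - k))
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that group membership, such as product type or supplier size, explains a meaningful share of the variance in environmental performance scores.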
Energy Technology Data Exchange (ETDEWEB)
Suikkanen, P.
2009-01-15
The objective of this Master's thesis was to study guidelines and procedures for the scaling of thermal-hydraulic test facilities and to compare results from two test facility models and from an EPR model. The aim was to get an impression of how well the studied test facilities describe the behaviour at power plant scale during accident scenarios simulated with computer codes. The models were used to determine the influence of primary circuit mass inventory on the behaviour of the circuit. The data from the test facility models represent the same phenomena as the data from the EPR model. The results calculated with the PKL model were also compared against PKL test facility data and showed good agreement. Test facility data are used to validate the computer codes employed in nuclear safety analysis. The scale of a facility affects the behaviour of the phenomena, and therefore special care must be taken in using the data. (orig.)
DEFF Research Database (Denmark)
Cox, Geoff
…alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market's emptying out of possibilities for free… development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech… expression in the public realm. The book's line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing.
Energy Technology Data Exchange (ETDEWEB)
Williams, P. T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, T. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yin, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2007-12-01
The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.
Energy Technology Data Exchange (ETDEWEB)
Yang, Yanhua; Nilsuwankosit, Sunchai; Moriyama, Kiyofumi; Maruyama, Yu; Nakamura, Hideo; Hashimoto, Kazuichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2000-12-01
A steam explosion is a phenomenon in which a high-temperature liquid transfers its internal energy very rapidly to another, low-temperature volatile liquid, causing very strong pressure build-up due to the rapid vaporization of the latter. In the field of light water reactor safety research, steam explosions caused by the contact of molten core and coolant have been recognized as a potential threat which could cause failure of the pressure vessel or the containment vessel during a severe accident. The numerical simulation code JASMINE was developed at the Japan Atomic Energy Research Institute (JAERI) to evaluate the impact of steam explosions on the integrity of reactor boundaries. The JASMINE code consists of two parts, JASMINE-pre and JASMINE-pro, which handle the premixing and propagation phases of steam explosions, respectively. The JASMINE-pro code simulates the thermo-hydrodynamics of the propagation phase of a steam explosion on the basis of a multi-fluid model for multiphase flow. This report, the 'User's Manual', gives the usage of the JASMINE-pro code as well as information on the code structures, which should be useful for users to understand how the code works. (author)
Directory of Open Access Journals (Sweden)
Wonkyeong Kim
2015-01-01
Full Text Available A high-leakage core has been known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. A comprehensive comparative analysis is performed on the neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each of the lattice physics codes shows that the generated transport cross section differs significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of the assembly discontinuity factors (ADFs) to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error except for another Monte Carlo code, SERPENT2.
Menthe, R. W.; Mccolgan, C. J.; Ladden, R. M.
1991-01-01
The Unified AeroAcoustic Program (UAAP) code calculates the airloads on a single rotation prop-fan, or propeller, and couples these airloads with an acoustic radiation theory, to provide estimates of near-field or far-field noise levels. The steady airloads can also be used to calculate the nonuniform velocity components in the propeller wake. The airloads are calculated using a three dimensional compressible panel method which considers the effects of thin, cambered, multiple blades which may be highly swept. These airloads may be either steady or unsteady. The acoustic model uses the blade thickness distribution and the steady or unsteady aerodynamic loads to calculate the acoustic radiation. The users manual for the UAAP code is divided into five sections: general code description; input description; output description; system description; and error codes. The user must have access to IMSL10 libraries (MATH and SFUN) for numerous calls made for Bessel functions and matrix inversion. For plotted output users must modify the dummy calls to plotting routines included in the code to system-specific calls appropriate to the user's installation.
Validation of Advanced Computer Codes for VVER Technology: LB-LOCA Transient in PSB-VVER Facility
Directory of Open Access Journals (Sweden)
A. Del Nevo
2012-01-01
Full Text Available The OECD/NEA PSB-VVER project provided unique and useful experimental data for code validation from the PSB-VVER test facility. This facility represents the scaled-down layout of the Russian-designed pressurized water reactor, namely, the VVER-1000. Five experiments were executed, dealing with loss-of-coolant scenarios (small, intermediate, and large break loss-of-coolant accidents), a primary-to-secondary leak, and a parametric study (natural circulation test) aimed at characterizing the VVER system at reduced mass inventory conditions. The comparative analysis presented in the paper concerns the large break loss-of-coolant accident experiment. Four participants from three different institutions were involved in the benchmark and applied their own models and setups for four different thermal-hydraulic system codes. The benchmark demonstrated the performance of such codes in predicting phenomena relevant for safety on the basis of fixed criteria.
Energy Technology Data Exchange (ETDEWEB)
Nemoto, Toshiyuki; Watanabe, Hideo; Fujita, Toyozo [Fujitsu Ltd., Tokyo (Japan); Kawai, Wataru; Harada, Hiroo; Gorai, Kazuo; Yamasaki, Kazuhiko; Shoji, Makoto; Fujii, Minoru
1996-06-01
At the Center for Promotion of Computational Science and Engineering, eight time-consuming nuclear codes suggested by users have been vectorized and parallelized on the VPP500 computer system. In addition, two nuclear codes used on the VP2600 computer system were implemented on the VPP500 computer system. The neutron and photon transport calculation code MVP/GMVP and the relativistic quantum molecular dynamics code QMDRELP have been parallelized. The extended quantum molecular dynamics code EQMD and the adiabatic base calculation code HSABC have been parallelized and vectorized. The ballooning turbulence simulation code CURBAL, the 3-D non-stationary compressible fluid dynamics code STREAM V3.1, the operating plasma analysis code TOSCA and the eddy current analysis code EDDYCAL have been vectorized. The reactor safety analysis codes RELAP5/MOD2/C36-05 and RELAP5/MOD3 were implemented on the VPP500 computer system. (author)
Energy Technology Data Exchange (ETDEWEB)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K. [Oak Ridge National Lab., TN (United States)
1995-04-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
Energy Technology Data Exchange (ETDEWEB)
West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E. [Oak Ridge National Lab., TN (United States)
1995-04-01
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.
Energy Technology Data Exchange (ETDEWEB)
Ihara, Hitoshi; Katakura, Jun-ichi; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1995-11-01
In a nuclear reactor, radioactive nuclides are generated and depleted as the nuclear fuel burns up. These nuclides, which emit {gamma} rays and {beta} rays, act both as the source of decay heat in the reactor and as a source of radiation exposure. Safety evaluations of nuclear reactors and of the nuclear fuel cycle therefore require estimates of the inventory of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of fuel used in a reactor. FPGS90 is a code that calculates the nuclide inventory, the decay heat, and the spectrum of {gamma} rays emitted by the fission products produced in nuclear fuel under these various burn-up conditions. The nuclear data library used in the FPGS90 code is the `JNDC Nuclear Data Library of Fission Products - second version -`, compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in a reactor. The code can also process evaluated nuclear data files such as ENDF/B, JENDL and ENSDF, and can plot the calculated results. Using the FPGS90 code it is thus possible to carry out the entire sequence of tasks from building the library, through calculating nuclide generation and decay heat, to plotting the results. (author).
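The kind of nuclide generation and decay-heat calculation FPGS90 performs rests on the Bateman equations. A minimal illustrative sketch for a two-member decay chain is given below; it is not taken from FPGS90 itself, and all decay constants and energies in any use of it would be hypothetical inputs:

```python
import math

def bateman_two_member(n0, lam1, lam2, t):
    """Analytic Bateman solution for a two-member chain 1 -> 2 -> (stable).

    n0: initial atoms of nuclide 1; lam1, lam2: decay constants (1/s).
    Returns (N1(t), N2(t)). Assumes lam1 != lam2 and N2(0) = 0.
    """
    n1 = n0 * math.exp(-lam1 * t)
    n2 = n0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

def decay_heat(n1, n2, lam1, lam2, q1, q2):
    """Decay heat (W) as the sum of activity times mean energy per decay (J)."""
    return lam1 * n1 * q1 + lam2 * n2 * q2
```

Summing such terms over all fission-product chains, weighted by fission yields, gives the total decay-heat curve a code of this type produces.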
Energy Technology Data Exchange (ETDEWEB)
Kim, Moo Hwan; Seo, Kyoung Woo [POSTECH, Pohang (Korea, Republic of)
2001-03-15
In the probabilistic approach, the calculated CCFPs of all the scenarios were zero, meaning that for every accident scenario the maximum pressure load induced by DCH was expected to remain below the containment failure pressure obtained from the fragility curve. It can therefore be stated that the KSNP containment is robust to the DCH threat. The uncertainties of the computer codes used in the two (deterministic and probabilistic) approaches were reduced through sensitivity tests and through verification and comparison of the DCH models in each code. The aim of this research was thus to provide an integrated assessment of the DCH issue and to establish an accurate methodology for assessing the containment integrity of operating PWRs in Korea.
Energy Technology Data Exchange (ETDEWEB)
Dellin, T.A.; Fish, M.J.; Yang, C.L.
1981-08-01
DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.
DEFF Research Database (Denmark)
Andersen, Christian Ulrik
2007-01-01
Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated… avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation discusses code as the artist’s material and, further, formulates a critique of Cramer. The seductive magic in computer-generated art does not lie in the magical expression, but nor does it lie in the code/material/text itself. It lies in the nature of code to do something – as if it was magic…
Energy Technology Data Exchange (ETDEWEB)
Hermsmeyer, S. [European Commission JRC, Petten (Netherlands). Inst. for Energy and Transport; Herranz, L.E.; Iglesias, R. [CIEMAT, Madrid (Spain); and others
2015-07-15
The severe accident at the Fukushima-Daiichi nuclear power plant (NPP) has led to a worldwide review of nuclear safety approaches and is bringing a refocussing of R and D in the field. To support these efforts several new Euratom FP7 projects have been launched. The CESAM project focuses on the improvement of the ASTEC computer code. ASTEC is jointly developed by IRSN and GRS and is considered as the European reference code for Severe Accident Analyses since it capitalizes knowledge from the extensive European R and D in the field. The project aims at the code's enhancement and extension for use in Severe Accident Management (SAM) analysis of the NPPs of Generation II-III presently under operation or foreseen in the near future in Europe, spent fuel pools included. The work reported here is concerned with the importance, for the further development of the code, of SAM strategies to be simulated. To this end, SAM strategies applied in the EU have been compiled. This compilation is mainly based on the public information made available in the frame of the EU ''stress tests'' for NPPs and has been complemented by information provided by the different CESAM partners. The context of SAM is explained and the strategies are presented. The modelling capabilities for the simulation of these strategies in the current production version 2.0 of ASTEC are discussed. Furthermore, the requirements for the next version of ASTEC V2.1 that is supported in the CESAM project are highlighted. They are a necessary complement to the list of code improvements that is drawn from consolidating new fields of application, like SFP and BWR model enhancements, and from new experimental results on severe accident phenomena.
Energy Technology Data Exchange (ETDEWEB)
Takata, Takashi; Yamaguchi, Akira [Japan Nuclear Cycle Development Inst., Oarai, Ibaraki (Japan). Oarai Engineering Center
2002-12-01
A multi-component and multi-phase numerical analysis method is developed to investigate a mechanism of sodium-water reaction phenomena, which occur when pressurized water leaks from failed heat transfer tubes in the steam generator of a fast reactor. It is named SERAPHIM: Sodium-watEr Reaction Analysis PHysics of Interdisciplinary Multi-phase flow. In this code, the surface reaction model and the gas phase reaction model are implemented as a sodium-water reaction mechanism. The HSMAC method is adopted for numerical solution. A validation for compressible multi-phase flow analysis is carried out in the present paper. Two-dimensional analyses of the sodium-water reaction are also carried out and it is demonstrated that the numerical quantification of a sodium-water reaction accident by the SERAPHIM code is practicable. (author)
Energy Technology Data Exchange (ETDEWEB)
Hoffmann, Alexander; Merk, Bruno [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Hirsch, Tobias; Pitz-Paal, Robert [DLR Deutsches Zentrum fuer Luft- und Raumfahrt e.V., Stuttgart (Germany). Inst. fuer Solarforschung
2014-06-15
In the present feasibility study the system code ATHLET, which originates from nuclear engineering, is applied to a parabolic trough test facility. A model of the DISS (DIrect Solar Steam) test facility at Plataforma Solar de Almeria in Spain is assembled and the results of the simulations are compared to measured data and to the simulation results of the Modelica library 'DissDyn'. A detailed comparison between ATHLET Mod 3.0 Cycle A and the 'DissDyn' library reveals the capabilities of these codes. The calculated mass and energy balances in the ATHLET simulations are in good agreement with the results of the measurements and confirm the applicability of the code for thermodynamic simulations of DSG processes in principle. In addition, the capabilities of the 6-equation model with transient momentum balances in ATHLET are used to study the slip between liquid and gas phases and to investigate pressure wave oscillations after a sudden valve closure. (orig.)
Directory of Open Access Journals (Sweden)
Armando C. Marino
2011-01-01
The BaCo code (“Barra Combustible”) was developed at the Atomic Energy National Commission of Argentina (CNEA) for the simulation of nuclear fuel rod behaviour under irradiation conditions. We present in this paper a brief description of the code and the strategy used for the development, improvement, enhancement, and validation of BaCo during the last 30 years. “Extreme case analysis”, parametric (or sensitivity) analysis, probabilistic (or statistical) analysis, plus the analysis of the fuel performance of the full core, are the tools built into the structure of BaCo in order to improve the understanding of the burnup extension in the Atucha I NPP, and the design of advanced fuel elements such as CARA and CAREM. The additional 3D tools of BaCo can enhance the understanding of fuel rod behaviour, fuel design, and safety margins. The modular structure of the BaCo code and its detailed coupling of thermo-mechanical and irradiation-induced phenomena make it a powerful tool for predicting the influence of material properties on fuel rod performance and integrity.
Baianu,I C
2004-01-01
The concepts of quantum automata and quantum computation are studied in the context of quantum genetics and genetic networks with nonlinear dynamics. In previous publications (Baianu,1971a, b) the formal concept of quantum automaton and quantum computation, respectively, were introduced and their possible implications for genetic processes and metabolic activities in living cells and organisms were considered. This was followed by a report on quantum and abstract, symbolic computation based on the theory of categories, functors and natural transformations (Baianu,1971b; 1977; 1987; 2004; Baianu et al, 2004). The notions of topological semigroup, quantum automaton, or quantum computer, were then suggested with a view to their potential applications to the analogous simulation of biological systems, and especially genetic activities and nonlinear dynamics in genetic networks. Further, detailed studies of nonlinear dynamics in genetic networks were carried out in categories of n-valued, Lukasiewicz Logic Algebra...
Energy Technology Data Exchange (ETDEWEB)
Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B
2003-07-01
This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and for neutronics calculations (fissile media, criticality or sub-criticality studies). This makes it possible to calculate k{sub eff} (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte Carlo method. It allows for a point-wise description of cross-sections in energy as well as multi-group homogenized cross-sections, and features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, and cross-sections in APOTRIM format (from the APOLLO2 code) or in a format specific to TRIPOLI-4 for the multi-group description. (authors)
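The k{sub eff} quantity such codes estimate can be illustrated with a toy analog Monte Carlo model, far simpler than TRIPOLI-4's transport treatment: in an infinite homogeneous medium, each absorbed neutron ends in either fission or capture, and the expected multiplication is nu*Sigma_f/Sigma_a. All cross-section values below are hypothetical:

```python
import random

def k_inf_analog(sigma_f, sigma_c, nu, histories=200_000, seed=1):
    """Toy analog Monte Carlo estimate of k-infinity.

    sigma_f, sigma_c: macroscopic fission and capture cross-sections (the two
    absorption channels); nu: mean neutrons per fission. Each history samples
    the fate of one absorbed neutron and tallies the neutrons it produces.
    """
    rng = random.Random(seed)
    p_fission = sigma_f / (sigma_f + sigma_c)
    produced = 0.0
    for _ in range(histories):
        if rng.random() < p_fission:
            produced += nu  # fission: nu new neutrons enter the next generation
    return produced / histories
```

The estimate converges to the analytic value nu*sigma_f/(sigma_f + sigma_c) as the number of histories grows, which is the statistical behaviour production Monte Carlo codes exploit at vastly larger scale.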
Algebraic geometric codes with applications
Institute of Scientific and Technical Information of China (English)
CHEN Hao
2007-01-01
The theory of linear error-correcting codes from algebraic geometric curves (algebraic geometric (AG) codes or geometric Goppa codes) has been well-developed since the work of Goppa and Tsfasman, Vladut, and Zink in 1981-1982. In this paper we introduce to readers some recent progress in algebraic geometric codes and their applications in quantum error-correcting codes, secure multi-party computation and the construction of good binary codes.
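The simplest evaluation codes of the kind AG codes generalize are Reed-Solomon codes: encode a low-degree polynomial by evaluating it at distinct field elements. A minimal sketch over the prime field GF(7) (an illustrative choice, not from the paper) shows the construction and its Singleton-optimal minimum distance d = n - k + 1:

```python
P = 7  # prime modulus: arithmetic below is in the finite field GF(7)

def rs_encode(msg, n=6, p=P):
    """Reed-Solomon encoding as an evaluation code.

    msg: coefficients (low degree first) of a polynomial of degree < k = len(msg).
    The codeword is the polynomial evaluated at the n distinct points 1..n of GF(p).
    """
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(1, n + 1)]
```

Because a nonzero polynomial of degree < k has at most k - 1 roots, every nonzero codeword has weight at least n - k + 1; AG codes replace the affine line by points on an algebraic curve to get longer codes over the same field.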
Energy Technology Data Exchange (ETDEWEB)
Sergey I Shcherbakov [SSC RF IPPE named after A.I. Leypunsky, Bondarenko sq. 1, Obninsk, 249033, Kaluga region (Russian Federation)
2005-07-01
Full text of publication follows: The paper presents the key features of the TURBO-FLOW 2D computer code designed for on-line numerical solving of multiphase flow problems (at present, three phases) in the units of NPP equipment. The code implements a direct non-stationary calculation of velocity distribution and phase concentrations. The fields of application of the TURBO-FLOW code are the following: multi-version calculations for optimizing a construction design or regime; dynamic processes with a sampling up to 10{sup 5} of time steps (impacts, explosions, vibrations, and so on); express calculations. The code is characterized by the simplicity of giving the calculation object and very little time required for producing results (dozens of time steps per second). The system requirements are as follows: Win98/ME, Pentium3-600 (256 k L2 Cache), 32 Mb. The peculiarities of mathematical statement consist in dividing velocity variations into components (by reasons of their occurrence), calculating them independently, and using the medium-volume velocity of mixture and velocities of phase slip. To evaluate the medium-volume velocity, the current function and velocity potential calculated by the circulation and mass conservation equations are used. Preliminarily, the current functions and potentials are calculated for time-varying volumetric sources and boundary conditions. A concept of permissible velocity variations is used. The friction models for empty domain and porous solid are involved. The slip velocity is given by a continuous function of phase concentration and local pressure gradient. The equations of phase transfer are solved with individual velocities of phases and phase transfers (the rate and localization of phase breakdown into each other to be specified). In addition, the equations for the functions of phase particle age are solved. The two-dimensional computational model being given by the user on a rectangular nonuniform mesh is used. The procedure of
Energy Technology Data Exchange (ETDEWEB)
Hayes, J C; Norman, M
1999-10-28
This report details an investigation into the efficacy of two approaches to solving the radiation diffusion equation within a radiation hydrodynamic simulation. Because leading-edge scientific computing platforms have evolved from large single-node vector processors to parallel aggregates containing tens to thousands of individual CPUs, the ability of an algorithm to maintain high compute efficiency when distributed over a large array of nodes is critically important. The viability of an algorithm thus hinges upon the tripartite question of numerical accuracy, total time to solution, and parallel efficiency.
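The parallel-efficiency criterion mentioned above has a standard quantitative form. A small sketch, using Amdahl's law as a hypothetical runtime model (the serial fractions and timings are illustrative, not from the report):

```python
def speedup(t1, tp):
    """Speedup of a p-processor run with runtime tp relative to serial time t1."""
    return t1 / tp

def parallel_efficiency(t1, tp, p):
    """Parallel efficiency E = T1 / (p * Tp); 1.0 is ideal scaling."""
    return t1 / (p * tp)

def amdahl_time(t1, p, serial_fraction):
    """Amdahl's-law runtime model: the serial fraction does not speed up."""
    return t1 * (serial_fraction + (1.0 - serial_fraction) / p)
```

Even a 5% serial fraction caps the speedup on many processors well below p, which is why distributed solvers for the diffusion equation are judged on efficiency as well as accuracy.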
Energy Technology Data Exchange (ETDEWEB)
Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishikawa, Hirohiko
1995-09-01
A computer code system has been developed for near real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information), aims at predicting the radiological impact on Japan of a nuclear accident in a foreign country. WSPEEDI consists of a mass-consistent wind model, WSYNOP, for large-scale wind fields and a particle random walk model, GEARN, for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with a system control software, a worldwide geographic database, a meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near real-time simulations of the European Tracer Experiment (ETEX), carried out over an area of about 2,000 km across Europe. (author).
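The particle random walk approach used by models such as GEARN can be illustrated with a toy two-dimensional sketch: each particle is advected by a mean wind and given a Gaussian diffusive step. This is a schematic of the method only; the wind and diffusivity values in any use of it are hypothetical, and real models use three dimensions, turbulence closures and deposition:

```python
import math
import random

def disperse(n, steps, dt, u, v, k, seed=0):
    """Toy Lagrangian particle random walk from a point release at the origin.

    n: particles; dt: time step (s); (u, v): mean wind (m/s);
    k: eddy diffusivity (m^2/s). Returns final (x, y) positions.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * k * dt)  # std-dev of one diffusive displacement
    parts = []
    for _ in range(n):
        x = y = 0.0
        for _ in range(steps):
            x += u * dt + rng.gauss(0.0, sigma)
            y += v * dt + rng.gauss(0.0, sigma)
        parts.append((x, y))
    return parts
```

The ensemble mean drifts with the wind while the spread grows like the square root of time, reproducing the Gaussian plume behaviour in the appropriate limit.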
Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User`s manual - V.3.0
Energy Technology Data Exchange (ETDEWEB)
Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.
1996-10-01
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models flowfields from the free-molecular to the continuum regime in either cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed externally generated field or determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. The majority of the software packages are written in standard Fortran.
Energy Technology Data Exchange (ETDEWEB)
Willmann, P.A.; Hooper, E.B. Jr.
1977-02-01
A computer program was written to calculate the stored energy in a transformer. This result easily yields the inductance and leakage reactance of the transformer and is estimated to be accurate to better than 5 percent. The program was used to calculate the leakage reactance of the main transformer for the LLL neutral beam High Voltage Test Stand.
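The step from stored energy to inductance follows directly from E = L*I^2/2, and the leakage reactance then follows at the operating frequency. A minimal sketch of that post-processing (the numerical values in any use are hypothetical, not the Test Stand's actual parameters):

```python
import math

def inductance_from_energy(energy_j, current_a):
    """Recover inductance (H) from magnetically stored energy: E = L*I^2/2."""
    return 2.0 * energy_j / current_a ** 2

def leakage_reactance(inductance_h, freq_hz):
    """Leakage reactance X = 2*pi*f*L, assuming sinusoidal excitation at f."""
    return 2.0 * math.pi * freq_hz * inductance_h
```

Given the computed field energy at a known winding current, these two lines are all that is needed to report inductance and reactance, which is why the abstract says the result "easily yields" them.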
Mallios, Nikolaos; Vassilakopoulos, Michael Gr.
2015-01-01
One of the most intriguing objectives when teaching computer science in mid-adolescence high school students is attracting and mainly maintaining their concentration within the limits of the class. A number of theories have been proposed and numerous methodologies have been applied, aiming to assist in the implementation of a personalized learning…
Practices in Code Discoverability
Teuben, Peter; Nemiroff, Robert J; Shamir, Lior
2012-01-01
Much of scientific progress now hinges on the reliability, falsifiability and reproducibility of computer source codes. Astrophysics in particular is a discipline that today leads other sciences in making useful scientific components freely available online, including data, abstracts, preprints, and fully published papers, yet even today many astrophysics source codes remain hidden from public view. We review the importance and history of source codes in astrophysics and previous efforts to develop ways in which information about astrophysics codes can be shared. We also discuss why some scientist coders resist sharing or publishing their codes, the reasons for and importance of overcoming this resistance, and alert the community to a reworking of one of the first attempts for sharing codes, the Astrophysics Source Code Library (ASCL). We discuss the implementation of the ASCL in an accompanying poster paper. We suggest that code could be given a similar level of referencing as data gets in repositories such ...
Stepanek, J; Laissue, J A; Lyubimova, N; Di Michiel, F; Slatkin, D N
2000-01-01
Microbeam radiation therapy (MRT) is a currently experimental method of radiotherapy which is mediated by an array of parallel microbeams of synchrotron-wiggler-generated X-rays. Suitably selected, nominally supralethal doses of X-rays delivered to parallel microslices of tumor-bearing tissues in rats can be either palliative or curative while causing little or no serious damage to contiguous normal tissues. Although the pathogenesis of MRT-mediated tumor regression is not understood, as in all radiotherapy such understanding will be based ultimately on our understanding of the relationships among the following three factors: (1) microdosimetry, (2) damage to normal tissues, and (3) therapeutic efficacy. Although physical microdosimetry is feasible, published information on MRT microdosimetry to date is computational. This report describes Monte Carlo-based computational MRT microdosimetry using photon and/or electron scattering and photoionization cross-section data in the 1 eV through 100 GeV range distrib...
Energy Technology Data Exchange (ETDEWEB)
Boyle, W.G. Jr.
1977-10-28
There are three computer programs, written in the BASIC language, used for taking data from an atomic absorption spectrophotometer operating in the flame mode. The programs are divided into logical sections, and these have been flow-charted. The general features, the structure, the order of subroutines and functions, and the storage of data are discussed. In addition, variables are listed and defined, and a complete listing of each program with a symbol occurrence table is provided.
Energy Technology Data Exchange (ETDEWEB)
Medeiros, Eduarda da C.A.; Castrillo, Lazara S., E-mail: e.camedeiros@gmail.com, E-mail: lazara@poli.br [Universidade de Pernambuco, Recife, PE (Brazil). Escola Politecnica. Departamento de Engenharia Mecanica
2015-07-01
Insurge and outsurge phenomena are transients that can be analyzed using thermodynamic principles; the pressurizer behavior varies in response to changes in mass flow. These surges can occur in the presence of non-condensable gases. In this paper, the IRIS reactor pressurizer is modelled with the RELAP5 code to analyze surge phenomena in its control volumes with non-condensable gases present, since these gases modify the pressure response. A set of three pipe components represents the pressurizer regions, connected to each other by single-junction components. The bottom control volume is connected to the primary circuit, represented by a time-dependent volume component, through a time-dependent junction component, which describes the mass flow behavior through the surge orifices during surges. The hydrodynamic components representing the pressurizer are surrounded by heat structures; in addition, there are heat structures inside the bottom control volume describing the behavior of the electrical heaters, which operate in the case of outsurges. The analyses detail the behavior of variables such as pressure, temperature and liquid volume inside the pressurizer during a water surge coming from the primary circuit or a water surge going from the pressurizer to the primary circuit. (author)
VFLOW2D - A Vorte-Based Code for Computing Flow Over Elastically Supported Tubes and Tube Arrays
Energy Technology Data Exchange (ETDEWEB)
WOLFE,WALTER P.; STRICKLAND,JAMES H.; HOMICZ,GREGORY F.; GOSSLER,ALBERT A.
2000-10-11
A numerical flow model is developed to simulate two-dimensional fluid flow past immersed, elastically supported tube arrays. This work is motivated by the objective of predicting forces and motion associated with both deep-water drilling and production risers in the oil industry. This work has other engineering applications including simulation of flow past tubular heat exchangers or submarine-towed sensor arrays and the flow about parachute ribbons. In the present work, a vortex method is used for solving the unsteady flow field. This method demonstrates inherent advantages over more conventional grid-based computational fluid dynamics. The vortex method is non-iterative, does not require artificial viscosity for stability, displays minimal numerical diffusion, can easily treat moving boundaries, and allows a greatly reduced computational domain since vorticity occupies only a small fraction of the fluid volume. A gridless approach is used in the flow sufficiently distant from surfaces. A Lagrangian remap scheme is used near surfaces to calculate diffusion and convection of vorticity. A fast multipole technique is utilized for efficient calculation of velocity from the vorticity field. The ability of the method to correctly predict lift and drag forces on simple stationary geometries over a broad range of Reynolds numbers is presented.
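The core operation of such a vortex method, recovering velocity from the vorticity field, reduces for 2D point vortices to a Biot-Savart summation. A minimal sketch (a direct O(N^2)-style sum with a crude core desingularization, standing in for the fast multipole evaluation the abstract describes; all values are illustrative):

```python
import math

def induced_velocity(x, y, vortices, delta=1e-6):
    """Velocity at (x, y) induced by 2D point vortices via the Biot-Savart law.

    vortices: list of (xv, yv, gamma) with circulation gamma (counterclockwise
    positive). delta: small core radius that desingularizes the 1/r^2 kernel.
    """
    u = v = 0.0
    for xv, yv, gamma in vortices:
        dx, dy = x - xv, y - yv
        r2 = dx * dx + dy * dy + delta * delta
        u += -gamma * dy / (2.0 * math.pi * r2)
        v += gamma * dx / (2.0 * math.pi * r2)
    return u, v
```

Because vorticity occupies only a small fraction of the fluid volume, summing over vortex particles like this replaces a full-domain grid, which is the computational advantage the abstract highlights.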
Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel
2017-03-01
The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.
Energy Technology Data Exchange (ETDEWEB)
Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.
1990-06-01
TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300{degree}F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and the temporal and spatial distribution of oxide formation are computed. Consumption of steam by the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal and Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion (``bake-out'') release mechanisms. Release of the volatile species iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
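A parabolic rate law with Arrhenius temperature dependence, of the form described above, can be sketched as follows. The functional form is standard; the fit constants A and B in the test are hypothetical placeholders, not TRUMP-BD's actual correlation coefficients:

```python
import math

def parabolic_rate_constant(temp_k, a, b):
    """Arrhenius form of the parabolic rate constant: K(T) = A * exp(-B / T).

    a, b: empirical fit parameters (hypothetical values in any example use).
    """
    return a * math.exp(-b / temp_k)

def oxide_mass_gain(temp_k, t_s, a, b):
    """Parabolic oxidation law: w^2 = K(T) * t, so w = sqrt(K * t).

    Parabolic kinetics arise because the growing oxide layer itself limits
    further reaction: the rate slows as the layer thickens.
    """
    return math.sqrt(parabolic_rate_constant(temp_k, a, b) * t_s)
```

The characteristic signatures are that mass gain doubles when exposure time quadruples at fixed temperature, while the rate constant rises steeply with temperature, which is why oxidation (and the associated hydrogen generation) accelerates sharply during core heatup.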
Energy Technology Data Exchange (ETDEWEB)
PACKER, M.J.
2000-06-20
This report documents the verification and validation (V&V) activities undertaken to support the use of the RADNUC2-A and ORIGEN2 S.2 computer codes for the specific application of calculating isotopic inventories and decay heat loadings for Spent Nuclear Fuel Project (SNFP) activities as described herein. Two recent applications include the reports HNF-SD-SNF-TI-009, 105-K Basin Material Design Basis Feed Description for Spent Nuclear Fuel Project Facilities, Volume 1, Fuel (Praga, 1998), and HNF-3035, Rev. 0B, MCO Gas Composition for Low Reactive Surface Areas (Packer, 1998). Representative calculations documented in these two reports were repeated using RADNUC2-A, and the results were identical to the documented results. This serves as verification that version 2A of Radnuc was used for the applications noted above; the same version was tested herein, and perfect agreement was shown. Comprehensive V&V is demonstrated for RADNUC2-A in Appendix A.
Directory of Open Access Journals (Sweden)
Siniša Šadek
2010-01-01
RELAP5/SCDAPSIM and MAAP4 are two widely used severe accident computer codes for integral analysis of core and reactor pressure vessel behaviour following core degradation. The objective of this paper is to compare code results obtained with different modelling options and to evaluate the influence of the thermal-hydraulic behaviour of the plant on core damage progression. The analysed transient was a postulated station blackout in NPP Krško with leakage from the reactor coolant pump seals. Two groups of calculations were performed, each group with a different break area and, thus, a different leakage rate. The analyses have shown that MAAP4 results were more sensitive to varying thermal-hydraulic conditions in the primary system. User-defined parameters had to be carefully selected when the MAAP4 model was developed, in contrast to the RELAP5/SCDAPSIM model, where those parameters did not have any significant impact on the final results.
Raisali, G. R.; Sohrabpour, M.
1993-10-01
The EGS4 Monte Carlo electron-photon transport simulation package, together with a locally developed computer program, GCELL, has been used to simulate the transport of the gamma rays in a Gammacell 220. An additional lead attenuator, inserted in the chamber for those cases where lower dose rates were required, has also been included. For the three cases of 0, 1.35 and 4.0 cm of added lead attenuator, the gamma spectrum and dose rate distribution inside the chamber have been determined. For the case with no attenuator present, the main shield around the source cage has been included in the simulation program and its albedo effects have been investigated. The calculated dose rate distribution in the Gammacell chamber has been compared against measurements carried out with Fricke, PMMA and Gafchromic film dosimeters.
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of development of a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and the phenomena are singled out that require a detailed analysis and development of the models to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models to describe the processes that take place during the steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and
An algebraic approach to graph codes
DEFF Research Database (Denmark)
Pinero, Fernando
This thesis consists of six chapters. The first chapter contains a short introduction to coding theory in which we explain the coding theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give examples of some fundamental codes in coding theory as evaluation codes. Chapter three consists of the introduction to graph based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph based codes with a result combining graph based codes and subfield subcodes; moreover, some codes in Chapter four are optimal or best known for their parameters. In Chapter five we study some graph codes with Reed–Solomon component codes. The underlying graph is well known and widely used for its good characteristics. This helps us to compute the dimension of the graph codes. We also introduce a combinatorial concept...
Directory of Open Access Journals (Sweden)
Thomas eJahans-Price
2014-03-01
We introduce a computational model describing rat behaviour and the interactions of neural populations processing spatial and mnemonic information during a maze-based, decision-making task. The model integrates sensory input and implements a working memory to inform decisions at a choice point, reproducing rat behavioural data and predicting the occurrence of turn- and memory-dependent activity in neuronal networks supporting task performance. We tested these model predictions using a new software toolbox (Maze Query Language, MQL) to analyse the activity of medial prefrontal cortical (mPFC) and dorsal hippocampal (dCA1) neurons recorded from 6 adult rats during task performance. The firing rates of dCA1 neurons discriminated context (i.e. the direction of the previous turn), whilst a subset of mPFC neurons was selective for current turn direction or context, with some conjunctively encoding both. mPFC turn-selective neurons displayed a ramping of activity on approach to the decision turn, and turn-selectivity in mPFC was significantly reduced during error trials. These analyses complement data from neurophysiological recordings in non-human primates indicating that firing rates of cortical neurons correlate with integration of sensory evidence used to inform decision-making.
Chen, Xiaowei Sylvia; Brown, Chris M
2012-10-01
Messenger ribonucleic acids (RNAs) contain a large number of cis-regulatory RNA elements that function in many types of post-transcriptional regulation. These cis-regulatory elements are often characterized by conserved structures and/or sequences. Although some classes are well known, given the wide range of RNA-interacting proteins in eukaryotes, it is likely that many new classes of cis-regulatory elements are yet to be discovered. One approach to this is to use computational methods, which have the advantage of analysing genomic data, particularly comparative data, on a large scale. In this study, a set of structural discovery algorithms was applied, followed by support vector machine (SVM) classification. We trained a new classification model (CisRNA-SVM) on a set of known structured cis-regulatory elements from 3'-untranslated regions (UTRs) and successfully distinguished these, and groups of cis-regulatory elements it had not been trained on, from control genomic and shuffled sequences. The new method outperformed previous methods in classification of cis-regulatory RNA elements. This model was then used to predict new elements from cross-species conserved regions of human 3'-UTRs. Clustering of these elements identified new classes of potential cis-regulatory elements. The model, training and testing sets, and novel human predictions are available at: http://mRNA.otago.ac.nz/CisRNA-SVM.
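As a toy illustration of the classification stage described in this abstract, the sketch below trains a linear max-margin classifier with hinge-loss updates, a minimal stand-in for the paper's SVM. The features, data, and training rule are invented for illustration and are not the CisRNA-SVM model.

```python
# Minimal hinge-loss linear classifier on synthetic data. Everything
# here (features, data, training rule) is an illustrative stand-in for
# the SVM classification stage described in the abstract.

def train_margin_classifier(X, y, eta=0.1, epochs=500):
    """Update w, b whenever a training point falls inside the margin."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        clean = True
        for xi, yi in zip(X, y):
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) < 1:
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
                b += eta * yi
                clean = False
        if clean:              # every point is outside the margin: done
            break
    return w, b

# Two separable clusters standing in for "element" vs "control" features.
X = [[1.0 + 0.1 * i, 2.0] for i in range(5)] + \
    [[-1.0 - 0.1 * i, -2.0] for i in range(5)]
y = [1] * 5 + [-1] * 5

w, b = train_margin_classifier(X, y)
pred = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1
        for xi in X]
assert pred == y               # the trained separator classifies the data
```

In the paper the feature vectors would instead encode structural properties of candidate RNA elements, and the SVM would use a kernel rather than this bare linear rule.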
On the Dimension of Graph Codes with Reed–Solomon Component Codes
DEFF Research Database (Denmark)
Beelen, Peter; Høholdt, Tom; Pinero, Fernando
2013-01-01
We study a class of graph based codes with Reed-Solomon component codes as affine variety codes. We give a formulation of the exact dimension of graph codes in general. We give an algebraic description of these codes which makes the exact computation of the dimension of the graph codes easier....
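For a plain Reed-Solomon evaluation code, the dimension the authors compute for the composite graph code reduces to a rank computation over a finite field. The sketch below is an illustrative toy with arbitrary parameters (p = 7, n = 6, k = 3), not the paper's construction.

```python
# Toy dimension computation for a Reed-Solomon evaluation code over
# GF(p): codewords are evaluations of polynomials of degree < k at n
# distinct points, so the generator matrix has rank k. The parameters
# below are arbitrary choices, not taken from the paper.

p, n, k = 7, 6, 3
points = list(range(1, n + 1))            # distinct points of GF(7)

# Row i of the generator matrix is x^i evaluated at each point.
G = [[pow(x, i, p) for x in points] for i in range(k)]

def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p) via Gaussian elimination."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    rank = 0
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)   # Fermat inverse (p prime)
        M[rank] = [(v * inv) % p for v in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

assert rank_mod_p(G, p) == k              # the code has dimension k
```

The graph codes of the paper glue such component codes along the edges of a graph, and computing their dimension is the analogous, but much harder, rank problem the authors address.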
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, J. A.; Del Valle, E.; Vargas, S.; Xolocostli, J. V. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)], e-mail: joseangel.gonzalez@inin.gob.mx
2009-10-15
In this work the TACHY code is used to simulate an operating cycle of the Laguna Verde nuclear power plant. The TACHY code was originally designed to analyze reload patterns of Indian BWR-type plants, which have nearly 800 fuel assemblies, almost double the number in the Laguna Verde reactor. For this reason it was necessary to modify the code for application to Laguna Verde; values such as the operating power, inlet subcooling, core flow, number of assemblies in the core, and core dimensions were changed. Cycle 9 of Unit 2 of Laguna Verde is taken as the base case in this work. This cycle is simulated with the TACHY code and with the SIMULATE-3 code, which is part of the Core Management System computational package, in order to compare the results. The quantities compared between the two codes are, for the full core, the core average burnup, the cycle length, the effective neutron multiplication factor, and the radial relative power peak; and, for each assembly, the burnup and the relative power. From the results obtained with TACHY we conclude that we have a computational tool that allows a large number of reload patterns to be analyzed in a reasonable time. (Author)
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October, a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
Energy Technology Data Exchange (ETDEWEB)
Martinez, M.; Miro, R.; Barrachina, T.; Verdu, G.
2011-07-01
This paper presents the results of a steady-state simulation using a CFD (computational fluid dynamics) model under full-power (Hot Full Power) operating conditions. The development of the CFD model and its results show the usefulness of these codes for 3D calculation of the thermal-hydraulic variables of these reactors.
Energy Technology Data Exchange (ETDEWEB)
Nichols, R.A.; Smith, W.W.
1976-06-30
The three-volume report describes a dual-mode nuclear space power and propulsion system concept that employs an advanced solid-core nuclear fission reactor coupled via heat pipes to one of several electric power conversion systems. The second volume describes the computer code and users' guide for the preliminary analysis of the system.
Energy Technology Data Exchange (ETDEWEB)
Wulff, W; Cheng, H S; Diamond, D J; Khatib-Rahbar, M
1984-01-01
This report documents the physical models and the numerical methods employed in the BWR systems code RAMONA-3B. The RAMONA-3B code simulates three-dimensional neutron kinetics and multichannel core hydraulics of nonhomogeneous, nonequilibrium two-phase flows. RAMONA-3B is programmed to calculate the steady and transient conditions in the main steam supply system for normal and abnormal operational transients, including the performances of plant control and protection systems. Presented are code capabilities and limitations, models and solution techniques, the results of development code assessment and suggestions for improving the code in the future.
Energy Technology Data Exchange (ETDEWEB)
Kim, Jong Bum; Jeong, Ji Young; Lee, Tae Ho; Kim, Sung Kyun; Euh, Dong Jin; Joo, Hyung Kook [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
The design of Prototype Gen-IV Sodium-Cooled Fast Reactor (PGSFR) has been developed and the validation and verification (V and V) activities to demonstrate the system performance and safety are in progress. In this paper, the current status of test activities is described briefly and significant results are discussed. The large-scale sodium thermal-hydraulic test program, Sodium Test Loop for Safety Simulation and Assessment-1 (STELLA-1), produced satisfactory results, which were used for the computer codes V and V, and the performance test results of the model pump in sodium showed good agreement with those in water. The second phase of the STELLA program with the integral effect tests facility, STELLA-2, is in the detailed design stage of the design process. The sodium thermal-hydraulic experiment loop for finned-tube sodium-to-air heat exchanger performance test, the intermediate heat exchanger test facility, and the test facility for the reactor flow distribution are underway. Flow characteristics test in subchannels of a wire-wrapped rod bundle has been carried out for safety analysis in the core and the dynamic characteristic test of upper internal structure has been performed for the seismic analysis model for the PGSFR. The performance tests for control rod assemblies (CRAs) have been conducted for control rod drive mechanism driving parts and drop tests of the CRA under scram condition were performed. Finally, three types of inspection sensors under development for the safe operation of the PGSFR were explained with significant results.
Energy Technology Data Exchange (ETDEWEB)
Byamukama, Abdul [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Haiyong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2014-10-15
Radioactive materials are utilized in industry, agriculture, research, medical facilities, and academic institutions for numerous purposes that are useful in the daily life of mankind. To manage radioactive waste effectively and select appropriate disposal schemes, it is imperative to have specific criteria for allocating radioactive waste to a particular waste class. Uganda has a radioactive waste classification scheme based on activity concentration and half-life, albeit in qualitative terms, as documented in the Uganda Atomic Energy Regulations 2012. There is no clear boundary between the different waste classes, and it is hence difficult to suggest disposal options, make decisions and enforce compliance, and communicate effectively with stakeholders, among other things. To overcome these challenges, the RESRAD computer code was used to derive specific criteria for distinguishing between the different waste categories for Uganda based on the activity concentration of radionuclides. The results were compared with those for Australia and were found to correlate, given the differences in site parameters and consumption habits of the residents in the two countries.
M. Kasemann
Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
I. Fisk
2011-01-01
Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
P. McBride
The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...
The HULL Hydrodynamics Computer Code
1976-09-01
Mark A. Fry, Capt, USAF; Richard E. Durrett, Major, USAF; Gary P. Ganong, Major, USAF; Daniel A. Matuska, Major, USAF; Mitchell D. Stucker, Capt, USAF. Cited: Ganong, G.P., and Roberts, W.A., The Effect of the Nuclear Environment on Crater Ejecta Trajectories for Surface Bursts, AFWL-TR-68-125, Air Force...; Ganong, G.P., et al., private communication; Needham, C.E., AFWL-TR-69-19.
Energy Technology Data Exchange (ETDEWEB)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.; Neymotin, L.Y.
1998-03-01
This document describes the major modifications and improvements made to the modeling of the RAMONA-3B/MOD0 code since 1981, when the code description and assessment report was completed. The new version of the code is RAMONA-4B. RAMONA-4B is a systems transient code for application to different versions of Boiling Water Reactors (BWR) such as the current BWR, the Advanced Boiling Water Reactor (ABWR), and the Simplified Boiling Water Reactor (SBWR). This code uses a three-dimensional neutron kinetics model coupled with a multichannel, non-equilibrium, drift-flux, two-phase flow formulation of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients and instability issues. Chapter 1 is an overview of the code's capabilities and limitations; Chapter 2 discusses the neutron kinetics modeling and the implementation of reactivity edits. Chapter 3 is an overview of the heat conduction calculations. Chapter 4 presents modifications to the thermal-hydraulics model of the vessel, recirculation loop, steam separators, boron transport, and SBWR-specific components. Chapter 5 describes modeling of the plant control and safety systems. Chapter 6 presents the modeling of the Balance of Plant (BOP). Chapter 7 describes the mechanistic containment model in the code. The content of this report is complementary to the RAMONA-3B code description and assessment document. 53 refs., 81 figs., 13 tabs.
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...
Atkinson, Paul
2011-01-01
The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of our work and personal lives. At this point, the computer is so common that we hardly notice it in our view. It is difficult to envision that, not so long ago, it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...
M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
Introduction to coding and information theory
Roman, Steven
1997-01-01
This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: the efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, simplex codes, and many others.
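The entropy and code-efficiency material summarized above can be illustrated with a short sketch: a binary Huffman code for a dyadic source achieves an average codeword length equal to the source entropy, consistent with the Noiseless Coding Theorem bound H <= L < H + 1. This is an illustrative example, not taken from the book.

```python
# Illustrative sketch (not from the book): a binary Huffman code for a
# dyadic source meets the entropy bound of the Noiseless Coding Theorem.

import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary prefix code."""
    # Heap items are (probability, list of symbol indices in the subtree).
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:                 # merged symbols gain one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]         # dyadic source distribution
lengths = huffman_lengths(probs)          # yields lengths [1, 2, 3, 3]
entropy = -sum(p * math.log2(p) for p in probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))

# Shannon's bound: H <= L < H + 1; for a dyadic source, L = H exactly.
assert entropy <= avg_len < entropy + 1
assert abs(avg_len - entropy) < 1e-12
```

For non-dyadic distributions the average length L lies strictly between H and H + 1, which is the gap the theorem quantifies.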
Energy Technology Data Exchange (ETDEWEB)
Guasp, J.
1974-07-01
A Fortran V computer code for Univac 1108/6 using the partial statistical (or compound nucleus) model is described. The code calculates fast neutron cross sections for the (n, n'), (n, p), (n, d) and (n, {alpha}) reactions and the angular distributions and Legendre moments for the (n, n) and (n, n') processes in heavy and intermediate spherical nuclei. A local Optical Model with spin-orbit interaction for each level is employed, allowing for the width fluctuation and Moldauer corrections, as well as the inclusion of discrete and continuous levels. (Author) 67 refs.
Energy Technology Data Exchange (ETDEWEB)
Okamura, Nobuo; Sato, Koji [Japan Nuclear Cycle Development Inst., Oarai, Ibaraki (Japan). Oarai Engineering Center
2002-03-01
An analysis code using the object-oriented software EX·TD Ver.4 was developed for the estimation of material balance for the system design of pyrochemical reprocessing plants consisting of batch processes. This code can also estimate the radioactivity balance, decay heat balance, and holdup, and can easily accommodate improvements to the process flow, and so on. An example of the material balance estimation under consideration of the solvent (molten salt) recycling time is presented for the oxide electrowinning reprocessing system designed in the feasibility study of the FBR fuel cycle system. The results indicate the possibility of reducing the vitrified waste form volume by extending the recycling time of the solvent. This paper describes the outline of the code and the estimation of the material balance in the oxide electrowinning reprocessing system under consideration of the solvent recycling time. (author)
Directory of Open Access Journals (Sweden)
Jong-Bum Kim
2016-10-01
The design of the Prototype Gen-IV Sodium-Cooled Fast Reactor (PGSFR) has been developed and the validation and verification (V&V) activities to demonstrate the system performance and safety are in progress. In this paper, the current status of test activities is described briefly and significant results are discussed. The large-scale sodium thermal-hydraulic test program, Sodium Test Loop for Safety Simulation and Assessment-1 (STELLA-1), produced satisfactory results, which were used for the computer codes V&V, and the performance test results of the model pump in sodium showed good agreement with those in water. The second phase of the STELLA program with the integral effect tests facility, STELLA-2, is in the detailed design stage of the design process. The sodium thermal-hydraulic experiment loop for finned-tube sodium-to-air heat exchanger performance test, the intermediate heat exchanger test facility, and the test facility for the reactor flow distribution are underway. Flow characteristics tests in subchannels of a wire-wrapped rod bundle have been carried out for safety analysis in the core, and the dynamic characteristic test of the upper internal structure has been performed for the seismic analysis model for the PGSFR. The performance tests for control rod assemblies (CRAs) have been conducted for control rod drive mechanism driving parts, and drop tests of the CRA under scram conditions were performed. Finally, three types of inspection sensors under development for the safe operation of the PGSFR were explained with significant results.
Blind recognition of punctured convolutional codes
Institute of Scientific and Technical Information of China (English)
LU Peizhong; LI Shen; ZOU Yan; LUO Xiangyang
2005-01-01
This paper presents an algorithm for blind recognition of punctured convolutional codes, which is an important problem in adaptive modulation and coding. Given a finite output sequence of a convolutional code, the parity check matrix of the code is first computed by solving a linear system with adequate error tolerance. A minimal basic encoding matrix of the original convolutional code and its puncturing pattern are then determined from the known parity check matrix of the punctured convolutional code.
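The first step of the algorithm, recovering a parity-check relation of the code from its output stream alone by solving a linear system over GF(2), can be sketched on a toy example. The rate-1/2 generators (1 + D^2, 1 + D + D^2), the window length, and the noiseless setting are illustrative assumptions; the paper's method additionally handles error tolerance and puncturing, which this sketch omits.

```python
# Toy blind recovery of a parity-check relation for a rate-1/2
# convolutional code from its (noiseless) output stream. Generators
# (1 + D^2, 1 + D + D^2) and the window size are illustrative choices.

import random

G1, G2 = [1, 0, 1], [1, 1, 1]   # generator polynomials, constant term first

def branch_output(g, u):
    """One output stream of the convolutional encoder (zero initial state)."""
    return [sum(g[j] * u[t - j] for j in range(len(g)) if t >= j) % 2
            for t in range(len(u))]

def null_space_gf2(rows):
    """Basis of the right null space of a 0/1 matrix over GF(2)."""
    cols = len(rows[0])
    M = [row[:] for row in rows]
    pivots = []
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        v = [0] * cols
        v[f] = 1
        for i, c in enumerate(pivots):
            v[c] = M[i][f]
        basis.append(v)
    return basis

random.seed(1)
u = [random.randint(0, 1) for _ in range(200)]
v1, v2 = branch_output(G1, u), branch_output(G2, u)
stream = [b for pair in zip(v1, v2) for b in pair]   # interleaved output

# Sliding windows of 4 time steps (8 interleaved bits) over the stream.
W = 8
windows = [stream[i:i + W] for i in range(0, len(stream) - W + 1, 2)]

# Solving the linear system: any vector in the null space of the window
# matrix is a parity-check relation satisfied by the code.
basis = null_space_gf2(windows)
assert len(basis) >= 1

# Cross-check against the relation implied by the generators,
# g2(D) v1(D) + g1(D) v2(D) = 0, written as an 8-bit window vector.
h_true = [0] * W
for j in range(3):
    h_true[2 * (3 - j)] = G2[j]
    h_true[2 * (3 - j) + 1] = G1[j]
assert all(sum(a * b for a, b in zip(h_true, w)) % 2 == 0 for w in windows)
```

With noisy data the exact null space is empty, which is why the paper solves the system with an error tolerance rather than exactly.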
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
M. Kasemann
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
I. Fisk
2010-01-01
Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
Multiple LDPC decoding for distributed source coding and video coding
DEFF Research Database (Denmark)
Forchhammer, Søren; Luong, Huynh Van; Huang, Xin
2011-01-01
Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...
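The syndrome-based idea behind DSC can be illustrated with a toy Hamming(7,4) code (an illustrative assumption, not the paper's LDPCA scheme): the encoder transmits only the syndrome of its source word, and the decoder combines it with correlated side information.

```python
# Toy Slepian-Wolf / DSC sketch with a Hamming(7,4) parity-check matrix.
# Illustrative only -- the paper uses LDPCA codes, not Hamming codes.

# H columns: column i (1-indexed) is the binary representation of i, MSB first.
H = [[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)]

def syndrome(word):
    """Compute H * word over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def dsc_decode(side_info, synd):
    """Recover x from correlated side info y (<= 1 bit difference)
    and the transmitted syndrome s = H x."""
    s_err = [(a + b) % 2 for a, b in zip(syndrome(side_info), synd)]
    pos = s_err[0] * 4 + s_err[1] * 2 + s_err[2]   # error position, 0 = none
    x = list(side_info)
    if pos:
        x[pos - 1] ^= 1
    return x

x = [1, 0, 1, 1, 0, 0, 1]          # source word at the encoder
y = [1, 0, 1, 0, 0, 0, 1]          # decoder side info, differs in bit 4
s = syndrome(x)                    # encoder transmits 3 bits instead of 7
assert dsc_decode(y, s) == x       # decoder recovers x exactly
```

Here the encoder sends 3 syndrome bits instead of 7 source bits; the decoder exploits the statistical correlation with its side information, which is exactly the computational asymmetry DSC provides.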
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
2014-12-01
[Report front matter: list of figures (Capacity of BSC; Capacity of AWGN channel; QPSK Gaussian channels) and fragments of the introduction.] Forward error correction (FEC)... Polar codes were introduced by E. Arikan in [1]. This paper... Under authority of C. A. Wilgenbusch, Head, ISR Division. EXECUTIVE SUMMARY: This report describes the results of the project "More reliable wireless...
Bohrson, Wendy A.; Spera, Frank J.
2007-11-01
Volcanic and plutonic rocks provide abundant evidence for complex processes that occur in magma storage and transport systems. The fingerprint of these processes, which include fractional crystallization, assimilation, and magma recharge, is captured in petrologic and geochemical characteristics of suites of cogenetic rocks. Quantitatively evaluating the relative contributions of each process requires integration of mass, species, and energy constraints, applied in a self-consistent way. The energy-constrained model Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RAχFC) tracks the trace element and isotopic evolution of a magmatic system (melt + solids) undergoing simultaneous fractional crystallization, recharge, and assimilation. Mass, thermal, and compositional (trace element and isotope) output is provided for melt in the magma body, cumulates, enclaves, and anatectic (i.e., country rock) melt. Theory of the EC computational method has been presented by Spera and Bohrson (2001, 2002, 2004), and applications to natural systems have been elucidated by Bohrson and Spera (2001, 2003) and Fowler et al. (2004). The purpose of this contribution is to make the final version of the EC-RAχFC computer code available and to provide instructions for code implementation, description of input and output parameters, and estimates of typical values for some input parameters. A brief discussion highlights measures by which the user may evaluate the quality of the output and also provides some guidelines for implementing nonlinear productivity functions. The EC-RAχFC computer code is written in Visual Basic, the programming language of Excel. The code therefore launches in Excel and is compatible with both PC and MAC platforms. The code is available on the authors' Web sites (http://magma.geol.ucsb.edu/ and http://www.geology.cwu.edu/ecrafc) as well as in the auxiliary material.
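In its simplest closed-system limit, the fractional-crystallization component of such trace-element modeling reduces to the classical Rayleigh equation, C_melt = C_0 F^(D-1). A minimal sketch follows (illustrative only; the EC-RAχFC code additionally couples energy balance, recharge, and assimilation):

```python
# Rayleigh fractional crystallization: melt concentration vs. melt fraction F.
# Minimal closed-system sketch; EC-RAxFC additionally couples energy balance,
# recharge, and assimilation, which are omitted here.

def rayleigh_melt(c0, D, F):
    """Trace-element concentration in residual melt.
    c0: initial concentration (ppm), D: bulk partition coefficient,
    F: remaining melt fraction (0 < F <= 1)."""
    return c0 * F ** (D - 1)

# Incompatible elements (D << 1) are enriched as crystallization proceeds;
# compatible elements (D > 1) are depleted.
for F in (1.0, 0.5, 0.1):
    inc = rayleigh_melt(100.0, 0.01, F)   # incompatible, D = 0.01 (assumed)
    com = rayleigh_melt(100.0, 5.0, F)    # compatible,   D = 5    (assumed)
    print(f"F={F:.1f}  incompatible={inc:8.1f} ppm  compatible={com:8.2f} ppm")
```

The partition coefficients above are placeholder values chosen only to show the two qualitative behaviors.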
2015-01-01
Games for teaching coding have been an educational holy grail since at least the early 1980s. Yet for decades, with games more popular than ever and with the need to teach kids coding having been well recognized, no blockbuster coding games have arisen (see Chapter 2). Over the years, the research community has made various games for teaching computer science: a survey shows that most do not teach coding, and of the ones that do teach coding, most are research prototypes (not produc...
TOCAR: a code to interface FOURACES - CARNAVAL
Energy Technology Data Exchange (ETDEWEB)
Panini, G.C.; Vaccari, M.
1981-08-01
The TOCAR code, written in FORTRAN-IV for IBM-370 computers, is an interface between the output of the FOURACES code and the CARNAVAL binary format for the multigroup neutron cross-sections, scattering matrices and related quantities. Besides a description of the code and instructions for its use, the report contains the code listing.
I. Fisk
2013-01-01
Computing operations have been running at a lower level as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and flexibility of resource use. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing were more in demand, with a peak of over 750 million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months. Tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, and we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the role of a Computing Run Coordinator and of regular computing shifts, monitoring the services and infrastructure as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
M. Kasemann
Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
P. MacBride
The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
DEFF Research Database (Denmark)
2015-01-01
Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
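The low-field-size operation performed at intermediate nodes can be sketched with GF(2) random linear network coding, where combining and recoding reduce to XOR. This is only the inner, low-field half of a Fulcrum system; the high-field outer expansion at the source is omitted:

```python
# GF(2) linear network coding sketch: the "low field size" operation an
# intermediate node performs in a Fulcrum-style system. The high-field
# (e.g. GF(256)) outer expansion at the source is omitted for brevity.

def gf2_solve(rows):
    """Gaussian elimination over GF(2).
    rows: list of (coeff_vector, payload_int); returns decoded payloads."""
    n = len(rows[0][0])
    rows = [(list(vec), p) for vec, p in rows]
    for col in range(n):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                vec = [a ^ b for a, b in zip(rows[i][0], rows[col][0])]
                rows[i] = (vec, rows[i][1] ^ rows[col][1])
    return [p for _, p in rows[:n]]

sources = [0b1010, 0b0111, 0b1100]            # three source packets (as ints)
coeffs  = [[1, 1, 0], [0, 1, 1], [1, 1, 1]]   # coding vectors seen in-network
coded = []
for vec in coeffs:
    payload = 0
    for c, s in zip(vec, sources):
        if c:
            payload ^= s                      # GF(2) combine = XOR only
    coded.append((vec, payload))

# An intermediate node can recode without decoding: XOR two coded packets.
recoded = ([a ^ b for a, b in zip(coeffs[0], coeffs[1])],
           coded[0][1] ^ coded[1][1])
assert gf2_solve([recoded, coded[1], coded[2]]) == sources
```

The design point this illustrates is that intermediate nodes never need field arithmetic beyond XOR, which is where the complexity reduction in objective (ii) comes from.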
Institute of Scientific and Technical Information of China (English)
王延青; 杨发; 刘培杰; COLLINS Michael
2007-01-01
The software industry is asking universities and colleges to cultivate more software engineers who can write quality programs. A peer code review process is an ideal approach to maximize the learning outcome of students in programming. In this paper, the process from our previous publication was improved. The problems found were analyzed and will serve as the basis for future research on quality assurance. Finally, a set of solutions based on computer science was proposed to further improve the whole review process.
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. A GlideInWMS installation and its components are now deployed at CERN, adding to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...
2014-05-01
Parallelization and vectorization on the GPU is achieved by modifying the code syntax for compatibility with CUDA. We assess the speedup due to various... (ExaScience Lab in Leuven, Belgium) and compare it with the performance of a GPU running CUDA. We implement a test case of a 1D two-stream instability... programming language syntax only in the GPU/CUDA version of the code, and these changes do not have any significant impact on the final performance.
Institute of Scientific and Technical Information of China (English)
李海; 黄晨; 杜爱兵; 徐宝玉
2014-01-01
Thermal conductivity is one of the most important parameters in computer codes for fuel rod performance prediction. This paper introduces several fuel thermal conductivity models used in foreign performance codes, classified by MOX and UO2 fuel. Thermal conductivities were calculated using these models, and the results were compared and analyzed. Finally, a thermal conductivity model is recommended for the native computer code for fast reactor fuel rod performance prediction.
PARAVT: Parallel Voronoi tessellation code
González, R. E.
2016-10-01
In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical purposes, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
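The per-particle Voronoi density the abstract mentions can be approximated in a few lines by assigning grid samples to their nearest seed; a serial, pure-Python sketch (the real code does this exactly with Qhull under MPI):

```python
# Brute-force estimate of 2D Voronoi cell areas (hence densities) for a few
# seed points in the unit square, by assigning grid samples to the nearest
# seed. PARAVT computes this exactly with Qhull under MPI; this is a sketch.

def voronoi_volumes(seeds, grid_n=200):
    counts = [0] * len(seeds)
    cell = 1.0 / grid_n
    for i in range(grid_n):
        for j in range(grid_n):
            x, y = (i + 0.5) * cell, (j + 0.5) * cell
            k = min(range(len(seeds)),
                    key=lambda s: (x - seeds[s][0]) ** 2
                                  + (y - seeds[s][1]) ** 2)
            counts[k] += 1
    total = grid_n * grid_n
    return [c / total for c in counts]   # fractional areas, sum to 1

seeds = [(0.25, 0.25), (0.75, 0.25), (0.5, 0.75)]
vols = voronoi_volumes(seeds)
densities = [1.0 / v for v in vols]      # unit-mass particles: rho = m / V
assert abs(sum(vols) - 1.0) < 1e-9
```

Denser particle clusters produce smaller Voronoi cells and hence higher density estimates, which is why VT densities are adaptive without a smoothing scale.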
ON CLASSICAL BCH CODES AND QUANTUM BCH CODES
Institute of Scientific and Technical Information of China (English)
Xu Yajie; Ma Zhi; Zhang Chunyuan
2009-01-01
A standard way of constructing quantum error-correcting codes is via classical codes with the self-orthogonality property, and whether a classical Bose-Chaudhuri-Hocquenghem (BCH) code is self-orthogonal can be determined from its designed distance. In this paper, we give a sufficient and necessary condition, via algorithms, for arbitrary classical BCH codes to have the self-orthogonality property. We also give a better upper bound on the designed distance of a classical narrow-sense BCH code which contains its Euclidean dual. In addition, we give an algorithm to compute the dimension of these codes. The complexity of all algorithms is analyzed. The results can then be applied to construct a series of quantum BCH codes via the famous CSS construction.
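The dual-containment condition underlying the CSS construction can be checked computationally through cyclotomic cosets: a cyclic code with defining set Z contains its Euclidean dual iff Z and -Z (mod n) are disjoint. A sketch of that check for narrow-sense binary BCH codes:

```python
# Check whether a narrow-sense BCH code contains its Euclidean dual, via
# cyclotomic cosets: for a cyclic code with defining set Z, the dual is
# contained in the code iff Z and -Z (mod n) are disjoint -- the standard
# CSS precondition.

def cyclotomic_coset(b, q, n):
    coset, x = set(), b % n
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    return coset

def bch_defining_set(q, n, delta):
    Z = set()
    for b in range(1, delta):      # narrow-sense: consecutive roots 1..delta-1
        Z |= cyclotomic_coset(b, q, n)
    return Z

def contains_dual(q, n, delta):
    Z = bch_defining_set(q, n, delta)
    return Z.isdisjoint({(-z) % n for z in Z})

# Binary BCH codes of length 15:
assert contains_dual(2, 15, 3)       # [15,11,3] Hamming code: dual-containing
assert not contains_dual(2, 15, 5)   # [15,7,5] BCH code: not dual-containing
```

The paper's contribution is to replace this exhaustive coset check with a condition on the designed distance alone; the sketch above is the brute-force baseline such a condition short-circuits.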
M. Kasemann
CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental in site commissioning, increasing the number of sites available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...
'95 computer system operation project
Energy Technology Data Exchange (ETDEWEB)
Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1995-12-01
This report describes overall project work related to the operation of mainframe computers, the management of nuclear computer codes, and the nuclear computer code conversion project. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)
Institute of Scientific and Technical Information of China (English)
方艳; 明珠; 陈佩
2016-01-01
Human-Computer Interaction (HCI) is the study of people, computers, and the mutual influences between them. Internet users communicate with one another through symbolic codes, via technological platforms and means; this communication both imprints the psychology of the participants and expresses the ethics of the medium. This paper examines, from the psychological perspective of human-computer interaction, the communication psychology behind Internet users' codes: text, images, and combinations of the two.
Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences
2008-07-01
[Report documentation page: dates covered 6 Jul 08 - 11 Jul 08; title: Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences; subject terms: DNA codes, Fibonacci ensembles, DNA computing, code optimization.] A random coding bound on the rate of DNA codes is proved. To obtain the bound, we use ensembles of DNA sequences which are generalizations of the Fibonacci sequences.
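The report's exact ensembles are not reproduced here, but the flavor of a "Fibonacci ensemble" can be illustrated with an assumed analogue: DNA sequences avoiding a fixed two-letter pattern, whose counts obey a generalized Fibonacci recurrence.

```python
# Illustrative analogue (an assumption, not the report's construction):
# count DNA sequences over {A,C,G,T} containing no "AA" substring. The
# counts t(n) satisfy a generalized Fibonacci recurrence
#     t(n) = 3*t(n-1) + 3*t(n-2),  with t(0) = 1, t(1) = 4.
from itertools import product

def count_no_AA(n):
    if n == 0:
        return 1
    t_prev, t = 1, 4                 # t(0), t(1)
    for _ in range(n - 1):
        t_prev, t = t, 3 * t + 3 * t_prev
    return t

def brute_force(n):
    """Exhaustively count the same sequences, for verification."""
    return sum("AA" not in "".join(s) for s in product("ACGT", repeat=n))

for n in range(1, 8):
    assert count_no_AA(n) == brute_force(n)
```

Because the counts grow by a two-term linear recurrence, such ensembles are natural generalizations of the Fibonacci numbers, which is the structural property random coding bounds of this kind exploit.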
Energy Technology Data Exchange (ETDEWEB)
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. End-to-end performance of a digital link therefore becomes essentially independent of the length and operating frequency bands of the link, so from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
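The simplest waveform-coding technique in this family is companded quantization; a sketch of μ-law companding with μ = 255, the value used in North American 8-bit PCM telephony (quantizer details here are simplified relative to the G.711 standard):

```python
# mu-law companding: compress amplitude before uniform quantization so that
# quiet speech gets finer resolution. Classic PCM telephony uses mu = 255;
# the 8-bit quantizer below is a simplification of the G.711 segment format.
import math

MU = 255.0

def mulaw_encode(x, bits=8):
    """x in [-1, 1] -> integer code with 2**bits levels."""
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return round((y + 1.0) / 2.0 * (2 ** bits - 1))

def mulaw_decode(code, bits=8):
    y = code / (2 ** bits - 1) * 2.0 - 1.0
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Small amplitudes are reconstructed with much finer resolution than a
# uniform 8-bit quantizer (step 2/255) would give.
x = 0.01
err = abs(mulaw_decode(mulaw_encode(x)) - x)
assert err < (2.0 / 255) / 4
```

The logarithmic compression trades resolution at high amplitudes, where the ear is less sensitive to absolute error, for resolution near zero, where most speech energy lives.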
Energy Technology Data Exchange (ETDEWEB)
Behler, Matthias; Hannstein, Volker; Kilger, Robert; Moser, Franz-Eberhard; Pfeiffer, Arndt; Stuke, Maik
2014-06-15
In order to account for the reactivity-reducing effect of burn-up in the criticality safety analysis for systems with irradiated nuclear fuel (''burnup credit''), numerical methods to determine the enrichment- and burnup-dependent nuclide inventory (''burnup code'') and its resulting multiplication factor k{sub eff} (''criticality code'') are applied. To allow for reliable conclusions, for both calculation systems the systematic deviations of the calculation results from the respective true values, i.e. the bias and its uncertainty, are quantified by calculation and analysis of a sufficient number of suitable experiments. This quantification is specific to the application case under scope and is also called validation. GRS has developed a methodology to validate a calculation system for the application of burnup credit in the criticality safety analysis of irradiated fuel assemblies from pressurized water reactors. This methodology was demonstrated by applying the GRS in-house KENOREST burnup code and the criticality calculation sequence CSAS5 from the SCALE code package. It comprises a bounding approach and, alternatively, a stochastic approach, which have been demonstrated by use of a generic spent fuel pool rack and a generic dry storage cask, respectively. Based on publicly available post-irradiation examination and criticality experiments, currently the isotopes of the uranium and plutonium elements can be accounted for.
Energy Technology Data Exchange (ETDEWEB)
Chizhikova, Z.N.; Kalashnikov, A.G.; Kapranova, E.N.; Korobitsyn, V.E.; Manturov, G.N.; Tsiboulia, A.A.
1998-12-01
One of the important problems for ensuring VVER-type reactor safety when the reactor is partially loaded with MOX fuel is the choice of appropriate physical zoning to achieve the maximum flattening of the pin-by-pin power distribution. When uranium fuel is replaced by MOX fuel, provided that the reactivity due to the fuel assemblies is kept constant, the fuel enrichment slightly decreases. However, the spectrum-averaged fission microscopic cross-section for {sup 239}Pu is approximately twice that for {sup 235}U. Therefore power peaks occur in the peripheral fuel assemblies containing MOX fuel, aggravated by the inter-assembly water. Physical zoning has to be applied to flatten the power peaks in fuel assemblies containing MOX fuel. Moreover, physical zoning cannot be confined to one row of fuel elements, as is the case with a uniform lattice of uranium fuel assemblies. Both the water gap and the jump in neutron absorption macroscopic cross-sections which occurs at the interface of fuel assemblies with different fuels make the problem of calculating the space-energy neutron flux distribution more complicated, since they increase non-diffusion effects. To solve this problem it is necessary to update the current codes, to develop new codes, and to verify all the codes, including the nuclear-physical constants libraries employed. In so doing it is important to develop and validate codes of different levels--from design codes to benchmark ones. This paper presents the results of the burnup calculation for a multiassembly structure consisting of MOX fuel assemblies surrounded by uranium dioxide fuel assemblies. The structure concerned can be assumed to model a fuel assembly lattice symmetry element of the VVER-1000 type reactor in which 1/4 of all fuel assemblies contain MOX fuel.
Frutos-Alfaro, Francisco
2015-01-01
A program to generate Fortran and C codes for the full magnetohydrodynamic (MHD) equations is presented. The program uses the free computer algebra system REDUCE and its package EXCALC, an exterior calculus program. The advantage of this approach is that it can be modified to include other complex metrics or spacetimes. The output of the program is post-processed by a Linux script which creates a new REDUCE program to manipulate the MHD equations, producing code that can be used as a seed for a numerical MHD code. As an example, we present part of the output of our programs for Cartesian coordinates and show how the discretization is done.
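The "symbolic expression to numerical source code" step described here can be sketched in miniature (in Python rather than REDUCE/EXCALC, and with a made-up expression-tree format):

```python
# Minimal analogue (in Python, not REDUCE/EXCALC) of generating numerical
# source code from a symbolic expression tree. Expressions are nested tuples:
# ("+", a, b), ("-", a, b), ("*", a, b), a variable name, or a number.

def emit(expr):
    """Translate an expression tree into a C-like infix string."""
    if isinstance(expr, tuple):
        op, a, b = expr
        return f"({emit(a)} {op} {emit(b)})"
    return str(expr)

def make_function(name, args, expr):
    """Emit a complete C function for the expression."""
    params = ", ".join(f"double {a}" for a in args)
    return f"double {name}({params}) {{ return {emit(expr)}; }}"

# Hypothetical example: an update of the form rho_new = rho - dt * div
expr = ("-", "rho", ("*", "dt", "div"))
print(make_function("update_rho", ["rho", "dt", "div"], expr))

# The emitted infix form is also valid Python, so it can be sanity-checked:
assert eval(emit(expr), {"rho": 1.0, "dt": 0.1, "div": 2.0}) == 0.8
```

The real pipeline differs mainly in scale: REDUCE holds the full tensorial MHD equations and EXCALC handles the exterior-calculus manipulations before code is emitted.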
Vaucouleur, Sebastien
2011-02-01
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but rather domain experts. Hence, they require a simple language to express custom rules.
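The query-by-example idea can be sketched with Python's `ast` module: extract a call pattern from a small example snippet and search for matching calls in a code base. This is a drastic simplification of the paper's approach, which targets ERP customisation code:

```python
# Sketch of "code query by example": take a tiny example snippet, extract
# its call pattern, and find functions containing a matching call. A
# simplified stand-in for the paper's query mechanism, not its actual design.
import ast

def calls_matching(source, example):
    """Return names of functions whose bodies contain a call with the same
    callee name as the example snippet."""
    callee = next(n.func.id for n in ast.walk(ast.parse(example))
                  if isinstance(n, ast.Call))
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for sub in ast.walk(node):
                if (isinstance(sub, ast.Call)
                        and isinstance(sub.func, ast.Name)
                        and sub.func.id == callee):
                    hits.append(node.name)
                    break
    return hits

code = """
def good(x):
    return validate(x)

def risky(x):
    return eval(x)
"""
assert calls_matching(code, "eval(user_input)") == ["risky"]
```

The appeal for domain experts is visible even in this toy: the "rule" is an ordinary code fragment, not a formula in a dedicated query language.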
Energy Technology Data Exchange (ETDEWEB)
Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik
2004-10-01
If software is designed so that it can issue functions that move it from one computing platform to another, the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called ...
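A toy example makes the evaluation problem the report raises concrete: even the weakest obfuscation, XOR-masking a constant, "works" against casual inspection, yet there is no principled way to quantify how much protection it adds.

```python
# Toy data obfuscation by XOR masking: hides a string constant from casual
# inspection of the shipped artifact. This is NOT cryptographic protection --
# the key ships with the code -- which is exactly the evaluation problem the
# report discusses. The constant and key below are made up for illustration.

def mask(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"license-check-v2"               # hypothetical sensitive constant
key = b"\x5a\xc3\x99"                      # hypothetical fixed mask
obfuscated = mask(secret, key)             # what ships in the binary
assert secret not in obfuscated            # plain text no longer visible
assert mask(obfuscated, key) == secret     # recovered at run time
```

Any attacker who finds the unmasking routine recovers the constant immediately, which is why the field's central open question is how to measure the protection such schemes actually provide.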