pyro: A teaching code for computational astrophysical hydrodynamics
Zingale, Michael
2013-01-01
We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
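Since the abstract names advection as the entry-point method, a minimal sketch may help fix ideas: first-order upwind finite-volume advection on a periodic grid, written in Python for illustration. The function and variable names are ours, not pyro's, and pyro's own solvers are higher-order.

```python
import numpy as np

def upwind_advect(a, u, dx, dt, nsteps):
    """First-order upwind finite-volume advection of the profile a
    with constant speed u > 0 on a periodic grid: the simplest scheme
    a student might try before moving to higher-order reconstruction."""
    c = u * dt / dx                           # CFL number; stable for 0 <= c <= 1
    for _ in range(nsteps):
        a = a - c * (a - np.roll(a, 1))       # difference of upwind-side fluxes
    return a

# advect a top-hat once around the unit periodic domain
nx = 64
x = (np.arange(nx) + 0.5) / nx
a0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)
dx = 1.0 / nx
dt = 0.5 * dx                                 # CFL = 0.5
a = upwind_advect(a0.copy(), 1.0, dx, dt, int(round(1.0 / dt)))
```

Mass (the sum of a times dx) is conserved to round-off and the update is a convex combination, so no new extrema appear; replacing the upwind slope with a limited second-order reconstruction is the kind of extension the paper has in mind.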
MULTI2D - a computer code for two-dimensional radiation hydrodynamics
Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.
2009-06-01
Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses from laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions, with the 4π solid angle discretized in direction. Matter moves on an unstructured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles, depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.
Program summary
Program title: MULTI2D
Catalogue identifier: AECV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 151 098
No. of bytes in distributed program, including test data, etc.: 889 622
Distribution format: tar.gz
Programming language: C
Computer: PC (32-bit architecture)
Operating system: Linux/Unix
RAM: 2 Mbytes
Word size: 32 bits
Classification: 19.7
External routines: X-window standard library (libX11.so) and corresponding header files (X11/*.h) are
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
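The density estimate at the heart of an SPH code like the one described is a kernel-weighted sum over neighboring particles. Below is a 1-D sketch with the standard M4 cubic-spline kernel, using a brute-force O(N^2) pair sum for clarity instead of the octree traversal the code actually uses; all names here are illustrative.

```python
import numpy as np

def w_cubic(r, h):
    """Standard 1-D M4 cubic-spline SPH kernel, normalized so that
    its integral over x equals 1; support is |r| < 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    """Summation density estimate rho_i = sum_j m_j W(x_i - x_j, h)."""
    r = x[:, None] - x[None, :]
    return (m[None, :] * w_cubic(r, h)).sum(axis=1)

# a uniform particle lattice should recover rho0 away from the edges
n, rho0 = 100, 1.0
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
m = np.full(n, rho0 * dx)
rho = sph_density(x, m, h=1.2 * dx)
```

The choice h = 1.2 dx is a typical smoothing length for this kernel; the interior estimate then matches rho0 to a fraction of a percent.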
An implicit Smooth Particle Hydrodynamic code
Energy Technology Data Exchange (ETDEWEB)
Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)
2000-05-01
An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save memory and reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. The results of a number of test cases are then discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas it has been demonstrated that the implicit code can run a problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
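The Newton-Raphson machinery with a numerically differenced Jacobian that the abstract describes can be sketched on a toy nonlinear system. Note the differences from the real code: this forms the Jacobian densely and solves it directly, whereas a Newton-Krylov code would use sparse storage and an iterative linear solver such as GMRES; the function names are ours.

```python
import numpy as np

def newton_fd(F, x, tol=1e-10, eps=1e-7, maxit=50):
    """Newton-Raphson iteration for F(x) = 0 with a finite-difference
    Jacobian: a toy stand-in for the Newton-Krylov strategy, which would
    never form J explicitly."""
    for _ in range(maxit):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):                 # column-by-column numerical Jacobian
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (F(xp) - f) / eps
        x = x + np.linalg.solve(J, -f)     # Newton correction step
    return x

# small nonlinear system with root (1, 1)
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
root = newton_fd(F, np.array([2.0, 0.5]))
```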
VH-1: Multidimensional ideal compressible hydrodynamics code
Hawley, John; Blondin, John; Lindahl, Greg; Lufkin, Eric
2012-04-01
VH-1 is a multidimensional ideal compressible hydrodynamics code written in FORTRAN for use on any computing platform, from desktop workstations to supercomputers. It uses a Lagrangian remap version of the Piecewise Parabolic Method developed by Paul Woodward and Phil Colella in their 1984 paper. VH-1 comes in a variety of versions, from a simple one-dimensional serial variant to a multi-dimensional version scalable to thousands of processors.
Radiation hydrodynamics integrated in the PLUTO code
Kolb, Stefan M.; Stute, Matthias; Kley, Wilhelm; Mignone, Andrea
2013-11-01
Aims: The transport of energy through radiation is very important in many astrophysical phenomena. In dynamical problems the time-dependent equations of radiation hydrodynamics have to be solved. We present a newly developed radiation-hydrodynamics module specifically designed for the versatile magnetohydrodynamic (MHD) code PLUTO. Methods: The solver is based on the flux-limited diffusion approximation in the two-temperature approach. All equations are solved in the co-moving frame in the frequency-independent (gray) approximation. The hydrodynamics is solved by the different Godunov schemes implemented in PLUTO, and for the radiation transport we use a fully implicit scheme. The resulting system of linear equations is solved either using the successive over-relaxation (SOR) method (for testing purposes) or using matrix solvers available in the PETSc library. We state the methodology in detail and describe several test cases to verify the correctness of our implementation. The solver works in standard coordinate systems, such as Cartesian, cylindrical, and spherical, and also for non-equidistant grids. Results: We present a new radiation-hydrodynamics solver coupled to the MHD code PLUTO: a modern, versatile, and efficient module for treating complex radiation-hydrodynamical problems in astrophysics. As test cases, both purely radiative situations and full radiation-hydrodynamical setups (including radiative shocks and convection in accretion disks) were successfully studied. The new module scales very well on parallel computers using MPI. For problems in star or planet formation, we added the possibility of irradiation by a central source.
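The SOR iteration mentioned as the testing-purposes linear solver is easy to illustrate on a model elliptic problem. This is a generic sketch for Lap(u) = rhs with homogeneous Dirichlet boundaries, not the module's actual radiation matrix; production runs would use the PETSc solvers instead.

```python
import numpy as np

def sor_poisson(rhs, dx, omega=1.7, tol=1e-8, maxit=20000):
    """Pointwise (Gauss-Seidel-style) successive over-relaxation for the
    2-D Poisson problem Lap(u) = rhs on a uniform grid with u = 0 on the
    boundary; omega between 1 and 2 accelerates plain Gauss-Seidel."""
    u = np.zeros_like(rhs)
    for _ in range(maxit):
        diff = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new = (1 - omega) * u[i, j] + omega * 0.25 * (
                    u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1]
                    - dx * dx * rhs[i, j])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:                       # stop when updates stagnate
            break
    return u

# manufactured solution u = sin(pi x) sin(pi y)
n = 17
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
rhs = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
u = sor_poisson(rhs, dx=x[1] - x[0])
```

The recovered field matches the manufactured solution to the level of the 5-point discretization error.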
Building a Hydrodynamics Code with Kinetic Theory
Sagert, Irina; Colbry, Dirk; Pickett, Rodney; Strother, Terrance
2013-01-01
We report on the development of a test-particle based kinetic Monte Carlo code for large systems and its application to simulate matter in the continuum regime. Our code combines advantages of the Direct Simulation Monte Carlo and the Point-of-Closest-Approach methods to solve the collision integral of the Boltzmann equation. With that, we achieve high spatial accuracy in simulations while maintaining computational feasibility when applying a large number of test particles. The hybrid setup of our approach allows us to study systems which move in and out of the hydrodynamic regime, with low and high particle densities. To demonstrate our code's ability to reproduce hydrodynamic behavior we perform shock wave simulations and focus here on the Sedov blast wave test. The blast wave problem describes the evolution of a spherical expanding shock front and is an important verification problem for codes which are applied in astrophysical simulations, especially for approaches which aim to study core-collapse supern...
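The Sedov blast wave test is convenient precisely because the shock radius follows a closed-form similarity scaling against which simulations can be compared. A small helper makes the scaling explicit; the constant xi0 depends on the adiabatic index (roughly 1.15 for gamma = 1.4) and its exact value comes from the similarity solution, so treat the default below as approximate.

```python
def sedov_radius(t, E, rho0, xi0=1.15):
    """Self-similar Sedov-Taylor shock radius R(t) = xi0 (E t^2 / rho0)^(1/5)
    for a point explosion of energy E in a uniform medium of density rho0."""
    return xi0 * (E * t**2 / rho0) ** 0.2

def sedov_shock_speed(t, E, rho0, xi0=1.15):
    """dR/dt = (2/5) R / t: the shock decelerates as t^(-3/5)."""
    return 0.4 * sedov_radius(t, E, rho0, xi0) / t
```

Doubling the time multiplies the radius by 2^(2/5), independent of xi0, so even an uncalibrated run can be checked for the correct self-similar exponent.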
Energy Technology Data Exchange (ETDEWEB)
Hayes, J C; Norman, M
1999-10-28
This report details an investigation into the efficacy of two approaches to solving the radiation diffusion equation within a radiation hydrodynamic simulation. Because leading-edge scientific computing platforms have evolved from large single-node vector processors to parallel aggregates containing tens to thousands of individual CPUs, the ability of an algorithm to maintain high compute efficiency when distributed over a large array of nodes is critically important. The viability of an algorithm thus hinges upon the tripartite question of numerical accuracy, total time to solution, and parallel efficiency.
Improvements to SOIL: An Eulerian hydrodynamics code
Energy Technology Data Exchange (ETDEWEB)
Davis, C.G.
1988-04-01
Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength-of-materials modeling, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. Several conclusions from the present study suggest that some improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs.
A comparison of cosmological hydrodynamic codes
Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.
1994-01-01
We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega_b = 1, and sigma_8 = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L^3, where L = 64 h^-1 Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N^3 = 32^3, 64^3, 128^3, and 256^3 cells, the SPH codes at N^3 = 32^3 and 64^3 particles. Results were then rebinned to a 16^3 grid, with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as <T> and <rho^2>^(1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho^2) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic...
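The common-grid comparison step, rebinning every run to 16^3, is a conservative block average over fine cells. A NumPy sketch (the field here is a random stand-in for a simulated density cube):

```python
import numpy as np

def rebin(field, factor):
    """Conservatively rebin a cubic 3-D field onto a coarser grid by
    averaging factor^3 fine cells into each coarse cell: the kind of
    common-grid reduction used to compare codes run at different N."""
    n = field.shape[0]
    assert n % factor == 0
    m = n // factor
    return field.reshape(m, factor, m, factor, m, factor).mean(axis=(1, 3, 5))

rho = np.random.default_rng(1).random((64, 64, 64))
coarse = rebin(rho, 4)            # 64^3 -> 16^3
```

Because each coarse cell is the mean of its fine cells, volume-weighted averages are preserved exactly, which is what makes the rebinned comparison fair across resolutions.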
A new hydrodynamics code for Type Ia Supernovae
Leung, S.-C.; Lin, L.-M.
2015-01-01
A two-dimensional hydrodynamics code for Type Ia supernovae (SNIa) simulations is presented. The code includes a fifth-order WENO shock-capturing scheme, a detailed nuclear reaction network, a flame-capturing scheme and sub-grid turbulence. For post-processing we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric...
Axially symmetric pseudo-Newtonian hydrodynamics code
Kim, Jinho; Choptuik, Matthew William; Lee, Hyung Mok
2012-01-01
We develop a numerical hydrodynamics code using a pseudo-Newtonian formulation that uses the weak field approximation for the geometry, and a generalized source term for the Poisson equation that takes into account relativistic effects. The code was designed to treat moderately relativistic systems such as rapidly rotating neutron stars. The hydrodynamic equations are solved using a finite volume method with High Resolution Shock Capturing (HRSC) techniques. We implement several different slope limiters for second order reconstruction schemes and also investigate higher order reconstructions. We use the method of lines (MoL) to convert the mixed spatial-time partial differential equations into ordinary differential equations (ODEs) that depend only on time. These ODEs are solved using 2nd and 3rd order Runge-Kutta methods. The Poisson equation for the gravitational potential is solved with a multigrid method. In order to confirm the validity of our code, we carry out four different tests including one and two...
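The method-of-lines time integration described (2nd- and 3rd-order Runge-Kutta) is very often the Shu-Osher strong-stability-preserving (TVD) RK3 scheme; the sketch below assumes that variant, with L(u) standing for the spatially discretized right-hand side.

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """One step of the third-order strong-stability-preserving (TVD)
    Runge-Kutta scheme of Shu & Osher, the standard MoL companion to
    HRSC spatial discretizations; L(u) is the discrete RHS."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# sanity check on du/dt = -u from u(0) = 1 to t = 1; exact answer exp(-1)
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(u, dt, lambda v: -v)
```

Each stage is a convex combination of forward-Euler updates, which is what gives the scheme its stability-preserving property when paired with a TVD spatial operator.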
The RAGE radiation-hydrodynamic code
Gittings, Michael; Clover, Michael; Betlach, Thomas; Byrne, Nelson; Coker, Robert; Dendy, Edward; Hueckstaedt, Robert; New, Kim; Oakes, W Rob; Ranta, Dale; Stefan, Ryan
2008-01-01
We describe RAGE, the ``Radiation Adaptive Grid Eulerian'' radiation-hydrodynamics code, including its data structures, its parallelization strategy and performance, its hydrodynamic algorithm(s), its (gray) radiation diffusion algorithm, and some of our considerable verification and validation efforts. The hydrodynamics is a basic Godunov solver, to which we have made significant improvements to increase the advection algorithm's robustness and to converge stiffnesses in the equation of state. Similarly, the radiation transport is a basic gray diffusion, but our treatment of the radiation-material coupling, wherein we converge nonlinearities in a novel manner to allow larger timesteps and more robust behavior, can be applied to any multi-group transport algorithm.
The HULL Hydrodynamics Computer Code
1976-09-01
Fry, Mark A., Capt, USAF; Durrett, Richard E., Major, USAF; Ganong, Gary P., Major, USAF; Matuska, Daniel A., Major, USAF; Stucker, Mitchell D., Capt, USAF
CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation
Schneider, Evan E.; Robertson, Brant E.
2015-04-01
We present Computational Hydrodynamics On ParaLLel Architectures (Cholla ), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳2563) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
Energy Technology Data Exchange (ETDEWEB)
Vitruk, S.G.; Korsun, A.S. [Moscow Engineering Physics Institute (Russian Federation)]; Ushakov, P.A. [Institute of Physics and Power Engineering, Obninsk (Russian Federation)]; and others
1995-09-01
The multilevel mathematical model of neutron thermal-hydrodynamic processes in a passive-safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal-hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new effective discretization technique for the energy, momentum and mass conservation equations is applied in hexagonal-z geometry. The adequacy and applicability of the model are demonstrated. The results of the calculations show that the model and the computer code can be used in the conceptual design of advanced reactors.
A new three-dimensional general-relativistic hydrodynamics code
Baiotti, L.; Hawke, I.; Montero, P. J.; Rezzolla, L.
We present a new three-dimensional general relativistic hydrodynamics code, the Whisky code. This code incorporates the expertise developed over the past years in the numerical solution of Einstein equations and of the hydrodynamics equations in a curved spacetime, and is the result of a collaboration of several European Institutes. We here discuss the ability of the code to carry out long-term accurate evolutions of the linear and nonlinear dynamics of isolated relativistic stars.
Collisions and separations in 2D hydrodynamical code
Asida, Shimon
1991-06-01
Hydrodynamic problems involving the collision or separation of zones of different materials include the following types: armor penetration by a jet formed in the explosion of a shaped charge or by a kinetic projectile, and instabilities in cosmic jets. Calculations of hydrodynamic processes are based on numerical simulations which solve the differential equations by means of difference equations. A special grid is defined and the physical system is advanced via finite steps in time; in a Eulerian treatment, the grid is stationary in space whereas in a Lagrangian treatment it moves together with the fluid. In Lagrangian methods, the grid is defined on the fluid and the boundaries between materials are formed by the edges of computational cells, so that the shape of the grid depends on the shape of the boundary. Where there is a strong flow, the cells distort and the grid must be frequently redefined to enable the calculation to continue. Boundary collisions cause difficulty in defining a grid. In Eulerian methods, where the computational grid is defined over all the space through which the materials flow, it is necessary to use cells with non-homogeneous contents to follow the boundaries; such calculations are more complicated and less accurate. The aim of the present work was to develop a Lagrangian method for treating such collisions. The code, based on an existing 2D Lagrangian code with the addition of a new collision mechanism, uses a mixed computational grid, comprising squares and triangles, with which it is possible to describe systems.
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Energy Technology Data Exchange (ETDEWEB)
Zhang, Wei-Qun; /KIPAC, Menlo Park; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
Pencil: Finite-difference Code for Compressible Hydrodynamic Flows
Brandenburg, Axel; Dobler, Wolfgang
2010-10-01
The Pencil code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields. It is highly modular and can easily be adapted to different types of problems. The code runs efficiently under MPI on massively parallel shared- or distributed-memory computers, like e.g. large Beowulf clusters. The Pencil code is primarily designed to deal with weakly compressible turbulent flows. To achieve good parallelization, explicit (as opposed to compact) finite differences are used. Typical scientific targets include driven MHD turbulence in a periodic box, convection in a slab with non-periodic upper and lower boundaries, a convective star embedded in a fully nonperiodic box, accretion disc turbulence in the shearing sheet approximation, self-gravity, non-local radiation transfer, dust particle evolution with feedback on the gas, etc. A range of artificial viscosity and diffusion schemes can be invoked to deal with supersonic flows. For direct simulations regular viscosity and diffusion is being used. The code is written in well-commented Fortran90.
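The explicit (non-compact) high-order centered differences the abstract refers to are simple to write down; this is the standard sixth-order first-derivative stencil on a periodic grid, as a generic illustration rather than code taken from Pencil itself.

```python
import numpy as np

def deriv6(f, dx):
    """Sixth-order explicit centered first derivative on a periodic grid:
    f' ~ [45(f_{i+1}-f_{i-1}) - 9(f_{i+2}-f_{i-2}) + (f_{i+3}-f_{i-3})] / (60 dx).
    np.roll(f, -k)[i] = f[i+k], so the rolls implement the periodic stencil."""
    return (45.0 * (np.roll(f, -1) - np.roll(f, 1))
            - 9.0 * (np.roll(f, -2) - np.roll(f, 2))
            + (np.roll(f, -3) - np.roll(f, 3))) / (60.0 * dx)

# accuracy check against the exact derivative of sin(x)
n = 64
x = 2.0 * np.pi * np.arange(n) / n
err = np.max(np.abs(deriv6(np.sin(x), 2.0 * np.pi / n) - np.cos(x)))
```

Explicit stencils like this need only nearest-neighbor ghost zones, which is precisely why they parallelize better under MPI than compact (implicit) finite differences.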
Merlin, Emiliano; Grassi, Tommaso; Piovan, Lorenzo; Chiosi, Cesare
2009-01-01
We present EvoL, the new release of the Padova N-body code for cosmological simulations of galaxy formation and evolution. In this paper, the basic Tree + SPH code is presented and analysed, together with an overview of the software architecture. EvoL is a flexible parallel Fortran95 code, specifically designed for simulations of cosmological structure formation on cluster, galactic and sub-galactic scales. EvoL is a fully Lagrangian self-adaptive code, based on the classical Oct-tree and on the Smoothed Particle Hydrodynamics algorithm. It includes special features such as adaptive softening lengths with correcting extra-terms, and modern formulations of SPH and artificial viscosity. It is designed to be run in parallel on multiple CPUs to optimize the performance and save computational time. We describe the code in detail, and present the results of a number of standard hydrodynamical tests.
Physics codes on parallel computers
Energy Technology Data Exchange (ETDEWEB)
Eltgroth, P.G.
1985-12-04
An effort is under way to develop physics codes which realize the potential of parallel machines. A new explicit algorithm for the computation of hydrodynamics has been developed which avoids global synchronization entirely. The approach, called the Independent Time Step Method (ITSM), allows each zone to advance at its own pace, determined by local information. The method, coded in FORTRAN, has demonstrated parallelism of greater than 20 on the Denelcor HEP machine. ITSM can also be used to replace current implicit treatments of problems involving diffusion and heat conduction. Four different approaches toward work distribution have been investigated and implemented for the one-dimensional code on the Denelcor HEP. They are ''self-scheduled'', an ASKFOR monitor, a ''queue of queues'' monitor, and a distributed ASKFOR monitor. The self-scheduled approach shows the lowest overhead but the poorest speedup. The distributed ASKFOR monitor shows the best speedup and the lowest execution times on the tested problems. 2 refs., 3 figs.
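The Independent Time Step Method idea, every zone advancing on its own clock with no global synchronization point, can be caricatured with a priority queue ordered by local time. The toy below only illustrates the scheduling: each "zone" integrates its own decay ODE with a zone-local step, whereas a real ITSM hydro code would also interpolate neighbor states in time. All names are ours.

```python
import heapq
import numpy as np

def itsm_decay(rates, T, local_dt):
    """Asynchronous advance of independent zones: always step the zone
    that is furthest behind in time (the heap minimum), mimicking the
    no-global-synchronization structure of ITSM. Each zone i integrates
    du/dt = -rates[i] * u with forward Euler and its own step local_dt[i]."""
    u = np.ones(len(rates))
    heap = [(0.0, i) for i in range(len(rates))]   # (local time, zone id)
    heapq.heapify(heap)
    while heap:
        t, i = heapq.heappop(heap)                 # the laggard zone
        dt = min(local_dt[i], T - t)               # don't overshoot T
        u[i] *= 1.0 - rates[i] * dt                # local explicit update
        if t + dt < T:
            heapq.heappush(heap, (t + dt, i))
    return u

rates = np.array([0.5, 1.0, 2.0])
u = itsm_decay(rates, T=1.0, local_dt=np.array([2e-4, 1e-4, 5e-5]))
```

Zones with larger rates take proportionally smaller steps, so the total work tracks the local stiffness rather than the single global minimum timestep a synchronized scheme would impose.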
A new GPU-accelerated hydrodynamical code for numerical simulation of interacting galaxies
Kulikov, Igor
2013-01-01
In this paper a new scalable hydrodynamic code, GPUPEGAS (GPU-accelerated PErformance Gas Astrophysic Simulation), for the simulation of interacting galaxies is proposed. The code is based on a combination of the Godunov method and an original implementation of the FlIC method, specially adapted for GPUs. A fast Fourier transform is used to solve the Poisson equation in GPUPEGAS. The software implementation of the above methods was tested on classical gas dynamics problems, the new Aksenov test, and classical gravitational gas dynamics problems. A collisionless hydrodynamic approach was used for the modelling of stars and dark matter. The scalability of GPUPEGAS across computational accelerators is shown.
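The FFT-based Poisson solve mentioned is the standard approach for periodic self-gravity: transform the density, divide by -k^2, and transform back. A generic NumPy sketch for Lap(phi) = 4 pi G rho in a cubic box (an illustration of the technique, not GPUPEGAS's GPU implementation):

```python
import numpy as np

def poisson_fft(rho, L, G=1.0):
    """Solve Lap(phi) = 4 pi G rho on a periodic cubic box of side L via
    FFT: phi_k = -4 pi G rho_k / k^2, with the k = 0 (mean) mode set to
    zero to fix the arbitrary additive constant of the potential."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                     # avoid division by zero at the mean mode
    phi_k = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(phi_k))

# single-mode check: rho = sin(2 pi x) gives phi = -(G/pi) sin(2 pi x)
n = 16
xg = np.arange(n) / n
X, _, _ = np.meshgrid(xg, xg, xg, indexing="ij")
rho = np.sin(2.0 * np.pi * X)
phi = poisson_fft(rho, L=1.0)
```

For a density that is a single Fourier mode the spectral solve is exact to round-off, which makes this a convenient unit test for any FFT gravity kernel.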
A modified Henyey method for computing radiative transfer hydrodynamics
Karp, A. H.
1975-01-01
The implicit hydrodynamic code of Kutter and Sparks (1972), which is limited to optically thick regions and employs the diffusion approximation for radiative transfer, is modified to include radiative transfer effects in the optically thin regions of a model star. A modified Henyey method is used to include the solution of the radiative transfer equation in this implicit code, and the convergence properties of this method are proven. A comparison is made between two hydrodynamic models of a classical Cepheid with a 12-day period, one of which was computed with the diffusion approximation and the other with the modified Henyey method. It is found that the two models produce nearly identical light and velocity curves, but differ in the fact that the former never has temperature inversions in the atmosphere while the latter does when sufficiently strong shocks are present.
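In its simplest form, the diffusion approximation the Kutter-Sparks code uses reduces each implicit timestep to a tridiagonal linear solve. Below is a backward-Euler sketch for a model 1-D diffusion equation with the Thomas algorithm; it is a generic illustration under that simplification, not the actual stellar radiative-transfer equations, and a Henyey-type solver would embed such solves inside a global implicit iteration over all structure equations.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def diffuse_implicit(u, D, dx, dt, nsteps, u_left, u_right):
    """Backward-Euler steps for u_t = D u_xx with fixed (Dirichlet)
    boundary values: (I - dt D d2/dx2) u^{n+1} = u^n, unconditionally
    stable no matter how large dt D / dx^2 becomes."""
    n = len(u)
    r = D * dt / dx**2
    a = np.full(n, -r); b = np.full(n, 1.0 + 2.0 * r); c = np.full(n, -r)
    for _ in range(nsteps):
        d = u.copy()
        d[0] += r * u_left              # boundary values folded into the RHS
        d[-1] += r * u_right
        u = thomas(a, b, c, d)
    return u

# relaxation toward the steady conducting profile u(x) = x
n = 49
dx = 1.0 / (n + 1)
x = (np.arange(n) + 1) * dx
u = diffuse_implicit(np.zeros(n), D=1.0, dx=dx, dt=0.05, nsteps=200, u_left=0.0, u_right=1.0)
```

Here r = D dt / dx^2 is 125, far beyond the explicit stability limit of 1/2, yet the implicit update relaxes cleanly to the linear steady state.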
Energy Technology Data Exchange (ETDEWEB)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report
Faber, Joshua; Silberman, Zachary; Rizzo, Monica
2017-01-01
We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.
Network coding for computing: Linear codes
Appuswamy, Rathinakumar; Karamchandani, Nikhil; Zeger, Kenneth
2011-01-01
In network coding it is known that linear codes are sufficient to achieve the coding capacity in multicast networks and that they are not sufficient in general to achieve the coding capacity in non-multicast networks. In network computing, Rai, Dey, and Shenvi have recently shown that linear codes are not sufficient in general for solvability of multi-receiver networks with scalar linear target functions. We study single-receiver networks where the receiver node demands a target function of the source messages. We show that linear codes may provide a computing capacity advantage over routing only when the receiver demands a 'linearly-reducible' target function. Many known target functions including the arithmetic sum, minimum, and maximum are not linearly-reducible. Thus, the use of non-linear codes is essential in order to obtain a computing capacity advantage over routing if the receiver demands a target function that is not linearly-reducible. We also show that if a target function is linearly-reducible,...
Recent advances in the smoothed-particle hydrodynamics technique: Building the code SPHYNX
Cabezon, Ruben M; Figueira, Joana
2016-01-01
A novel computational hydrocode aimed at astrophysical applications is described, discussed, and validated in the following pages. The code, called SPHYNX, is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. The distinctive features of the code are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume elements which provides a better partition of unity. The resulting hydrodynamic code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. By a careful choice of the index of the sinc kernel and the number of neighbors in the SPH summations, there is a substantial improvement in the estimation of gradients. Additionally, the new volume elements reduce the so-called tensile instability. Both features help to suppress much of t...
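The sinc-kernel family and the approximate partition of unity mentioned above can be illustrated with a minimal 1-D sketch. This is a hedged illustration, not SPHYNX's actual implementation: the exponent n=5, the quadrature normalization, and the uniform-lattice test are illustrative assumptions.

```python
import numpy as np

def sinc_kernel(q, n=5.0):
    # Sinc-family kernel W(q) = sinc(pi*q/2)**n on 0 <= q < 2.
    # A larger exponent n gives a more centrally peaked kernel,
    # which is what suppresses the pairing instability.
    q = np.asarray(q, dtype=float)
    w = np.zeros_like(q)
    inside = (q > 0) & (q < 2.0)
    x = 0.5 * np.pi * q[inside]
    w[inside] = (np.sin(x) / x) ** n
    w[q == 0] = 1.0          # limit of sinc**n at q = 0
    return w

def density(x, m, h, n=5.0):
    # 1-D SPH density estimate rho_i = sum_j m_j W(|x_i-x_j|/h) / (norm*h),
    # with the kernel normalization computed by simple quadrature.
    q = np.linspace(0.0, 2.0, 2001)
    norm = 2.0 * sinc_kernel(q, n).sum() * (q[1] - q[0])
    dq = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * sinc_kernel(dq, n)).sum(axis=1) / (norm * h)

# Uniform particle lattice of unit density: interior estimates should
# recover rho ~= 1, i.e. an approximate partition of unity.
x = np.linspace(0.0, 10.0, 201)
m = np.full_like(x, x[1] - x[0])   # mass = spacing -> density 1
rho = density(x, m, h=3.0 * (x[1] - x[0]))
```

On the uniform lattice the interior density lands very close to 1, which is the "partition of unity" property the abstract highlights; edge particles under-count neighbors and fall below it.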
Shapiro, Wilbur
1996-01-01
This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife-to-knife) labyrinth seal code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow-groove theory. KTK handles straight or stepped labyrinth seals. DYSEAL provides dynamics for the seal geometry.
Cholla : A New Massively-Parallel Hydrodynamics Code For Astrophysical Simulation
Schneider, Evan E
2014-01-01
We present Cholla (Computational Hydrodynamics On ParaLLel Architectures), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind (CTU) algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Cholla performs all hydrodynamical calculations in a massively-parallel manner, using GPUs to evolve the fluid properties of thousands of cells simultaneously while leaving the power of central processing units (CPUs) available for modeling additional physics. On current hardware, Cholla can update more than ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction with the CTU algorithm. Owing to the massively-parallel architecture of GPUs and the design of the Cholla ...
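The approximate Riemann solvers the Cholla abstract mentions can be sketched with an HLL flux for the 1-D Euler equations. This is a hedged, generic sketch of the technique, not Cholla's code: the ideal-gas index, the simple Davis wave-speed bounds, and the interface states are illustrative assumptions.

```python
import numpy as np

GAMMA = 1.4  # illustrative ideal-gas index

def euler_flux(rho, u, p):
    # Physical flux of the 1-D Euler equations for state (rho, u, p).
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return np.array([rho * u, rho * u * u + p, (E + p) * u])

def hll_flux(left, right):
    # HLL approximate Riemann flux between primitive states (rho, u, p),
    # with the simple Davis estimates for the fastest wave speeds.
    rl, ul, pl = left
    rr, ur, pr = right
    cl = np.sqrt(GAMMA * pl / rl)       # sound speeds
    cr = np.sqrt(GAMMA * pr / rr)
    sl = min(ul - cl, ur - cr)          # fastest left-going wave
    sr = max(ul + cl, ur + cr)          # fastest right-going wave
    fl, fr = euler_flux(rl, ul, pl), euler_flux(rr, ur, pr)
    if sl >= 0.0:
        return fl
    if sr <= 0.0:
        return fr
    # Conserved states on each side, used in the HLL average
    Ul = np.array([rl, rl * ul, pl / (GAMMA - 1) + 0.5 * rl * ul**2])
    Ur = np.array([rr, rr * ur, pr / (GAMMA - 1) + 0.5 * rr * ur**2])
    return (sr * fl - sl * fr + sl * sr * (Ur - Ul)) / (sr - sl)

# Sod shock-tube interface states: mass and energy flow toward the
# low-pressure side.
f = hll_flux((1.0, 0.0, 1.0), (0.125, 0.0, 0.1))
```

In a finite-volume update this interface flux differences into the cell averages; exact solvers and PPM reconstruction refine the same building block.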
Parallelization of plasma 2-D hydrodynamics code using Message Passing Interface (MPI)
Energy Technology Data Exchange (ETDEWEB)
Sasaki, Akira [Japan Atomic Energy Research Inst., Neyagawa, Osaka (Japan). Kansai Research Establishment
1997-11-01
A two-dimensional hydrodynamics code using the CIP method is parallelized for the Intel Paragon XP/S massively parallel computer at the Kansai Research Establishment using MPI (Message Passing Interface). The communicator is found to be useful for dividing programs into functional modules and parallelizing them. Using the process topology and derived data types, large-scale finite-difference simulation codes can be significantly accelerated with simple coding of the area-division method. MPI provides functions that simplify the processing of boundary conditions and the communication between adjacent nodes. Speedups of 357 and 576 are obtained on 400 and 782 nodes, respectively. MPI exploits the features of scalar massively parallel computers with distributed memory. Fast and portable codes can be developed using MPI. (author)
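The area-division method with boundary exchange between adjacent nodes can be sketched serially. This is a hedged emulation: ghost values are passed as plain arguments where an MPI code would use send/recv on the communicator, and the diffusion stencil and grid sizes are illustrative, not the CIP scheme of the paper.

```python
import numpy as np

def heat_step(u, ghost_left, ghost_right):
    # One explicit diffusion step on a subdomain, given ghost values
    # received from the neighbouring subdomains (under MPI these would
    # arrive via send/recv; here they are passed directly).
    ext = np.concatenate(([ghost_left], u, [ghost_right]))
    return u + 0.4 * (ext[:-2] - 2.0 * u + ext[2:])

# Area division: split the grid across "ranks", exchange edge cells
# each step, and check against the undecomposed computation.
n, nranks = 64, 4
u_global = np.sin(np.linspace(0, np.pi, n))
parts = np.split(u_global.copy(), nranks)

for _ in range(10):
    # "communication" phase: gather each neighbour's edge cell
    # (fixed zero values at the physical boundaries)
    lefts = [0.0] + [p[-1] for p in parts[:-1]]
    rights = [p[0] for p in parts[1:]] + [0.0]
    parts = [heat_step(p, l, r) for p, l, r in zip(parts, lefts, rights)]
    # reference serial step on the whole grid
    u_global = heat_step(u_global, 0.0, 0.0)

err = np.max(np.abs(np.concatenate(parts) - u_global))
```

Because the ghost exchange happens before any subdomain is updated, the decomposed result matches the serial one to machine precision, which is the property a correct halo exchange must preserve.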
Hydrodynamic and magnetohydrodynamic computations inside a rotating sphere
Mininni, P D; Turner, L; 10.1088/1367-2630/9/8/303
2009-01-01
Numerical solutions of the incompressible magnetohydrodynamic (MHD) equations are reported for the interior of a rotating, perfectly-conducting, rigid spherical shell that is insulator-coated on the inside. A previously-reported spectral method is used which relies on a Galerkin expansion in Chandrasekhar-Kendall vector eigenfunctions of the curl. The new ingredient in this set of computations is the rigid rotation of the sphere. After a few purely hydrodynamic examples are sampled (spin down, Ekman pumping, inertial waves), attention is focused on selective decay and the MHD dynamo problem. In dynamo runs, prescribed mechanical forcing excites a persistent velocity field, usually turbulent at modest Reynolds numbers, which in turn amplifies a small seed magnetic field that is introduced. A wide variety of dynamo activity is observed, all at unit magnetic Prandtl number. The code lacks the resolution to probe high Reynolds numbers, but nevertheless interesting dynamo regimes turn out to be plentiful in those ...
pyro: Python-based tutorial for computational methods for hydrodynamics
Zingale, Michael
2015-07-01
pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
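The multigrid component pyro teaches can be illustrated by a minimal two-grid cycle for the 1-D Poisson equation. This is a hedged sketch, not pyro's own solver: injection restriction, linear prolongation, and a heavily smoothed coarse "solve" are simplifying assumptions standing in for the recursive V-cycle.

```python
import numpy as np

def jacobi(u, f, h, iters):
    # Weighted-Jacobi smoothing for u'' = f with homogeneous
    # Dirichlet boundaries (boundary entries stay untouched).
    for _ in range(iters):
        unew = u.copy()
        unew[1:-1] = 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
        u = (2.0 / 3.0) * unew + (1.0 / 3.0) * u
    return u

def two_grid(u, f, h):
    # One cycle: smooth, restrict the residual, approximately solve
    # the coarse error equation, prolong, correct, smooth again.
    u = jacobi(u, f, h, 5)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    rc = r[::2].copy()                                # restrict (injection)
    ec = jacobi(np.zeros_like(rc), rc, 2 * h, 200)    # coarse "solve"
    fine = np.arange(u.size)
    e = np.interp(fine, fine[::2], ec)                # prolong (linear)
    return jacobi(u + e, f, h, 5)

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Smoothing damps the oscillatory error while the coarse correction removes the smooth modes the smoother barely touches; that division of labor is the core multigrid idea the tutorial develops.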
Energy Technology Data Exchange (ETDEWEB)
Garcia, Jr., W. J.; Viecelli, J. A.
1976-06-01
This report is intended to be a "user manual" for the Lawrence Livermore Laboratory version of the Eulerian incompressible hydrodynamic computer code ABMAC. The theory of the numerical model is discussed in general terms. The format for data input and data printout is described in detail. A listing and flow chart of the computer code are provided.
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
Institute of Scientific and Technical Information of China (English)
Zuo, Fengli; Mo, Zeyao; Zhang, Baolin
2002-01-01
Based on a shared-memory environment, this paper proposes a new classification-regroup-mapping performance optimization method for the parallel computation of a two-dimensional elastic-plastic hydrodynamics code (EPHDC-2D). First, all slide lines being computed are classified into different species by slide-line unit, and the different species of slide lines are executed independently. Then, to support load balance, the reassembled classification groups are mapped onto multiple processors according to the number of processors given or the size of the divided classifications. Numerical experiments have shown that the algorithm substantially improves performance over the other scheduling strategies.
A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects
Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.
2016-05-01
Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maɪlʌv]. We do not support the use of the code for military purposes.
A 3+1 dimensional viscous hydrodynamic code for relativistic heavy ion collisions
Karpenko, Iu.; Huovinen, P.; Bleicher, M.
2014-11-01
We describe the details of 3+1 dimensional relativistic hydrodynamic code for the simulations of quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. The code solves the equations of relativistic viscous hydrodynamics in the Israel-Stewart framework. With the help of ideal-viscous splitting, we keep the ability to solve the equations of ideal hydrodynamics in the limit of zero viscosities using a Godunov-type algorithm. Milne coordinates are used to treat the predominant expansion in longitudinal (beam) direction effectively. The results are successfully tested against known analytical relativistic inviscid and viscous solutions, as well as against existing 2+1D relativistic viscous code. Catalogue identifier: AETZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETZ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 825 No. of bytes in distributed program, including test data, etc.: 92 750 Distribution format: tar.gz Programming language: C++. Computer: any with a C++ compiler and the CERN ROOT libraries. Operating system: tested on GNU/Linux Ubuntu 12.04 x64 (gcc 4.6.3), GNU/Linux Ubuntu 13.10 (gcc 4.8.2), Red Hat Linux 6 (gcc 4.4.7). RAM: scales with the number of cells in hydrodynamic grid; 1900 Mbytes for 3D 160×160×100 grid. Classification: 1.5, 4.3, 12. External routines: CERN ROOT (http://root.cern.ch), Gnuplot (http://www.gnuplot.info/) for plotting the results. Nature of problem: relativistic hydrodynamical description of the 3-dimensional quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. Solution method: finite volume Godunov-type method. Running time: scales with the number of hydrodynamic cells; typical running times on Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz, single thread mode, 160
High-order hydrodynamic algorithms for exascale computing
Energy Technology Data Exchange (ETDEWEB)
Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-02-05
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi- material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
Energy Technology Data Exchange (ETDEWEB)
Jo, Young Beom; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)
2014-10-15
The analysis becomes more complicated when considering the shape and phase of the ground below the seawater, so some different approaches are required to precisely analyze the behavior of a tsunami. This paper introduces on-going code-development activities at SNU based on an unconventional mesh-free fluid analysis method called Smoothed Particle Hydrodynamics (SPH), together with verification work using some practice simulations. The newly developed Lagrangian mesh-free SPH code covers the equations of motion and the heat conduction equation so far, and verification of each model is complete. In addition, parallel computation using GPUs is now possible, and a GUI is also provided. If users change the input geometry or input values, they can run simulations for various conditions and geometries. The SPH method has large advantages and potential for modeling free surfaces, highly deformable geometries, and multi-phase problems that traditional grid-based codes have difficulty analyzing. Therefore, by incorporating more complex physical models such as turbulent flow, phase change, two-phase flow, and even solid mechanics, the application of the current SPH code is expected to be much extended, including molten fuel behavior in severe accidents.
Hydrodynamic Instability, Integrated Code, Laboratory Astrophysics, and Astrophysics
Takabe, Hideaki
2016-10-01
This article accompanies the Edward Teller Medal memorial lecture presented at the IFSA03 conference held on September 12, 2003, in Monterey, CA. The author focuses on his main contributions to fusion science and its extension to astrophysics in the field of theory and computation, through five topics. The first is the anomalous resistivity to hot electrons penetrating the over-dense region through the ion-wave turbulence driven by the return current compensating the current flow of the hot electrons. It is concluded that a potential of almost the same value as the average kinetic energy of the hot electrons is set up to prevent their penetration. The second is the ablative stabilization of the Rayleigh-Taylor instability at the ablation front and its dispersion relation, the so-called Takabe formula. This formula gave a principal guideline for stable target design. The third is the development of the integrated code ILESTA (1D & 2D) for the analysis and design of laser-produced plasmas, including implosion dynamics; it has also been applied to the design of high-gain targets. The fourth is on laboratory astrophysics with intense lasers. This consists of two parts: one is a review of its historical background, and the other is on how laser plasmas relate to wide-ranging astrophysics and the purposes of promoting such research. In relation to one purpose, a comment is given on the anomalous transport of relativistic electrons in the fast ignition laser fusion scheme. Finally, recent activity is briefly summarized on applying the author's experience to the development of an integrated code for studying extreme phenomena in astrophysics.
A multi-dimensional, adiabatic, hydrodynamics code for studying tidal excitation
Broderick, A E; Broderick, Avery E.; Rathore, Yasser
2004-01-01
We have developed a parallel, simple, and fast hydrodynamics code for multi-dimensional, self-gravitating, adiabatic flows. Our primary motivation is the study of the non-linear evolution of white dwarf oscillations excited via tidal resonances, typically over hundreds of stellar dynamical times. Consequently, we require long term stability, low diffusivity, and high algorithmic efficiency. An explicit, Eulerian, finite-difference scheme on a regular Cartesian grid fulfills these requirements. It provides uniform resolution throughout the flow, as well as simplifying the computation of the self-gravitational potential, which is done via spectral methods. In this paper, we describe the numerical scheme and present the results of some diagnostic problems. We also demonstrate the stability of a cold white dwarf in three dimensions over hundreds of dynamical times. Finally, we compare the results of the numerical scheme to the linear theory of adiabatic oscillations, finding numerical quality factors on the order...
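Computing the self-gravitational potential via spectral methods, as the abstract describes, commonly reduces to an FFT-based Poisson solve on the regular grid. The following is a hedged sketch of that standard technique, not the authors' exact implementation; the periodic boundary, unit constants, and single-mode test are illustrative assumptions.

```python
import numpy as np

def poisson_fft(rho, L=1.0, G=1.0):
    # Solve nabla^2 phi = 4 pi G rho on a periodic cube by FFT,
    # zeroing the k = 0 mode to fix the free constant.
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero
    phi_hat = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
    phi_hat[0, 0, 0] = 0.0                 # zero-mean potential
    return np.real(np.fft.ifftn(phi_hat))

# Single-mode check: rho = cos(2 pi x / L) has the analytic potential
# phi = -4 pi G rho / (2 pi / L)**2, which the spectral solve recovers
# to machine precision.
n, L = 32, 1.0
x = np.arange(n) * L / n
rho = np.cos(2 * np.pi * x / L)[:, None, None] * np.ones((1, n, n))
phi = poisson_fft(rho, L)
expected = -4 * np.pi * rho / (2 * np.pi / L) ** 2
err = np.max(np.abs(phi - expected))
```

The spectral solve is exact for each Fourier mode, which is why it pairs naturally with the long-time-stability requirements the abstract emphasizes.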
An Efficient Implementation of Flux Formulae in Multidimensional Relativistic Hydrodynamical Codes
Aloy, M A; Ibáñez, J M
1999-01-01
We derive and analyze a simplified formulation of the numerical viscosity terms appearing in the expression of the numerical fluxes associated with several high-resolution shock-capturing schemes. After some algebraic pre-processing, we give explicit expressions for the numerical viscosity terms of two of the most widely used flux formulae, whose implementation saves computational time in multidimensional simulations of relativistic flows. Additionally, such treatment explicitly cancels and factorizes a number of terms, helping to damp the growth of round-off errors. We have checked the performance of our formulation by running a 3D relativistic hydrodynamical code on a standard test-bed problem and found that the improvement in efficiency is of high practical interest in numerical simulations of relativistic flows in astrophysics.
Euler-Lagrangian computation for estuarine hydrodynamics
Cheng, Ralph T.
1983-01-01
The transport of conservative and suspended matter in fluid flows is a phenomenon of Lagrangian nature because the process is usually convection dominant. Nearly all numerical investigations of such problems use an Eulerian formulation for the convenience that the computational grids are fixed in space and because the vast majority of field data are collected in an Eulerian reference frame. Several examples are given in this paper to illustrate a modeling approach which combines the advantages of both the Eulerian and Lagrangian computational techniques.
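One standard way to combine Lagrangian transport with an Eulerian grid is a semi-Lagrangian step: trace each grid point back along the flow and interpolate the old field at the departure point. The following is a hedged 1-D sketch of that general idea, not the paper's estuarine scheme; the constant velocity, periodic domain, and linear interpolation are illustrative assumptions.

```python
import numpy as np

def semi_lagrangian_step(c, vel, dt, x):
    # Trace each Eulerian grid point back along the (constant)
    # velocity and interpolate the old field at the departure point.
    # The method is unconditionally stable, so dt is not CFL-limited.
    L = x[-1] + (x[1] - x[0])          # periodic domain length
    x_dep = (x - vel * dt) % L         # departure points
    return np.interp(x_dep, x, c, period=L)

# Advect a Gaussian pulse once around the periodic domain; the peak
# should return to its starting position, with some interpolation
# smearing of the amplitude.
n, L, vel = 200, 1.0, 1.0
x = np.arange(n) * L / n
c = np.exp(-200.0 * (x - 0.5) ** 2)
dt = 0.004                             # 250 steps -> total time 1.0
for _ in range(250):
    c = semi_lagrangian_step(c, vel, dt, x)
peak = x[np.argmax(c)]
```

The backward trace is the Lagrangian ingredient; the fixed grid and interpolation are the Eulerian one, matching the hybrid approach the paper advocates for convection-dominated transport.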
Tidal disruptions by rotating black holes: relativistic hydrodynamics with Newtonian codes
Tejeda, Emilio; Gafton, Emanuel; Rosswog, Stephan; Miller, John C.
2017-08-01
We propose an approximate approach for studying the relativistic regime of stellar tidal disruptions by rotating massive black holes. It combines an exact relativistic description of the hydrodynamical evolution of a test fluid in a fixed curved space-time with a Newtonian treatment of the fluid's self-gravity. Explicit expressions for the equations of motion are derived for Kerr space-time using two different coordinate systems. We implement the new methodology within an existing Newtonian smoothed particle hydrodynamics code and show that including the additional physics involves very little extra computational cost. We carefully explore the validity of the novel approach by first testing its ability to recover geodesic motion, and then by comparing the outcome of tidal disruption simulations against previous relativistic studies. We further compare simulations in Boyer-Lindquist and Kerr-Schild coordinates and conclude that our approach allows accurate simulation even of tidal disruption events where the star penetrates deeply inside the tidal radius of a rotating black hole. Finally, we use the new method to study the effect of the black hole spin on the morphology and fallback rate of the debris streams resulting from tidal disruptions, finding that while the spin has little effect on the fallback rate, it does imprint heavily on the stream morphology, and can even be a determining factor in the survival or disruption of the star itself. Our methodology is discussed in detail as a reference for future astrophysical applications.
A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects
Schäfer, Christoph M; Maindl, Thomas I; Speith, Roland; Scherrer, Samuel; Kley, Wilhelm
2016-01-01
Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. We find an impressive performance gain using NVIDIA consumer devices compared to ou...
Hallo, L.; Olazabal-Loumé, M.; Maire, P. H.; Breil, J.; Morse, R.-L.; Schurtz, G.
2006-06-01
This paper deals with simulations of ablation front instabilities in the context of direct-drive ICF. A simplified DT target, representative of a realistic target on LIL, is considered. We describe two numerical approaches: the linear perturbation method, using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method, using our bi-dimensional hydrodynamic code Chic. The numerical solutions are shown to converge, in good agreement with analytical models.
A new spherically symmetric general relativistic hydrodynamical code
Romero, J V; Martí, J M; Miralles, J A; Romero, Jose V; Ibanez, Jose M; Marti, Jose M; Miralles, Juan A
1995-01-01
In this paper we present a full general relativistic one-dimensional hydro-code which incorporates a modern high-resolution shock-capturing algorithm, with an approximate Riemann solver, for the correct modelling of formation and propagation of strong shocks. The efficiency of this code in treating strong shocks is demonstrated by some numerical experiments. The interest of this technique in several astrophysical scenarios is discussed.
Comparison of different computer platforms for running the Versatile Advection Code
Toth, G.; Keppens, R.; Sloot, P.; Bubak, M.; Hertzberger, B.
1998-01-01
The Versatile Advection Code is a general tool for solving hydrodynamical and magnetohydrodynamical problems arising in astrophysics. We compare the performance of the code on different computer platforms, including work stations and vector and parallel supercomputers. Good parallel scaling can be a
TPCI: the PLUTO-CLOUDY Interface . A versatile coupled photoionization hydrodynamics code
Salz, M.; Banerjee, R.; Mignone, A.; Schneider, P. C.; Czesla, S.; Schmitt, J. H. M. M.
2015-04-01
We present an interface between the (magneto-) hydrodynamics code PLUTO and the plasma simulation and spectral synthesis code CLOUDY. By combining these codes, we constructed a new photoionization hydrodynamics solver: the PLUTO-CLOUDY Interface (TPCI), which is well suited to simulate photoevaporative flows under strong irradiation. The code includes the electromagnetic spectrum from X-rays to the radio range and solves the photoionization and chemical network of the 30 lightest elements. TPCI follows an iterative numerical scheme: first, the equilibrium state of the medium is solved for a given radiation field by CLOUDY, resulting in a net radiative heating or cooling. In the second step, the latter influences the (magneto-) hydrodynamic evolution calculated by PLUTO. Here, we validated the one-dimensional version of the code on the basis of four test problems: photoevaporation of a cool hydrogen cloud, cooling of coronal plasma, formation of a Strömgren sphere, and the evaporating atmosphere of a hot Jupiter. This combination of an equilibrium photoionization solver with a general MHD code provides an advanced simulation tool applicable to a variety of astrophysical problems. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/576/A21
Characterizing Video Coding Computing in Conference Systems
Tuquerres, G.
2000-01-01
In this paper, a number of coding operations is provided for computing continuous data streams, in particular, video streams. A coding capability of the operations is expressed by a pyramidal structure in which coding processes and requirements of a distributed information system are represented. Th
A new relativistic hydrodynamics code for high-energy heavy-ion collisions
Okamoto, Kazuhisa; Akamatsu, Yukinao; Nonaka, Chiho
2016-10-01
We construct a new Godunov type relativistic hydrodynamics code in Milne coordinates, using a Riemann solver based on the two-shock approximation which is stable under the existence of large shock waves. We check the correctness of the numerical algorithm by comparing numerical calculations and analytical solutions in various problems, such as shock tubes, expansion of matter into the vacuum, the Landau-Khalatnikov solution, and propagation of fluctuations around Bjorken flow and Gubser flow. We investigate the energy and momentum conservation property of our code in a test problem of longitudinal hydrodynamic expansion with an initial condition for high-energy heavy-ion collisions. We also discuss numerical viscosity in the test problems of expansion of matter into the vacuum and conservation properties. Furthermore, we discuss how the numerical stability is affected by the source terms of relativistic numerical hydrodynamics in Milne coordinates.
A new relativistic hydrodynamics code for high-energy heavy-ion collisions
Energy Technology Data Exchange (ETDEWEB)
Okamoto, Kazuhisa [Nagoya University, Department of Physics, Nagoya (Japan); Akamatsu, Yukinao [Nagoya University, Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI), Nagoya (Japan); Osaka University, Department of Physics, Toyonaka (Japan); Stony Brook University, Department of Physics and Astronomy, Stony Brook, NY (United States); Nonaka, Chiho [Nagoya University, Department of Physics, Nagoya (Japan); Nagoya University, Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI), Nagoya (Japan); Duke University, Department of Physics, Durham, NC (United States)
2016-10-15
We construct a new Godunov type relativistic hydrodynamics code in Milne coordinates, using a Riemann solver based on the two-shock approximation which is stable under the existence of large shock waves. We check the correctness of the numerical algorithm by comparing numerical calculations and analytical solutions in various problems, such as shock tubes, expansion of matter into the vacuum, the Landau-Khalatnikov solution, and propagation of fluctuations around Bjorken flow and Gubser flow. We investigate the energy and momentum conservation property of our code in a test problem of longitudinal hydrodynamic expansion with an initial condition for high-energy heavy-ion collisions. We also discuss numerical viscosity in the test problems of expansion of matter into the vacuum and conservation properties. Furthermore, we discuss how the numerical stability is affected by the source terms of relativistic numerical hydrodynamics in Milne coordinates. (orig.)
A new relativistic hydrodynamics code for high-energy heavy-ion collisions
Okamoto, Kazuhisa; Nonaka, Chiho
2016-01-01
We construct a new Godunov-type relativistic hydrodynamics code in Milne coordinates, using a Riemann solver based on the two-shock approximation, which is stable under the existence of large shock waves. We check the correctness of the numerical algorithm by comparing numerical calculations and analytical solutions in various problems, such as shock tubes, expansion of matter into the vacuum, the Landau-Khalatnikov solution, and propagation of fluctuations around Bjorken flow and Gubser flow. We investigate the energy and momentum conservation property of our code in a test problem of longitudinal hydrodynamic expansion with an initial condition for high-energy heavy-ion collisions. We also discuss numerical viscosity in the test problems of expansion of matter into the vacuum and conservation properties. Furthermore, we discuss how the numerical stability is affected by the source terms of relativistic numerical hydrodynamics in Milne coordinates.
Simulation of Tailrace Hydrodynamics Using Computational Fluid Dynamics Models
Energy Technology Data Exchange (ETDEWEB)
Cook, Christopher B.; Richmond, Marshall C.
2001-05-01
This report investigates the feasibility of using computational fluid dynamics (CFD) tools to investigate the hydrodynamic flow fields surrounding the tailrace zone below large hydraulic structures. Previous and ongoing studies using CFD tools to simulate gradually varied flow with multiple constituents and forebay/intake hydrodynamics have shown that CFD tools can provide valuable information for the hydraulic and biological evaluation of fish passage near hydraulic structures. These studies, however, are incapable of simulating the rapidly varying flow fields that involve breakup of the free surface, such as those through and below high-flow outfalls and spillways. Although the use of CFD tools for these types of flow is still an active area of research, initial applications discussed in this report show that these tools are capable of simulating the primary features of these highly transient flow fields.
Vanaverbeke, S.; Keppens, R.; Poedts, S.; Boffin, H.
2009-01-01
We describe the algorithms implemented in the first version of GRADSPH, a parallel, tree-based smoothed particle hydrodynamics code, written in FORTRAN 90, for simulating self-gravitating astrophysical systems. The paper presents details on the implementation of the Smoothed Particle Hydrodynamics (SPH) desc...
Energy Technology Data Exchange (ETDEWEB)
Hallo, L.; Olazabal-Loume, M.; Maire, P.H.; Breil, J.; Schurtz, G. [CELIA, 33 - Talence (France); Morse, R.L. [Arizona Univ., Dept. of Nuclear Engineering, Tucson (United States)
2006-06-15
This paper deals with simulations of ablation front instabilities in the context of direct-drive inertial confinement fusion. A simplified deuterium-tritium target, representative of a realistic target on LIL (the laser integration line at the Megajoule laser facility), is considered. We describe here two numerical approaches: the linear perturbation method, using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method, using our two-dimensional hydrodynamic code Chic. Our work shows good behaviour of all methods, even for large wavenumbers, during the acceleration phase of the ablation front. We also point out good agreement between model and numerical predictions at the ablation front during the shock wave transit.
Numerical Modeling of Imploding Plasma liners Using the 1D Radiation-Hydrodynamics Code HELIOS
Davis, J. S.; Hanna, D. S.; Awe, T. J.; Hsu, S. C.; Stanic, M.; Cassibry, J. T.; Macfarlane, J. J.
2010-11-01
The Plasma Liner Experiment (PLX) is attempting to form imploding plasma liners to reach 0.1 Mbar upon stagnation, via 30--60 spherically convergent plasma jets. PLX is partly motivated by the desire to develop a standoff driver for magneto-inertial fusion. The liner density, atomic makeup, and implosion velocity will help determine the maximum pressure that can be achieved. This work focuses on exploring the effects of atomic physics and radiation on the 1D liner implosion and stagnation dynamics. For this reason, we are using Prism Computational Science's 1D Lagrangian rad-hydro code HELIOS, which has both equation of state (EOS) table-lookup and detailed configuration accounting (DCA) atomic physics modeling. By comparing a series of PLX-relevant cases proceeding from ideal gas, to EOS tables, to DCA treatments, we aim to identify how and when atomic physics effects are important for determining the peak achievable stagnation pressures. In addition, we present verification test results as well as brief comparisons to results obtained with RAVEN (1D radiation-MHD) and SPHC (smoothed particle hydrodynamics).
Computer Code for Nanostructure Simulation
Filikhin, Igor; Vlahovic, Branislav
2009-01-01
Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relationship among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.
Cloud Computing for Complex Performance Codes.
Energy Technology Data Exchange (ETDEWEB)
Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
This report describes the use of cloud computing services for running complex public-domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure; Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Recent Hydrodynamics Improvements to the RELAP5-3D Code
Energy Technology Data Exchange (ETDEWEB)
Richard A. Riemke; Cliff B. Davis; Richard.R. Schultz
2009-07-01
The hydrodynamics section of the RELAP5-3D computer program has recently been improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.
Sijoy, C. D.; Chaturvedi, S.
2016-06-01
Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015), developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in the popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme can, in principle, handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI) for parallel computation. Presently, for multi-material HD, we have used a simple and robust closure model in which a common strain rate is assumed for all materials in a mixed cell. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition, the electron and radiation temperatures are assumed to be out of equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies must be computed self-consistently. This has been achieved by using a node-centered symmetric semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite-volume (NLFV) scheme suitable for unstructured meshes. In this paper, we describe the details of the 2D, 3T, non-equilibrium, multi-material RHD code with special attention to the coupling of the various cell-centered and node-centered formulations, along with a suite of validation test problems that demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, in order to demonstrate the full capability of the code, we present the simulation of laser-driven shock propagation in a layered thin foil. The simulation results are found to be in good...
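The symmetric semi-implicit idea can be illustrated on a toy two-temperature relaxation problem: each temperature is advanced with its own value treated implicitly and the partner value held explicit, which is unconditionally stable and, for equal heat capacities, conserves the total exactly. The model and coefficients below are illustrative, not TRHD's actual coupling terms.

```python
def ssi_relax(Te, Ti, k, dt, steps):
    """Symmetric semi-implicit update for dTe/dt = k*(Ti - Te),
    dTi/dt = k*(Te - Ti): implicit in the advanced unknown, explicit in
    the partner. Stable for any dt; Te + Ti is conserved exactly here."""
    for _ in range(steps):
        Te_new = (Te + dt * k * Ti) / (1.0 + dt * k)
        Ti_new = (Ti + dt * k * Te) / (1.0 + dt * k)
        Te, Ti = Te_new, Ti_new
    return Te, Ti

# Far beyond the explicit stability limit (dt*k = 2.5), yet well behaved
Te, Ti = ssi_relax(1000.0, 100.0, k=5.0, dt=0.5, steps=200)
assert abs(Te + Ti - 1100.0) < 1e-9   # total energy conserved
assert abs(Te - Ti) < 1e-6            # relaxed to a common temperature
```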
PLUTO code for computational Astrophysics: News and Developments
Tzeferacos, P.; Mignone, A.
2012-01-01
We present an overview of recent developments and functionalities available in the PLUTO code for astrophysical fluid dynamics. The recent extension of the code to a conservative finite-difference formulation and high-order spatial discretization of the compressible equations of magnetohydrodynamics (MHD), complementary to its finite-volume approach, allows for a highly accurate treatment of smooth flows, while avoiding loss of accuracy near smooth extrema and providing sharp non-oscillatory transitions at discontinuities. Among the novel features, we present alternative, fully explicit treatments of non-ideal dissipative processes (namely viscosity, resistivity and anisotropic thermal conduction) that do not suffer from the usual timestep limitation of explicit time stepping. These methods, offspring of the multistep Runge-Kutta family that use a Chebyshev polynomial recursion, are competitive substitutes for computationally expensive implicit schemes that involve sparse matrix inversion. Several multi-dimensional benchmarks and applications assess the potential of PLUTO to efficiently handle many astrophysical problems.
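A minimal relative of these Chebyshev-accelerated schemes is classical super-time-stepping for an explicit diffusion operator: a cycle of Euler substeps whose lengths follow a Chebyshev-based sequence covers far more time than the same number of CFL-limited steps while remaining stable. The sketch below applies it to a 1-D periodic heat equation; it is a first-order illustration of the idea, not PLUTO's higher-order RKC/RKL implementation.

```python
import numpy as np

def sts_step(u, d, dx, n_sub, nu=0.05):
    """One super-time-step for u_t = d*u_xx on a periodic grid: n_sub
    explicit Euler substeps with Chebyshev-spaced lengths tau_j, together
    stable although individual tau_j exceed the explicit limit."""
    dt_expl = dx**2 / (2.0 * d)               # explicit (FTCS) stability limit
    j = np.arange(1, n_sub + 1)
    tau = dt_expl / ((nu - 1) * np.cos((2 * j - 1) * np.pi / (2 * n_sub)) + 1 + nu)
    for t in tau:
        u = u + t * d * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    return u, tau.sum()

n, d = 64, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
u, T = np.sin(2 * np.pi * x), 0.0
for _ in range(30):
    u, dT = sts_step(u, d, 1.0 / n, n_sub=5)
    T += dT
# the k = 2*pi mode should decay like exp(-d * k^2 * T)
assert abs(u.max() - np.exp(-(2 * np.pi) ** 2 * d * T)) < 0.02
```

With n_sub = 5 and nu = 0.05, each cycle covers roughly eleven explicit steps' worth of time for the cost of five.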
Cholla: 3D GPU-based hydrodynamics code for astrophysical simulation
Schneider, Evan E.; Robertson, Brant E.
2016-07-01
Cholla (Computational Hydrodynamics On ParaLLel Architectures) models the Euler equations on a static mesh and evolves the fluid properties of thousands of cells simultaneously using GPUs. It can update over ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction, allowing computation of astrophysical simulations with physically interesting grid resolutions (>256^3) on a single device; calculations can be extended onto multiple devices with nearly ideal scaling beyond 64 GPUs.
Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors
Energy Technology Data Exchange (ETDEWEB)
Sale, D.; Jonkman, J.; Musial, W.
2009-08-01
This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.
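The coupling of a genetic algorithm to a performance code can be sketched with a toy real-coded GA maximizing a hypothetical efficiency surrogate over chord and twist. The surrogate function, variable bounds, and all GA settings below are illustrative stand-ins, not the report's blade-element momentum model.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, gens=60, mut=0.1, seed=1):
    """Minimal real-coded genetic algorithm: keep the best half (elitism),
    refill with blend crossover plus Gaussian mutation, clip to bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = []
            for (lo, hi), xa, xb in zip(bounds, a, b):
                v = rng.uniform(min(xa, xb), max(xa, xb))   # blend crossover
                v += rng.gauss(0.0, mut * (hi - lo))        # mutation
                child.append(min(hi, max(lo, v)))           # clip to bounds
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical efficiency surrogate peaking at chord = 0.3 m, twist = 12 deg
eff = lambda p: 1.0 - (p[0] - 0.3) ** 2 - ((p[1] - 12.0) / 30.0) ** 2
best = genetic_optimize(eff, bounds=[(0.05, 1.0), (0.0, 30.0)])
assert eff(best) > 0.95   # GA finds the neighbourhood of the optimum
```

A real rotor design would replace `eff` with a call into the blade-element momentum code and add cavitation and power-curve constraints.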
Gender codes why women are leaving computing
Misa, Thomas J
2010-01-01
The computing profession is facing a serious gender crisis. Women are abandoning the computing field at an alarming rate. Fewer are entering the profession than at any time in the past twenty-five years, while too many are leaving the field in mid-career. With a maximum of insight and a minimum of jargon, Gender Codes explains the complex social and cultural processes at work in gender and computing today. Edited by Thomas Misa and featuring a Foreword by Linda Shafer, Chair of the IEEE Computer Society Press, this insightful collection of essays explores the persisting gender imbalance in computing and presents a clear course of action for turning things around.
Donmez, O
2004-01-01
In this paper, the general procedure for solving the General Relativistic Hydrodynamical (GRH) equations with Adaptive Mesh Refinement (AMR) is presented. To this end, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the general relativistic hydrodynamic equations are obtained with High-Resolution Shock-Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in 1D, 2D and 3D. Results from uniform and AMR grids are compared. It is found that the adaptive grid performs better as the resolution is increased. Second, the general relativistic hydrodynamical equa...
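The MUSCL left/right interface states mentioned above can be sketched for a scalar field with a minmod limiter; in the code these states would be fed to the Marquina flux, which is omitted here.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, zero otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_states(u):
    """Second-order MUSCL left/right states at interfaces i+1/2 (periodic
    indexing); a Riemann solver would consume these state pairs."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
    left = u + 0.5 * du                                  # extrapolated from cell i
    right = np.roll(u - 0.5 * du, -1)                    # extrapolated from cell i+1
    return left, right

u = np.arange(6, dtype=float)        # locally linear data
L, R = muscl_states(u)
assert L[2] == 2.5 and R[1] == 1.5   # exact linear reconstruction away from the wrap
```

At extrema and discontinuities minmod drops the slope to zero, which is what keeps the reconstruction non-oscillatory.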
Energy Technology Data Exchange (ETDEWEB)
Estabrook, K; Farley, D; Glendinning, S G; Remington, B A; Stone, J; Turner, N
1999-09-22
Recent shock tube experiments using the Nova laser facility have demonstrated that strong shocks and highly supersonic flows similar to those encountered in astrophysical jets can be studied in detail through carefully controlled experiments. We propose the use of high-power lasers such as Nova, Omega, and NIF to perform experiments on radiation hydrodynamic problems, such as jets involving the multidimensional dynamics of strong shocks. High-power lasers are the only experimental facilities that can reach the very high Mach number regime. The experiments will both serve as diagnostics of astrophysically interesting gas dynamic problems and form the basis of test problems for numerical algorithms in astrophysical radiation hydrodynamics codes. The potential for experimentally achieving a strongly radiative jet seems very good.
Modelling of Be Disks in Binary Systems Using the Hydrodynamic Code PLUTO
Cyr, I. H.; Panoglou, D.; Jones, C. E.; Carciofi, A. C.
2016-11-01
The study of the gas structure and dynamics of Be star disks is critical to our understanding of the Be star phenomenon. The central star is the major force driving the evolution of these disks; however, other external forces may also affect the formation of the disk, for example the gravitational torque produced in a close binary system. We are interested in understanding the gravitational effects of a low-mass binary companion on the formation and growth of a disk in a close binary system. To study these effects, we used the grid-based hydrodynamic code PLUTO. Because this code has not been used to study such systems before, we compared our simulations against codes used in previous work on binary systems. We were able to simulate the formation of a disk in both an isolated and a binary system. Our current results suggest that PLUTO is in fact a well suited tool to study the dynamics of Be disks.
Investigating the Magnetorotational Instability with Dedalus, an Open-Source Hydrodynamics Code
Energy Technology Data Exchange (ETDEWEB)
Burns, Keaton J; /UC, Berkeley, aff SLAC
2012-08-31
The magnetorotational instability is a fluid instability that causes the onset of turbulence in discs with poloidal magnetic fields. It is believed to be an important mechanism in the physics of accretion discs, namely in its ability to transport angular momentum outward. A similar instability arising in systems with a helical magnetic field may be easier to produce in laboratory experiments using liquid sodium, but the applicability of this phenomenon to astrophysical discs is unclear. To explore and compare the properties of these standard and helical magnetorotational instabilities (MRI and HMRI, respectively), magnetohydrodynamic (MHD) capabilities were added to Dedalus, an open-source hydrodynamics simulator. Dedalus is a Python-based pseudospectral code that uses external libraries and parallelization with the goal of achieving speeds competitive with codes implemented in lower-level languages. This paper will outline the MHD equations as implemented in Dedalus, the steps taken to improve the performance of the code, and the status of MRI investigations using Dedalus.
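The pseudospectral approach at the heart of such codes can be illustrated by a Fourier-basis derivative, which is exact to round-off for band-limited periodic data. This sketch uses NumPy's FFT directly and is not Dedalus's API.

```python
import numpy as np

def spectral_derivative(u, L=2 * np.pi):
    """Pseudospectral first derivative on a periodic interval of length L:
    transform, multiply by i*k, transform back."""
    n = len(u)
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers in FFT ordering
    return np.real(np.fft.ifft(k * np.fft.fft(u)))

x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
err = np.max(np.abs(spectral_derivative(np.sin(x)) - np.cos(x)))
assert err < 1e-12   # spectral accuracy: exact up to round-off
```

Nonlinear terms are what make such codes "pseudo"-spectral: products are formed on the grid, derivatives in coefficient space.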
Cheng, J Y; Chahine, G L
2001-12-01
The slender-body theory, lifting-surface theories, and more recently panel methods and Navier-Stokes solvers have been used to study the hydrodynamics of fish swimming. This paper presents progress on swimming hydrodynamics using a boundary integral equation method (or boundary element method) based on a potential-flow model. The unsteady three-dimensional BEM code 3DynaFS that we developed and used is able to model realistic body geometries, arbitrary movements, and the resulting wake evolution. The pressure distribution over the body surface, the vorticity in the wake, and the velocity field around the body can be computed. The structure and dynamic behavior of the vortex wakes generated by the swimming body are responsible for the underlying fluid dynamic mechanisms that realize high-efficiency propulsion and high-agility maneuvering. Three-dimensional vortex wake structures are not well known, although two-dimensional structures termed 'reverse Kármán vortex streets' have been observed and studied. In this paper, simulations of a swimming saithe (Pollachius virens) using our BEM code demonstrate that undulatory swimming reduces three-dimensional effects due to a substantially weakened tail tip vortex, resulting in a reverse Kármán vortex street as the major flow pattern in the three-dimensional wake of an undulating swimming fish.
Computer simulation of the fire-tube boiler hydrodynamics
Khaustov Sergei A.; Zavorin Alexander S.; Buvakov Konstantin V.; Sheikin Vyacheslav A.
2015-01-01
The finite element method was used to simulate the hydrodynamics of a fire-tube boiler with the ANSYS Fluent 12.1.4 engineering simulation software. The hydrodynamic structure and volumetric temperature distribution were calculated. The results are presented in graphical form. A complete geometric model of the fire-tube boiler, based on the boiler drawings, was considered. The obtained results are suitable for qualitative analysis of the hydrodynamics and for identification of singularities in the fire-tube boiler water shell.
Computer Security: is your code sane?
Stefan Lueders, Computer Security Team
2015-01-01
How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this, and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities? In other words: are our codes sane? Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pitfalls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. “Static code analysers” are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies, including employing undeclared variables; expressions resu...
Computational modeling and analysis of the hydrodynamics of human swimming
von Loebbecke, Alfred
Computational modeling and simulations are used to investigate the hydrodynamics of competitive human swimming. The simulations employ an immersed boundary (IB) solver that allows us to simulate viscous, incompressible, unsteady flow past complex, moving/deforming three-dimensional bodies on stationary Cartesian grids. This study focuses on the hydrodynamics of the "dolphin kick". Three female and two male Olympic-level swimmers are used to develop kinematically accurate models of this stroke for the simulations. A simulation of a dolphin undergoing its natural swimming motion is also presented for comparison. CFD enables the calculation of flow variables throughout the domain and over the swimmer's body surface during the entire kick cycle. The feet are responsible for all thrust generation in the dolphin kick. Moreover, it is found that the down-kick (ventral position) produces more thrust than the up-kick. A quantity of interest to the swimming community is the drag of a swimmer in motion (active drag). Accurate estimates of this quantity have been difficult to obtain in experiments but are easily calculated with CFD simulations. Propulsive efficiencies of the human swimmers are found to be in the range of 11% to 30%. The dolphin simulation case has a much higher efficiency of 55%. Investigation of vortex structures in the wake indicates that the down-kick can produce a vortex ring with a jet of accelerated fluid flowing through its center. This vortex ring and the accompanying jet are the primary thrust-generating mechanisms in the human dolphin kick. In an attempt to understand the propulsive mechanisms of surface strokes, we have also conducted a computational analysis of two different styles of arm-pulls in the backstroke and the front crawl. These simulations involve only the arm and no air-water interface is included. Two of the four strokes are specifically designed to take advantage of lift-based propulsion by undergoing lateral motions of the hand
Multigroup radiation transport in one-dimensional Lagrangian radiation-hydrodynamics codes
Energy Technology Data Exchange (ETDEWEB)
Rottler, J.S.
1987-01-01
A new treatment of radiation transport has been added to the Lagrangian radiation-hydrodynamics code CHARTD. The new energy flow model was derived based on the assumption that the directional dependence of the radiation energy density can be represented by the first two terms of a spherical harmonic expansion, and that the photon energy spectrum can be partitioned into energy groups. The time derivative in the second moment equation, which is usually neglected, is retained in this implementation of the multigroup P-1 approximation. An accelerated iterative scheme is used to solve the difference equations. The new energy flow model and the iterative scheme will be described.
Pontoon Bridge Hydrodynamic Computations by Multi-block Grid Generation Technique
Institute of Scientific and Technical Information of China (English)
PAN Xiao-qiang; SHEN Qing
2006-01-01
To investigate the hydrodynamic characteristics of pontoon bridges, the multi-block grid generation technique, together with numerical methods for viscous fluid dynamics, is applied to numerical simulations of the hydrodynamic characteristics of a ribbon ferrying raft model at a series of towing speeds. Comparison of the simulated results with the experimental data indicates that the simulated results are acceptable. It shows that the multi-block grid generation technique is effective in computing pontoon bridge hydrodynamics.
Hydrodynamic Computations of Pressures Generated by Steam Pipe Rupture
1981-02-23
Energy Technology Data Exchange (ETDEWEB)
Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.; Stewart, G. M.; Jonkman, J.; Robertson, A.
2014-09-01
Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
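Morison's equation, the semi-empirical half of the model described above, sums an inertia (added-mass) term and a quadratic drag term per unit length of a cylinder. The sketch below uses illustrative coefficients and a hypothetical sinusoidal wave-velocity signal, not calibrated OC4-DeepCwind values.

```python
import numpy as np

def morison_force(u, du_dt, D, rho=1025.0, cd=1.0, cm=2.0):
    """Per-unit-length Morison force on a vertical circular cylinder of
    diameter D in seawater: inertia term plus quadratic drag. The cd and
    cm values here are illustrative placeholders."""
    inertia = rho * cm * np.pi * D**2 / 4.0 * du_dt
    drag = 0.5 * rho * cd * D * u * np.abs(u)
    return inertia + drag

t = np.linspace(0.0, 10.0, 1001)
w = 2.0 * np.pi / 10.0                      # hypothetical wave frequency (rad/s)
u = 1.5 * np.sin(w * t)                     # wave-induced horizontal velocity
F = morison_force(u, 1.5 * w * np.cos(w * t), D=6.5)
# at t = 0 the velocity vanishes, so the force is purely inertial
assert abs(F[0] - 1025.0 * 2.0 * np.pi * 6.5**2 / 4.0 * 1.5 * w) < 1e-6
```

The code-to-code comparison in the report probes exactly the regimes where this drag/inertia decomposition breaks down relative to a full Navier-Stokes solution.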
Energy Technology Data Exchange (ETDEWEB)
Andronov, V.A.; Zhidov, I.G.; Meskov, E.E.; Nevmerzhitskii, N.V.; Nikiforov, V.V.; Razin, A.N.; Rogatchev, V.G.; Tolshmyakov, A.I.; Yanilkin, Yu.V. [Russian Federal Nuclear Center (Russian Federation)
1995-02-01
This report describes an extensive program of investigations conducted at Arzamas-16 in Russia over the past several decades. The focus of the work is on material interface instability and the mixing of two materials. Part 1 of the report discusses analytical and computational studies of hydrodynamic instabilities and turbulent mixing. The EGAK codes are described and results are illustrated for several types of unstable flow. Semiempirical turbulence transport equations are derived for the mixing of two materials, and their capabilities are illustrated for several examples. Part 2 discusses the experimental studies that have been performed to investigate instabilities and turbulent mixing. Shock-tube and jelly techniques are described in considerable detail. Results are presented for many circumstances and configurations.
Numerical model for two-dimensional hydrodynamics and energy transport. [VECTRA code
Energy Technology Data Exchange (ETDEWEB)
Trent, D.S.
1973-06-01
The theoretical basis and computational procedure of the VECTRA computer program are presented. VECTRA (Vorticity-Energy Code for TRansport Analysis) is designed for applying numerical simulation to a broad range of intake/discharge flows in conjunction with power plant hydrological evaluation. The code computational procedure is based on finite-difference approximation of the vorticity-stream function partial differential equations which govern steady flow momentum transport of two-dimensional, incompressible, viscous fluids in conjunction with the transport of heat and other constituents.
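The elliptic half of a vorticity-stream-function method solves the Poisson equation lap(psi) = -omega for the stream function at each step. Below is a minimal Jacobi-iteration sketch verified against a manufactured solution; it illustrates the formulation, not VECTRA's actual difference scheme.

```python
import numpy as np

def solve_stream_function(omega, dx, iters=5000):
    """Jacobi iteration for lap(psi) = -omega on a square grid with
    psi = 0 on the boundary (the elliptic solve of a vorticity-stream
    function method)."""
    psi = np.zeros_like(omega)
    for _ in range(iters):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] +
                                  dx**2 * omega[1:-1, 1:-1])
    return psi

# Manufactured solution: psi = sin(pi x) sin(pi y)  =>  omega = 2 pi^2 psi
n = 33
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
psi_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
psi = solve_stream_function(2 * np.pi**2 * psi_exact, dx=1.0 / (n - 1))
assert np.max(np.abs(psi - psi_exact)) < 2e-3   # second-order discretization error
```

A production code would replace Jacobi with SOR or multigrid and add the vorticity transport step; the structure of the solve is the same.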
Incompressible face seals: Computer code IFACE
Artiles, Antonio
1994-01-01
Capabilities of the computer code IFACE are given in viewgraph format. These include: two dimensional, incompressible, isoviscous flow; rotation of both rotor and housing; roughness in both rotor and housing; arbitrary film thickness distribution, including steps, pockets, and tapers; three degrees of freedom; dynamic coefficients; prescribed force and moments; pocket pressures or orifice size; turbulence, Couette and Poiseuille flow; cavitation; and inertia pressure drops at inlets to film.
Computing Challenges in Coded Mask Imaging
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., when wide fields of view and very good angular resolution are required at energies too high for focusing optics or too low for Compton/tracker techniques. The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of EXIST/HET with Swift/BAT, and details of the design of EXIST/HET.
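The correlation approach to image recovery can be sketched in 1-D: a point source casts a shifted copy of the mask onto the detector, and cross-correlating the detector pattern with a balanced decoding array recovers the source position as a peak. The random mask and geometry below are illustrative; real instruments use specially constructed (e.g. URA-type) patterns with flat sidelobes.

```python
import numpy as np

def decode(detector, mask):
    """Cross-correlation reconstruction for a 1-D cyclic coded mask.
    Subtracting the mean gives a balanced decoding array, removing the
    flat background term from the correlation."""
    m = mask - mask.mean()
    return np.array([np.dot(np.roll(m, s), detector) for s in range(len(mask))])

rng = np.random.default_rng(0)
mask = (rng.random(101) < 0.5).astype(float)   # random open/closed pattern
src = 37                                        # hypothetical source offset
shadow = np.roll(mask, src)                     # point source -> shifted mask shadow
sky = decode(shadow, mask)
assert int(np.argmax(sky)) == src               # peak marks the source position
```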
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
Energy Technology Data Exchange (ETDEWEB)
Van Buren, Kendra L. [Los Alamos National Laboratory; Canfield, Jesse M. [Los Alamos National Laboratory; Hemez, Francois M. [Los Alamos National Laboratory; Sauer, Jeremy A. [Los Alamos National Laboratory
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
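The convergence-rate estimation used in such verification studies computes an observed order of accuracy from errors on successively refined grids, p = log(e_coarse / e_fine) / log(r) for refinement ratio r. A sketch, exercised on a central-difference derivative rather than on HIGRAD itself:

```python
import numpy as np

def observed_order(errors, r=2.0):
    """Observed order of accuracy between consecutive grid levels,
    given errors on grids refined by a constant ratio r."""
    return [np.log(errors[i] / errors[i + 1]) / np.log(r)
            for i in range(len(errors) - 1)]

# Verify that a central-difference first derivative is second order
errs = []
for n in (16, 32, 64, 128):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = 1.0 / n
    f = np.sin(2 * np.pi * x)
    dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)   # periodic central difference
    errs.append(np.max(np.abs(dfdx - 2 * np.pi * np.cos(2 * np.pi * x))))
orders = observed_order(errs)
assert all(abs(p - 2.0) < 0.1 for p in orders)   # observed order matches theory
```

A verification study does exactly this with the solver's own output against the derived analytical solutions, flagging any level pair whose observed order falls below the formal order.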
New developments in the Saphire computer codes
Energy Technology Data Exchange (ETDEWEB)
Russell, K.D.; Wood, S.T.; Kvarfordt, K.J. [Idaho Engineering Lab., Idaho Falls, ID (United States)] [and others
1996-03-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides a current up-to-date status of these important PRA analysis tools.
SALE: Safeguards Analytical Laboratory Evaluation computer code
Energy Technology Data Exchange (ETDEWEB)
Carroll, D.J.; Bush, W.J.; Dolan, C.A.
1976-09-01
The Safeguards Analytical Laboratory Evaluation (SALE) program implements an industry-wide quality control and evaluation system aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically evaluated, and each participant is informed of the accuracy and precision of his results in a timely manner. The SALE computer code which produces the report is designed to facilitate rapid transmission of this information in order that meaningful quality control will be provided. Various statistical techniques comprise the output of the SALE computer code. Assuming an unbalanced nested design, an analysis of variance is performed in subroutine NEST resulting in a test of significance for time and analyst effects. A trend test is performed in subroutine TREND. Microfilm plots are obtained from subroutine CUMPLT. Within-laboratory standard deviations are calculated in the main program or subroutine VAREST, and between-laboratory standard deviations are calculated in SBLV. Other statistical tests are also performed. Up to 1,500 pieces of data for each nuclear material sampled by 75 (or fewer) laboratories may be analyzed with this code. The input deck necessary to run the program is shown, and input parameters are discussed in detail. Printed output and microfilm plot output are described. Output from a typical SALE run is included as a sample problem.
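As an illustration of the laboratory statistics described above, the sketch below computes within- and between-laboratory standard deviations for replicate assay data. It is a generic Python example, not the SALE Fortran source; the function name and sample values are invented, and the actual VAREST and SBLV routines implement a more elaborate unbalanced nested design.

```python
# Illustrative sketch, not the SALE source: within- and between-laboratory
# standard deviations for replicate assay results.
from statistics import mean, pstdev

def lab_statistics(results):
    """results: dict mapping laboratory id -> list of replicate measurements."""
    # within-laboratory spread: RMS of each laboratory's own standard deviation
    within = mean(pstdev(v) ** 2 for v in results.values()) ** 0.5
    # between-laboratory spread: standard deviation of the laboratory means
    between = pstdev([mean(v) for v in results.values()])
    return within, between

data = {"lab_a": [10.1, 10.3, 10.2],
        "lab_b": [9.8, 9.9, 10.0],
        "lab_c": [10.4, 10.6, 10.5]}
w, b = lab_statistics(data)
```

Here the between-laboratory spread exceeds the within-laboratory spread, which is the pattern such a program is designed to flag.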
Structure of the solar photosphere studied from the radiation hydrodynamics code ANTARES
Leitner, P.; Lemmerer, B.; Hanslmeier, A.; Zaqarashvili, T.; Veronig, A.; Grimm-Strele, H.; Muthsam, H. J.
2017-09-01
The ANTARES radiation hydrodynamics code is capable of simulating the solar granulation in detail unequaled by direct observation. We introduce a state-of-the-art numerical tool to the solar physics community and demonstrate its applicability to model the solar granulation. The code is based on the weighted essentially non-oscillatory finite volume method and, by its implementation of local mesh refinement, is also capable of simulating turbulent fluids. While the ANTARES code already provides promising insights into small-scale dynamical processes occurring in the quiet-Sun photosphere, it will soon be capable of modeling the latter in the scope of radiation magnetohydrodynamics. In this first preliminary study we focus on the vertical photospheric stratification by examining a 3-D model photosphere with an evolution time much larger than the dynamical timescales of the solar granulation and of particularly large horizontal extent, corresponding to 25''×25'' on the solar surface, to smooth out horizontal spatial inhomogeneities separately for up- and downflows. The highly resolved Cartesian grid thereby covers ˜4 Mm of the upper convection zone and the adjacent photosphere. Correlation analysis, both local and two-point, provides a suitable means to probe the photospheric structure and thereby to identify several layers of characteristic dynamics: The thermal convection zone is found to reach some ten kilometers above the solar surface, while convectively overshooting gas penetrates even higher into the low photosphere. An ≈145 km wide transition layer separates the convective from the oscillatory layers in the higher photosphere.
Mueller, B; Dimmelmeier, H
2010-01-01
We present a new general relativistic (GR) code for hydrodynamic supernova simulations with neutrino transport in spherical and azimuthal symmetry (1D/2D). The code is a combination of the CoCoNuT hydro module, which is a Riemann-solver based, high-resolution shock-capturing method, and the three-flavor, energy-dependent neutrino transport scheme VERTEX. VERTEX integrates the neutrino moment equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the ray-by-ray plus approximation in 2D, assuming the neutrino distribution to be axially symmetric around the radial direction, and thus the neutrino flux to be radial. Our spacetime treatment employs the ADM 3+1 formalism with the conformal flatness condition for the spatial three-metric. This approach is exact in 1D and has been shown to yield very accurate results also for rotational stellar collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian...
A 3D Spectral Anelastic Hydrodynamic Code for Shearing, Stratified Flows
Barranco, J A; Barranco, Joseph A.; Marcus, Philip S.
2005-01-01
We have developed a three-dimensional (3D) spectral hydrodynamic code to study vortex dynamics in rotating, shearing, stratified systems (e.g. the atmosphere of gas giant planets, protoplanetary disks around newly forming protostars). The time-independent background state is stably stratified in the vertical direction and has a unidirectional linear shear flow aligned with one horizontal axis. Superposed on this background state is an unsteady, subsonic flow that is evolved with the Euler equations subject to the anelastic approximation to filter acoustic phenomena. A Fourier-Fourier basis in a set of quasi-Lagrangian coordinates that advect with the background shear is used for spectral expansions in the two horizontal directions. For the vertical direction, two different sets of basis functions have been implemented: (1) Chebyshev polynomials on a truncated, finite domain, and (2) rational Chebyshev functions on an infinite domain. Use of this latter set is equivalent to transforming the infinite domain to ...
Neutron spectrum unfolding using computer code SAIPS
Karim, S
1999-01-01
The main objective of this project was to study the neutron energy spectrum at rabbit station-1 in the Pakistan Research Reactor (PARR-I). To do so, the multiple foils activation method was used to get the saturated activities. The computer code SAIPS was used to unfold the neutron spectra from the measured reaction rates. Of the three built-in codes in SAIPS, only SANDII and WINDOWS were used. The contribution of the thermal part of the spectra was observed to be higher than the fast one. It was found that WINDOWS gave smooth spectra while the SANDII spectra have violent oscillations in the resonance region. The uncertainties in the WINDOWS results are higher than those of SANDII. The results show reasonable agreement with the published results.
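The flavor of SANDII-style iterative unfolding can be sketched as follows. This is a simplified, hypothetical illustration (a two-group toy problem with invented cross sections), not the SAIPS implementation, which includes smoothing and uncertainty treatment.

```python
# Simplified SANDII-style unfolding sketch (not the SAIPS source): the trial
# spectrum is corrected multiplicatively until the computed reaction rates
# match the measured saturated activities.
import numpy as np

def unfold(sigma, measured, phi0, iterations=200):
    """sigma[i, j]: cross section of reaction i in energy group j (invented
    toy values below); measured[i]: measured rate; phi0: trial spectrum."""
    phi = phi0.astype(float).copy()
    for _ in range(iterations):
        computed = sigma @ phi                   # predicted reaction rates
        w = sigma * phi                          # per-group contribution weights
        log_ratio = np.log(measured / computed)  # per-reaction correction
        corr = (w * log_ratio[:, None]).sum(axis=0) / w.sum(axis=0)
        phi *= np.exp(corr)                      # multiplicative update
    return phi

sigma = np.array([[1.0, 0.1],
                  [0.2, 1.5]])
true_phi = np.array([2.0, 3.0])
measured = sigma @ true_phi
phi = unfold(sigma, measured, np.ones(2))
```

For this well-conditioned toy problem the iteration recovers the spectrum that reproduces the measured rates; in practice the solution is non-unique and depends on the trial spectrum, which is why different unfolding codes give different results, as the abstract notes.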
Hydrodynamic models of a Cepheid atmosphere
Karp, A. H.
1975-01-01
Instead of computing a large number of coarsely zoned hydrodynamic models covering the entire atmospheric instability strip, the author computed a single model as well as computer limitations allow. The implicit hydrodynamic code of Kutter and Sparks was modified to include radiative transfer effects in optically thin zones.
From Coding to Computational Thinking and Back
DePryck, K.
2016-01-01
Presentation of Dr. Koen DePryck in the Computational Thinking Session in TEEM 2016 Conference, held in the University of Salamanca (Spain), Nov 2-4, 2016. Introducing coding in the curriculum at an early age is considered a long-term investment in bridging the skills gap between the technology demands of the labour market and the availability of people to fill them. The keys to success include moving from mere literacy to active control – not only at the level of learners but also ...
Spiking network simulation code for petascale computers
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682
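The grouping idea behind the "double collapse" can be illustrated schematically. The sketch below is a plain-Python analogy with invented names; the paper's actual data structure is built with C++ metaprogramming to avoid per-synapse object overhead entirely.

```python
# Schematic analogy (plain Python, invented names) of a node-local target
# table grouped by synapse type: each source neuron has only a few targets
# on this compute node, stored in homogeneous per-type lists.
from collections import defaultdict

class TargetTable:
    def __init__(self):
        # source neuron id -> synapse type -> list of (target id, weight)
        self._table = defaultdict(lambda: defaultdict(list))

    def add_synapse(self, source, syn_type, target, weight):
        self._table[source][syn_type].append((target, weight))

    def deliver(self, source, spike):
        """Return (target, weighted spike) events for one presynaptic spike."""
        events = []
        for targets in self._table[source].values():
            for target, weight in targets:
                events.append((target, weight * spike))
        return events

table = TargetTable()
table.add_synapse(source=1, syn_type="static", target=7, weight=0.5)
table.add_synapse(source=1, syn_type="stdp", target=9, weight=1.5)
events = table.deliver(source=1, spike=2.0)
```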
Spiking network simulation code for petascale computers
Directory of Open Access Journals (Sweden)
Susanne eKunkel
2014-10-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations
Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias
2014-08-01
Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm finds an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm in both a Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we use a tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray method. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.
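A greedy escape-route search in the spirit of MODA can be sketched in a few lines. This toy version (invented function name, grid, and values) simply steps toward the locally cheapest neighboring cell and accumulates kappa*rho*dx along the path; the published algorithm is considerably more elaborate.

```python
# Toy greedy escape-route search in the spirit of MODA (invented grid and
# parameters): walk toward the locally cheapest neighbor, accumulating
# optical depth, until the transparent boundary is reached.
import numpy as np

def optical_depth(kappa_rho, start, dx=1.0):
    """kappa_rho: 2D array of opacity * density; start: (i, j) cell index."""
    ni, nj = kappa_rho.shape
    i, j = start
    tau = 0.5 * kappa_rho[i, j] * dx   # half contribution of the start cell
    while 0 < i < ni - 1 and 0 < j < nj - 1:
        neighbors = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        i, j = min(neighbors, key=lambda c: kappa_rho[c])  # cheapest step
        tau += kappa_rho[i, j] * dx
    return tau

field = np.ones((5, 5))
field[:, 3:] = 10.0                       # opaque region on the right
tau = optical_depth(field, start=(2, 2))  # escape route avoids the right side
```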
Comparison of Particle Flow Code and Smoothed Particle Hydrodynamics Modelling of Landslide Run outs
Preh, A.; Poisel, R.; Hungr, O.
2009-04-01
In most continuum mechanics methods modelling the run out of landslides, the moving mass is divided into a number of elements, the velocities of which can be established by numerical integration of Newton's second law (Lagrangian solution). The methods are based on fluid mechanics, modelling the movements of an equivalent fluid. In 2004, McDougall and Hungr presented a three-dimensional numerical model for rapid landslides, e.g. debris flows and rock avalanches, called DAN3D. The method is based on the previous work of Hungr (1995) and uses an integrated two-dimensional Lagrangian solution and the meshless Smoothed Particle Hydrodynamics (SPH) principle to maintain continuity. DAN3D has an open rheological kernel, allowing the use of frictional (with constant pore-pressure ratio) and Voellmy rheologies, and gives the possibility to change material rheology along the path. Discontinuum (granular) mechanics methods model the run out mass as an assembly of particles moving down a surface. Each particle is followed exactly as it moves and interacts with the surface and with its neighbours. Every particle is checked for contacts with every other particle in every time step using a special cell logic for contact detection in order to reduce the computational effort. The Discrete Element code PFC3D was adapted in order to make possible discontinuum mechanics models of run outs. The Punta Thurwieser Rock Avalanche and the Frank Slide were modelled by DAN as well as by PFC3D. The simulations showed correspondingly that the parameters necessary to get results coinciding with observations in nature are completely different. The maximum velocity distributions due to DAN3D reveal that areas of different maximum flow velocity are next to each other in the Punta Thurwieser run out, whereas the distribution of maximum flow velocity shows almost constant maximum flow velocity over the width of the run out regarding the Frank Slide. Some 30 percent of total kinetic energy is rotational kinetic energy in
Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing
2010-03-01
A. Macula, et al., "Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences", 2008 IEEE Proceedings of the International Symposium on Information Theory. ... a combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences represented in DNA. ComDMem is a
Development of probabilistic internal dosimetry computer code
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of
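The Monte Carlo propagation step can be illustrated schematically. All parameter distributions and the dose coefficient below are invented for the example; the actual code is a MATLAB tool built on a measured uncertainty database.

```python
# Schematic Monte Carlo propagation (invented distributions and dose
# coefficient; the actual code is a MATLAB tool): sample the uncertain
# components, compute a dose per sample, then report percentiles.
import random

random.seed(42)

def sample_dose(measured_activity):
    m = random.lognormvariate(0.0, 0.2)           # measurement uncertainty
    intake_fraction = random.uniform(0.01, 0.03)  # biokinetic parameter
    dose_coefficient = 1.0e-5                     # Sv/Bq, illustrative only
    intake = measured_activity * m / intake_fraction
    return intake * dose_coefficient

doses = sorted(sample_dose(500.0) for _ in range(10000))
median = doses[len(doses) // 2]
p975 = doses[int(0.975 * len(doses))]
```

Reading off order statistics from the sorted samples directly gives the percentiles quoted in the abstract (2.5th, 5th, median, 95th, 97.5th).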
ICAN Computer Code Adapted for Building Materials
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
A surface code quantum computer in silicon.
Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L
2015-10-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.
Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev
2016-07-01
X-ray binaries and AGNs are powered by accretion discs around compact objects, where the X-rays are emitted from the inner regions and the UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the X-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. These fluctuations arise in the outer parts of the disc but propagate inwards to produce X-ray variability, and hence provide a natural connection between the X-ray and UV variability. Analytical expressions allow a qualitative understanding of the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code (incorporating all these effects) that considers gas-pressure-dominated solutions and stochastic fluctuations, including the boundary effect of the last stable orbit.
Energy Technology Data Exchange (ETDEWEB)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite performs many functions.
40 CFR 194.23 - Models and computer codes.
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...
Collective migration under hydrodynamic interactions -- a computational approach
Marth, Wieland
2016-01-01
Substrate-based cell motility is essential for fundamental biological processes, such as tissue growth, wound healing and immune response. Even though a comprehensive understanding of this motility mode remains elusive, progress has been achieved in its modeling using a whole-cell physical model. The model takes into account the main mechanisms of cell motility, actin polymerization, substrate-mediated adhesion and actin-myosin dynamics, and combines them with steric cell-cell and hydrodynamic interactions. The model predicts the onset of collective cell migration, which emerges spontaneously as a result of inelastic collisions of neighboring cells. Each cell, modeled here as an active polar gel, is accompanied by two vortices when it moves. Upon collision of two cells, the two vortices that come close to each other annihilate. This leads to a rotation of the cells and, together with the deformation and the reorientation of the actin filaments in each cell, induces alignment of these cells and leads to persistent tra...
Energy Technology Data Exchange (ETDEWEB)
Ramshaw, J D
2000-10-01
A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.
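For intuition, a generic buoyancy-drag model of mixing-layer growth can be integrated in a few lines. These equations and coefficients are invented for illustration and are not the specific model of the cited papers; at late times the width grows self-similarly, h ~ t^2, so doubling the time roughly quadruples the width.

```python
# Generic buoyancy-drag sketch of mixing-layer growth (NOT the specific model
# equations of the cited papers): width h driven by buoyancy, limited by drag.
def mix_width(A=0.5, g=9.81, beta=1.0, C=2.0, dt=1e-4, t_end=10.0):
    h, v, t = 1e-3, 0.0, 0.0
    history = []
    while t < t_end:
        dvdt = beta * A * g - C * v * v / h   # buoyancy minus drag
        v += dt * dvdt
        h += dt * v
        t += dt
        history.append((t, h))
    return history

hist = mix_width()
h_5 = next(h for t, h in hist if t >= 5.0)    # width at t = 5
h_10 = hist[-1][1]                            # width at t = 10
# self-similar late-time growth: h ~ t**2, so h_10 / h_5 approaches 4
```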
A high order special relativistic hydrodynamic code with space-time adaptive mesh refinement
Zanotti, Olindo
2013-01-01
We present a high order one-step ADER-WENO finite volume scheme with space-time adaptive mesh refinement (AMR) for the solution of the special relativistic hydrodynamics equations. By adopting a local discontinuous Galerkin predictor method, a high order one-step time discretization is obtained, with no need for Runge-Kutta sub-steps. This turns out to be particularly advantageous in combination with space-time adaptive mesh refinement, which has been implemented following a "cell-by-cell" approach. As in existing second order AMR methods, the present higher order AMR algorithm also features time-accurate local time stepping (LTS), where grids on different spatial refinement levels are allowed to use different time steps. We also compare two different Riemann solvers for the computation of the numerical fluxes at the cell interfaces. The new scheme has been validated over a sample of numerical test problems in one, two and three spatial dimensions, exploring its ability in resolving the propagation of relativ...
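The bookkeeping of time-accurate local time stepping can be sketched as a recursion: a level at refinement depth l advances with time step dt/2^l, taking two substeps for every step of its parent. This is a generic illustration, not the ADER-WENO scheme itself.

```python
# Generic sketch of time-accurate local time stepping (not the ADER-WENO
# scheme itself): refinement level l advances with time step dt / 2**l.
def subcycle(level, max_level, dt, log):
    log.append((level, dt))           # one update of this level's grids
    if level < max_level:
        for _ in range(2):            # each finer level halves the time step
            subcycle(level + 1, max_level, dt / 2.0, log)

log = []
subcycle(0, max_level=2, dt=1.0, log=log)
# one coarse step triggers 2 level-1 substeps and 4 level-2 substeps
```

Every level covers the same total time interval, so the sum of the level-1 time steps equals the single coarse step.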
Hanford Meteorological Station computer codes: Volume 6, The SFC computer code
Energy Technology Data Exchange (ETDEWEB)
Andrews, G.L.; Buck, J.W.
1987-11-01
Each hour the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, records and archives weather observations. Hourly surface weather observations consist of weather phenomena such as cloud type and coverage; dry bulb, wet bulb, and dew point temperatures; relative humidity; atmospheric pressure; and wind speed and direction. The SFC computer code is used to archive those weather observations and apply quality assurance checks to the data. This code accesses an input file, which contains the previous archive's date and hour and an output file, which contains surface observations for the current day. As part of the program, a data entry form consisting of 24 fields must be filled in. The information on the form is appended to the daily file, which provides an archive for the hourly surface observations.
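A range-screening quality-assurance check of the kind described can be sketched as follows; the field names and limits below are hypothetical, not the HMS values.

```python
# Illustrative range-screening QA check (hypothetical field names and limits,
# not the HMS values): flag any hourly field outside its plausible range.
LIMITS = {
    "dry_bulb_c": (-40.0, 50.0),
    "relative_humidity_pct": (0.0, 100.0),
    "pressure_hpa": (870.0, 1085.0),
    "wind_speed_mps": (0.0, 60.0),
}

def qa_check(observation):
    """Return the list of fields whose values are missing or out of range."""
    flagged = []
    for field, (lo, hi) in LIMITS.items():
        value = observation.get(field)
        if value is None or not lo <= value <= hi:
            flagged.append(field)
    return flagged

obs = {"dry_bulb_c": 21.5, "relative_humidity_pct": 130.0,
       "pressure_hpa": 1012.0, "wind_speed_mps": 3.2}
bad = qa_check(obs)   # the humidity value is out of range
```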
Private Computing and Mobile Code Systems
Cartrysse, K.
2005-01-01
This thesis' objective is to provide privacy to mobile code. A practical example of mobile code is a mobile software agent that performs a task on behalf of its user. The agent travels over the network and is executed at different locations of which beforehand it is not known whether or not these ca
Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying
DEFF Research Database (Denmark)
Heide, Janus; Zhang, Qi; Pedersen, Morten V.;
This paper investigated the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules. This information can subsequently be retrieved by any receiver. The starting point is RLNC (Random Linear Network Coding), and the goal is to reduce the amount of coding operations both at the coding and decoding node, and at the same time remove the need for dedicated signaling messages. In a traditional RLNC system, coding operations take up significant computational resources and add...
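A toy RLNC encoder/decoder over GF(2) illustrates the coding and decoding operations whose cost the paper targets; real systems commonly work over GF(2^8), and the paper's contribution, embedding information in the coding vector itself, is not shown here.

```python
# Toy RLNC over GF(2) (real systems commonly use GF(2^8)): each coded packet
# carries its coding vector; the receiver decodes by Gaussian elimination.
import random

random.seed(1)

def encode(packets):
    """XOR a random subset of source packets; return (coding vector, payload)."""
    n = len(packets)
    vec = [random.randint(0, 1) for _ in range(n)]
    if not any(vec):
        vec[random.randrange(n)] = 1
    payload = 0
    for bit, pkt in zip(vec, packets):
        if bit:
            payload ^= pkt
    return vec, payload

def decode(coded, n):
    """Gaussian elimination over GF(2) on rows [coding vector | payload]."""
    rows = [vec[:] + [pay] for vec, pay in coded]
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] for i in range(n)]

source = [0b1010, 0b0111, 0b1100]
coded = []
while True:
    coded.append(encode(source))
    try:
        decoded = decode(coded, len(source))
        break
    except StopIteration:      # rank deficient: wait for more coded packets
        continue
```

Decoding succeeds once enough linearly independent combinations have arrived, which is why reducing the per-packet elimination cost matters at scale.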
Dönmez, Orhan
2004-09-01
In this paper, the general procedure to solve the general relativistic hydrodynamic (GRH) equations with adaptive mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job when the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. In order to do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.
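The flavor of a high-resolution shock-capturing scheme can be conveyed with a first-order HLL-type flux for the 1D Burgers equation; this is a minimal illustration, not the Marquina/MUSCL solver for the full GRH system.

```python
# First-order HLL-type flux for the 1D Burgers equation: a minimal taste of
# shock-capturing finite-volume schemes (not the Marquina/MUSCL GRH solver).
import numpy as np

def hll_flux(ul, ur):
    fl, fr = 0.5 * ul * ul, 0.5 * ur * ur
    sl, sr = min(ul, ur), max(ul, ur)     # crude wave-speed estimates
    if sl >= 0.0:
        return fl
    if sr <= 0.0:
        return fr
    return (sr * fl - sl * fr + sl * sr * (ur - ul)) / (sr - sl)

def step(u, dx, dt):
    f = np.array([hll_flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
    unew = u.copy()
    unew[1:-1] -= dt / dx * (f[1:] - f[:-1])   # boundary cells held fixed
    return unew

x = np.linspace(0.0, 1.0, 201)
u = np.where(x < 0.5, 1.0, 0.0)   # Riemann data: shock speed (1 + 0) / 2
dx, dt = x[1] - x[0], 0.002
for _ in range(100):              # evolve to t = 0.2; shock moves to x = 0.6
    u = step(u, dx, dt)
```

The conservative update places the shock at the analytically correct position while keeping it sharp over a few cells, the defining property of this class of schemes.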
Wongwathanarat, A.; Grimm-Strele, H.; Müller, E.
2016-10-01
We present a new fourth-order, finite-volume hydrodynamics code named Apsara. The code employs a high-order, finite-volume method for mapped coordinates with extensions for nonlinear hyperbolic conservation laws. Apsara can handle arbitrary structured curvilinear meshes in three spatial dimensions. The code has successfully passed several hydrodynamic test problems, including the advection of a Gaussian density profile and a nonlinear vortex and the propagation of linear acoustic waves. For these test problems, Apsara produces fourth-order accurate results in the case of smooth grid mappings. The order of accuracy is reduced to first order when using the nonsmooth circular grid mapping. When applying the high-order method to simulations of low-Mach-number flows, for example, the Gresho vortex and the Taylor-Green vortex, we discover that Apsara delivers superior results to codes based on the dimensionally split, piecewise parabolic method (PPM) widely used in astrophysics. Hence, Apsara is a suitable tool for simulating highly subsonic flows in astrophysics. In the first astrophysical application, we perform implicit large eddy simulations (ILES) of anisotropic turbulence in the context of core collapse supernova (CCSN) and obtain results similar to those previously reported.
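The order of accuracy quoted above is typically measured by halving the grid spacing and computing the observed order p = log2(e_coarse / e_fine); the error norms below are invented for illustration.

```python
# Generic convergence test (invented error norms): the observed order of
# accuracy from two resolutions differing by a factor of two in grid spacing.
import math

def observed_order(e_coarse, e_fine):
    return math.log2(e_coarse / e_fine)

errors = {64: 3.2e-6, 128: 2.0e-7}           # illustrative L1 errors at N cells
p = observed_order(errors[64], errors[128])  # near 4 for a fourth-order code
```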
Computer codes for birds of North America
US Fish and Wildlife Service, Department of the Interior — Purpose of paper was to provide a more useful way to provide codes for all North American species, thus making the list useful for virtually all projects concerning...
Prediction of detonation and JWL eos parameters of energetic materials using EXPLO5 computer code
CSIR Research Space (South Africa)
Peter, Xolani
2016-09-01
...(Cowperthwaite and Zwisler, 1976), CHEETAH (Fried, 1996), EXPLO5 (Sućeska, 2001), BARUT-X (Cengiz et al., 2007). These computer codes describe detonation on the basis of the solution of Euler's hydrodynamic equations, combined with a description of the equation of state of the detonation products.
Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi
The COMPASS code is designed, based on the moving particle semi-implicit method, to simulate various complex mesoscale phenomena relevant to core disruptive accidents of sodium-cooled fast reactors. In this study, a computational framework for fluid-solid mixture flow simulations was developed for the COMPASS code. The passively moving solid model was used to simulate hydrodynamic interactions between fluid and solids, and mechanical interactions between solids were modeled by the distinct element method. A multi-time-step algorithm was introduced to couple these two calculations. The proposed computational framework for fluid-solid mixture flow simulations was verified by comparison between experimental and numerical studies of a water-dam break with multiple solid rods.
Directory of Open Access Journals (Sweden)
Koniges Alice
2013-11-01
Full Text Available The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV Li+ ion beam, delivered in a bunch with a characteristic pulse duration of 1 ns and a transverse dimension of order 1 mm. NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and in ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. We also briefly discuss the effects of the move to exascale computing and related computational changes on general modeling codes in fusion.
Merlin, Emiliano; Buonomo, Umberto; Grassi, Tommaso; Piovan, Lorenzo; Chiosi, Cesare
2009-01-01
We present EvoL, the new release of the Padova N-body code for cosmological simulations of galaxy formation and evolution. In this paper, the basic Tree + SPH code is presented and analysed, together with an overview of the software architecture. EvoL is a flexible parallel Fortran95 code, specifically designed for simulations of cosmological structure formation on cluster, galactic and sub-galactic scales. EvoL is a fully Lagrangian self-adaptive code, based on the classical Oct-tree and on...
PORPST: A statistical postprocessor for the PORMC computer code
Energy Technology Data Exchange (ETDEWEB)
Eslinger, P.W.; Didier, B.T. (Pacific Northwest Lab., Richland, WA (United States))
1991-06-01
This report describes the theory underlying the PORPST code and gives details for using the code. The PORPST code is designed to do statistical postprocessing on files written by the PORMC computer code. The data written by PORMC are summarized in terms of means, variances, standard deviations, or statistical distributions. In addition, the PORPST code provides for plotting of the results, either internal to the code or through use of the CONTOUR3 postprocessor. Section 2.0 discusses the mathematical basis of the code, and Section 3.0 discusses the code structure. Section 4.0 describes the free-format point command language. Section 5.0 describes in detail the commands to run the program. Section 6.0 provides an example program run, and Section 7.0 provides the references. 11 refs., 1 fig., 17 tabs.
Wongwathanarat, Annop; Müller, Ewald
2016-01-01
We present a new fourth-order finite-volume hydrodynamics code named Apsara. The code employs the high-order finite-volume method for mapped coordinates developed by Colella et al. (2011), with extensions for non-linear hyperbolic conservation laws by McCorquodale & Colella (2011) and Guzik et al. (2012). Using the mapped-grid technique, Apsara can handle arbitrary structured curvilinear meshes in three spatial dimensions. The code has successfully passed several hydrodynamic test problems, including the advection of a Gaussian density profile and a non-linear vortex, as well as the propagation of linear acoustic waves. For these test problems, Apsara produces fourth-order accurate results for smooth grid mappings. The order of accuracy is reduced to first order when using the non-smooth circular grid mapping of Calhoun et al. (2008). When applying the high-order method by McCorquodale & Colella (2011) to simulations of low-Mach number flows, e.g. the Gresho vortex and the Taylor-Green vortex, we d...
Orban, Chris; Chawla, Sugreev; Wilks, Scott C; Lamb, Donald Q
2013-01-01
The potential for laser-produced plasmas to yield fundamental insights into high energy density physics (HEDP) and deliver other useful applications can sometimes be frustrated by uncertainties in modeling the properties and expansion of these plasmas using radiation-hydrodynamics codes. In an effort to overcome this, and to corroborate the accuracy of the HEDP capabilities recently added to the publicly available FLASH radiation-hydrodynamics code, we present detailed comparisons of FLASH results to new and previously published results from the HYDRA code used extensively at Lawrence Livermore National Laboratory. We focus on two very different problems of interest: (1) an aluminum slab irradiated by 15.3 and 76.7 mJ of "pre-pulse" laser energy, and (2) a mm-long triangular groove cut in an aluminum target irradiated by a rectangular laser beam. Because this latter problem bears a resemblance to astrophysical jets, Grava et al. (Phys. Rev. E 78, 2008) performed this experiment and compared detailed x-ray int...
Optimization of KINETICS Chemical Computation Code
Donastorg, Cristina
2012-01-01
NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI in KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called, and to gather only certain variables when the subroutine is finished. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
Codes of Ethics for Computing at Russian Institutions and Universities.
Pourciau, Lester J.; Spain, Victoria, Ed.
1997-01-01
To determine the degree to which Russian institutions and universities have formulated and promulgated codes of ethics or policies for acceptable computer use, the author examined Russian institution and university home pages. Lists home pages examined, 10 commandments for computer ethics from the Computer Ethics Institute, and a policy statement…
Continuous Materiality: Through a Hierarchy of Computational Codes
Directory of Open Access Journals (Sweden)
Jichen Zhu
2008-01-01
Full Text Available The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuable analog for approaching computer code in emergent digital systems. After commenting on ways that Cartesian dualism continues to haunt discussions of code, we turn our attention to diagrams and design morphology. Finally we notice the implications a material understanding of code bears for further research on the relation between human cognition and digital code. Our discussion concludes by noticing several areas that we have projected for ongoing research.
Structural Computer Code Evaluation. Volume I
1976-11-01
Rivlin model for large strains. Other examples are given in Reference 5. Hypoelasticity: A hypoelastic material is one in which the components of...remains is the application of these codes to specific rocket nozzle problems and the evaluation of their capabilities to model modern nozzle material...behavior. Further work may also require the development of appropriate material property data or new material models to adequately characterize these
Quantum computation with Turaev-Viro codes
Koenig, Robert; Reichardt, Ben W
2010-01-01
The Turaev-Viro invariant for a closed 3-manifold is defined as the contraction of a certain tensor network. The tensors correspond to tetrahedra in a triangulation of the manifold, with values determined by a fixed spherical category. For a manifold with boundary, the tensor network has free indices that can be associated to qudits, and its contraction gives the coefficients of a quantum error-correcting code. The code has local stabilizers determined by Levin and Wen. For example, applied to the genus-one handlebody using the Z_2 category, this construction yields the well-known toric code. For other categories, such as the Fibonacci category, the construction realizes a non-abelian anyon model over a discrete lattice. By studying braid group representations acting on equivalence classes of colored ribbon graphs embedded in a punctured sphere, we identify the anyons, and give a simple recipe for mapping fusion basis states of the doubled category to ribbon graphs. We explain how suitable initial states can ...
Lattice Boltzmann method fundamentals and engineering applications with computer codes
Mohamad, A A
2014-01-01
Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.
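As a flavour of what such complete codes look like, here is a minimal lattice Boltzmann sketch for 1D diffusion on a D1Q2 lattice with periodic boundaries. This is my own toy example, not one of the book's codes; the lattice choice and relaxation parameter are assumptions:

```python
N = 50
omega = 1.0                      # relaxation parameter; sets the diffusivity
C = [0.0] * N
C[N // 2] = 1.0                  # initial concentration spike
f1 = [0.5 * c for c in C]        # right-moving population
f2 = [0.5 * c for c in C]        # left-moving population

for step in range(100):
    # Collision: relax both populations toward the equilibrium feq = C/2.
    for i in range(N):
        c = f1[i] + f2[i]
        f1[i] += omega * (0.5 * c - f1[i])
        f2[i] += omega * (0.5 * c - f2[i])
    # Streaming: shift populations one cell in their direction of travel
    # (periodic boundaries).
    f1 = [f1[-1]] + f1[:-1]
    f2 = f2[1:] + [f2[0]]

C = [a + b for a, b in zip(f1, f2)]  # the spike spreads; total mass is conserved
```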
Tuning complex computer code to data
Energy Technology Data Exchange (ETDEWEB)
Cox, D.; Park, J.S.; Sacks, J.; Singer, C.
1992-01-01
The problem of estimating parameters in a complex computer simulator of a nuclear fusion reactor from an experimental database is treated. Practical limitations do not permit a standard statistical analysis using nonlinear regression methodology. The assumption that the function giving the true theoretical predictions is a realization of a Gaussian stochastic process provides a statistical method for combining information from relatively few computer runs with information from the experimental database and making inferences on the parameters.
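The core construction can be sketched in a few lines: emulate the expensive simulator with a Gaussian-process posterior mean fitted to a handful of runs, then calibrate the parameter against an observation. Everything below (kernel, parameter values, data) is an invented toy, not the paper's fusion application:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between two 1D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

theta_runs = np.array([0.0, 1.0, 2.0, 3.0])   # parameter settings tried
y_runs = np.sin(theta_runs)                   # stand-in for expensive simulator output

K = rbf(theta_runs, theta_runs) + 1e-8 * np.eye(4)   # jitter for stability
alpha = np.linalg.solve(K, y_runs)

def emulator(theta):
    """GP posterior mean: a cheap surrogate for the simulator."""
    return rbf(np.atleast_1d(theta), theta_runs) @ alpha

# Calibrate: pick the theta whose emulated output best matches the "data".
y_obs = 0.9
grid = np.linspace(0.0, 3.0, 301)
theta_hat = grid[np.argmin((emulator(grid) - y_obs) ** 2)]
```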
Computer aided power flow software engineering and code generation
Energy Technology Data Exchange (ETDEWEB)
Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)
1996-02-01
In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution process of the power flow equations, where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be automatically generated in a conventional language such as Fortran or C. Thereby one starts from a high-level, symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process an efficient, very high quality Newton-Raphson solution code is generated, which allows easier implementation of network equation model enhancements and easier code maintenance as compared to hand-coded Fortran or C code.
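The mismatch/Jacobian pair at the heart of such a scheme can be illustrated on the smallest possible case, a lossless two-bus line with one unknown voltage angle. Here the "generated" derivative is written by hand, and all numbers are invented:

```python
import math

V1 = V2 = 1.0   # per-unit bus voltage magnitudes (hypothetical)
X = 0.5         # line reactance, p.u.
P_spec = 0.5    # specified active power transfer, p.u.

def mismatch(delta):
    # g(delta) = P_spec - P(delta), with P = (V1*V2/X) * sin(delta)
    return P_spec - (V1 * V2 / X) * math.sin(delta)

def jacobian(delta):
    # dg/ddelta; in the paper this expression is generated automatically
    # from the symbolic equations rather than hand-coded.
    return -(V1 * V2 / X) * math.cos(delta)

delta = 0.0
for _ in range(10):                      # Newton-Raphson iteration
    delta -= mismatch(delta) / jacobian(delta)
```

A real power flow repeats exactly this loop over a vector of bus angles and magnitudes, with a sparse Jacobian matrix in place of the scalar derivative.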
APC: A New Code for Atmospheric Polarization Computations
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code, Atmospheric Polarization Computations (APC), is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically; the smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection, and scattering by spherical particles or spheroids are included. Particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
Neutron noise computation using panda deterministic code
Energy Technology Data Exchange (ETDEWEB)
Humbert, Ph. [CEA Bruyeres le Chatel (France)
2003-07-01
PANDA is a general purpose discrete ordinates neutron transport code with deterministic and non-deterministic applications. In this paper we consider the adaptation of PANDA to stochastic neutron counting problems, specifically the first two moments of the count number probability distribution. In the first part we recall the equations for the single-neutron and source-induced count number moments, with the corresponding expression for the excess of relative variance, or Feynman function. In the second part we discuss the numerical solution of these inhomogeneous adjoint time-dependent coupled transport equations with discrete ordinates methods. Finally, numerical applications are presented in the third part. (author)
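For orientation, the Feynman function referred to above is the excess of the variance-to-mean ratio of gated counts over the Poisson value of one. A toy calculation on invented gate counts:

```python
def feynman_y(counts):
    """Feynman function ("excess of relative variance") of a list of
    per-gate neutron counts: Y = variance/mean - 1, which is zero for
    purely Poisson counting statistics."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean - 1.0

# Invented per-gate counts, for illustration only:
gates = [3, 5, 4, 6, 2, 5, 4, 3, 7, 1]
y = feynman_y(gates)
```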
Zhang, X; Zhang, Xiao-he; Sutherland, Peter
1993-01-01
A new, fully dynamic and self-consistent radiation hydrodynamics code, suitable for the calculation of supernova light curves and continuum spectra, is described. It is a multigroup (frequency-dependent) code and includes all important $O(v/c)$ effects. It is applied to the model W7 of Nomoto, Thielemann, & Yokoi (1984) for supernovae of type Ia. Radioactive energy deposition is incorporated through use of tables based upon Monte Carlo results. Effects of line opacity (both static, or line blanketing, and expansion, or line blocking) are neglected, although these may prove to be important. At maximum light, models based upon different treatments of the opacity lead to values for $M_{B,max}$ in the range of -19.0 to -19.4. This range falls between the values for observed supernovae claimed by Leibundgut & Tammann (1990) and by Pierce, Ressler, & Shure (1992).
Computer code for intraply hybrid composite design
Chamis, C. C.; Sinclair, J. H.
1981-01-01
A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.
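The simplest ingredient of such micromechanics theories is the rule of mixtures, shown here for the longitudinal modulus of a two-fibre hybrid ply. The property values are hypothetical, and this is only a generic sketch, not INHYD's actual formulation, which includes several more refined theories:

```python
def hybrid_modulus(fractions, moduli):
    """Volume-fraction-weighted longitudinal modulus (rule of mixtures)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(v * e for v, e in zip(fractions, moduli))

# Hypothetical constituents: graphite fibre, S-glass fibre, epoxy matrix.
# Moduli in GPa, with assumed volume fractions:
E_hybrid = hybrid_modulus([0.35, 0.25, 0.40], [230.0, 85.0, 3.5])
```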
A general method for generating bathymetric data for hydrodynamic computer models
Burau, J.R.; Cheng, R.T.
1989-01-01
To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric data base using the linear or cubic shape functions of the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
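For a triangle of bounding soundings, the linear shape-function interpolation described above reduces to barycentric weighting of the three depths. A sketch with invented coordinates and soundings:

```python
def interp_depth(p, tri, depths):
    """Depth at point p inside triangle tri, by linear (finite-element)
    shape functions, i.e. barycentric coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    x, y = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * depths[0] + l2 * depths[1] + l3 * depths[2]

# Three hypothetical soundings bounding a model grid point:
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
d = interp_depth((0.25, 0.25), tri, [10.0, 14.0, 6.0])
```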
MODA: a new algorithm to compute optical depths in multi-dimensional hydrodynamic simulations
Perego, A; Cabezon, R; Rosswog, S; Liebendoerfer, M
2014-01-01
We introduce a new algorithm for the calculation of multidimensional optical depths in approximate radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Based on local information about opacities, the algorithm figures out an escape route that tends to minimize the optical depth without assuming any pre-defined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm into both a Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we make use of a tree structure that is otherwise used for searching neighb...
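The core idea, choosing an escape route from local opacity information only, can be caricatured on a small 2D grid: repeatedly step to the neighbour of lowest opacity and accumulate tau. The opacity field and grid spacing below are invented, and the real algorithm is considerably more careful than this greedy walk:

```python
def escape_tau(kappa, start, dx=1.0):
    """Greedy estimate of the optical depth along an escape route:
    from `start`, step to the neighbour with the smallest opacity and
    accumulate tau = sum(kappa * dx) until a boundary cell is reached."""
    ny, nx = len(kappa), len(kappa[0])
    i, j = start
    tau = 0.5 * kappa[i][j] * dx          # leave from the cell centre
    while 0 < i < ny - 1 and 0 < j < nx - 1:
        nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        i, j = min(nbrs, key=lambda ij: kappa[ij[0]][ij[1]])
        tau += kappa[i][j] * dx
    return tau

# An opaque cell surrounded by transparent ones: the route escapes cheaply.
kappa = [[0.1, 0.1, 0.1],
         [0.1, 5.0, 0.1],
         [0.1, 0.1, 0.1]]
tau = escape_tau(kappa, (1, 1))
```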
Computer vision cracks the leaf code.
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas
2016-03-22
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.
A first computational framework for integrated hydrologic-hydrodynamic inundation modelling
Hoch, Jannis; Baart, Fedor; Neal, Jeffrey; van Beek, Rens; Winsemius, Hessel; Bates, Paul; Bierkens, Marc
2017-04-01
To provide detailed flood hazard and risk estimates for current and future conditions, advanced modelling approaches are required. Currently, however, many approaches are built upon specific hydrologic or hydrodynamic model routines. When these routines are applied in stand-alone mode, important processes cannot be described accurately. For instance, global hydrologic models (GHMs) run at a coarse spatial resolution that cannot resolve locally relevant flood hazard information. Moreover, hydrologic models generally focus on correct computation of water balances, but employ less sophisticated routing schemes such as the kinematic wave approximation. Hydrodynamic models, on the other hand, excel in the computation of open water flow dynamics, but are highly dependent on specified runoff or observed discharge for their input. In most cases hydrodynamic models are forced by applying discharge at the boundaries and thus cannot account for water sources within the model domain, so discharge and inundation dynamics at reaches not fed by upstream boundaries cannot be modelled. In a recent study, Hoch et al. (HESS, 2017) coupled the GHM PCR-GLOBWB with the hydrodynamic model Delft3D Flexible Mesh. A core element of this study was that both models were connected on a cell-by-cell basis, which allows for direct hydrologic forcing within the hydrodynamic model domain. The means for such model coupling is the Basic Model Interface (BMI), which provides a set of functions to directly access model variables. Model results showed that discharge simulations can profit from model coupling, as their accuracy is higher compared to stand-alone runs. Model results of a coupled simulation clearly depend on the quality of the individual models. Depending on purpose, location, or simply the models at hand, it would be worthwhile to allow a wider range of models to be coupled. As a first step, we present a framework which allows coupling of PCR-GLOBWB to both Delft3D Flexible Mesh and LISFLOOD
HUDU: The Hanford Unified Dose Utility computer code
Energy Technology Data Exchange (ETDEWEB)
Scherpelz, R.I.
1991-02-01
The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs.
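For the ground-level plume centreline with ground reflection, the straight-line Gaussian model underlying codes of this kind reduces to chi/Q = exp(-H^2 / (2 sigma_z^2)) / (pi u sigma_y sigma_z). A sketch with invented source data; HUDU's actual dispersion parameterization and Hanford-specific source-term library are not reproduced here:

```python
import math

def chi_over_q(u, sigma_y, sigma_z, h):
    """Normalized ground-level centreline air concentration (s/m^3) for a
    straight-line Gaussian plume from an elevated release at height h,
    with ground reflection.  u is wind speed (m/s); sigma_y, sigma_z are
    the horizontal/vertical dispersion coefficients (m) at the receptor."""
    return math.exp(-0.5 * (h / sigma_z) ** 2) / (math.pi * u * sigma_y * sigma_z)

# Hypothetical release: Q in Bq/s, stack height 30 m, sigmas at some
# downwind distance taken as given (normally from stability-class curves).
Q = 3.7e10
conc = Q * chi_over_q(u=2.0, sigma_y=80.0, sigma_z=40.0, h=30.0)   # Bq/m^3
```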
Computer code applicability assessment for the advanced Candu reactor
Energy Technology Data Exchange (ETDEWEB)
Wren, D.J.; Langman, V.J.; Popov, N.; Snell, V.G. [Atomic Energy of Canada Ltd (Canada)
2004-07-01
AECL Technologies, the 100%-owned US subsidiary of Atomic Energy of Canada Ltd. (AECL), is currently the proponent of a pre-licensing review of the Advanced Candu Reactor (ACR) with the United States Nuclear Regulatory Commission (NRC). A key focus topic for this pre-application review is the NRC acceptance of the computer codes used in the safety analysis of the ACR. These codes have been developed, and their predictions compared against experimental results, over extended periods of time in Canada, and they underwent formal validation in the 1990s. In support of this formal validation effort, AECL has developed, implemented and currently maintains a Software Quality Assurance (SQA) program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. This paper discusses the SQA program used to develop, qualify and maintain the computer codes used in ACR safety analysis, including the current program underway to confirm the applicability of these computer codes for use in ACR safety analyses. (authors)
Experimental methodology for computational fluid dynamics code validation
Energy Technology Data Exchange (ETDEWEB)
Aeschliman, D.P.; Oberkampf, W.L.
1997-09-01
Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.
Computer Security: better code, fewer problems
Stefan Lueders, Computer Security Team
2016-01-01
The origin of many security incidents is negligence or unintentional mistakes made by web developers or programmers. In the rush to complete the work, due to skewed priorities, or out of simple ignorance, basic security principles can be omitted or forgotten. The resulting vulnerabilities lie dormant until the evil side spots them and decides to hit hard. Computer security incidents in the past have put CERN's reputation at risk due to websites being defaced with negative messages about the Organization, hash files of passwords being extracted, restricted data exposed… And it all started with a little bit of negligence! If you check out the Top 10 web development blunders, you will see that the most prevalent mistakes are: Not filtering input, e.g. accepting "<" or ">" in input fields even if only a number is expected. Not validating that input: you expect a birth date? So why accept letters? &...
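The first two blunders, filtering and validating input, come down to rejecting anything that does not match the expected shape. A minimal sketch for the birth-date example (the format and range checks are my choices for illustration, not CERN's):

```python
import re

# Accept only an ISO-style date; anything else (letters, "<", ">", script
# fragments) is rejected before the value goes anywhere near a query or page.
DATE_RE = re.compile(r"^(\d{4})-(\d{2})-(\d{2})$")

def parse_birth_date(text):
    """Return (year, month, day) for a well-formed birth date, else raise."""
    m = DATE_RE.match(text.strip())
    if not m:
        raise ValueError("expected a date like 1990-12-31")
    year, month, day = (int(g) for g in m.groups())
    if not (1900 <= year <= 2100 and 1 <= month <= 12 and 1 <= day <= 31):
        raise ValueError("date out of range")
    return year, month, day
```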
A three-dimensional magnetostatics computer code for insertion devices.
Chubar, O; Elleaume, P; Chavanne, J
1998-05-01
RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.
Low Computational Complexity Network Coding For Mobile Networks
DEFF Research Database (Denmark)
Heide, Janus
2012-01-01
Network Coding (NC) is a technique that can provide benefits in many types of networks. Some examples from wireless networks are: in relay networks, at either the physical or the data link layer, to reduce the number of transmissions; in reliable multicast, to reduce the amount of signaling and enable cooperation among receivers; in meshed networks, to simplify routing schemes and to increase robustness to node failures. This thesis deals with implementation issues of one NC technique, namely Random Linear Network Coding (RLNC), which can be described as a highly decentralized, non-deterministic intra-flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several
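The RLNC primitive itself is compact. A sketch over GF(2), the cheapest arithmetic (real deployments often use larger fields such as GF(2^8)): packets are XOR-combined under random 0/1 coefficients, and a receiver decodes by Gaussian elimination once it holds enough linearly independent combinations:

```python
import random

def encode(packets, rng):
    """One coded packet: random GF(2) coefficients plus the XOR combination.
    Packets are modelled as integers for brevity."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, n):
    """Gaussian elimination over GF(2); returns the n original packets,
    or None if the received combinations do not have full rank."""
    rows = [list(c) + [p] for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] for i in range(n)]

packets = [0b1010, 0b0110, 0b1111]
coeffs, payload = encode(packets, random.Random(7))
```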
Recent applications of the transonic wing analysis computer code, TWING
Subramanian, N. R.; Holst, T. L.; Thomas, S. D.
1982-01-01
An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of this code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep, fighter configurations.
Computational investigation of hydrodynamics and cracking reaction in a heavy oil riser reactor
Institute of Scientific and Technical Information of China (English)
Jian Chang; Kai Zhang; Fandong Meng; Longyan Wang; Xiaoli Wei
2012-01-01
This paper presents a computational investigation of hydrodynamics, heat transfer and cracking reaction in a heavy oil riser operated in a novel operating mode of low temperature contact and high catalyst-to-oil ratio. Through incorporating feedstock vaporization and a 12-lump cracking kinetics model, a validated gas-solid flow model has been extended to the analysis of the hydrodynamic and reaction behavior in an industrial riser. The results indicate that the hydrodynamics, temperature and species concentration exhibit significantly nonuniform behavior inside the riser, especially in the atomization nozzle region. The lump concentration profiles along the riser height provide useful information for riser optimization. Compared to the conventional fluid catalytic cracking (FCC) process, feedstock conversion and gasoline yield are increased by 1.9 units and 1.0 unit, respectively, in the new FCC process; the yield of liquefied petroleum gas is increased by about 1.0 unit, while dry gas yield is reduced by about 0.3 unit.
FLASH: A finite element computer code for variably saturated flow
Energy Technology Data Exchange (ETDEWEB)
Baca, R.G.; Magnuson, S.O.
1992-05-01
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.
Marozas, J. A.; Collins, T. J. B.
2012-10-01
The cross-beam energy transfer (CBET) effect causes pump and probe beams to exchange energy via stimulated Brillouin scattering [W. L. Kruer, The Physics of Laser-Plasma Interactions, Frontiers in Physics, Vol. 73, edited by D. Pines (Addison-Wesley, Redwood City, CA, 1988), p. 45]. The total energy gained does not, in general, equate to the total energy lost; the ion-acoustic wave comprises the residual energy balance, which can decay, resulting in ion heating [E. A. Williams et al., Phys. Plasmas 11, 231 (2004)]. The additional ion heating can retune the conditions for CBET, affecting the overall energy transfer as a function of time. CBET and the additional ion heating are incorporated into the 2-D hydrodynamics code DRACO [P. B. Radha et al., Phys. Plasmas 12, 056307 (2005)] as an integral part of the 3-D ray trace, where CBET is treated self-consistently with the hydrodynamic evolution. DRACO simulation results employing CBET will be discussed. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.
An improved treatment of radiation energy flow in the radiation-hydrodynamics code CHARTD
Energy Technology Data Exchange (ETDEWEB)
Rottler, J.S.
1987-05-01
An improved treatment of radiation transport has been added to the energy flow model in CHARTD. The new energy flow model was derived based on the assumption that the directional dependence of the radiation energy density can be represented by the first two terms of a spherical harmonic expansion, and that the photon energy spectrum can be partitioned into energy groups. This treatment of radiation transport is called the multigroup P-1 approximation, and is an effective description of radiation transport for a broad class of radiation-hydrodynamics problems. A synthetic acceleration scheme is used to solve the differenced multigroup P-1 equations. The coupling between the material field and the radiation field is fully explicit. This report describes the new energy flow model and the acceleration scheme used to solve the difference equations. 15 refs.
Shestakov, A I
2007-01-01
We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate level-solve packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation (PTC). We analyze the magnitude of the PTC parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data...
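The role of the PTC parameter can be illustrated on a small nonlinear system. The sketch below is a generic scalar-diffusion analogue, not the paper's MGD equations; the toy problem, parameter values, and the switched-evolution-relaxation update are illustrative assumptions. A 1/dtau term on the diagonal keeps the early linear systems diagonally dominant, and dtau grows as the residual falls, so the iteration tends to plain Newton.

```python
import numpy as np

def ptc_solve(F, J, u0, dtau0=1.0, tol=1e-10, max_iter=50):
    """Pseudo transient continuation: solve F(u) = 0 via
    (I/dtau + J(u)) du = -F(u), growing dtau as the residual drops
    (switched evolution relaxation), so early steps are well conditioned
    and late steps approach plain Newton."""
    u, dtau = u0.astype(float).copy(), dtau0
    res = np.linalg.norm(F(u))
    for _ in range(max_iter):
        if res < tol:
            break
        M = np.eye(len(u)) / dtau + J(u)
        u = u + np.linalg.solve(M, -F(u))
        new_res = np.linalg.norm(F(u))
        dtau *= res / max(new_res, 1e-300)   # SER: dtau grows as residual falls
        res = new_res
    return u

# toy monotone problem: 1-D Laplacian plus a cubic source term
n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
F = lambda u: A @ u + u**3 - b
J = lambda u: A + np.diag(3*u**2)
u = ptc_solve(F, J, np.zeros(n))
```

For this monotone problem the iteration converges from a zero initial guess, where plain Newton would already work; the PTC safeguard matters for the stiff, poorly scaled systems the paper targets.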
Highly Optimized Code Generation for Stencil Codes with Computation Reuse for GPUs
Institute of Scientific and Technical Information of China (English)
Wen-Jing Ma; Kan Gao; Guo-Ping Long
2016-01-01
Computation reuse is known as an effective optimization technique. However, due to the complexity of modern GPU architectures, there is not yet enough understanding regarding the intriguing implications of the interplay of computation reuse and hardware specifics on application performance. In this paper, we propose an automatic code generator for a class of stencil codes with inherent computation reuse on GPUs. For such applications, the proper reuse of intermediate results, combined with careful register and on-chip local memory usage, has profound implications on performance. Current state of the art does not address this problem in depth, partially due to the lack of a good program representation that can expose all potential computation reuse. In this paper, we leverage the computation overlap graph (COG), a simple representation of data dependence and data reuse with “element view”, to expose potential reuse opportunities. Using COG, we propose a portable code generation and tuning framework for GPUs. Compared with current state-of-the-art code generators, our experimental results show up to 56.7% performance improvement on modern GPUs such as NVIDIA C2050.
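The kind of reuse of intermediate results the paper targets can be seen even in a simple 3x3 sum stencil (a hypothetical example, unrelated to the paper's GPU code generator): each vertical partial sum is computed once and shared by three horizontally adjacent outputs, roughly halving the additions per point.

```python
import numpy as np

def box3x3_naive(a):
    """Direct 3x3 sum: every output point redoes all 8 additions."""
    out = np.zeros_like(a)
    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            out[i, j] = a[i-1:i+2, j-1:j+2].sum()
    return out

def box3x3_reuse(a):
    """Reuse vertical partial sums: col[i, j] = a[i,j] + a[i+1,j] + a[i+2,j]
    is computed once and shared by three neighbouring outputs."""
    col = a[:-2] + a[1:-1] + a[2:]
    out = np.zeros_like(a)
    out[1:-1, 1:-1] = col[:, :-2] + col[:, 1:-1] + col[:, 2:]
    return out
```

On a GPU, the reused partial sums would live in registers or on-chip local memory, which is exactly the register/occupancy trade-off the paper's tuning framework explores.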
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time, T_par, of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
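The speedup ceiling described here is Amdahl's law, speedup = 1 / (s + (1 - s)/p) for serial fraction s on p processors. A one-line check (the numbers are illustrative, not measurements from CSTEM or METCAN):

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: speedup = 1 / (s + (1 - s) / p)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# a code that is 20% sequential gains at most 5x, however many nodes are added:
s16 = amdahl_speedup(0.2, 16)       # on a 16-node hypercube
s_inf = amdahl_speedup(0.2, 10**9)  # approaches the 1/s = 5.0 ceiling
```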
Energy Technology Data Exchange (ETDEWEB)
Breil, J; Maire, P-H; Nicolai, P; Schurtz, G [CELIA, Universite Bordeaux I, CNRS, CEA, 351 cours de la Liberation, 33405 Talence (France)], E-mail: breil@celia.u-bordeaux1.fr
2008-05-15
In laser-produced plasmas, large self-generated magnetic fields have been measured. The classical formulas of Braginskii predict that magnetic fields induce a reduction of the magnitude of the heat flux and a rotation of it through the Righi-Leduc effect. In this paper, a second-order tensorial diffusion method used to correctly solve the Righi-Leduc effect in multidimensional codes is presented.
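The flux rotation that makes a tensorial (rather than scalar) diffusion solver necessary can be written down directly. The sketch below uses Braginskii's form q = -kappa_perp grad(T) - kappa_wedge (b x grad(T)) with the field along z; the coefficient values are illustrative, and sign conventions vary between references.

```python
import numpy as np

def braginskii_flux(gradT, kappa_perp, kappa_wedge):
    """Heat flux q = -kappa_perp * gradT - kappa_wedge * (b x gradT)
    for a unit magnetic field b along +z and gradT in the x-y plane.
    The off-diagonal (Righi-Leduc) term rotates q away from -gradT."""
    gx, gy = gradT
    # b x gradT with b = z_hat is (-gy, gx)
    return np.array([-kappa_perp * gx + kappa_wedge * gy,
                     -kappa_perp * gy - kappa_wedge * gx])

# equal coefficients rotate the flux 45 degrees away from the -x direction
q = braginskii_flux((1.0, 0.0), kappa_perp=1.0, kappa_wedge=1.0)
```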
Prodeto, a computer code for probabilistic fatigue design
Energy Technology Data Exchange (ETDEWEB)
Braam, H. [ECN-Solar and Wind Energy, Petten (Netherlands); Christensen, C.J.; Thoegersen, M.L. [Risoe National Lab., Roskilde (Denmark); Ronold, K.O. [Det Norske Veritas, Hoevik (Norway)
1999-03-01
A computer code for structural reliability analyses of wind turbine rotor blades subjected to fatigue loading is presented. With pre-processors that can transform measured and theoretically predicted load series into load-range distributions by rain-flow counting, and with a family of generic distribution models for parametric representation of these distributions, this computer program can carry through probabilistic fatigue analyses of rotor blades.
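The final step of such an analysis, turning a parametric load-range distribution into fatigue damage, can be sketched as follows. Miner's rule with a power-law S-N curve is standard practice, but the curve parameters and the Weibull load model below are illustrative assumptions, not Prodeto's.

```python
import numpy as np

def miner_damage(ranges, K=1e12, m=3.0):
    """Miner's rule: D = sum_i 1/N(S_i) with S-N curve N(S) = K * S**(-m).
    Failure is predicted when the accumulated damage D reaches 1."""
    return np.sum(ranges**m) / K

# parametric load-range model: Weibull-distributed stress ranges
rng = np.random.default_rng(0)
S = 50.0 * rng.weibull(2.0, size=10_000)
D = miner_damage(S)
```

A probabilistic analysis would then propagate the uncertainty in the distribution parameters and the S-N curve through this damage sum.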
A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies
Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.; Bhalla, Amneet Pal Singh
2017-10-01
We present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid-structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.
Methods and computer codes for nuclear systems calculations
Indian Academy of Sciences (India)
B P Kochurov; A P Knyazev; A Yu Kwaretzkheli
2007-02-01
Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space-time calculations. The computer code TRIFON solves the space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space-time neutron processes. A modification of TRIFON was developed for the simulation of space-time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.
Computer code for double beta decay QRPA based calculations
Energy Technology Data Exchange (ETDEWEB)
Barbero, C. A.; Mariano, A. [Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, La Plata, Argentina and Instituto de Física La Plata, CONICET, La Plata (Argentina); Krmpotić, F. [Instituto de Física La Plata, CONICET, La Plata, Argentina and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil); Samana, A. R.; Ferreira, V. dos Santos [Departamento de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, BA (Brazil); Bertulani, C. A. [Department of Physics, Texas A and M University-Commerce, Commerce, TX (United States)
2014-11-11
The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β{sup ±} processes, is extended to include also the nuclear double beta decay.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is becoming more necessary in program-design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework, and it is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…
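A minimal detector of this kind (one common approach, not the algorithm proposed in the paper) compares k-gram shingles of normalized token streams, so that renaming variables, the "little modification" mentioned above, does not hide a copy:

```python
import re
import keyword

def normalize(code):
    """Tokenize and replace every identifier with a placeholder, so that
    renamed variables still produce identical token streams."""
    tokens = re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)
    return [t if t in keyword.kwlist or not re.match(r"[A-Za-z_]", t) else "ID"
            for t in tokens]

def kgram_set(tokens, k=4):
    return {tuple(tokens[i:i+k]) for i in range(len(tokens) - k + 1)}

def similarity(a, b, k=4):
    """Jaccard similarity of k-gram shingles; 1.0 means b is a rename of a."""
    sa, sb = kgram_set(normalize(a), k), kgram_set(normalize(b), k)
    return len(sa & sb) / max(len(sa | sb), 1)

original = "total = 0\nfor x in items:\n    total += x"
renamed  = "s = 0\nfor v in data:\n    s += v"
score = similarity(original, renamed)   # identical structure despite renaming
```

Real systems (e.g. winnowing-based tools) add fingerprint selection to scale this comparison to whole class submissions.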
Connecting Neural Coding to Number Cognition: A Computational Account
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
General review of the MOSTAS computer code for wind turbines
Dungundji, J.; Wendell, J. H.
1981-01-01
The MOSTAS computer code for wind turbine analysis is reviewed, and the techniques and methods used in its analyses are described. Impressions of its strengths and weaknesses are given, and recommendations for its application, modification, and further development are made. Basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed.
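The standard stability tool for systems with periodic coefficients is Floquet analysis: integrate the fundamental matrix over one period and examine the eigenvalues of the resulting monodromy matrix. A self-contained sketch on the Mathieu equation, used here as a stand-in for rotor equations and not taken from MOSTAS:

```python
import numpy as np

def monodromy(A, T, steps=2000):
    """Integrate Phi' = A(t) Phi with RK4 over one period T, starting from
    the identity. The result is the monodromy matrix, whose eigenvalues
    (Floquet multipliers) determine stability: all |mu| <= 1 is stable."""
    Phi = np.eye(A(0.0).shape[0])
    h = T / steps
    for i in range(steps):
        t = i * h
        k1 = A(t) @ Phi
        k2 = A(t + h/2) @ (Phi + h/2 * k1)
        k3 = A(t + h/2) @ (Phi + h/2 * k2)
        k4 = A(t + h) @ (Phi + h * k3)
        Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return Phi

def mathieu(delta, eps):
    """x'' + (delta + eps*cos t) x = 0 as a first-order periodic system."""
    return lambda t: np.array([[0.0, 1.0],
                               [-(delta + eps*np.cos(t)), 0.0]])

# delta = 0.25 sits in the principal parametric-resonance tongue: unstable
mult_unstable = np.abs(np.linalg.eigvals(monodromy(mathieu(0.25, 0.2), 2*np.pi)))
# delta = 2.0 lies between resonance tongues: multipliers stay on the unit circle
mult_stable = np.abs(np.linalg.eigvals(monodromy(mathieu(2.0, 0.2), 2*np.pi)))
```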
Energy Technology Data Exchange (ETDEWEB)
Kok Yan Chan, G.; Sclavounos, P. D.; Jonkman, J.; Hayman, G.
2015-04-02
A hydrodynamics computer module was developed for the evaluation of the linear and nonlinear loads on floating wind turbines, using a new fluid-impulse formulation for coupling with the FAST program. The recently developed formulation allows the computation of linear and nonlinear loads on floating bodies in the time domain, avoids the computationally intensive evaluation of temporal and nonlinear free-surface problems, and admits efficient methods for its computation. The body's instantaneous wetted surface is approximated by a panel mesh, and the discretization of the free surface is circumvented by using the Green function. The evaluation of the nonlinear loads is based on explicit expressions derived from fluid-impulse theory, which can be computed efficiently. Computations are presented of the linear and nonlinear loads on the MIT/NREL tension-leg platform. Comparisons were carried out with frequency-domain linear and second-order methods. Emphasis was placed on modeling accuracy of the magnitude of nonlinear low- and high-frequency wave loads in a sea state. Although fluid-impulse theory is applied to floating wind turbines in this paper, the theory is applicable to other offshore platforms as well.
Computed radiography simulation using the Monte Carlo code MCNPX
Energy Technology Data Exchange (ETDEWEB)
Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)
2010-09-15
Simulating X-ray images has been of great interest in recent years as it makes possible an analysis of how X-ray images are affected owing to relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data.
Fault-tolerant quantum computing with color codes
Landahl, Andrew J; Rice, Patrick R
2011-01-01
We present and analyze protocols for fault-tolerant quantum computing using color codes. We present circuit-level schemes for extracting the error syndrome of these codes fault-tolerantly. We further present an integer-program-based decoding algorithm for identifying the most likely error given the syndrome. We simulated our syndrome extraction and decoding algorithms against three physically-motivated noise models using Monte Carlo methods, and used the simulations to estimate the corresponding accuracy thresholds for fault-tolerant quantum error correction. We also used a self-avoiding walk analysis to lower-bound the accuracy threshold for two of these noise models. We present and analyze two architectures for fault-tolerantly computing with these codes: one in which 2D arrays of qubits are stacked atop each other and one in a single 2D substrate. Our analysis demonstrates that color codes perform slightly better than Kitaev's surface codes when circuit details are ignored. When these details are considered, w...
New Parallel computing framework for radiation transport codes
Energy Technology Data Exchange (ETDEWEB)
Kostin, M.A.; /Michigan State U., NSCL; Mokhov, N.V.; /Fermilab; Niita, K.; /JAERI, Tokai
2010-09-01
A new parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is significantly independent of radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
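Merging several checkpoint files into one is straightforward if each stores history counts and raw score moments rather than finished statistics. A sketch of the tally arithmetic (the field names and layout here are hypothetical, not the format used by MARS15 or PHITS):

```python
import math

def merge_tallies(runs):
    """Merge per-run tallies (n histories, mean score, mean squared score)
    into a pooled mean and standard error. Because the raw moments add,
    the merge is exact: identical to a single run over all histories."""
    n  = sum(r["n"] for r in runs)
    s1 = sum(r["n"] * r["mean"] for r in runs)   # total score
    s2 = sum(r["n"] * r["m2"] for r in runs)     # total squared score
    mean = s1 / n
    var_of_mean = (s2 / n - mean**2) / n
    return {"n": n, "mean": mean, "stderr": math.sqrt(max(var_of_mean, 0.0))}

# two runs whose underlying history scores were [1, 2, 3] and [4, 5]
run_a = {"n": 3, "mean": 2.0, "m2": 14.0/3.0}
run_b = {"n": 2, "mean": 4.5, "m2": 20.5}
merged = merge_tallies([run_a, run_b])
```

Storing second moments instead of variances is the design choice that makes the merge order-independent and exact.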
New Parallel computing framework for radiation transport codes
Kostin, M A; Niita, K
2012-01-01
A new parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is significantly independent of radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. Several checkpoint files can be merged into one thus combining results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be...
LMFBR models for the ORIGEN2 computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1983-06-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
LMFBR models for the ORIGEN2 computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1981-10-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-238U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
User's manual for HDR3 computer code
Energy Technology Data Exchange (ETDEWEB)
Arundale, C.J.
1982-10-01
A description of the HDR3 computer code and instructions for its use are provided. HDR3 calculates space heating costs for a hot dry rock (HDR) geothermal space heating system. The code also compares these costs to those of a specific oil heating system in use at the National Aeronautics and Space Administration Flight Center at Wallops Island, Virginia. HDR3 allows many HDR system parameters to be varied so that the user may examine various reservoir management schemes and may optimize reservoir design to suit a particular set of geophysical and economic parameters.
D'Arcy, Deirdre M; Healy, Anne Marie; Corrigan, Owen I
2009-06-28
One of the earliest level A in vitro dissolution-in vivo absorption correlations (IVIVCs) was established by Levy and co-workers in 1965 using a beaker dissolution apparatus (Levy et al., 1965). In the current work, the computational fluid dynamics (CFD) package Fluent was used to simulate the hydrodynamics within the Levy beaker apparatus and compare them to those within the paddle and basket apparatuses. In vitro velocity values relevant to in vivo dissolution, presented as apparent gastrointestinal fluid velocity (AGV) values, were calculated. The AGV values were estimated from IVIVCs of immediate-release (IR) dosage forms in each apparatus and CFD simulations. The simulations from the Levy apparatus revealed complex hydrodynamics in the region of the stirrer blades, and radial inflow at the centre of the beaker base. The calculated AGV values ranged from 0.001 to 0.026 m/s. In vitro fluid velocities should reflect in vivo dissolution rates affected by natural convection and gastrointestinal motility, in addition to local fluid velocity. The maximum CFD-generated velocity at the base of the paddle apparatus at 20 rpm was similar to the average maximum AGV value determined, suggesting that agitation rates lower than those commonly used (e.g. 50 rpm in the paddle apparatus) may be appropriate when attempting an IVIVC for IR dosage forms.
Computer codes for evaluation of control room habitability (HABIT)
Energy Technology Data Exchange (ETDEWEB)
Stage, S.A. [Pacific Northwest Lab., Richland, WA (United States)
1996-06-01
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.
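At its core, estimating control-room exposure combines a compartment model of the air flows with dose conversion. A minimal single-compartment sketch (the air-exchange rate, breathing rate, and dose factor below are illustrative placeholders, not HABIT's models or data):

```python
import numpy as np

def indoor_concentration(c_out, t, exchange_rate):
    """Single well-mixed compartment with air-exchange rate lam (1/h) and
    no filtration: dC_in/dt = lam * (C_out(t) - C_in), forward Euler."""
    c_in = np.zeros_like(c_out)
    for i in range(1, len(t)):
        dt = t[i] - t[i-1]
        c_in[i] = c_in[i-1] + dt * exchange_rate * (c_out[i-1] - c_in[i-1])
    return c_in

t = np.linspace(0.0, 10.0, 1001)            # hours
c_out = np.ones_like(t)                     # unit outside concentration
c_in = indoor_concentration(c_out, t, exchange_rate=1.0)

# inhaled dose = breathing rate * time-integrated concentration * dose factor
breathing_rate, dose_factor = 1.2, 5.0e-2   # placeholder values
integral = np.sum(0.5 * (c_in[1:] + c_in[:-1]) * np.diff(t))
dose = breathing_rate * integral * dose_factor
```

An integrated package like HABIT chains such modules together, release, transport, in-leakage, and dose, without manual hand-off between programs.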
War of ontology worlds: mathematics, computer code, or Esperanto?
Directory of Open Access Journals (Sweden)
Andrey Rzhetsky
2011-09-01
The use of structured knowledge representations-ontologies and terminologies-has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies.
War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?
Rzhetsky, Andrey; Evans, James A.
2011-01-01
The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276
RAyMOND: An N-body and hydrodynamics code for MOND
Candlish, G N; Fellhauer, M
2014-01-01
The LCDM concordance cosmological model is supported by a wealth of observational evidence, particularly on large scales. At galactic scales, however, the model is poorly constrained, and recent observations suggest a more complex behaviour in the dark sector than may be accommodated by a single cold dark matter component. Furthermore, a modification of the gravitational force in the very weak field regime may account for at least some of the phenomenology of dark matter. A well-known example of such an approach is MOdified Newtonian Dynamics (MOND). While this idea has proven remarkably successful in the context of stellar dynamics in individual galaxies, the effects of such a modification of gravity on galaxy interactions and environmental processes deserve further study. To explore this arena we modify the parallel adaptive mesh refinement code RAMSES to use two formulations of MOND. We implement both the fully non-linear aquadratic Lagrangian (AQUAL) formulation as well as the simpler quasi-linear formulation...
A heterogeneous and parallel computing framework for high-resolution hydrodynamic simulations
Smith, Luke; Liang, Qiuhua
2015-04-01
Shock-capturing hydrodynamic models are now widely applied in the context of flood risk assessment and forecasting, accurately capturing the behaviour of surface water over ground and within rivers. Such models are generally explicit in their numerical basis, and can be computationally expensive; this has prohibited full use of high-resolution topographic data for complex urban environments, now easily obtainable through airborne altimetric surveys (LiDAR). As processor clock speed advances have stagnated in recent years, further computational performance gains are largely dependent on the use of parallel processing. Heterogeneous computing architectures (e.g. graphics processing units or compute accelerator cards) provide a cost-effective means of achieving high throughput in cases where the same calculation is performed with a large input dataset. In recent years this technique has been applied successfully for flood risk mapping, such as within the national surface water flood risk assessment for the United Kingdom. We present a flexible software framework for hydrodynamic simulations across multiple processors of different architectures, within multiple computer systems, enabled using OpenCL and Message Passing Interface (MPI) libraries. A finite-volume Godunov-type scheme is implemented using the HLLC approach to solving the Riemann problem, with optional extension to second-order accuracy in space and time using the MUSCL-Hancock approach. The framework is successfully applied on personal computers and a small cluster to provide considerable improvements in performance. The most significant performance gains were achieved across two servers, each containing four NVIDIA GPUs, with a mix of K20, M2075 and C2050 devices. Advantages are found with respect to decreased parametric sensitivity, and thus in reducing uncertainty, for a major fluvial flood within a large catchment during 2005 in Carlisle, England. Simulations for the three-day event could be performed
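The building blocks named above, a finite-volume Godunov-type scheme with an approximate Riemann solver, can be condensed into a short first-order sketch for the 1-D shallow-water equations. HLL is used here instead of the paper's HLLC (which additionally restores the contact wave), and the grid and dam-break setup are illustrative, not the Carlisle case:

```python
import numpy as np

g = 9.81  # gravity (m/s^2)

def hll_flux(hL, huL, hR, huR):
    """HLL approximate Riemann solver for 1-D shallow water, vectorized
    over all interfaces at once."""
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = np.minimum(uL - cL, uR - cR)            # leftmost wave speed
    sR = np.maximum(uL + cL, uR + cR)            # rightmost wave speed
    fL = np.array([huL, huL * uL + 0.5 * g * hL**2])
    fR = np.array([huR, huR * uR + 0.5 * g * hR**2])
    dU = np.array([hR - hL, huR - huL])
    f_star = (sR * fL - sL * fR + sL * sR * dU) / (sR - sL)
    return np.where(sL >= 0, fL, np.where(sR <= 0, fR, f_star))

def step(h, hu, dx, cfl=0.45):
    """One first-order Godunov update; returns the stable time step used."""
    dt = cfl * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))
    F = hll_flux(h[:-1], hu[:-1], h[1:], hu[1:])
    h[1:-1]  -= dt / dx * (F[0][1:] - F[0][:-1])
    hu[1:-1] -= dt / dx * (F[1][1:] - F[1][:-1])
    return dt

# dam break over a wet bed on a 10 m domain
n = 200
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
t = 0.0
while t < 0.2:
    t += step(h, hu, dx=10.0 / n)
```

Because every interface flux is computed with the same arithmetic, this update maps naturally onto the GPU/OpenCL data parallelism the framework exploits; MUSCL-Hancock reconstruction would be layered on top of the same flux routine for second-order accuracy.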
Zeng, X.; Scovazzi, G.
2016-06-01
We present a monolithic arbitrary Lagrangian-Eulerian (ALE) finite element method for computing highly transient flows with strong shocks. We use a variational multiscale (VMS) approach to stabilize a piecewise-linear Galerkin formulation of the equations of compressible flows, and an entropy artificial viscosity to capture strong solution discontinuities. Our work demonstrates the feasibility of VMS methods for highly transient shock flows, an area of research for which the VMS literature is extremely scarce. In addition, the proposed monolithic ALE method is an alternative to the more commonly used Lagrangian+remap methods, in which, at each time step, a Lagrangian computation is followed by mesh smoothing and remap (conservative solution interpolation). Lagrangian+remap methods are the methods of choice in shock hydrodynamics computations because they provide nearly optimal mesh resolution in proximity of shock fronts. However, Lagrangian+remap methods are not well suited for imposing inflow and outflow boundary conditions. These issues offer an additional motivation for the proposed approach, in which we first perform the mesh motion, and then the flow computations using the monolithic ALE framework. The proposed method is second-order accurate and stable, as demonstrated by extensive numerical examples in two and three space dimensions.
Benchmarking of computer codes and approaches for modeling exposure scenarios
Energy Technology Data Exchange (ETDEWEB)
Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
Computational Complexity of Decoding Orthogonal Space-Time Block Codes
Ayanoglu, Ender; Karipidis, Eleftherios
2009-01-01
The computational complexity of optimum decoding for an orthogonal space-time block code G is quantified, where G satisfies the orthogonality property G^H G = c (|s_1|^2 + ... + |s_k|^2) I, with G^H the Hermitian transpose of G, s_i the symbols of the code, I an identity matrix, and c a positive integer. Four equivalent techniques of optimum decoding, all having the same computational complexity, are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples. This paper corrects and extends [1], [2], and unifies them with results from the literature. In addition, a number of results from the literature are extended to the case c > 1.
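The orthogonality property above can be checked numerically. A minimal sketch using the familiar 2x2 Alamouti code (our own example, not taken from the paper), for which c = 1:

```python
def herm_times(G):
    """Compute G^H G for a matrix given as a list of rows of complex numbers."""
    n = len(G[0])
    return [[sum(G[k][i].conjugate() * G[k][j] for k in range(len(G)))
             for j in range(n)] for i in range(n)]

s1, s2 = 1 + 2j, -0.5 + 1j                  # arbitrary transmitted symbols
G = [[s1, s2],
     [-s2.conjugate(), s1.conjugate()]]     # Alamouti code matrix

energy = abs(s1) ** 2 + abs(s2) ** 2        # sum of squared symbol magnitudes
H = herm_times(G)
# Off-diagonal entries vanish and the diagonal equals c * energy with c = 1:
assert abs(H[0][1]) < 1e-12 and abs(H[1][0]) < 1e-12
assert abs(H[0][0] - energy) < 1e-12 and abs(H[1][1] - energy) < 1e-12
```

It is this diagonal structure that decouples the symbols and makes optimum decoding symbol-by-symbol, which is the source of the low complexity the paper quantifies.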
Fish, Frank E; Beneski, John T; Ketten, Darlene R
2007-06-01
The flukes of cetaceans function in the hydrodynamic generation of forces for thrust, stability, and maneuverability. The three-dimensional geometry of flukes is associated with production of lift and drag. Data on fluke geometry were collected from 19 cetacean specimens representing eight odontocete genera (Delphinus, Globicephala, Grampus, Kogia, Lagenorhynchus, Phocoena, Stenella, Tursiops). Flukes were imaged as 1 mm thickness cross-sections using X-ray computer-assisted tomography. Fluke shapes were characterized quantitatively by dimensions of the chord, maximum thickness, and position of maximum thickness from the leading edge. Sections were symmetrical about the chordline and had a rounded leading edge and highly tapered trailing edge. The thickness ratio (maximum thickness/chord) among species increased from the insertion on the tailstock to a maximum at 20% of span and then decreased steadily to the tip. Thickness ratio ranged from 0.139 to 0.232. These low values indicate reduced drag while moving at high speed. The position of maximum thickness from the leading edge remained constant over the fluke span at an average for all species of 0.285 chord. The displacement of the maximum thickness reduces the tendency of the flow to separate from the fluke surface, potentially affecting stall patterns. Similarly, the relatively large leading edge radius allows greater lift generation and delays stall. Computational analysis of fluke profiles at 50% of span showed that flukes were generally comparable to or better than engineered foils for lift generation. Tursiops had the highest lift coefficients, which were superior to engineered foils by 12-19%. Variation in the structure of cetacean flukes reflects different hydrodynamic characteristics that could influence swimming performance.
Bragg optics computer codes for neutron scattering instrument design
Energy Technology Data Exchange (ETDEWEB)
Popovici, M.; Yelon, W.B.; Berliner, R.R. [Missouri Univ. Research Reactor, Columbia, MO (United States); Stoica, A.D. [Institute of Physics and Technology of Materials, Bucharest (Romania)
1997-09-01
Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.
General review of the MOSTAS computer code for wind turbines
Energy Technology Data Exchange (ETDEWEB)
Dugundji, J.; Wendell, J.H.
1981-06-01
The MOSTAS computer code for wind turbine analysis is reviewed, and the techniques and methods used in its analyses are described in some detail. Some impressions of its strengths and weaknesses, and some recommendations for its application, modification, and further development are made. Additionally, some basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed in the Appendices.
Refactoring Android Java Code for On-Demand Computation Offloading
Zhang, Ying; Huang, Gang; Liu, Xuanzhe; Zhang, Wei; Zhang, Wei; Mei, Hong; Yang, Shunxiang
2012-01-01
Computation offloading is a promising way to improve the performance, as well as reduce the battery energy consumption, of a smartphone application by executing some part of the application on a remote server. Supporting such a capability is not easy for smartphone app developers, for 1) correctness: some code, e.g. that for GPS, gravity, and other sensors, can only run on the smartphone, so the developers have to identify which part of the application cannot be offload...
Methodology for computational fluid dynamics code verification/validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.
1995-07-01
The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability, and robustness of CFD codes and their predictions. This is a welcome trend, because it demonstrates that CFD is maturing from a research tool into a discipline that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and by their solution. Boundary conditions for both the partial differential equations and the discretized equations are discussed. Contrasts are drawn between the assumptions and the actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and by discretization of the partial differential equations.
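A standard tool for quantifying the discretization errors discussed above is the observed order of accuracy computed from solutions on systematically refined grids. A hedged sketch of the Richardson-style estimate commonly used in code verification (an illustration of the general technique, not a procedure from this paper):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from three grid levels with a constant
    refinement ratio r, assuming f = f_exact + C*h^p + higher-order terms:
        p = log(|f_coarse - f_medium| / |f_medium - f_fine|) / log(r)."""
    return (math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine))
            / math.log(r))

# For a second-order scheme, halving h quarters the error, so p -> 2:
p = observed_order(1.016, 1.004, 1.001, r=2)
```

If the observed p falls short of the scheme's formal order, that discrepancy itself flags a discretization or boundary-condition implementation error of the kind the paper discusses.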
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
Energy Technology Data Exchange (ETDEWEB)
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Energy Technology Data Exchange (ETDEWEB)
Pompa, J.A.; Allik, H.; Webman, K.; Spaulding, M.
1979-02-01
The design and analysis of the cold water pipe (CWP) is one of the most important technological problems to be solved in the OTEC ocean engineering program. Analytical computer models have to be developed and verified in order to provide an engineering approach for the OTEC CWP with regard to environmental factors such as waves, currents, platform motions, etc., and for various structural configurations and materials such as rigid wall CWP, compliant CWP, stockade CWP, etc. To this end, Analysis and Technology, Inc. has performed a review and evaluation of shell structural analysis computer programs applicable to the design of an OTEC CWP. Included in this evaluation are discussions of the hydrodynamic flow field, structure-fluid interaction, and the state-of-the-art analytical procedures for analysis of offshore structures. The analytical procedures which must be incorporated into the design of a CWP are described. A brief review of the state of the art for analysis of offshore structures and the need for a shell analysis for the OTEC CWP are included. A survey of available shell computer programs, both special purpose and general purpose, is presented, with discussions of the features of these dynamic shell programs and how the hydrodynamic loads are represented within them. The hydrodynamic loads design criteria for the CWP are described. An assessment of the current state of knowledge for hydrodynamic loads is presented. (WHK)
Baiotti, Luca; Shibata, Masaru; Yamamoto, Tetsuro
2010-09-01
We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the whisky code and the sacra code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular, in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.
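The conservation checks described above amount to monitoring the relative drift of each conserved quantity over the evolution, and cross-code agreement is the analogous relative difference between the two codes' outputs. A trivial hedged sketch (the time series below are made-up illustrative numbers, not data from the paper):

```python
def max_relative_drift(series):
    """Largest relative deviation of a conserved quantity (e.g. rest mass,
    total mass energy, angular momentum) from its initial value."""
    q0 = series[0]
    return max(abs(q - q0) / abs(q0) for q in series)

def max_relative_difference(a, b):
    """Worst-case relative disagreement between two codes' outputs."""
    return max(abs(x - y) / max(abs(x), abs(y)) for x, y in zip(a, b))

# Toy rest-mass histories for two hypothetical runs:
run_a = [1.000, 0.999, 0.997, 0.996]
run_b = [1.000, 0.998, 0.995, 0.993]
```

An agreement criterion like the paper's "better than about 10%" is then simply `max_relative_difference(run_a, run_b) < 0.10`.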
Improvement of level-1 PSA computer code package
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.
1997-07-01
This year is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The improvement of level-1 PSA Computer Codes', is divided into two main activities: (1) improvement of level-1 PSA methodology, and (2) development of application methodology of PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP has been converted to the PC-Windows environment. To improve efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling have been developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) has been developed, considering dependency among source data. A computer program which handles dependency among data sources has also been developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods have been reviewed and a CCF database established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database has also been developed. A CCF analysis reflecting plant-specific defensive strategies against CCF events has also been performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique has been applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOV). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies have been adopted and applied to improving the reliability of the emergency diesel generators (EDG) of nuclear power plants. To support RCA and RCM analyses, two software programs, EPIS and RAM Pro, have been developed. (author). 129 refs., 20 tabs., 60 figs.
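The Bayesian updating behind the generic reliability database can be illustrated in its simplest single-stage conjugate form (the report uses a three-stage technique across many data sources; the sketch and its numbers below are our own):

```python
def beta_binomial_update(alpha, beta, failures, demands):
    """Conjugate update of a Beta(alpha, beta) prior on a per-demand
    failure probability after observing `failures` in `demands` trials."""
    return alpha + failures, beta + (demands - failures)

def beta_mean(alpha, beta):
    """Posterior mean failure probability."""
    return alpha / (alpha + beta)

# Generic prior (mean 0.005 per demand), updated with plant-specific evidence:
a, b = beta_binomial_update(0.5, 99.5, failures=2, demands=500)
posterior_p = beta_mean(a, b)
```

Repeating such updates while weighting each source's credibility is the flavor of dependency handling the multi-stage technique addresses.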
Computationally efficient sub-band coding of ECG signals.
Husøy, J H; Gjerde, T
1996-03-01
A data compression technique is presented for the compression of discrete-time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superiority in terms of computational efficiency. We conclude that the present scheme, which is suitable for real-time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information.
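The coding chain described above -- band splitting, thresholding, uniform quantization, run-length coding -- can be sketched with a single Haar-like two-band split (the paper uses 16-band QMF/IIR banks and a final Huffman stage; everything below, including the toy signal, is a simplified illustration of ours):

```python
def haar_analysis(x):
    """One-level two-band split: low-pass averages, high-pass details."""
    low = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return low, high

def threshold_quantize(band, thresh, step):
    """Zero small coefficients, then quantize uniformly."""
    return [0 if abs(v) < thresh else round(v / step) for v in band]

def run_length(codes):
    """Collapse runs of zeros, which dominate after thresholding."""
    out, zeros = [], 0
    for c in codes:
        if c == 0:
            zeros += 1
        else:
            if zeros:
                out.append(('Z', zeros)); zeros = 0
            out.append(c)
    if zeros:
        out.append(('Z', zeros))
    return out

x = [0, 1, 2, 1, 0, -1, -2, -1] * 4         # toy stand-in for an ECG segment
low, high = haar_analysis(x)
compressed = run_length(threshold_quantize(high, thresh=0.6, step=0.5))
```

On this smooth toy signal the detail band falls entirely below threshold and collapses to a single zero-run token, which is exactly the redundancy the Huffman stage then exploits further.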
Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate
Guruswami, Venkatesan
2010-01-01
In this paper, we consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process which adds the errors can be described by a sufficiently "simple" circuit. For three classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. (1) Unique decoding for additive errors: We give the first construction of poly-time encodable/decodable codes for additive (a.k.a. oblivious) channels that achieve the Shannon capacity 1-H(p). Such channels capture binary symmetric errors and burst errors as special cases. (2) List-decoding for log-space channels: A space-S(n) channel reads and modifies the transmitted codeword as a stream, using at most S(n) bits of workspace on transmissions of n bi...
Compressing industrial computed tomography images by means of contour coding
Jiang, Haina; Zeng, Li
2013-10-01
An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, first contour extraction and then compression, which is detrimental to compression efficiency. So we merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. By that, the two steps of the traditional contour-based compression method are simplified into only one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method can achieve a good compression ratio while keeping satisfactory quality in the compressed images.
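Freeman chain coding, the idea merged into the extraction step above, encodes each step along a contour as one of eight direction symbols. A minimal sketch (the paper's 2-D-IMCE method is more involved; the function and example here are our own):

```python
# 8-connected Freeman directions in image coordinates (y grows downward):
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def freeman_chain(points):
    """Encode an ordered pixel contour as Freeman direction codes between
    successive pixels; each step must land on an 8-neighbour."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return codes

# A 2x2 pixel square traversed clockwise starting at the top-left pixel:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
```

Because neighbouring steps tend to repeat the same direction on piecewise-constant images, the resulting symbol stream is highly compressible by the Huffman stage.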
Hydrodynamics and Water Quality forecasting over a Cloud Computing environment: INDIGO-DataCloud
Aguilar Gómez, Fernando; de Lucas, Jesús Marco; García, Daniel; Monteoliva, Agustín
2017-04-01
Algae blooms due to eutrophication are an extended problem for water reservoirs and lakes that directly impacts water quality. A bloom can create a dead zone that lacks enough oxygen to support life, and it can also be harmful to humans, so it must be controlled in water masses used for supply, bathing, or other purposes. Hydrodynamic and water quality modelling can contribute to forecasting the status of the water system in order to alert authorities before an algae bloom event occurs. It can be used to predict scenarios and find solutions to reduce the harmful impact of the blooms. High-resolution models need to process a large amount of data using a sufficiently robust computing infrastructure. INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project that aims at developing a data and computing platform targeting scientific communities, deployable on multiple hardware and provisioned over hybrid (private or public) e-infrastructures. The project addresses the development of solutions for different case studies using different Cloud-based alternatives. In the first INDIGO software release, a set of components is ready to manage the deployment of services to perform a number of Delft3D simulations (for calibration or scenario definition) over a Cloud computing environment, using Docker technology: TOSCA requirement description, a Docker repository, an orchestrator, AAI (Authentication and Authorization), and OneData (a distributed storage system). Moreover, the Future Gateway portal, based on Liferay, provides a user-friendly interface where the user can configure the simulations. Due to the data approach of INDIGO, the developed solutions can contribute to managing the full data life cycle of a project, thanks to different tools to manage datasets and even metadata. Furthermore, the cloud environment provides a dynamic, scalable and easy-to-use framework for non-IT-expert users. This framework is potentially capable of automating the processing of
A COMPUTATIONAL STUDY ON BACKWARD SWIMMING HYDRODYNAMICS IN THE EEL ANGUILLA ANGUILLA
Institute of Scientific and Technical Information of China (English)
HU Wen-rong; TONG Bin-gang; MA Hui-yang; LIU Hao
2005-01-01
Eels can perform both forward and backward undulatory swimming, but few studies exist on how eels propel themselves backward. A computational study on the unsteady hydrodynamics of backward swimming in the eel Anguilla anguilla is carried out and presented. A two-dimensional geometric model of the European eel body in its middle horizontal section is approximated by a NACA0005 airfoil. Kinematic data for the backward- and forward-swimming eel used in the computational modeling are based on experimental results for the European eel. The present study provides the flow field characteristics of three typical cases of backward swimming, and confirms the conjecture of Wu: when the eel swims steadily, the vortex centers … An extensive comparison between backward and forward swimming further reveals that the controllable parameters, such as the frequency, amplitude, and wavelength of the traveling wave, have a similar influence on propulsion performance as in forward swimming. But backward swimming is not simply a reversed forward swimming. Backward swimming does show a significant discrepancy in propulsion performance: use of a constant-amplitude wave profile enables larger force generation for maneuverability, but with much lower propulsive efficiency than the linearly increasing amplitude wave profile of forward swimming. The swimming mode eels actually choose is the best choice associated with their propulsive requirements, as well as their physiological and ecological adaptation.
Kuroda, Takami; Kotake, Kei
2015-01-01
We present a new multi-dimensional radiation-hydrodynamics code for massive stellar core-collapse in full general relativity (GR). Employing an M1 analytical closure scheme, we solve spectral neutrino transport of the radiation energy and momentum based on a truncated moment formalism. Regarding neutrino opacities, we take into account the so-called standard set in state-of-the-art simulations, in which inelastic neutrino-electron scattering, thermal neutrino production via pair annihilation and nucleon-nucleon bremsstrahlung are included. In addition to gravitational redshift and Doppler effects, these energy-coupling reactions are incorporated in the moment equations in a covariant form. While the Einstein field equations and the spatial advection terms in the radiation-hydrodynamics equations are evolved explicitly, the source terms due to neutrino-matter interactions and energy shift in the radiation moment equations are integrated implicitly by an iteration method. To verify our code, we conduct several ...
Müller, Bernhard; Janka, Hans-Thomas
2014-06-01
Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M_⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄
Multicode comparison of selected source-term computer codes
Energy Technology Data Exchange (ETDEWEB)
Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.
1989-04-01
This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.
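The decay-only comparison (the fifth case above) reduces to evaluating N(t) = N0 · 2^(−t/T½) with each code's adopted half-lives and then forming comparative ratios. A hedged sketch using an illustrative Cs-137 half-life of 30.08 y (an assumed value for the example, not a number from the report):

```python
def decayed_fraction(t_years, half_life_years):
    """Fraction of an initial inventory remaining after decay only:
    N(t)/N0 = 2**(-t / T_half)."""
    return 2.0 ** (-t_years / half_life_years)

def comparative_ratio(a, b):
    """Ratio of the form used in the report's comparison tables
    (one code's result over another's)."""
    return a / b

# Cs-137 over part of the report's 30 d .. 10,000 y decay horizon:
periods = [30 / 365.25, 1.0, 10.0, 100.0, 1000.0]   # years
remaining = [decayed_fraction(t, 30.08) for t in periods]
```

Two codes that adopt slightly different decay constants produce comparative ratios drifting away from 1.0 as the decay period grows, which is exactly the effect the fifth case was designed to expose.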
MULTI-fs - A computer code for laser-plasma interaction in the femtosecond regime
Ramis, R.; Eidmann, K.; Meyer-ter-Vehn, J.; Hüller, S.
2012-03-01
The code MULTI-fs is a numerical tool devoted to the study of the interaction of ultrashort sub-picosecond laser pulses with matter in the intensity range from 10^11 to 10^17 W cm^-2. Hydrodynamics is solved in one-dimensional geometry together with laser energy deposition and transport by thermal conduction and radiation. In contrast to long nanosecond pulses, short pulses generate steep-gradient plasmas with typical scale lengths on the order of the laser wavelength and smaller. Under these conditions, Maxwell's equations are solved explicitly to obtain the light field. Concerning laser absorption, two different models for the electron-ion collision frequency are implemented to cover the regime of warm dense matter between high-temperature plasma and solid matter, and also interaction with short-wavelength (VUV) light. The MULTI-fs code is based on the MULTI radiation-hydrodynamic code [R. Ramis, R. Schmalz, J. Meyer-ter-Vehn, Comp. Phys. Comm. 49 (1988) 475] and most of the original features for the treatment of radiation are maintained. Program summary: Program title: MULTI-fs Catalogue identifier: AEKT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49 598 No. of bytes in distributed program, including test data, etc.: 443 771 Distribution format: tar.gz Programming language: FORTRAN Computer: PC (32 bits and 64 bits architecture) Operating system: Linux/Unix RAM: 1.6 MiB Classification: 19.13, 21.2 Subprograms used: Cat Id: AECV_v1_0; Title: MULTI2D; Reference: CPC 180 (2009) 977 Nature of problem: One-dimensional interaction of intense ultrashort (sub-picosecond) and ultraintense (up to 10^17 W cm^-2) laser beams with matter. Solution method: The hydrodynamic motion coupled to laser propagation and
Knowlton, Marie; Wetzel, Robin
2006-01-01
This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…
Geometric plane shapes for computer-generated holographic engraving codes
Augier, Ángel G.; Rabal, Héctor; Sánchez, Raúl B.
2017-04-01
We report a new theoretical and experimental study of hologravures, i.e., holographic computer-generated laser engravings. A geometric theory of images based on the general principles of light-ray behaviour is presented. The models used are also applicable to similar engravings obtained by any non-laser method, and the solutions allow the analysis of particular situations, not only in light-reflection mode but also in transmission-mode geometry. This approach is a novel perspective allowing the three-dimensional (3D) design of engraved images for specific ends. We prove theoretically that plane curves of very general geometric shapes can be used to encode image information onto a two-dimensional (2D) engraving, showing a notable influence on the behaviour of the reconstructed images, which appears to be an exciting topic of investigation, extending its applications. Several cases of codes using particular curvilinear shapes are studied experimentally. The computer-generated objects are coded by using the chosen curve type and engraved by a laser on a plane surface of suitable material. All images are recovered optically by adequate illumination. The pseudoscopic or orthoscopic character of these images is considered, and an appropriate interpretation is presented.
A computational method for analysis of underwater dolphin kick hydrodynamics in human swimming.
von Loebbecke, Alfred; Mittal, Rajat; Mark, Russell; Hahn, James
2009-03-01
We present a new method that combines the use of laser body scans, underwater video footage, software-based animation, and a fully unsteady computational fluid dynamics technique to simulate and examine the hydrodynamics of the dolphin kick. The focus of the current work is to model this particular stroke in all its complexity with minimal ad-hoc assumptions or simplifications. Simulations of one female and one male swimmer (both at about 1.7 m beneath the water surface) at velocities of 0.95 and 1.31 m/s and Strouhal numbers of 1.21 and 1.06 respectively are presented. Vorticity and fluid velocity profiles in the wake are examined in detail for both swimmers. A three-dimensional vortex ring is clearly identified in the wake for one of the cases and a two-dimensional slice through the ring corroborates previous experiments of Miwa et al. (2006). We also find that most of the thrust is produced by the feet and in both cases the down-kick produces much larger thrust than the up-kick.
Evaluation of detonation energy from EXPLO5 computer code results
Energy Technology Data Exchange (ETDEWEB)
Suceska, M. [Brodarski Institute, Zagreb (Croatia). Marine Research and Special Technologies
1999-10-01
The detonation energies of several high explosives are evaluated from the results of the chemical-equilibrium computer code EXPLO5. Two methods of evaluating the detonation energy are applied: (a) direct evaluation from the internal energy of the detonation products at the CJ point and the energy of shock compression of the detonation products, i.e. by equating the detonation energy with the heat of detonation, and (b) evaluation from the expansion isentrope of the detonation products, applying the JWL model. These energies are compared to the energies computed from cylinder-test-derived JWL coefficients. It is found that the detonation energies obtained directly from the energy of the detonation products at the CJ point are uniformly too high (0.9445±0.577 kJ/cm³), while the detonation energies evaluated from the expansion isentrope are in good agreement (0.2072±0.396 kJ/cm³) with the energies calculated from cylinder-test-derived JWL coefficients. (orig.)
Good, Jonathon; Keenan, Sarah; Mishra, Punya
2016-01-01
The popular press is rife with examples of how students in the United States and around the globe are learning to program, make, and tinker. The Hour of Code, maker-education, and similar efforts are advocating that more students be exposed to principles found within computer science. We propose an expansion beyond simply teaching computational…
A computer code to simulate X-ray imaging techniques
Energy Technology Data Exchange (ETDEWEB)
Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel
2000-09-01
A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
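The X-ray attenuation law that such ray tracers apply along each ray can be sketched in a few lines. This is a hypothetical illustration, not the code described above; the attenuation coefficients are placeholder values, not measured material data.

```python
import math

# Beer-Lambert attenuation along one ray: I = I0 * exp(-sum(mu_i * t_i))
# over the material segments the ray crosses. Coefficients are made up.
def transmitted_intensity(i0, segments):
    """segments: list of (mu_per_cm, thickness_cm) crossed by one ray."""
    return i0 * math.exp(-sum(mu * t for mu, t in segments))

# A ray crossing 2 cm of a light material and 0.5 cm of a denser insert.
ray = [(0.5, 2.0), (4.0, 0.5)]
print(transmitted_intensity(1000.0, ray))  # 1000 * exp(-3)
```

In a full simulator this evaluation is repeated for every detector pixel (and, for polychromatic spectra, for every energy bin), which is what makes the deterministic, noise-free images described above possible.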
Reasoning with Computer Code: a new Mathematical Logic
Pissanetzky, Sergio
2013-01-01
A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self
Energy Technology Data Exchange (ETDEWEB)
Shestakov, A I; Offner, S R
2006-09-21
We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time, creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo-transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance, and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface, and the data is derived from the coarse-level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate the utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large-Y simulation contradicts a long-standing theory
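The pseudo-transient continuation stabilization described above can be illustrated on a scalar model problem. This sketch is not the authors' scheme: the residual F(u) = u⁴ − 1 (a radiation-like quartic nonlinearity) and the timestep-doubling rule are illustrative assumptions only.

```python
# Scalar sketch of pseudo-transient continuation (Psi-tc): a damped
# Newton update (1/tau + F'(u)) * du = -F(u), with the pseudo-timestep
# tau grown as the iteration proceeds. Model problem: F(u) = u**4 - 1.
def psi_tc(u=5.0, tau=0.1, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        residual = u**4 - 1.0
        if abs(residual) < tol:
            break
        jac = 4.0 * u**3
        # The 1/tau term keeps the (here scalar) "linear system" safely
        # positive and damps early steps; large tau recovers pure Newton.
        u -= residual / (1.0 / tau + jac)
        tau *= 2.0
    return u

print(psi_tc())  # → approximately 1.0 (the physical root)
```

The same mechanism, applied per group to the elliptic solves, is what lets a scheme of this kind take increasingly larger timesteps without losing positivity.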
Energy Technology Data Exchange (ETDEWEB)
Shestakov, A I; Offner, S R
2007-03-02
We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time, creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo-transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance, and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface, and the data is derived from the coarse-level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate the utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large-Y simulation contradicts a long-standing theory
Interface design of VSOP'94 computer code for safety analysis
Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi
2014-09-01
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integral, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates the preparation of data, runs the VSOP code, and reads the results in a more user-friendly way, and that is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based interface for preprocessing aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to be useful in simplifying and speeding up the process and the analysis of safety aspects.
A computer code for analysis of severe accidents in LWRs
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-07-01
The ICARE2 computer code, developed and validated since 1988 at IPSN (nuclear safety and protection institute), calculates in a mechanistic way the physical and chemical phenomena involved in the core degradation process during possible severe accidents in LWRs. The coupling between ICARE2 and the best-estimate thermal-hydraulics code CATHARE2 was completed at IPSN and led to the release of a first ICARE/CATHARE V1 version in 1999, followed by two successive revisions in 2000 and 2001. This document gathers all the contributions presented at the first international ICARE/CATHARE users' club seminar, which took place in November 2001. The seminar was characterized by the high quality and variety of the presentations, showing an increase of reactor applications and user needs in this area (2D/3D aspects, reflooding, corium slumping into the lower head, ...). Two sessions were organized. The first was dedicated to applications of ICARE2 V3mod1 to small-scale experiments such as the PHEBUS FPT2 and FPT3 tests, PHEBUS AIC, QUENCH experiments, the NRU-FLHT-5 test, ACRR-MP1 and DC1 experiments, CORA-PWR tests, and the PBF-SFD1.4 test. The second session involved ICARE/CATHARE V1mod1 reactor applications and users' guidelines. Among the reactor applications were: code applicability to high-burn-up fuel rods, simulation of the TMI-2 transient, simulation of a PWR-900 high-pressure severe accident sequence, and simulation of a VVER-1000 large-break LOCA scenario. (A.C.)
Compute-and-Forward: Harnessing Interference through Structured Codes
Nazer, Bobak
2009-01-01
Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice, and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information. Its potential is demonstrated through examples drawn ...
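The key idea, relays forwarding decoded linear equations that the destination then inverts, can be mimicked in a toy finite-field setting. The field size, coefficients, and messages below are arbitrary illustrative choices, not part of the paper's nested-lattice construction.

```python
# Toy finite-field analogue of compute-and-forward: two relays each decode
# a different integer combination of the messages (mod p), and the
# destination solves the 2x2 linear system to recover both messages.
p = 17
w1, w2 = 5, 11                 # the two transmitted messages in GF(p)

eq1 = (1 * w1 + 2 * w2) % p    # relay 1 decodes w1 + 2*w2
eq2 = (3 * w1 + 1 * w2) % p    # relay 2 decodes 3*w1 + w2

# Destination inverts the coefficient matrix [[1, 2], [3, 1]] mod p.
det = (1 * 1 - 2 * 3) % p
det_inv = pow(det, -1, p)      # modular inverse (Python 3.8+)
r1 = (det_inv * (1 * eq1 - 2 * eq2)) % p
r2 = (det_inv * (1 * eq2 - 3 * eq1)) % p
print(r1, r2)  # recovers 5 11
```

The real scheme works over lattices so that noisy channel outputs still decode to integer combinations; this sketch only shows the destination-side algebra.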
Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation
Scott, James R.
2004-01-01
NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single-airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained to the benchmark problem and, in addition, compares them with classical flat-plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.
Fire aerosol experiment and comparisons with computer code predictions
Energy Technology Data Exchange (ETDEWEB)
Gregory, W.S.; Nichols, B.D.; White, B.W.; Smith, P.R.; Leslie, I.H.; Corkran, J.R.
1988-01-01
Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the US Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures to 300°C. To date, we have used quartz aerosol with a median diameter of about 10 μm as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 ft³/min) and three nominal gas temperatures (ambient, 150°C, and 300°C). The test results are presented in the form of plots of aerosol deposition vs. length of duct. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data. 3 refs., 10 figs., 1 tab.
Application of the RESRAD computer code to VAMP scenario S
Energy Technology Data Exchange (ETDEWEB)
Gnanapragasam, E.K.; Yu, C.
1997-03-01
The RESRAD computer code developed at Argonne National Laboratory was among 11 models from 11 countries participating in the international Scenario S validation of radiological assessment models with Chernobyl fallout data from southern Finland. The validation test was conducted by the Multiple Pathways Assessment Working Group of the Validation of Environmental Model Predictions (VAMP) program coordinated by the International Atomic Energy Agency. RESRAD was enhanced to provide an output of contaminant concentrations in environmental media and in food products to compare with measured data from southern Finland. Probability distributions for inputs that were judged to be most uncertain were obtained from the literature and from information provided in the scenario description prepared by the Finnish Centre for Radiation and Nuclear Safety. The deterministic version of RESRAD was run repeatedly to generate probability distributions for the required predictions. These predictions were used later to verify the probabilistic RESRAD code. The RESRAD predictions of radionuclide concentrations are compared with measured concentrations in selected food products. The radiological doses predicted by RESRAD are also compared with those estimated by the Finnish Centre for Radiation and Nuclear Safety.
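The probabilistic procedure described, repeatedly running a deterministic model with inputs sampled from assumed distributions, can be sketched as follows. The one-line stand-in model and the lognormal input spread are illustrative assumptions, not RESRAD internals or VAMP data.

```python
import random

# Monte Carlo wrapping of a deterministic model: sample an uncertain
# input, run the model, and collect the output distribution.
def deterministic_model(transfer_factor, intake_kg):
    return transfer_factor * intake_kg      # predicted concentration

random.seed(0)
samples = sorted(deterministic_model(random.lognormvariate(0.0, 0.5), 100.0)
                 for _ in range(10000))
median = samples[len(samples) // 2]         # empirical 50th percentile
print(median)  # near 100, the model's median prediction
```

Percentiles of the collected outputs are what one would then compare against measured concentrations, as the validation exercise above did.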
Computer Tensor Codes to Design the Warp Drive
Maccone, C.
To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.
Computer code for the atomistic simulation of lattice defects and dynamics. [COMENT code
Energy Technology Data Exchange (ETDEWEB)
Schiffgens, J.O.; Graves, N.J.; Oster, C.A.
1980-04-01
This document has been prepared to satisfy the need for a detailed, up-to-date description of a computer code that can be used to simulate phenomena on an atomistic level. COMENT was written in FORTRAN IV and COMPASS (CDC assembly language) to solve the classical equations of motion for a large number of atoms interacting according to a given force law, and to perform the desired ancillary analysis of the resulting data. COMENT is a dual-purpose code, intended to describe static defect configurations as well as the detailed motion of atoms in a crystal lattice. It can be used to simulate the effect of temperature, impurities, and pre-existing defects on radiation-induced defect production mechanisms, defect migration, and defect stability.
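The core task of such a code, integrating the classical equations of motion for interacting atoms, can be sketched with a velocity-Verlet step. A single 1D harmonic "bond" stands in for COMENT's general interatomic force law; all parameters are illustrative.

```python
# Velocity-Verlet integration of the classical equations of motion.
# A 1D harmonic force (F = -k*x) replaces a general force law.
def velocity_verlet(x, v, k=1.0, m=1.0, dt=0.01, steps=1000):
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = -k * x / m                # force at the new position
        v += 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
    return x, v

x, v = velocity_verlet(1.0, 0.0)
energy = 0.5 * v * v + 0.5 * x * x
print(energy)  # stays near the initial 0.5 (symplectic integrator)
```

A production code repeats this update for every atom with forces summed over neighbours; the symplectic property is what keeps long defect-dynamics runs stable.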
Computational hydrodynamics and optical performance of inductively-coupled plasma adaptive lenses
Energy Technology Data Exchange (ETDEWEB)
Mortazavi, M.; Urzay, J., E-mail: jurzay@stanford.edu; Mani, A. [Center for Turbulence Research, Stanford University, Stanford, California 94305-3024 (United States)
2015-06-15
This study addresses the optical performance of a plasma adaptive lens for aero-optical applications by using both axisymmetric and three-dimensional numerical simulations. Plasma adaptive lenses are based on the effects of free electrons on the phase velocity of incident light, which, in theory, can be used as a phase-conjugation mechanism. A closed cylindrical chamber filled with Argon plasma is used as a model lens into which a beam of light is launched. The plasma is sustained by applying a radio-frequency electric current through a coil that envelops the chamber. Four different operating conditions, ranging from low to high powers and induction frequencies, are employed in the simulations. The numerical simulations reveal complex hydrodynamic phenomena related to buoyant and electromagnetic laminar transport, which generate, respectively, large recirculating cells and wall-normal compression stresses in the form of local stagnation-point flows. In the axisymmetric simulations, the plasma motion is coupled with near-wall axial striations in the electron-density field, some of which propagate in the form of low-frequency traveling disturbances adjacent to vortical quadrupoles that are reminiscent of Taylor-Görtler flow structures in centrifugally unstable flows. Although the refractive-index fields obtained from axisymmetric simulations lead to smooth beam wavefronts, they are found to be unstable to azimuthal disturbances in three of the four three-dimensional cases considered. The azimuthal striations are optically detrimental, since they produce high-order angular aberrations that account for most of the beam wavefront error. A fourth case is computed at high input power and high induction frequency, which displays the best optical properties among all the three-dimensional simulations considered. In particular, the increase in induction frequency prevents local thermalization and leads to an axisymmetric distribution of electrons even after introduction of
Smoothed Particle Hydrodynamics: Applications Within DSTO
2006-10-01
…dimensional SPH code. They used SPH to model wave overtopping on the decks of offshore platforms and ships, and used moving boundary particles to create … loading on offshore structures is a subject area which is now becoming amenable to detailed study using sophisticated computational fluid dynamics codes … incorporation of bending, torsional stiffness, and hydrodynamic loads, thus making it ideal for the simulation of umbilical cables on ROVs and AUVs
Directory of Open Access Journals (Sweden)
Alex D Rygg
The hammerhead shark possesses a unique head morphology that is thought to facilitate enhanced olfactory performance. The olfactory chambers, located at the distal ends of the cephalofoil, contain numerous lamellae that increase the surface area for olfaction. Functionally, for the shark to detect chemical stimuli, water-borne odors must reach the olfactory sensory epithelium that lines these lamellae. Thus, odorant transport from the aquatic environment to the sensory epithelium is the first critical step in olfaction. Here we investigate the hydrodynamics of olfaction in Sphyrna tudes based on an anatomically accurate reconstruction of the head and olfactory chamber from high-resolution micro-CT and MRI scans of a cadaver specimen. Computational fluid dynamics simulations of water flow in the reconstructed model reveal the external and internal hydrodynamics of olfaction during swimming. Computed external flow patterns elucidate the occurrence of flow phenomena that result in high and low pressures at the incurrent and excurrent nostrils, respectively, which induces flow through the olfactory chamber. The major (prenarial) nasal groove along the cephalofoil is shown to facilitate sampling of a large spatial extent (i.e., an extended hydrodynamic "reach") by directing oncoming flow towards the incurrent nostril. Further, both the major and minor nasal grooves redirect some flow away from the incurrent nostril, thereby limiting the amount of fluid that enters the olfactory chamber. Internal hydrodynamic flow patterns are also revealed, where we show that flow rates within the sensory channels between olfactory lamellae are passively regulated by the apical gap, which functions as a partial bypass for flow in the olfactory chamber. Consequently, the hammerhead shark appears to utilize external (major and minor nasal grooves) and internal (apical gap) flow regulation mechanisms to limit water flow between the olfactory lamellae, thus protecting these
Mueller, B
2014-01-01
Considering general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 solar masses, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the Vertex-CoCoNuT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies of electron antineutrinos and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M>10 M_sun as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of the electron antineutrino mean energy with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10-20% level, and accretion continuing beyond the onset of the explosion prevents the abru...
Institute of Scientific and Technical Information of China (English)
TANG Shao-Qiang; ZHANG Da-Peng
2005-01-01
We propose a pseudo-hydrodynamic (PHD) model that has a hyperbolic principal part. It formally converges to the corresponding energy-transport model in the limit of zero momentum relaxation time. Numerical examples demonstrate the regularization effects of the PHD model.
Implementation of a 3D mixing layer code on parallel computers
Energy Technology Data Exchange (ETDEWEB)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E. [Syracuse Univ., NY (United States)
1995-09-01
This paper summarizes our progress and experience in developing a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.
Smoothed Particle Hydrodynamic Simulator
Energy Technology Data Exchange (ETDEWEB)
2016-10-05
This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.
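One building block such a framework would modularize is the kernel-weighted density summation at the heart of SPH. The sketch below uses the standard 1D cubic spline kernel; it is a generic SPH illustration, not this code's API.

```python
import math

# SPH density estimate: rho_i = sum_j m_j * W(x_i - x_j, h), with the
# standard 1D cubic spline kernel (normalization 2/(3h), support 2h).
def cubic_spline_1d(r, h):
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q * (1.0 - 0.5 * q))
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(positions, masses, h):
    return [sum(m_j * cubic_spline_1d(x_i - x_j, h)
                for x_j, m_j in zip(positions, masses))
            for x_i in positions]

# Uniformly spaced equal-mass particles: interior densities come out flat.
xs = [0.1 * i for i in range(20)]
rho = density(xs, [0.1] * len(xs), h=0.15)
print(rho[0], rho[10])  # edge particle vs. interior particle
```

Swapping the kernel, the neighbour search, or the summation loop independently is exactly the kind of compartmentalization the abstract describes.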
Disruptive Innovation in Numerical Hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Waltz, Jacob I. [Los Alamos National Laboratory
2012-09-06
We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.
Assessment of the computer code COBRA/CFTL
Energy Technology Data Exchange (ETDEWEB)
Baxi, C. B.; Burhop, C. J.
1981-07-01
The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications and compares the results of the two codes for three cases representing conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output, and the code listing are included in the Appendices.
Energy Technology Data Exchange (ETDEWEB)
Müller, Bernhard [Monash Center for Astrophysics, School of Mathematical Sciences, Building 28, Monash University, Victoria 3800 (Australia); Janka, Hans-Thomas, E-mail: bernhard.mueller@monash.edu, E-mail: bjmuellr@mpa-garching.mpg.de, E-mail: thj@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany)
2014-06-10
Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M☉, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M☉, as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
Superimposed Code Theoretic Analysis of Deoxyribonucleic Acid (DNA) Codes and DNA Computing
2010-01-01
A. Macula, et al., “Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences”, 2008 IEEE Proceedings of International Symposium on Information Theory, pp. 2292-2296... component of this innovation is the combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences
User manual for PACTOLUS: a code for computing power costs.
Energy Technology Data Exchange (ETDEWEB)
Huber, H.D.; Bloomster, C.H.
1979-02-01
PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated through varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principle that the present worth of the revenues will be equal to the present worth of the expenses, including the return on investment, over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results. (RWR)
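The present-worth balance underlying the Discounted Cash Flow Model can be sketched briefly: the levelized busbar cost is the constant unit price at which discounted revenues equal discounted expenses. A minimal illustration, not the PACTOLUS algorithm itself; the cash flows, discount rate, and year-end discounting convention are made-up assumptions:

```python
def levelized_cost(expenses, energy_kwh, rate):
    """Levelized busbar power cost in mills/kWh: the constant unit price p
    such that the present worth of revenues (p * energy sold) equals the
    present worth of expenses over the project life (year-end discounting)."""
    pw_expense = sum(e / (1 + rate) ** t for t, e in enumerate(expenses, 1))
    pw_energy = sum(q / (1 + rate) ** t for t, q in enumerate(energy_kwh, 1))
    return 1000.0 * pw_expense / pw_energy  # dollars/kWh -> mills/kWh

# Hypothetical 3-year plant: $1M/yr expenses, 10 GWh/yr, 8% discount rate
p = levelized_cost([1e6, 1e6, 1e6], [1e7, 1e7, 1e7], 0.08)  # ~100 mills/kWh
```

With constant yearly expense-to-energy ratio the discount factors cancel, so the result is simply that ratio expressed in mills.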
MMA, A Computer Code for Multi-Model Analysis
Energy Technology Data Exchange (ETDEWEB)
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
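The default criteria named above follow standard textbook forms, and the resulting posterior model probabilities are the familiar criterion weights. A minimal sketch of those formulas (not MMA's implementation):

```python
import math

def information_criteria(loglik, k, n):
    """AIC, AICc, and BIC for a model with maximized log-likelihood `loglik`,
    k estimated parameters, and n observations (standard textbook forms)."""
    aic = -2.0 * loglik + 2.0 * k
    aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)   # second-order bias correction
    bic = -2.0 * loglik + k * math.log(n)
    return aic, aicc, bic

def model_weights(criteria):
    """Posterior model probabilities from criterion values of rival models:
    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), delta_i = c_i - min(c)."""
    best = min(criteria)
    raw = [math.exp(-(c - best) / 2.0) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]
```

A model two criterion units worse than the best receives roughly e^(-1) ≈ 0.37 times the weight of the best model.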
Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying
DEFF Research Database (Denmark)
Heide, Janus; Zhang, Qi; Pedersen, Morten V.
This paper investigates the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules. This information can subsequently be retrieved by any receiver. The starting point... to the overall energy consumption, which is particularly problematic for mobile battery-driven devices. In RLNC, coding is performed over a FF (Finite Field). We propose to divide this field into sub-fields, and let each sub-field signify some information or state. In order to embed the information correctly... the coding operations must be performed in a particular way, which we introduce. Finally we evaluate the suggested system and find that the amount of coding can be significantly reduced both at nodes that recode and decode.
3-D field computation: The near-triumph of commercial codes
Energy Technology Data Exchange (ETDEWEB)
Turner, L.R.
1995-07-01
In recent years, more and more of those who design and analyze magnets and other devices are using commercial codes rather than developing their own. This paper considers the commercial codes and the features available with them. Other recent trends with 3-D field computation include parallel computation and visualization methods such as virtual reality systems.
ORNL ALICE: a statistical model computer code including fission competition. [In FORTRAN
Energy Technology Data Exchange (ETDEWEB)
Plasil, F.
1977-11-01
A listing of the computer code ORNL ALICE is given. This code is a modified version of computer codes ALICE and OVERLAID ALICE. It allows for higher excitation energies and for a greater number of evaporated particles than the earlier versions. The angular momentum removal option was made more general and more internally consistent. Certain roundoff errors are avoided by keeping a strict accounting of partial probabilities. Several output options were added.
Comparison of computer codes for estimates of the symmetric coupled bunch instabilities growth times
Angal-Kalinin, Deepa
2002-01-01
The standard computer codes used for estimating the growth times of the symmetric coupled bunch instabilities are ZAP and BBI. The code Vlasov was earlier used for the LHC for estimates of the coupled bunch instabilities growth time [1]. The results obtained by these three codes have been compared, and the options under which their results can be compared are discussed. The differences in the input and the output for these three codes are given for a typical case.
VH1 Hydrodynamics for Introductory Astronomy
Christian, Wolfgang; Blondin, John
1997-05-01
Improvements in personal computer operating systems and hardware now make it possible to run research-grade Fortran simulations on student computers. Unfortunately, many legacy applications do not have a graphical user interface and are sometimes hard-coded to a specific problem, making them unsuitable for beginning students. A good way to re-purpose such legacy code for undergraduate teaching is to build a graphical front end using a Rapid Application Development (RAD) tool that starts the simulation as a separate thread. This technique is being used with Virginia Hydrodynamics One (VH1) to provide an introduction to computational hydrodynamics. Standard test problems, including gravitational collapse of an interstellar cloud, radiation cooling, and formation of shocks, are demonstrated using this approach on Microsoft Windows 95/NT.
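The wrapping pattern described here, running a legacy solver on a worker thread while the front end stays responsive, can be sketched generically. This is not the VH1 front end itself; the executable name and callback in the usage line are hypothetical:

```python
import subprocess
import threading

def run_simulation(executable, args, on_line):
    """Launch a legacy command-line simulation as a child process on a worker
    thread and stream each line of its output to a GUI callback, so the
    front end stays responsive while the solver runs."""
    def worker():
        with subprocess.Popen([executable, *args], stdout=subprocess.PIPE,
                              text=True) as proc:
            for line in proc.stdout:
                on_line(line.rstrip("\n"))  # e.g. update a plot or status bar
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread
```

A front end would call something like `run_simulation("./vh1", ["input.dat"], update_plot)`, with both names being placeholders for the actual solver binary and GUI hook.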
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
Quantum error correcting codes and one-way quantum computing: Towards a quantum memory
Schlingemann, D
2003-01-01
For realizing a quantum memory we suggest first encoding quantum information via a quantum error correcting code and then concatenating combined decoding and re-encoding operations. This requires that the encoding and the decoding operation can be performed faster than the typical decoherence time of the underlying system. The computational model underlying the one-way quantum computer, which was introduced by Hans Briegel and Robert Raussendorf, provides a suitable concept for a fast implementation of quantum error correcting codes. It is shown explicitly in this article how encoding and decoding operations for stabilizer codes can be realized on a one-way quantum computer. This is based on the graph code representation for stabilizer codes, on the one hand, and the relation between cluster states and graph codes, on the other hand.
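The graph-state ingredient of this construction is easy to make concrete: a graph state on n qubits is stabilized by one generator per vertex a, K_a = X_a ∏_{b∈N(a)} Z_b. A minimal sketch (illustrative only, not taken from the article; qubits are indexed 0..n-1 and generators are returned as Pauli strings):

```python
def graph_state_stabilizers(n, edges):
    """Stabilizer generators of the graph state on n qubits: one generator
    per vertex a, K_a = X_a * prod_{b in N(a)} Z_b, as Pauli strings."""
    neighbors = {a: set() for a in range(n)}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    generators = []
    for a in range(n):
        pauli = ["I"] * n
        pauli[a] = "X"         # X on the vertex itself
        for b in neighbors[a]:
            pauli[b] = "Z"     # Z on each neighbor
        generators.append("".join(pauli))
    return generators

# Triangle graph (3 qubits, fully connected)
gens = graph_state_stabilizers(3, [(0, 1), (1, 2), (0, 2)])  # XZZ, ZXZ, ZZX
```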
Application of computational fluid dynamics methods to improve thermal hydraulic code analysis
Sentell, Dennis Shannon, Jr.
A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.
Energy Technology Data Exchange (ETDEWEB)
VOOGD, J.A.
1999-04-19
An analysis of three software proposals is performed to recommend a computer code for immobilized low-activity waste flow and transport modeling. The document uses criteria established in HNF-1839, ''Computer Code Selection Criteria for Flow and Transport Codes to be Used in Undisturbed Vadose Zone Calculation for TWRS Environmental Analyses'', as the basis for this analysis.
Mueller, B; Marek, A
2012-01-01
We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the CoCoNuT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the spacetime metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 solar mass progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared to Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated ele...
MULTI-IFE-A one-dimensional computer code for Inertial Fusion Energy (IFE) target simulations
Ramis, R.; Meyer-ter-Vehn, J.
2016-06-01
The code MULTI-IFE is a numerical tool devoted to the study of Inertial Fusion Energy (IFE) microcapsules. It includes the relevant physics for the implosion and thermonuclear ignition and burning: hydrodynamics of two component plasmas (ions and electrons), three-dimensional laser light ray-tracing, thermal diffusion, multigroup radiation transport, deuterium-tritium burning, and alpha particle diffusion. The corresponding differential equations are discretized in spherical one-dimensional Lagrangian coordinates. Two typical application examples, a high gain laser driven capsule and a low gain radiation driven marginally igniting capsule, are discussed. In addition to phenomena relevant for IFE, the code includes also components (planar and cylindrical geometries, transport coefficients at low temperature, explicit treatment of Maxwell's equations) that extend its range of applicability to laser-matter interaction at moderate intensities (<10¹⁶ W cm⁻²). The source code design has been kept simple and structured with the aim to encourage users' modifications for specialized purposes.
Quantum computation with topological codes from qubit to topological fault-tolerance
Fujii, Keisuke
2015-01-01
This book presents a self-consistent review of quantum computation with topological quantum codes. The book covers everything required to understand topological fault-tolerant quantum computation, ranging from the definition of the surface code to topological quantum error correction and topological fault-tolerant operations. The underlying basic concepts and powerful tools, such as universal quantum computation, quantum algorithms, stabilizer formalism, and measurement-based quantum computation, are also introduced in a self-consistent way. The interdisciplinary fields between quantum information and other fields of physics such as condensed matter physics and statistical physics are also explored in terms of the topological quantum codes. This book thus provides the first comprehensive description of the whole picture of topological quantum codes and quantum computation with them.
Two-Phase Flow in Geothermal Wells: Development and Uses of a Good Computer Code
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Ramirez, Jaime
1983-06-01
A computer code is developed for vertical two-phase flow in geothermal wellbores. The two-phase correlations used were developed by Orkiszewski (1967) and others and are widely applicable in the oil and gas industry. The computer code is compared to the flowing survey measurements from wells in the East Mesa, Cerro Prieto, and Roosevelt Hot Springs geothermal fields with success. Well data from the Svartsengi field in Iceland are also used. Several applications of the computer code are considered. They range from reservoir analysis to wellbore deposition studies. It is considered that accurate and workable wellbore simulators have an important role to play in geothermal reservoir engineering.
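As a rough illustration of what a wellbore simulator of this kind computes, the pressure profile can be marched along the well by integrating the local pressure gradient. The sketch below keeps only the hydrostatic term; the Orkiszewski correlation adds flow-regime-dependent friction, holdup, and acceleration terms that are omitted here:

```python
G = 9.81  # gravitational acceleration, m/s^2

def wellbore_pressure_profile(p_bottom, depth, n_steps, density_of_p):
    """March pressure upward from the well bottom over n_steps equal depth
    increments, integrating dp = -rho(p) * g * dz (hydrostatic term only)."""
    dz = depth / n_steps
    p = p_bottom
    profile = [p]
    for _ in range(n_steps):
        p -= density_of_p(p) * G * dz   # pressure drops as we move up
        profile.append(p)
    return profile

# Hypothetical single-density column: 900 kg/m^3 over 1000 m, 10 MPa at bottom
prof = wellbore_pressure_profile(10e6, 1000.0, 100, lambda p: 900.0)
```

In a two-phase code, `density_of_p` would instead come from a mixture-density correlation evaluated at the local pressure and temperature.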
Margination of white blood cells - a computational approach by a hydrodynamic phase field model
Marth, Wieland
2015-01-01
We numerically investigate margination of white blood cells and demonstrate its dependence on a number of conditions, including hematocrit, the deformability of the cells, and the Reynolds number. A detailed mesoscopic hydrodynamic Helfrich-type model is derived, validated, and used for the simulations to provide a quantitative description of the margination of white blood cells. Previous simulation results, obtained with less detailed models, could be confirmed, e.g. the largest probability of margination of white blood cells at an intermediate range of hematocrit values and a decreasing tendency with increasing deformability. The consideration of inertia effects, which become relevant in small vessels, shows that margination of white blood cells becomes less pronounced with increasing Reynolds number.
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Second Generation Integrated Composite Analyzer (ICAN) Computer Code
Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.
1993-01-01
This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.
Code of Ethical Conduct for Computer-Using Educators: An ICCE Policy Statement.
Computing Teacher, 1987
1987-01-01
Prepared by the International Council for Computers in Education's Ethics and Equity Committee, this code of ethics for educators using computers covers nine main areas: curriculum issues, issues relating to computer access, privacy/confidentiality issues, teacher-related issues, student issues, the community, school organizational issues,…
Energy Technology Data Exchange (ETDEWEB)
Passy, Jean-Claude; Mac Low, Mordecai-Mark [Department of Astrophysics, American Museum of Natural History, New York, NY (United States); De Marco, Orsola [Department of Physics and Astronomy, Macquarie University, Sydney, NSW (Australia); Fryer, Chris L.; Diehl, Steven; Rockefeller, Gabriel [Computational Computer Science Division, Los Alamos National Laboratory, Los Alamos, NM (United States); Herwig, Falk [Department of Physics and Astronomy, University of Victoria, Victoria, BC (Canada); Oishi, Jeffrey S. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Palo Alto, CA (United States); Bryan, Greg L. [Department of Astronomy, Columbia University, New York, NY (United States)
2012-01-01
We use three-dimensional hydrodynamical simulations to study the rapid infall phase of the common envelope (CE) interaction of a red giant branch star of mass equal to 0.88 M☉ and a companion star of mass ranging from 0.9 down to 0.1 M☉. We first compare the results obtained using two different numerical techniques with different resolutions, and find very good agreement overall. We then compare the outcomes of those simulations with observed systems thought to have gone through a CE. The simulations fail to reproduce those systems in the sense that most of the envelope of the donor remains bound at the end of the simulations and the final orbital separations between the donor's remnant and the companion, ranging from 26.8 down to 5.9 R☉, are larger than the ones observed. We suggest that this discrepancy vouches for recombination playing an essential role in the ejection of the envelope and/or significant shrinkage of the orbit happening in the subsequent phase.
Computational Participation: Understanding Coding as an Extension of Literacy Instruction
Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.
2016-01-01
Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
Energy Technology Data Exchange (ETDEWEB)
McGrail, B.P.; Mahoney, L.A.
1995-10-01
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites.
User's manual for FLUFIX/MOD2: A computer program for fluid-solids hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Lyczkowski, R.W.; Bouillard, J.X.; Folga, S.M.
1992-04-01
This report describes FLUFIX/MOD2, a computer program that was developed as a two-dimensional analytical tool, based on a two-fluid hydrodynamic model, for application to fluid-flow simulation in fluid-solids systems and replaces the Interim User's Manual for FLUFIX/MOD1. The field equations that constitute the two-fluid model used in FLUFIX/MOD2 and the constitutive relationships required to close this system of equations, as well as the finite-difference equations that approximate these equations and their solution procedure, are presented and discussed. The global structure of FLUFIX/MOD2 that implements this solution procedure is discussed. The input data for FLUFIX/MOD2 are given, and a sample problem for a fluidized bed is described.
Computer codes used during upgrading activities at MINT TRIGA reactor
Energy Technology Data Exchange (ETDEWEB)
Mohammad Suhaimi Kassim; Adnan Bokhari; Mohd. Idris Taib [Malaysian Institute for Nuclear Technology Research, Kajang (Malaysia)
1999-10-01
The MINT TRIGA Reactor is a 1-MW swimming pool nuclear research reactor commissioned in 1982. In 1993, a project was initiated to upgrade the thermal power to 2 MW. IAEA assistance was sought for the various activities relevant to an upgrading exercise. For neutronics calculations, the IAEA provided expert assistance to introduce the WIMS code, TRIGAP, and EXTERMINATOR2. For thermal-hydraulics calculations, PARET and RELAP5 were introduced. Shielding codes include ANISN and MERCURE. However, in the middle of 1997, MINT decided to change the scope of the project to safety upgrading of the MINT Reactor. This paper describes some of the activities carried out during the upgrading process. (author)
Pakmor, R; Roepke, F K; Hillebrandt, W
2012-01-01
Mergers of two carbon-oxygen white dwarfs have long been suspected to be progenitors of Type Ia Supernovae. Here we present our modifications to the cosmological smoothed particle hydrodynamics code Gadget to apply it to stellar physics including but not limited to mergers of white dwarfs. We demonstrate a new method to map a one-dimensional profile of an object in hydrostatic equilibrium to a stable particle distribution. We use the code to study the effect of initial conditions and resolution on the properties of the merger of two white dwarfs. We compare mergers with approximate and exact binary initial conditions and find that exact binary initial conditions lead to a much more stable binary system but there is no difference in the properties of the actual merger. In contrast, we find that resolution is a critical issue for simulations of white dwarf mergers. Carbon burning hotspots which may lead to a detonation in the so-called violent merger scenario emerge only in simulations with sufficient resolutio...
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and against solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
Challenges of Computational Processing of Code-Switching
Çetinoğlu, Özlem; Schulz, Sarah; Vu, Ngoc Thang
2016-01-01
This paper addresses challenges of Natural Language Processing (NLP) on non-canonical multilingual data in which two or more languages are mixed. It refers to code-switching, which has become more common in daily life and therefore attracts an increasing amount of attention from the research community. We report our experience that covers not only core NLP tasks such as normalisation, language identification, language modelling, part-of-speech tagging and dependency parsing but also more...
POTRE: A computer code for the assessment of dose from ingestion
Energy Technology Data Exchange (ETDEWEB)
Hanusik, V.; Mitro, A.; Niedel, S.; Grosikova, B.; Uvirova, E.; Stranai, I. (Institute of Radioecology and Applied Nuclear Techniques, Kosice (Czechoslovakia))
1991-01-01
The paper describes the computer code PORET and the auxiliary database system, which allow the user to assess the radiation exposure from ingestion of foodstuffs contaminated by radionuclides released from a nuclear facility into the atmosphere during normal operation. (orig.)
Speeding-up MADYMO 3D on serial and parallel computers using a portable coding environment
Tsiandikos, T.; Rooijackers, H.F.L.; Asperen, F.G.J. van; Lupker, H.A.
1996-01-01
This paper outlines the strategy and methodology used to create a portable coding environment for the commercial package MADYMO. The objective is to design a global data structure that efficiently utilises the memory and cache of computers, so that one source code can be used for serial, vector and
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
Holbrook, M. Cay; MacCuspie, P. Ann
2010-01-01
Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…
Metropol, a computer code for the simulation of transport of contaminants with groundwater
Sauter FJ; Hassanizadeh SM; Leijnse A; Glasbergen P; Slot AFM
1990-01-01
In this report a description is given of the computer code METROPOL. This code simulates the three-dimensional flow of groundwater with varying density and the simultaneous transport of contaminants in low concentration, and is based on the finite element method. The basic equations for groundwater
Code and papers: computing publication patterns in the LHC era
CERN. Geneva
2012-01-01
Publications in scholarly journals establish the body of knowledge deriving from scientific research; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies. This presentation reviews the evolution of computing-oriented publications in HEP following the start of operation of LHC. Quantitative analyses are illustrated, which document the production of scholarly papers on computing-related topics by HEP experiments and core tools projects (including distributed computing R&D), and the citations they receive. Several scientometric indicators are analyzed to characterize the role of computing in HEP literature. Distinctive features of scholarly publication production in the software-oriented and hardware-oriented experimental HEP communities are highlighted. Current patterns and trends are compared to the situation in previous generations' HEP experiments at LEP, Tevatron and B-factories. The results of this scientometric analysis document objec...
Proposed standards for peer-reviewed publication of computer code
Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...
Energy Technology Data Exchange (ETDEWEB)
Joshua J. Cogliati; Abderrafi M. Ougouag
2006-10-01
A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.
Computation of Gröbner basis for systematic encoding of generalized quasi-cyclic codes
Van, Vo Tam; Mita, Seiichi
2008-01-01
Generalized quasi-cyclic (GQC) codes form a wide and useful class of linear codes that includes quasi-cyclic codes, finite geometry (FG) low-density parity-check (LDPC) codes, and Hermitian codes. Although it is known that the systematic encoding of GQC codes is equivalent to the division algorithm in the theory of Gröbner bases of modules, there has been no algorithm that computes a Gröbner basis for all types of GQC codes. In this paper, we propose two algorithms to compute Gröbner bases for GQC codes from their parity check matrices: the echelon canonical form algorithm and the transpose algorithm. Both algorithms require a number of finite-field operations that grows only as the third power of the code length. Each algorithm has its own merits: the first is composed of elementary methods, while the second is based on a novel formula and is faster than the first for high-rate codes. Moreover, we show that a serial-in serial-out encoder architecture for FG LDPC cod...
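The echelon canonical form step mentioned above amounts to Gaussian elimination over a finite field. As a rough illustration (this is not the paper's Gröbner-basis algorithm, and the function name and toy matrix are my own), the following sketch reduces a binary parity-check matrix to reduced row echelon form over GF(2):

```python
def rref_gf2(rows, ncols):
    """Reduced row echelon form over GF(2); rows are lists of 0/1 bits."""
    rows = [r[:] for r in rows]          # work on a copy
    pivot_row = 0
    for col in range(ncols):
        # find a row with a 1 in this column, at or below pivot_row
        sel = next((r for r in range(pivot_row, len(rows)) if rows[r][col]), None)
        if sel is None:
            continue
        rows[pivot_row], rows[sel] = rows[sel], rows[pivot_row]
        # eliminate this column from every other row (XOR = addition in GF(2))
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows

# toy parity-check matrix of a length-4 binary code
H = [[1, 1, 0, 1],
     [1, 0, 1, 1]]
```

From the echelon form, the identity block identifies the systematic (information) positions of the code.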
Charogiannis, Alexandros; Denner, Fabian; van Wachem, Berend; Pradas, Marc; Kalliadasis, Serafim; Markides, Christos
2016-11-01
We investigate the hydrodynamic characteristics of harmonically excited liquid films flowing down a 20° incline by simultaneous application of Particle Tracking Velocimetry and Planar Laser-Induced Fluorescence (PLIF) imaging, complemented by Direct Numerical Simulations. By simultaneously implementing the above two optical techniques, instantaneous and highly localised flow-rate data were also retrieved, based on which the effect of local film topology on the flow field underneath the wavy interface is studied in detail. Our main result is that the instantaneous flow rate varies linearly with the instantaneous film height, as confirmed by both experiments and simulations. Furthermore, both experimental and numerical flow-rate data are closely approximated by a simple analytical relationship, which is reported here for the first time, with only minor deviations. This relationship includes the wave speed c and mean flow rate Q, both of which can be obtained by simple and inexpensive measurement techniques, thus allowing spatiotemporally resolved flow-rate predictions to be made without requiring any knowledge of the full flow field below the wavy interface.
Raymond, Samuel J.; Jones, Bruce; Williams, John R.
2016-12-01
A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, so no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, since SPH is a purely particle-based method while MPM combines particles with a mesh, this coupling also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in future provide an effective far-field boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transfer information from one method to the other.
Energy Technology Data Exchange (ETDEWEB)
Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas, E-mail: bjmuellr@mpa-garching.mpg.de, E-mail: thj@mpa-garching.mpg.de [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany)
2012-09-01
We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M_⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to an ultimately successful explosion mechanism at a level no less important than hydrodynamical differences between different dimensions.
Windtalking Computers: Frequency Normalization, Binary Coding Systems and Encryption
Zirkind, Givon
2009-01-01
The goal of this paper is to discuss the application of known techniques, knowledge and technology in a novel way, to encrypt computer and non-computer data. To date, most computers use base 2 and most encryption systems use ciphering and/or an encryption algorithm to convert data into a secret message. The method of having the computer "speak another secret language", as used in human military secret communications, has never been imitated. The author presents the theory and several possible implementations of a method for computers for secret communications analogous to human beings using a secret language or speaking multiple languages. The kind of encryption scheme proposed significantly increases the complexity of, and the effort needed for, decryption. As every methodology has its drawbacks, so too the proposed system has its drawbacks: the data is not as compressed as base 2 would be. However, this is manageable and acceptable if the goal is very strong encryption: At least two methods and their ...
SENDIN and SENTINEL: two computer codes to assess the effects of nuclear data changes
Energy Technology Data Exchange (ETDEWEB)
Marable, J. H.; Drischler, J. D.; Weisbin, C. R.
1977-07-01
A description is given of the computer code SENTINEL, which provides a simple means for finding the effects on calculated reactor and shielding performance parameters due to proposed changes in the cross section data base. This code uses predetermined detailed sensitivity coefficients in SENPRO format, which is described in Appendix A. Knowledge of details of the particular reactor and/or shielding assemblies is not required of the user. Also described is the computer code SENDIN, which converts unformatted (binary) sensitivity files to card image form and vice versa. This is useful for transferring sensitivity files from one installation to another.
TPASS: a gamma-ray spectrum analysis and isotope identification computer code
Energy Technology Data Exchange (ETDEWEB)
Dickens, J.K.
1981-03-01
The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which gamma-ray yields are determined. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from the decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.
Mathematical model and computer code for the analysis of advanced fast reactor dynamics
Energy Technology Data Exchange (ETDEWEB)
Schukin, N.V. (Moscow Engineering Physics Inst. (Russian Federation)); Korsun, A.S. (Moscow Engineering Physics Inst. (Russian Federation)); Vitruk, S.G. (Moscow Engineering Physics Inst. (Russian Federation)); Zimin, V.G. (Moscow Engineering Physics Inst. (Russian Federation)); Romanin, S.D. (Moscow Engineering Physics Inst. (Russian Federation))
1993-04-01
Efficient algorithms for the mathematical modeling of 3-D neutron kinetics and thermal hydraulics are described. The model and the corresponding computer code make it possible to analyze a variety of transient events ranging from normal operational states to catastrophic accident excursions. To verify the code, a number of calculations of different kinds of transients were carried out. The results of the calculations show that the model and the computer code could be used for the conceptual design of advanced liquid metal reactors. A detailed description of the calculations of the TOP WS accident is presented. (orig./DG)
Development of a system of computer codes for severe accident analyses and its applications
Energy Technology Data Exchange (ETDEWEB)
Chang, Soon Hong; Cheon, Moon Heon; Cho, Nam jin; No, Hui Cheon; Chang, Hyeon Seop; Moon, Sang Kee; Park, Seok Jeong; Chung, Jee Hwan [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1991-12-15
The objective of this study is to develop a system of computer codes for postulated severe accident analyses in nuclear power plants. This system of codes is necessary to conduct individual plant examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, extract the plant-specific vulnerabilities for severe accidents, and at the same time obtain ideas for enhancing overall accident resistance. The scope and contents of this study are as follows: development of a system of computer codes for severe accident analyses, and development of a severe accident management strategy.
Energy Technology Data Exchange (ETDEWEB)
Andronov, V.A.; Zhidov, I.G.; Meskov, E.E.; Nevmerzhitskii, N.V.; Nikiforov, V.V.; Razin, A.N.; Rogatchev, V.G.; Tolshmyakov, A.I.; Yanilkin, Y.V. [Russian Federal Nuclear Center (Russian Federation)
1994-12-31
The report presents the basic results of calculational, theoretical and experimental efforts in the study of the Rayleigh-Taylor, Kelvin-Helmholtz and Richtmyer-Meshkov instabilities and of the turbulent mixing caused by their evolution. The VNIIEF has been conducting these investigations since the late forties. This report is based on data published at different times in Russian and foreign journals. The first part of the report deals with the calculational and theoretical techniques currently applied to the description of hydrodynamic instabilities, as well as with the results of several individual problems and their comparison with experiment. These methods can be divided into two types: direct numerical simulation methods and phenomenological methods. The first type includes the regular 2D and 3D gasdynamical techniques as well as techniques based on the small-perturbation approximation and on the incompressible-liquid approximation. The second type comprises techniques based on various phenomenological turbulence models. The second part of the report describes the experimental methods and cites the experimental results of Rayleigh-Taylor and Richtmyer-Meshkov instability studies as well as of turbulent mixing. The applied methods were based on thin-film gaseous models, jelly models and liquid-layer models. The research was done for plane and cylindrical geometries. As drivers, shock tubes of different designs were used, as well as gaseous explosive mixtures, compressed air and electric wire explosions. The experimental results were used to calibrate the calculational-theoretical techniques. The authors did not aim at covering all VNIIEF research done in this field; to a great extent the choice of material depended on the personal contribution of the authors to these studies.
Directory of Open Access Journals (Sweden)
Boronina Lyudmila Vladimirovna
2012-12-01
Improvement of water intake technologies is of great importance. These technologies are required to provide high-quality water intake and treatment; they must be sufficiently simple and reliable, and they must be easily adjustable to particular local conditions. A mathematical model of the water supply area near a filtering water intake is proposed. On its basis, a software package for the calculation of the parameters of the supply area, along with its graphical representation, is developed. To improve the efficiency of water treatment plants, the authors propose a new method of integrating them into the landscape by taking account of velocity distributions in the water supply area within the water reservoir where the plant installation is planned. In the proposed relationship, the filtration rate and the scattering rate at the outlet of the supply area are taken into account, assuring more precise projections of the inlet velocity. In the present study, the accuracy of the mathematical model involving the scattering of a turbulent flow has been assessed. The assessment procedure is based on verification of the mean-values equality hypothesis and on comparison with experimental data. The results and conclusions obtained by means of the method developed by the authors have been verified through comparison of the deviations of specific values calculated using similar algorithms in MathCAD, Maple and PLUMBING. The method of water supply area analysis, with the turbulent scattering area taken into account, and the software package make it possible to numerically estimate the efficiency of the pre-purification process by tailoring a number of parameters of the filtering component of the water intake to the river's hydrodynamic properties. Therefore, the method and the software package provide a new tool for better design, installation and operation of water treatment plants with respect to filtration and
Proceedings of the conference on computer codes and the linear accelerator community
Energy Technology Data Exchange (ETDEWEB)
Cooper, R.K. (comp.)
1990-07-01
The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.
Exact Gap Computation for Code Coverage Metrics in ISO-C
Richter, Dirk; 10.4204/EPTCS.80.4
2012-01-01
Test generation and test data selection are difficult tasks in model-based testing. Tests for a program can be merged into a test suite, and a great deal of research aims to quantify and improve the quality of such a suite. Code coverage metrics estimate this quality: a test suite is considered good if its code coverage value is high, ideally 100%. Unfortunately, achieving 100% code coverage may be impossible, for example because of dead code. There is thus a gap between the feasible and the theoretical maximal code coverage value. Our review of the literature indicates that no current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly for an ISO-C compatible semantics and similar languages, and an efficient approximation of the gap in all other cases. A tester can thereby decide whether additional tests are necessary, or even able, to achieve better coverage.
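The notion of a coverage gap can be pictured with a toy calculation. In this sketch the infeasible (dead-code) branches are simply given as input; the paper's actual contribution is determining them exactly by program analysis, and all names here are illustrative:

```python
def coverage_gap(all_branches, infeasible):
    """Gap between the theoretical maximum (100%) and the best feasible
    branch coverage, given a known set of infeasible (dead) branches.

    Toy model: in practice computing `infeasible` exactly is the hard part.
    """
    all_branches = set(all_branches)
    feasible = all_branches - set(infeasible)
    feasible_max = len(feasible) / len(all_branches)
    return 1.0 - feasible_max

# 10 branches, 2 of them unreachable: no test suite can exceed 80% coverage
gap = coverage_gap(range(10), {3, 7})
```

A tester seeing a suite at 80% coverage with a computed gap of 0.2 knows no further tests can help.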
Visualization of elastic wavefields computed with a finite difference code
Energy Technology Data Exchange (ETDEWEB)
Larsen, S. [Lawrence Livermore National Lab., CA (United States); Harris, D.
1994-11-15
The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.
Introduction to error correcting codes in quantum computers
Salas, P J
2006-01-01
The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, the implementation of gates, and quantum error correction will be considered error free. Finally we relax this unrealistic assumption, introducing the concept of fault tolerance. The existence of an error threshold permits the conclusion that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage or memory computation error threshold: below about 5.2 × 10^-5. The encoding is made by means of the [[7,1,3
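The paper's threshold estimate concerns a seven-qubit code under depolarizing noise; as a much simpler illustration of why encoding helps at all (my own toy example, not the paper's calculation), here is the logical error rate of a 3-qubit bit-flip repetition code under independent flips of probability p:

```python
def logical_error_rate(p):
    """3-qubit repetition code with majority-vote decoding: the logical bit is
    corrupted iff at least 2 of the 3 qubits flip independently with prob. p."""
    return 3 * p**2 * (1 - p) + p**3
```

For any p < 1/2 the encoded error rate is below the bare rate p (e.g. p = 0.01 gives roughly 3e-4), which is the seed of the threshold idea: below a critical physical error rate, adding redundancy makes things better, not worse.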
High-Performance Java Codes for Computational Fluid Dynamics
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
Compendium of computer codes for the safety analysis of fast breeder reactors
Energy Technology Data Exchange (ETDEWEB)
1977-10-01
The objective of the compendium is to provide the reader with a guide which briefly describes many of the computer codes used for liquid metal fast breeder reactor safety analyses, since it is for this system that most of the codes have been developed. The compendium is designed to address the following frequently asked questions from individuals in licensing and research and development activities: (1) What does the code do? (2) To what safety problems has it been applied? (3) What are the code's limitations? (4) What is being done to remove these limitations? (5) How does the code compare with experimental observations and other code predictions? (6) What reference documents are available?
Performance evaluation of moment-method codes on an Intel iPSC/860 hypercube computer
Energy Technology Data Exchange (ETDEWEB)
Klimkowski, K.; Ling, H. (Texas Univ., Austin (United States))
1993-09-01
An analytical evaluation is conducted of the performance of a moment-method code on a parallel computer, treating algorithmic complexity costs within the framework of matrix size and the 'subblock-size' matrix-partitioning parameter. A scaled-efficiencies analysis is conducted for the measured computation times of the matrix-fill operation and LU decomposition. 6 refs.
Computing the Feng-Rao distances for codes from order domains
DEFF Research Database (Denmark)
Ruano Benito, Diego
2007-01-01
We compute the Feng-Rao distance of a code coming from an order domain with a simplicial value semigroup. The main tool is the Apéry set of a semigroup, which can be computed using a Gröbner basis...
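For small numerical semigroups the Apéry set can also be found directly, without Gröbner bases. The sketch below (an illustrative toy, not the paper's method; the search bound is a heuristic adequate for small coprime generators) uses coin-problem dynamic programming:

```python
def apery_set(generators, n):
    """Apery set Ap(S, n): the smallest element of the numerical semigroup S
    generated by `generators` in each residue class mod n.

    Assumption: gcd of the generators is 1 and the heuristic search bound
    n * max(generators) covers all Apery elements (true for small examples).
    """
    limit = n * max(generators)
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for g in generators:                     # unbounded-coin dynamic programming
        for v in range(g, limit + 1):
            if reachable[v - g]:
                reachable[v] = True
    smallest = {}                            # residue class -> smallest member
    for v in range(limit + 1):
        if reachable[v] and (v % n) not in smallest:
            smallest[v % n] = v
    return sorted(smallest.values())
```

For the semigroup generated by 3 and 5, the Apéry set with respect to 3 is {0, 5, 10}; the Feng-Rao machinery of the paper is then built on top of such sets.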
Charogiannis, Alexandros; Denner, Fabian; van Wachem, Berend G. M.; Kalliadasis, Serafim; Markides, Christos N.
2017-01-01
We present results from the simultaneous application of planar laser-induced fluorescence (PLIF), particle image velocimetry (PIV) and particle tracking velocimetry (PTV), complemented by direct numerical simulations, aimed at the detailed hydrodynamic characterization of harmonically excited liquid-film flows falling under the action of gravity. The experimental campaign comprises four different aqueous-glycerol solutions corresponding to four Kapitza numbers (Ka = 14, 85, 350, 1800), spanning the Reynolds number range Re = 2.3-320, with forcing frequencies fw = 7 and 10 Hz. PLIF was employed to generate spatiotemporally resolved film-height measurements, and PIV and PTV to generate two-dimensional velocity-vector maps of the flow field underneath the wavy film interface. The latter allows for instantaneous, highly localized velocity-profile, bulk-velocity, and flow-rate data to be retrieved, based on which the effect of local film topology on the flow field underneath the waves is studied in detail. Temporal sequences of instantaneous and local film height and bulk velocity are generated and combined into bulk flow-rate time series. The time-mean flow rates are then decomposed into steady and unsteady components, the former represented by the product of the mean film height and mean bulk velocity and the latter by the covariance of the film-height and bulk-velocity fluctuations. The steady terms are found to vary linearly with the flow Re, with the best-fit gradients approximated closely by the kinematic viscosities of the three examined liquids. The unsteady terms, typically amounting to 5%-10% of the mean and peaking at approximately 20%, are found to scale linearly with the film-height variance. Interestingly, the instantaneous flow rate is found to vary linearly with the instantaneous film height. Both experimental and numerical flow-rate data are closely approximated by a simple analytical relationship with only minor deviations. This relationship
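The decomposition of the time-mean flow rate into a steady product term and an unsteady covariance term is an exact identity, ⟨hu⟩ = ⟨h⟩⟨u⟩ + cov(h, u). A minimal numerical check with synthetic, correlated film-height and bulk-velocity signals (the waveforms and amplitudes are placeholders of my own, not the experimental data):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2000)
h = 1.0 + 0.2 * np.sin(2 * np.pi * t)            # film height (placeholder units)
u = 0.5 + 0.1 * np.sin(2 * np.pi * t + 0.3)      # bulk velocity, correlated with h
q = h * u                                        # instantaneous flow rate per unit width

steady = h.mean() * u.mean()                     # product of the means
unsteady = np.mean((h - h.mean()) * (u - u.mean()))  # covariance of the fluctuations
# identity: mean(q) == steady + unsteady (up to floating-point error)
```

With uncorrelated signals the covariance term vanishes; the paper's observation is that in real wavy films it contributes roughly 5%-10% of the mean.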
Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for the simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector is selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma particle transport and the particles' interactions with the materials in the environment and the detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, neutron/gamma discrimination in mixed fields may be performed using MCNPX-ESUT. The main feature of MCNPX-ESUT is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using MCNPX-ESUT. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs
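The final resolution step of such a simulation can be pictured as smearing the ideal pulse-height spectrum with an energy-dependent Gaussian. A hedged sketch (the FWHM law, function names, and the single-peak toy spectrum are illustrative assumptions of mine, not MCNPX-ESUT's actual resolution model):

```python
import numpy as np

def gaussian_broaden(e_grid, spectrum, fwhm_at):
    """Smear an ideal spectrum with an energy-dependent Gaussian resolution.

    fwhm_at: callable E -> FWHM(E). Real codes fit this to calibration data;
    the form used below is purely illustrative.
    """
    out = np.zeros_like(spectrum)
    for e0, n in zip(e_grid, spectrum):
        if n == 0:
            continue
        sigma = fwhm_at(e0) / 2.355                     # FWHM -> standard deviation
        kernel = np.exp(-0.5 * ((e_grid - e0) / sigma) ** 2)
        out += n * kernel / kernel.sum()                # conserve counts per peak
    return out

# ideal spectrum: one full-energy peak near 0.662 MeV (137Cs-like toy case)
e = np.linspace(0.0, 1.0, 500)
ideal = np.zeros_like(e)
ideal[np.argmin(np.abs(e - 0.662))] = 1000.0
measured = gaussian_broaden(e, ideal, fwhm_at=lambda E: 0.05 * np.sqrt(E))
```

Broadening spreads the delta-like peak over neighbouring channels while conserving the total number of counts.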
Robust Coding for Lossy Computing with Observation Costs
Ahmadi, Behzad
2011-01-01
An encoder wishes to minimize the bit rate necessary to guarantee that a decoder is able to calculate a symbol-wise function of a sequence available only at the encoder and a sequence that can be measured only at the decoder. This classical problem, first studied by Yamamoto, is addressed here by including two new aspects: (i) The decoder obtains noisy measurements of its sequence, where the quality of such measurements can be controlled via a cost-constrained "action" sequence, which is taken at the decoder or at the encoder; (ii) Measurement at the decoder may fail in a way that is unpredictable to the encoder, thus requiring robust encoding. The considered scenario generalizes known settings such as the Heegard-Berger-Kaspi and the "source coding with a vending machine" problems. The rate-distortion-cost function is derived in relevant special cases, along with general upper and lower bounds. Numerical examples are also worked out to obtain further insight into the optimal system design.
Automatic Parallelization Tool: Classification of Program Code for Parallel Computing
Directory of Open Access Journals (Sweden)
Mustafa Basthikodi
2016-04-01
Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly empowered parallelism, and compilers are being updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classification would greatly benefit software engineers in identifying opportunities for effective parallelization. In the present work we investigated current "species" for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structure matches different issues while performing a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.
The Uncertainty Test for the MAAP Computer Code
Energy Technology Data Exchange (ETDEWEB)
Park, S. H.; Song, Y. M.; Park, S. Y.; Ahn, K. I.; Kim, K. R.; Lee, Y. J. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2008-10-15
After the Three Mile Island Unit 2 (TMI-2) and Chernobyl accidents, safety issues for severe accidents have been treated from various perspectives. A major issue in our research is the Level 2 PSA. The difficulty in expanding the Level 2 PSA as a risk-informed activity is uncertainty. In the past, effort was concentrated on improving the quality of internal-events PSA, but the effort devoted to reducing the phenomenological uncertainty in the Level 2 PSA has been insufficient. In our country, the degree of uncertainty in Level 2 PSA models is high, and it is necessary to establish a model to reduce this uncertainty. We do not yet have experience with uncertainty assessment technology, and the assessment systems themselves depend on advanced nations, where severe accident simulators are implemented at the hardware level; in our case, basic functions can be implemented at the software level. In these circumstances, similar systems at home and abroad, such as UQM and MELCOR, were surveyed. Drawing on these instances, the SAUNA (Severe Accident UNcertainty Analysis) system is being developed in our project to assess and reduce the uncertainty in the Level 2 PSA. It selects the MAAP code to analyze the uncertainty in a severe accident.
Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.
2015-08-01
Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
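The leap-frog core of such a scheme, with a CFL-limited time step, can be sketched in one dimension for linear advection (illustrative only, and all names are my own; GOEMHD3 solves the full MHD system, adapts dt dynamically, and treats diffusive terms with DuFort-Frankel):

```python
import numpy as np

def leapfrog_advection(u0, c, dx, t_end, cfl=0.8):
    """Leap-frog scheme (second order in space and time) for du/dt + c du/dx = 0
    on a periodic grid, with the time step limited by the CFL criterion."""
    dt = cfl * dx / abs(c)                       # Courant-Friedrichs-Lewy limit
    u_prev = u0.copy()
    # starter step (forward Euler, central differences) to seed the two-level scheme
    u_curr = u0 - c * dt / (2.0 * dx) * (np.roll(u0, -1) - np.roll(u0, 1))
    t = dt
    while t < t_end:
        u_next = u_prev - c * dt / dx * (np.roll(u_curr, -1) - np.roll(u_curr, 1))
        u_prev, u_curr = u_curr, u_next          # shift the two time levels
        t += dt
    return u_curr
```

Advecting a sine wave once around a periodic unit domain should approximately reproduce the initial profile, since for CFL numbers below one the leap-frog scheme is neutrally stable and non-dissipative.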
Development of a model and computer code to describe solar grade silicon production processes
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
Validation of physics and thermalhydraulic computer codes for advanced Candu reactor applications
Energy Technology Data Exchange (ETDEWEB)
Wren, D.J.; Popov, N.; Snell, V.G. [Atomic Energy of Canada Ltd, (Canada)
2004-07-01
Atomic Energy of Canada Ltd. (AECL) is developing an Advanced Candu Reactor (ACR) that is an evolutionary advancement of the currently operating Candu 6 reactors. The ACR is being designed to produce electrical power for a capital cost and at a unit-energy cost significantly less than that of the current reactor designs. The ACR retains the modular Candu concept of horizontal fuel channels surrounded by a heavy water moderator. However, ACR uses slightly enriched uranium fuel compared to the natural uranium used in Candu 6. This achieves the twin goals of improved economics (via large reductions in the heavy water moderator volume and replacement of the heavy water coolant with light water coolant) and improved safety. AECL has developed and implemented a software quality assurance program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. Since the basic design of the ACR is equivalent to that of the Candu 6, most of the key phenomena associated with the safety analyses of ACR are common, and the Candu industry standard tool-set of safety analysis codes can be applied to the analysis of the ACR. A systematic assessment of computer code applicability addressing the unique features of the ACR design was performed covering the important aspects of the computer code structure, models, constitutive correlations, and validation database. Arising from this assessment, limited additional requirements for code modifications and extensions to the validation databases have been identified. This paper provides an outline of the AECL software quality assurance program process for the validation of computer codes used to perform physics and thermal-hydraulics safety analyses of the ACR. It describes the additional validation work that has been identified for these codes and the planned, and ongoing, experimental programs to extend the code validation as required to address specific ACR design
Issues in computational fluid dynamics code verification and validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Blottner, F.G.
1997-09-01
A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.
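One standard technique for quantifying the discretization and grid-resolution errors discussed above is Richardson extrapolation over a sequence of systematically refined grids (a generic verification tool, not an excerpt from this article):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from solutions on three grids with a
    constant refinement ratio r.  For monotone convergence, p satisfies
    (f_coarse - f_medium) / (f_medium - f_fine) = r**p.
    """
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def extrapolate(f_medium, f_fine, r, p):
    """Richardson-extrapolated estimate of the grid-converged value."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

# Synthetic data from f(h) = 1 + 0.5 * h**2 (a second-order scheme)
# on grids with h = 0.4, 0.2, 0.1, i.e. refinement ratio r = 2.
f1, f2, f3 = 1.08, 1.02, 1.005
p = observed_order(f1, f2, f3, r=2.0)
f_exact = extrapolate(f2, f3, r=2.0, p=p)
```

Comparing the observed order p against the scheme's formal order is a common check that the computation is in the asymptotic convergence range.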
Computer code simulations of explosions in flow networks and comparison with experiments
Gregory, W. S.; Nichols, B. D.; Moore, J. A.; Smith, P. R.; Steinke, R. G.; Idzorek, R. D.
1987-10-01
A program of experimental testing and computer code development for predicting the effects of explosions in air-cleaning systems is being carried out for the Department of Energy. This work is a combined effort by the Los Alamos National Laboratory and New Mexico State University (NMSU). Los Alamos has the lead responsibility in the project and develops the computer codes; NMSU performs the experimental testing. The emphasis in the program is on obtaining experimental data to verify the analytical work. The primary benefit of this work will be the development of a verified computer code that safety analysts can use to analyze the effects of hypothetical explosions in nuclear plant air cleaning systems. The experimental data show the combined effects of explosions in air-cleaning systems that contain all of the important air-cleaning elements (blowers, dampers, filters, ductwork, and cells). A small experimental set-up consisting of multiple rooms, ductwork, a damper, a filter, and a blower was constructed. Explosions were simulated with a shock tube, hydrogen/air-filled gas balloons, and blasting caps. Analytical predictions were made using the EVENT84 and NF85 computer codes. The EVENT84 code predictions were in good agreement with the effects of the hydrogen/air explosions, but they did not model the blasting cap explosions adequately. NF85 predicted shock entrance to and within the experimental set-up very well. The NF85 code was not used to model the hydrogen/air or blasting cap explosions.
Algorithms and computer codes for atomic and molecular quantum scattering theory
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
This workshop has succeeded in bringing up 11 different coupled equation codes on the NRCC computer, testing them against a set of 24 different test problems and making them available to the user community. These codes span a wide variety of methodologies, and factors of up to 300 were observed in the spread of computer times on specific problems. A very effective method was devised for examining the performance of the individual codes in the different regions of the integration range. Many of the strengths and weaknesses of the codes have been identified. Based on these observations, a hybrid code has been developed which is significantly superior to any single code tested. Thus, not only have the original goals been fully met, the workshop has resulted directly in an advancement of the field. All of the computer programs except VIVS are available upon request from the NRCC. Since an improved version of VIVS is contained in the hybrid program, VIVAS, it was not made available for distribution. The individual program LOGD is, however, available. In addition, programs which compute the potential energy matrices of the test problems are also available. The software library names for Tests 1, 2 and 4 are HEH2, LICO, and EN2, respectively.
Kindgen, Sarah; Wachtel, Herbert; Abrahamsson, Bertil; Langguth, Peter
2015-09-01
Disintegration of oral solid dosage forms is a prerequisite for drug dissolution and absorption and is to a large extent dependent on the pressures and hydrodynamic conditions in the solution that the dosage form is exposed to. In this work, the hydrodynamics in the PhEur/USP disintegration tester were investigated using computational fluid dynamics (CFD). Particle image velocimetry was used to validate the CFD predictions. The CFD simulations were performed with different Newtonian and non-Newtonian fluids, representing fasted and fed states. The results indicate that the current design and operating conditions of the disintegration test device, given by the pharmacopoeias, do not reproduce the in vivo situation. This holds true for the hydrodynamics in the disintegration tester, which generates Reynolds numbers dissimilar to the reported in vivo situation. Also, when the homogenized US FDA meal is used to represent the fed state, excessively high viscosities and relative pressures are generated. The forces acting on the dosage form are too small for all fluids compared to the in vivo situation. The lack of peristaltic contractions, which generate hydrodynamics and shear stress in vivo, might be the major drawback of the compendial device, resulting in the observed differences between predicted and in vivo measured hydrodynamics.
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache – CEA, France)
2011-06-01
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
The eukaryotic genome contains varying numbers of non-coding RNA (ncRNA) genes. "Computational RNomics" takes a multidisciplinary approach, drawing on fields such as information science, to resolve the structure and function of ncRNAs. Here, we review the main issues in "Computational RNomics": data storage and management, ncRNA gene identification and characterization, and ncRNA target identification and functional prediction, and we summarize the main methods and current content of "Computational RNomics".
[Vascular assessment in stroke codes: role of computed tomography angiography].
Mendigaña Ramos, M; Cabada Giadas, T
2015-01-01
Advances in imaging studies for acute ischemic stroke are largely due to the development of new efficacious treatments carried out in the acute phase. Together with computed tomography (CT) perfusion studies, CT angiography facilitates the selection of patients who are likely to benefit from appropriate early treatment. CT angiography plays an important role in the workup for acute ischemic stroke because it makes it possible to confirm vascular occlusion, assess the collateral circulation, and obtain an arterial map that is very useful for planning endovascular treatment. In this review about CT angiography, we discuss the main technical characteristics, emphasizing the usefulness of the technique in making the right diagnosis and improving treatment strategies. Copyright © 2012 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Symbolic coding for noninvertible systems: uniform approximation and numerical computation
Beyn, Wolf-Jürgen; Hüls, Thorsten; Schenke, Andre
2016-11-01
It is well known that the homoclinic theorem, which conjugates a map near a transversal homoclinic orbit to a Bernoulli subshift, extends from invertible to specific noninvertible dynamical systems. In this paper, we provide a unifying approach that combines such a result with a fully discrete analog of the conjugacy for finite but sufficiently long orbit segments. The underlying idea is to solve appropriate discrete boundary value problems in both cases, and to use the theory of exponential dichotomies to control the errors. This leads to a numerical approach that allows us to compute the conjugacy to any prescribed accuracy. The method is demonstrated for several examples where invertibility of the map fails in different ways.
Benchmark Problems Used to Assess Computational Aeroacoustics Codes
Dahl, Milo D.; Envia, Edmane
2005-01-01
The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.
Scherer, W.; Brockmann, H.; Haas, K. A.; Rütten, H. J.
2005-01-01
V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It comprises the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics, and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The...
Energy Technology Data Exchange (ETDEWEB)
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment, or dosimetry. The presentations were divided into 2 sessions: 1) methodology and 2) uses in industrial, medical, or research domains. It appears that 2 different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation; and second, a neural-network approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
Entropy-limited hydrodynamics: a novel approach to relativistic hydrodynamics
Guercilena, Federico; Radice, David; Rezzolla, Luciano
2017-07-01
We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi: 10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi: 10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to {˜}50% speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers.
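The hybridisation at the heart of ELH can be sketched for a scalar conservation law. The snippet below blends a fourth-order central interface flux with the first-order Lax-Friedrichs flux for Burgers' equation, using a prescribed per-interface limiter theta in place of the paper's entropy-production trigger (a schematic of the idea, not the authors' implementation):

```python
import numpy as np

def hybrid_flux(u, theta):
    """Blended interface flux for Burgers' equation, f(u) = u**2 / 2.

    theta[i] in [0, 1]: 0 keeps the high-order (4th-order central) flux,
    1 falls back to the dissipative Lax-Friedrichs flux.  In ELH the
    blending weight comes from a locally generated entropy measure; here
    it is simply supplied by the caller.
    """
    f = 0.5 * u ** 2
    # 4th-order central flux at interface i+1/2 (periodic stencil).
    f_hi = (-np.roll(f, 1) + 7 * f + 7 * np.roll(f, -1) - np.roll(f, -2)) / 12.0
    # Local Lax-Friedrichs flux with wave-speed bound alpha = max|u|.
    alpha = np.max(np.abs(u))
    f_lf = 0.5 * (f + np.roll(f, -1)) - 0.5 * alpha * (np.roll(u, -1) - u)
    return (1.0 - theta) * f_hi + theta * f_lf

# Both limits reproduce the exact flux 0.5 on a constant state u = 1.
u = np.ones(8)
smooth = hybrid_flux(u, np.zeros(8))   # pure high-order
shocky = hybrid_flux(u, np.ones(8))    # pure Lax-Friedrichs
```

The appeal of the approach is that the expensive high-order flux is kept wherever the solution is smooth, while the robust low-order flux takes over only near shocks.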
Energy Technology Data Exchange (ETDEWEB)
Hoffman, F. O.; Miller, C. W.; Shaeffer, D. L.; Garten, Jr., C. T.; Shor, R. W.; Ensminger, J. T.
1977-04-01
The objective of this paper is to present a compilation of computer codes for the assessment of accidental or routine releases of radioactivity to the environment from nuclear power facilities. The capabilities of 83 computer codes in the areas of environmental transport and radiation dosimetry are summarized in tabular form. This preliminary analysis clearly indicates that the initial efforts in assessment methodology development have concentrated on atmospheric dispersion, external dosimetry, and internal dosimetry via inhalation. The incorporation of terrestrial and aquatic food chain pathways has been a more recent development and reflects the current requirements of environmental legislation and the needs of regulatory agencies. The characteristics of the conceptual models employed by these codes are reviewed. The appendixes include abstracts of the codes and indexes by author, key words, publication description, and title.
Compendium of computer codes for the researcher in magnetic fusion energy
Energy Technology Data Exchange (ETDEWEB)
Porter, G.D. (ed.)
1989-03-10
This is a compendium of computer codes, which are available to the fusion researcher. It is intended to be a document that permits a quick evaluation of the tools available to the experimenter who wants to both analyze his data, and compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone "down the hall" when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index. I encourage any additional criticism of this document. 137 refs.
Energy Technology Data Exchange (ETDEWEB)
Mann, F.M.
1998-01-26
The Tank Waste Remediation System (TWRS) is responsible for the safe storage, retrieval, and disposal of waste currently being held in 177 underground tanks at the Hanford Site. In order to successfully carry out its mission, TWRS must perform environmental analyses describing the consequences of tank contents leaking from tanks and associated facilities during the storage, retrieval, or closure periods and immobilized low-activity tank waste contaminants leaving disposal facilities. Because of the large size of the facilities and the great depth of the dry zone (known as the vadose zone) underneath the facilities, sophisticated computer codes are needed to model the transport of the tank contents or contaminants. This document presents the code selection criteria for those vadose zone analyses (a subset of the above analyses) where the hydraulic properties of the vadose zone are constant in time, the geochemical behavior of the contaminant-soil interaction can be described by simple models, and the geologic or engineered structures are complicated enough to require a two- or three-dimensional model. Thus, simple analyses would not need to use the fairly sophisticated codes which would meet the selection criteria in this document. Similarly, those analyses which involve complex chemical modeling (such as those analyses involving large tank leaks or those analyses involving the modeling of contaminant release from glass waste forms) are excluded. The analyses covered here are those where the movement of contaminants can be relatively simply calculated from the moisture flow. These code selection criteria are based on the information from the low-level waste programs of the US Department of Energy (DOE) and of the US Nuclear Regulatory Commission as well as experience gained in the DOE Complex in applying these criteria. Appendix table A-1 provides a comparison between the criteria in these documents and those used here. This document does not define the models (that
POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs
Energy Technology Data Exchange (ETDEWEB)
Hardie, R.W.
1982-02-01
POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case.
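At its core, a levelized life-cycle power cost of the kind POPCYCLE calculates follows the textbook definition: discounted lifetime costs divided by discounted lifetime energy. A minimal sketch of that generic form (the report derives POPCYCLE's own, more detailed equations, which may differ):

```python
def levelized_cost(costs, energies, rate):
    """Levelized cost: discounted lifetime costs over discounted lifetime
    energy.  costs[t] and energies[t] are yearly totals and rate is the
    discount rate.  Generic textbook form, not POPCYCLE's equations.
    """
    disc = [(1.0 + rate) ** -t for t in range(1, len(costs) + 1)]
    total_cost = sum(c * d for c, d in zip(costs, disc))
    total_energy = sum(e * d for e, d in zip(energies, disc))
    return total_cost / total_energy

# Constant $100M/yr cost and 1 TWh/yr (1e6 MWh) output over 3 years gives
# $100/MWh at any discount rate, since both streams scale identically.
lcoe = levelized_cost([100e6] * 3, [1e6] * 3, rate=0.05)
```

Discounting both streams to present value is what makes plants with different capital/fuel cost profiles comparable on a single $/MWh figure.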
Fault-tolerant quantum computation with asymmetric Bacon-Shor codes
Brooks, Peter; Preskill, John
2013-03-01
We develop a scheme for fault-tolerant quantum computation based on asymmetric Bacon-Shor codes, which works effectively against highly biased noise dominated by dephasing. We find the optimal Bacon-Shor block size as a function of the noise strength and the noise bias, and estimate the logical error rate and overhead cost achieved by this optimal code. Our fault-tolerant gadgets, based on gate teleportation, are well suited for hardware platforms with geometrically local gates in two dimensions.
HIFI: a computer code for projectile fragmentation accompanied by incomplete fusion
Energy Technology Data Exchange (ETDEWEB)
Wu, J.R.
1980-07-01
A brief summary of a model proposed to describe projectile fragmentation accompanied by incomplete fusion and the instructions for the use of the computer code HIFI are given. The code HIFI calculates single inclusive spectra, coincident spectra and excitation functions resulting from particle-induced reactions. It is a multipurpose program which can calculate any type of coincident spectra as long as the reaction is assumed to take place in two steps.
SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.
1995-01-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in the format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
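Energy-level spacing distributions of the kind SAMDIST produces are typically compared against the Wigner surmise from random-matrix theory. A short check of that reference curve (illustrative Python; not SAMDIST source code):

```python
import math

def wigner_surmise(s):
    """Wigner surmise for nearest-neighbor level spacings (GOE):
    P(s) = (pi/2) * s * exp(-pi * s**2 / 4), with s the spacing in units
    of the mean spacing.  A standard reference distribution for spacing
    histograms; this is not SAMDIST's own implementation.
    """
    return 0.5 * math.pi * s * math.exp(-math.pi * s * s / 4.0)

# The surmise is normalized and has unit mean spacing; verify both by
# simple quadrature over s in [0, 20].
ds = 1e-4
norm = sum(wigner_surmise(i * ds) * ds for i in range(200000))
mean = sum((i * ds) * wigner_surmise(i * ds) * ds for i in range(200000))
```

Agreement of an observed spacing histogram with this curve indicates level repulsion characteristic of chaotic (GOE-like) spectra.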
The development of an intelligent interface to a computational fluid dynamics flow-solver code
Williams, Anthony D.
1988-01-01
Researchers at NASA Lewis are currently developing an 'intelligent' interface to aid in the development and use of large, computational fluid dynamics flow-solver codes for studying the internal fluid behavior of aerospace propulsion systems. This paper discusses the requirements, design, and implementation of an intelligent interface to Proteus, a general purpose, three-dimensional, Navier-Stokes flow solver. The interface is called PROTAIS to denote its introduction of artificial intelligence (AI) concepts to the Proteus code.
ANL/HTP: a computer code for the simulation of heat pipe operation
Energy Technology Data Exchange (ETDEWEB)
McLennan, G.A.
1983-11-01
ANL/HTP is a computer code for the simulation of heat pipe operation, to predict heat pipe performance and temperature distributions during steady state operation. Source and sink temperatures and heat transfer coefficients can be set as input boundary conditions, and varied for parametric studies. Five code options are included to calculate performance for fixed operating conditions, or to vary any one of the four boundary conditions to determine the heat pipe limited performance. The performance limits included are viscous, sonic, entrainment, capillary, and boiling, using the best available theories to model these effects. The code has built-in models for a number of wick configurations - open grooves, screen-covered grooves, screen-wrap, and arteries, with provision for expansion. The current version of the code includes the thermophysical properties of sodium as the working fluid in an expandable subroutine. The code-calculated performance agrees quite well with measured experimental data.
LEADS-DC: A computer code for intense dc beam nonlinear transport simulation
Institute of Scientific and Technical Information of China (English)
Anonymous
2011-01-01
An intense dc beam nonlinear transport code has been developed. The code is written in Visual FORTRAN 6.6 and has ~13000 lines. The particle distribution in the transverse cross section is uniform or Gaussian. The space charge forces are calculated by the PIC (particle-in-cell) scheme, and the effects of the applied fields on the particle motion are calculated with the Lie algebraic method through the third-order approximation. Thus, the solutions to the equations of particle motion are self-consistent. The results obtained from the theoretical analysis have been incorporated into the computer code. Many optical beam elements are included in the code, so the code can simulate intense dc particle motion in beam transport lines, high-voltage dc accelerators, and ion implanters.
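The PIC space-charge calculation mentioned above begins by depositing particle charge onto a grid. A minimal 1D cloud-in-cell deposition sketch (Python for readability; this shows the generic scheme, not the Fortran source of LEADS-DC):

```python
import numpy as np

def deposit_cic(x, q, nx, dx):
    """1D cloud-in-cell charge deposition onto a periodic uniform grid.

    Each particle at position x shares its charge q between the two
    nearest grid nodes in proportion to proximity - the first step of a
    PIC space-charge solve, before the field equation is solved on the
    grid.  Generic illustration only.
    """
    rho = np.zeros(nx)
    idx = np.floor(x / dx).astype(int)
    frac = x / dx - idx                      # fractional cell position
    np.add.at(rho, idx % nx, q * (1.0 - frac))
    np.add.at(rho, (idx + 1) % nx, q * frac)
    return rho / dx                          # charge density

# One unit-charge particle a quarter cell past node 0.
rho = deposit_cic(np.array([0.25]), np.array([1.0]), nx=4, dx=1.0)
```

Linear weighting conserves total charge exactly and keeps the deposited density smooth as particles cross cell boundaries.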
A COMPUTATIONAL HYDRODYNAMIC ANALYSIS OF DUISBURG TEST CASE WITH FREE SURFACE AND PROPELLER
Directory of Open Access Journals (Sweden)
Omer Kemal Kinaci
2016-01-01
This paper discusses the effects of the free surface and the propeller on a benchmark Post-Panamax ship, the Duisburg Test Case (DTC). The experimental results are already available in the literature. The computational study carried out in this work is first verified against the experiments and then used to explain some of the physical aspects associated with viscous ship flows. There are two interesting outcomes of this work. The first is that the propeller contributes to the pressure resistance of the ship by substantially increasing the wave elevations along the hull and in the fluid domain. The second is that, by changing the pressure distribution along the hull and the propeller, the free surface increases the efficiency of the propulsion system. These specific outcomes are thoroughly discussed in the paper with CFD-generated results and physical explanations.
SCALE: A modular code system for performing standardized computer analyses for licensing evaluation
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.
ASHMET: a computer code for estimating insolation incident on tilted surfaces
Energy Technology Data Exchange (ETDEWEB)
Elkin, R.F.; Toelle, R.G.
1980-05-01
A computer code, ASHMET, has been developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors have been included. Climatological data for 248 US locations are built into the code. This report describes the methodology of the code, and its input and output. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationships modified by a clearness index derived from SOLMET solar radiation measurements on a horizontal surface.
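The ASHRAE clear-day relationship scaled by a clearness index can be sketched in a few lines. The coefficient values `A` and `B` below are illustrative placeholders, not the location- and month-dependent values ASHMET tabulates:

```python
import math

def clear_day_insolation(solar_altitude_deg, A=1230.0, B=0.142, clearness=1.0):
    """Direct-normal irradiance (W/m^2) from an ASHRAE-style clear-day model,
    scaled by a clearness index (coefficient values are illustrative only)."""
    beta = math.radians(solar_altitude_deg)
    if beta <= 0.0:
        return 0.0  # sun at or below the horizon
    # Beam attenuation grows as the path through the atmosphere lengthens.
    return clearness * A * math.exp(-B / math.sin(beta))
```

With these placeholder coefficients, a solar altitude of 60 degrees gives roughly 1044 W/m² of direct-normal irradiance on a clear day.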
Borde, Arnaud; Palanque-Delabrouille, Nathalie; Rossi, Graziano; Viel, Matteo; Bolton, James S.; Yèche, Christophe; LeGoff, Jean-Marc; Rich, Jim
2014-07-01
Current experiments are providing measurements of the flux power spectrum from the Lyman-α forests observed in quasar spectra with unprecedented accuracy. Their interpretation in terms of cosmological constraints requires specific simulations of at least equivalent precision. In this paper, we present a suite of cosmological N-body simulations with cold dark matter and baryons, specifically aiming at modeling the low-density regions of the intergalactic medium as probed by the Lyman-α forests at high redshift. The simulations were run using the GADGET-3 code and were designed to match the requirements imposed by the quality of the current SDSS-III/BOSS or forthcoming SDSS-IV/eBOSS data. They are made using either 2 × 768³ ≃ 1 billion or 2 × 192³ ≃ 14 million particles, spanning volumes ranging from (25 Mpc h⁻¹)³ for high-resolution simulations to (100 Mpc h⁻¹)³ for large-volume ones. Using a splicing technique, the resolution is further enhanced to reach the equivalent of simulations with 2 × 3072³ ≃ 58 billion particles in a (100 Mpc h⁻¹)³ box size, i.e. a mean mass per gas particle of 1.2 × 10⁵ Msolar h⁻¹. We show that the resulting power spectrum is accurate at the 2% level over the full range from a few Mpc to several tens of Mpc. We explore the effect on the one-dimensional transmitted-flux power spectrum of four cosmological parameters (ns, σ8, Ωm and H0) and two astrophysical parameters (T0 and γ) that are related to the heating rate of the intergalactic medium. By varying the input parameters around a central model chosen to be in agreement with the latest Planck results, we built a grid of simulations that allows the study of the impact on the flux power spectrum of these six relevant parameters. We improve upon previous studies by not only measuring the effect of each parameter individually, but also probing the impact of the simultaneous variation of each pair of parameters. We thus provide a full second-order expansion, including
Wang, Bing; Armenante, Piero M
2016-08-20
Mini vessel dissolution testing systems consist of a small-scale 100-mL vessel with a small paddle impeller, similar to the USP Apparatus 2, and are typically utilized when only small amounts of drug product are available during drug development. Despite their common industrial use, mini vessels have received little attention in the literature. Here, Computational Fluid Dynamics (CFD) was used to predict velocity profiles, flow patterns, and strain rate distribution in a mini vessel at different agitation speeds. These results were compared with experimental velocity measurements obtained with Particle Image Velocimetry (PIV). Substantial agreement was observed between CFD results and PIV data. The flow is strongly dominated by the tangential velocity component. Secondary flows consist of vertical upper and lower recirculation loops above and below the impeller. A low recirculation zone was observed in the lower part of the vessel. The radial and axial velocities in the region just below the impeller are very small especially in the innermost core zone below the paddle, where tablet dissolution occurs. Increasing agitation speed reduces the radius of this zone, which is always present at any speed, and only modestly increases the tangential flow intensity, with significant implication for dissolution testing in mini vessels.
Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code
Weinberg, B. C.; Mcdonald, H.
1980-01-01
There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations, employing a linearized block implicit technique in conjunction with a QR operator scheme, is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.
Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.
Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.
NADAC and MERGE: computer codes for processing neutron activation analysis data
Energy Technology Data Exchange (ETDEWEB)
Heft, R.E.; Martin, W.E.
1977-05-19
Absolute disintegration rates of specific radioactive products induced by neutron irradiation of a sample are determined by spectrometric analysis of gamma-ray emissions. Nuclide identification and quantification are carried out by a complex computer code GAMANAL (described elsewhere). The output of GAMANAL is processed by NADAC, a computer code that converts the data on observed disintegration rates to data on the elemental composition of the original sample. Computations by NADAC are on an absolute basis in that stored nuclear parameters are used, rather than a comparison between the observed disintegration rate and the rate obtained by concurrent irradiation of elemental standards. The NADAC code provides for the computation of complex cases, including those involving interrupted irradiations, parent and daughter decay situations where the daughter may also be produced independently, nuclides with very short half-lives compared to the counting interval, and those involving interference by competing neutron-induced reactions. The NADAC output consists of a printed report, which summarizes analytical results, and a card-image file, which can be used as input to another computer code, MERGE. The purpose of MERGE is to combine the results of multiple analyses and produce a single final answer, based on all available information, for each element found.
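In its simplest case (one nuclide, one production path, no interferences), the absolute-basis computation reduces to inverting the standard activation-decay equation. The sketch below is illustrative only; the function names are hypothetical and the test isotope is an assumption, not NADAC's stored library:

```python
import math

N_A = 6.02214076e23  # Avogadro's number, atoms/mol

def atoms_from_activity(A_meas, sigma_cm2, flux, lam, t_irr, t_decay):
    """Invert the activation-decay equation
        A = N * sigma * flux * (1 - exp(-lam*t_irr)) * exp(-lam*t_decay)
    to recover N, the number of target atoms in the sample."""
    saturation = 1.0 - math.exp(-lam * t_irr)
    return A_meas / (sigma_cm2 * flux * saturation * math.exp(-lam * t_decay))

def element_mass_grams(n_atoms, molar_mass, abundance):
    """Grams of the element, given the target-isotope atom count and its
    natural isotopic abundance (fractional)."""
    return n_atoms * molar_mass / (abundance * N_A)
```

A code like NADAC layers the complex cases (interrupted irradiations, parent-daughter ingrowth, competing reactions) on top of this same absolute relationship.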
Energy Technology Data Exchange (ETDEWEB)
Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.; Hermann, O.W.
1984-11-01
TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (> 100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location.
Lilley, D. G.; Rhode, D. L.
1982-01-01
A primitive pressure-velocity variable finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual, dealing with the computational problem, showing how the mathematical basis and computational scheme may be translated into a computer program is presented. A flow chart, FORTRAN IV listing, notes about various subroutines and a user's guide are supplied as an aid to prospective users of the code.
Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code
Directory of Open Access Journals (Sweden)
GHULAM MUSTAFA
2017-01-01
Compute intensive programs generally consume a significant fraction of execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation; however, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system.
Development of a computer code for thermal hydraulics of reactors (THOR). [BWR and PWR
Energy Technology Data Exchange (ETDEWEB)
Wulff, W
1975-01-01
The purpose of the advanced code development work is to construct a computer code for the prediction of thermohydraulic transients in water-cooled nuclear reactor systems. The fundamental formulation of fluid dynamics is to be based on the one-dimensional drift flux model for non-homogeneous, non-equilibrium flows of two-phase mixtures. Particular emphasis is placed on component modeling, automatic prediction of initial steady state conditions, inclusion of one-dimensional transient neutron kinetics, freedom in the selection of computed spatial detail, development of reliable constitutive descriptions, and modular code structure. Numerical solution schemes have been implemented to integrate simultaneously the one-dimensional transient drift flux equations. The lumped-parameter modeling analyses of thermohydraulic transients in the reactor core and in the pressurizer have been completed. The code development for the prediction of the initial steady state has been completed with preliminary representation of individual reactor system components. A program has been developed to predict critical flow expanding from a dead-ended pipe; the computed results have been compared and found in good agreement with idealized flow solutions. Transport properties for liquid water and water vapor have been coded and verified.
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node (''fat nodes'') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle; SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
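The memory-locality and vectorization point can be illustrated with a structure-of-arrays particle push in NumPy. This is a schematic sketch, not PICSAR's or FBPIC's actual implementation:

```python
import numpy as np

def push_particles(x, v, E_at_x, q_over_m, dt):
    """One leapfrog step applied to ALL particles at once.  With a
    structure-of-arrays layout (one contiguous array per component), each
    update is a unit-stride sweep that maps naturally onto SIMD registers;
    an array-of-structs layout would stride through memory instead."""
    v += q_over_m * E_at_x * dt   # kick: in-place, vectorized over particles
    x += v * dt                   # drift
    return x, v
```

In a real PIC loop the field `E_at_x` would be gathered from the grid at each particle position; keeping particles sorted by cell preserves cache reuse during that gather.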
Energy Technology Data Exchange (ETDEWEB)
TP Clement
1999-06-24
RT3DV1 (Reactive Transport in 3 Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in the groundwater head distribution. The RT3D code was originally developed to support contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants, including benzene-toluene-xylene mixtures (BTEX) and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other type of user-specified reactive transport system. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
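The simplest kinetics such a reaction solver must handle is a first-order sequential decay chain (e.g. PCE → TCE), which has a closed-form solution. The sketch below is a textbook illustration of that kinetics, not code taken from RT3D:

```python
import math

def first_order_chain(c0, k, t):
    """Concentrations of a two-species sequential decay chain after time t,
    from the closed-form solution of
        dc1/dt = -k1*c1
        dc2/dt =  k1*c1 - k2*c2
    (valid for k1 != k2).  c0 = (c1(0), c2(0)), k = (k1, k2)."""
    k1, k2 = k
    c1 = c0[0] * math.exp(-k1 * t)
    c2 = (c0[1] * math.exp(-k2 * t)
          + c0[0] * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t)))
    return c1, c2
```

An implicit reaction solver generalizes this idea to arbitrary, possibly nonlinear, user-specified rate laws that have no closed form.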
DEVELOPMENT OF TWO-DIMENSIONAL HYDRODYNAMIC AND WATER QUALITY MODEL FOR HUANGPU RIVER
Institute of Scientific and Technical Information of China (English)
Xu Zu-xin; Yin Hai-long
2003-01-01
Based on the open-source numerical computation models RMA2 and RMA4, finite element meshes representing the study domain are created; the finite element hydrodynamic and water quality model for the Huangpu River is then developed and calibrated, and the simulation results are analyzed. The developed hydrodynamic and water quality model is used to analyze the influence of wastewater discharged from a planned Wastewater Treatment Plant (WWTP) on the Huangpu River's water quality.
Physical implementation of a Majorana fermion surface code for fault-tolerant quantum computation
Vijay, Sagar; Fu, Liang
2016-12-01
We propose a physical realization of a commuting Hamiltonian of interacting Majorana fermions realizing Z₂ topological order, using an array of Josephson-coupled topological superconductor islands. The required multi-body interaction Hamiltonian is naturally generated by a combination of charging-energy-induced quantum phase slips on the superconducting islands and electron tunneling between islands. Our setup improves on a recent proposal for implementing a Majorana fermion surface code (Vijay et al 2015 Phys. Rev. X 5 041038), a 'hybrid' approach to fault-tolerant quantum computation that combines (1) the engineering of a stabilizer Hamiltonian with a topologically ordered ground state with (2) projective stabilizer measurements to implement error correction and a universal set of logical gates. Our hybrid strategy has advantages over the traditional surface code architecture in error suppression and single-step stabilizer measurements, and is widely applicable to implementing stabilizer codes for quantum computation.
Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code
Energy Technology Data Exchange (ETDEWEB)
Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T
1985-04-01
This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.
Computational approaches towards understanding human long non-coding RNA biology.
Jalali, Saakshi; Kapoor, Shruti; Sivadas, Ambily; Bhartiya, Deeksha; Scaria, Vinod
2015-07-15
Long non-coding RNAs (lncRNAs) form the largest class of non-protein coding genes in the human genome. While a small subset of well-characterized lncRNAs has demonstrated their significant role in diverse biological functions like chromatin modifications, post-transcriptional regulation, imprinting etc., the functional significance of a vast majority of them still remains an enigma. Increasing evidence of the implications of lncRNAs in various diseases including cancer and major developmental processes has further enhanced the need to gain mechanistic insights into the lncRNA functions. Here, we present a comprehensive review of the various computational approaches and tools available for the identification and annotation of long non-coding RNAs. We also discuss a conceptual roadmap to systematically explore the functional properties of the lncRNAs using computational approaches.
Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;
2011-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^n → {0,1}^{O(n)} with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d = 2 then w = Θ(n (log n / log log n)²). (2) If d = 3 then w = Θ(n lg lg n). (3...
Application of Multiple Description Coding for Adaptive QoS Mechanism for Mobile Cloud Computing
Directory of Open Access Journals (Sweden)
Ilan Sadeh
2014-02-01
Multimedia transmission over cloud infrastructure is a hot research topic worldwide. It is very strongly related to video streaming, VoIP, mobile networks, and computer networks. The goal is a reliable integration of telephony, video and audio transmission, computing, and broadband transmission based on cloud computing. One sound approach to pave the way for mobile multimedia and cloud computing is Multiple Description Coding (MDC); i.e., the solution would be TCP/IP and similar protocols for the transmission of text files, and the Multiple Description Coding "Send and Forget" algorithm as the transmission method for multimedia over the cloud. Multiple Description Coding would improve the Quality of Service and would provide a new service of rate-adaptive streaming. This paper presents a new approach for improving the quality of multimedia and other services in the cloud by using Multiple Description Coding (MDC). First, the MDC Send and Forget algorithm is compared with existing protocols such as TCP/IP, UDP, RTP, etc. Then the achievable rate region for an MDC system is evaluated. Finally, a new subset of Quality of Service that considers blocking in a multi-terminal multimedia network and fidelity losses is considered.
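The core MDC idea, that either description alone still yields a usable lower-fidelity signal, can be sketched with the classic even/odd sample split. This is a textbook illustration, not the paper's Send and Forget algorithm:

```python
def mdc_encode(samples):
    """Split a sample stream into two descriptions: even- and odd-indexed
    samples.  Either description alone is a half-rate version of the signal."""
    return samples[0::2], samples[1::2]

def mdc_decode(even=None, odd=None):
    """Reconstruct from whichever descriptions arrived.  If one was lost,
    linearly interpolate the missing samples from the surviving description."""
    if even is not None and odd is not None:
        out = [0] * (len(even) + len(odd))
        out[0::2], out[1::2] = even, odd
        return out
    received = even if even is not None else odd
    out = []
    for i, s in enumerate(received):
        out.append(s)
        nxt = received[i + 1] if i + 1 < len(received) else s
        out.append((s + nxt) / 2)  # estimate of the lost in-between sample
    return out
```

Sending the two descriptions over independent paths means a single path failure degrades fidelity instead of interrupting the stream, which is the quality-of-service benefit the paper builds on.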
Ivanov, Anisoara; Neacsu, Andrei
2011-01-01
This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…
Methods, algorithms and computer codes for calculation of electron-impact excitation parameters
Bogdanovich, P; Stonys, D
2015-01-01
We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize the multireference atomic wavefunctions which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both of them employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. These versions differ only in the one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...
Computer code to interchange CDS and wave-drag geometry formats
Johnson, V. S.; Turnock, D. L.
1986-01-01
A computer program has been developed on the PRIME minicomputer to provide an interface for the passage of aircraft configuration geometry data between the Rockwell Configuration Development System (CDS) and a wireframe geometry format used by aerodynamic design and analysis codes. The interface program allows aircraft geometry which has been developed in CDS to be directly converted to the wireframe geometry format for analysis. Geometry which has been modified in the analysis codes can be transformed back to a CDS geometry file and examined for physical viability. Previously created wireframe geometry files may also be converted into CDS geometry files. The program provides a useful link between a geometry creation and manipulation code and analysis codes by providing rapid and accurate geometry conversion.
Users manual for CAFE-3D : a computational fluid dynamics fire code.
Energy Technology Data Exchange (ETDEWEB)
Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma (Alion Science and Technology, Albuquerque, NM)
2005-03-01
The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.
TEMP: a computer code to calculate fuel pin temperatures during a transient. [LMFBR
Energy Technology Data Exchange (ETDEWEB)
Bard, F E; Christensen, B Y; Gneiting, B C
1980-04-01
The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
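The three time-integration schemes TEMP offers are all instances of the one-parameter θ-method. The sketch below shows it for plane 1-D conduction with fixed-temperature boundaries, not the axisymmetric cylindrical formulation the code actually uses:

```python
import numpy as np

def theta_step(T, alpha, dx, dt, theta):
    """One time step of the 1-D heat equation dT/dt = alpha * d2T/dx2 with
    fixed-temperature (Dirichlet) boundaries.  theta = 0 is the explicit
    method, theta = 1 the fully implicit method, theta = 0.5 Crank-Nicolson."""
    r = alpha * dt / dx**2
    n = len(T)
    # Right-hand side: old solution plus the explicit share of the Laplacian.
    rhs = T.copy()
    rhs[1:-1] += (1 - theta) * r * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Left-hand side: (I - theta*r*L) acting on the unknown new solution.
    A = np.eye(n)  # boundary rows stay identity -> Dirichlet conditions
    for i in range(1, n - 1):
        A[i, i - 1] = -theta * r
        A[i, i] = 1 + 2 * theta * r
        A[i, i + 1] = -theta * r
    return np.linalg.solve(A, rhs)
```

With θ = 0 the step is cheap but only conditionally stable (r ≤ 1/2); θ = 1 and θ = 0.5 are unconditionally stable, with Crank-Nicolson second-order accurate in time.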
NASCRAC - A computer code for fracture mechanics analysis of crack growth
Harris, D. O.; Eason, E. D.; Thomas, J. M.; Bianca, C. J.; Salter, L. D.
1987-01-01
NASCRAC - a computer code for fracture mechanics analysis of crack growth - is described in this paper. The need for such a code is increasing as requirements grow for high reliability and low weight in aerospace components. The code is comprehensive and versatile, as well as user friendly. The major purpose of the code is calculation of fatigue, corrosion fatigue, or stress corrosion crack growth, and a variety of crack growth relations can be selected by the user. Additionally, crack retardation models are included. A very wide variety of stress intensity factor solutions are contained in the code, and extensive use is made of influence functions. This allows complex stress gradients in three-dimensional crack problems to be treated easily and economically. In cases where previous stress intensity factor solutions are not adequate, new influence functions can be calculated by the code. Additional features include incorporation of J-integral solutions from the literature and a capability for estimating elastic-plastic stress redistribution from the results of a corresponding elastic analysis. An example problem is presented which shows typical outputs from the code.
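Fatigue crack growth of the kind such a code integrates can be sketched with the Paris law and naive cycle-by-cycle forward-Euler stepping. The constants and geometry factor below are illustrative, and NASCRAC's own growth relations, retardation models, and integration scheme are considerably more general:

```python
import math

def grow_crack(a0, delta_sigma, C, m, n_cycles, geometry=1.0):
    """Integrate the Paris crack-growth law da/dN = C * (dK)^m cycle by cycle,
    with the stress-intensity range dK = geometry * dsigma * sqrt(pi * a).
    All quantities in consistent units; a0 is the initial crack length."""
    a = a0
    for _ in range(n_cycles):
        dK = geometry * delta_sigma * math.sqrt(math.pi * a)
        a += C * dK**m  # growth increment for this load cycle
    return a
```

Because dK grows with crack length, the increment per cycle accelerates, which is why a fracture mechanics code also tracks dK against the material's critical value to predict final failure.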
A proposed framework for computational fluid dynamics code calibration/validation
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.
1993-12-31
The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctly separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.
Multiple frequencies sequential coding for SSVEP-based brain-computer interface.
Directory of Open Access Journals (Sweden)
Yangsong Zhang
Full Text Available BACKGROUND: Steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) has become one of the most promising modalities for a practical noninvasive BCI system. Owing both to the limited refresh rate of liquid crystal display (LCD) or cathode ray tube (CRT) monitors, and to the specific physiological response property that only a small number of stimuli at certain frequencies can evoke strong SSVEPs, the available frequencies for SSVEP stimuli are limited. Therefore, the traditional frequency-coding protocols may not be able to code enough targets, which poses a big challenge for the design of a practical SSVEP-based BCI. This study aimed to provide an innovative coding method to tackle this problem. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present a novel protocol termed multiple frequencies sequential coding (MFSC) for SSVEP-based BCI. In MFSC, multiple frequencies are used sequentially in each cycle to code the targets. To fulfill the sequential coding, each cycle is divided into several coding epochs, and during each epoch a certain frequency is used. Different frequencies or the same frequency can be presented in the coding epochs, and different epoch sequences correspond to different targets. To show the feasibility of MFSC, we used two frequencies to realize four targets and carried out an offline experiment. The current study shows that: (1) MFSC is feasible and efficient; (2) the performance of an SSVEP-based BCI based on MFSC can be comparable to some existing systems. CONCLUSIONS/SIGNIFICANCE: The proposed protocol could potentially encode many more targets with the limited available frequencies than the traditional frequency-coding protocol. The efficiency of the new protocol was confirmed with real experimental data. We propose that an SSVEP-based BCI under MFSC might be a promising choice in the future.
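The counting argument behind MFSC can be sketched in a few lines: with f available frequencies and k coding epochs per cycle, f^k targets can be distinguished. A minimal illustration (the 10 and 12 Hz values are hypothetical, not taken from the study):

```python
from itertools import product

def mfsc_codes(frequencies, epochs_per_cycle):
    """Enumerate MFSC target codes: each target is identified by the
    sequence of stimulation frequencies shown during the coding
    epochs of one cycle."""
    return list(product(frequencies, repeat=epochs_per_cycle))

# Two frequencies and two epochs per cycle yield 2**2 = 4 targets,
# the configuration used in the offline experiment described above.
codes = mfsc_codes([10.0, 12.0], 2)
```

With the same two frequencies, a traditional frequency-coding protocol could distinguish only two targets; lengthening the cycle to k epochs grows the target set exponentially, which is the point of the protocol.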
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as a secret key shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost-imaging optical system. The measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerical simulation results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. In the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
MOLOCH computer code for molecular-dynamics simulation of processes in condensed matter
Directory of Open Access Journals (Sweden)
Derbenev I.V.
2011-01-01
Full Text Available Theoretical and experimental investigation into the properties of condensed matter is one of the mainstreams of RFNC-VNIITF scientific activity. The method of molecular dynamics (MD) is an innovative method of theoretical materials science. Modern supercomputers allow the direct simulation of collective effects in multibillion-atom samples, making it possible to model physical processes at the atomistic level, including material response to dynamic load, radiation damage, the influence of defects and alloying additions on material mechanical properties, and the aging of actinides. Over the past ten years, the computer code MOLOCH has been developed at RFNC-VNIITF. It is a parallel code suitable for massively parallel computing. Modern programming techniques were used to make the code almost 100% efficient. Practically all instruments required for modelling were implemented in the code: a potential builder for different materials, simulation of physical processes in arbitrary 3D geometry, and calculated-data processing. A set of tests was developed to analyse algorithm efficiency. It can be used to compare codes with different MD implementations against each other.
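MOLOCH's internals are not given in the abstract, but the core time-integration loop of essentially every MD code is the velocity-Verlet scheme. A one-particle sketch (a harmonic force is chosen only so that the result is checkable; this is not MOLOCH's code):

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Advance one particle with the velocity-Verlet scheme, the
    standard time integrator in molecular-dynamics codes."""
    a = force(x) / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass           # force at the new position
        v += 0.5 * (a + a_new) * dt       # velocity update (averaged)
        a = a_new
    return x, v

# Harmonic-oscillator sanity check: total energy should be conserved
# closely, the usual first test in an MD code's test suite.
k, m = 1.0, 1.0
energy = lambda x, v: 0.5 * m * v * v + 0.5 * k * x * x
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, 0.01, 1000)
drift = abs(energy(x, v) - 0.5)
```

Energy conservation over long runs is exactly the kind of algorithm-efficiency test the abstract alludes to.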
Directory of Open Access Journals (Sweden)
Daniel Litinski
2017-09-01
Full Text Available We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall–superconductor hybrids.
Once-through CANDU reactor models for the ORIGEN2 computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A.G.; Bjerke, M.A.
1980-11-01
Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.
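ORIGEN2 integrates large coupled systems of buildup-and-decay equations with matrix methods; the underlying physics for a single two-member chain is the Bateman solution, sketched here as a toy illustration (the decay constants are arbitrary, not CANDU data):

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for a two-member decay chain
    A -> B -> (stable): returns (N_A(t), N_B(t)) for initial
    inventory n1_0 of A and none of B.  Requires lam1 != lam2."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    return n1, n2

# Arbitrary decay constants (per unit time) and elapsed time.
n1, n2 = bateman_two(1.0, 0.1, 0.05, 5.0)
```

A full depletion code adds neutron-induced transmutation terms (the THERM, RES, and FAST flux parameters above weight exactly those reaction rates), but the solution structure is the same.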
Adaptive Mesh Computations with the PLUTO Code for Astrophysical Fluid Dynamics
Mignone, A.; Zanni, C.
2012-07-01
We present an overview of the current version of the PLUTO code for numerical simulations of astrophysical fluid flows over block-structured adaptive grids. The code preserves its modular framework for the solution of the classical or relativistic magnetohydrodynamics (MHD) equations while exploiting the distributed infrastructure of the Chombo library for multidimensional adaptive mesh refinement (AMR) parallel computations. Equations are evolved in time using an explicit second-order, dimensionally unsplit time stepping scheme based on a cell-centered discretization of the flow variables. Efficiency and robustness are shown through multidimensional benchmarks and applications to problems of astrophysical relevance.
Experimental assessment of computer codes used for safety analysis of integral reactors
Energy Technology Data Exchange (ETDEWEB)
Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)
1995-09-01
Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation, and a passive RHRS through in-reactor HXs. These features defined the main trends in the experimental investigations and verification efforts for the computer codes applied. The paper briefly reviews the performed experimental investigation of the thermohydraulics of AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. An assessment of the applicability of RELAP5/mod3 for accident analysis in integral reactors is presented.
The MELTSPREAD-1 computer code for the analysis of transient spreading in containments
Energy Technology Data Exchange (ETDEWEB)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.
1990-01-01
A one-dimensional, multicell, Eulerian finite difference computer code (MELTSPREAD-1) has been developed to provide an improved prediction of the gravity-driven spreading and thermal interactions of molten corium flowing over a concrete or steel surface. In this paper, the modeling incorporated into the code is described and the spreading models are benchmarked against a simple "dam break" problem as well as water-simulant spreading data obtained in a scaled apparatus of the Mk I containment. Results are also presented for a scoping calculation of the spreading behavior and shell thermal response in the full-scale Mk I system following vessel meltthrough. 24 refs., 15 figs.
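The "dam break" benchmark mentioned above has a classical closed-form result, Ritter's solution for an ideal dam break over a dry, frictionless bed, which is exactly the kind of analytical target a spreading model can be verified against. A sketch:

```python
import math

def ritter_profile(x, t, h0, g=9.81):
    """Ritter's analytical solution for an ideal dam break over a
    dry, frictionless bed: water depth h(x, t), dam initially at x=0
    holding depth h0 for x < 0."""
    c0 = math.sqrt(g * h0)            # shallow-water wave speed
    if x <= -c0 * t:
        return h0                      # undisturbed reservoir
    if x >= 2.0 * c0 * t:
        return 0.0                     # ahead of the advancing front
    return (2.0 * c0 - x / t) ** 2 / (9.0 * g)   # rarefaction fan

h0 = 1.0
h_dam = ritter_profile(0.0, 1.0, h0)   # depth at the dam stays 4/9 h0
```

The constant depth 4h0/9 at the dam site and the front speed 2*sqrt(g*h0) are the two quantities typically compared in such benchmarks.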
Energy Technology Data Exchange (ETDEWEB)
Strenge, D.L.; Peloquin, R.A.
1981-04-01
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.
V.S.O.P. (99/05) computer code system
Energy Technology Data Exchange (ETDEWEB)
Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.
2005-11-01
V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It comprises the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics, and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate reactor operation from the initial core towards the equilibrium core. V.S.O.P. (99/05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions concern the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebble-bed cores. This latest code version was developed and tested under the Windows XP operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required, if desired, for storage of the source code (~65,000 Fortran statements). (orig.)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters
Energy Technology Data Exchange (ETDEWEB)
Reginatto, M.; Goldhagen, P.
1998-06-01
The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.
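MAXED's actual unfolding handles full detector response functions; the maximum entropy principle it applies can be illustrated on a toy problem: find the least-biased discrete spectrum consistent with a single measured moment. All numbers below are hypothetical, and the solution takes the familiar exponential form p_i ∝ exp(-λE_i) with the multiplier found by bisection:

```python
import math

def maxent_with_mean(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over discrete energies subject to
    a fixed mean: p_i ∝ exp(-lam * E_i).  The Lagrange multiplier lam
    is found by bisection, using the fact that the constrained mean
    decreases monotonically as lam grows."""
    def mean(lam):
        w = [math.exp(-lam * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid    # mean still too high: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# Symmetric constraint: the least-biased spectrum is uniform.
spectrum = maxent_with_mean([1.0, 2.0, 3.0], 2.0)
```

The real unfolding problem replaces the single mean constraint with one constraint per Bonner sphere reading, but the variational structure is the same.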
The MELTSPREAD-1 computer code for the analysis of transient spreading in containments
Energy Technology Data Exchange (ETDEWEB)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.
1990-01-01
Transient spreading of molten core materials is important in the assessment of severe-accident sequences for Mk-I boiling water reactors (BWRs). Of interest is whether core materials are able to spread over the pedestal and drywell floors to contact the containment shell and cause thermally induced shell failure, or whether heat transfer to underlying concrete and overlying water will freeze the melt short of the shell. The development of a computational capability for the assessment of this problem was initiated by Sienicki et al. in the form of the MELTSPREAD-0 code. Development is continuing in the form of the MELTSPREAD-1 code, which contains new models for phenomena that were ignored in the earlier code. This paper summarizes these new models, provides benchmarking calculations of the relocation model against an analytical solution as well as simulant spreading data, and summarizes the results of a scoping calculation for the full Mk-I system.
Computer code simulations of the formation of Meteor Crater, Arizona - Calculations MC-1 and MC-2
Roddy, D. J.; Schuster, S. H.; Kreyenhagen, K. N.; Orphal, D. L.
1980-01-01
It has been widely accepted that hypervelocity impact processes play a major role in the evolution of the terrestrial planets and satellites. In connection with the development of quantitative methods for the description of impact cratering, it was found that the results provided by two-dimensional finite-difference computer codes are greatly improved when initial impact conditions can be defined and when the numerical results can be tested against field and laboratory data. In order to address this problem, a numerical code study of the formation of Meteor (Barringer) Crater, Arizona, has been undertaken. A description is presented of the major results from the first two code calculations, MC-1 and MC-2, that have been completed for Meteor Crater. Both calculations used an iron meteorite with a kinetic energy of 3.8 megatons. Calculation MC-1 had an impact velocity of 25 km/sec and MC-2 had an impact velocity of 15 km/sec.
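A quick consistency check on the stated impact conditions: since both calculations fix the kinetic energy at 3.8 megatons of TNT equivalent, the slower MC-2 impact implies a proportionally more massive iron impactor, because m = 2E/v². This is illustrative arithmetic only, not a result from the paper:

```python
# 1 megaton of TNT = 4.184e15 J (standard convention).
MT = 4.184e15
E = 3.8 * MT                          # kinetic energy shared by MC-1 and MC-2

mass_mc1 = 2.0 * E / (25.0e3) ** 2    # 25 km/s impact (MC-1)
mass_mc2 = 2.0 * E / (15.0e3) ** 2    # 15 km/s impact (MC-2)
```

The mass ratio between the two cases is (25/15)² ≈ 2.8, so varying the assumed velocity at fixed energy is really a trade of impactor mass against speed.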
WOLF: a computer code package for the calculation of ion beam trajectories
Energy Technology Data Exchange (ETDEWEB)
Vogel, D.L.
1985-10-01
The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles will then be traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram PISA forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results to be performed.
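WOLF discretizes Poisson's equation on a triangular lattice; the relaxation idea behind such field solvers is easier to see on a square grid. A minimal Jacobi sketch with Dirichlet boundaries (a generic illustration, not WOLF's actual scheme):

```python
def solve_poisson(grid_n, rhs, boundary=0.0, sweeps=500):
    """Jacobi relaxation for nabla^2(phi) = rhs on the unit square
    with fixed (Dirichlet) boundary values.  grid_n points per side."""
    h = 1.0 / (grid_n - 1)
    phi = [[boundary] * grid_n for _ in range(grid_n)]
    for _ in range(sweeps):
        new = [row[:] for row in phi]
        for i in range(1, grid_n - 1):
            for j in range(1, grid_n - 1):
                new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                    + phi[i][j - 1] + phi[i][j + 1]
                                    - h * h * rhs(i * h, j * h))
        phi = new
    return phi

# Uniform unit source: -nabla^2(phi) = 1, for which the peak value at
# the center of the unit square is about 0.0737.
phi = solve_poisson(17, lambda x, y: -1.0)
```

A production solver replaces Jacobi sweeps with something far faster (SOR, multigrid, or direct methods), but the stencil is the same.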
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
Energy Technology Data Exchange (ETDEWEB)
McCann, R.A.; Lowery, P.S.
1987-10-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
Energy Technology Data Exchange (ETDEWEB)
McCann, R.A.; Lowery, P.S.; Lessor, D.L.
1987-09-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.
DualSPHysics: Open-source parallel CFD solver based on Smoothed Particle Hydrodynamics (SPH)
Crespo, A. J. C.; Domínguez, J. M.; Rogers, B. D.; Gómez-Gesteira, M.; Longshaw, S.; Canelas, R.; Vacondio, R.; Barreiro, A.; García-Feal, O.
2015-02-01
DualSPHysics is a hardware-accelerated Smoothed Particle Hydrodynamics code developed to solve free-surface flow problems. DualSPHysics is an open-source code developed and released under the terms of the GNU General Public License (GPLv3). Along with the source code, complete documentation that makes compiling and running the source files straightforward is also distributed. The code has been shown to be efficient and reliable. The parallel computing power of graphics processing units (GPUs) is used to accelerate DualSPHysics by up to two orders of magnitude compared to the performance of the serial version.
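At the heart of any SPH code is the smoothing kernel. SPH codes typically offer several; the cubic spline is a common representative, shown here with its standard 2D normalization and checked by integrating the kernel over the plane (an illustration of the method, not DualSPHysics source):

```python
import math

def cubic_spline_2d(r, h):
    """Cubic spline SPH smoothing kernel in 2D with compact support
    2h; the normalization constant is 10 / (7 * pi * h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

# A kernel must integrate to 1 over the plane; check in polar
# coordinates with a midpoint rule over the support [0, 2h].
h, n = 1.0, 2000
dr = 2.0 * h / n
total = sum(cubic_spline_2d((i + 0.5) * dr, h)
            * 2.0 * math.pi * (i + 0.5) * dr * dr
            for i in range(n))
```

Field quantities in SPH are then weighted sums of particle contributions with this kernel, which is what the GPU acceleration above parallelizes.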
Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers
Energy Technology Data Exchange (ETDEWEB)
Nataf, J.M.; Winkelmann, F.
1992-09-01
We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
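SPARK's symbolic interface relies on real computer algebra; the essential idea of generating solution code from an equation entered in symbolic form can be sketched with a toy expression-tree emitter. The heat-flow rearrangement and all names here are illustrative, not SPARK's actual representation:

```python
def emit(expr):
    """Generate source text from a small symbolic expression tree.
    Numbers are literals, strings are variable names, and tuples are
    (operator, left, right) nodes."""
    if isinstance(expr, (int, float)):
        return repr(expr)
    if isinstance(expr, str):          # variable name
        return expr
    op, left, right = expr
    return "(%s %s %s)" % (emit(left), op, emit(right))

# Symbolic form of a conduction residual q = k*(T1 - T2)/dx,
# rearranged (as a symbolic interface would) to solve for T1.
solution = "T1 = " + emit(("+", "T2", ("/", ("*", "q", "dx"), "k")))
```

A real symbolic interface also performs the rearrangement itself and emits compilable Fortran or C rather than a string, but the tree-walking structure is the same.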
Computing element evolution towards Exascale and its impact on legacy simulation codes
Energy Technology Data Exchange (ETDEWEB)
Colin de Verdiere, Guillaume J.L. [CEA, DAM, DIF, Arpajon (France)
2015-12-15
In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes. (orig.)
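The vectorization point can be made concrete: the same stencil written as a scalar loop and in array-at-a-time form, the latter being what SIMD units and vectorizing compilers need to deliver throughput (NumPy stands in here for Fortran array syntax or compiler autovectorization):

```python
import numpy as np

def smooth_loop(a):
    """Three-point smoothing written as a scalar loop: correct, but
    opaque to SIMD hardware when expressed this way in many languages."""
    out = a.copy()
    for i in range(1, len(a) - 1):
        out[i] = 0.25 * a[i - 1] + 0.5 * a[i] + 0.25 * a[i + 1]
    return out

def smooth_vectorized(a):
    """The same update as whole-array (vector) operations: one
    expression over shifted views, the form vector units exploit."""
    out = a.copy()
    out[1:-1] = 0.25 * a[:-2] + 0.5 * a[1:-1] + 0.25 * a[2:]
    return out

x = np.linspace(0.0, 1.0, 64) ** 2
```

The two forms are numerically identical; the rewrite the article argues for is largely a matter of exposing this structure to the compiler and runtime.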
Chen, Y. S.
1986-03-01
In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center
Energy Technology Data Exchange (ETDEWEB)
Carter, B.J.; Maskewitz, B.F.
1985-04-01
This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and their CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art.
Energy Technology Data Exchange (ETDEWEB)
Pannala, S.; D'Azevedo, E.; Zacharia, T.
2002-02-26
The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiation thermal calculations can be calculated once and for all at the beginning of the simulation. The cost of online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts off with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts this project leverages on. The results are discussed in section E. This report ends with conclusions and future scope of
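A net-radiation model of the kind described rests on view factors, which must satisfy the reciprocity relation A1·F12 = A2·F21; this is also the standard correctness check for a view-factor code, alongside analytical cases like the one mentioned above. A sketch using the textbook formula for coaxial parallel disks (the geometry values are arbitrary):

```python
import math

def disk_to_disk_view_factor(r1, r2, L):
    """Analytical view factor F12 from a disk of radius r1 to a
    coaxial parallel disk of radius r2 at separation L (standard
    radiative-exchange result)."""
    R1, R2 = r1 / L, r2 / L
    S = 1.0 + (1.0 + R2 * R2) / (R1 * R1)
    return 0.5 * (S - math.sqrt(S * S - 4.0 * (R2 / R1) ** 2))

r1, r2, L = 1.0, 2.0, 1.0
F12 = disk_to_disk_view_factor(r1, r2, L)   # small disk -> large disk
F21 = disk_to_disk_view_factor(r2, r1, L)   # large disk -> small disk
A1, A2 = math.pi * r1 ** 2, math.pi * r2 ** 2
```

In a code like the one described, numerically computed factors for arbitrary mesh elements are validated against exactly such closed-form pairs and against reciprocity and row-sum (enclosure) constraints.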
Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il [Korea Atomic Energy Res. Inst., Taejon (Korea, Republic of)
1994-07-01
This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA have been converted from KOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop the dynamic risk management tool PEPSI and the determination of inspection and test priorities of motor-operated valves based on risk importance worths have been studied. (Author).
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Monte Carlo dose calculation, though time-consuming, has become feasible owing to advances in computer technology. Recent advances, however, have come mainly from the emergence of multi-core high-performance computers, so parallel computing is key to achieving good software performance. The Monte Carlo simulation code PHITS provides two parallel computing functions: distributed-memory parallelization using message passing interface (MPI) protocols and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions along with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
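The split-and-reduce pattern common to both parallel modes can be illustrated with a small sketch. This is not PHITS code (PHITS is a Fortran code using real MPI and OpenMP); Python threads stand in for shared-memory workers here, and the "history" is a hypothetical placeholder for particle transport:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def tally_chunk(n_histories, seed):
    """Run one worker's chunk of Monte Carlo histories; return its partial tally."""
    rng = random.Random(seed)  # independent random stream per worker
    total = 0.0
    for _ in range(n_histories):
        # Toy 'history': deposit a random energy fraction (stand-in for transport).
        total += rng.random()
    return total

def parallel_tally(n_histories, n_workers):
    """Shared-memory style parallelism (in the spirit of OpenMP threads):
    split the histories across workers, then reduce the partial tallies."""
    chunk = n_histories // n_workers
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(tally_chunk, [chunk] * n_workers, range(n_workers))
    return sum(parts)
```

In a distributed-memory (MPI-style) version, each rank would run `tally_chunk` on its own memory and the final sum would be a message-passing reduction rather than an in-process one.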
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
Energy Technology Data Exchange (ETDEWEB)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated using, as an example, the problem of estimating the uncertainty of a parameter in the model describing the transition to post-burnout heat transfer used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, its application can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
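The statistical-processing stage can be sketched as follows. This is an illustrative assumption, not the KORSAR procedure itself: given best-fit parameter multipliers recovered from case calculations against experiments, one estimates a mean, a standard deviation, and a (here, 2-sigma) range for a Gaussian distribution law.

```python
import statistics

def estimate_parameter_distribution(multipliers):
    """Given best-fit correlation multipliers from a set of case calculations
    against experiments, estimate the mean, spread, and variation range of
    the model parameter under a Gaussian distribution law.

    The 2-sigma band is a hypothetical choice for the narrowed range."""
    mu = statistics.mean(multipliers)
    sigma = statistics.stdev(multipliers)  # sample standard deviation
    return {"mean": mu,
            "stdev": sigma,
            "range": (mu - 2.0 * sigma, mu + 2.0 * sigma)}
```

With sufficiently representative experimental data, the fitted range can replace a wider, expert-assigned uniform interval, reducing conservatism.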
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal
2013-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) if d = 2, then w = Θ(n (lg n / lg lg n)^2); (2) if d = 3, then w...
Chia-Chang Hu
2005-01-01
A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique resu...
Method for computing self-consistent solution in a gun code
Nelson, Eric M
2014-09-23
Complex gun code computations can be made to converge more quickly through the selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues associated with the respective error residuals. Relaxation values can be selected based on these eigenvalues so that the error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
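The alternating-relaxation idea can be sketched on a toy diagonal system. This is a hypothetical illustration, not the gun-code algorithm: each relaxation value ω = 1/(1 − λ) annihilates the error mode with eigenvalue λ; either value used alone amplifies the other mode, yet alternating the two removes both error components.

```python
def relaxed_iteration(x, g, b, omega):
    """One relaxation sweep of x <- x + omega*(G x + b - x),
    applied componentwise for a diagonal operator G."""
    return [xi + omega * (gi * xi + bi - xi)
            for xi, gi, bi in zip(x, g, b)]

# Toy problem: solve the fixed point x = G x + b with eigenvalues 2 and -3
# (plain iteration, omega = 1, diverges).  Exact solution: x_i = b_i / (1 - g_i).
g = [2.0, -3.0]
b = [1.0, 8.0]
exact = [b_i / (1.0 - g_i) for g_i, b_i in zip(g, b)]   # [-1.0, 2.0]

# Relaxation values targeting each error eigenvalue: omega = 1 / (1 - lambda).
# Each one alone is unstable for the other mode; alternated, they converge.
omegas = [1.0 / (1.0 - lam) for lam in g]                # [-1.0, 0.25]

x = [0.0, 0.0]
for omega in omegas:          # alternate the two relaxation values
    x = relaxed_iteration(x, g, b, omega)
```

For this diagonalizable toy case the two alternated sweeps reduce both error modes exactly; in a real gun code the eigenvalues come from an eigenvalue analysis of the residuals and the reduction is gradual over successive iterations.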
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;
2012-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d = 2 then w = Θ(n (log n / log log n)^2). (2) If d...
DEFF Research Database (Denmark)
Johansen, Peter Meincke
1996-01-01
New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes...
Walowit, Jed A.
1994-01-01
A viewgraph presentation is made showing the capabilities of the computer code SPIRALI. Overall capabilities of SPIRALI include: computes rotor dynamic coefficients, flow, and power loss for cylindrical and face seals; treats turbulent, laminar, Couette, and Poiseuille dominated flows; fluid inertia effects are included; rotor dynamic coefficients in three (face) or four (cylindrical) degrees of freedom; includes effects of spiral grooves; user definable transverse film geometry including circular steps and grooves; independent user definable friction factor models for rotor and stator; and user definable loss coefficients for sudden expansions and contractions.
Multilevel Coding Schemes for Compute-and-Forward with Flexible Decoding
Hern, Brett
2011-01-01
We consider the design of coding schemes for the wireless two-way relaying channel when there is no channel state information at the transmitter. In the spirit of the compute and forward paradigm, we present a multilevel coding scheme that permits computation (or, decoding) of a class of functions at the relay. The function to be computed (or, decoded) is then chosen depending on the channel realization. We define such a class of functions which can be decoded at the relay using the proposed coding scheme and derive rates that are universally achievable over a set of channel gains when this class of functions is used at the relay. We develop our framework with general modulation formats in mind, but numerical results are presented for the case where each node transmits using the QPSK constellation. Numerical results with QPSK show that the flexibility afforded by our proposed scheme results in substantially higher rates than those achievable by always using a fixed function or by adapting the function at the ...
Molecular hydrodynamics from memory kernels
Lesnicki, Dominika; Carof, Antoine; Rotenberg, Benjamin
2016-01-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as $t^{-3/2}$. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, at odds with incompressible hydrodynamics predictions. We finally discuss the various contributions to the friction, the associated time scales and the cross-over between the molecular and hydrodynamic regimes upon increasing the solute radius.
Directory of Open Access Journals (Sweden)
Wenddabo Olivier Sawadogo
2012-01-01
The use of mathematical modeling as a decision-support tool is not common in Africa for solving development problems. In this article we present a numerical simulation of the groundwater level of the plain of Gondo (Burkina Faso) and a sensitivity analysis of the hydrodynamic parameters. The domain contains fractures whose hydraulic coefficients are lower than those of the rock. Our contribution is to provide brief answers to the problem posed in the thesis of Mr. KOUSSOUBE [1], namely what causes the observed piezometric levels and what impact surface water has on the piezometry. The mathematical flow model was solved by programming the finite element method in FreeFem++ [2]. A local refinement of the mesh at the fractures was used. We then conducted a sensitivity analysis to determine which hydrodynamic parameters most influence the solution. The method used for the sensitivity analysis is based on computing the gradient via the adjoint equation and requires great computational power. To remedy this, we used distributed computing and ran our application on the Moroccan grid (MaGrid), which allowed us to reduce the computation time. The results highlight the role of fractures and the contribution of surface water to the evolution of the piezometric level of the plain of Gondo, and identify the parameters that most strongly influence the piezometric level.
Multiphase integral reacting flow computer code (ICOMFLO): User's guide
Energy Technology Data Exchange (ETDEWEB)
Chang, S.L.; Lottes, S.A.; Petrick, M.
1997-11-01
A copyrighted computational fluid dynamics computer code, ICOMFLO, has been developed for the simulation of multiphase reacting flows. The code solves conservation equations for gaseous species and droplets (or solid particles) of various sizes. General conservation laws, expressed by elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation. Associated phenomenological submodels of the code include integral combustion, two-parameter turbulence, particle evaporation, and interfacial submodels. A newly developed integral combustion submodel, replacing an Arrhenius-type differential reaction submodel, has been implemented to improve numerical convergence and enhance numerical stability. A two-parameter turbulence submodel is modified for both gas and solid phases. An evaporation submodel treats not only droplet evaporation but also size dispersion. Interfacial submodels use correlations to model interfacial momentum and energy transfer. The ICOMFLO code solves the governing equations in three steps. First, a staggered grid system is constructed in the flow domain. The staggered grid system defines gas velocity components on the surfaces of a control volume, while the other flow properties are defined at the volume center. A blocked-cell technique is used to handle complex geometry. Then, the partial differential equations are integrated over each control volume and transformed into discrete difference equations. Finally, the difference equations are solved iteratively by using a modified SIMPLER algorithm. The results of the solution include gas flow properties (pressure, temperature, density, species concentration, velocity, and turbulence parameters) and particle flow properties (number density, temperature, velocity, and void fraction). The code has been used in many engineering applications, such as coal-fired combustors, air
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H.; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.
2016-01-01
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946
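The two crossbar kernels can be written down directly. A minimal sketch in plain Python, assuming idealized devices (no read noise, write nonlinearity, or wire resistance): the parallel read computes column currents as a vector-matrix product over the conductance matrix, and the parallel write applies a rank-1 outer-product update.

```python
def crossbar_read(conductances, voltages):
    """Parallel read: column currents form the vector-matrix product
    i_j = sum_i v_i * g_ij (Ohm's law summed by Kirchhoff's current law)."""
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(n_rows))
            for j in range(n_cols)]

def crossbar_write(conductances, row_v, col_v, lr=1.0):
    """Parallel write: rank-1 update g_ij += lr * x_i * y_j, as programmed
    by simultaneous row and column pulses."""
    for i, xi in enumerate(row_v):
        for j, yj in enumerate(col_v):
            conductances[i][j] += lr * xi * yj
    return conductances
```

Because every cross-point contributes simultaneously, both kernels complete in effectively constant time on hardware, which is the source of the O(N) energy advantage over a digital memory that must fetch each element.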
Directory of Open Access Journals (Sweden)
Sapan eAgarwal
2016-01-01
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
Error threshold in topological quantum-computing models with color codes
Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.
2009-03-01
Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D by mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbations of the plaquette interaction terms, thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.
Energy Technology Data Exchange (ETDEWEB)
Heltemes, T A; Prochaska, A E; Moses, G A, E-mail: taheltemes@wisc.edu [Fusion Technology Institute, University of Wisconsin - Madison, 1500 Engineering Dr., Madison WI 53706 (United States)
2010-08-01
The BUCKY 1-D radiation hydrodynamics code has been used to simulate the dynamic thermo-mechanical interaction between a xenon gas-filled chamber and tungsten first-wall armor with an indirect-drive laser fusion target for the LIFE reactor design. Two classes of simulations were performed: (1) short-time (0-2 ms) simulations to fully capture the hydrodynamic effects of the introduction of the LIFE indirect-drive target x-ray and ion threat spectra and (2) long-time (2-70 ms) simulations starting with quiescent chamber conditions characteristic of those at 2 ms to estimate xenon plasma cooling between target implosions at 13 Hz. The short-time simulation results reported are: (1) the plasma hydrodynamics of the xenon in the chamber, (2) dynamic overpressure on the tungsten armor, and (3) time-dependent temperatures in the tungsten armor. The ramifications of local thermodynamic equilibrium (LTE) vs. non-LTE opacity models are also addressed.
Energy Technology Data Exchange (ETDEWEB)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.
1992-01-01
A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.
Energy Technology Data Exchange (ETDEWEB)
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.
1992-04-01
A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.
V.S.O.P. (99/09) Computer Code System for Reactor Physics and Fuel Cycle Simulation; Version 2009
Rütten, H.-J.; Haas, K. A.; Brockmann, H.; Ohlig, U.; Pohl, C.; Scherer, W.
2010-01-01
V.S.O.P. (99/09) represents the further development of V.S.O.P. (99/05). Compared to its precursor, the code system has again been improved in many details. The main motivation for this new code version was to update the basic nuclear libraries used by the code system. Thus, all cross-section libraries involved in the code are now based on ENDF/B-VII. V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of...
Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes
Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the techniques used in the implementation of the tool and discuss its application to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and to achieve good performance that exceeds that of some commercial tools.
On the Computational Complexity of Sphere Decoder for Lattice Space-Time Coded MIMO Channel
Abediseid, Walid
2011-01-01
The exact complexity analysis of the basic sphere decoder for general space-time codes applied to the multi-input multi-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, LAttice Space-Time (LAST) coded MIMO channel. Specifically, we derive the asymptotic tail distribution of the decoder's computational complexity in the high signal-to-noise ratio (SNR) regime. For the uncoded $M\times N$ MIMO channel (e.g., V-BLAST), the analysis in [6] revealed that the tail distribution of such a decoder is of Pareto type with tail exponent $N-M+1$. In our analysis, we show that the tail exponent of the sphere decoder's complexity distribution is equivalent to the diversity-multiplexing tradeoff achieved by LAST coding and lattice decoding schemes. This leads to extending the channel's tradeoff to include the decoding complexity. Moreover, we show analytically how minimum-mean square-error decisio...
Energy Technology Data Exchange (ETDEWEB)
Chung, Chang Hyun; You, Young Woo; Huh, Chang Wook; Kim, Ju Yeul; Kim Do Hyung; Kim, Yoon Ik; Yang, Hui Chang [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hansung University, Seoul (Korea, Republic of)
1997-07-01
The objective of this study is to develop an appropriate procedure for evaluating human error during low power/shutdown (LP/S) operation and a computer code that calculates human error probabilities (HEPs) using this framework. The applicability of typical HRA methodologies to LP/S is assessed, and a new HRA procedure, SEPLOT (Systematic Evaluation Procedure for LP/S Operation Tasks), which reflects the characteristics of LP/S, is developed by selecting and categorizing human actions from a review of existing studies. This procedure is applied to evaluate the LOOP (Loss of Off-site Power) sequence, and the HEPs obtained using SEPLOT are used for quantitative evaluation of the core uncovery frequency. In this evaluation, one of the dynamic reliability computer codes, DYLAM-3, which has advantages over the ET/FT approach, is used. The SEPLOT procedure developed in this study provides a basis and framework for human error evaluation. It also makes it possible to assess the dynamic aspects of accidents leading to core uncovery by applying the HEPs obtained using SEPLOT as input data to the DYLAM-3 code. Ultimately, the results of this study are expected to contribute to improved safety in LP/S operation and to reduced uncertainties in risk. 57 refs., 17 tabs., 33 figs. (author)
Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1995-07-01
This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this subproject, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low power situations, (2) computer code package development for level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for the over-draining event during RCS low water inventory, i.e., mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is the use of the containment spray system as a backup to the shutdown cooling system, and the other is the installation of two independent level indication systems. The procedure change is identified as preferable to hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs., 15 tabs., 59 refs. (Author).
Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide
Energy Technology Data Exchange (ETDEWEB)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
1983-02-01
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
SEACC: the systems engineering and analysis computer code for small wind systems
Energy Technology Data Exchange (ETDEWEB)
Tu, P.K.C.; Kertesz, V.
1983-03-01
The systems engineering and analysis (SEA) computer program (code) evaluates complete horizontal-axis SWECS performance. Rotor power output as a function of wind speed and energy production at various wind regions are predicted by the code. Efficiencies of components such as gearboxes, electric generators, rectifiers, electronic inverters, and batteries can be included in the evaluation process to reflect the complete system performance. Parametric studies can be carried out for blade design characteristics such as airfoil series, taper rate, twist degrees, and pitch setting, and for geometry such as rotor radius, hub radius, number of blades, coning angle, rotor rpm, etc. Design tradeoffs can also be performed to optimize system configurations for constant-rpm, constant-tip-speed-ratio, and rpm-specific rotors. SWECS energy supply as compared to the load demand for each hour of the day and each season of the year can be assessed by the code if the diurnal wind and load distributions are known. Also available during each run of the code is blade aerodynamic loading information.
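The rotor power curve such a code predicts follows the standard wind-power relation. A minimal sketch, with hypothetical default air density and drivetrain efficiencies (the actual SEACS models are far more detailed, accounting for airfoil series, taper, twist, and pitch):

```python
import math

def rotor_power(v, radius, cp, rho=1.225, efficiencies=(0.95, 0.93)):
    """Electrical power output of a horizontal-axis SWECS:
    P = 0.5 * rho * A * Cp * v^3, derated by component efficiencies
    (e.g. gearbox, generator).  Defaults are illustrative assumptions.

    v in m/s, radius in m, rho in kg/m^3; returns watts."""
    area = math.pi * radius ** 2          # swept rotor area
    power = 0.5 * rho * area * cp * v ** 3
    for eta in efficiencies:              # chain the drivetrain losses
        power *= eta
    return power
```

Summing `rotor_power` over an hourly wind distribution and comparing against the hourly load gives the supply-versus-demand assessment the abstract describes.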
A fully parallel, high precision, N-body code running on hybrid computing platforms
Capuzzo-Dolcetta, R; Punzo, D
2012-01-01
We present a new implementation of the numerical integration of the classical gravitational N-body problem based on a high-order Hermite integration scheme with block time steps and a direct evaluation of the particle-particle forces. The main innovation of this code (called HiGPUs) is its full parallelization, exploiting both OpenMP and MPI for the multicore Central Processing Units as well as either Compute Unified Device Architecture (CUDA) or OpenCL for the hosted Graphics Processing Units. We tested both the performance and the accuracy of the code using up to 256 GPUs in the supercomputer IBM iDataPlex DX360M3 Linux Infiniband Cluster provided by the Italian supercomputing consortium CINECA, for values of N up to 8 million. We were able to follow the evolution of a system of 8 million bodies for a few crossing times, a task previously unreached by direct summation codes. The code is freely available to the scientific community.
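The direct particle-particle force evaluation at the heart of such codes is simple to state. A minimal O(N^2) sketch in plain Python (HiGPUs runs this kernel on GPUs in CUDA/OpenCL and feeds it to a Hermite predictor-corrector; the softening parameter `eps` is an assumption added here to avoid singularities):

```python
def accelerations(pos, mass, g=1.0, eps=0.0):
    """Direct-summation O(N^2) gravitational accelerations:
    a_i = sum_{j != i} G m_j (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^{3/2}."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += g * mass[j] * dx[k] * inv_r3
    return acc
```

Because every pair is evaluated, the cost grows as N^2 per step, which is exactly why full parallelization across GPUs is needed to reach N of order millions.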
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
The Proteus Navier-Stokes code. [two and three dimensional computational fluid dynamics
Towne, Charles E.; Schwab, John R.
1992-01-01
An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. Proteus solves the Reynolds-averaged, unsteady, compressible Navier-Stokes equations in strong conservation-law form. Turbulence is modeled using a Baldwin-Lomax based algebraic eddy viscosity model. In addition, options are available to solve the thin-layer or Euler equations, and to eliminate the energy equation by assuming constant stagnation enthalpy. An extensive series of validation cases has been run, primarily using the two-dimensional planar/axisymmetric version of the code. Several flows were computed that have exact solutions, such as: fully developed channel and pipe flow; Couette flow with and without pressure gradients; unsteady Couette flow formation; flow near a suddenly accelerated flat plate; flow between concentric rotating cylinders; and flow near a rotating disk. The two-dimensional version of the Proteus code has been released, and the three-dimensional code is scheduled for release in late 1991.
Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)
2014-05-15
In this paper, a characteristic of the parallel algorithm is presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, which shows the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of matrix shape is evaluated. The effect of the preconditioner for the matrix solver is also investigated. Finally, the hybrid (OpenMP+MPI) parallel algorithm is introduced and discussed in detail for the pressure solver. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method. The MPI library was adopted to communicate the information at the neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which turns out to be asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effect of the matrix size and preconditioning was also investigated. The fine-mesh calculation shows better scalability than the coarse mesh because the coarse mesh does not provide enough cells to decompose the computational domain efficiently. The fine mesh can show good scalability when the geometry is divided with consideration of the ratio between computation and communication time. For a given mesh, single-phase flow
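The CSR (compressed sparse row) format mentioned above keeps, per row, only the nonzero values and their column indices. A minimal sketch of the matrix-vector product an iterative pressure solver performs repeatedly (illustrative plain Python, not CUPID's implementation):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A x in CSR storage:
    row i owns values[row_ptr[i]:row_ptr[i+1]] with matching column indices."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

# Tridiagonal 3x3 example: A = [[4,-1,0],[-1,4,-1],[0,-1,4]]
values = [4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0]
col_idx = [0, 1, 0, 1, 2, 1, 2]
row_ptr = [0, 2, 5, 7]
```

In a domain-decomposed run, each MPI rank holds the CSR rows of its subdomain and exchanges only the halo entries of x with neighboring ranks before each product.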
Assessment of computer codes for VVER-440/213-type nuclear power plants
Energy Technology Data Exchange (ETDEWEB)
Szabados, L.; Ezsol, Gy.; Perneczky [Atomic Energy Research Institute, Budapest (Hungary)
1995-09-01
Nuclear power plants of the VVER-440/213 type designed by the former USSR have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of a PWR system. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of the computer codes.
Revised uranium--plutonium cycle PWR and BWR models for the ORIGEN computer code
Energy Technology Data Exchange (ETDEWEB)
Croff, A. G.; Bjerke, M. A.; Morrison, G. W.; Petrie, L. M.
1978-09-01
Reactor physics calculations and literature searches have been conducted, leading to the creation of revised enriched-uranium and enriched-uranium/mixed-oxide-fueled PWR and BWR reactor models for the ORIGEN computer code. These ORIGEN reactor models are based on cross sections that have been taken directly from the reactor physics codes and eliminate the need to make adjustments in uncorrected cross sections in order to obtain correct depletion results. Revised values of the ORIGEN flux parameters THERM, RES, and FAST were calculated along with new parameters related to the activation of fuel-assembly structural materials not located in the active fuel zone. Recommended fuel and structural material masses and compositions are presented. A summary of the new ORIGEN reactor models is given.
A general panel sizing computer code and its application to composite structural panels
Anderson, M. S.; Stroud, W. J.
1978-01-01
A computer code for obtaining the dimensions of optimum (least mass) stiffened composite structural panels is described. The procedure, which is based on nonlinear mathematical programming and a rigorous buckling analysis, is applicable to general cross sections under general loading conditions causing buckling. A simplified method of accounting for bow-type imperfections is also included. Design studies in the form of structural efficiency charts for axial compression loading are made with the code for blade and hat stiffened panels. The effects on panel mass of imperfections, material strength limitations, and panel stiffness requirements are also examined. Comparisons with previously published experimental data show that accounting for imperfections improves correlation between theory and experiment.
Development of system of computer codes for severe accident analysis and its applications
Energy Technology Data Exchange (ETDEWEB)
Jang, H. S.; Jeon, M. H.; Cho, N. J. and others [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1992-01-15
The objective of this study is to develop a system of computer codes for postulated severe accident analyses in nuclear power plants. This system of codes is necessary to conduct Individual Plant Examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, extract the plant-specific vulnerabilities for severe accidents, and at the same time identify ideas for enhancing overall accident resistance. Severe accidents can be mitigated by proper accident management strategies. Some operator actions intended for mitigation can lead to more disastrous results, and thus the uncertain severe accident phenomena must be well recognized. Further research is needed on the development of severe accident management strategies utilizing existing plant resources as well as new design concepts.
Energy Technology Data Exchange (ETDEWEB)
Yang, Yanhua; Nilsuwankosit, Sunchai; Moriyama, Kiyofumi; Maruyama, Yu; Nakamura, Hideo; Hashimoto, Kazuichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2000-12-01
A steam explosion is a phenomenon in which a high-temperature liquid transfers its internal energy very rapidly to another, low-temperature volatile liquid, causing a very strong pressure build-up due to rapid vaporization of the latter. In the field of light water reactor safety research, steam explosions caused by contact between molten core material and coolant have been recognized as a potential threat that could cause failure of the pressure vessel or the containment vessel during a severe accident. A numerical simulation code, JASMINE, was developed at the Japan Atomic Energy Research Institute (JAERI) to evaluate the impact of steam explosions on the integrity of reactor boundaries. The JASMINE code consists of two parts, JASMINE-pre and JASMINE-pro, which handle the premixing and propagation phases of steam explosions, respectively. The JASMINE-pro code simulates the thermo-hydrodynamics of the propagation phase of a steam explosion on the basis of a multi-fluid model for multiphase flow. This report, the 'User's Manual', gives the usage of the JASMINE-pro code as well as information on the code structure that should help users understand how the code works. (author)
Development of a computer code for dynamic analysis of the primary circuit of advanced reactors
Energy Technology Data Exchange (ETDEWEB)
Rocha, Jussie Soares da; Lira, Carlos A.B.O.; Magalhaes, Mardson A. de Sa, E-mail: cabol@ufpe.b [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear
2011-07-01
Currently, advanced reactors are being developed, seeking enhanced safety, better performance and low environmental impact. Reactor designs must follow several steps and pass numerous tests before a conceptual project can be certified. In this sense, computational tools become indispensable in the preparation of such projects. Thus, this study aimed at the development of a computational tool for thermal-hydraulic analysis by coupling two computer codes to evaluate the influence of transients caused by pressure variations and flow surges in the region of the IRIS reactor primary circuit between the core and the pressurizer. For the simulation, an 'insurge' situation was used, characterized by the entry of water into the pressurizer due to the expansion of the coolant in the primary circuit. This expansion was represented by a step-shaped pressure disturbance, through the 'step' block of SIMULINK, thus enabling the transient startup. The results showed that the dynamic tool obtained by coupling the codes generated very satisfactory responses within the model limitations, preserving the most important phenomena in the process. (author)
Luciano, Rezzolla
2013-01-01
Relativistic hydrodynamics is a very successful theoretical framework to describe the dynamics of matter from scales as small as those of colliding elementary particles, up to the largest scales in the universe. This book provides an up-to-date, lively, and approachable introduction to the mathematical formalism, numerical techniques, and applications of relativistic hydrodynamics. The topic is typically covered either by very formal or by very phenomenological books, but is instead presented here in a form that will be appreciated both by students and researchers in the field. The topics covered in the book are the results of work carried out over the last 40 years, which can be found in rather technical research articles with dissimilar notations and styles. The book is not just a collection of scattered information, but a well-organized description of relativistic hydrodynamics, from the basic principles of statistical kinetic theory, down to the technical aspects of numerical methods devised for the solut...
Energy Technology Data Exchange (ETDEWEB)
Müller, C.; Hughes, E. D.; Niederauer, G. F.; Wilkening, H.; Travis, J. R.; Spore, J. W.; Royl, P.; Baumann, W.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume
Institute of Scientific and Technical Information of China (English)
任健; 魏军侠; 曹小林
2012-01-01
With the support of Federation Computing in JASMIN, two serial codes, the radiation hydrodynamics code RH2D and the particle transport code Sn2D, are coupled as federal members into an integrated program, RHSn2D, which efficiently uses thousands of processors to simulate a coupled multiphysics system. The federal members of RHSn2D have their own mesh patches and parallel algorithms, with the parallel communication between them encapsulated by JASMIN. For a typical model discretized with 90,720 meshes (100 patches in RH2D, 2,835 patches in Sn2D, 48 directions and 16 energy groups), the integrated program RHSn2D achieves a parallel efficiency of 36% on 1,024 processors.
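The quoted parallel efficiency follows from the standard definitions of speedup and efficiency. A generic sketch (the timing values below are hypothetical, chosen only to reproduce a 36% figure):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T_1 / T_P."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Efficiency E = S / P, i.e. speedup per processor."""
    return speedup(t_serial, t_parallel) / n_procs

# A 36% efficiency on 1,024 processors corresponds to a speedup of ~369x.
print(parallel_efficiency(1024.0, 1024.0 / 368.64, 1024))  # 0.36
```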
An SPH code for galaxy formation problems; Presentation of the code
Hultman, John; Kaellander, Daniel
1997-01-01
We present and test a code for two-fluid simulations of galaxy formation, one of the fluids being collisionless. The hydrodynamical evolution is solved through the SPH method, while gravitational forces are calculated using a tree method. The code is Lagrangian and fully adaptive both in space and time. A significant fraction of the gas in simulations of hierarchical galaxy formation ends up in tight clumps where it is, in terms of computational effort, very expensive to integrate the SPH equations...
DEFF Research Database (Denmark)
Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan
2015-01-01
In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...
DEFF Research Database (Denmark)
Sessarego, Matias; Ramos García, Néstor; Sørensen, Jens Nørkær
2017-01-01
Aerodynamic and structural dynamic performance analysis of modern wind turbines are routinely estimated in the wind energy field using computational tools known as aeroelastic codes. Most aeroelastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics and a modal...
On the application of computational fluid dynamics codes for liquefied natural gas dispersion.
Luketa-Hanlin, Anay; Koopman, Ronald P; Ermak, Donald L
2007-02-20
Computational fluid dynamics (CFD) codes are increasingly being used in the liquefied natural gas (LNG) industry to predict natural gas dispersion distances. This paper addresses several issues regarding the use of CFD for LNG dispersion, such as specification of the domain, grid, and boundary and initial conditions. A description of the k-epsilon model is presented, along with the modifications required for atmospheric flows. Validation issues pertaining to the experimental data from the Burro, Coyote, and Falcon series of LNG dispersion experiments are also discussed. A description of the atmosphere is provided, as well as a discussion of the inclusion of the Coriolis force to model very large LNG spills.
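As a rough illustration of the k-epsilon closure mentioned above: the turbulent (eddy) viscosity is commonly computed as nu_t = C_mu * k^2 / eps with the standard model constant C_mu = 0.09. A minimal sketch (the turbulence values are hypothetical, not from the LNG experiments):

```python
C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity(k, eps):
    """Kinematic eddy viscosity nu_t = C_mu * k**2 / eps (standard k-epsilon)."""
    return C_MU * k * k / eps

# Hypothetical turbulence state: k = 0.5 m^2/s^2, eps = 0.1 m^2/s^3
print(eddy_viscosity(0.5, 0.1))  # ~0.225 m^2/s
```

Atmospheric variants of the model adjust the constants and add buoyancy production terms, which is the kind of modification the paper discusses.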
Fletcher, C. D.
The capability to perform thermal-hydraulic analyses of a space reactor using the ATHENA computer code is demonstrated. The fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of the preliminary General Electric SP-100 design were modeled with ATHENA. Two demonstration transient calculations were performed simulating accident conditions. Calculated results are available for display using the Nuclear Plant Analyzer color graphics analysis tool in addition to traditional plots. ATHENA-calculated results appear reasonable, both for steady-state full-power conditions and for the two transients. This analysis represents the first known transient thermal-hydraulic simulation using an integral space reactor system model incorporating heat pipes.
Capabilities of the ATHENA computer code for modeling the SP-100 space reactor concept
Fletcher, C. D.
1985-09-01
The capability to perform thermal-hydraulic analyses of an SP-100 space reactor was demonstrated using the ATHENA computer code. The preliminary General Electric SP-100 design was modeled using ATHENA. The model simulates the fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of this design. Two ATHENA demonstration calculations were performed simulating accident scenarios. A mask for the SP-100 model and an interface with the Nuclear Plant Analyzer (NPA) were developed, allowing a graphic display of the calculated results on the NPA.

Discrete logarithm computations over finite fields using Reed-Solomon codes
Augot, Daniel; Morain, François
2012-01-01
Cheng and Wan have related the decoding of Reed-Solomon codes to the computation of discrete logarithms over finite fields, with the aim of proving the hardness of their decoding. In this work, we experiment with solving the discrete logarithm over GF(q^h) using Reed-Solomon decoding. For fixed h and q going to infinity, we introduce an algorithm (RSDL) needing O~(h! q^2) operations over GF(q), operating on a q x q matrix with (h+2) q non-zero coefficients. We give faster variants including a...
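The RSDL algorithm of the paper is too involved for a short sketch, but the discrete logarithm problem it targets can be illustrated with the classical baby-step giant-step method over a prime field. This is a generic O(sqrt(p)) algorithm, unrelated to Reed-Solomon decoding; the field and elements below are toy values:

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x = h (mod p) by baby-step giant-step; O(sqrt(p)) time and space."""
    m = isqrt(p) + 1
    table = {pow(g, j, p): j for j in range(m)}  # baby steps: g^j for j < m
    factor = pow(g, -m, p)                       # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                           # giant steps: h * g^(-m*i)
        if gamma in table:
            return i * m + table[gamma]          # x = i*m + j
        gamma = (gamma * factor) % p
    return None                                  # no solution in the subgroup

print(bsgs(2, 1024, 10007))  # 10, since 2^10 = 1024
```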
Resin Matrix/Fiber Reinforced Composite Material, Ⅱ: Method of Solution and Computer Code
Institute of Scientific and Technical Information of China (English)
Li Chensha(李辰砂); Jiao Caishan; Liu Ying; Wang Zhengping; Wang Hongjie; Cao Maosheng
2003-01-01
Based on a mathematical model that describes the curing process of composites constructed from continuous fiber-reinforced, thermosetting resin matrix prepreg materials, and the consolidation of the composites, a solution method for the model is devised and a computer code is developed. For flat-plate composites cured by a specified cure cycle, the code provides the variation of the temperature distribution, the cure reaction process in the resin, the resin flow and fiber stress inside the composite, the void variation and the residual stress distribution.
Fuel burnup analysis for Thai research reactor by using MCNPX computer code
Sangkaew, S.; Angwongtrakool, T.; Srimok, B.
2017-06-01
This paper presents the fuel burnup analysis of the Thai research reactor (TRR-1/M1), a TRIGA Mark-III operated by the Thailand Institute of Nuclear Technology (TINT) in Bangkok, Thailand. The modelling software used in this analysis is MCNPX (MCNP eXtended) version 2.6.0, a Fortran90 Monte Carlo radiation transport computer code. The analysis results cover the core excess reactivity, neutron fluxes at the irradiation positions and neutron detector tubes, power distribution, fuel burnup, and fission products, based on the fuel cycle of the first reactor core arrangement.
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
Apparatus, Method, and Computer Program for a Resolution-Enhanced Pseudo-Noise Code Technique
Li, Steven X. (Inventor)
2015-01-01
An apparatus, method, and computer program for a resolution-enhanced pseudo-noise coding technique for 3D imaging is provided. In one embodiment, a pattern generator may generate a plurality of unique patterns for a return-to-zero signal. A plurality of laser diodes may be configured such that each laser diode transmits the return-to-zero signal to an object. Each of the return-to-zero signals includes one unique pattern from the plurality of unique patterns to distinguish each of the transmitted return-to-zero signals from one another.
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The fundamental algorithm of light beam propagation in high-power laser systems is investigated and the corresponding computational codes are given. It is shown that the number of modulation rings due to diffraction is related to the size of the pinhole in the spatial filter (expressed in terms of times of diffraction limitation, i.e. TDL) and the Fresnel number of the laser system; for a complex laser system with multiple spatial filters and free space, the system can be investigated by the reciprocal rule of operators.
Modeling of field lysimeter release data using the computer code dust
Energy Technology Data Exchange (ETDEWEB)
Sullivan, T.M.; Fitzgerald, I.T. (Brookhaven National Lab., Upton, NY (United States)); McConnell, J.W.; Rogers, R.D. (Idaho National Engineering Lab., Idaho Falls, ID (United States))
1993-01-01
In this study, an attempt was made to match, using the computer code DUST, the experimentally measured mass release data collected over a period of seven years by investigators from Idaho National Engineering Laboratory from the lysimeters at Oak Ridge National Laboratory and Argonne National Laboratory. The influence of the dispersion coefficient and the distribution coefficient on mass release was investigated. Both were found to significantly influence mass release over the seven-year period. It is recommended that these parameters be measured on a site-specific basis to enhance understanding of the system.
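A minimal sketch of why the distribution coefficient matters for mass release: in the standard linear-sorption model, K_d enters transport through the retardation factor R = 1 + rho_b * K_d / theta, which slows the solute relative to the pore water. This is the textbook relation, not DUST's actual numerics, and the soil properties below are hypothetical:

```python
def retardation_factor(bulk_density, k_d, porosity):
    """R = 1 + rho_b * K_d / theta for linear equilibrium sorption."""
    return 1.0 + bulk_density * k_d / porosity

def retarded_velocity(pore_velocity, r):
    """Effective solute velocity: pore-water velocity divided by R."""
    return pore_velocity / r

# Hypothetical soil: rho_b = 1.5 g/cm^3, K_d = 2 mL/g, theta = 0.3
r = retardation_factor(1.5, 2.0, 0.3)
print(r)                          # 11.0
print(retarded_velocity(0.1, r))  # solute moves ~11x slower than the water
```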
Full sphere hydrodynamic and dynamo benchmarks
Marti, P.
2014-01-26
Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating, and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation, with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element, and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.
Energy Technology Data Exchange (ETDEWEB)
Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included
Digital Poetry: A Narrow Relation between Poetics and the Codes of the Computational Logic
Laurentiz, Silvia
The project "Percorrendo Escrituras" (Walking Through Writings) has been developed at the ECA-USP Fine Arts Department. In summary, it studies different structures of digital information that share the same universe and generate a new aesthetic condition. The aim is to explore the expressive possibilities of the computer through algorithmic functions and its other specific properties. It is a practical, theoretical and interdisciplinary project in which the study of evolutionary programming languages, logic and mathematics leads to poetic experimentation. The focus of this research is digital poetry, which starts from the poetics of permutational combinations and culminates in dynamic, complex systems that are autonomous, multi-user and interactive, built through agent generation, derivation, filtering and emergent patterns. This lecture presents artworks that use mechanisms introduced by cybernetics and the notion of system in digital poetry, demonstrating the narrow relationship between poetics and the codes of computational logic.
Milne-Thomson, L M
2011-01-01
This classic exposition of the mathematical theory of fluid motion is applicable to both hydrodynamics and aerodynamics. Based on vector methods and notation with their natural consequence in two dimensions - the complex variable - it offers more than 600 exercises and nearly 400 diagrams. Prerequisites include a knowledge of elementary calculus. 1968 edition.
Bonneau, Dominique; Souchet, Dominique
2014-01-01
This series provides the elements necessary for the development and validation of numerical prediction models for hydrodynamic bearings. This book describes the rheological models and the equations of lubrication. It also presents the numerical approaches used to solve these equations by the finite difference, finite volume and finite element methods.
Lafrance, Pierre
1978-01-01
Explores, in a non-mathematical treatment, some of the hydrodynamic phenomena and forces that affect the operation of ships, especially at high speeds. Discusses the major components of ship resistance, such as the different types of drag and ways to reduce them, and how to apply those principles to the hovercraft. (GA)
An implementation of a tree code on a SIMD, parallel computer
Olson, Kevin M.; Dorband, John E.
1994-01-01
We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,636 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) type computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) type parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
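The monopole treatment of tree nodes described above can be sketched with a generic Barnes-Hut-style opening criterion and gravitational softening. This is not the authors' SIMD implementation; the opening angle, softening, and test values are illustrative:

```python
import numpy as np

def monopole_accel(pos, node_com, node_mass, node_size, theta=0.5, G=1.0, eps=1e-3):
    """If the node satisfies the opening criterion s/d < theta, return its
    monopole (point-mass) acceleration on the particle; else return None,
    meaning the caller should descend into the node's children."""
    dr = node_com - pos
    d = np.sqrt(dr @ dr + eps * eps)   # softened distance to node center of mass
    if node_size / d >= theta:
        return None                    # node too large/close: open it
    return G * node_mass * dr / d**3   # treat the whole node as one point mass

# A far-away node of mass 100 at distance 10: |a| ~ G*m/d^2 = 1
a = monopole_accel(np.zeros(3), np.array([10.0, 0.0, 0.0]), 100.0, 1.0)
print(a)
```

Accepting distant nodes as single monopoles is what reduces the force sum from O(N^2) to roughly O(N log N).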
Wavelet subband coding of computer simulation output using the A++ array class library
Energy Technology Data Exchange (ETDEWEB)
Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.; Zhang, H.D. [Los Alamos National Lab., NM (United States); Nuri, V. [Washington State Univ., Pullman, WA (United States). School of EECS
1995-07-01
The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed previously has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
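In essence, the scalar subband quantization described above amounts to a uniform quantizer applied to wavelet coefficients. A deliberately simplified sketch (not the WSQ standard's exact quantizer, which uses per-subband bin widths and a dead zone; the coefficients and step size below are made up):

```python
import numpy as np

def quantize(subband, step):
    """Uniform scalar quantizer: map each coefficient to an integer bin index."""
    return np.round(subband / step).astype(int)

def dequantize(indices, step):
    """Reconstruct coefficients at the bin centers."""
    return indices * step

coeffs = np.array([0.12, -0.5, 3.7, 0.04])
q = quantize(coeffs, 0.25)          # small integers, cheap to entropy-code
rec = dequantize(q, 0.25)
print(np.max(np.abs(coeffs - rec)))  # reconstruction error bounded by step/2
```

The integer indices are then entropy-coded; the quantization step per subband is the knob that trades compression ratio against reconstruction error.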
Computer simulation of Angra-2 PWR nuclear reactor core using MCNPX code
Energy Technology Data Exchange (ETDEWEB)
Medeiros, Marcos P.C. de; Rebello, Wilson F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia - Secao de Engenharia Nuclear, Rio de Janeiro, RJ (Brazil); Oliveira, Claudio L. [Universidade Gama Filho, Departamento de Matematica, Rio de Janeiro, RJ (Brazil); Vellozo, Sergio O., E-mail: vellozo@cbpf.br [Centro Tecnologico do Exercito. Divisao de Defesa Quimica, Biologica e Nuclear, Rio de Janeiro, RJ (Brazil); Silva, Ademir X. da, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos Gaduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2011-07-01
In this work the MCNPX (Monte Carlo N-Particle Transport Code) code was used to develop a computerized model of the core of the Angra 2 PWR (Pressurized Water Reactor) nuclear reactor. The model was created without any kind of homogenization, using the real geometric information and material composition of that reactor, obtained from the FSAR (Final Safety Analysis Report). The model is still being improved, and the version presented in this work is validated by comparing values calculated by MCNPX with results calculated by other means and presented in the FSAR. This paper shows the results already obtained for K{sub eff} and K{infinity}, general parameters of the core, considering the reactor operating under stationary conditions of initial testing and operation. Other stationary operation conditions have been simulated and, in all tested cases, there was close agreement between values calculated computationally through this model and data presented in the FSAR, which were obtained by other codes. This model is expected to become a valuable tool for many future applications. (author)
Development of computer code models for analysis of subassembly voiding in the LMFBR
Energy Technology Data Exchange (ETDEWEB)
Hinkle, W [ed.
1979-12-01
The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered.
Sodium combustion computer code ASSCOPS version 2.0 user's manual
Energy Technology Data Exchange (ETDEWEB)
Ishikawa, Hiroyasu; Futagami, Satoshi; Ohno, Shuji; Seino, Hiroshi; Miyake, Osamu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1997-12-01
ASSCOPS (Analysis of Simultaneous Sodium Combustion in Pool and Spray) has been developed for analyses of thermal consequences of sodium leak and fire accidents in LMFBRs. This report presents a description of the computational models, input, and output as the user's manual of ASSCOPS version 2.0. ASSCOPS is an integrated code based on the sodium pool fire code SOFIRE II developed by the Atomics International Division of Rockwell International, and the sodium spray fire code SPRAY developed by the Hanford Engineering Development Laboratory in the U.S. The experimental studies conducted at PNC have been reflected in the ASSCOPS improvement. The users of ASSCOPS need to specify the sodium leak conditions (leak flow rate and temperature, etc.), the cell geometries (volume and structure surface area and thickness, etc.), and the atmospheric initial conditions, such as gas temperature, pressure, and gas composition. ASSCOPS calculates the time histories of atmospheric pressure and temperature changes along with those of the structural temperatures. (author)
Hierarchical surface code for network quantum computing with modules of arbitrary size
Li, Ying; Benjamin, Simon C.
2016-10-01
The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ~10% even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.
Institute of Scientific and Technical Information of China (English)
M. Garbey; C. Picard
2008-01-01
The goal of this paper is to present a versatile framework for solution verification of PDEs. We first generalize the Richardson extrapolation technique to an optimized extrapolation solution procedure that constructs the best consistent solution from a set of two or three coarse grid solutions in the discrete norm of choice. This technique generalizes the least-square extrapolation method introduced by one of the authors and W. Shyy. Second, we establish the condition number of the problem in a reduced space that approximates the main features of the numerical solution thanks to a sensitivity analysis. Overall, our method produces an a posteriori error estimate in this reduced approximation space. The key feature of our method is that our construction requires neither internal knowledge of the software nor the source code that produces the solution to be verified. In principle, it can be applied as a postprocessing procedure to off-the-shelf commercial codes. We demonstrate the robustness of our method with two steady problems: an incompressible backward-facing step flow test case and a heat transfer problem for a battery. Our error estimate can ultimately be verified with a nearby manufactured solution. While our procedure is systematic and requires numerous computations of residuals, one can take advantage of distributed computing to obtain the error estimate quickly.
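The classical Richardson extrapolation that this paper generalizes combines solutions from two grids to cancel the leading-order error term. A minimal sketch (assuming a smooth quantity with error expansion u(h) ≈ u + C·hᵖ; the function names and the finite-difference demonstration are illustrative, not the paper's optimized procedure):

```python
def richardson(u_h, u_h2, p=2, r=2.0):
    # Combine a coarse-grid value u_h (spacing h) and a finer value u_h2
    # (spacing h/r) for a method of order p, cancelling the leading error:
    #   u(h) ~ u_exact + C*h**p  =>  u_exact ~ (r**p * u_h2 - u_h) / (r**p - 1)
    return (r**p * u_h2 - u_h) / (r**p - 1)

# Illustrative use: estimate f'(1) for f(x) = x**3 with second-order
# central differences, whose error for this f is exactly h**2.
def dfdx(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**3
coarse = dfdx(f, 1.0, 0.2)       # 3 + 0.04
fine = dfdx(f, 1.0, 0.1)         # 3 + 0.01
better = richardson(coarse, fine)  # leading h**2 term cancels
```

For this polynomial the extrapolated value recovers the exact derivative f'(1) = 3; for general problems the extrapolation removes only the leading error term, which is why the paper's optimized procedure works in a discrete norm over two or three grid solutions.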
Directory of Open Access Journals (Sweden)
JUN YEOB LEE
2014-10-01
Full Text Available During the development process of a thermal-hydraulic system code, a non-regression test (NRT must be performed repeatedly in order to prevent software regression. The NRT process, however, is time-consuming and labor-intensive. Thus, automation of this process is an ideal solution. In this study, we have developed a program to support an efficient NRT for the SPACE code and demonstrated its usability. This results in a high degree of efficiency for code development. The program was developed using the Visual Basic for Applications and designed so that it can be easily customized for the NRT of other computer codes.
Simulating Rayleigh-Taylor (RT) instability using PPM hydrodynamics @scale on Roadrunner (u)
Energy Technology Data Exchange (ETDEWEB)
Woodward, Paul R [Los Alamos National Laboratory]; Dimonte, Guy [Los Alamos National Laboratory]; Rockefeller, Gabriel M [Los Alamos National Laboratory]; Fryer, Christopher L [Los Alamos National Laboratory]; Dai, W [Los Alamos National Laboratory]; Kares, R. J. [Los Alamos National Laboratory]
2011-01-05
The effect of initial conditions on the self-similar growth of the RT instability is investigated using a hydrodynamics code based on the piecewise-parabolic-method (PPM). The PPM code was converted to the hybrid architecture of Roadrunner in order to perform the simulations at extremely high speed and spatial resolution. This paper describes the code conversion to the Cell processor, the scaling studies to 12 CU's on Roadrunner and results on the dependence of the RT growth rate on initial conditions. The relevance of the Roadrunner implementation of this PPM code to other existing and anticipated computer architectures is also discussed.
Computer code to predict the heat of explosion of high energy materials
Energy Technology Data Exchange (ETDEWEB)
Muthurajan, H. [Armament Research and Development Establishment, Pashan, Pune 411021 (India)], E-mail: muthurajan_h@rediffmail.com; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B. [High Energy Materials Research Laboratory, Sutarwadi, Pune 411 021 (India)
2009-01-30
The computational approach to the thermochemical changes involved in the explosion of high energy materials (HEMs) vis-a-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion. Such computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion ({delta}H{sub e}) without any experimental data for different HEMs, with results comparable to experimental values reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R{sup 2} = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials.
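The reported agreement (R² = 0.9721 for y = 0.9262x + 101.45) is an ordinary least-squares fit of computed against experimental heats of explosion. The fit itself is standard and can be sketched as follows; the data points below are placeholders for illustration, not the paper's measurements.

```python
import numpy as np

def linear_fit(x, y):
    # Ordinary least-squares fit y ~ a*x + b, plus the coefficient
    # of determination R^2 = 1 - SS_res / SS_tot.
    a, b = np.polyfit(x, y, 1)
    y_pred = a * x + b
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Placeholder data (NOT the paper's): experimental vs. computed values.
x = np.array([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
y = np.array([1050.0, 1980.0, 3100.0, 3900.0, 5050.0])
a, b, r2 = linear_fit(x, y)
```

An R² close to 1 indicates the computed values track the experimental ones closely, which is the basis for the paper's claim of usefulness in rapid hazard assessment.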
Implementation of discrete transfer radiation method into swift computational fluid dynamics code
Directory of Open Access Journals (Sweden)
Baburić Mario
2004-01-01
Full Text Available Computational Fluid Dynamics (CFD) has developed into a powerful tool widely used in science, technology, and industrial design applications whenever fluid flow, heat transfer, combustion, or other complicated physical processes are involved. During decades of CFD code development, scientists wrote their own codes, which had to include not only the models of the processes of interest but also a whole spectrum of necessary CFD procedures, numerical techniques, pre-processing, and post-processing. That absorbed much of the scientists' effort in work that had been duplicated many times over and was not actually producing added value. The arrival of commercial CFD codes brought relief to many engineers, who could now use the user-function approach for modelling purposes, entrusting the application to do the rest of the work. This paper shows the implementation of the Discrete Transfer Radiation Method into AVL's commercial CFD code SWIFT with the help of user-defined functions. A few standard verification test cases were performed first in order to check the implementation of the radiation method itself, where comparisons with available analytic solutions could be made. Afterwards, validation was done by simulating the combustion in the experimental furnace at IJmuiden (Netherlands), for which experimental measurements were available. The importance of radiation prediction in such real-size furnaces is proved again to be substantial, with radiation itself taking the major fraction of the overall heat transfer. The oil-combustion model used in the simulations is a semi-empirical one developed at the Power Engineering Department, suitable for a wide range of typical oil flames.
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.
Gradient expansion for anisotropic hydrodynamics
Florkowski, Wojciech; Spaliński, Michał
2016-01-01
We compute the gradient expansion for anisotropic hydrodynamics. The results are compared with the corresponding expansion of the underlying kinetic-theory model with the collision term treated in the relaxation time approximation. We find that a recent formulation of anisotropic hydrodynamics based on an anisotropic matching principle yields the first three terms of the gradient expansion in agreement with those obtained for the kinetic theory. This gives further support for this particular hydrodynamic model as a good approximation of the kinetic-theory approach. We further find that the gradient expansion of anisotropic hydrodynamics is an asymptotic series, and the singularities of the analytic continuation of its Borel transform indicate the presence of non-hydrodynamic modes.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new, algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Energy Technology Data Exchange (ETDEWEB)
Castor, J I
2003-10-16
The discipline of radiation hydrodynamics is the branch of hydrodynamics in which the moving fluid absorbs and emits electromagnetic radiation, and in so doing modifies its dynamical behavior. That is, the net gain or loss of energy by parcels of the fluid material through absorption or emission of radiation is sufficient to change the pressure of the material, and therefore change its motion; alternatively, the net momentum exchange between radiation and matter may alter the motion of the matter directly. Ignoring the radiation contributions to energy and momentum will give a wrong prediction of the hydrodynamic motion when the correct description is radiation hydrodynamics. Of course, there are circumstances when a large quantity of radiation is present, yet can be ignored without causing the model to be in error. This happens when radiation from an exterior source streams through the problem, but the latter is so transparent that the energy and momentum coupling is negligible. Everything we say about radiation hydrodynamics applies equally well to neutrinos and photons (apart from the Einstein relations, specific to bosons), but in almost every area of astrophysics neutrino hydrodynamics is ignored, simply because the systems are exceedingly transparent to neutrinos, even though the energy flux in neutrinos may be substantial. Another place where we can do "radiation hydrodynamics" without using any sophisticated theory is deep within stars or other bodies, where the material is so opaque to the radiation that the mean free path of photons is entirely negligible compared with the size of the system, the distance over which any fluid quantity varies, and so on. In this case we can suppose that the radiation is in equilibrium with the matter locally, and its energy, pressure and momentum can be lumped in with those of the rest of the fluid. That is, it is no more necessary to distinguish photons from atoms, nuclei and electrons, than it is
Energy Technology Data Exchange (ETDEWEB)
Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N. [Moscow Power Engineering Institute (Technical University), Moscow (Russian Federation)
2007-07-01
A new special module for neutron-physics and thermal-hydraulic computer codes, for the calculation of coolant acoustical characteristics, has been worked out. The Russian computer code Rainbow has been selected for joint use with the developed module. This code system provides the possibility of EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) calculations in any coolant acoustical elements of primary circuits of NPPs. EFOCP values have been calculated for transient and stationary operation. The calculated results for nominal operation were compared with measured EFOCP values. For example, this comparison was made for the 'pressurizer + surge line' system of a WWER-1000 reactor. The calculated result of 0.58 Hz practically coincides with the measured result (0.6 Hz). The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.
Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey
2017-01-01
A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…
Lauga, Eric
2015-01-01
Bacteria predate plants and animals by billions of years. Today, they are the world's smallest cells yet they represent the bulk of the world's biomass, and the main reservoir of nutrients for higher organisms. Most bacteria can move on their own, and the majority of motile bacteria are able to swim in viscous fluids using slender helical appendages called flagella. Low-Reynolds-number hydrodynamics is at the heart of the ability of flagella to generate propulsion at the micron scale. In fact, fluid dynamic forces impact many aspects of bacteriology, ranging from the ability of cells to reorient and search their surroundings to their interactions within mechanically and chemically-complex environments. Using hydrodynamics as an organizing framework, we review the biomechanics of bacterial motility and look ahead to future challenges.
Relativistic Hydrodynamics with Wavelets
DeBuhr, Jackson; Anderson, Matthew; Neilsen, David; Hirschmann, Eric W
2015-01-01
Methods to solve the relativistic hydrodynamic equations are a key computational kernel in a large number of astrophysics simulations and are crucial to understanding the electromagnetic signals that originate from the merger of astrophysical compact objects. Because of the many physical length scales present when simulating such mergers, these methods must be highly adaptive and capable of automatically resolving numerous localized features and instabilities that emerge throughout the computational domain across many temporal scales. While this has been historically accomplished with adaptive mesh refinement (AMR) based methods, alternatives based on wavelet bases and the wavelet transformation have recently achieved significant success in adaptive representation for advanced engineering applications. This work presents a new method for the integration of the relativistic hydrodynamic equations using iterated interpolating wavelets and introduces a highly adaptive implementation for multidimensional simulati...
Radiation-hydrodynamic simulations of quasar disk winds
Higginbottom, N.
2015-09-01
Disk winds are a compelling candidate to provide geometrical unification between Broad Absorption Line QSOs (BALQSOs) and Type 1 Quasars. However, the geometry of these winds, and even the driving mechanism, remain largely unknown. Progress has been made through RT simulations and theoretical analysis of simplified wind geometries, but there are several outstanding issues, including the problem of shielding the low ionization BAL gas from the intense X-ray radiation from the central corona, and also how to produce the strong emission lines which exemplify Type 1 Quasars. A complex, clumpy geometry may provide a solution, and a full hydrodynamic model in which such structure may well spontaneously develop is something we wish to investigate. We have already demonstrated that the previous generation of hydrodynamic models of BALQSOs suffer from the fact that radiation transfer (RT) was necessarily simplified to permit computation, thereby neglecting the effects of multiple scattering and reprocessing of photons within the wind (potentially very important processes). We have therefore embarked upon a project to marry together an RT code with a hydrodynamics code to permit full radiation hydrodynamics simulations to be carried out on QSO disk winds. Here we present details of the project and results to date.
DEFF Research Database (Denmark)
Hansen, Jesper Schmidt; Dyre, Jeppe C.; Daivis, Peter J.
2011-01-01
We show by nonequilibrium molecular dynamics simulations that the Navier-Stokes equation does not correctly describe water flow in a nanoscale geometry. It is argued that this failure reflects the fact that the coupling between the intrinsic rotational and translational degrees of freedom becomes important for nanoflows. The coupling is correctly accounted for by the extended Navier-Stokes equations that include the intrinsic angular momentum as an independent hydrodynamic degree of freedom. © 2011 American Physical Society.
Survey of Multi-Material Closure Models in 1D Lagrangian Hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Maeng, Jungyeoul Brad [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hyde, David Andrew Bulloch [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-07-28
Accurately treating the coupled sub-cell thermodynamics of computational cells containing multiple materials is an inevitable problem in hydrodynamics simulations, whether due to initial configurations or evolutions of the materials and computational mesh. When solving the hydrodynamics equations within a multi-material cell, we make the assumption of a single velocity field for the entire computational domain, which necessitates the addition of a closure model to attempt to resolve the behavior of the multi-material cells’ constituents. In conjunction with a 1D Lagrangian hydrodynamics code, we present a variety of both the popular as well as more recently proposed multi-material closure models and survey their performances across a spectrum of examples. We consider standard verification tests as well as practical examples using combinations of fluid, solid, and composite constituents within multi-material mixtures. Our survey provides insights into the advantages and disadvantages of various multi-material closure models in different problem configurations.
Ball, W H; Cameron, R H; Gizon, L
2016-01-01
... [C]urrent stellar models predict oscillation frequencies that are systematically affected by simplified modelling of the near-surface layers. We use three-dimensional radiation hydrodynamics simulations to better model the near-surface equilibrium structure of dwarfs with spectral types F3, G2, K0 and K5, and examine the differences between oscillation mode frequencies. ... We precisely match stellar models to the simulations' gravities and effective temperatures at the surface, and to the temporally- and horizontally-averaged densities and pressures at their deepest points. We then replace the near-surface structure with that of the averaged simulation and compute the change in the oscillation mode frequencies. We also fit the differences using several parametric models currently available in the literature. The surface effect in the stars of solar-type and later is qualitatively similar and changes steadily with decreasing effective temperature. In particular, the point of greatest frequency difference ...
Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.
1987-01-01
Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.
Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi
2010-12-01
Investigating how a bioreactor functions is a necessary precursor for successful reactor design and operation. Traditional methods used to investigate the flow field cannot meet this challenge accurately and economically. A hydrodynamics model can address this problem, but on its own it is often insufficient for understanding a bioreactor in depth. In this paper, a coupled hydrodynamics-reaction kinetics model was formulated from computational fluid dynamics (CFD) code to simulate a gas-liquid-solid three-phase biotreatment system for the first time. The hydrodynamics model is used to predict the flow field, and the reaction kinetics model then portrays the reaction conversion process. The coupled model is verified and used to simulate the behavior of an expanded granular sludge bed (EGSB) reactor for biohydrogen production. The flow patterns were visualized and analyzed. The coupled model also demonstrates a qualitative relationship between hydrodynamics and biohydrogen production. The advantages and limitations of applying this coupled model are discussed.
Energy Technology Data Exchange (ETDEWEB)
Berna, G. A; Bohn, M. P.; Rausch, W. N.; Williford, R. E.; Lanning, D. D.
1981-01-01
FRAPCON-2 is a FORTRAN IV computer code that calculates the steady state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, deformation, and failure histories of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (a) heat conduction through the fuel and cladding, (b) cladding elastic and plastic deformation, (c) fuel-cladding mechanical interaction, (d) fission gas release, (e) fuel rod internal gas pressure, (f) heat transfer between fuel and cladding, (g) cladding oxidation, and (h) heat transfer from cladding to coolant. The code contains the necessary material properties, water properties, and heat transfer correlations. FRAPCON-2 is programmed for use on the CDC Cyber 175 and 176 computers. The FRAPCON-2 code is designed to generate initial conditions for transient fuel rod analysis by either the FRAP-T6 computer code or the thermal-hydraulic code RELAP4/MOD7 Version 2.
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.
Mueller, Bernhard; Marek, Andreas
2012-01-01
We present a detailed theoretical analysis of the gravitational-wave (GW) signal of the post-bounce evolution of core-collapse supernovae (SNe), employing for the first time relativistic, two-dimensional (2D) explosion models with multi-group, three-flavor neutrino transport based on the ray-by-ray-plus approximation. The waveforms reflect the accelerated mass motions associated with the characteristic evolutionary stages that were also identified in previous works: A quasi-periodic modulation by prompt postshock convection is followed by a phase of relative quiescence before growing amplitudes signal violent hydrodynamical activity due to convection and the standing accretion shock instability during the accretion period of the stalled shock. Finally, a high-frequency, low-amplitude variation from proto-neutron star (PNS) convection below the neutrinosphere appears superimposed on the low-frequency trend associated with the aspherical expansion of the SN shock after the onset of the explosion. Relativistic e...
Mathematical models for the EPIC code
Energy Technology Data Exchange (ETDEWEB)
Buchanan, H.L.
1981-06-03
EPIC is a fluid/envelope type computer code designed to study the energetics and dynamics of a high energy, high current electron beam passing through a gas. The code is essentially two dimensional (x, r, t) and assumes an axisymmetric beam whose r.m.s. radius is governed by an envelope model. Electromagnetic fields, background gas chemistry, and gas hydrodynamics (density channel evolution) are all calculated self-consistently as functions of r, x, and t. The code is a collection of five major subroutines, each of which is described in some detail in this report.
Interface design of VSOP'94 computer code for safety analysis
Energy Technology Data Exchange (ETDEWEB)
Natsir, Khairina, E-mail: yenny@batan.go.id; Andiwijayakusuma, D.; Wahanani, Nursinta Adi [Center for Development of Nuclear Informatics - National Nuclear Energy Agency, PUSPIPTEK, Serpong, Tangerang, Banten (Indonesia); Yazid, Putranto Ilham [Center for Nuclear Technology, Material and Radiometry- National Nuclear Energy Agency, Jl. Tamansari No.71, Bandung 40132 (Indonesia)
2014-09-30
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor, devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integral, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, and presents several problems in use: it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program to facilitate the preparation of data, run the VSOP code, and read the results in a more user-friendly way, usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to be useful in simplifying and speeding up the process and the analysis of safety aspects.
A computational model of cellular mechanisms of temporal coding in the medial geniculate body (MGB).
Directory of Open Access Journals (Sweden)
Cal F Rabang
Acoustic stimuli are often represented in the early auditory pathway as patterns of neural activity synchronized to time-varying features. This phase-locking predominates until the level of the medial geniculate body (MGB), where previous studies have identified two main, largely segregated response types: stimulus-synchronized responses, which faithfully preserve the temporal coding from their afferent inputs, and non-synchronized responses, which are not phase-locked to the inputs and represent changes in temporal modulation by a rate code. The cellular mechanisms underlying this transformation from phase-locked to rate code are not well understood. We use a computational model of an MGB thalamocortical neuron to test the hypothesis that these response classes arise from inferior colliculus (IC) excitatory afferents with divergent properties similar to those observed in brain slice studies. Large-conductance inputs exhibiting synaptic depression preserved input synchrony at interclick intervals as short as 12.5 ms, while maintaining low firing rates and low-pass filtering responses. By contrast, small-conductance inputs with mixed plasticity (depression of the AMPA-receptor component and facilitation of the NMDA-receptor component) desynchronized afferent inputs, generated a click-rate-dependent increase in firing rate, and high-pass filtered the inputs. Synaptic inputs with facilitation often permitted band-pass synchrony along with band-pass rate tuning. These responses could be tuned by changes in membrane potential, strength of the NMDA component, and characteristics of synaptic plasticity. These results demonstrate how the same synchronized input spike trains from the inferior colliculus can be transformed into different representations of temporal modulation by divergent synaptic properties.
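The depressing-synapse behavior described above is commonly captured by resource-depletion dynamics of the Tsodyks-Markram type. The following sketch (with illustrative parameters and function names, not the authors' published MGB model) shows how a fast click train depresses amplitudes more deeply than a slow one:

```python
import math

def depressed_amplitudes(spike_times, U=0.5, tau_rec=0.2, A=1.0):
    """Amplitudes of successive EPSPs at a depressing synapse:
    each spike releases a fraction U of the available resource R,
    which recovers toward 1 with time constant tau_rec (seconds)."""
    R, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            R = 1.0 - (1.0 - R) * math.exp(-(t - last_t) / tau_rec)
        amps.append(A * U * R)   # synaptic strength for this spike
        R *= (1.0 - U)           # resource depleted by release
        last_t = t
    return amps

# Fast (12.5 ms interclick) trains depress more than slow (80 ms) trains.
fast = depressed_amplitudes([i * 0.0125 for i in range(5)])
slow = depressed_amplitudes([i * 0.080 for i in range(5)])
```

In this toy model the first amplitude is identical for both trains; divergence appears only through incomplete recovery between spikes, which is the mechanism the abstract invokes.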
SPHRAY: A Smoothed Particle Hydrodynamics Ray Tracer for Radiative Transfer
Altay, Gabriel; Pelupessy, Inti
2008-01-01
We introduce SPHRAY, a Smoothed Particle Hydrodynamics (SPH) ray tracer designed to solve the 3D, time dependent, radiative transfer (RT) equations for arbitrary density fields. The SPH nature of SPHRAY makes the incorporation of separate hydrodynamics and gravity solvers very natural. SPHRAY relies on a Monte Carlo (MC) ray tracing scheme that does not interpolate the SPH particles onto a grid but instead integrates directly through the SPH kernels. Given initial conditions and a description of the sources of ionizing radiation, the code will calculate the non-equilibrium ionization state (HI, HII, HeI, HeII, HeIII, e) and temperature (internal energy/entropy) of each SPH particle. The sources of radiation can include point like objects, diffuse recombination radiation, and a background field from outside the computational volume. The MC ray tracing implementation allows for the quick introduction of new physics and is parallelization friendly. A quick Axis Aligned Bounding Box (AABB) test taken from compute...
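Integrating "directly through the SPH kernels" presupposes an analytic smoothing kernel. The abstract does not name SPHRAY's kernel, but the conventional choice in SPH is the M4 cubic spline with compact support 2h; a sketch for illustration:

```python
import math

def cubic_spline_w(r, h):
    """M4 cubic spline SPH smoothing kernel in 3D, compact support 2h,
    normalization sigma = 1/(pi h^3) so the kernel integrates to unity."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```

Because the support is compact, a ray only accumulates optical depth from the few particles whose kernels it actually crosses, which is what makes gridless integration practical.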
Reduced gravity boiling and condensing experiments simulated with the COBRA/TRAC computer code
Cuta, Judith M.; Krotiuk, William
1988-01-01
A series of reduced-gravity two-phase flow experiments has been conducted with a boiler/condenser apparatus in the NASA KC-135 aircraft in order to obtain basic thermal-hydraulic data applicable to analytical design tools. Several test points from the KC-135 tests were selected for simulation by means of the COBRA/TRAC two-fluid, three-field thermal-hydraulic computer code; the points were chosen for a 25-90 percent void-fraction range. The possible causes for the lack of agreement noted between simulations and experiments are explored, with attention to the physical characteristics of two-phase flow in one-G and near-zero-G conditions.
Discrete logarithm computations over finite fields using Reed-Solomon codes
Augot, Daniel
2012-01-01
Cheng and Wan have related the decoding of Reed-Solomon codes to the computation of discrete logarithms over finite fields, with the aim of proving the hardness of their decoding. In this work, we experiment with solving the discrete logarithm over GF(q^h) using Reed-Solomon decoding. For fixed h and q going to infinity, we introduce an algorithm (RSDL) needing O (h! q^2) operations over GF(q), operating on a q x q matrix with (h+2) q non-zero coefficients. We give faster variants including an incremental version and another one that uses auxiliary finite fields that need not be subfields of GF(q^h); this variant is very practical for moderate values of q and h. We include some numerical results of our first implementations.
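For orientation, the problem being solved can be stated in a few lines: given a generator g of the multiplicative group and h = g^x, recover x. A generic baby-step giant-step solver over a prime field (not the RSDL algorithm of the paper, which proceeds via Reed-Solomon decoding) looks like:

```python
import math

def bsgs(g, h, p):
    """Solve g^x = h (mod p) for x by baby-step giant-step, p prime,
    g a generator: O(sqrt(p)) time and memory."""
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}        # baby steps g^j
    factor = pow(pow(g, m, p), p - 2, p)              # g^(-m) mod p
    gamma = h
    for i in range(m):                                # giant steps h*g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None

# Recover x = 57 from h = 3^57 mod 101 (3 generates GF(101)^*).
x = bsgs(3, pow(3, 57, 101), 101)
```

The O(sqrt(q)) cost of this generic method is the baseline against which index-calculus-style approaches, including the RSDL variants above, are measured.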
ACUTRI a computer code for assessing doses to the general public due to acute tritium releases
Yokoyama, S; Noguchi, H; Ryufuku, S; Sasaki, T
2002-01-01
Tritium, which is used as a fuel of a D-T burning fusion reactor, is the most important radionuclide for the safety assessment of a nuclear fusion experimental reactor such as ITER. Thus, a computer code, ACUTRI, which calculates the radiological impact of tritium released accidentally to the atmosphere, has been developed, aiming to be of use in discussions of the licensing of a fusion experimental reactor and in environmental safety evaluation methods in Japan. ACUTRI calculates an individual tritium dose based on transfer models specific to tritium in the environment and on ICRP dose models. In this calculation it is also possible to perform a statistical analysis of meteorology in the same way as in a conventional dose assessment method according to the meteorological guide of the Nuclear Safety Commission of Japan. A Gaussian plume model is used for calculating the atmospheric dispersion of tritium gas (HT) and/or tritiated water (HTO). The environmental pathway model in ACUTRI considers the following internal exposures: i...
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Kyu; Jae, Moo Sung; Jo, Young Gyun; Park, Rae Jun; Kim, Jae Hwan; Ha, Jae Ju; Kang, Dae Il; Choi, Sun Young; Kim, Si Hwan [Korea Atomic Energy Res. Inst., Taejon (Korea, Republic of)
1994-07-01
We have surveyed new technologies and research results for the accident management of nuclear power plants. And, based on the concept of using the existing plant capabilities for accident management, both in-vessel and ex-vessel strategies were identified and analyzed. When assessing accident management strategies, their effectiveness, adverse effects, and their feasibility must be considered. We have developed a framework for assessing the strategies with these factors in mind. We have applied the developed framework to assessing the strategies, including the likelihood that the operator correctly diagnoses the situation and successfully implements the strategies. Finally, the cavity flooding strategy was assessed by applying it to the station blackout sequence, which has been identified as one of the major contributors to risk at the reference plant. The thermohydraulic analyses with sensitivity calculations have been performed using the MAAP 4 computer code. (Author).
Bousquet, Nicolas
2010-01-01
This article deals with the estimation of a probability p of an undesirable event. Its occurrence is formalized by the exceedance of a threshold reliability value by the unidimensional output of a time-consuming computer code G with multivariate probabilistic input X. When G is assumed monotonous with respect to X, the Monotonous Reliability Method was proposed by de Rocquigny (2009) in an engineering context to provide sequentially narrowing 100%-confidence bounds and a crude estimate of p, via deterministic or stochastic designs of experiments. The present article consists of a formalization and technical deepening of this idea, as a large basis for future theoretical and applied studies. Three kinds of results are especially emphasized. First, the bounds themselves remain too crude and conservative estimators of p for a dimension of X greater than 2. Second, a maximum-likelihood estimator of p can be easily built, presenting a high variance reduction with respect to a standard Monte Carlo case, but suffering ...
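The "standard Monte Carlo case" used as the variance baseline is simply frequency counting of threshold exceedances. A sketch with a toy monotonous G standing in for the time-consuming code (names and the example function are illustrative):

```python
import random

def mc_exceedance(G, sample_X, threshold, n=100_000, seed=0):
    """Crude Monte Carlo estimate of p = P(G(X) > threshold)."""
    rng = random.Random(seed)
    hits = sum(G(sample_X(rng)) > threshold for _ in range(n))
    return hits / n

# Toy monotonous code: G(x1, x2) = x1 + x2 with independent uniform inputs;
# the true exceedance probability at threshold 1.8 is 0.02 (corner triangle).
G = lambda x: x[0] + x[1]
sample = lambda rng: (rng.random(), rng.random())
p_hat = mc_exceedance(G, sample, threshold=1.8)
```

The estimator's relative error scales like sqrt((1-p)/(n p)), which is exactly why small p makes plain Monte Carlo expensive and motivates the monotonicity-exploiting bounds discussed above.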
Finite Element Simulation Code for Computing Thermal Radiation from a Plasma
Nguyen, C. N.; Rappaport, H. L.
2004-11-01
A finite element code, ``THERMRAD,'' for computing thermal radiation from a plasma is under development. Radiation from plasma test particles is found in cylindrical geometry. Although the plasma equilibrium is assumed axisymmetric, individual test particle excitation produces a non-axisymmetric electromagnetic response. Specially designed Whitney class basis functions are to be used to allow the solution to be solved on a two-dimensional grid. The basis functions enforce both a vanishing of the divergence of the electric field within grid elements where the complex index of refraction is assumed constant and continuity of tangential electric field across grid elements while allowing the normal component of the electric field to be discontinuous. An appropriate variational principle which incorporates the Sommerfeld radiation condition on the simulation boundary, as well as its discretization by the Rayleigh-Ritz technique is given. 1. ``Finite Element Method for Electromagnetics Problems,'' Volakis et al., Wiley, 1998.
Bade, W. L.; Yos, J. M.
1975-01-01
A computer program for calculating quasi-one-dimensional gas flow in axisymmetric and two-dimensional nozzles and rectangular channels is presented. Flow is assumed to start from a state of thermochemical equilibrium at a high temperature in an upstream reservoir. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. Electronic nonequilibrium effects can be included using a two-temperature model. An approximate laminar boundary layer calculation is given for the shear and heat flux on the nozzle wall. Boundary layer displacement effects on the inviscid flow are considered also. Chemical equilibrium and transport property calculations are provided by subroutines. The code contains precoded thermochemical, chemical kinetic, and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It provides calculations of the stagnation conditions on axisymmetric or two-dimensional models, and of the conditions on the flat surface of a blunt wedge. The primary purpose of the code is to describe the flow conditions and test conditions in electric arc heated wind tunnels.
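For a flavor of the quasi-one-dimensional relations underlying such a code, the isentropic area-Mach relation for a calorically perfect gas can be inverted numerically. This is a frozen-chemistry sketch only; the actual program also handles equilibrium and finite-rate chemistry:

```python
def area_ratio(M, gamma=1.4):
    """Isentropic quasi-1D area ratio A/A* as a function of Mach number."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def mach_from_area(ar, gamma=1.4, supersonic=True):
    """Invert the area-Mach relation (ar >= 1) by bisection on one branch."""
    lo, hi = (1.0, 50.0) if supersonic else (1e-9, 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        too_big = area_ratio(mid, gamma) > ar
        if supersonic:            # ratio grows with M beyond the throat
            lo, hi = (lo, mid) if too_big else (mid, hi)
        else:                     # ratio shrinks toward 1 as M -> 1
            lo, hi = (mid, hi) if too_big else (lo, mid)
    return 0.5 * (lo + hi)
```

Each nozzle station's area ratio thus yields the local Mach number on the chosen branch, from which static conditions follow from the stagnation state.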
ACUTRI: a computer code for assessing doses to the general public due to acute tritium releases
Energy Technology Data Exchange (ETDEWEB)
Yokoyama, Sumi; Noguchi, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ryufuku, Susumu; Sasaki, Toshihisa; Kurosawa, Naohiro [Visible Information Center, Inc., Tokai, Ibaraki (Japan)
2002-11-01
Tritium, which is used as a fuel of a D-T burning fusion reactor, is the most important radionuclide for the safety assessment of a nuclear fusion experimental reactor such as ITER. Thus, a computer code, ACUTRI, which calculates the radiological impact of tritium released accidentally to the atmosphere, has been developed, aiming to be of use in discussions of the licensing of a fusion experimental reactor and in environmental safety evaluation methods in Japan. ACUTRI calculates an individual tritium dose based on transfer models specific to tritium in the environment and on ICRP dose models. In this calculation it is also possible to perform a statistical analysis of meteorology in the same way as in a conventional dose assessment method according to the meteorological guide of the Nuclear Safety Commission of Japan. A Gaussian plume model is used for calculating the atmospheric dispersion of tritium gas (HT) and/or tritiated water (HTO). The environmental pathway model in ACUTRI considers the following internal exposures: inhalation from a primary plume (HT and/or HTO) released from the facilities and inhalation from a secondary plume (HTO) reemitted from the ground following deposition of HT and HTO. This report describes an outline of the ACUTRI code, a user guide and the results of test calculation. (author)
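The Gaussian plume concentration field itself is textbook material; a sketch with ground reflection via an image source (ACUTRI's dispersion-parameter schemes and tritium dose conversion are not reproduced, and the function name is illustrative):

```python
import math

def plume_concentration(Q, u, y, z, H, sy, sz):
    """Gaussian plume with ground reflection: air concentration at crosswind
    offset y and height z, for release rate Q, wind speed u, effective release
    height H; sy, sz are dispersion parameters (m) evaluated at the downwind
    distance of interest."""
    lateral = math.exp(-(y * y) / (2.0 * sy * sy))
    vertical = (math.exp(-((z - H) ** 2) / (2.0 * sz * sz))
                + math.exp(-((z + H) ** 2) / (2.0 * sz * sz)))  # image term
    return Q / (2.0 * math.pi * u * sy * sz) * lateral * vertical
```

A dose assessment then multiplies such concentrations by breathing rate and a tritium-specific dose coefficient, which is where the HT/HTO pathway modelling above comes in.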
The PLUTO Code for Adaptive Mesh Computations in Astrophysical Fluid Dynamics
Mignone, A.; Zanni, C.; Tzeferacos, P.; van Straalen, B.; Colella, P.; Bodo, G.
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
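The conservative finite-volume idea at the heart of such schemes can be shown in one dimension: cell averages are updated by interface flux differences, with slope-limited linear reconstruction providing second-order accuracy in smooth flow. A minimal sketch for scalar advection (a minmod limiter standing in for PLUTO's CTU/PPM machinery):

```python
def advect_step(q, dt, dx, u=1.0):
    """One conservative finite-volume step for q_t + u q_x = 0 (u > 0)
    on a periodic grid: minmod-limited linear reconstruction, upwind flux."""
    n = len(q)

    def minmod(a, b):
        if a * b <= 0.0:
            return 0.0
        return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

    slopes = [minmod(q[i] - q[i - 1], q[(i + 1) % n] - q[i]) for i in range(n)]
    # Interface flux at i+1/2 from the upwind (left) cell's reconstruction.
    flux = [u * (q[i] + 0.5 * (1.0 - u * dt / dx) * slopes[i]) for i in range(n)]
    # Flux-difference update: conservative by construction.
    return [q[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

Because the same interface flux leaves one cell and enters its neighbor, the total of q is conserved to machine precision, the discrete property that the full MHD scheme maintains for mass, momentum, and energy.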
Coded aperture x-ray diffraction imaging with transmission computed tomography side-information
Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.
2016-03-01
Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially-dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction, which incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images in comparison to their uncorrected counterparts. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
Energy Technology Data Exchange (ETDEWEB)
Mignone, A.; Tzeferacos, P. [Dipartimento di Fisica Generale, Universita di Torino, via Pietro Giuria 1, 10125 Torino (Italy); Zanni, C.; Bodo, G. [INAF, Osservatorio Astronomico di Torino, Strada Osservatorio 20, Pino Torinese 10025 (Italy); Van Straalen, B.; Colella, P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, MS 50A-1148, Berkeley, CA 94720 (United States)
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
SWIFT: task-based hydrodynamics and gravity for cosmological simulations
Theuns, Tom; Schaller, Matthieu; Gonnet, Pedro
2015-01-01
Simulations of galaxy formation follow the gravitational and hydrodynamical interactions between gas, stars and dark matter through cosmic time. The huge dynamic range of such calculations severely limits strong scaling behaviour of the community codes in use, with load-imbalance, cache inefficiencies and poor vectorisation limiting performance. The new swift code exploits task-based parallelism designed for many-core compute nodes interacting via MPI using asynchronous communication to improve speed and scaling. A graph-based domain decomposition schedules interdependent tasks over available resources. Strong scaling tests on realistic particle distributions yield excellent parallel efficiency, and efficient cache usage provides a large speed-up compared to current codes even on a single core. SWIFT is designed to be easy to use by shielding the astronomer from computational details such as the construction of the tasks or MPI communication. The techniques and algorithms used in SWIFT may benefit other compu...
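The task-based idea can be miniaturized: express the work as a dependency graph and let a pool execute whatever is ready. This toy scheduler (Python threads, proceeding wave by wave rather than with SWIFT's fully asynchronous engine; all names are illustrative) conveys the structure:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task_graph(tasks, deps, workers=4):
    """Run tasks {name: callable} honoring deps {name: [prerequisites]},
    submitting every currently-ready task to a thread pool concurrently."""
    done = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        remaining = set(tasks)
        while remaining:
            ready = [n for n in remaining
                     if all(d in done for d in deps.get(n, ()))]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            futures = {n: pool.submit(tasks[n]) for n in ready}
            for n, fut in futures.items():
                done[n] = fut.result()
            remaining -= set(ready)
    return done

# Example: two cells' densities can run concurrently; forces need both.
log = []
tasks = {
    "density_A": lambda: log.append("dA"),
    "density_B": lambda: log.append("dB"),
    "forces": lambda: log.append("f"),
}
run_task_graph(tasks, {"forces": ["density_A", "density_B"]})
```

The payoff of the task formulation is that dependencies, not global barriers, serialize the computation, which is what enables the overlap of communication and work described above.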
Energy Technology Data Exchange (ETDEWEB)
Bertolotto, D.
2011-11-15
The current doctoral research is focused on the development and validation of a coupled computational tool, to combine the advantages of computational fluid dynamics (CFD) in analyzing complex flow fields and of state-of-the-art system codes employed for nuclear power plant (NPP) simulations. Such a tool can considerably enhance the analysis of NPP transient behavior, e.g. in the case of pressurized water reactor (PWR) accident scenarios such as Main Steam Line Break (MSLB) and boron dilution, in which strong coolant flow asymmetries and multi-dimensional mixing effects strongly influence the reactivity of the reactor core, as described in Chap. 1. To start with, a literature review on code coupling is presented in Chap. 2, together with the corresponding ongoing projects in the international community. Special reference is made to the framework in which this research has been carried out, i.e. the Paul Scherrer Institute's (PSI) project STARS (Steady-state and Transient Analysis Research for the Swiss reactors). In particular, the codes chosen for the coupling, i.e. the CFD code ANSYS CFX V11.0 and the system code US-NRC TRACE V5.0, are part of the STARS codes system. Their main features are also described in Chap. 2. The development of the coupled tool, named CFX/TRACE from the names of the two constitutive codes, has proven to be a complex and broad-based task, and therefore constraints had to be put on the target requirements, while keeping in mind a certain modularity to allow future extensions to be made with minimal effort. After careful consideration, the coupling was defined to be on-line, parallel and with non-overlapping domains connected by an interface, which was developed through the Parallel Virtual Machines (PVM) software, as described in Chap. 3. Moreover, two numerical coupling schemes were implemented and tested: a sequential explicit scheme and a sequential semi-implicit scheme. Finally, it was decided that the coupling would be single
Maestro and Castro: Simulation Codes for Astrophysical Flows
Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun
2017-01-01
Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advance Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.
Energy Technology Data Exchange (ETDEWEB)
Short, Mark [Los Alamos National Laboratory; Aslam, Tariq D [Los Alamos National Laboratory
2010-01-01
The detonation structure in many insensitive high explosives consists of two temporally disparate zones of heat release. In PBX 9502, there is a fast reaction zone (≈25 ns) during which reactants are converted to gaseous products and small carbon clusters, followed by a slower regime (≈250 ns) of carbon coagulation. A hybrid approach for determining the propagation of two-stage heat release detonations has been developed that utilizes a detonation shock dynamics (DSD) based strategy for the fast reaction zone with a direct hydrodynamic simulation of the flow in the slow zone. Unlike a standard DSD/programmed burn formulation, the evolution of the fast zone DSD-like surface is coupled to the flow in the slow reaction zone. We have termed this formulation flow integrated detonation shock dynamics (FIDSD). The purpose of the present paper is to show how the FIDSD formulation can be applied to detonation propagation on an Eulerian grid using an algorithm based on level set interface tracking and a ghost fluid approach.
2017-04-13
AFRL-AFOSR-UK-TR-2017-0029: Automated and Assistive Tools for Accelerated Code Migration of Scientific Computing on to Heterogeneous MultiCore Systems (Contract FA8655-12-1-2021, Grant 12-2021, Program Element 61102F). The project developed automated and assistive tools for migrating scientific computing code to heterogeneous multicore systems. The approach was based on the OmpSs programming model and the performance tools that constitute two strategic
Energy Technology Data Exchange (ETDEWEB)
Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)
2000-07-01
The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on the five analytical methods most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in time and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Microsoft Fortran-77 for the MS-DOS environment using structured programming techniques. It runs on most IBM-compatible personal computers. The source code and its computational architecture as well as the input and output files are described in detail. Validation and application examples on the use of this computer code with wellbore temperature data (obtained from specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
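Among the classical analytical methods for this problem is the Horner plot: shut-in bottomhole temperatures are regressed against ln((t_c + Δt)/Δt) and extrapolated to infinite shut-in time, where the logarithmic term vanishes. A sketch (the exact five methods implemented in the code are not reproduced; names are illustrative):

```python
import math

def horner_static_temperature(tc, shut_in_times, temps):
    """Least-squares Horner fit T = a + b*ln((tc + dt)/dt); the intercept a
    estimates the static formation temperature (dt -> infinity makes the
    log term vanish). tc is the circulation/disturbance time; consistent
    time units throughout."""
    x = [math.log((tc + dt) / dt) for dt in shut_in_times]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(temps) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, temps))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar

# Synthetic shut-in series generated from a known 120 degC formation.
tc, dts = 10.0, [5.0, 10.0, 20.0, 40.0]
temps = [120.0 - 8.0 * math.log((tc + dt) / dt) for dt in dts]
T_static = horner_static_temperature(tc, dts, temps)
```

On real logs the data are not exactly linear in Horner time, which is why the code cross-checks several analytical models rather than relying on one.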
Walitt, L.
1982-01-01
The VANS successive approximation numerical method was extended to the computation of three dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic, centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was performed on a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed locally mass flux conservative cross-sectional computer code. Both blade-to-blade and cross sectional modes of calculation were implemented for this problem. A triplet point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that cross-sectional computation, with a locally mass flux conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.
Energy Technology Data Exchange (ETDEWEB)
Kostin, Mikhail [FRIB, MSU]; Mokhov, Nikolai [FNAL]; Niita, Koji [RIST, Japan]
2013-09-25
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework is implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes written in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport code it is paired with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It can be used with other codes, such as PHITS, FLUKA and MCNP, after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
Energy Technology Data Exchange (ETDEWEB)
McGrail, B.P.; Bacon, D.H.
1998-02-01
Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report but updated to include additional processes that have been found to be important since Revision 0 was issued and to include additional codes that have been released. The highest ranked computer code was found to be the STORM code developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites.
Keshavarz, Mohammad Hossein; Motamedoshariati, Hadi; Moghayadnia, Reza; Nazari, Hamid Reza; Azarniamehraban, Jamshid
2009-12-30
In this paper, a new, simple, user-friendly computer code, written in Visual Basic, is introduced to evaluate the detonation performance of high explosives and their thermochemical properties. The code is based on recently developed methods for obtaining thermochemical and performance parameters of energetic materials, which can complement the outputs of other thermodynamic chemical equilibrium codes. It can predict various important properties of high explosives, including velocity of detonation, detonation pressure, heat of detonation, detonation temperature, Gurney velocity, adiabatic exponent, and specific impulse. It can also predict the detonation performance of aluminized explosives, which can exhibit non-ideal behavior. The code has been validated with well-known standard explosives, and the predicted results, where predictions of the desired properties were possible, were compared with the outputs of several other computer codes. A large amount of data on the detonation performance of different classes of explosives containing C-NO2, O-NO2 and N-NO2 energetic groups has also been generated and compared with the well-known complex code BKW.
CATARACT: Computer code for improving power calculations at NREL's high-flux solar furnace
Scholl, K.; Bingham, C.; Lewandowski, A.
1994-01-01
The High-Flux Solar Furnace (HFSF), operated by the National Renewable Energy Laboratory, uses a camera-based flux-mapping system to analyze the flux distribution and determine the total power at the focal point. The flux-mapping system consists of a diffusely reflecting plate with seven circular foil calorimeters, a charge-coupled device (CCD) camera, an IBM-compatible personal computer with a frame-grabber board, and commercial image analysis software. The calorimeters provide flux readings that are used to scale the image captured from the plate by the camera. The image analysis software can estimate the total power incident on the plate by integrating under the 3-dimensional image. Because of the physical layout of the HFSF, the camera is positioned at a 20° angle to the flux-mapping plate normal. The resulting foreshortening of the captured images represents a systematic error in the power calculations, because the software incorrectly assumes the image is parallel to the camera's array. We have written a FORTRAN computer program called CATARACT (camera/target angle correction) that we use to transform the original flux-mapper image to a plane that is normal to the camera's optical axis. A description of the code and the results of experiments performed to verify it are presented, along with comparisons of the total power available from the HFSF as determined from the flux-mapping system and from theoretical considerations.
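The magnitude of the foreshortening error is easy to see in a simplified geometry: a plate viewed at angle θ to its normal appears compressed by cos θ along one axis, so a naive integral over the camera image underestimates the power by that factor. The sketch below shows only this scalar correction; the actual CATARACT transform remaps the whole image plane, and the function name here is an illustrative assumption.

```python
import numpy as np

def corrected_power(flux_image, pixel_area_cm2, tilt_deg):
    """Total power on a tilted flux-mapping plate.  Viewed at an angle,
    the plate is foreshortened by cos(tilt), so the naive image integral
    must be divided by cos(tilt).  Simplified geometry, for illustration."""
    naive_watts = flux_image.sum() * pixel_area_cm2
    return naive_watts / np.cos(np.radians(tilt_deg))
```

For a uniform 1 W/cm² map over 100 pixels of 1 cm² each, the naive estimate is 100 W, while the corrected value at a 20° tilt is about 106.4 W.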
Directory of Open Access Journals (Sweden)
C.S. Ierotheou
2001-01-01
The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a programming standard using directives and the limited scalability have affected its take-up. Recent advances in hardware and software technologies have resulted in improvements both to the performance of parallel programs with compiler directives and to the issue of portability with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.
A simulation of a pebble bed reactor core by the MCNP-4C computer code
Directory of Open Access Journals (Sweden)
Bakhshayesh Moshkbar Khalil
2009-01-01
Lack of energy is a major crisis of our century; the irregular increase of fossil fuel costs has forced us to search for novel, cheaper, and safer sources of energy. Pebble bed reactors - an advanced new generation of reactors with specific advantages in safety and cost - might turn out to be the desired candidate for the role. The calculation of the critical height of a pebble bed reactor at room temperature, using the MCNP-4C computer code, is the main goal of this paper. In order to reduce the MCNP computing time compared to previously proposed schemes, we have devised a new simulation scheme. Different arrangements of kernels in fuel pebble simulations were investigated, and the arrangement that best decreases the MCNP execution time while preserving the accuracy of the results was chosen. The neutron flux distribution and control rod worths, as well as their shadowing effects, are also considered in this paper. All calculations done for the HTR-10 reactor core are in good agreement with experimental results.
High-fidelity plasma codes for burn physics
Energy Technology Data Exchange (ETDEWEB)
Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)
2016-10-19
Accurate predictions of the equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both the experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes, their future development, and the role they can play in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.
WEC3: Wave Energy Converter Code Comparison Project: Preprint
Energy Technology Data Exchange (ETDEWEB)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien; Ruehl, Kelley; Roy, Andre; Costello, Ronan; Laporte Weywada, Pauline; Bailey, Helen
2017-01-01
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices, and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase I consists of code-to-code verification, and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
An efficient radiative cooling approximation for use in hydrodynamic simulations
Lombardi, James C.; McInally, William G.; Faber, Joshua A.
2015-02-01
To make relevant predictions about observable emission, hydrodynamical simulation codes must employ schemes that account for radiative losses, but the large dimensionality of accurate radiative transfer schemes is often prohibitive. Stamatellos and collaborators introduced a scheme for smoothed particle hydrodynamics (SPH) simulations based on the notion of polytropic pseudo-clouds that uses only local quantities to estimate cooling rates. The computational approach is extremely efficient and works well in cases close to spherical symmetry, such as in star formation problems. Unfortunately, the method, which takes the local gravitational potential as an input, can be inaccurate when applied to non-spherical configurations, limiting its usefulness when studying discs or stellar collisions, among other situations of interest. Here, we introduce the 'pressure scale height method', which incorporates the fluid pressure scale height into the determination of column densities and cooling rates, and show that it produces more accurate results across a wide range of physical scenarios while retaining the computational efficiency of the original method. The tested models include spherical polytropes as well as discs with specified density and temperature profiles. We focus on applying our techniques within an SPH code, although our method can be implemented within any particle-based Lagrangian or grid-based Eulerian hydrodynamic scheme. Our new method may be applied in a broad range of situations, including within the realm of stellar interactions, collisions, and mergers.
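The essence of such local cooling approximations can be sketched in a few lines: estimate a column density from the pressure scale height h = P/(ρg), then bridge the optically thick and thin cooling limits. This is a rough cartoon of the idea only, with illustrative constants; the published method uses mass-weighted opacities and a more careful interpolation.

```python
import numpy as np

SIGMA_SB = 5.670374e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

def cooling_rate(rho, T, P, g, kappa):
    """Radiative cooling rate estimated from purely local quantities:
    the column density is approximated via the pressure scale height
    h = P / (rho * g).  A sketch of the idea, not the published scheme."""
    h = P / (rho * abs(g))            # pressure scale height
    sigma_col = rho * h               # pseudo column density (~ P / g)
    # bridge optically thick (1 / sigma^2 kappa) and thin (kappa) limits
    return 4.0 * SIGMA_SB * T**4 / (sigma_col**2 * kappa + 1.0 / kappa)
```

Note that in this cartoon the pseudo column density reduces to P/g, so increasing the pressure (deeper in a disc or star) lengthens the optically thick cooling time, as expected.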
Kucinskas, A; Caffau, E; Steffen, M
2009-01-01
We present synthetic broad-band photometric colors of a late-type giant located close to the RGB tip (T_eff = 3640 K, log g = 1.0 and [M/H] = 0.0). Johnson-Cousins-Glass BVRIJHK colors were obtained from the spectral energy distributions calculated using 3D hydrodynamical and 1D classical stellar atmosphere models. The differences between photometric magnitudes and colors predicted by the two types of models are significant, especially at optical wavelengths, where they may reach, e.g., ΔV ≈ 0.16, ΔR ≈ 0.13, Δ(V−I) ≈ 0.14 and Δ(V−K) ≈ 0.20. Differences in the near-infrared are smaller but still non-negligible (e.g., ΔK ≈ 0.04). Such discrepancies may lead to noticeably different photometric parameters when these are inferred from photometry (e.g., the effective temperature will change by ΔT_eff ≈ 60 K due to a difference of Δ(V−K) ≈ 0.20).
Glassman, Arthur J.; Jones, Scott M.
1991-01-01
This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a user's guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show the effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
Test results of a 40 kW Stirling engine and comparison with the NASA-Lewis computer code predictions
Allen, D.; Cairelli, J.
1985-01-01
A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to structurally evaluate the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The measured data tended to be lower than the computer code predictions. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix tested severely reduced performance.
Energy Technology Data Exchange (ETDEWEB)
Hamada, Michael S [Los Alamos National Laboratory; Higdon, David M [Los Alamos National Laboratory
2009-01-01
In this paper, we present a generic example to illustrate various points about making future predictions of population performance using a biased performance computer code, physical performance data, and critical performance parameter data sampled from the population at various times. We show how the actual performance data help to correct the biased computer code, and the impact of uncertainty, especially when the prediction is made far from where the available data are taken. We also demonstrate how a Bayesian approach allows both inferences about the unknown parameters and predictions to be made in a consistent framework.
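The core mechanism, correcting a biased code with physical data while tracking uncertainty, can be illustrated with a one-parameter toy version: treat the bias as a single additive constant with a Gaussian prior and update it from the code-versus-data residuals. This is a sketch of the general idea, not the paper's actual model (which handles parameters, time trends, and discrepancy jointly).

```python
import numpy as np

def calibrate_bias(code_pred, observed, noise_var, prior_var):
    """Posterior mean/variance for a constant additive bias of a
    computer code, from paired code predictions and measurements
    (normal-normal conjugate update; illustrative toy model)."""
    resid = np.asarray(observed, float) - np.asarray(code_pred, float)
    n = resid.size
    post_var = 1.0 / (n / noise_var + 1.0 / prior_var)
    post_mean = post_var * resid.sum() / noise_var
    return post_mean, post_var    # corrected prediction: code(x) + post_mean
```

The posterior variance shrinks as physical data accumulate, which is exactly the quantity that grows again (through the prior) when one must predict far from where the data were taken.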
Grid cells generate an analog error-correcting code for singularly precise neural computation.
Sreenivasan, Sameet; Fiete, Ila
2011-09-11
Entorhinal grid cells in mammals fire as a function of animal location, with spatially periodic response patterns. This nonlocal periodic representation of location, a local variable, is unlike other neural codes. There is no theoretical explanation for why such a code should exist. We examined how accurately the grid code with noisy neurons allows an ideal observer to estimate location and found this code to be a previously unknown type of population code with unprecedented robustness to noise. In particular, the representational accuracy attained by grid cells over the coding range was in a qualitatively different class from what is possible with observed sensory and motor population codes. We found that a simple neural network can effectively correct the grid code. To the best of our knowledge, these results are the first demonstration that the brain contains, and may exploit, powerful error-correcting codes for analog variables.
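A cartoon of why the grid code is so robust: position is represented only by its phase within each of several spatially periodic modules, and an ideal observer decodes by finding the position whose phases best match the (noisy) readout. The toy below uses small coprime integer periods for clarity (real grid modules have period ratios around 1.4 and continuous phases); it is an illustration of the coding principle, not the paper's model.

```python
import numpy as np

PERIODS = (3, 5, 7)    # illustrative module periods; distinct periods give
                       # a coding range equal to their product (105 here)

def encode(x):
    """Phase of position x within each grid module, in [0, 1)."""
    return np.array([(x % p) / p for p in PERIODS])

def decode(phases, x_max=105):
    """Brute-force ideal observer: the candidate position whose phases
    are closest (circular distance) to the noisy phase readout."""
    best, best_d = 0, np.inf
    for c in range(x_max):
        diff = np.abs(encode(c) - phases)
        d = np.minimum(diff, 1.0 - diff).sum()
        if d < best_d:
            best, best_d = c, d
    return best
```

Small phase noise decodes correctly because distinct positions are far apart in the joint phase space, which is the error-correcting property the paper quantifies.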
A new class of codes for Boolean masking of cryptographic computations
Carlet, Claude; Kim, Jon-Lark; Solé, Patrick
2011-01-01
We introduce a new class of rate one-half binary codes: complementary information set codes. A binary linear code of length 2n and dimension n is called a complementary information set code (CIS code for short) if it has two disjoint information sets. This class of codes contains self-dual codes as a subclass. It is connected to graph correlation-immune Boolean functions of use in the security of hardware implementations of cryptographic primitives. Such codes make it possible to reduce the cost of masking cryptographic algorithms against side-channel attacks. In this paper we investigate this new class of codes: we give optimal or best-known CIS codes of length < 132. We derive general constructions based on cyclic codes and on double circulant codes. We derive a Varshamov-Gilbert bound for long CIS codes, and show that they can all be classified, in small lengths ≤ 12, by the building-up construction. Some nonlinear S-boxes are constructed by using Z4-codes, based on the notion of dual distance of an unrestricte...
Jeon, Sangyong
2015-01-01
We give a pedagogical review of relativistic hydrodynamics relevant to relativistic heavy-ion collisions. Topics discussed include the linear response theory derivation of second-order viscous hydrodynamics (including the Kubo formulas), the kinetic theory derivation of second-order viscous hydrodynamics, anisotropic hydrodynamics, and a brief review of numerical algorithms. Emphasis is given to the theory of hydrodynamics rather than phenomenology.
Lin, J. W.; Erickson, T. A.
2011-12-01
Historically, the application of high-performance computing (HPC) to the atmospheric sciences has focused on using the increases in processor speed, storage, and parallelization to run longer simulations of larger and more complex models. Such a focus, however, has led to a user culture where code robustness and reusability are ignored or discouraged. Additionally, such a culture works against nurturing and growing connections between high-performance computational earth sciences and scientific users outside of that community. Given the explosion in computational power available to researchers unconnected with the traditional HPC centers, as well as in the number of quality tools available to conduct analysis and visualization, the programming insularity of the earth science modeling and analysis community acts as a formidable barrier to increasing the usefulness and robustness of computational earth science products. In this talk, we suggest that adopting best practices from the software engineering community, and in particular the open-source community, has the potential to improve the quality of code and increase the impact of earth sciences HPC. In particular, we will discuss the impact of practices such as unit testing and code review, the need and preconditions for code reusability, and the importance of APIs and open frameworks for enabling scientific discovery across sub-disciplines. We will present examples of the cross-disciplinary fertilization possible with open APIs. Finally, we will discuss ways funding agencies and the computational earth sciences community can help encourage the adoption of such best practices.
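Unit testing, one of the practices mentioned above, is cheap to apply to scientific code: extract small analysis routines and pin their behaviour on plain and edge cases. The routine and test below are illustrative (not from any particular earth-science code base):

```python
import numpy as np

def column_mean(data, axis=0):
    """Mean along an axis, ignoring NaN gaps -- a stand-in for the kind
    of small analysis routine worth extracting and testing on its own."""
    return np.nanmean(np.asarray(data, dtype=float), axis=axis)

def test_column_mean():
    # Pin behaviour on a plain case and on a case with a missing value,
    # so refactors or platform changes cannot silently alter results.
    assert column_mean([[1.0, 2.0], [3.0, 4.0]]).tolist() == [2.0, 3.0]
    assert column_mean([[1.0, np.nan], [3.0, 4.0]]).tolist() == [2.0, 4.0]

test_column_mean()
```

Tests like this one are exactly what makes code reusable by scientists outside the group that wrote it.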
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969, when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.
DIST: a computer code system for calculation of distribution ratios of solutes in the purex system
Energy Technology Data Exchange (ETDEWEB)
Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-05-01
Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DISTEX) and of Zr(IV) and Tc(VII) (DISTEXFP), together with calculation programs that compute distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DIST1), and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters of the empirical equations, using the DISTEX data that fulfill the assigned conditions, and apply them to calculate distribution ratios of the respective solutes. Approximately 5,000 data points are stored in DISTEX and DISTEXFP. The present report describes: 1) the specific features of the DIST1 and DIST2 codes, with examples of calculations; 2) the databases DISTEX and DISTEXFP and the program DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and deletion functions; and, in the annex, 3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G; 4) a user manual for DISTIN; 5) the source programs of DIST1 and DIST2; and 6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.
Directory of Open Access Journals (Sweden)
Kumar Parijat Tripathi
RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing at an extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting these extremely large data into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery). It offers a report on the statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and information related to protein-protein interactions. It clusters the transcripts based on functional annotations and generates a tabular report of functional and Gene Ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps users to characterize de-novo assembled reads obtained from NGS experiments for non-referenced organisms, and it also performs functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for a better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. The web application is
Directory of Open Access Journals (Sweden)
Marcos Antonio Klunk
Diagenetic reactions, characterized by the dissolution and precipitation of minerals at low temperatures, control the quality of sedimentary rocks as hydrocarbon reservoirs. Geochemical modeling, a tool used to understand diagenetic processes, is performed through computer codes based on thermodynamic and kinetic parameters. In a comparative study, we reproduced the diagenetic reactions observed in Snorre Field reservoir sandstones, Norwegian North Sea. These reactions had been previously modeled in the literature using the DISSOL-THERMAL code. In this study, we modeled the diagenetic reactions in the reservoirs using Geochemist's Workbench (GWB) and TOUGHREACT software, based on a convective-diffusive-reactive model and on the thermodynamic and kinetic parameters compiled for each reaction. TOUGHREACT and DISSOL-THERMAL modeling showed dissolution of quartz, K-feldspar and plagioclase in a similar temperature range, from 25 to 80°C. In contrast, GWB modeling showed dissolution of albite, plagioclase and illite, as well as precipitation of quartz, K-feldspar and kaolinite, in the same temperature range. The modeling generated by the different software packages for temperatures of 100, 120 and 140°C similarly showed the dissolution of quartz, K-feldspar, plagioclase and kaolinite, but differed in the precipitation of albite and illite. At temperatures of 150 and 160°C, GWB and TOUGHREACT produced results different from those of DISSOL-THERMAL, except for the dissolution of quartz, plagioclase and kaolinite. The comparative study allows one to choose the numerical modeling software whose results are closest to the diagenetic reactions observed in the petrographic analysis of the modeled reservoirs.
Spokoyny, Ilana; Chen, James Y; Raman, Rema; Ernstrom, Karin; Agrawal, Kunal; Modir, Royya F; Meyer, Dawn M; Meyer, Brett C
2016-12-01
Head computed tomography (CT) is critical for stroke code evaluations and often happens prior to completion of the neurological exam. Eye deviation on neuroimaging (the DeyeCOM sign) has utility for predicting stroke diagnosis and correlates with the National Institutes of Health Stroke Scale (NIHSS) gaze score. We further assessed the utility of the DeyeCOM sign, without complex caliper-based eye deviation calculations, using simply a visual determination method. Patients with an initial head CT and final diagnosis from an institutional review board-approved consecutive prospective registry of stroke codes at the University of California, San Diego, were included. Five stroke specialists and 1 neuroradiologist reviewed each CT. DeyeCOM+ patients were compared to DeyeCOM- patients (baseline characteristics, diagnosis, and NIHSS gaze score). Kappa statistics compared the stroke specialists' reads to the neuroradiologist's, and the visual determination to the caliper measurement of the DeyeCOM sign. Of 181 patients, 46 were DeyeCOM+. Ischemic stroke was more commonly diagnosed in DeyeCOM+ patients compared to other diagnoses (P = .039). DeyeCOM+ patients were more likely to have an NIHSS gaze score of 1 or higher (P = .006). The NIHSS score of DeyeCOM+ versus DeyeCOM- stroke patients was 8.3 ± 6.0 versus 6.7 ± 8.0 (P = .065). Functional outcomes were similar (P = .59). Stroke specialists had excellent agreement with the neuroradiologist (κ = 0.89). Visual inspection had excellent agreement with the caliper method (κ = 0.88). A time-sensitive visual determination of gaze deviation on imaging was predictive of ischemic stroke diagnosis and of the presence of an NIHSS gaze score, and was consistent with the more complex caliper method. This study furthers the clinical utility of the DeyeCOM sign for predicting ischemic strokes.
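The inter-rater agreement figures above are Cohen's kappa values: raw agreement corrected for the agreement expected by chance. For two raters assigning binary labels (e.g. DeyeCOM+ vs DeyeCOM-), the standard formula can be computed in a few lines; the example data below are made up for illustration, not from the study.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary (0/1) labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                   # each rater's positive rate
    pe = pa * pb + (1 - pa) * (1 - pb)                # chance agreement
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa = 0, which is why values near 0.9 are reported as "excellent".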
Zuccaro, Antonio; Guarracino, Mario Rosario
2015-01-01
Relativistic Hydrodynamics on Graphic Cards
Gerhard, Jochen; Bleicher, Marcus
2012-01-01
We show how to accelerate relativistic hydrodynamics simulations using graphic cards (graphics processing units, GPUs). These improvements are of high relevance, e.g., to the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g. the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators such as GPUs as well as on multi-core processors. With the redesign of the algorithm, the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.
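SHASTA belongs to the family of flux-corrected transport (FCT) schemes: a diffusive low-order transport step followed by a limited antidiffusive correction that sharpens the solution without creating new extrema. The sketch below is a generic 1-D FCT step in that spirit (NumPy, not the UrQMD/SHASTA production code or its OpenCL kernels, and with illustrative coefficients):

```python
import numpy as np

def fct_step(rho, nu, eta=0.125):
    """One 1-D flux-corrected-transport step on a periodic grid with
    constant advection velocity (Courant number nu <= 0.5).  A generic
    FCT sketch in the spirit of SHASTA, for illustration only."""
    up = np.roll(rho, 1)    # rho_{j-1}
    dn = np.roll(rho, -1)   # rho_{j+1}
    # low-order transported-diffused solution (upwind + explicit diffusion)
    td = rho - nu * (rho - up) + eta * (dn - 2.0 * rho + up)
    # antidiffusive fluxes at j+1/2, limited so no new extrema appear
    d = np.roll(td, -1) - td
    s = np.sign(d)
    f = s * np.maximum(0.0, np.minimum.reduce(
        [eta * np.abs(d), s * np.roll(d, 1), s * np.roll(d, -1)]))
    return td - (f - np.roll(f, 1))
```

Each cell's update depends only on nearest neighbours, which is why stencils of this kind map so cleanly onto data-parallel OpenCL work items and yield the large GPU speedups reported.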
Gradient expansion for anisotropic hydrodynamics
Florkowski, Wojciech; Ryblewski, Radoslaw; Spaliński, Michał
2016-12-01
We compute the gradient expansion for anisotropic hydrodynamics. The results are compared with the corresponding expansion of the underlying kinetic-theory model with the collision term treated in the relaxation time approximation. We find that a recent formulation of anisotropic hydrodynamics based on an anisotropic matching principle yields the first three terms of the gradient expansion in agreement with those obtained for the kinetic theory. This gives further support for this particular hydrodynamic model as a good approximation of the kinetic-theory approach. We further find that the gradient expansion of anisotropic hydrodynamics is an asymptotic series, and the singularities of the analytic continuation of its Borel transform indicate the presence of nonhydrodynamic modes.