Massive hybrid parallelism for fully implicit multiphysics
International Nuclear Information System (INIS)
Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.
2013-01-01
As hardware advances continue to reshape the supercomputing landscape, traditional scientific software development practices become increasingly outdated, ineffective, and inefficient. Rewriting and retooling existing software for new architectures is a Sisyphean task that consumes substantial development time, effort, and money. Software libraries that abstract the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them both to take advantage of current HPC architectures and to prepare efficiently for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and of its internal object-oriented hybrid parallel design is given. Representative massively parallel results from several application areas are presented, and a brief discussion of future research areas for the framework is provided. (authors)
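The hybrid model described in this abstract pairs distributed-memory message passing across nodes with shared-memory threading within a node. The following is a minimal, purely illustrative sketch of that pattern using mpi4py and Python threads rather than MOOSE's actual C++ interfaces; the mesh partitioning, element kernel, and helper names are assumptions made for the example, not MOOSE APIs.

    # Hybrid parallel residual assembly sketch: MPI ranks own mesh partitions,
    # threads within a rank split the local element loop (illustrative only).
    from concurrent.futures import ThreadPoolExecutor
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def local_elements(rank, size, n_total=1000):
        """Toy mesh partition: a contiguous block of element ids per MPI rank."""
        per = n_total // size
        return list(range(rank * per, (rank + 1) * per))

    def element_residual(elem_id):
        """Stand-in for a per-element physics kernel evaluation."""
        return 1.0 / (1.0 + elem_id)

    elements = local_elements(rank, size)
    with ThreadPoolExecutor(max_workers=4) as pool:       # shared-memory level
        local_sum = sum(pool.map(element_residual, elements))

    global_sum = comm.allreduce(local_sum, op=MPI.SUM)    # distributed-memory level
    if rank == 0:
        print("assembled global residual surrogate:", global_sum)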
Parallel multiphysics algorithms and software for computational nuclear engineering
International Nuclear Information System (INIS)
Gaston, D; Hansen, G; Kadioglu, S; Knoll, D A; Newman, C; Park, H; Permann, C; Taitano, W
2009-01-01
There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis, where analysts are interested in coupled flow, heat transfer, and neutronics, and in fuel performance simulation, where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple code coupling or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state-of-the-art mathematics and software development techniques, we are deploying next-generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provides the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale-bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples from two MOOSE-based applications will be presented: PRONGHORN, our multiphysics gas-cooled reactor simulation tool, and BISON, our multiphysics, multiscale fuel performance simulation tool.
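The JFNK idea referenced above replaces explicit Jacobian storage with a finite-difference directional derivative inside the Krylov solver, J(u)v ~ [F(u + eps*v) - F(u)] / eps. Below is a minimal sketch of one Newton solve built that way; the two-equation residual F is a made-up toy system, not a MOOSE kernel, and the solver choices (SciPy's GMRES, the eps scaling) are illustrative assumptions.

    # Jacobian-free Newton-Krylov sketch: the Krylov solver only needs J*v,
    # approximated by a finite difference of the nonlinear residual F.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def F(u):
        """Toy coupled residual standing in for a multiphysics system."""
        x, y = u
        return np.array([x**2 + y - 3.0, x + y**2 - 5.0])

    def newton_jfnk(u, tol=1e-10, max_newton=20):
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            eps = 1e-7 * (1.0 + np.linalg.norm(u))
            Jv = LinearOperator((2, 2),
                                matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
            du, _ = gmres(Jv, -r)          # Krylov solve without forming J
            u = u + du
        return u

    print(newton_jfnk(np.array([1.0, 1.0])))   # converges to (1, 2) for this toy F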
Concurrent, parallel, multiphysics coupling in the FACETS project
Energy Technology Data Exchange (ETDEWEB)
Cary, J R; Carlsson, J A; Hakim, A H; Kruger, S E; Miah, M; Pletzer, A; Shasharina, S [Tech-X Corporation, 5621 Arapahoe Avenue, Suite A, Boulder, CO 80303 (United States); Candy, J; Groebner, R J [General Atomics (United States); Cobb, J; Fahey, M R [Oak Ridge National Laboratory (United States); Cohen, R H; Epperly, T [Lawrence Livermore National Laboratory (United States); Estep, D J [Colorado State University (United States); Krasheninnikov, S [University of California at San Diego (United States); Malony, A D [ParaTools, Inc (United States); McCune, D C [Princeton Plasma Physics Laboratory (United States); McInnes, L; Balay, S [Argonne National Laboratory (United States); Pankin, A, E-mail: cary@txcorp.co [Lehigh University (United States)
2009-07-01
FACETS (Framework Application for Core-Edge Transport Simulations) is now in its third year. The FACETS team has developed a framework for concurrent coupling of parallel computational physics for use on Leadership Class Facilities (LCFs). In the course of the last year, FACETS has tackled many of the difficult problems of moving to parallel, integrated modeling by developing algorithms for coupled systems, extracting legacy applications as components, modifying them to run on LCFs, and improving the performance of all components. The development of FACETS abides by rigorous engineering standards, including cross-platform build and test systems, with the latter covering regression, performance, and visualization. In addition, FACETS has demonstrated the ability to incorporate full turbulence computations for the highest fidelity transport computations. Early indications are that the framework, using such computations, scales to multiple tens of thousands of processors. These accomplishments were the result of an interdisciplinary collaboration among the computational physicists, computer scientists, and applied mathematicians on the team.
3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite
Directory of Open Access Journals (Sweden)
Oleksiy Kononenko
2017-10-01
Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. The simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.
International Nuclear Information System (INIS)
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2015-01-01
Highlights: • COBRA-TF was adopted by the Consortium for Advanced Simulation of LWRs. • We have improved code performance to support running large-scale LWR simulations. • Code optimization has led to reductions in execution time and memory usage. • An MPI parallelization has reduced full-core simulation time from days to minutes. - Abstract: This paper describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations—including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices—is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a “single program multiple data” parallelization strategy targeting distributed memory “multiple instruction multiple data” platforms utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard Message-Passing Interface (MPI) calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pressurized water reactor (PWR) pre-processor utility that uses a greatly simplified set of user input compared with the traditional CTF input. To run CTF in...
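The decomposition strategy described above assigns one MPI rank per fuel assembly and exchanges boundary data between neighboring assemblies. A hedged illustration of that communication pattern follows; the 1-D neighbor layout and the "crossflow" payload are simplifications invented for the sketch and are not CTF data structures.

    # One MPI rank per fuel assembly; ranks exchange boundary data with
    # neighbors each iteration (simplified 1-D assembly arrangement).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    boundary_state = {"assembly": rank, "crossflow": 0.1 * rank}   # toy payload

    # Exchange edge-channel data with both neighbors (no-op at domain ends).
    from_left = comm.sendrecv(boundary_state, dest=right, source=left)
    from_right = comm.sendrecv(boundary_state, dest=left, source=right)

    print(f"rank {rank}: received {from_left} and {from_right}")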
International Nuclear Information System (INIS)
Gaston, Derek; Guo, Luanjing; Hansen, Glen; Huang, Hai; Johnson, Richard; Park, HyeongKae; Podgorney, Robert; Tonks, Michael; Williamson, Richard
2012-01-01
There is a growing trend within energy and environmental simulation to consider tightly coupled solutions to multiphysics problems. This can be seen in nuclear reactor analysis where analysts are interested in coupled flow, heat transfer and neutronics, and in nuclear fuel performance simulation where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. In energy and environmental applications, energy extraction involves geomechanics, flow through porous media and fractured formations, adding heat transport for enhanced oil recovery and geothermal applications, and adding reactive transport in the case of applications modeling the underground flow of contaminants. These more ambitious simulations usually motivate some level of parallel computing. Many of the physics coupling efforts to date utilize simple code coupling or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on distinct meshes and data are coupled via simple interpolation, again leaving open questions of stability and accuracy.
NREL Multiphysics Modeling Tools and ISC Device for Designing Safer Li-Ion Batteries
Energy Technology Data Exchange (ETDEWEB)
Pesaran, Ahmad A.; Yang, Chuanbo
2016-03-24
The National Renewable Energy Laboratory has developed a portfolio of multiphysics modeling tools to help battery designers better understand the response of lithium-ion batteries to abusive conditions. We will discuss this portfolio, which includes coupled electrical, thermal, chemical, electrochemical, and mechanical modeling. These models can simulate the response of a cell to overheating, overcharge, mechanical deformation, nail penetration, and internal short circuit. Cell-to-cell thermal propagation modeling will also be discussed.
The application of a multi-physics tool kit to spatial reactor dynamics
International Nuclear Information System (INIS)
Clifford, I.; Jasak, H.
2009-01-01
Traditionally, coupled-field nuclear reactor analysis has been carried out using several loosely coupled solvers, each having been developed independently from the others. In the field of multi-physics, the current generation of object-oriented tool kits provides robust close coupling of multiple fields on a single framework. This paper describes the initial results obtained as part of continuing research into the use of the OpenFOAM multi-physics tool kit for reactor dynamics application development. An unstructured, three-dimensional, time-dependent multi-group diffusion code, Diffusion FOAM, has been developed using the OpenFOAM multi-physics tool kit as a basis. The code is based on the finite-volume methodology and uses a newly developed block-coupled sparse matrix solver for the coupled solution of the multi-group diffusion equations. A description of this code is given with particular emphasis on the newly developed block-coupled solver, along with a selection of results obtained thus far. The code has performed well, indicating that the OpenFOAM tool kit is suited to reactor dynamics applications. This work has shown that the neutronics and simplified thermal-hydraulics of a reactor may be represented and solved for using a common calculation platform, and it opens up the possibility for research into robust close coupling of neutron diffusion and thermal-fluid calculations. This work has further opened up the possibility for research in a number of other areas, including research into three-dimensional unstructured meshes for reactor dynamics applications. (authors)
The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer
International Nuclear Information System (INIS)
Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.
2013-01-01
The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 × 10^4) cores for topology map generation and excellent scaling on O(1 × 10^5) cores for the data transfer operation with meshes of O(1 × 10^9) elements. (authors)
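The rendezvous idea can be illustrated without any mesh machinery: both the source and target decompositions are re-binned onto one shared spatial partition so that overlapping pieces land on the same owner, where the search and pairing then become purely local. The sketch below is a toy serialization of that idea; the uniform 1-D binning and nearest-point pairing stand in for DTK's actual repartitioning and search, and are assumptions made only for illustration.

    # Geometric rendezvous sketch: re-bin source and target points onto a
    # common spatial partition so matching pairs land on the same owner.
    def rendezvous_owner(x, n_owners, lo=0.0, hi=1.0):
        """Map a coordinate to the rank owning that slice of space."""
        i = int((x - lo) / (hi - lo) * n_owners)
        return min(max(i, 0), n_owners - 1)

    n_owners = 4
    source_pts = {"s0": 0.05, "s1": 0.40, "s2": 0.95}   # arbitrarily decomposed mesh A
    target_pts = {"t0": 0.06, "t1": 0.41, "t2": 0.90}   # differently decomposed mesh B

    bins = {r: {"src": [], "tgt": []} for r in range(n_owners)}
    for name, x in source_pts.items():
        bins[rendezvous_owner(x, n_owners)]["src"].append((name, x))
    for name, x in target_pts.items():
        bins[rendezvous_owner(x, n_owners)]["tgt"].append((name, x))

    # Each owner now pairs its targets with the nearest local source point.
    for r, b in bins.items():
        for tname, tx in b["tgt"]:
            if b["src"]:
                sname, _ = min(b["src"], key=lambda s: abs(s[1] - tx))
                print(f"owner {r}: map {sname} -> {tname}")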
Poulet, Thomas; Paesold, Martin; Veveakis, Manolis
2017-03-01
Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.
National Aeronautics and Space Administration — In this proposal, researchers from Cascade Technologies and Stanford University outline a multi-year research plan to develop large-eddy simulation (LES) tools to...
International Nuclear Information System (INIS)
Sanchez, Victor Hugo; Miassoedov, Alexei; Steinbrueck, M.; Tromm, W.
2016-01-01
This paper describes the KIT numerical simulation tools under extension and validation for the analysis of design basis and beyond design basis accidents (DBA) of Light Water Reactors (LWR). The description of the complex thermal-hydraulic, neutron kinetics, and chemo-physical phenomena occurring during off-normal conditions requires the development of multi-physics and multi-scale simulation tools, which is fostered by the rapid increase in computer power nowadays. The KIT numerical tools for DBA and beyond DBA are validated using experimental data from KIT or from abroad. The developments, extensions, coupling approaches, and validation work performed at KIT are briefly outlined and discussed in this paper.
6th International Parallel Tools Workshop
Brinkmann, Steffen; Gracia, José; Resch, Michael; Nagel, Wolfgang
2013-01-01
The latest advances in the High Performance Computing hardware have significantly raised the level of available compute performance. At the same time, the growing hardware capabilities of modern supercomputing architectures have caused an increasing complexity of the parallel application development. Despite numerous efforts to improve and simplify parallel programming, there is still a lot of manual debugging and tuning work required. This process is supported by special software tools, facilitating debugging, performance analysis, and optimization and thus making a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools, which were presented and discussed at the 6th International Parallel Tools Workshop, held in Stuttgart, Germany, 25-26 September 2012.
Energy Technology Data Exchange (ETDEWEB)
Campbell, Michael T. [Illinois Rocstar LLC, Champaign, IL (United States); Safdari, Masoud [Illinois Rocstar LLC, Champaign, IL (United States); Kress, Jessica E. [Illinois Rocstar LLC, Champaign, IL (United States); Anderson, Michael J. [Illinois Rocstar LLC, Champaign, IL (United States); Horvath, Samantha [Illinois Rocstar LLC, Champaign, IL (United States); Brandyberry, Mark D. [Illinois Rocstar LLC, Champaign, IL (United States); Kim, Woohyun [Illinois Rocstar LLC, Champaign, IL (United States); Sarwal, Neil [Illinois Rocstar LLC, Champaign, IL (United States); Weisberg, Brian [Illinois Rocstar LLC, Champaign, IL (United States)
2016-10-15
The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of organizations interested in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems, is now available through the public GitHub system to anyone interested in multiphysics code coupling. Many of the basic documents explaining the use and architecture of IMPACT are also attached as appendices to this document. Online HTML documentation is available through the GitHub site.
Automatic Parallelization Tool: Classification of Program Code for Parallel Computing
Directory of Open Access Journals (Sweden)
Mustafa Basthikodi
2016-04-01
Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly expanded the available parallelism. Several compilers have been updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate existing schemes for the classification of algorithms ("species"); related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structure matches these issues while performing a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants, and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.
Development of FAST.Farm: A New Multiphysics Engineering Tool for Wind-Farm Design and Analysis
Energy Technology Data Exchange (ETDEWEB)
Jonkman, Jason; Annoni, Jennifer; Hayman, Greg; Jonkman, Bonnie; Purkayastha, Avi
2017-01-09
This paper presents the development of FAST.Farm, a new multiphysics tool applicable to engineering problems in research and industry involving wind farm performance and cost optimization, which is needed to address the underperformance, failures, and expenses currently plaguing the wind industry. Achieving wind cost-of-energy targets - which requires improvements in wind farm performance and reliability, together with reduced uncertainty and expenditures - has been hindered by the complicated nature of the wind farm design problem, especially the complex interaction between atmospheric phenomena, wake dynamics, and array effects. FAST.Farm aims to balance the need for accurate modeling of the relevant physics for predicting power performance and loads with the need for low computational cost, in order to support a highly iterative and probabilistic design process and system-wide optimization. FAST.Farm makes use of FAST to model the aero-hydro-servo-elastics of distinct turbines in the wind farm, and it is based on some of the principles of the Dynamic Wake Meandering (DWM) model, but avoids many of the limitations of existing DWM implementations.
NonLinear Parallel OPtimization Tool, Phase II
National Aeronautics and Space Administration — The technological advancement proposed is a novel large-scale Noninear Parallel OPtimization Tool (NLPAROPT). This software package will eliminate the computational...
Development of parallel/serial program analyzing tool
International Nuclear Information System (INIS)
Watanabe, Hiroshi; Nagao, Saichi; Takigawa, Yoshio; Kumakura, Toshimasa
1999-03-01
The Japan Atomic Energy Research Institute has been developing 'KMtool', a parallel/serial program analyzing tool, in order to promote the parallelization of science and engineering computation programs. KMtool analyzes the performance of programs written in FORTRAN77 and MPI, and it reduces the effort required for parallelization. This paper describes the development purpose, design, utilization, and evaluation of KMtool. (author)
A Coupling Tool for Parallel Molecular Dynamics-Continuum Simulations
Neumann, Philipp
2012-06-01
We present a tool for coupling Molecular Dynamics and continuum solvers. It is written in C++ and is meant to support the developers of hybrid molecular - continuum simulations in terms of both realisation of the respective coupling algorithm as well as parallel execution of the hybrid simulation. We describe the implementational concept of the tool and its parallel extensions. We particularly focus on the parallel execution of particle insertions into dense molecular systems and propose a respective parallel algorithm. Our implementations are validated for serial and parallel setups in two and three dimensions. © 2012 IEEE.
Innovative Software Algorithms and Tools parallel sessions summary
International Nuclear Information System (INIS)
Gaines, Irwin
2001-01-01
A variety of results were presented in the poster session and the five parallel sessions of Innovative Software, Algorithms and Tools (ISAT). I will briefly summarize these presentations and attempt to identify some unifying trends.
Design of a novel parallel reconfigurable machine tool
CSIR Research Space (South Africa)
Modungwa, D
2008-06-01
… of meeting the demands for high mechanical dexterity adaptation as well as the high stiffness necessary for mould and die re-conditioning. This paper presents the design of a parallel reconfigurable machine tool (PRMT) based on both application...
An educational tool for interactive parallel and distributed processing
DEFF Research Database (Denmark)
Pagliarini, Luigi; Lund, Henrik Hautop
2012-01-01
In this article we try to describe how the modular interactive tiles system (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing a hands-on educational tool that allows a change in the representation of abstract problems related to designing interactive parallel and distributed systems. Indeed, the MITS seems to bring a series of goals into education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback, connectivity, topology, island modeling, and user and multi-user interaction, which can rarely be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples we show how to implement...
pcircle - A Suite of Scalable Parallel File System Tools
Energy Technology Data Exchange (ETDEWEB)
2015-10-01
Most software for file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of MPI, which is ubiquitous in cluster computing environments, and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
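The "work-stealing" pattern mentioned above can be sketched without MPI: each worker keeps its own queue of paths and, when that runs dry, takes work from a peer. The in-memory directory tree, worker count, and stealing policy below are illustrative assumptions, not pcircle's implementation.

    # Work-stealing sketch: each worker keeps its own deque of paths and
    # steals from a random peer when its deque runs dry (illustration only).
    import random, threading
    from collections import deque

    tree = {"/": ["/a", "/b"], "/a": ["/a/x", "/a/y"], "/b": ["/b/z"],
            "/a/x": [], "/a/y": [], "/b/z": []}           # toy file system

    n_workers = 3
    queues = [deque() for _ in range(n_workers)]
    queues[0].append("/")                                  # root seeds worker 0
    visited, lock = [], threading.Lock()

    def worker(wid):
        idle_spins = 0
        while idle_spins < 50:
            try:
                path = queues[wid].pop()                   # work from own deque
            except IndexError:
                victim = random.randrange(n_workers)       # steal from a peer
                try:
                    path = queues[victim].popleft()
                except IndexError:
                    idle_spins += 1
                    continue
            idle_spins = 0
            with lock:
                visited.append(path)
            for child in tree.get(path, []):
                queues[wid].append(child)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(sorted(visited))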
An Educational Tool for Interactive Parallel and Distributed Processing
DEFF Research Database (Denmark)
Pagliarini, Luigi; Lund, Henrik Hautop
2011-01-01
In this paper we try to describe how the Modular Interactive Tiles System (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing an educational hands-on tool that allows a change of representation of the abstract problems related to designing interactive parallel and distributed systems. Indeed, MITS seems to bring a series of goals into the education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback, connectivity, topology, island modeling, user and multiuser interaction, which can hardly be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples show how to implement interactive...
Multiphysics simulation electromechanical system applications and optimization
Dede, Ercan M; Nomura, Tsuyoshi
2014-01-01
This book highlights a unique combination of numerical tools and strategies for handling the challenges of multiphysics simulation, with a specific focus on electromechanical systems as the target application. Features: introduces the concept of design via simulation, along with the role of multiphysics simulation in today's engineering environment; discusses the importance of structural optimization techniques in the design and development of electromechanical systems; provides an overview of the physics commonly involved with electromechanical systems for applications such as electronics, ma
Optical Pushing: A Tool for Parallelized Biomolecule Manipulation
Sitters, G.; Laurens, N.; de Rijk, E.J.; Kress, H.; Peterman, E.J.G.; Wuite, G.J.L.
2016-01-01
The ability to measure and manipulate single molecules has greatly advanced the field of biophysics. Yet, the addition of more single-molecule tools that enable one to measure in a parallel fashion is important to diversify the questions that can be addressed. Here we present optical pushing (OP), a
Numerical simulation of Vlasov equation with parallel tools
International Nuclear Information System (INIS)
Peyroux, J.
2005-11-01
This project aims to make the resolution of Vlasov codes even more powerful through various parallelization tools (MPI, OpenMP, etc.). A simplified test case served as a basis for constructing the parallel codes and for obtaining a computational skeleton which could thereafter be re-used for increasingly complex models (more than four phase-space variables). This will make it possible to treat more realistic situations linked, for example, to the injection of ultra-short and ultra-intense pulses in inertial fusion plasmas, or to the study of the trapped-ion instability, now considered responsible for the generation of turbulence in tokamak plasmas. (author)
A tool for simulating parallel branch-and-bound methods
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
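A toy version of the simulation approach described above replaces real subproblem solves with a stochastic branching process and counts exchanges in logical time ticks. The branching probabilities and the load-balancing rule below are invented for illustration and are not taken from the paper.

    # Parallel branch-and-bound simulator sketch: stochastic branching stands
    # in for real subproblem evaluation; time advances in logical ticks.
    import random

    random.seed(1)
    n_procs = 4
    pools = [1 if p == 0 else 0 for p in range(n_procs)]   # open subproblems per proc
    tick, processed = 0, 0

    while sum(pools) > 0:
        tick += 1
        for p in range(n_procs):
            if pools[p] == 0:
                continue
            pools[p] -= 1
            processed += 1
            pools[p] += random.choice([0, 0, 1, 2])        # stochastic branching
        # Simple load balancing: the busiest process ships one unit to the idlest.
        busiest = max(range(n_procs), key=pools.__getitem__)
        idlest = min(range(n_procs), key=pools.__getitem__)
        if pools[busiest] > pools[idlest] + 1:
            pools[busiest] -= 1
            pools[idlest] += 1                             # one logical-time exchange

    print(f"{processed} subproblems in {tick} logical ticks")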
Scalable Adaptive Multilevel Solvers for Multiphysics Problems
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao [Pennsylvania State Univ., University Park, PA (United States). Dept. of Mathematics
2014-11-26
In this project, we carried out many studies on adaptive and parallel multilevel methods for numerical modeling of various applications, including magnetohydrodynamics (MHD) and complex fluids. We have made significant efforts and advances in adaptive multilevel methods for multiphysics problems: multigrid methods, adaptive finite element methods, and applications.
Parallel genetic algorithm as a tool for nuclear reactors reload
International Nuclear Information System (INIS)
Santos, Darley Roberto G.; Schirru, Roberto
1999-01-01
This work presents a tool which can be used by designers to obtain better solutions, in terms of computational cost, to nuclear reactor reload problems. The design of a nuclear fuel reload is a complex combinatorial problem. Generally, iterative processes are the most commonly used because they generate answers that satisfy all restrictions. The model presented here uses Artificial Intelligence techniques, more precisely Genetic Algorithm techniques, combined with parallelization techniques. Tests of the tool presented here were highly satisfactory, owing to a considerable reduction in computational time. (author)
A dataflow analysis tool for parallel processing of algorithms
Jones, Robert L., III
1993-01-01
A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on a set of identical parallel processors. Typical applications include signal processing and control law problems. Graph analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool is shown to facilitate the application of the design process to a given problem.
PLAST: parallel local alignment search tool for database comparison
Directory of Open Access Journals (Sweden)
Lavenier Dominique
2009-10-01
Background: Sequence similarity searching is an important and challenging task in molecular biology, and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit with the parallel capabilities of modern microprocessors. Results: A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in the PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set) and the multithreading concept (multicore). Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedup ranging from 3 to 6 with a similar level of accuracy. Conclusion: A parallel algorithmic approach driven by the knowledge of the internal microprocessor architecture allows significant speedup to be obtained while preserving standard sensitivity for similarity search problems.
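The two levels of parallelism PLAST exploits, multithreading across cores and SIMD within a core, can be imitated in a hedged way with Python threads plus NumPy vectorization standing in for SSE instructions. The word-identity scoring below is a toy comparison invented for the sketch, not PLAST's seed-and-extend alignment.

    # Two-level parallelism sketch: threads split the database into chunks
    # (multicore level) and NumPy vectorizes the per-chunk scoring (SIMD stand-in).
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    rng = np.random.default_rng(0)
    query = rng.integers(0, 4, size=64)                    # toy encoded sequence
    database = rng.integers(0, 4, size=(10_000, 64))       # toy sequence bank

    def score_chunk(chunk):
        """Vectorized identity score of the query against one database chunk."""
        return (chunk == query).sum(axis=1)                # SIMD-style inner loop

    chunks = np.array_split(database, 8)                   # one chunk per thread
    with ThreadPoolExecutor(max_workers=8) as pool:
        scores = np.concatenate(list(pool.map(score_chunk, chunks)))

    print("best hit:", int(scores.argmax()), "score:", int(scores.max()))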
Parallel Enhancements of the General Mission Analysis Tool, Phase I
National Aeronautics and Space Administration — The General Mission Analysis Tool (GMAT) is a state of the art spacecraft mission design tool under active development at NASA's Goddard Space Flight Center (GSFC)....
Multiphysics modeling of a rail gun launcher
Directory of Open Access Journals (Sweden)
Y W Kwon
2016-03-01
A finite element based multiphysics modeling was conducted for a rail gun launcher to predict the exit velocity of the launch object and the temperature distribution. For this modeling, electromagnetic field analysis, heat transfer analysis, thermal stress analysis, and dynamic analysis were conducted for a system consisting of two parallel rails and a moving armature. In particular, an emphasis was given to modeling the contact interface between the rails and the armature. A contact theory was used to estimate the electric as well as thermal conductivities at the interface. Using the developed model, a parametric study was conducted to understand the effects of various parameters on the exit velocity as well as the temperature distribution in the rail gun launcher.
MCBooster: a tool for MC generation for massively parallel platforms
Alves Junior, Antonio Augusto
2016-01-01
MCBooster is a header-only, C++11-compliant library for the generation of large samples of phase-space Monte Carlo events on massively parallel platforms. It was released on GitHub in the spring of 2016. The library core algorithms implement the Raubold-Lynch method; they are able to generate the full kinematics of decays with up to nine particles in the final state. The library supports the generation of sequential decays as well as the parallel evaluation of arbitrary functions over the generated events. The output of MCBooster completely accords with popular and well-tested software packages such as GENBOD (W515 from CERNLIB) and TGenPhaseSpace from the ROOT framework. MCBooster is developed on top of the Thrust library and runs on Linux systems. It deploys transparently on NVidia CUDA-enabled GPUs as well as multicore CPUs. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of perfor...
Multiphysics simulations: Challenges and opportunities
Keyes, David E.; McInnes, Lois Curfman; Woodward, Carol S.; Gropp, William D.; Myra, Eric S.; Pernice, Michael; Bell, John B.; Brown, Jed; Clo, Alain M.; Connors, Jeffrey Mark; Constantinescu, Emil M.; Estep, Donald J.; Evans, Katherine J.; Farhat, Charbel H.; Hakim, Ammar H.; Hammond, Glenn E.; Hansen, Glen A.; Hill, Judith C.; Isaac, Tobin; Jiao, Xiangmin; Jordan, Kirk E.; Kaushik, Dinesh K.; Kaxiras, Efthimios; Koniges, Alice E.; Lee, Kihwan; Lott, Aaron; Lu, Qiming; Magerlein, John H.; Maxwell, Reed M.; McCourt, Michael J.; Mehl, Miriam; Pawlowski, Roger P.; Randles, Amanda Peters; Reynolds, Daniel R.; Rivière, Béatrice M.; Rüde, Ulrich; Scheibe, Timothy D.; Shadid, John N.; Sheehan, Brendan; Shephard, Mark S.; Siegel, Andrew R.; Smith, Barry F.; Tang, Xianzhu; Wilson, Cian R G; Wohlmuth, Barbara Ian
2013-01-01
We consider multiphysics applications from algorithmic and architectural perspectives, where "algorithmic" includes both mathematical analysis and computational complexity, and "architectural" includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. © The Author(s) 2012.
10th International Workshop on Parallel Tools for High Performance Computing
Gracia, José; Hilbrich, Tobias; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang
2017-01-01
This book presents the proceedings of the 10th International Parallel Tools Workshop, held October 4-5, 2016 in Stuttgart, Germany - a forum to discuss the latest advances in parallel tools. High-performance computing plays an increasingly important role for numerical simulation and modelling in academic and industrial research. At the same time, using large-scale parallel systems efficiently is becoming more difficult. A number of tools addressing parallel program development and analysis have emerged from the high-performance computing community over the last decade, and what may have started as a collection of small helper scripts has now matured into production-grade frameworks. Powerful user interfaces and an extensive body of documentation allow easy usage by non-specialists.
Parallel workflow tools to facilitate human brain MRI post-processing
Directory of Open Access Journals (Sweden)
Zaixu eCui
2015-05-01
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed that concatenate individual processing steps and enable fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues.
Multiphysics simulations: challenges and opportunities.
Energy Technology Data Exchange (ETDEWEB)
Keyes, D.; McInnes, L. C.; Woodward, C.; Gropp, W.; Myra, E.; Pernice, M. (Mathematics and Computer Science); (KAUST and Columbia Univ.); (Lawrence Livermore National Laboratory); (Univ. of Illinois at Urbana-Champaign); (Univ. of Mich.); (Idaho National Lab.)
2012-11-29
This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.
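The "common algebraic coupling paradigm" referred to in this abstract can be written down concretely for two physics components; the contrast below between loose (operator-split) and tight (monolithic Newton) coupling is a standard formulation added here for orientation, not an equation taken from the report itself:

    F_1(u_1, u_2) = 0, \qquad F_2(u_1, u_2) = 0 .

    \text{Loose (Gauss--Seidel) coupling: solve } F_1(u_1^{k+1}, u_2^{k}) = 0 \text{ for } u_1^{k+1},
    \text{ then } F_2(u_1^{k+1}, u_2^{k+1}) = 0 \text{ for } u_2^{k+1}.

    \text{Tight coupling: Newton on the monolithic system, }
    \begin{pmatrix}
      \partial F_1/\partial u_1 & \partial F_1/\partial u_2 \\
      \partial F_2/\partial u_1 & \partial F_2/\partial u_2
    \end{pmatrix}
    \begin{pmatrix} \delta u_1 \\ \delta u_2 \end{pmatrix}
    = -\begin{pmatrix} F_1 \\ F_2 \end{pmatrix} .

Loose coupling never resolves the off-diagonal blocks within a step, which is the accuracy and stability concern raised in several abstracts above; tight coupling retains them at the cost of a harder solve, which is where JFNK and physics-based preconditioning enter.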
An approach for coupled-code multiphysics core simulations from a common input
International Nuclear Information System (INIS)
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger; Clarno, Kevin; Simunovic, Srdjan; Slattery, Stuart; Turner, John; Palmtag, Scott
2015-01-01
Highlights: • We describe an approach for coupled-code multiphysics reactor core simulations. • The approach can enable tight coupling of distinct physics codes with a common input. • Multi-code multiphysics coupling and parallel data transfer issues are explained. • The common input approach and how the information is processed are described. • Capabilities are demonstrated on an eigenvalue and power distribution calculation. - Abstract: This paper describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory.
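A hedged sketch of the single-common-input idea described above: one problem description is parsed once and expanded into mutually consistent inputs for each physics code. The keys, values, and the two emitted formats below are invented for illustration and are not the VERAIn schema or the actual Insilico/CTF input formats.

    # Common-input preprocessing sketch: one problem description is expanded
    # into consistent per-code inputs (keys and formats are illustrative only).
    import json

    common_input = {
        "assembly": "17x17",
        "power_MW": 17.67,
        "inlet_temp_K": 565.0,
        "boron_ppm": 1300.0,
    }

    def neutronics_deck(ci):
        """Emit a toy neutronics input that shares the common state."""
        return json.dumps({"lattice": ci["assembly"],
                           "boron_ppm": ci["boron_ppm"]}, indent=2)

    def thermal_hydraulics_deck(ci):
        """Emit a toy thermal-hydraulics input consistent with the same state."""
        return (f"assembly {ci['assembly']}\n"
                f"power_MW {ci['power_MW']}\n"
                f"inlet_temp_K {ci['inlet_temp_K']}\n")

    print(neutronics_deck(common_input))
    print(thermal_hydraulics_deck(common_input))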
Engineering Multiphysics Research
Directory of Open Access Journals (Sweden)
Tom Eppes
2011-05-01
This paper describes an engineering undergraduate course that covers the methods and techniques of multiphysics modeling. Students become active participants in analysis and discovery by being challenged to solve a sequence of problems related to high-priority technology areas. Projects range from power systems and thermal control of habitats to autonomous flight systems and harsh-environment electronics. Working in a cooperative learning environment, teams encounter a series of assignments that build on existing skills while gradually expanding their knowledge and expertise in disciplines outside of their own. This project-based approach employs a scaffolding structure, with assignments organized in progressively challenging modules supported by mentoring. Each project begins with a problem definition that requires consideration of factors and influences beyond a single discipline. Solution development then moves to setting material properties and boundary constraints and including the necessary physics engines. For many students, this is the first in-depth exposure to problems with specialized terminologies, governing equations, and limiting conditions. Lastly, solving and post-processing are addressed, exploring steady-state, time-variant, frequency-response, optimization, and sensitivity methods. The paper discusses the teaching and learning strategies, course structure, outcome assessment, and project examples.
Tightly Coupled Multiphysics Algorithm for Pebble Bed Reactors
International Nuclear Information System (INIS)
Park, HyeongKae; Knoll, Dana; Gaston, Derek; Martineau, Richard
2010-01-01
We have developed a tightly coupled multiphysics simulation tool for the pebble-bed reactor (PBR) concept, a type of Very High-Temperature gas-cooled Reactor (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation Environment library, and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly with a Newton-based approach. Expensive Jacobian matrix formation is alleviated via the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to minimize Krylov iterations. Motivation for the work is provided via analysis and numerical experiments on simpler multiphysics reactor models. We then provide details of the physical models and numerical methods in PRONGHORN. Finally, PRONGHORN's algorithmic capability is demonstrated on a number of PBR test cases.
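The Jacobian-free Newton-Krylov idea mentioned above avoids assembling the Jacobian by approximating its action on a vector with a finite difference of the residual inside a Krylov solver. A minimal sketch of that idea follows; the residual is a toy stand-in, not PRONGHORN's coupled physics, and no preconditioning is shown.

```python
# Minimal Jacobian-free Newton-Krylov sketch: the action J*v is approximated
# by a finite difference of the residual, so the Jacobian is never assembled.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    # Toy stand-in nonlinear residual F(u) = 0 (not PRONGHORN's physics).
    return np.array([u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0])

def jfnk(u, tol=1e-10, max_newton=20, eps=1e-7):
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        # Matrix-free Jacobian action: J v ~= (F(u + eps*v) - F(u)) / eps
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (residual(u + eps * v) - F) / eps)
        du, info = gmres(J, -F)          # Krylov solve for the Newton update
        u = u + du
    return u

print(jfnk(np.array([1.0, 1.0])))        # converges to approximately [1, 2]
```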
7th International Workshop on Parallel Tools for High Performance Computing
Gracia, José; Nagel, Wolfgang; Resch, Michael
2014-01-01
Current advances in High Performance Computing (HPC) increasingly impact efficient software development workflows. Programmers for HPC applications need to consider trends such as increased core counts, multiple levels of parallelism, reduced memory per core, and I/O system challenges in order to derive well performing and highly scalable codes. At the same time, the increasing complexity adds further sources of program defects. While novel programming paradigms and advanced system libraries provide solutions for some of these challenges, appropriate supporting tools are indispensable. Such tools aid application developers in debugging, performance analysis, or code optimization and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 7th International Parallel Tools Workshop, held in Dresden, Germany, September 3-4, 2013.
8th International Workshop on Parallel Tools for High Performance Computing
Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang
2015-01-01
Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany, a forum to discuss the latest advancements in parallel tools.
9th International Workshop on Parallel Tools for High Performance Computing
Hilbrich, Tobias; Niethammer, Christoph; Gracia, José; Nagel, Wolfgang; Resch, Michael
2016-01-01
High Performance Computing (HPC) remains a driver that offers huge potentials and benefits for science and society. However, a profound understanding of the computational matters and specialized software is needed to arrive at effective and efficient simulations. Dedicated software tools are important parts of the HPC software landscape, and support application developers. Even though a tool is by definition not a part of an application, but rather a supplemental piece of software, it can make a fundamental difference during the development of an application. Such tools aid application developers in the context of debugging, performance analysis, and code optimization, and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 9th International Parallel Tools Workshop held in Dresden, Germany, September 2-3, 2015, which offered an established forum for discussing the latest advances in paral...
Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials
Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar
2015-01-01
The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometry of the final structure, it is necessary to assess the impact of process parameters and predict optimized conditions using numerical modeling as an effective prediction tool. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are both multiphysics and multiscale in nature. In this project, the research work has been developed to model AAM processes with a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition
Tuning of tool dynamics for increased stability of parallel (simultaneous) turning processes
Ozturk, E.; Comak, A.; Budak, E.
2016-01-01
Parallel (simultaneous) turning operations make use of more than one cutting tool acting on a common workpiece, offering potential for higher productivity. However, dynamic interaction between the tools and workpiece and the resulting chatter vibrations may create quality problems on machined surfaces. In order to determine chatter-free cutting process parameters, stability models can be employed. In this paper, the stability of parallel turning processes is formulated in the frequency and time domains for two different parallel turning cases. Predictions of the frequency and time domain methods demonstrated reasonable agreement with each other. In addition, the predicted stability limits are also verified experimentally. Simulation and experimental results show multi-regional stability diagrams which can be used to select the most favorable set of process parameters for higher stable material removal rates. In addition to parameter selection, the developed models can be used to determine the best natural frequency ratio of the tools resulting in the highest stable depth of cut. It is concluded that the most stable operations are obtained when the natural frequencies of the tools are slightly offset from each other, and the worst stability occurs when the natural frequencies of the tools are exactly the same.
Methodologies and Tools for Tuning Parallel Programs: 80% Art, 20% Science, and 10% Luck
Yan, Jerry C.; Bailey, David (Technical Monitor)
1996-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessors. However, without effective means to monitor (and analyze) program execution, tuning the performance of parallel programs becomes exponentially difficult as program complexity and machine size increase. In the past few years, the ubiquitous introduction of performance tuning tools from various supercomputer vendors (Intel's ParAide, TMC's PRISM, CRI's Apprentice, and Convex's CXtrace) seems to indicate the maturity of performance instrumentation/monitor/tuning technologies and vendors'/customers' recognition of their importance. However, a few important questions remain: What kind of performance bottlenecks can these tools detect (or correct)? How time consuming is the performance tuning process? What are some important technical issues that remain to be tackled in this area? This workshop reviews the fundamental concepts involved in analyzing and improving the performance of parallel and heterogeneous message-passing programs. Several alternative strategies will be contrasted, and for each we will describe how currently available tuning tools (e.g. AIMS, ParAide, PRISM, Apprentice, CXtrace, ATExpert, Pablo, IPS-2) can be used to facilitate the process. We will characterize the effectiveness of the tools and methodologies based on actual user experiences at NASA Ames Research Center. Finally, we will discuss their limitations and outline recent approaches taken by vendors and the research community to address them.
Parallel tools GUI framework-DOE SBIR phase I final technical report
Energy Technology Data Exchange (ETDEWEB)
Galarowicz, James [Argo Navis Technologies LLC., Annapolis, MD (United States)
2013-12-05
Many parallel performance, profiling, and debugging tools require a graphical way of displaying the very large datasets typically gathered from high performance computing (HPC) applications. Most tool projects create their graphical user interfaces (GUI) from scratch, many times spending their project resources on simply redeveloping commonly used infrastructure. Our goal was to create a multiplatform GUI framework, based on Nokia/Digia’s popular Qt libraries, which will specifically address the needs of these parallel tools. The Parallel Tools GUI Framework (PTGF) uses a plugin architecture facilitating rapid GUI development and reduced development costs for new and existing tool projects by allowing the reuse of many common GUI elements, called “widgets.” Widgets created include 2D data visualizations, a source code viewer with syntax highlighting, and integrated help and welcome screens. Application programming interface (API) design was focused on minimizing the time to get a functional tool working. Having a standard, unified, and user-friendly interface which operates on multiple platforms will benefit HPC application developers by reducing training time and allowing users to move between tools rapidly during a single session. However, Argo Navis Technologies LLC will not be submitting a DOE SBIR Phase II proposal and commercialization plan for the PTGF project. Our preliminary estimates for gross income over the next several years were based upon initial customer interest and income generated by similar projects. Unfortunately, as we further assessed the market during Phase I, we grew to realize that there was not enough demand to warrant such a large investment. While we do find that the project is worth our continued investment of time and money, we do not think it worthy of the DOE's investment at this time. We are grateful that the DOE has afforded us the opportunity to make this assessment, and come to this conclusion.
Vdebug: debugging tool for parallel scientific programs. Design report on vdebug
International Nuclear Information System (INIS)
Matsuda, Katsuyuki; Takemiya, Hiroshi
2000-02-01
We report on a debugging tool called vdebug which supports debugging of parallel scientific simulation programs. It is difficult to debug scientific programs with existing debuggers because the volume of data generated by such programs is too large for users to check as text: existing debuggers typically show data values as characters. To alleviate this, we have developed vdebug, which enables users to check the validity of large amounts of data by displaying the data values visually. Although vdebug was initially restricted to sequential programs, we have made it applicable to parallel programs by adding the capability to merge and visualize data distributed across the programs on each compute node. vdebug now works on seven kinds of parallel computers. In this report, we describe the design of vdebug. (author)
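The central capability described above is merging data that is distributed across compute nodes and inspecting it visually instead of as raw characters. A hedged sketch of that pattern with mpi4py and matplotlib follows; it is illustrative only and not vdebug's implementation.

```python
# Sketch of merging distributed data onto one rank for visual inspection
# (illustrative only; not the vdebug implementation).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_n = 1000
local = np.sin(np.linspace(rank, rank + 1, local_n))   # each rank's local data

merged = comm.gather(local, root=0)      # list of per-rank arrays on rank 0
if rank == 0:
    full = np.concatenate(merged)
    # Visual check instead of printing thousands of numbers as characters.
    import matplotlib
    matplotlib.use("Agg")
    import matplotlib.pyplot as plt
    plt.plot(full)
    plt.savefig("merged_field.png")
```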
Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg
2011-01-01
The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.
Energy Technology Data Exchange (ETDEWEB)
Petrov, Victor [Department of Nuclear Engineering & Radiological Sciences, University of Michigan, 2355 Bonisteel Boulv, Ann Arbor, MI (United States); Kendrick, Brian K. [Theoretical Division (T-1, MS B221), Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Walter, Daniel [Department of Nuclear Engineering & Radiological Sciences, University of Michigan, 2355 Bonisteel Boulv, Ann Arbor, MI (United States); Manera, Annalisa, E-mail: manera@umich.edu [Department of Nuclear Engineering & Radiological Sciences, University of Michigan, 2355 Bonisteel Boulv, Ann Arbor, MI (United States); Secker, Jeffrey [Westinghouse Electric Company Nuclear Fuel Division, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States)
2016-04-01
In the present paper we report on the first attempt to demonstrate and assess the ability of state-of-the-art high-fidelity computational tools to reproduce the complex patterns of CRUD deposits found on the surface of operating Pressurized Water Reactor (PWR) fuel rods. A fuel assembly of the Seabrook Unit 1 PWR was selected as the test problem. During Seabrook Cycle 5, CRUD-induced power shift (CIPS) and CRUD-induced localized corrosion (CILC) failures were observed. Measurements of the clad oxide thickness on both failed and non-failed rods are available, together with visual observations and the results from CRUD scrapes of peripheral rods. Blind simulations were performed using the Computational Fluid Dynamics (CFD) code STAR-CCM+ coupled to an advanced chemistry code, MAMBA, developed at Los Alamos National Laboratory. The blind simulations were then compared to plant data, which were released after completion of the simulations.
PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool
AlTurki, Musab
2011-01-01
Statistical model checking is an attractive formal analysis method for probabilistic systems, such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
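Statistical model checking of the kind PVeStA parallelizes amounts to running many independent stochastic simulations and estimating the probability that a property holds, which makes the sampling embarrassingly parallel. A hedged sketch of that structure using plain Monte Carlo with Python's multiprocessing follows; the model is a toy, not PVeStA, Maude, or QuaTEx.

```python
# Sketch of parallel statistical model checking: independent trace samples
# are farmed out to workers and combined into a probability estimate
# (illustrative only; not PVeStA or Maude).
import random
from multiprocessing import Pool

def property_holds(seed):
    """One stochastic simulation; returns 1 if the checked property holds."""
    rng = random.Random(seed)
    # Toy model: a 'failure' event occurs with probability 0.1 per step.
    return int(all(rng.random() > 0.1 for _ in range(10)))

if __name__ == "__main__":
    n = 100_000
    with Pool() as pool:
        hits = sum(pool.map(property_holds, range(n), chunksize=1000))
    p = hits / n
    # 95% normal-approximation confidence interval for the estimate.
    half_width = 1.96 * (p * (1 - p) / n) ** 0.5
    print(f"P(property) ~= {p:.4f} +/- {half_width:.4f}")
```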
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Herrera, I.; Herrera, G. S.
2015-12-01
Most geophysical systems are macroscopic physical systems. The prediction of their behavior is carried out by means of computational models whose basic building blocks are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized versions of such PDEs, it is necessary to apply highly parallel supercomputers. For them, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of the present state-of-the-art techniques lies in the kind of discretizations they use. Recently, I. Herrera and co-workers, using 'non-overlapping discretizations', have produced the DVS software, which overcomes this limitation [2]. The DVS software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90% or so [3]). It is therefore very suitable for effectively exploiting the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM). References: [1] Herrera, Ismael and George F. Pinder, "Mathematical Modelling in Science and Engineering: An Axiomatic Approach", John Wiley, 243 p., 2012. [2] Herrera, I., de la Cruz, L.M. and Rosas-Medina, A., "Non-Overlapping Discretization Methods for Partial Differential Equations", NUMER METH PART D E, 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3] Herrera, I. and Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).
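As background for the non-overlapping DDM mentioned above, a standard way to expose parallelism is to eliminate the subdomain interiors and solve a reduced problem on the interface unknowns. A generic Schur-complement sketch is shown below; it is not the specific derived-vector-space (DVS) formulation of reference [2].

```latex
% Generic non-overlapping DDM (Schur complement) sketch; interior unknowns
% u_I are eliminated in favor of interface unknowns u_Gamma.
\[
\begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
\begin{pmatrix} u_I \\ u_\Gamma \end{pmatrix}
=
\begin{pmatrix} f_I \\ f_\Gamma \end{pmatrix},
\qquad
S\,u_\Gamma = f_\Gamma - A_{\Gamma I} A_{II}^{-1} f_I,
\quad
S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma}.
\]
```

Because $A_{II}$ is block diagonal over the subdomains, its inverse is applied concurrently, one subdomain per processor, which is where the high parallel efficiencies come from.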
A New Tool for Intelligent Parallel Processing of Radar/SAR Remotely Sensed Imagery
Directory of Open Access Journals (Sweden)
A. Castillo Atoche
2013-01-01
Full Text Available A novel parallel tool for large-scale image enhancement/reconstruction and postprocessing of radar/SAR sensor systems is addressed. The proposed parallel tool performs the following intelligent processing steps: image formation, for the application of different system-level effects of image degradation with a particular remote sensing (RS) system and simulation of random noising effects; enhancement/reconstruction by employing nonparametric robust high-resolution techniques; and image postprocessing using the fuzzy anisotropic diffusion technique, which incorporates a better edge-preserving noise removal effect and a faster diffusion process. This innovative tool allows the processing of high-resolution images provided by different radar/SAR sensor systems as required by RS end-users for environmental monitoring, risk prevention, and resource management. To verify the performance of the proposed parallel framework, the processing steps are developed and specifically tested on graphics processing units (GPUs), achieving considerable speedups compared to the serial version of the same techniques implemented in C language.
Parallel analysis tools and new visualization techniques for ultra-large climate data set
Energy Technology Data Exchange (ETDEWEB)
Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)
2014-12-10
ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.
Unified Lambert Tool for Massively Parallel Applications in Space Situational Awareness
Woollands, Robyn M.; Read, Julie; Hernandez, Kevin; Probe, Austin; Junkins, John L.
2018-03-01
This paper introduces a parallel-compiled tool that combines several of our recently developed methods for solving the perturbed Lambert problem using modified Chebyshev-Picard iteration. This tool (unified Lambert tool) consists of four individual algorithms, each of which is unique and better suited for solving a particular type of orbit transfer. The first is a Keplerian Lambert solver, which is used to provide a good initial guess (warm start) for solving the perturbed problem. It is also used to determine the appropriate algorithm to call for solving the perturbed problem. The arc length or true anomaly angle spanned by the transfer trajectory is the parameter that governs the automated selection of the appropriate perturbed algorithm, and is based on the respective algorithm convergence characteristics. The second algorithm solves the perturbed Lambert problem using the modified Chebyshev-Picard iteration two-point boundary value solver. This algorithm does not require a Newton-like shooting method and is the most efficient of the perturbed solvers presented herein, however the domain of convergence is limited to about a third of an orbit and is dependent on eccentricity. The third algorithm extends the domain of convergence of the modified Chebyshev-Picard iteration two-point boundary value solver to about 90% of an orbit, through regularization with the Kustaanheimo-Stiefel transformation. This is the second most efficient of the perturbed set of algorithms. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver for solving multiple revolution perturbed transfers. This method does require "shooting" but differs from Newton-like shooting methods in that it does not require propagation of a state transition matrix. The unified Lambert tool makes use of the General Mission Analysis Tool and we use it to compute thousands of perturbed Lambert trajectories in parallel on the Space Situational
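For orientation, the modified Chebyshev-Picard iteration named above combines Picard iteration of the integral form of the equations of motion with a Chebyshev representation of the iterates. A schematic of the underlying (unmodified) iteration, with the boundary-value handling omitted, is:

```latex
% Schematic Picard iteration with a Chebyshev representation of the iterate;
% the "modified" boundary-value treatment used by the tool is omitted here.
\[
\mathbf{x}_{k+1}(t) = \mathbf{x}(t_0) + \int_{t_0}^{t} \mathbf{f}\bigl(\tau,\mathbf{x}_k(\tau)\bigr)\,d\tau,
\qquad
\mathbf{x}_k(t) \approx \sum_{n=0}^{N} \boldsymbol{\beta}_n^{(k)}\, T_n\!\bigl(s(t)\bigr),
\]
```

where $T_n$ are Chebyshev polynomials and $s(t)$ maps $[t_0,t_f]$ onto $[-1,1]$; because the integrand is expressed in the Chebyshev basis, the integral can be evaluated spectrally, and each iteration lends itself to parallel evaluation of the force model at the sample nodes.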
Hardware and software for machine-tool simulation with parallel-structure mechanisms
Directory of Open Access Journals (Sweden)
Keba P.V.
2016-12-01
Full Text Available The usage spectrum of mechanisms with a parallel structure is continually expanding. The mechanisms of machine tools and manipulators are becoming more complicated, and the program-controlled modules must be improved accordingly. Closed-circuit mechanisms are most widespread in robotic complexes, where a manipulator performs complicated spatial movements along a given trajectory. The range of applications is very wide, the most popular being sorting, welding, assembling and others. However, the problem of designing the operating programs is still present, because existing post-processors are created for the equipment available today, while new machine-tool designs appear every day and must also be controlled. The problems associated with using hardware and software for mechanisms with a parallel structure in computer-aided simulation are considered. A program for solving the inverse kinematics problem is designed, and a new method of designing the control programs is found. The kinematic analysis methods, options and calculated data obtained with computer mathematics systems are shown using the «Tools Glide» software as an example.
Multiphysics modeling of magnetorheological dampers
Directory of Open Access Journals (Sweden)
D Case
2016-09-01
Full Text Available The dynamics of a small scale magnetorheological damper were modeled and analyzed using multiphysics commercial finite element software to couple the electromagnetic field distribution with the non-Newtonian fluid flow. The magnetic flux lines and field intensity generated within the damper and cyclic fluid flow in the damper under harmonic motion were simulated with the AC/DC and CFD physics modules of COMSOL Multiphysics, respectively. Coupling of the physics is achieved through a modified Bingham plastic definition, relating the fluid's dynamic viscosity to the intensity of the induced magnetic field. Good agreement is confirmed between simulation results and experimentally observed resistance forces in the damper. This study was conducted to determine the feasibility of utilizing magnetorheological dampers in a medical orthosis for pathological tremor attenuation. The implemented models are thus dimensioned on a relatively small scale. The method used, however, is not specific to the damper's size or geometry and can be extended to larger-scale devices with little or no complication.
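The field-to-flow coupling described above enters through a modified Bingham plastic law in which the yield stress depends on the induced magnetic field; a commonly used regularized form of such a constitutive relation is shown below (the specific expression and coefficients fitted in the paper may differ):

```latex
% Regularized Bingham-plastic apparent viscosity with a field-dependent
% yield stress (generic form; coefficients are model fits, not from the paper).
\[
\mu_{\mathrm{app}}(\dot{\gamma}, B) \;=\; \mu_p \;+\;
\frac{\tau_y(B)}{|\dot{\gamma}|}\Bigl(1 - e^{-m|\dot{\gamma}|}\Bigr),
\]
```

where $\mu_p$ is the plastic viscosity, $\tau_y(B)$ the field-dependent yield stress, $\dot{\gamma}$ the shear rate, and $m$ a regularization parameter that keeps the viscosity bounded as $\dot{\gamma}\to 0$.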
A MULTIDIMENSIONAL AND MULTIPHYSICS APPROACH TO NUCLEAR FUEL BEHAVIOR SIMULATION
Energy Technology Data Exchange (ETDEWEB)
R. L. Williamson; J. D. Hales; S. R. Novascone; M. R. Tonks; D. R. Gaston; C. J. Permann; D. Andrs; R. C. Martineau
2012-04-01
Important aspects of fuel rod behavior, for example pellet-clad mechanical interaction (PCMI), fuel fracture, oxide formation, non-axisymmetric cooling, and response to fuel manufacturing defects, are inherently multidimensional in addition to being complicated multiphysics problems. Many current modeling tools are strictly 2D axisymmetric or even 1.5D. This paper outlines the capabilities of a new fuel modeling tool able to analyze either 2D axisymmetric or fully 3D models. These capabilities include temperature-dependent thermal conductivity of fuel; swelling and densification; fuel creep; pellet fracture; fission gas release; cladding creep; irradiation growth; and gap mechanics (contact and gap heat transfer). The need for multiphysics, multidimensional modeling is then demonstrated through a discussion of results for a set of example problems. The first, a 10-pellet rodlet, demonstrates the viability of the solution method employed. This example highlights the effect of our smeared cracking model and also shows the multidimensional nature of discrete fuel pellet modeling. The second example relies on the multidimensional, multiphysics approach to analyze a missing pellet surface problem. As a final example, we show a lower-length-scale simulation coupled to a continuum-scale simulation.
ParBiBit: Parallel tool for binary biclustering on modern distributed-memory systems.
González-Domínguez, Jorge; Expósito, Roberto R
2018-01-01
Biclustering techniques are gaining attention in the analysis of large-scale datasets as they identify two-dimensional submatrices where both rows and columns are correlated. In this work we present ParBiBit, a parallel tool to accelerate the search for interesting biclusters on binary datasets, which are very popular in different fields such as genetics, marketing or text mining. It is based on the state-of-the-art sequential Java tool BiBit, which has been proved accurate by several studies, especially in scenarios that result in many large biclusters. ParBiBit uses the same methodology as BiBit (grouping the binary information into patterns) and provides the same results. Nevertheless, our tool significantly improves performance thanks to an efficient implementation based on C++11 that includes support for threads and MPI processes in order to exploit the compute capabilities of modern distributed-memory systems, which provide several multicore CPU nodes interconnected through a network. Our performance evaluation with 18 representative input datasets on two different eight-node systems shows that our tool is significantly faster than the original BiBit. Source code in C++ and MPI running on Linux systems as well as a reference manual are available at https://sourceforge.net/projects/parbibit/.
Energy Technology Data Exchange (ETDEWEB)
Peyroux, J
2005-11-15
This project aims to make the resolution of Vlasov codes even more powerful through various parallelization tools (MPI, OpenMP, etc.). A simplified test case served as a base for constructing the parallel codes and obtaining a data-processing skeleton which, thereafter, could be re-used for increasingly complex models (more than four variables of phase space). This will make it possible to treat more realistic situations linked, for example, to the injection of ultra-short and ultra-intense pulses in inertial fusion plasmas, or to the study of the instability of trapped ions, now considered responsible for the generation of turbulence in tokamak plasmas. (author)
Coupled multi-physics simulation frameworks for reactor simulation: A bottom-up approach
International Nuclear Information System (INIS)
Tautges, Timothy J.; Caceres, Alvaro; Jain, Rajeev; Kim, Hong-Jun; Kraftcheck, Jason A.; Smith, Brandon M.
2011-01-01
A 'bottom-up' approach to multi-physics frameworks is described, where first common interfaces to simulation data are developed, then existing physics modules are adapted to communicate through those interfaces. Physics modules read and write data through those common interfaces, which also provide access to common simulation services like parallel I/O, mesh partitioning, etc. Multi-physics codes are assembled as a combination of physics modules, services, interface implementations, and driver code which coordinates calling these various pieces. Examples of various physics modules and services connected to this framework are given. (author)
The Fluxgate Magnetometer Simulation in Comsol Multiphysics
Directory of Open Access Journals (Sweden)
Kolomeytsev Andrey
2018-01-01
Full Text Available This article describes the fluxgate magnetometer simulation in Comsol Multiphysics software package. The simulation results coincide with the experiment described earlier. Decomposition of the output signal by the Fourier coefficients shows a frequency doubling.
The Fluxgate Magnetometer Simulation in Comsol Multiphysics
Kolomeytsev Andrey; Baranov Pavel; Zatonov Ivan
2018-01-01
This article describes the fluxgate magnetometer simulation in Comsol Multiphysics software package. The simulation results coincide with the experiment described earlier. Decomposition of the output signal by the Fourier coefficients shows a frequency doubling.
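The frequency doubling reported in these records can be reproduced qualitatively with a toy signal model: a saturating core driven sinusoidally produces even harmonics once an external field breaks the symmetry. The script below is a minimal illustration of that effect, not the COMSOL Multiphysics model.

```python
# Toy illustration of second-harmonic (frequency-doubling) content in a
# fluxgate-like output signal; not the COMSOL Multiphysics simulation.
import numpy as np

f_drive = 1000.0                     # excitation frequency, Hz
fs = 200_000.0                       # sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)
excitation = np.sin(2 * np.pi * f_drive * t)
external_field = 0.2                 # measured (DC) field, arbitrary units

# Saturating core: an odd nonlinearity; the DC offset breaks the symmetry
# and produces even harmonics proportional to the external field.
core_flux = np.tanh(3.0 * (excitation + external_field))
output = np.gradient(core_flux, t)   # induced voltage ~ d(flux)/dt

spectrum = np.abs(np.fft.rfft(output))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for harmonic in (1, 2, 3):
    idx = np.argmin(np.abs(freqs - harmonic * f_drive))
    print(f"{harmonic}f component: {spectrum[idx]:.1f}")
```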
Review of multi-physics temporal coupling methods for analysis of nuclear reactors
International Nuclear Information System (INIS)
Zerkak, Omar; Kozlowski, Tomasz; Gajev, Ivan
2015-01-01
Highlights: • Review of the numerical methods used for the multi-physics temporal coupling. • Review of high-order improvements to the Operator Splitting coupling method. • Analysis of truncation error due to the temporal coupling. • Recommendations on best-practice approaches for multi-physics temporal coupling. - Abstract: The advanced numerical simulation of a realistic physical system typically involves a multi-physics problem. For example, analysis of an LWR core involves the intricate simulation of neutron production and transport, heat transfer throughout the structures of the system, and the flowing, possibly two-phase, coolant. Such analysis involves the dynamic coupling of multiple simulation codes, each one devoted to solving one of the coupled physics. Multiple temporal coupling methods exist, yet the accuracy of such coupling is generally driven by the least accurate numerical scheme. The goal of this paper is to review in detail the approaches and numerical methods that can be used for the multi-physics temporal coupling, including a comprehensive discussion of the issues associated with the temporal coupling, and to define approaches that can be used to perform multi-physics analysis. The paper is not limited to any particular multi-physics process or situation, but is intended to provide a generic description of multi-physics temporal coupling schemes for any development stage of the individual (single-physics) tools and methods. This includes a wide spectrum of situations, from individual (single-physics) solvers based on pre-existing computational codes embedded as individual components, to new developments in which the temporal coupling can be designed and implemented as part of the code development. The discussed coupling methods are demonstrated in the framework of LWR core analysis.
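For reference, conventional Operator Splitting advances the coupled system du/dt = A(u) + B(u) by applying the single-physics solution operators sequentially within a step; the symmetrized (Strang) variant is one example of the high-order improvements reviewed. Schematically:

```latex
% Lie (first-order) and Strang (second-order) operator splitting for
% du/dt = A(u) + B(u); Phi^A and Phi^B are the single-physics propagators
% and the error terms are local (per-step) errors.
\[
u^{n+1} = \Phi^{B}_{\Delta t}\,\Phi^{A}_{\Delta t}\,u^{n} + \mathcal{O}(\Delta t^{2}),
\qquad
u^{n+1} = \Phi^{A}_{\Delta t/2}\,\Phi^{B}_{\Delta t}\,\Phi^{A}_{\Delta t/2}\,u^{n} + \mathcal{O}(\Delta t^{3}),
\]
```

corresponding to first- and second-order global accuracy, respectively; the splitting (truncation) error analyzed in the paper stems from the non-commutativity of the two single-physics operators.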
Directory of Open Access Journals (Sweden)
Kim De Leeneer
Full Text Available Despite improvements in terms of sequence quality and price per basepair, Sanger sequencing remains restricted to screening of individual disease genes. The development of massively parallel sequencing (MPS technologies heralded an era in which molecular diagnostics for multigenic disorders becomes reality. Here, we outline different PCR amplification based strategies for the screening of a multitude of genes in a patient cohort. We performed a thorough evaluation in terms of set-up, coverage and sequencing variants on the data of 10 GS-FLX experiments (over 200 patients. Crucially, we determined the actual coverage that is required for reliable diagnostic results using MPS, and provide a tool to calculate the number of patients that can be screened in a single run. Finally, we provide an overview of factors contributing to false negative or false positive mutation calls and suggest ways to maximize sensitivity and specificity, both important in a routine setting. By describing practical strategies for screening of multigenic disorders in a multitude of samples and providing answers to questions about minimum required coverage, the number of patients that can be screened in a single run and the factors that may affect sensitivity and specificity we hope to facilitate the implementation of MPS technology in molecular diagnostics.
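The tool mentioned above for computing how many patients fit in a single run reduces to dividing the run's usable read throughput by the reads each patient needs at the required coverage. A back-of-the-envelope sketch with made-up numbers follows; the actual GS-FLX throughput and the coverage threshold determined in the paper are not reproduced here.

```python
# Back-of-the-envelope patients-per-run calculation for amplicon-based MPS
# (illustrative numbers only; use platform- and assay-specific values).
reads_per_run = 1_000_000        # usable reads produced by one sequencing run
amplicons_per_patient = 120      # PCR amplicons covering the targeted genes
required_coverage = 40           # minimum reads per amplicon for reliable calls
uniformity_factor = 2.0          # safety margin for uneven amplicon representation

reads_per_patient = amplicons_per_patient * required_coverage * uniformity_factor
patients_per_run = int(reads_per_run // reads_per_patient)
print(f"Patients per run: {patients_per_run}")   # -> 104 with these numbers
```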
DWFS: A Wrapper Feature Selection Tool Based on a Parallel Genetic Algorithm
Soufan, Othman
2015-02-26
Many scientific problems can be formulated as classification tasks. Data that harbor relevant information are usually described by a large number of features. Frequently, many of these features are irrelevant for the class prediction. The efficient implementation of classification models requires identification of suitable combinations of features. The smaller number of features reduces the problem's dimensionality and may result in higher classification performance. We developed DWFS, a web-based tool that allows for efficient selection of features for a variety of problems. DWFS follows the wrapper paradigm and applies a search strategy based on Genetic Algorithms (GAs). A parallel GA implementation simultaneously examines and evaluates a large number of candidate collections of features. DWFS also integrates various filtering methods that may be applied as a pre-processing step in the feature selection process. Furthermore, weights and parameters in the fitness function of the GA can be adjusted according to the application requirements. Experiments using heterogeneous datasets from different biomedical applications demonstrate that DWFS is fast and leads to a significant reduction of the number of features without sacrificing performance as compared to several widely used existing methods. DWFS can be accessed online at www.cbrc.kaust.edu.sa/dwfs.
DWFS: A Wrapper Feature Selection Tool Based on a Parallel Genetic Algorithm
Soufan, Othman; Kleftogiannis, Dimitrios A.; Kalnis, Panos; Bajic, Vladimir B.
2015-01-01
Many scientific problems can be formulated as classification tasks. Data that harbor relevant information are usually described by a large number of features. Frequently, many of these features are irrelevant for the class prediction. The efficient implementation of classification models requires identification of suitable combinations of features. The smaller number of features reduces the problem's dimensionality and may result in higher classification performance. We developed DWFS, a web-based tool that allows for efficient selection of features for a variety of problems. DWFS follows the wrapper paradigm and applies a search strategy based on Genetic Algorithms (GAs). A parallel GA implementation simultaneously examines and evaluates a large number of candidate collections of features. DWFS also integrates various filtering methods that may be applied as a pre-processing step in the feature selection process. Furthermore, weights and parameters in the fitness function of the GA can be adjusted according to the application requirements. Experiments using heterogeneous datasets from different biomedical applications demonstrate that DWFS is fast and leads to a significant reduction of the number of features without sacrificing performance as compared to several widely used existing methods. DWFS can be accessed online at www.cbrc.kaust.edu.sa/dwfs.
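Both records above describe the wrapper paradigm: candidate feature subsets are encoded as binary chromosomes, scored by training a classifier on the selected features, and evolved with a genetic algorithm. A compact sketch of that loop with scikit-learn follows; it is illustrative only and not the DWFS implementation, its filters, or its fitness weighting.

```python
# Sketch of GA-based wrapper feature selection: the fitness of a binary mask
# is the cross-validated accuracy of a classifier trained on the selected
# features (illustrative; not the DWFS implementation).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                           random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))           # initial population
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                # selection (top half)
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02               # mutation
        child = np.where(flip, 1 - child, child)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```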
TME (Task Mapping Editor): tool for executing distributed parallel computing. TME user's manual
International Nuclear Information System (INIS)
Takemiya, Hiroshi; Yamagishi, Nobuhiro; Imamura, Toshiyuki
2000-03-01
At the Center for Promotion of Computational Science and Engineering, a software environment, PPExe, has been developed to support scientific computing on a parallel computer cluster (distributed parallel scientific computing). TME (Task Mapping Editor) is one of the components of PPExe and provides a visual programming environment for distributed parallel scientific computing. Users can specify data dependence among tasks (programs) visually as a data flow diagram and map these tasks onto computers interactively through the TME GUI. The specified tasks are processed by other components of PPExe such as Meta-scheduler, RIM (Resource Information Monitor), and EMS (Execution Management System) according to the execution order of these tasks determined by TME. In this report, we describe the usage of TME. (author)
Multiphysics/multiscale multifluid computations
International Nuclear Information System (INIS)
Yadigaroglu, George
2014-01-01
Regarding experimentation, interesting examples of multi-scale approaches are found: the small-scale experiments to understand the mechanisms of counter-current flow limitations (CCFL) such as the growth of instabilities on films, droplet entrainment, etc.; meso-scale experiments to quantify the CCFL conditions in typical geometries such as tubes and gaps between parallel plates; and finally full-scale experimentation in a typical reactor geometry - the UPTF tests. Another example is the mixing of the atmosphere produced by plumes and jets in a reactor containment: one first needs basic turbulence information that can be obtained at the microscopic level; medium-scale experiments then follow to understand the behaviour of jets and plumes; finally, reactor-scale tests can be conducted in facilities such as PANDA at PSI in Switzerland to study the phenomena at large scale.
Multiphysical Testing of Soils and Shales
Ferrari, Alessio
2013-01-01
Significant advancements in the experimental analysis of soils and shales have been achieved during the last few decades. Outstanding progress in the field has led to the theoretical development of geomechanical theories and important engineering applications. This book provides the reader with an overview of recent advances in a variety of advanced experimental techniques and results for the analysis of the behaviour of geomaterials under multiphysical testing conditions. Modern trends in experimental geomechanics for soils and shales are discussed, including testing materials in variably saturated conditions, non-isothermal experiments, micro-scale investigations and image analysis techniques. Six theme papers from leading researchers in experimental geomechanics are also included. This book is intended for postgraduate students, researchers and practitioners in fields where multiphysical testing of soils and shales plays a fundamental role, such as unsaturated soil and rock mechanics, petroleum engineering...
DEFF Research Database (Denmark)
Ask, Kristine Skoglund; Bardakci, Turgay; Parmer, Marthe Petrine
2016-01-01
Generic Parallel Artificial Liquid Membrane Extraction (PALME) methods for non-polar basic and non-polar acidic drugs from human plasma were investigated with respect to phospholipid removal. In both cases, extractions in 96-well format were performed from plasma (125μL), through 4μL organic...
Development and verification of the neutron diffusion solver for the GeN-Foam multi-physics platform
International Nuclear Information System (INIS)
Fiorina, Carlo; Kerkar, Nordine; Mikityuk, Konstantin; Rubiolo, Pablo; Pautz, Andreas
2016-01-01
Highlights: • Development and verification of a neutron diffusion solver based on OpenFOAM. • Integration in the GeN-Foam multi-physics platform. • Implementation and verification of acceleration techniques. • Implementation of isotropic discontinuity factors. • Automatic adjustment of discontinuity factors. - Abstract: The Laboratory for Reactor Physics and Systems Behaviour at the PSI and the EPFL has been developing in recent years a new code system for reactor analysis based on OpenFOAM®. The objective is to supplement available legacy codes with a modern tool featuring state-of-the-art characteristics in terms of scalability, programming approach and flexibility. As part of this project, a new solver has been developed for the eigenvalue and transient solution of multi-group diffusion equations. Several features distinguish the developed solver from other available codes, in particular: object oriented programming to ease code modification and maintenance; modern parallel computing capabilities; use of general unstructured meshes; possibility of mesh deformation; cell-wise parametrization of cross-sections; and arbitrary energy group structure. In addition, the solver is integrated into the GeN-Foam multi-physics solver. The general features of the solver and its integration with GeN-Foam have already been presented in previous publications. The present paper describes the diffusion solver in more details and provides an overview of new features recently implemented, including the use of acceleration techniques and discontinuity factors. In addition, a code verification is performed through a comparison with Monte Carlo results for both a thermal and a fast reactor system.
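For context, the multi-group diffusion k-eigenvalue problem solved by such a code has, for each energy group g, the standard form below (generic notation, not tied to the GeN-Foam implementation):

```latex
% Multi-group neutron diffusion k-eigenvalue equation for group g
% (generic notation; discontinuity factors and acceleration not shown).
\[
-\nabla\cdot\bigl(D_g \nabla \phi_g\bigr) + \Sigma_{r,g}\,\phi_g
= \sum_{g'\neq g} \Sigma_{s,g'\to g}\,\phi_{g'}
+ \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'},
\]
```

where $D_g$ is the diffusion coefficient, $\Sigma_{r,g}$ the removal cross-section, $\Sigma_{s,g'\to g}$ the scattering matrix, $\chi_g$ the fission spectrum, and $\nu\Sigma_{f,g'}$ the fission production cross-section; the eigenvalue $k_{\mathrm{eff}}$ is typically found by power iteration, which is where the acceleration techniques mentioned in the highlights apply.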
Directory of Open Access Journals (Sweden)
Frank Anders
2009-08-01
Full Text Available Background: The work presented here investigates parallel imaging applied to T1-weighted high-resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients. This was in an effort to shorten acquisition times to minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is, "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods: Optimisation studies were performed on a young healthy volunteer and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner to test repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results: Intraclass correlation tests show almost perfect agreement between repeated measurements of both the segmented brain parenchyma fraction and the regional measurements of the hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion: In summary, these results indicate that parallel imaging can be used without detrimental effect to brain tissue segmentation and volumetric measurement and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
Energy Technology Data Exchange (ETDEWEB)
Cetiner, Mustafa Sacit; none,; Flanagan, George F. [ORNL; Poore III, Willis P. [ORNL; Muhlheim, Michael David [ORNL
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
Ab initio quantum chemistry in parallel-portable tools and applications
International Nuclear Information System (INIS)
Harrison, R.J.; Shepard, R.; Kendall, R.A.
1991-01-01
In common with many of the computational sciences, ab initio chemistry faces computational constraints to which a partial solution is offered by the prospect of highly parallel computers. Ab initio codes are large and complex (O(10^5) lines of FORTRAN), representing a significant investment of communal effort. The often conflicting requirements of portability and efficiency have been successfully resolved on vector computers by reliance on matrix-oriented kernels. This proves inadequate even upon closely-coupled shared-memory parallel machines. We examine the algorithms employed during a typical sequence of calculations. Then we investigate how efficient portable parallel implementations may be derived, including the complex multi-reference singles and doubles configuration interaction algorithm. A portable toolkit, modeled after the Intel iPSC and the ANL-ACRF PARMACS, is developed, using shared memory and TCP/IP sockets. The toolkit is used as an initial platform for programs portable between LANs, Crays and true distributed-memory MIMD machines. Timings are presented. 53 refs., 4 tabs
Final Report: Simulation Tools for Parallel Microwave Particle in Cell Modeling
International Nuclear Information System (INIS)
Stoltz, Peter H.
2008-01-01
Transport of high-power rf fields and the subsequent deposition of rf power into plasma is an important component of developing tokamak fusion energy. Two limitations on rf heating are: (i) breakdown of the metallic structures used to deliver rf power to the plasma, and (ii) a detailed understanding of how rf power couples into a plasma. Computer simulation is a main tool for helping solve both of these problems, but one of the premier tools, VORPAL, is traditionally too difficult to use for non-experts. During this Phase II project, we developed the VorpalView user interface tool. This tool allows Department of Energy researchers a fully graphical interface for analyzing VORPAL output to more easily model rf power delivery and deposition in plasmas.
Nuclear reactor multi-physics simulations with coupled MCNP5 and STAR-CCM+
International Nuclear Information System (INIS)
Cardoni, Jeffrey Neil; Rizwan-uddin
2011-01-01
The MCNP5 Monte Carlo particle transport code has been coupled to the computational fluid dynamics code, STAR-CCM+, to provide a high fidelity multi-physics simulation tool for pressurized water nuclear reactors. The codes are executed separately and coupled externally through a Perl script. The Perl script automates the exchange of temperature, density, and volumetric heating information between the codes using ASCII text data files. Fortran90 and Java utility programs assist job automation with data post-processing and file management. The MCNP5 utility code, MAKXSF, pre-generates temperature dependent cross section libraries for the thermal feedback calculations. The MCNP5–STAR-CCM+ coupled simulation tool, dubbed MULTINUKE, was applied to a steady state, PWR cell model to demonstrate its usage and capabilities. The demonstration calculation showed reasonable results that agree with PWR values typically reported in literature. Temperature and fission reaction rate distributions were realistic and intuitive. Reactivity coefficients were also deemed reasonable in comparison to historically reported data. The demonstration problem consisted of 9,984 CFD cells and 7,489 neutronic cells. MCNP5 tallied fission energy deposition over 3,328 UO_2 cells. The coupled solution converged within eight hours and in three MULTINUKE iterations. The simulation was carried out on a 64 bit, quad core, Intel 2.8 GHz microprocessor with 1 GB RAM. The simulations on a quad core machine indicated that a massively parallelized implementation of MULTINUKE can be used to assess larger multi-million cell models. (author)
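The coupling described above is external: the codes run as separate executables and a driver script shuttles temperature, density, and heating data between them through text files until the exchanged fields stop changing. A schematic of such a driver loop in Python is sketched below; MULTINUKE itself uses a Perl script, and the commands, exchange file names, and tolerance here are placeholders, not MULTINUKE's.

```python
# Schematic external coupling loop in the spirit of an MCNP5/STAR-CCM+ driver
# (launch commands, exchange file names, and tolerances are placeholders).
import subprocess
import numpy as np

def run(cmd):
    subprocess.run(cmd, shell=True, check=True)

power_old = None
for iteration in range(10):
    # 1. Neutronics: tally the fission heating using the current temperatures.
    run("mcnp5 i=model.inp")                 # placeholder launch command
    power = np.loadtxt("power_density.txt")  # placeholder exchange file

    # 2. Thermal-hydraulics: use the heating to update temperature and density.
    run("starccm+ -batch th_macro.java")     # placeholder launch command
    # (an auxiliary step would regenerate temperature-dependent cross sections,
    #  e.g. with a MAKXSF-like utility, before the next neutronics run)

    # 3. Convergence test on the exchanged field.
    if power_old is not None and \
            np.max(np.abs(power - power_old)) < 1e-3 * np.max(power):
        print(f"converged after {iteration + 1} iterations")
        break
    power_old = power
```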
Modular ORIGEN-S for multi-physics code systems
International Nuclear Information System (INIS)
Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C.; Galloway, Jack
2011-01-01
The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including
Modular ORIGEN-S for multi-physics code systems
Energy Technology Data Exchange (ETDEWEB)
Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C., E-mail: yesilyurtg@ornl.gov, E-mail: clarnokt@ornl.gov, E-mail: gauldi@ornl.gov [Oak Ridge National Laboratory, TN (United States); Galloway, Jack, E-mail: jack@galloways.net [Los Alamos National Laboratory, Los Alamos, NM (United States)
2011-07-01
The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including
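At its core, the depletion/decay capability described in these records integrates a first-order linear system of Bateman equations, dN/dt = A N, where the matrix A collects decay and transmutation rates. A tiny illustrative step on a made-up three-nuclide chain is shown below; this is a generic matrix-exponential solve, not ORIGEN-S, its solver, or its data libraries.

```python
# Toy depletion/decay step: solve dN/dt = A*N over one time step with a
# matrix exponential (illustrative only; not the ORIGEN-S solver or data).
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1e-5, 3e-6              # decay constants, 1/s (made-up values)
A = np.array([[-lam1,   0.0, 0.0],   # nuclide 1 decays ...
              [ lam1, -lam2, 0.0],   # ... into nuclide 2, which decays ...
              [ 0.0,   lam2, 0.0]])  # ... into stable nuclide 3

N0 = np.array([1.0e20, 0.0, 0.0])    # initial atom densities
dt = 30 * 24 * 3600.0                # 30-day step, in seconds
N = expm(A * dt) @ N0
print(N, "total conserved:", np.isclose(N.sum(), N0.sum()))
```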
Investigating Darcy-scale assumptions by means of a multiphysics algorithm
Tomin, Pavel; Lunati, Ivan
2016-09-01
Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative
Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling
International Nuclear Information System (INIS)
Qiang, Ji; Pogorelov, Ilya V.; Ryne, Robert D.
2007-01-01
Large-scale modeling on parallel computers is playing an increasingly important role in the design of future light sources. Such modeling provides a means to accurately and efficiently explore issues such as limits to beam brightness, emittance preservation, the growth of instabilities, etc. Recently, the IMPACT code suite was enhanced to be applicable to future light source design. Simulations with IMPACT-Z were performed using up to one billion simulation particles for the main linac of a future light source to study the microbunching instability. Combined with the time domain code IMPACT-T, it is now possible to perform large-scale start-to-end linac simulations for future light sources, including the injector, main linac, chicanes, and transfer lines. In this paper we provide an overview of the IMPACT code suite, its key capabilities, and recent enhancements pertinent to accelerator modeling for future linac-based light sources.
A Query Cache Tool for Optimizing Repeatable and Parallel OLAP Queries
Santos, Ricardo Jorge; Bernardino, Jorge
On-line analytical processing against data warehouse databases is a common means of obtaining decision-making information for almost every business field. Decision support information often concerns periodic values based on regular attributes, such as sales amounts, percentages, most transacted items, etc. This means that many similar OLAP instructions are repeated periodically, and simultaneously, among the several decision makers. Our Query Cache Tool takes advantage of previously executed queries, storing their results and the current state of the data which was accessed. Future queries only need to execute against the new data, inserted since the queries were last executed, and join these results with the previous ones. This makes query execution much faster, because only the most recent data needs to be processed. Our tool also minimizes the execution time and resource consumption of similar queries simultaneously executed by different users, putting the most recent ones on hold until the first finishes and then returning the results for all of them. The stored query results are held until they are considered outdated, then automatically erased. We present an experimental evaluation of our tool using a data warehouse based on a real-world business dataset and use a set of typical decision support queries to discuss the results, showing a very high gain in query execution time.
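The incremental idea described above can be sketched in a few lines: cache each query's aggregate together with the identifier of the last row it covered, then answer a repeated query by aggregating only the rows inserted since and merging with the cached result. The table layout, query, and names below are hypothetical, not the paper's tool:

rows = []            # stand-in fact table: list of (row_id, region, amount)
cache = {}           # region -> (last_row_id_seen, cached_sum)

def insert(row_id, region, amount):
    rows.append((row_id, region, amount))

def sales_total(region):
    # SUM(amount) WHERE region = ?, answered incrementally from the cache
    last_id, total = cache.get(region, (0, 0.0))
    for row_id, r, amount in rows:
        if row_id > last_id and r == region:   # only data newer than the cache
            total += amount
            last_id = row_id
    cache[region] = (last_id, total)
    return total

insert(1, "north", 100.0); insert(2, "south", 50.0)
print(sales_total("north"))       # scans all rows once, caches the result
insert(3, "north", 25.0)
print(sales_total("north"))       # only the newly inserted row is aggregated and merged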
Ask, Kristine Skoglund; Bardakci, Turgay; Parmer, Marthe Petrine; Halvorsen, Trine Grønhaug; Øiestad, Elisabeth Leere; Pedersen-Bjergaard, Stig; Gjelstad, Astrid
2016-09-10
Generic Parallel Artificial Liquid Membrane Extraction (PALME) methods for non-polar basic and non-polar acidic drugs from human plasma were investigated with respect to phospholipid removal. In both cases, extractions in 96-well format were performed from plasma (125μL), through 4μL organic solvent used as supported liquid membranes (SLMs), and into 50μL aqueous acceptor solutions. The acceptor solutions were subsequently analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) using in-source fragmentation and monitoring the m/z 184→184 transition for investigation of phosphatidylcholines (PC), sphingomyelins (SM), and lysophosphatidylcholines (Lyso-PC). In both generic methods, no phospholipids were detected in the acceptor solutions. Thus, PALME appeared to be highly efficient for phospholipid removal. To further support this, qualitative (post-column infusion) and quantitative matrix effects were investigated with fluoxetine, fluvoxamine, and quetiapine as model analytes. No signs of matrix effects were observed. Finally, PALME was evaluated for the aforementioned drug substances, and data were in accordance with European Medicines Agency (EMA) guidelines. Copyright © 2016 Elsevier B.V. All rights reserved.
Adaptive hybrid mesh refinement for multiphysics applications
International Nuclear Information System (INIS)
Khamayseh, Ahmed; Almeida, Valmor de
2007-01-01
The accuracy and convergence of computational solutions of mesh-based methods is strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes composed of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to equidistribute a weighted geometric and/or solution error function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate modeling. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably time-dependent multi-physics, multi-scale modeling and simulation.
Multiphysics modeling of the steel continuous casting process
Hibbeler, Lance C.
This work develops a macroscale, multiphysics model of the continuous casting of steel. The complete model accounts for the turbulent flow and nonuniform distribution of superheat in the molten steel, the elastic-viscoplastic thermal shrinkage of the solidifying shell, the heat transfer through the shell-mold interface with variable gap size, and the thermal distortion of the mold. These models are coupled together with carefully constructed boundary conditions with the aid of reduced-order models into a single tool to investigate behavior in the mold region, for practical applications such as predicting ideal tapers for a beam-blank mold. The thermal and mechanical behaviors of the mold are explored as part of the overall modeling effort, for funnel molds and for beam-blank molds. These models include high geometric detail and reveal temperature variations on the mold-shell interface that may be responsible for cracks in the shell. Specifically, the funnel mold has a column of mold bolts in the middle of the inside-curve region of the funnel that disturbs the uniformity of the hot face temperatures, which combined with the bending effect of the mold on the shell, can lead to longitudinal facial cracks. The shoulder region of the beam-blank mold shows a local hot spot that can be reduced with additional cooling in this region. The distorted shape of the funnel mold narrow face is validated with recent inclinometer measurements from an operating caster. The calculated hot face temperatures and distorted shapes of the mold are transferred into the multiphysics model of the solidifying shell. The boundary conditions for the first iteration of the multiphysics model come from reduced-order models of the process; one such model is derived in this work for mold heat transfer. The reduced-order model relies on the physics of the solution to the one-dimensional heat-conduction equation to maintain the relationships between inputs and outputs of the model. The geometric
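The reduced-order mold heat-transfer model mentioned above builds on the one-dimensional heat-conduction equation. A minimal sketch of that building block (explicit finite differences for dT/dt = alpha * d2T/dx2 with fixed-temperature faces; the material values and geometry are placeholders, not the thesis' calibrated model) is:

import numpy as np

alpha = 1.0e-5                      # thermal diffusivity [m^2/s] (placeholder)
L, nx = 0.02, 41                    # slab thickness [m], number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha            # time step below the explicit stability limit

T = np.full(nx, 300.0)              # initial temperature [K]
T_hot, T_cold = 1500.0, 300.0       # shell-side and water-side boundary temperatures [K]

for _ in range(2000):
    T[0], T[-1] = T_hot, T_cold
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print("near-hot-face gradient ~", (T[1] - T[0]) / dx, "K/m")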
Parallel Factor Analysis as an exploratory tool for wavelet transformed event-related EEG
DEFF Research Database (Denmark)
Mørup, Morten; Hansen, Lars Kai; Hermann, Cristoph S.
2006-01-01
by the inter-trial phase coherence (ITPC) encompassing ANOVA analysis of differences between conditions and 5-way analysis of channel x frequency x time x subject x condition. A flow chart is presented on how to perform data exploration using the PARAFAC decomposition on multi-way arrays. This includes (A......) channel x frequency x time 3-way arrays of F test values from a repeated measures analysis of variance (ANOVA) between two stimulus conditions; (B) subject-specific 3-way analyses; and (C) an overall 5-way analysis of channel x frequency x time x subject x condition. The PARAFAC decompositions were able...... of the 3-way array of ANOVA F test values clearly showed the difference of regions of interest across modalities, while the 5-way analysis enabled visualization of both quantitative and qualitative differences. Consequently, PARAFAC is a promising data exploratory tool in the analysis of the wavelets...
Design Process of IDT Aided by Multiphysics FE Analyses
Directory of Open Access Journals (Sweden)
A Martowicz
2016-09-01
The presented work is devoted to the design process performed for an interdigital transducer, which is a prospective device for structural health monitoring applications. In order to obtain the desired characteristics of the transducer, fully coupled numerical analyses were performed in ANSYS Multiphysics software. The finite element models used considered both structural dynamics and the properties of the piezoelectric material employed. The design improvement process was preceded by a sensitivity analysis. In order to search for the best electrode pattern, selected geometrical features of the transducer were assumed to vary within allowed ranges. The design parameters taken into account related to the efficiency of the proposed transducer design for the emission of acoustic waves into the monitored structure. The search objectives considered criteria related to the shape of the beam pattern and the amplitudes of generated Lamb waves. As a result of the optimization procedure, a simultaneous increase of the anti-symmetric mode amplitude and reduction of the undesirable symmetric mode amplitude of the generated Lamb waves in the direction perpendicular to the transducer fingers were expected. Another aim of the optimization was to minimize the main lobe width and the undesirable contribution of both symmetric and anti-symmetric waves in the direction parallel to the transducer fingers. The response surface method and genetic algorithms were used for a fast and effective search through the input design domain.
A Massively Parallel Solver for the Mechanical Harmonic Analysis of Accelerator Cavities
International Nuclear Information System (INIS)
2015-01-01
ACE3P is a 3D massively parallel simulation suite developed at SLAC National Accelerator Laboratory that can perform coupled electromagnetic, thermal, and mechanical studies. Effectively utilizing supercomputer resources, ACE3P has become a key simulation tool for particle accelerator R and D. A new frequency domain solver to perform mechanical harmonic response analysis of accelerator components has been developed within the existing parallel framework. This solver is designed to determine the frequency response of the mechanical system to external harmonic excitations for time-efficient, accurate analysis of large-scale problems. Coupled with the ACE3P electromagnetic modules, this capability complements a set of multi-physics tools for a comprehensive study of microphonics in superconducting accelerating cavities in order to understand the RF response and feedback requirements for the operational reliability of a particle accelerator. (auth)
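For orientation, a mechanical harmonic response analysis of this kind solves, for each excitation frequency, a complex-valued linear system assembled from the structure's mass, damping, and stiffness matrices. In standard structural-dynamics notation (generic, not a statement of ACE3P internals):

\left( K + \mathrm{i}\,\omega\, C - \omega^{2} M \right) \hat{u}(\omega) = \hat{f}(\omega),
\qquad
u(t) = \operatorname{Re}\!\left[ \hat{u}(\omega)\, e^{\mathrm{i}\omega t} \right],

where M, C, and K are the mass, damping, and stiffness matrices, \hat{f}(\omega) is the harmonic excitation, and \hat{u}(\omega) is the complex displacement response.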
A parallel and sensitive software tool for methylation analysis on multicore platforms.
Tárraga, Joaquín; Pérez, Mariano; Orduña, Juan M; Duato, José; Medina, Ignacio; Dopazo, Joaquín
2015-10-01
DNA methylation analysis suffers from very long processing times, as the advent of Next-Generation Sequencers has shifted the bottleneck of genomic studies from the sequencers that obtain the DNA samples to the software that performs the analysis of these samples. The existing software for methylation analysis does not seem to scale efficiently with either the size of the dataset or the length of the reads to be analyzed. As sequencers are expected to provide longer and longer reads in the near future, efficient and scalable methylation software should be developed. We present a new software tool, called HPG-Methyl, which efficiently maps bisulphite sequencing reads onto DNA and analyses DNA methylation. The strategy used by this software consists of leveraging the speed of the Burrows-Wheeler Transform to map a large number of DNA fragments (reads) rapidly, as well as the accuracy of the Smith-Waterman algorithm, which is employed exclusively to deal with the most ambiguous and shortest reads. Experimental results on platforms with Intel multicore processors show that HPG-Methyl significantly outperforms state-of-the-art software such as Bismark, BS-Seeker or BSMAP in both execution time and sensitivity, particularly for long bisulphite reads. The software is available in the form of C libraries and functions, together with instructions to compile and execute it, by sftp to anonymous@clariano.uv.es (password 'anonymous'). Contact: juan.orduna@uv.es or jdopazo@cipf.es. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
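As background for the hybrid mapping strategy above, the Smith-Waterman step is a classic local-alignment dynamic program. A minimal scorer (illustrative scoring values; not the HPG-Methyl implementation) looks like:

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # Fill the DP matrix H, clamping at zero so alignments can restart locally.
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))   # best local alignment score

The quadratic cost of this dynamic program is exactly why the tool reserves it for the shortest and most ambiguous reads, while the Burrows-Wheeler Transform handles the bulk of the mapping.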
Multiphysics modelling of manufacturing processes: A review
DEFF Research Database (Denmark)
Jabbari, Masoud; Baran, Ismet; Mohanty, Sankhya
2018-01-01
Numerical modelling is increasingly supporting the analysis and optimization of manufacturing processes in the production industry. Even if being mostly applied to multistep processes, single process steps may be so complex by nature that the needed models to describe them must include multiphysics...... the diversity in the field of modelling of manufacturing processes as regards process, materials, generic disciplines as well as length scales: (1) modelling of tape casting for thin ceramic layers, (2) modelling the flow of polymers in extrusion, (3) modelling the deformation process of flexible stamps...... for nanoimprint lithography, (4) modelling manufacturing of composite parts and (5) modelling the selective laser melting process. For all five examples, the emphasis is on modelling results as well as describing the models in brief mathematical details. Alongside with relevant references to the original work...
Multiphysics modelling of the spray forming process
International Nuclear Information System (INIS)
Mi, J.; Grant, P.S.; Fritsching, U.; Belkessam, O.; Garmendia, I.; Landaberea, A.
2008-01-01
An integrated, multiphysics numerical model has been developed through the joint efforts of the University of Oxford (UK), University of Bremen (Germany) and Inasmet (Spain) to simulate the spray forming process. The integrated model consisted of four sub-models: (1) an atomization model simulating the fragmentation of a continuous liquid metal stream into droplet spray during gas atomization; (2) a droplet spray model simulating the droplet spray mass and enthalpy evolution in the gas flow field prior to deposition; (3) a droplet deposition model simulating droplet deposition, splashing and re-deposition behavior and the resulting preform shape and heat flow; and (4) a porosity model simulating the porosity distribution inside a spray formed ring preform. The model has been validated against experiments of the spray forming of large diameter IN718 Ni superalloy rings. The modelled preform shape, surface temperature and final porosity distribution showed good agreement with experimental measurements
Energy Technology Data Exchange (ETDEWEB)
Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C. [CEA/DEN/DM2S, CEA/Saclay, 91191 Gif sur Yvette Cedex (France)
2012-07-01
The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations that include uncertainty analysis. The present paper aims to present and apply this methodology for the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behaviour and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response from the modeling (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed computing high-order sensitivity indices and thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
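The global variance analysis described above boils down to estimating Sobol indices, i.e. the fraction of output variance attributable to each input. A minimal Monte Carlo (Saltelli-style pick-freeze) sketch on a toy three-parameter response, standing in for the neural-network surrogates used in the study, is:

import numpy as np

rng = np.random.default_rng(0)

def model(x):                       # stand-in for the neural-network surrogate
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n, d = 20000, 3
A = rng.uniform(-1.0, 1.0, (n, d))
B = rng.uniform(-1.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # resample only parameter i
    S_i = np.mean(yB * (model(ABi) - yA)) / var
    print(f"first-order Sobol index S_{i+1} ~ {S_i:.3f}")

In the study itself the indices are obtained through a polynomial chaos expansion rather than brute-force sampling, which also gives access to higher-order (interaction) indices.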
Alameda, J. C.
2011-12-01
Development and optimization of computational science models, particularly on high performance computers, and, with the advent of ubiquitous multicore processor systems, on practically every system, has been accomplished with basic software tools: typically, command-line compilers, debuggers, and performance tools that have not changed substantially since the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as OpenMP and MPI) to take full advantage of high performance computers with an increasing core count per shared memory node, has made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC), seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project takes an application-centric view to improving PTP. We are using a set of scientific applications, each with a variety of challenges, and using PTP to drive further improvements to both the scientific applications and our understanding of shortcomings in Eclipse PTP from an application developer perspective, which in turn drives the list of improvements we seek to make. We are also partnering with performance tool providers to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into
Multiphysics modelling and experimental validation of high concentration photovoltaic modules
International Nuclear Information System (INIS)
Theristis, Marios; Fernández, Eduardo F.; Sumner, Mike; O'Donovan, Tadhg S.
2017-01-01
Highlights: • A multiphysics modelling approach for concentrating photovoltaics was developed. • An experimental campaign was conducted to validate the models. • The experimental results were in good agreement with the models. • The multiphysics modelling allows the concentrator’s optimisation. - Abstract: High concentration photovoltaics, equipped with high efficiency multijunction solar cells, have great potential in achieving cost-effective and clean electricity generation at utility scale. Such systems are more complex compared to conventional photovoltaics because of the multiphysics effect that is present. Modelling the power output of such systems is therefore crucial for their further market penetration. Following this line, a multiphysics modelling procedure for high concentration photovoltaics is presented in this work. It combines an open source spectral model, a single diode electrical model and a three-dimensional finite element thermal model. In order to validate the models and the multiphysics modelling procedure against actual data, an outdoor experimental campaign was conducted in Albuquerque, New Mexico using a high concentration photovoltaic monomodule that is thoroughly described in terms of its geometry and materials. The experimental results were in good agreement (within 2.7%) with the predicted maximum power point. This multiphysics approach is relatively more complex when compared to empirical models, but besides the overall performance prediction it can also provide better understanding of the physics involved in the conversion of solar irradiance into electricity. It can therefore be used for the design and optimisation of high concentration photovoltaic modules.
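Of the three sub-models combined above, the single-diode electrical model is the most compact to illustrate. A minimal sketch that solves the implicit single-diode equation for current at a given voltage by fixed-point iteration follows; the parameter values are illustrative placeholders, not the monomodule's calibrated parameters:

import math

def single_diode_current(V, Iph=5.0, I0=1e-10, Rs=0.01, Rsh=500.0, n=1.3, T=298.15):
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = n * k * T / q                       # modified thermal voltage
    I = Iph                                  # initial guess
    for _ in range(200):                     # simple fixed-point iteration
        I = Iph - I0 * (math.exp((V + I * Rs) / Vt) - 1.0) - (V + I * Rs) / Rsh
    return I

for V in (0.0, 0.4, 0.6):
    print(f"V = {V:.2f} V  I = {single_diode_current(V):.3f} A")

In the full procedure described above, the photocurrent and diode parameters would themselves be driven by the spectral model and by cell temperatures from the finite element thermal model.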
Reactive transport modeling of the ABM experiment with Comsol Multiphysics
International Nuclear Information System (INIS)
Pekala, Marek; Idiart, Andres; Arcos, David
2012-01-01
solution) in a stack of 30 bentonite blocks of 11 distinct initial compositions. In the model, ion diffusion is allowed between the individual bentonite blocks and between the bentonite blocks and a sand layer filling the bentonite-rock gap. The effective diffusion coefficient values for individual bentonite blocks were estimated based on the dry density of the bentonite, and the temperature-dependent evolution of the diffusion coefficients is approximated in the course of the simulation. In order to solve the problem, a set of non-linear algebraic equations (mass action law for the cation-exchange reactions, and charge and mass balance equations) have been coupled with Fickian diffusion equations. As mentioned above, the Finite Element code COMSOL Multiphysics has been used to carry out the simulations. Preliminary results for the studied problem indicate that the effect of diffusion for the studied cations and chloride is significant and has the potential to explain quantitatively the observed patterns of homogenisation in the chemical composition in the bentonite package. However, the work is currently in progress and further analyses, including a sensitivity study of variables such as diffusion coefficients and boundary conditions, are on-going. A model simulating coupled cation-exchange and diffusion of major ions in the Package 1 of the ABM field experiment has been developed. This work demonstrates the feasibility of implementing a reactive transport model directly into Comsol Multiphysics using conservation and mass action equations. Comsol offers an intuitive and at the same time powerful modelling environment for simulating coupled multiphase, multi-species reactive transport phenomena and mechanical effects in complex geometries. For this reason, Amphos 21 has been involved in work aiming to couple Comsol with other codes such as the geochemical code PHREEQC. Such code integration has the potential to provide tools uniquely suited to solving complicated reactive
Exploring a Multiphysics Resolution Approach for Additive Manufacturing
Estupinan Donoso, Alvaro Antonio; Peters, Bernhard
2018-06-01
Metal additive manufacturing (AM) is a fast-evolving technology aiming to efficiently produce complex parts while saving resources. Worldwide, active research is being performed to solve the existing challenges of this growing technique. Constant computational advances have enabled multiscale and multiphysics numerical tools that complement the traditional physical experimentation. In this contribution, an advanced discrete-continuous concept is proposed to address the physical phenomena involved during laser powder bed fusion. The concept treats powder as discrete by the extended discrete element method, which predicts the thermodynamic state and phase change for each particle. The fluid surrounding is solved with multiphase computational fluid dynamics techniques to determine momentum, heat, gas and liquid transfer. Thus, results track the positions and thermochemical history of individual particles in conjunction with the prevailing fluid phases' temperature and composition. It is believed that this methodology can be employed to complement experimental research by analysis of the comprehensive results, which can be extracted from it to enable AM processes optimization for parts qualification.
Zinno, Ivana; De Luca, Claudio; Elefante, Stefano; Imperatore, Pasquale; Manunta, Michele; Casu, Francesco
2014-05-01
Differential Synthetic Aperture Radar Interferometry (DInSAR) is an effective technique to estimate and monitor ground displacements with centimetre accuracy [1]. In the last decade, advanced DInSAR algorithms, such as the Small Baseline Subset (SBAS) [2] one that is aimed at following the temporal evolution of the ground deformation, showed to be significantly useful remote sensing tools for the geoscience communities as well as for those related to hazard monitoring and risk mitigation. DInSAR scenario is currently characterized by the large and steady increasing availability of huge SAR data archives that have a broad range of diversified features according to the characteristics of the employed sensor. Indeed, besides the old generation sensors, that include ERS, ENVISAT and RADARSAT systems, the new X-band generation constellations, such as COSMO-SkyMed and TerraSAR-X, have permitted an overall study of ground deformations with an unprecedented detail thanks to their improved spatial resolution and reduced revisit time. Furthermore, the incoming ESA Sentinel-1 SAR satellite is characterized by a global coverage acquisition strategy and 12-day revisit time and, therefore, will further contribute to improve deformation analyses and monitoring capabilities. However, in this context, the capability to process such huge SAR data archives is strongly limited by the existing DInSAR algorithms, which are not specifically designed to exploit modern high performance computational infrastructures (e.g. cluster, grid and cloud computing platforms). The goal of this paper is to present a Parallel version of the SBAS algorithm (P-SBAS) which is based on a dual-level parallelization approach and embraces combined parallel strategies [3], [4]. A detailed description of the P-SBAS algorithm will be provided together with a scalability analysis focused on studying its performances. In particular, a P-SBAS scalability analysis with respect to the number of exploited CPUs has
Multiphysics Integrated Coupling Environment (MICE) User Manual
Energy Technology Data Exchange (ETDEWEB)
Varija Agarwal; Donna Post Guillen
2013-08-01
The complex, multi-part nature of waste glass melters used in nuclear waste vitrification poses significant modeling challenges. The focus of this project has been to couple a 1D MATLAB model of the cold cap region within a melter with a 3D STAR-CCM+ model of the melter itself. The Multiphysics Integrated Coupling Environment (MICE) has been developed to create a cohesive simulation of a waste glass melter that accurately represents the cold cap. The one-dimensional mathematical model of the cold cap uses material properties, axial heat, and mass fluxes to obtain a temperature profile for the cold cap, the region where feed-to-glass conversion occurs. The results from Matlab are used to update simulation data in the three-dimensional STAR-CCM+ model so that the cold cap is appropriately incorporated into the 3D simulation. The two processes are linked through ModelCenter integration software using time steps that are specified for each process. Data is to be exchanged circularly between the two models, as the inputs and outputs of each model depend on the other.
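The circular data exchange described above follows a simple ping-pong pattern: at each coupling time step one model advances with the latest data from the other, then hands its own outputs back. A schematic sketch of that loop, with both "models" reduced to stand-in functions rather than MATLAB or STAR-CCM+ calls, is:

def cold_cap_model(melter_heat_flux):
    # stand-in 1D model: returns a cold-cap temperature profile for a given heat flux
    return [300.0 + 0.5 * melter_heat_flux * x for x in (0.0, 0.5, 1.0)]

def melter_model(cold_cap_profile):
    # stand-in 3D model: returns an updated heat flux for a given temperature profile
    return 1000.0 + 0.1 * sum(cold_cap_profile) / len(cold_cap_profile)

flux = 1000.0                       # initial guess for the exchanged quantity
for step in range(5):               # one exchange per coupling time step
    profile = cold_cap_model(flux)  # 1D model advances with the latest flux
    flux = melter_model(profile)    # 3D model advances with the latest profile
    print(f"step {step}: heat flux = {flux:.1f}")

In MICE itself the orchestration is handled by ModelCenter, with each code running on its own specified time step.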
International Nuclear Information System (INIS)
Michael J. Bockelie
2002-01-01
This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two-phase turbulent reacting flow. The target applications for the new tool are full scale, fossil-fuel fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The type of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flowfield. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid cannot contain too many grid points: the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture fine flow field detail within a very large domain with a "reasonable" number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL). Specifically, we combined physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International
ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code
Energy Technology Data Exchange (ETDEWEB)
Noble, Charles R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Anderson, Andrew T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barton, Nathan R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bramwell, Jamie A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Capps, Arlie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chang, Michael H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chou, Jin J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dawson, David M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Diana, Emily R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, Timothy A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Faux, Douglas R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fisher, Aaron C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Heinz, Ines [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kanarska, Yuliya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Khairallah, Saad A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Liu, Benjamin T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Margraf, Jon D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nichols, Albert L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Puso, Michael A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reus, James F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Robinson, Peter B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shestakov, Alek I. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Taller, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tsuji, Paul H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Christopher A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Jeremy L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-05-23
ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary Lagrangian-Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.
Design and multi-physics optimization of rotary MRF brakes
Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan
2018-03-01
Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase. As a result, the execution speed becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information to the next party are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs were determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than conventional PSO and has also produced small, lightweight, high-impedance rotary MRF brake designs.
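A compact way to see the subpopulation idea is to run an otherwise standard PSO over several independent swarms and let the "hunting party" keep the best find. The sketch below does exactly that on a toy objective; the swarm sizes, coefficients, and sequential (rather than parallel) execution are placeholders, not the authors' algorithm or tuning:

import random

def objective(x):                          # toy stand-in for the multi-physics evaluation
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pso(n_subpops=3, n_particles=10, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    dim, lo, hi = 2, -5.0, 5.0
    best_x, best_f = None, float("inf")
    for _ in range(n_subpops):             # each subpopulation searches on its own
        xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [list(x) for x in xs]
        pbest_f = [objective(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = list(pbest[g]), pbest_f[g]
        for _ in range(n_iter):
            for i, x in enumerate(xs):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * random.random() * (pbest[i][d] - x[d])
                                + c2 * random.random() * (gbest[d] - x[d]))
                    x[d] += vs[i][d]
                f = objective(x)
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = list(x), f
                    if f < gbest_f:
                        gbest, gbest_f = list(x), f
        if gbest_f < best_f:               # the subpopulation reports its best result
            best_x, best_f = gbest, gbest_f
    return best_x, best_f

print(pso())                               # should approach the minimum at (1, -2)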
Multiphysics Modelling and Simulation for Systems Design Conference
Abbes, Mohamed; Choley, Jean-Yves; Boukharouba, Taoufik; Elnady, Tamer; Kanaev, Andrei; Amar, Mounir; Chaari, Fakher
2015-01-01
This book reports on the state of the art in the field of multiphysics systems. It consists of carefully reviewed contributions to the MMSSD'2014 conference, which was held from December 17 to 19, 2014 in Hammamet, Tunisia. The different chapters, covering new theories, methods and a number of case studies, provide readers with an up-to-date picture of multiphysics modeling and simulation. They highlight the role played by high-performance computing and newly available software in promoting the study of multiphysics coupling effects, and show how these technologies can be practically implemented to bring about significant improvements in the field of design, control and monitoring of machines. In addition to providing a detailed description of the methods and their applications, the book also identifies new research issues, challenges and opportunities, thus providing researchers and practitioners with both technical information to support their daily work and a new source of inspiration for their future...
Multiphysics modeling using COMSOL a first principles approach
Pryor, Roger W
2011-01-01
Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.
The Integrated Plasma Simulator: A Flexible Python Framework for Coupled Multiphysics Simulation
Energy Technology Data Exchange (ETDEWEB)
Foley, Samantha S [ORNL; Elwasif, Wael R [ORNL; Bernholdt, David E [ORNL
2011-11-01
High-fidelity coupled multiphysics simulations are an increasingly important aspect of computational science. In many domains, however, there has been very limited experience with simulations of this sort; therefore, research in coupled multiphysics often requires computational frameworks with significant flexibility to respond to the changing directions of the physics and mathematics. This paper presents the Integrated Plasma Simulator (IPS), a framework designed for loosely coupled simulations of fusion plasmas. The IPS provides users with a simple component architecture into which a wide range of existing plasma physics codes can be inserted as components. Simulations can take advantage of multiple levels of parallelism supported in the IPS, and can be controlled by a high-level "driver" component, or by other coordination mechanisms, such as an asynchronous event service. We describe the requirements and design of the framework, and how they were implemented in the Python language. We also illustrate the flexibility of the framework by providing examples of different types of simulations that utilize various features of the IPS.
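The component/driver pattern described above can be illustrated schematically: each physics code is wrapped as a component exposing a uniform interface, and a driver orders the calls at every step. The class and method names below are hypothetical illustrations, not the actual IPS API:

class Component:
    def init(self, state): ...
    def step(self, t, state): ...

class EquilibriumComponent(Component):
    def init(self, state): state["pressure"] = 1.0
    def step(self, t, state): state["pressure"] *= 1.01     # stand-in physics

class HeatingComponent(Component):
    def init(self, state): state["power"] = 0.0
    def step(self, t, state): state["power"] = 2.0 * t      # stand-in physics

class Driver:
    # High-level driver that orders the component calls at each simulation step.
    def __init__(self, components):
        self.components = components
    def run(self, n_steps):
        state = {}
        for c in self.components:
            c.init(state)
        for t in range(n_steps):
            for c in self.components:
                c.step(t, state)
        return state

print(Driver([EquilibriumComponent(), HeatingComponent()]).run(5))

In the real framework the shared state is mediated by the IPS (plasma state files, task launching, and an event service) rather than an in-memory dictionary, and components may themselves launch parallel jobs.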
Multi-Physics Simulation of TREAT Kinetics using MAMMOTH
Energy Technology Data Exchange (ETDEWEB)
DeHart, Mark; Gleicher, Frederick; Ortensi, Javier; Alberti, Anthony; Palmer, Todd
2015-11-01
With the advent of next generation reactor systems and new fuel designs, the U.S. Department of Energy (DOE) has identified the need for the resumption of transient testing of nuclear fuels. DOE has decided that the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory (INL) is best suited for future testing. TREAT is a thermal neutron spectrum nuclear test facility that is designed to test nuclear fuels in transient scenarios. These fuel transient tests range from simple temperature transients to full fuel melt accidents. The current TREAT core is driven by highly enriched uranium (HEU) dispersed in a graphite matrix (1:10000 U-235/C atom ratio). At the center of the core, fuel is removed to allow the insertion of an experimental test vehicle. TREAT's design provides experimental flexibility and inherent safety during neutron pulsing. This safety stems from the graphite in the driver fuel having a strong negative temperature coefficient of reactivity, resulting from a thermal Maxwellian shift with increased leakage, as well as from the graphite acting as a temperature sink. Air cooling is available, but is generally used post-transient for heat removal. DOE and INL have expressed a desire to develop a simulation capability that will accurately model the experiments before they are irradiated at the facility, with an emphasis on effective and safe operation while minimizing experimental time and cost. At INL, the Multi-physics Object Oriented Simulation Environment (MOOSE) has been selected as the model development framework for this work. This paper describes the results of preliminary simulations of a TREAT fuel element under transient conditions using the MOOSE-based MAMMOTH reactor physics tool.
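The feedback mechanism highlighted above can be illustrated with a point-kinetics toy model: a single delayed-neutron group plus a lumped fuel temperature whose rise adds negative reactivity and limits the excursion. All parameter values below are made up for illustration; this is not a MAMMOTH/MOOSE model:

beta, Lam, lam = 0.0065, 1.0e-4, 0.08   # delayed fraction, generation time [s], precursor decay [1/s]
alpha_T = -2.0e-5                       # temperature feedback coefficient [1/K] (negative)
heat_cap = 5.0e3                        # lumped heat capacity [J/K] (placeholder)

P = 1.0e3                               # power [W]
C = P * beta / (Lam * lam)              # precursor population at equilibrium
T, T0 = 300.0, 300.0                    # fuel temperature and reference temperature [K]
rho_ext = 0.004                         # step reactivity insertion (below prompt critical)
dt = 1.0e-4

for _ in range(200000):                 # 20 s transient, explicit Euler
    rho = rho_ext + alpha_T * (T - T0)  # feedback erodes the inserted reactivity
    dP = ((rho - beta) / Lam * P + lam * C) * dt
    dC = (beta / Lam * P - lam * C) * dt
    dT = P / heat_cap * dt
    P, C, T = P + dP, C + dC, T + dT

print(f"P = {P:.3e} W, T = {T:.1f} K after 20 s")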
COMSOL Multiphysics Model for HLW Canister Filling
Energy Technology Data Exchange (ETDEWEB)
Kesterson, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-04-11
The U.S. Department of Energy (DOE) is building a Tank Waste Treatment and Immobilization Plant (WTP) at the Hanford Site in Washington to remediate 55 million gallons of radioactive waste that is being temporarily stored in 177 underground tanks. Efforts are being made to increase the loading of Hanford tank wastes in glass while meeting melter lifetime expectancies and process, regulatory, and product quality requirements. Wastes containing high concentrations of Al2O3 and Na2O can contribute to nepheline (generally NaAlSiO4) crystallization, which can sharply reduce the chemical durability of high level waste (HLW) glass. Nepheline crystallization can occur during slow cooling of the glass within the stainless steel canister. The purpose of this work was to develop a model that can be used to predict temperatures of the glass in a WTP HLW canister during filling and cooling. The intent of the model is to support scoping work in the laboratory. It is not intended to provide precise predictions of temperature profiles, but rather to provide a simplified representation of glass cooling profiles within a full scale, WTP HLW canister under various glass pouring rates. These data will be used to support laboratory studies for an improved understanding of the mechanisms of nepheline crystallization. The model was created using COMSOL Multiphysics, a commercially available software package. The model results were compared to available experimental data, TRR-PLT-080, and were found to yield sufficient results for the scoping nature of the study. The simulated temperatures were within 60 °C for the centerline, 0.0762 m (3 inch) from centerline, and 0.2286 m (9 inch) from centerline thermocouples once the thermocouples were covered with glass. The temperature difference between the experimental and simulated values reduced to 40 °C, 4 hours after the thermocouple was covered, and down to 20 °C, 6 hours after the thermocouple was covered.
A Generic Mesh Data Structure with Parallel Applications
Cochran, William Kenneth, Jr.
2009-01-01
High performance, massively-parallel multi-physics simulations are built on efficient mesh data structures. Most data structures are designed from the bottom up, focusing on the implementation of linear algebra routines. In this thesis, we explore a top-down approach to design, evaluating the various needs of many aspects of simulation, not just…
Efficient topology optimisation of multiscale and multiphysics problems
DEFF Research Database (Denmark)
Alexandersen, Joe
The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...
A survey of open source multiphysics frameworks in engineering
Babur, O.; Smilauer, V.; Verhoeff, T.; Brand, van den M.G.J.
2015-01-01
This paper presents a systematic survey of open source multiphysics frameworks in the engineering domains. These domains share many commonalities despite the diverse application areas. A thorough search for the available frameworks with both academic and industrial origins has revealed numerous
A theory manual for multi-physics code coupling in LIME.
Energy Technology Data Exchange (ETDEWEB)
Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren
2011-03-01
The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are currently available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with multiphysics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time dependent coupled systems. Example couplings are also demonstrated.
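Two of the coupling strategies distinguished in such an assessment are fixed-point (Picard) iteration between the codes and a full Newton solve on the coupled residual. In generic notation (not LIME's own):

x_1^{(k+1)} = F_1\!\left(x_2^{(k)}\right), \qquad
x_2^{(k+1)} = F_2\!\left(x_1^{(k+1)}\right)

versus

R(x) = \begin{pmatrix} x_1 - F_1(x_2) \\ x_2 - F_2(x_1) \end{pmatrix}, \qquad
J\!\left(x^{(k)}\right)\,\delta x = -R\!\left(x^{(k)}\right), \qquad
x^{(k+1)} = x^{(k)} + \delta x ,

where F_1 and F_2 are the two code solves and J is the Jacobian of the coupled residual R. Picard iteration reuses the codes unchanged but may converge slowly, or not at all, for strongly coupled problems; the Newton form converges faster but needs Jacobian information (or a Jacobian-free approximation) across the code boundary.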
International Nuclear Information System (INIS)
Yoshimura, Shinobu; Kawai, Hiroshi; Sugimoto, Shin'ichiro; Hori, Muneo; Nakajima, Norihiro; Kobayashi, Kei
2010-01-01
Recently, the importance of nuclear energy has been recognized again due to serious concerns about global warming and energy security. In parallel, it is one of the critical issues to verify the safety capability of ageing nuclear power plants (NPPs) subjected to strong earthquakes. Since 2007, we have been developing a multi-scale and multi-physics based numerical simulator for quantitatively predicting the actual quake-proof capability of ageing NPPs, under operation or just after plant trip, subjected to strong earthquakes. In this paper, we describe an overview of the simulator with some preliminary results. (author)
Optimization of coupled multiphysics methodology for safety analysis of pebble bed modular reactor
Mkhabela, Peter Tshepo
The research conducted within the framework of this PhD thesis is devoted to the high-fidelity multi-physics (based on neutronics/thermal-hydraulics coupling) analysis of the Pebble Bed Modular Reactor (PBMR), which is a High Temperature Reactor (HTR). The Next Generation Nuclear Plant (NGNP) will be an HTR design. The core design and safety analysis methods are considerably less developed and mature for HTR analysis than those currently used for Light Water Reactors (LWRs). Compared to LWRs, HTR transient analysis is more demanding since it requires proper treatment of both slower and much longer transients (with time scales of hours and days) and fast and short transients (with time scales of minutes and seconds). There is limited operational and experimental data available for HTRs for validation of coupled multi-physics methodologies. This PhD work developed and verified reliable high-fidelity coupled multi-physics models, subsequently implemented in robust, efficient, and accurate computational tools, to analyse the neutronics and thermal-hydraulic behaviour for design optimization and safety evaluation of the PBMR concept. The study provided a contribution to greater accuracy of neutronics calculations by including the feedback from the thermal-hydraulics-driven temperature calculation and various multi-physics effects that can influence it. The feedback due to the influence of leakage was taken into account by the development and implementation of improved buckling feedback models. Modifications were made in the calculation procedure to ensure that the xenon depletion models were accurate, with proper interpolation from cross section tables. To achieve this, the NEM/THERMIX coupled code system was developed to create a system that is efficient and stable over the duration of transient calculations that last several tens of hours. Another achievement of the PhD thesis was the development and demonstration of a full-physics, three-dimensional safety analysis
Predictive modeling of coupled multi-physics systems: I. Theory
International Nuclear Information System (INIS)
Cacuci, Dan Gabriel
2014-01-01
Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces predicted uncertainties in predicted model responses and parameters. • PMCMPS treats efficiently very large coupled systems. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology takes into account fully the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy”-approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values for the multi-physics models parameters and responses along with corresponding reduced uncertainties. Noteworthy, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially
Energy Technology Data Exchange (ETDEWEB)
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on relative pin powers.
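The functional-expansion idea mentioned above amounts to representing a tallied distribution by the coefficients of an orthogonal polynomial series, so it can be evaluated on any downstream mesh. A minimal one-dimensional Legendre sketch (the power shape and expansion order are illustrative, not the paper's tallies) is:

import numpy as np
from numpy.polynomial import legendre as L

z = np.linspace(-1.0, 1.0, 200)                # axial coordinate mapped onto [-1, 1]
power = 1.0 + 0.3 * np.cos(np.pi * z / 2.0)    # stand-in tallied pin power shape

coeffs = L.legfit(z, power, deg=4)             # expansion coefficients (the "tally")
fem_nodes = np.linspace(-1.0, 1.0, 7)          # points of an arbitrary downstream mesh
print(L.legval(fem_nodes, coeffs))             # power reconstructed at the new points

In a Monte Carlo code the coefficients are accumulated during particle tracking rather than fitted after the fact, but the reconstruction step onto the finite element mesh is the same.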
International Nuclear Information System (INIS)
Takemiya, Hiroshi; Yamagishi, Nobuhiro
2000-02-01
We report on an RPC (Remote Procedure Call)-based communication library, Starpc, for a parallel computer cluster. Starpc supports communication between Java Applets and C programs as well as between C programs. Starpc has the following three features. (1) It enables communication between Java Applets and C programs on an arbitrary computer without violating security, even though Java Applets are normally restricted, for security reasons, to communicating only with programs on a specific computer (the Web server). (2) Diverse network communication protocols are available in Starpc, because it uses the Nexus communication library developed at Argonne National Laboratory. (3) It works on many kinds of computers, including eight parallel computers and four workstation servers. In this report, the usage of Starpc and the development of applications using Starpc are described. (author)
Multiphysical model of heterogeneous flow moving along a channel of variable cross-section
Directory of Open Access Journals (Sweden)
М. А. Васильева
2017-10-01
The article addresses the fundamental problem of developing effective methods and tools for designing, controlling and managing the flow of fluid moving in variable-section pipelines, intended for the production of pumping equipment and medical devices and used in industries such as mining, chemicals and food production. Simulation of the flow through a twisted-paddle static mixer makes it possible to estimate the mixing efficiency by calculating the trajectories and velocities of the suspended particles passing through the mixer, and also to estimate the pressure drop due to hydraulic flow resistance. The model examines the mixing of solids dissolved in a liquid at room temperature. To visualize the distribution of the mixture particles over the cross-section and to analyze the mixing efficiency, the Poincaré plot module of the COMSOL Multiphysics software environment was used. For the first time, a multiphysical model of heterogeneous flow has been developed that describes in detail the physical state of the fluid at all points of the considered section at the initial time, takes into account the design parameters of the channel (orientation, dimensions, material, etc.), and specifies the laws of variation of the parameters at the boundaries of the calculated section under the wave-like change of the internal cross-section of the working chamber-channel of the inductive peristaltic pumping unit under the influence of magnetic field energy.
Directory of Open Access Journals (Sweden)
Xiang Chen
2014-01-01
This paper presents a comparative study of two important three-degree-of-freedom (DOF) parallel manipulators, the Sprint Z3 head and the A3 head, both commonly used in industry. As an initial step, the inverse kinematics are derived and an analysis of two classes of limbs is carried out via screw theory. For comparison, three transmission indices are then defined to describe their motion/force transmission performance. Based on the same main parameters, the comparison reveals some distinct characteristics in addition to the similarities between the two parallel manipulators. To a certain extent, the A3 head outperforms the common Sprint Z3 head, providing a new and satisfactory option for a machine tool head in industry.
International Nuclear Information System (INIS)
Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe
2013-01-01
Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best estimate 3D neutronic (PANTHER), system thermal hydraulic (RELAP5), core sub-channel thermal hydraulic (COBRA-3C), and fuel thermal mechanical (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and to license the reactor safety analyses and core reload designs, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to the multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
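As background to the non-parametric order statistic method mentioned above (a standard result, not specific to BESUAM), the minimum number of code runs N needed for a one-sided tolerance limit with coverage beta and confidence gamma follows from Wilks' formula:

\[
1 - \beta^{N} \ge \gamma \quad\Rightarrow\quad N \ge \frac{\ln(1-\gamma)}{\ln\beta},
\]

so that, for the common 95%/95% criterion (beta = gamma = 0.95), N = 59 runs suffice when the bounding value is taken as the largest observed output.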
McCallum, Ethan
2011-01-01
It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.
MOOSE: A parallel computational framework for coupled systems of nonlinear equations
International Nuclear Information System (INIS)
Gaston, Derek; Newman, Chris; Hansen, Glen; Lebrun-Grandie, Damien
2009-01-01
Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK). Utilizing the mathematical structure present in JFNK, physics expressions are modularized into 'Kernels,' allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
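To make the Jacobian-free Newton-Krylov principle mentioned above concrete, the sketch below (a minimal, self-contained illustration, not MOOSE source code) shows how the Jacobian-vector products needed by a Krylov solver can be formed from residual evaluations alone, using the standard finite-difference approximation Jv ~ [F(u + eps*v) - F(u)]/eps.

#include <cmath>
#include <iostream>
#include <vector>

// Nonlinear residual F(u) for a toy two-equation system.
std::vector<double> residual(const std::vector<double>& u)
{
  return { u[0] * u[0] + u[1] - 3.0,   // F_0(u)
           u[0] + u[1] * u[1] - 5.0 }; // F_1(u)
}

// Matrix-free Jacobian action: Jv ~ (F(u + eps*v) - F(u)) / eps.
// Only residual evaluations are needed, which is the essence of JFNK.
std::vector<double> jacobian_vector_product(const std::vector<double>& u,
                                            const std::vector<double>& v,
                                            double eps = 1.0e-7)
{
  std::vector<double> up(u);
  for (std::size_t i = 0; i < u.size(); ++i)
    up[i] += eps * v[i];
  const std::vector<double> Fu = residual(u);
  const std::vector<double> Fp = residual(up);
  std::vector<double> Jv(u.size());
  for (std::size_t i = 0; i < u.size(); ++i)
    Jv[i] = (Fp[i] - Fu[i]) / eps;
  return Jv;
}

int main()
{
  const std::vector<double> u = {1.0, 2.0};
  const std::vector<double> v = {1.0, 0.0};
  const auto Jv = jacobian_vector_product(u, v);
  // Analytic Jacobian column for v = e_0 is (2*u0, 1) = (2, 1).
  std::cout << Jv[0] << " " << Jv[1] << "\n";
  return 0;
}

In MOOSE itself, per the abstract above, individual physics contribute to the residual through modular "Kernels," so new physics can be added without ever assembling a global Jacobian.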
Quench Simulation of Superconducting Magnets with Commercial Multiphysics Software
Auchmann, Bernhard; Niiranen, Jarkko; Maciejewski, Michal
The simulation of quenches in superconducting magnets is a multiphysics problem of highest complexity. Operated at 1.9 K above absolute zero, the material properties of superconductors and superfluid helium vary by several orders of magnitude over a range of only 10 K. The heat transfer from metal to helium goes through different transfer and boiling regimes as a function of temperature, heat flux, and transferred energy. Electrical, magnetic, thermal, and fluid dynamic effects are intimately coupled, yet live on vastly different time and spatial scales. While the physical models may be the same in all cases, it is an open debate whether the user should opt for commercial multiphysics software like ANSYS or COMSOL, write customized models based on general purpose network solvers like SPICE, or implement the physics models and numerical solvers entirely in custom software like the QP3, THEA, and ROXIE codes currently in use at the European Organisation for Nuclear Research (CERN). Each approach has its strengt...
Parallel Programming with Intel Parallel Studio XE
Blair-Chappell , Stephen
2012-01-01
Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the
Integral Full Core Multi-Physics PWR Benchmark with Measured Data
Energy Technology Data Exchange (ETDEWEB)
Forget, Benoit; Smith, Kord; Kumar, Shikhar; Rathbun, Miriam; Liang, Jingang
2018-04-11
In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio with concrete examples in nuclear engineering with the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation is essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops, etc.) and there is a lack of relevant multiphysics benchmark measurements that are necessary to validate high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.
Towards an efficient multiphysics model for nuclear reactor dynamics
Directory of Open Access Journals (Sweden)
Obaidurrahman K.
2015-01-01
Full Text Available The availability of fast computing resources nowadays has facilitated more in-depth modeling of complex engineering systems which involve strong multiphysics interactions. This multiphysics modeling is an important necessity in nuclear reactor safety studies, where efforts are being made worldwide to combine the knowledge from all associated disciplines in one place to accomplish the most realistic simulation of the phenomena involved. Along these lines, coupled modeling of nuclear reactor neutron kinetics, fuel heat transfer and coolant transport is now regular practice for transient analysis of the reactor core. However, optimization between modeling accuracy and computational economy has always been a challenging task in ensuring an adequate degree of reliability in such extensive numerical exercises. Complex reactor core modeling involves estimation of the evolving 3-D core thermal state, which in turn demands an expensive, detailed multichannel core thermal-hydraulics model. A novel approach of power-weighted coupling between core neutronics and thermal hydraulics presented in this work aims to reduce the bulk of core thermal calculations in core dynamics modeling to a significant extent without compromising the accuracy of computation. The coupled core model has been validated against a series of international benchmarks. The accuracy and computational efficiency of the proposed multiphysics model have been demonstrated by analyzing a reactivity-initiated transient.
Directory of Open Access Journals (Sweden)
D Cébron
2016-04-01
Full Text Available The present paper is concerned with the numerical simulation of Magneto-Hydro-Dynamic (MHD) problems with industrial tools. MHD received attention some twenty to thirty years ago as a possible alternative in propulsion applications; MHD-propelled ships have even been designed for that purpose. However, such propulsion systems have proved to be of low efficiency, and fundamental research in the area has progressively received much less attention over the past decades. Numerical simulation of MHD problems could however provide interesting solutions in the field of turbulent flow control. The development of recent efficient numerical techniques for multi-physics applications provides a promising tool for the engineer for that purpose. In the present paper, some elementary test cases in laminar flow with magnetic forcing terms are analysed; equations of the coupled problem are exposed, analytical solutions are derived in each case and are compared to numerical solutions obtained with a numerical tool for multi-physics applications. The present work can be seen as a validation of numerical tools (based on the finite element method) for academic as well as industrial application purposes.
Coupling between a multi-physics workflow engine and an optimization framework
Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.
2016-03-01
A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to provide the possibility to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content of each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle non-valid data which may appear during the optimization. Consequently, GAs work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples. A test has shown a good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), which is a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization with respect to various figures of merit and constraints.
Directory of Open Access Journals (Sweden)
Mufti Mahmud
2014-03-01
Full Text Available Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as smart Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are often employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: ~8 GB/MEA per hour uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and possibly inspire the creation of community-supported cloud-computing facilities for MEAs users.
Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele
2014-01-01
Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16 bits A/D conversion, 20 kHz sampling rate: approximately 8 GB/MEA per hour uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEAs users.
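The raw data rate quoted in the two abstracts above follows directly from the acquisition parameters; as a quick check (assuming 16-bit samples stored as 2 bytes):

\[
60~\text{channels} \times 2~\tfrac{\text{bytes}}{\text{sample}} \times 20{,}000~\tfrac{\text{samples}}{\text{s}} \times 3600~\text{s} \approx 8.6~\text{GB},
\]

which is consistent with the approximately 8 GB per MEA per hour (uncompressed) figure cited above.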
SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS
Directory of Open Access Journals (Sweden)
M. K. Bouza
2017-01-01
Full Text Available The object of research is the tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.
Directory of Open Access Journals (Sweden)
Jonathan Berson
2012-02-01
Full Text Available Contact electrochemical transfer of silver from a metal-film stamp (parallel process) or a metal-coated scanning probe (serial process) is demonstrated to allow site-selective metallization of monolayer template patterns of any desired shape and size created by constructive nanolithography. The precise nanoscale control of metal delivery to predefined surface sites, achieved as a result of the selective affinity of the monolayer template for electrochemically generated metal ions, provides a versatile synthetic tool en route to the bottom-up assembly of electric nanocircuits. These findings offer direct experimental support to the view that, in electrochemical metal deposition, charge is carried across the electrode–solution interface by ion migration to the electrode rather than by electron transfer to hydrated ions in solution.
International Nuclear Information System (INIS)
Navarro, V.; Alonso, J.; Asensio, L.; Yustres, A.; Pintado, X.
2012-01-01
Document available in extended abstract form only. The use of numerical methods, especially the Finite Element Method (FEM), for solving boundary value problems in Unsaturated Soil Mechanics has experienced significant progress. Several codes, built mainly for research purposes as well as commercial software, are now available. In recent years, Multi-physics Partial Differential Equation Solvers (MPDES) have turned out to be an interesting proposal. In this family of solvers, the user defines the governing equations and the behaviour models, generally using a computer algebra environment. The code automatically assembles and solves the equation systems, sparing the user from having to redefine the structures of memory storage or to implement solver algorithms. The user can focus on the definition of the physics of the problem, while it is possible to couple virtually any physical or chemical process that can be described by a PDE. This can be done, for instance, in COMSOL Multiphysics (CM). Nonetheless, the versatility of CM is compromised by the impossibility of implementing models with variables defined by implicit functions. Elasto-plastic models involve an implicit coupling among stress increments, plastic strains and plastic variable increments. For this reason, they cannot be implemented in CM in a straightforward way. This is a very relevant limitation for the use of this tool in the analysis of geomechanical boundary value problems. In this work, a strategy to overcome this problem using the multi-physics concept is presented. A mixed method is proposed, considering the constitutive stresses, the pre-consolidation pressure and the plastic variables as the main unknowns of the model. Mixed methods usually present stability problems. However, the algorithms present in CM include several numerical strategies to minimise this kind of problem. Besides, CM is based on the application of the FEM with Lagrange multipliers, an approach that significantly contributes to stability
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object oriented, easy-to-use, high performance, C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multi-Physics Analysis of the Fermilab Booster RF Cavity
International Nuclear Information System (INIS)
Awida, M.; Reid, J.; Yakovlev, V.; Lebedev, V.; Khabiboulline, T.; Champion, M.
2012-01-01
After about 40 years of operation, the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increasing the power dissipation in the RF cavities, their ferrite loaded tuners, and HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.
DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.
Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques
2008-09-08
Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0
Directory of Open Access Journals (Sweden)
Wallqvist Anders
2008-09-01
Full Text Available Abstract Background Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. Implementation To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. Conclusion The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
Multi-physic simulations of irradiation experiments in a technological irradiation reactor
International Nuclear Information System (INIS)
Bonaccorsi, Th.
2007-09-01
A Material Testing Reactor (MTR) makes it possible to irradiate material samples under intense neutron and photonic fluxes. These experiments are carried out in experimental devices located in the reactor core or in the periphery (reflector). Available physics simulation tools usually treat only one physics field in a very precise way. Multi-physics simulations of irradiation experiments therefore require a sequential use of several calculation codes and data exchanges between these codes: this is a code coupling problem. In order to facilitate multi-physics simulations, this thesis sets up a data model based on data-processing objects, called Technological Entities. This data model is common to all of the physics fields. It permits defining the geometry of an irradiation device in a parametric way and associating information about materials with it. Numerical simulations are encapsulated into interfaces providing the ability to call specific functionalities with the same command (initialize data, launch calculations, post-process, retrieve results, ...). Thus, once encapsulated, numerical simulations can be re-used for various studies. This data model is developed in a SALOME platform component. The first application case made it possible to perform neutronic simulations (OSIRIS reactor and RJH) coupled with fuel behaviour simulations. As a next step, thermal hydraulics could also be taken into account. In addition to the improvement of the calculation accuracy due to the coupling of physical phenomena, the time spent in the development phase of the simulation is largely reduced, and the possibilities of uncertainty treatment are under consideration. (author)
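The encapsulation idea described above, exposing every numerical simulation behind the same small set of calls, can be pictured with the hypothetical C++ interface below; the class and method names are illustrative only and are not taken from the thesis or from the SALOME platform.

#include <string>

// Hypothetical common interface for an encapsulated simulation code.
// Each physics solver (neutronics, fuel behaviour, thermal hydraulics, ...)
// would provide its own implementation, so a coupling driver can run any
// of them through the same calls.
class SimulationComponent
{
public:
  virtual ~SimulationComponent() = default;

  virtual void initialize(const std::string& inputDeck) = 0;  // set up data
  virtual void run() = 0;                                     // launch the calculation
  virtual void postProcess() = 0;                             // post-process results
  virtual double getResult(const std::string& fieldName) const = 0; // retrieve a value
};

// A coupling driver can then iterate over heterogeneous components uniformly:
//   for (SimulationComponent* c : chain) { c->initialize(deck); c->run(); c->postProcess(); }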
Advanced Mesh-Enabled Monte Carlo capability for Multi-Physics Reactor Analysis
Energy Technology Data Exchange (ETDEWEB)
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge
2014-08-01
This paper presents the development, verification and application of an efficient interface, denoted as iCP, which couples two standalone simulation programs: the general purpose Finite Element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide number of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of the aforementioned coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large scale thermo-hydro-chemical (THC) problem is solved to show the code capabilities. The results of the verification exercise are successfully compared with those obtained using PHREEQC and the application case demonstrates the scalability of a large scale model, at least up to 32 threads.
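The abstract above notes that iCP balances the geochemical load over multiple threads; the sketch below is a generic, self-contained C++ illustration of that pattern (chunking independent reaction cells across std::thread workers) and does not reproduce iCP or IPhreeqc code.

#include <algorithm>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for an expensive per-cell geochemical equilibrium calculation.
double react_cell(double concentration)
{
  double x = concentration;
  for (int i = 0; i < 1000; ++i)            // fake fixed-point iterations
    x = 0.5 * (x + concentration / (1.0 + x));
  return x;
}

int main()
{
  const std::size_t n_cells = 100000;
  std::vector<double> conc(n_cells, 1.0), result(n_cells);

  const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
  std::vector<std::thread> workers;

  // Static chunking: each thread owns a contiguous block of cells, which
  // balances the load when cells have similar cost.
  for (unsigned t = 0; t < n_threads; ++t)
  {
    workers.emplace_back([&, t]() {
      const std::size_t begin = t * n_cells / n_threads;
      const std::size_t end = (t + 1) * n_cells / n_threads;
      for (std::size_t i = begin; i < end; ++i)
        result[i] = react_cell(conc[i]);
    });
  }
  for (auto& w : workers)
    w.join();

  std::cout << "first cell result: " << result[0] << "\n";
  return 0;
}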
High-Fidelity Space-Time Adaptive Multiphysics Simulations in Nuclear Engineering
Energy Technology Data Exchange (ETDEWEB)
Solin, Pavel [Univ. of Reno, NV (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)
2014-03-09
We delivered a series of fundamentally new computational technologies that have the potential to significantly advance the state-of-the-art of computer simulations of transient multiphysics nuclear reactor processes. These methods were implemented in the form of a C++ library, and applied to a number of multiphysics coupled problems relevant to nuclear reactor simulations.
High-Fidelity Space-Time Adaptive Multiphysics Simulations in Nuclear Engineering
International Nuclear Information System (INIS)
Solin, Pavel; Ragusa, Jean
2014-01-01
We delivered a series of fundamentally new computational technologies that have the potential to significantly advance the state-of-the-art of computer simulations of transient multiphysics nuclear reactor processes. These methods were implemented in the form of a C++ library, and applied to a number of multiphysics coupled problems relevant to nuclear reactor simulations.
Practical integrated simulation systems for coupled numerical simulations in parallel
Energy Technology Data Exchange (ETDEWEB)
Hazama, Osamu; Guo, Zhihong [Japan Atomic Energy Research Inst., Centre for Promotion of Computational Science and Engineering, Tokyo (Japan)]
2003-07-01
In order for the numerical simulations to reflect 'real-world' phenomena and occurrences, the incorporation of multidisciplinary and multi-physics simulations considering various physical models and factors is becoming essential. However, there still exist many obstacles which inhibit such numerical simulations. For example, it is still difficult in many instances to develop satisfactory software packages which allow for such coupled simulations, and such simulations will require more computational resources. A precise multi-physics simulation today will require parallel processing, which again makes it a complicated process. Under the international cooperative efforts between CCSE/JAERI and Fraunhofer SCAI, a German institute, a library called MpCCI, or Mesh-based Parallel Code Coupling Interface, has been implemented together with a library called STAMPI to couple two existing codes to develop an 'integrated numerical simulation system' intended for meta-computing environments. (authors)
Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations
Landge, A. G.
2012-12-01
The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how to best use the network is a formidable task, made challenging by the ever increasing size and complexity of modern supercomputers. This paper applies visualization techniques to aid parallel application developers in understanding the network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. In order to visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view, that represents the network structure as one of several simplified planar projections. This view is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context that is familiar to the application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system. © 1995-2012 IEEE.
International Nuclear Information System (INIS)
Mo Zeyao
2004-11-01
Multiphysics parallel numerical simulations are usually essential to simplify research on complex physical phenomena in which several physics are tightly coupled. How to concatenate those coupled physics is very important for fully scalable parallel simulation. Meanwhile, three objectives should be balanced: the first is efficient data transfer among simulations, and the second and the third are efficient parallel execution and simultaneous development of the simulation codes. Two concatenating algorithms for multiphysics parallel numerical simulations coupling radiation hydrodynamics with neutron transport on unstructured grids are presented. The first algorithm, Fully Loosely Concatenation (FLC), focuses on the independence of code development and the independent execution of each code with optimal performance. The second algorithm, Two Level Tightly Concatenation (TLTC), focuses on the optimal tradeoffs among the above three objectives. Theoretical analyses of communication complexity and parallel numerical experiments on hundreds of processors on two parallel machines have shown that these two algorithms are efficient and can be generalized to other multiphysics parallel numerical simulations. In particular, algorithm TLTC is linearly scalable and has achieved optimal parallel performance. (authors)
Multiscale Multiphysics Developments for Accident Tolerant Fuel Concepts
International Nuclear Information System (INIS)
Gamble, K. A.; Hales, J. D.; Yu, J.; Zhang, Y.; Bai, X.; Andersson, D.; Patra, A.; Wen, W.; Tome, C.; Baskes, M.; Martinez, E.; Stanek, C. R.; Miao, Y.; Ye, B.; Hofman, G. L.; Yacout, A. M.; Liu, W.
2015-01-01
U3Si2 and iron-chromium-aluminum (Fe-Cr-Al) alloys are two of many proposed accident-tolerant fuel concepts for the fuel and cladding, respectively. The behavior of these materials under normal operating and accident reactor conditions is not well known. As part of the Department of Energy's Accident Tolerant Fuel High Impact Problem program significant work has been conducted to investigate the U3Si2 and FeCrAl behavior under reactor conditions. This report presents the multiscale and multiphysics effort completed in fiscal year 2015. The report is split into four major categories including Density Functional Theory Developments, Molecular Dynamics Developments, Mesoscale Developments, and Engineering Scale Developments. The work shown here is a compilation of a collaborative effort between Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory and Anatech Corp.
Induction Heating Process Design Using COMSOL Multiphysics Software
Directory of Open Access Journals (Sweden)
Andy Triwinarko
2011-08-01
Full Text Available Induction heating is a clean, environmentally friendly heating process because it is non-contact. There are many types of induction heating used in home appliances, but it is still a new technology in Indonesia. The main areas of interest in induction heating design are the efficiency of energy usage and the choice of plate material. COMSOL Multiphysics software can be used to simulate and estimate the induction heating process. Therefore, the software can be used to design an induction heating process with optimum efficiency. The properties of the induction heating design were also simulated and analyzed, such as the effect of inductor width, inductor spacing, and conductive plate material. The results showed that a good induction heating design should have a small inductor width and spacing and use silicon carbide as the plate material with a high-frequency controller.
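One standard relation behind the frequency and plate-material choices discussed above (a textbook result, not stated in the article itself) is the electromagnetic skin depth, which sets how deep the induced currents, and hence the heating, penetrate the plate:

\[
\delta = \sqrt{\frac{2\rho}{\omega\mu}} = \sqrt{\frac{\rho}{\pi f \mu}},
\]

where rho is the plate resistivity, mu its magnetic permeability, and f the operating frequency; higher frequencies concentrate the dissipation in a thinner surface layer.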
Multiphysics Numerical Modeling of a Fin and Tube Heat Exchanger
DEFF Research Database (Denmark)
Singh, Shobhana; Sørensen, Kim; Condra, Thomas Joseph
2015-01-01
In the present research work, a modeling effort to predict the performance of a liquid-gas type fin and tube heat exchanger design is made. A three-dimensional (3D) steady-state numerical model is developed using the commercial software COMSOL Multiphysics, based on the finite element method (FEM). For the purposes here, only gas flowing over the fin side is simulated, assuming a constant inner tube wall temperature. The study couples the conjugate heat transfer mechanism with turbulent flow in order to describe the temperature and velocity profiles. In addition, performance characteristics of the heat exchanger design in terms of heat transfer and pressure loss are determined by parameters such as the overall heat transfer coefficient, Colburn j-factor, flow resistance factor, and efficiency index. The model provides useful insights necessary for optimization of the heat exchanger design.
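For reference, the Colburn j-factor named above as a performance characteristic is conventionally defined as (a standard definition, not a result of the paper):

\[
j = \frac{\mathrm{Nu}}{\mathrm{Re}\,\mathrm{Pr}^{1/3}} = \mathrm{St}\,\mathrm{Pr}^{2/3},
\]

so that plotting j together with the flow resistance (friction) factor over the operating Reynolds-number range summarizes the heat-transfer versus pressure-loss trade-off of a fin and tube design.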
Modeling of circulating nuclear fuels with Comsol Multiphysics
International Nuclear Information System (INIS)
Cammi, A.; Di Marcello, V.; Luzzi, L.
2007-01-01
This paper presents multi-physics modelling of circulating nuclear fuel in a simple geometry by means of COMSOL 3.3. Among the Circulating Fuel Reactors (CFR), the most promising is the Molten Salt Reactor (MSR). The physics of such circulating nuclear fuel requires five coupled conservation equations: the mass balance, the momentum balance, the energy balance, the neutron balance and the precursor balance. In this complex field, represented by the coupling of thermal-hydrodynamics with neutronics, the highly non-linear regime and the wide disparity of time scales, COMSOL was used to investigate the region of the reactor that comprises only the flowing fluid, and a parametric study was performed by varying the size of the analyzed region and the inlet velocity of the fluid. This study is sufficient to achieve a preliminary evaluation of the thermo-physical behaviour of the system and paves the way for further progress concerning a more complex and realistic MSR geometry. (authors)
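The distinctive feature of circulating-fuel neutronics mentioned above is that the delayed neutron precursors are advected by the flow; a commonly used form of the precursor balance (shown here for context, not quoted from the paper) is

\[
\frac{\partial C_i}{\partial t} + \nabla\cdot\left(\mathbf{u}\,C_i\right)
= \beta_i\, \nu \Sigma_f\, \phi - \lambda_i\, C_i ,
\]

where C_i is the concentration of precursor group i, u the fuel-salt velocity, beta_i the delayed neutron fraction of group i, nu*Sigma_f*phi the fission neutron production rate, and lambda_i the decay constant; the advection term is what couples the neutron balance to the momentum and energy balances of the flowing fuel.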
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Energy Technology Data Exchange (ETDEWEB)
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
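As one common quantitative measure used in step (3) of such a methodology (the report above does not commit to a specific index, so this is illustrative), the first-order Sobol' sensitivity index attributes output variance to a single input:

\[
S_i = \frac{\operatorname{Var}_{X_i}\!\left(\mathbb{E}\left[\,Y \mid X_i\,\right]\right)}{\operatorname{Var}(Y)},
\]

with the gap between S_i and the corresponding total-effect index indicating how strongly X_i participates in parameter interactions.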
Experimental Evaluation of Acoustic Engine Liner Models Developed with COMSOL Multiphysics
Schiller, Noah H.; Jones, Michael G.; Bertolucci, Brandon
2017-01-01
Accurate modeling tools are needed to design new engine liners capable of reducing aircraft noise. The purpose of this study is to determine if a commercially-available finite element package, COMSOL Multiphysics, can be used to accurately model a range of different acoustic engine liner designs, and in the process, collect and document a benchmark dataset that can be used in both current and future code evaluation activities. To achieve these goals, a variety of liner samples, ranging from conventional perforate-over-honeycomb to extended-reaction designs, were installed in one wall of the grazing flow impedance tube at the NASA Langley Research Center. The liners were exposed to high sound pressure levels and grazing flow, and the effect of the liner on the sound field in the flow duct was measured. These measurements were then compared with predictions. While this report only includes comparisons for a subset of the configurations, the full database of all measurements and predictions is available in electronic format upon request. The results demonstrate that both conventional perforate-over-honeycomb and extended-reaction liners can be accurately modeled using COMSOL. Therefore, this modeling tool can be used with confidence to supplement the current suite of acoustic propagation codes, and ultimately develop new acoustic engine liners designed to reduce aircraft noise.
International Nuclear Information System (INIS)
Liu Rong; Zhou Wenzhong; Prudil, Andrew
2015-01-01
This paper presents the development of a light water reactor fuel performance code, which considers almost all the related physical models, including heat generation and conduction, species diffusion, thermomechanics (thermal expansion, elastic strain, densification, and fission product swelling strain), grain growth, fission gas production and release, gap heat transfer, mechanical contact, gap/plenum pressure with plenum volume, cladding thermal and irradiation creep and oxidation. All the equations are implemented into the COMSOL Multiphysics finite-element platform with a 2D axisymmetric geometry of a fuel pellet and cladding. Comparisons are made for the simulation results between COMSOL and another simulation tool, BISON. The comparisons show the capability of our simulation tool to predict light water UO2 fuel performance. In our modeling and simulation work, the performance of enhanced thermal conductivity UO2-BeO fuel and a newly-adopted corrosion-resistant SiC cladding material was also studied. UO2-BeO high thermal conductivity nuclear fuel would decrease fuel temperatures and facilitate a reduction in pellet-cladding interaction through lessening thermal stresses that result in fuel cracking, relocation, and swelling. The safety of the reactor would be improved. However, for SiC cladding, although the gap closure time is delayed due to its high thermal expansion, irradiation-induced point defects and defect clusters in the SiC crystal will dramatically decrease SiC thermal conductivity and cause a significant increase in the fuel temperature. (author)
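The sensitivity of fuel temperature to pellet thermal conductivity discussed above can be seen from the classical steady-state conduction result for a cylindrical pellet with uniform heat generation and constant conductivity (a textbook estimate, not taken from the paper):

\[
T_{\mathrm{CL}} - T_{\mathrm{S}} = \frac{q'}{4\pi k},
\]

where q' is the linear heat rate, k the pellet thermal conductivity, and T_CL and T_S the centerline and surface temperatures; at a given linear power, raising k lowers the pellet temperature rise in direct proportion, while degrading k (as under SiC-style irradiation damage) raises it.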
An integrated multiphysics model for friction stir welding of 6061 Aluminum alloy
Directory of Open Access Journals (Sweden)
M Nourani
2016-09-01
Full Text Available This article presents a new, combined 'integrated'-'multiphysics' model of friction stir welding (FSW), where a set of governing equations from non-Newtonian incompressible fluid dynamics, conductive and convective heat transfer, and plane stress solid mechanics have been coupled for calculating the process variables and material behaviour both during and after welding. More specifically, regarding the multiphysics feature, the model is capable of simultaneously predicting the local distribution, location and magnitude of the maximum temperature, strain, and strain rate fields around the tool pin during the process; while for the integrated (post-analysis) part, the above predictions have been used to study the microstructure and residual stress field of welded parts within the same developed code. A slip/stick condition between the tool and workpiece, friction and deformation heat sources, convection and conduction heat transfer in the workpiece, a solid mechanics-based viscosity definition, and Zener-Hollomon-based rigid-viscoplastic material properties with solidus cut-off temperature and an empirical softening regime have been employed. In order to validate all the predicted variables collectively, the model has been compared to a series of published case studies on individual/limited sets of variables, as well as in-house experiments on FSW of aluminum 6061.
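For context on the Zener-Hollomon-based material description mentioned above (standard constitutive relations, not equations quoted from the article), the temperature-compensated strain rate and the associated sinh-law flow stress are typically written as

\[
Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right), \qquad
\sigma = \frac{1}{\alpha}\,\sinh^{-1}\!\left[\left(\frac{Z}{A}\right)^{1/n}\right],
\]

where epsilon-dot is the effective strain rate, Q an activation energy, R the gas constant, T the absolute temperature, and alpha, A, n material constants; in a rigid-viscoplastic FSW model the local viscosity follows from this flow stress and the local strain rate.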
The Cea multi-scale and multi-physics simulation project for nuclear applications
International Nuclear Information System (INIS)
Ledermann, P.; Chauliac, C.; Thomas, J.B.
2005-01-01
Full text of publication follows. Today numerical modelling is everywhere recognized as an essential tool for the capitalization, integration and sharing of knowledge. For this reason, it has become the central tool of research. Until now, the CEA has developed a set of scientific software making it possible to model, in each situation, the operation of whole or part of a nuclear installation, and these codes are largely used in the nuclear industry. However, for the future, it is essential to aim for better accuracy, better control of uncertainties and better performance in computing times. The objective is to obtain validated models allowing accurate predictive calculations for actual complex nuclear problems such as fuel behaviour in accident situations. This demands mastering a large and interactive set of phenomena ranging from nuclear reactions to heat transfer. To this end, the CEA, with industrial partners (EDF, Framatome-ANP, ANDRA), has designed an integrated calculation platform, devoted to the study of nuclear systems, and intended at the same time for industry and scientists. The development of this platform is under way with the start in 2005 of the integrated project NURESIM, with 18 European partners. Improvement is coming not only through a multi-scale description of all phenomena but also through an innovative design approach requiring deep functional analysis which is upstream from the development of the simulation platform itself. In addition, the studies of future nuclear systems are increasingly multidisciplinary (simultaneous modelling of core physics, thermal-hydraulics and fuel behaviour). These multi-physics and multi-scale aspects make it mandatory to pay very careful attention to software architecture issues. A global platform is thus developed integrating dedicated specialized platforms: DESCARTES for core physics, NEPTUNE for thermal-hydraulics, PLEIADES for fuel behaviour, SINERGY for materials behaviour under irradiation, ALLIANCES for the performance
Suzuki, Yuma; Shimizu, Tetsuhide; Yang, Ming
2017-01-01
The quantitative evaluation of biomolecule transport with multi-physics at the nano/micro scale is needed in order to optimize the design of microfluidic devices for biomolecule detection with high detection sensitivity and rapid diagnosis. This paper aimed to investigate the effectiveness of computational simulation, using a numerical model of biomolecule transport with multi-physics near a microchannel surface, for the development of biomolecule-detection devices. Biomolecule transport under fluid drag, electric double layer (EDL), and van der Waals forces was modeled by the Newtonian equation of motion. The model validity was verified by comparing the influence of ionic strength and flow velocity on the biomolecule distribution near the surface with experimental results from previous studies. The influence of the acting forces on the distribution near the surface was investigated by the simulation. With all acting forces combined, the trend of the distribution with ionic strength and flow velocity was in agreement with the experimental results. Furthermore, the EDL force dominated the distribution near the surface compared with the fluid drag force, except in the case of high velocity and low ionic strength. The knowledge gained from the simulation may be useful for the design of biomolecule-detection devices, and the simulation is expected to serve as a design tool for high detection sensitivity and rapid diagnosis in the future.
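A schematic form of the Newtonian equation of motion described above, with the drag term written out for a small spherical particle (Stokes drag) and the surface forces left symbolic, is

\[
m\,\frac{d\mathbf{v}}{dt} = 6\pi\mu a\,(\mathbf{u}-\mathbf{v}) + \mathbf{F}_{\mathrm{EDL}} + \mathbf{F}_{\mathrm{vdW}},
\]

where m and a are the particle mass and radius, v its velocity, u the local fluid velocity, and mu the fluid viscosity; the EDL interaction is screened more strongly at higher ionic strength because the Debye length shrinks, whereas the van der Waals term is essentially insensitive to it.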
Software Tools for Battery Design | Transportation Research | NREL
Under the Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) project, NREL has developed software tools to support battery design. Knowledge of the interplay of multi-physics at varied scales is imperative
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
1982-01-01
Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn
Lithium-Ion Battery Safety Study Using Multi-Physics Internal Short-Circuit Model (Presentation)
Energy Technology Data Exchange (ETDEWEB)
Kim, G.-H.; Smith, K.; Pesaran, A.
2009-06-01
This presentation outlines NREL's multi-physics simulation study to characterize an internal short by linking and integrating electrochemical cell, electro-thermal, and abuse reaction kinetics models.
Multiphysics simulation of thermal phenomena in direct laser metal powder deposition
CSIR Research Space (South Africa)
Pityana, SL
2016-11-01
Full Text Available This work presents two-dimensional multi-physics models to describe the physical mechanisms of heat transfer, melting and solidification that take place during and after laser-powder interaction. The simulated transient temperature profile, the geometrical features...
Energy Technology Data Exchange (ETDEWEB)
Donald Estep; Michael Holst; Simon Tavener
2010-02-08
This project was concerned with the accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) Accurate and efficient computation; (2) Complex stability; and (3) Linking different physics. The research in this project focused on Multiscale Operator Decomposition methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.
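To illustrate the multiscale operator decomposition idea summarized above in the simplest possible setting (a self-contained toy example, not code from the project), the sketch below splits a two-component coupled problem into single-physics solves and iterates them to a fixed point, which is the basic structure whose error behaviour the project analysed.

#include <cmath>
#include <iostream>

int main()
{
  // Toy coupled system: x = cos(y) ("physics 1"), y = 0.5*sin(x) ("physics 2").
  // Each update below is a stand-in for a full single-physics solve.
  double x = 0.0, y = 0.0;
  for (int iter = 0; iter < 50; ++iter)
  {
    const double x_new = std::cos(y);            // solve component 1 with y frozen
    const double y_new = 0.5 * std::sin(x_new);  // solve component 2 with updated x
    const double change = std::fabs(x_new - x) + std::fabs(y_new - y);
    x = x_new;
    y = y_new;
    if (change < 1.0e-12)                        // stop when the coupling iteration converges
    {
      std::cout << "converged after " << iter + 1 << " iterations\n";
      break;
    }
  }
  std::cout << "x = " << x << ", y = " << y << "\n";
  return 0;
}

As the abstract notes, the project's focus was on estimating the error such a decomposition introduces relative to solving the fully coupled system.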
An introduction to LIME 1.0 and its use in coupling codes for multiphysics simulations.
Energy Technology Data Exchange (ETDEWEB)
Belcourt, Noel; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren
2011-11-01
LIME is a small software package for creating multiphysics simulation codes. The name was formed as an acronym denoting 'Lightweight Integrating Multiphysics Environment for coupling codes.' LIME is intended to be especially useful when separate computer codes (which may be written in any standard computer language) already exist to solve different parts of a multiphysics problem. LIME provides the key high-level software (written in C++), a well defined approach (with example templates), and interface requirements to enable the assembly of multiple physics codes into a single coupled-multiphysics simulation code. In this report we introduce important software design characteristics of LIME, describe key components of a typical multiphysics application that might be created using LIME, and provide basic examples of its use - including the customized software that must be written by a user. We also describe the types of modifications that may be needed to individual physics codes in order for them to be incorporated into a LIME-based multiphysics application.
Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes
Piro, Markus Hans Alexander
Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
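The equilibrium problem sketched above can be stated compactly (a standard formulation, shown here for orientation rather than quoted from the thesis): minimize the total Gibbs energy of the system over the species mole numbers subject to elemental mass balances,

\[
\min_{n_i \ge 0}\; G = \sum_i n_i\,\mu_i(T, P, \mathbf{n})
\quad \text{subject to} \quad
\sum_i a_{ij}\, n_i = b_j \;\;\forall j,
\]

where a_ij is the number of atoms of element j in species i and b_j the total amount of element j; at the solution, the chemical potentials of the stable species are linear combinations of the element (Lagrange multiplier) potentials, which is the kind of structure an equilibrium solver can exploit to simplify the optimization equations.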
Multiscale Multiphysics and Multidomain Models I: Basic Theory.
Wei, Guo-Wei
2013-12-01
This work extends our earlier two-domain formulation of a differential geometry based multiscale paradigm into a multidomain theory, which endows us the ability to simultaneously accommodate multiphysical descriptions of aqueous chemical, physical and biological systems, such as fuel cells, solar cells, nanofluidics, ion channels, viruses, RNA polymerases, molecular motors and large macromolecular complexes. The essential idea is to make use of the differential geometry theory of surfaces as a natural means to geometrically separate the macroscopic domain of solvent from the microscopic domain of solute, and dynamically couple continuum and discrete descriptions. Our main strategy is to construct energy functionals to put on an equal footing of multiphysics, including polar (i.e., electrostatic) solvation, nonpolar solvation, chemical potential, quantum mechanics, fluid mechanics, molecular mechanics, coarse grained dynamics and elastic dynamics. The variational principle is applied to the energy functionals to derive desirable governing equations, such as multidomain Laplace-Beltrami (LB) equations for macromolecular morphologies, multidomain Poisson-Boltzmann (PB) equation or Poisson equation for electrostatic potential, generalized Nernst-Planck (NP) equations for the dynamics of charged solvent species, generalized Navier-Stokes (NS) equation for fluid dynamics, generalized Newton's equations for molecular dynamics (MD) or coarse-grained dynamics and equation of motion for elastic dynamics. Unlike the classical PB equation, our PB equation is an integral-differential equation due to solvent-solute interactions. To illustrate the proposed formalism, we have explicitly constructed three models, a multidomain solvation model, a multidomain charge transport model and a multidomain chemo-electro-fluid-MD-elastic model. Each solute domain is equipped with distinct surface tension, pressure, dielectric function, and charge density distribution. In addition to long
Multiscale multiphysics and multidomain models—Flexibility and rigidity
International Nuclear Information System (INIS)
Xia, Kelin; Opron, Kristopher; Wei, Guo-Wei
2013-01-01
The emerging complexity of large macromolecules has led to challenges in their full scale theoretical description and computer simulation. Multiscale multiphysics and multidomain models have been introduced to reduce the number of degrees of freedom while maintaining modeling accuracy and achieving computational efficiency. A total energy functional is constructed to put energies for polar and nonpolar solvation, chemical potential, fluid flow, molecular mechanics, and elastic dynamics on an equal footing. The variational principle is utilized to derive coupled governing equations for the above mentioned multiphysical descriptions. Among these governing equations is the Poisson-Boltzmann equation which describes continuum electrostatics with atomic charges. The present work introduces the theory of continuum elasticity with atomic rigidity (CEWAR). The essence of CEWAR is to formulate the shear modulus as a continuous function of atomic rigidity. As a result, the dynamics complexity of a macromolecular system is separated from its static complexity so that the more time-consuming dynamics is handled with continuum elasticity theory, while the less time-consuming static analysis is pursued with atomic approaches. We propose a simple method, flexibility-rigidity index (FRI), to analyze macromolecular flexibility and rigidity in atomic detail. The construction of FRI relies on the fundamental assumption that protein functions, such as flexibility, rigidity, and energy, are entirely determined by the structure of the protein and its environment, although the structure is in turn determined by all the interactions. As such, the FRI measures the topological connectivity of protein atoms or residues and characterizes the geometric compactness of the protein structure. As a consequence, the FRI does not resort to the interaction Hamiltonian and bypasses matrix diagonalization, which underpins most other flexibility analysis methods. FRI's computational complexity is of O
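A minimal sketch of the flexibility-rigidity index idea follows: the rigidity of each atom is a kernel-weighted sum of its geometric correlations with all other atoms, and the flexibility is its inverse, so no interaction Hamiltonian or matrix diagonalization is needed. The generalized exponential kernel and its parameters are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def fri(coords, eta=3.0, kappa=2.0):
    """Flexibility-rigidity index sketch: rigidity of atom i is a kernel-weighted
    sum of its correlations with all other atoms; flexibility is the inverse."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    phi = np.exp(-(d / eta) ** kappa)          # generalized exponential correlation kernel
    np.fill_diagonal(phi, 0.0)                 # exclude self-correlation
    rigidity = phi.sum(axis=1)
    flexibility = 1.0 / rigidity
    return rigidity, flexibility

# Usage with random "atom" positions standing in for C-alpha coordinates:
coords = np.random.default_rng(0).uniform(0.0, 20.0, size=(100, 3))
mu, f = fri(coords)
print(f.min(), f.max())
```

As written, the naive pairwise evaluation costs O(N^2) operations for N atoms and never forms or diagonalizes an interaction matrix.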
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Preliminary Coupling of MATRA Code for Multi-physics Analysis
International Nuclear Information System (INIS)
Kim, Seongjin; Choi, Jinyoung; Yang, Yongsik; Kwon, Hyouk; Hwang, Daehyun
2014-01-01
The boundary conditions, such as the inlet temperature, mass flux, averaged heat flux, power distributions of the rods, and core geometry, are given as constant values or functions of time. These conditions are separately calculated and provided to the MATRA code by other codes, such as neutronics or system codes. In addition, this work focuses on the coupling of several codes from different physics fields. In this study, multiphysics coupling methods were developed for a subchannel code (MATRA) with neutronics codes (MASTER, DeCART) and a fuel performance code (FRAPCON-3). Preliminary evaluation results for representative sample cases are presented. The MASTER and DeCART codes provide the power distribution of the rods in the core to the MATRA code. In the case of the FRAPCON-3 code, the variation of the rod diameter induced by thermal expansion is calculated and provided. The MATRA code transfers the thermal-hydraulic conditions that each code needs. Moreover, the coupling method with each code is described
CANDU fuel bundle deformation modelling with COMSOL multiphysics
International Nuclear Information System (INIS)
Bell, J.S.; Lewis, B.J.
2012-01-01
Highlights: ► The deformation behaviour of a CANDU fuel bundle was modelled. ► The model has been developed on a commercial finite-element platform. ► Pellet/sheath interaction and end-plate restraint effects were considered. ► The model was benchmarked against the BOW code and a variable-load experiment. - Abstract: A model to describe deformation behaviour of a CANDU 37-element bundle has been developed under the COMSOL Multiphysics finite-element platform. Beam elements were applied to the fuel elements (composed of fuel sheaths and pellets) and endplates in order to calculate the bowing behaviour of the fuel elements. This model is important to help assess bundle-deformation phenomena, which may lead to more restrictive coolant flow through the sub-channels of the horizontally oriented bundle. The bundle model was compared to the BOW code for the occurrence of a dry-out patch, and benchmarked against an out-reactor experiment with a variable load on an outer fuel element.
Curing of Thick Thermoset Composite Laminates: Multiphysics Modeling and Experiments
Anandan, S.; Dhaliwal, G. S.; Huo, Z.; Chandrashekhara, K.; Apetre, N.; Iyyer, N.
2017-11-01
Fiber reinforced polymer composites are used in high-performance aerospace applications as they are resistant to fatigue, corrosion-free and possess high specific strength. The mechanical properties of these composite components depend on the degree of cure and residual stresses developed during the curing process. While these parameters are difficult to determine experimentally in large and complex parts, they can be simulated using numerical models in a cost-effective manner. These simulations can be used to develop cure cycles and change processing parameters to obtain high-quality parts. In the current work, a numerical model was built in COMSOL Multiphysics to simulate the cure behavior of a carbon/epoxy prepreg system (IM7/Cycom 5320-1). A thermal spike was observed in thick laminates when the recommended cure cycle was used. The cure cycle was modified to reduce the thermal spike and maintain the degree of cure at the laminate center. A parametric study was performed to evaluate the effect of air flow in the oven, post cure cycles and cure temperatures on the thermal spike and the resultant degree of cure in the laminate.
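The cure-kinetics submodel driving such simulations can be sketched at a single material point: a Kamal-type autocatalytic rate law integrated along a prescribed temperature cycle. All parameter values below are placeholders for illustration, not the IM7/Cycom 5320-1 characterization used in the paper.

```python
import numpy as np

# Illustrative Kamal-type cure kinetics: d(alpha)/dt = A*exp(-E/(R*T)) * alpha^m * (1-alpha)^n
# Parameter values are placeholders, not a fitted prepreg characterization.
A_pre, E_act, R = 1.0e5, 6.0e4, 8.314        # 1/s, J/mol, J/(mol K)
m, n = 0.5, 1.5

def cure_rate(alpha, T):
    alpha = min(max(alpha, 1e-6), 1.0 - 1e-9)
    return A_pre * np.exp(-E_act / (R * T)) * alpha**m * (1.0 - alpha)**n

def integrate_cure(temperature_cycle, dt=1.0):
    """Explicit integration of the degree of cure along a prescribed temperature history (K)."""
    alpha = 1e-3
    for T in temperature_cycle:
        alpha += dt * cure_rate(alpha, T)
    return alpha

# Two-hold cure cycle: ramp to 390 K, hold, ramp to 450 K, hold (times in seconds).
t = np.arange(0, 4 * 3600, 1.0)
T_cycle = np.interp(t, [0, 3600, 7200, 9000, 14400], [300, 390, 390, 450, 450])
print("final degree of cure at this point:", integrate_cure(T_cycle))
```

In the full model this rate law is coupled to heat conduction, since the exothermic cure reaction is what produces the thermal spike at the centre of a thick laminate.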
International Nuclear Information System (INIS)
Lazaro, A.; Ordonez, J.; Martorell, S.; Przemyslaw, S.; Ammirabile, L.; Tsige-Tamirat, H.
2015-01-01
The sodium cooled fast reactor (SFR) is one of the reactor types selected by the Generation IV International Forum. SFRs stand out due to their remarkable past operational experience in related projects and their potential to achieve the ambitious goals laid out for the new generation of nuclear reactors. Regardless of this operational experience, there is a need to apply, from the early stages of the design process, computational tools able to simulate the system behaviour under conditions that may exceed the reactor safety limits, including the three-dimensional phenomena that may arise in these transients. This paper presents the different steps followed towards the development of a multi-physics platform with capabilities to simulate complex phenomena using a coupled neutronic-thermal-hydraulic scheme. The development started with a one-dimensional thermal-hydraulic model of the European Sodium Fast Reactor (ESFR) design with point-kinetic neutronic feedback, benchmarked against its peers in the framework of the FP7-CP-ESFR project using the state-of-the-art thermal-hydraulic system code TRACE. The model was successively extended into a three-dimensional model coupled with the spatial kinetics neutronic code PARCS, able to simulate three-dimensional multi-physics phenomena, along with the comparison of the results for symmetric cases. The last part of the paper shows the application of the developed tool to the analysis of transients involving asymmetrical effects, such as the coast-down of a primary and secondary pump or the withdrawal of a peripheral control rod bank, demonstrating the unique capability of the code to simulate such transients and the capability of the design to withstand them under design basis
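The point-kinetics-with-feedback building block mentioned above can be sketched with one delayed-neutron group and a crude fuel-temperature reactivity feedback; the sketch and every numerical value in it are illustrative assumptions, not ESFR design data or the TRACE/PARCS models.

```python
# One-delayed-group point kinetics with a crude fuel-temperature reactivity feedback.
# All numbers are illustrative placeholders, not ESFR design data.
beta, Lam, lam = 3.5e-3, 4.0e-7, 0.08        # delayed fraction, generation time (s), precursor decay (1/s)
alpha_T = -1.0e-5                             # feedback coefficient (delta-rho per K)
tau, dT_nom = 5.0, 150.0                      # fuel time constant (s), nominal fuel-coolant delta-T (K)

def step(p, c, dT, rho_ext, dt):
    rho = rho_ext + alpha_T * (dT - dT_nom)   # net reactivity including feedback
    dp = ((rho - beta) / Lam) * p + lam * c
    dc = (beta / Lam) * p - lam * c
    ddT = (dT_nom * p - dT) / tau             # lumped fuel heat balance (p is normalized power)
    return p + dt * dp, c + dt * dc, dT + dt * ddT

p, c, dT = 1.0, beta / (Lam * lam), dT_nom    # critical steady state at nominal power
for _ in range(200_000):                      # 0.2 s after a 100 pcm external insertion
    p, c, dT = step(p, c, dT, rho_ext=1.0e-3, dt=1.0e-6)
print(f"relative power after 0.2 s: {p:.2f}")
```

In the platform described above this zero-dimensional block is eventually superseded by the spatial kinetics of PARCS; the sketch only shows where the thermal-hydraulic feedback enters the neutronics.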
Parallel computing: numerics, applications, and trends
National Research Council Canada - National Science Library
Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter
2009-01-01
... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...
Modelling transport phenomena in a multi-physics context
Marra, Francesco
2015-01-01
Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, studying the impact on quality attributes of the final product, as well as considering the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwaves - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using the above mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, which can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within a product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPU performance, have fostered increasing interest in modelling transport phenomena in systems and processes - such as the ones encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating assisted applications - in terms of interaction with other physical phenomena such as displacement of electric or magnetic fields. This paper deals with the description of approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.
Modelling transport phenomena in a multi-physics context
Energy Technology Data Exchange (ETDEWEB)
Marra, Francesco [Dipartimento di Ingegneria Chimica e Alimentare - Università degli studi di Salerno Via Ponte Don Melillo - 84084 Fisciano SA (Italy)
2015-01-22
Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, studying the impact on quality attributes of the final product, as well as considering the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwaves - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using the above mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, which can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within a product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPU performance, have fostered increasing interest in modelling transport phenomena in systems and processes - such as the ones encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating assisted applications - in terms of interaction with other physical phenomena such as displacement of electric or magnetic fields. This paper deals with the description of approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.
Multiphysical modelling of fluid transport through osteo-articular media
Directory of Open Access Journals (Sweden)
Thibault Lemaire
2010-03-01
In this study, a multiphysical description of fluid transport through osteo-articular porous media is presented. Adapted from the model of Moyne and Murad, which is intended to describe the behaviour of clayey materials, this multiscale modelling allows the derivation of the macroscopic response of the tissue from microscopic information. First the model is described. At the pore scale, the electrohydrodynamics equations governing the electrolyte movement are coupled with local electrostatics (the Gauss-Poisson equation) and ionic transport equations. Using a change of variables and an asymptotic expansion method, the macroscopic description is carried out. Results of this model are used to show the importance of coupling effects on the mechanotransduction of compact bone remodelling.
Modelling transport phenomena in a multi-physics context
International Nuclear Information System (INIS)
Marra, Francesco
2015-01-01
Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, studying the impact on quality attributes of the final product, as well as considering the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwaves - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using the above mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, which can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within a product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPU performance, have fostered increasing interest in modelling transport phenomena in systems and processes - such as the ones encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating assisted applications - in terms of interaction with other physical phenomena such as displacement of electric or magnetic fields. This paper deals with the description of approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.
Parallelization of Subchannel Analysis Code MATRA
International Nuclear Information System (INIS)
Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk
2014-01-01
A stand-alone calculation with the MATRA code takes an acceptable amount of computing time for thermal-margin calculations, while a relatively considerable time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of the multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computability of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and the modification of the previous code structure was minimized. The improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated when increasing the number of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing the results with those from MATRA with a single processor. It is also noted that the performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems
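The distribution pattern described above, with each processor handling a share of the subchannels and exchanging results over MPI, looks roughly like the following mpi4py sketch; it illustrates the MPI pattern only and is not the MATRA implementation, and the per-channel solve is a stand-in.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_channels = 10_000
local = np.array_split(np.arange(n_channels), size)[rank]   # this rank's slice of channels

def solve_channel(i):
    """Stand-in for the per-channel thermal-hydraulic solution."""
    return float(np.sin(i) ** 2)

local_result = np.array([solve_channel(i) for i in local])
all_results = comm.gather(local_result, root=0)              # collect partial results on rank 0

if rank == 0:
    merged = np.concatenate(all_results)
    print("channels solved:", merged.size)
```

Launched under an MPI runner (for example, mpiexec -n 25 python script.py), each rank solves its slice independently and only the gather step communicates, which is also why speedup degrades once communication starts to dominate the work per rank.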
Improvement of Parallel Algorithm for MATRA Code
International Nuclear Information System (INIS)
Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun
2014-01-01
The feasibility study to parallelize the MATRA code was conducted at KAERI early this year. As a result, a parallel algorithm for the MATRA code has been developed to reduce the considerable computing time required to solve a big-size problem, such as a whole-core pin-by-pin problem of a general PWR reactor, and to improve the overall performance of the multi-physics coupling calculations. It was shown that the performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For the 1/8-core and whole-core problems of the SMART reactor, a speedup of about 10 was evaluated when 25 processors were used. However, it was also shown that the performance deteriorated as the axial node number increased. In this paper, the procedure of communication between processors is optimized to improve the previous parallel algorithm. To address the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented. It was shown that the speedup was improved and remained stable regardless of the axial node number
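The scaling figures quoted above translate directly into the standard strong-scaling metrics; the sketch below simply evaluates them, with wall-clock times invented so that they reproduce the reported speedup of about 10 on 25 processors.

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Standard strong-scaling metrics: speedup S = T1/Tp, efficiency E = S/p."""
    s = t_serial / t_parallel
    return s, s / n_procs

# Illustrative times chosen to match the quoted speedup of about 10 with 25 processors:
s, e = speedup_and_efficiency(t_serial=25.0, t_parallel=2.5, n_procs=25)
print(f"speedup = {s:.1f}, parallel efficiency = {e:.0%}")   # 10.0, 40%
```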
Bellerby, Tim
2015-04-01
PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks number of processors
A Multiphysics Framework to Learn and Predict in Presence of Multiple Scales
Tomin, P.; Lunati, I.
2015-12-01
Modeling complex phenomena in the subsurface remains challenging due to the presence of multiple interacting scales, which can make it impossible to focus on purely macroscopic phenomena (relevant in most applications) and neglect the processes at the micro-scale. We present and discuss a general framework that allows us to deal with the situation in which the lack of scale separation requires the combined use of different descriptions at different scales (for instance, a pore-scale description at the micro-scale and a Darcy-like description at the macro-scale) [1,2]. The method is based on conservation principles and constructs the macro-scale problem by numerical averaging of micro-scale balance equations. By employing spatiotemporal adaptive strategies, this approach can efficiently solve large-scale problems [2,3]. In addition, being based on a numerical volume-averaging paradigm, it offers a tool to illuminate how macroscopic equations emerge from microscopic processes, to better understand the meaning of microscopic quantities, and to investigate the validity of the assumptions routinely used to construct the macro-scale problems. [1] Tomin, P., and I. Lunati, A Hybrid Multiscale Method for Two-Phase Flow in Porous Media, Journal of Computational Physics, 250, 293-307, 2013 [2] Tomin, P., and I. Lunati, Local-global splitting and spatiotemporal-adaptive Multiscale Finite Volume Method, Journal of Computational Physics, 280, 214-231, 2015 [3] Tomin, P., and I. Lunati, Spatiotemporal adaptive multiphysics simulations of drainage-imbibition cycles, Computational Geosciences, 2015 (under review)
Acceleration methods for multi-physics compressible flow
Peles, Oren; Turkel, Eli
2018-04-01
In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems including turbulent, reactive and also two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low Mach number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method for real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/Implicit smoother requires an approximation of the source term Jacobian. The properties of the Jacobian are very important for the stability of the method. We discuss what the chemical physics theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix. We focus on the implication of Le Chatelier's principle on the sign of the diagonal entries of the Jacobian. We present the implementation of the method for turbulent flow. We use two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is for two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation
Pika: A snow science simulation tool built using the open-source framework MOOSE
Slaughter, A.; Johnson, M.
2017-12-01
The Department of Energy (DOE) is currently investing millions of dollars annually into various modeling and simulation tools for all aspects of nuclear energy. An important part of this effort includes developing applications based on the open-source Multiphysics Object Oriented Simulation Environment (MOOSE; mooseframework.org) from Idaho National Laboratory (INL).Thanks to the efforts of the DOE and outside collaborators, MOOSE currently contains a large set of physics modules, including phase-field, level set, heat conduction, tensor mechanics, Navier-Stokes, fracture and crack propagation (via the extended finite-element method), flow in porous media, and others. The heat conduction, tensor mechanics, and phase-field modules, in particular, are well-suited for snow science problems. Pika--an open-source MOOSE-based application--is capable of simulating both 3D, coupled nonlinear continuum heat transfer and large-deformation mechanics applications (such as settlement) and phase-field based micro-structure applications. Additionally, these types of problems may be coupled tightly in a single solve or across length and time scales using a loosely coupled Picard iteration approach. In addition to the wide range of physics capabilities, MOOSE-based applications also inherit an extensible testing framework, graphical user interface, and documentation system; tools that allow MOOSE and other applications to adhere to nuclear software quality standards. The snow science community can learn from the nuclear industry and harness the existing effort to build simulation tools that are open, modular, and share a common framework. In particular, MOOSE-based multiphysics solvers are inherently parallel, dimension agnostic, adaptive in time and space, fully coupled, and capable of interacting with other applications. The snow science community should build on existing tools to enable collaboration between researchers and practitioners throughout the world, and advance the
Massively parallel multicanonical simulations
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
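A toy version of the parallel-walker idea is sketched below for the two-dimensional Ising model: several independent walkers sample with a shared multicanonical weight function, and the weights are updated from their combined energy histogram at fixed intervals. This is schematic only; the simple weight recursion and all parameters are illustrative assumptions, and a GPU implementation such as the paper's is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_walkers, n_updates, sweeps_per_update = 8, 4, 20, 50
N = L * L                                     # number of spins

def energy(s):
    """Nearest-neighbour Ising energy with periodic boundaries."""
    return -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

def ebin(E):
    """Energy bin index; energies lie in [-2N, 2N] in steps of 4."""
    return (E + 2 * N) // 4

logW = np.zeros(N + 1)                        # log of multicanonical weights per energy bin
spins = [rng.choice([-1, 1], size=(L, L)) for _ in range(n_walkers)]

for _ in range(n_updates):
    hist = np.zeros(N + 1)
    for s in spins:                           # in a parallel code each walker runs on its own thread
        E = energy(s)
        for _ in range(sweeps_per_update * N):
            i, j = rng.integers(L, size=2)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2 * s[i, j] * nb
            # multicanonical acceptance: min(1, W(E_new)/W(E_old))
            if np.log(rng.random()) < logW[ebin(E + dE)] - logW[ebin(E)]:
                s[i, j] *= -1
                E += dE
            hist[ebin(E)] += 1
    logW[hist > 0] -= np.log(hist[hist > 0])  # simple weight recursion from the combined histogram

print("energy bins visited in the last update:", int(np.count_nonzero(hist)))
```

The only point of communication is the shared histogram and weight update, which is what makes the scheme amenable to many independent walkers.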
International Nuclear Information System (INIS)
Roque, Bénédicte; Archier, P.; Bourdot, P.; De Saint-Jean, C.; Gabriel, F.; Palau, J-M.; Pascal, V.; Schneider, D.; Rimpault, G.; Vidal, J-F.
2013-01-01
The neutronic specificities of the ASTRID FR (Advanced Sodium Technological Reactor for Industrial Demonstration) require improved tools with accuracies meeting the design team requirements, particularly for a high safety level and a low sodium void effect. CEA and its industrial partners have launched a large program for developing a new generation of simulation tools facing the challenges of multiphysics coupling and high-performance computing on massively parallel computers. The new APOLLO3® code will take over, after a commissioning period, from the ERANOS2 code currently used for the ASTRID conceptual design. This code will take advantage of new numerical developments for neutronic core reactor calculations. The transition is defined so as to meet the ASTRID development plans and will require the achievement of many tasks
International Nuclear Information System (INIS)
Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.
1997-01-01
Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development or improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as, for instance, the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work mentioned in the same field refers to the simulation of electron channelling in crystals and the simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment
Aspects of computation on asynchronous parallel processors
International Nuclear Information System (INIS)
Wright, M.
1989-01-01
The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues
Mission Trade Space Evaluation through Multiphysics Design and Optimization
National Aeronautics and Space Administration — In recent years, modeling and simulation tools have enabled engineers to design highly complex systems while taking into consideration constraints across multiple...
Multiphysics pore-scale model for the rehydration of porous foods
Sman, van der R.G.M.; Vergeldt, F.J.; As, van H.; Dalen, van G.; Voda, A.; Duynhoven, van J.P.M.
2014-01-01
In this paper we present a pore-scale model describing the multiphysics occurring during the rehydration of freeze-dried vegetables. This pore-scale model is part of a multiscale simulation model, which should explain the effect of microstructure and pre-treatments on the rehydration rate.
Non-Linear Multi-Physics Analysis and Multi-Objective Optimization in Electroheating Applications
Czech Academy of Sciences Publication Activity Database
di Barba, P.; Doležel, Ivo; Mognaschi, M. E.; Savini, A.; Karban, P.
2014-01-01
Vol. 50, No. 2 (2014), pp. 7016604-7016604. ISSN 0018-9464. Institutional support: RVO:61388998. Keywords: coupled multi-physics problems; finite element method; non-linear equations. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering. Impact factor: 1.386, year: 2014
Validation of a 3D multi-physics model for unidirectional silicon solidification
Simons, P.; Lankhorst, A.M.; Habraken, A.; Faber, A.J.; Tiuleanu, D.; Pingel, R.
2012-01-01
A model for transient movements of solidification fronts has been added to X-stream, an existing multi-physics simulation program for high temperature processes with flow and chemical reactions. The implementation uses an enthalpy formulation and works on fixed grids. First we show the results of a
Assessing climate impact on reinforced concrete durability with a multi-physics model
DEFF Research Database (Denmark)
Michel, Alexander; Flint, Madeleine M.
to shorter-term fluctuations in boundary conditions and therefore may underestimate climate change impacts. A highly sensitive fully-coupled, validated, multi-physics model for heat, moisture and ion transport and corrosion was used to assess a reinforced concrete structure located in coastal Norfolk...
DEFF Research Database (Denmark)
Khan, Mohammad Rezwan; Kær, Søren Knudsen
2016-01-01
A three-dimensional multiphysics-based thermal model of a battery pack is presented. The model is intended to demonstrate the cooling mechanism inside the battery pack. Heat transfer (HT) and computational fluid dynamics (CFD) physics are coupled for both time-dependent and steady-state simulatio...
ACME - Algorithms for Contact in a Multiphysics Environment API Version 1.0
International Nuclear Information System (INIS)
BROWN, KEVIN H.; SUMMERS, RANDALL M.; GLASS, MICHEAL W.; GULLERUD, ARNE S.; HEINSTEIN, MARTIN W.; JONES, REESE E.
2001-01-01
An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library
International Nuclear Information System (INIS)
Ammar, Karim
2014-01-01
Since the shutdown of Phenix in 2010, the CEA has not had a Sodium Fast Reactor (SFR) in operating condition. In view of the global energy challenge and the capabilities of fast reactors, the CEA launched a programme for an industrial demonstrator called ASTRID (Advanced Sodium Technological Reactor for Industrial Demonstration), a reactor with an electric power capacity of 600 MW. The objective of the prototype is, first, to respond to environmental constraints and, second, to demonstrate the industrial viability of the SFR. The goal is to reach a safety level at least equal to that of Generation III reactors. The ASTRID design integrates the feedback from Fukushima, waste reprocessing (with minor actinide transmutation), and the associated industry. Installation safety is the priority. In all cases, no radionuclide should be released into the environment. To achieve this objective, it is imperative to predict the impact of uncertainty sources on reactor behaviour. In this context, this thesis aims to develop new optimization methods for SFR cores. The goal is to improve the robustness and reliability of reactors in response to existing uncertainties. We use the ASTRID core as a reference to assess the value of the new methods and tools developed. The impact of multi-physics uncertainties on the calculation of core performance and the use of optimization methods introduce new questions: How can 'complex' cores (i.e., cores associated with high-dimensional design spaces of more than 20 variable parameters) be optimized while taking the uncertainties into account? How do the uncertainties of an optimized core behave compared with those of the reference core? When uncertainties are taken into account, are optimized cores still competitive? Are the optimization gains larger than the uncertainty margins? The thesis helps to develop and implement the methods necessary to take uncertainties into account in the new generation of simulation tools. Statistical methods to ensure the consistency of complex multi-physics simulation results are also
MASTODON: A geosciences simulation tool built using the open-source framework MOOSE
Slaughter, A.
2017-12-01
The Department of Energy (DOE) is currently investing millions of dollars annually into various modeling and simulation tools for all aspects of nuclear energy. An important part of this effort includes developing applications based on the open-source Multiphysics Object Oriented Simulation Environment (MOOSE; mooseframework.org) from Idaho National Laboratory (INL).Thanks to the efforts of the DOE and outside collaborators, MOOSE currently contains a large set of physics modules, including phase field, level set, heat conduction, tensor mechanics, Navier-Stokes, fracture (extended finite-element method), and porous media, among others. The tensor mechanics and contact modules, in particular, are well suited for nonlinear geosciences problems. Multi-hazard Analysis for STOchastic time-DOmaiN phenomena (MASTODON; https://seismic-research.inl.gov/SitePages/Mastodon.aspx)--a MOOSE-based application--is capable of analyzing the response of 3D soil-structure systems to external hazards with current development focused on earthquakes. It is capable of simulating seismic events and can perform extensive "source-to-site" simulations including earthquake fault rupture, nonlinear wave propagation, and nonlinear soil-structure interaction analysis. MASTODON also includes a dynamic probabilistic risk assessment capability that enables analysts to not only perform deterministic analyses, but also easily perform probabilistic or stochastic simulations for the purpose of risk assessment. Although MASTODON has been developed for the nuclear industry, it can be used to assess the risk for any structure subjected to earthquakes.The geosciences community can learn from the nuclear industry and harness the enormous effort underway to build simulation tools that are open, modular, and share a common framework. In particular, MOOSE-based multiphysics solvers are inherently parallel, dimension agnostic, adaptive in time and space, fully coupled, and capable of interacting with other
Directory of Open Access Journals (Sweden)
James G. Worner
2017-05-01
James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship. ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.
Expressing Parallelism with ROOT
Energy Technology Data Exchange (ETDEWEB)
Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab
2017-11-22
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
Expressing Parallelism with ROOT
Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.
2017-10-01
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
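The comparison drawn above with Python's multiprocessing module is easiest to see with a small example of that map-style multi-process pattern; the snippet below is generic Python rather than ROOT code, and the chunked workload is a stand-in for per-file or per-range event processing.

```python
from multiprocessing import Pool

def process_chunk(chunk_id):
    """Stand-in for processing one chunk of an event dataset."""
    return sum(i * i for i in range(chunk_id * 100_000, (chunk_id + 1) * 100_000))

if __name__ == "__main__":
    # Fan the 16 chunks out over 4 worker processes, then merge the partial results.
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_chunk, range(16))
    print("merged result:", sum(partial_results))
```

The fork-process-and-merge structure is the same whether the workers are Python processes or a dedicated multi-process layer inside an analysis framework; what differs is how the framework handles object serialization and result merging.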
Parallel kinematics type, kinematics, and optimal design
Liu, Xin-Jun
2014-01-01
Parallel Kinematics - Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...
International Nuclear Information System (INIS)
Chauliac, Christian; Bestion, Dominique; Crouzet, Nicolas; Aragones, Jose-Maria; Cacuci, Dan Gabriel; Weiss, Frank-Peter; Zimmermann, Martin A.
2010-01-01
The NURESIM project, the numerical simulation platform, is developed in the frame of the NURISP European Collaborative Project (FP7), which includes 22 organizations from 14 European countries. NURESIM intends to be a reference platform providing high quality software tools, physical models, generic functions and assessment results. The NURESIM platform provides an accurate representation of the physical phenomena by promoting and incorporating the latest advances in core physics, two-phase thermal-hydraulics and fuel modelling. It includes multi-scale and multi-physics features, especially for coupling core physics and thermal-hydraulics models for reactor safety. Easy coupling of the different codes and solvers is provided through the use of a common data structure and generic functions (e.g., for interpolation between non-conforming meshes). More generally, the platform includes generic pre-processing, post-processing and supervision functions through the open-source SALOME software, in order to make the codes more user-friendly. The platform also provides the informatics environment for testing and comparing different codes. The contribution summarizes the achievements and ongoing developments of the simulation platform in core physics, thermal-hydraulics, multi-physics, uncertainties and code integration
Tsai, Pedro T. H.
2000-01-01
Approved for public release; distribution is unlimited. This research focuses on the design of a language-independent concept, Glimpse, for performance debugging of multi-threaded programs. This research extends previous work on Graze, a tool designed and implemented for performance debugging of C++ programs. Not only is Glimpse easily portable among different programming languages, it is also useful in many different paradigms ranging from a few long-lived threads to many short-lived...
A multi-physics analysis for the actuation of the SSS in opal reactor
Directory of Open Access Journals (Sweden)
Ferraro Diego
2018-01-01
OPAL is a 20 MWth multi-purpose open-pool type Research Reactor located at Lucas Heights, Australia. It was designed, built and commissioned by INVAP between 2000 and 2006 and has been operated by the Australian Nuclear Science and Technology Organization (ANSTO), showing a very good overall performance. In November 2016, OPAL reached 10 years of continuous operation, becoming one of the most reliable and available reactors of its kind worldwide, with an unbeaten record of being fully operational 307 days a year. One of the enhanced safety features present in this state-of-the-art reactor is the availability of an independent, diverse and redundant Second Shutdown System (SSS), which consists of the drainage of the heavy water reflector contained in the Reflector Vessel. Since high-quality experimental data are available from the reactor commissioning and operation stages, and even from early component design validation stages, several models covering both neutronic and thermo-hydraulic approaches have been developed in recent years using advanced calculation tools and the novel capabilities to couple them. These advanced models were developed in order to assess the capability of such codes to simulate and predict complex behaviours and to perform highly detailed analyses. In this framework, INVAP developed a three-dimensional CFD model that represents the detailed hydraulic behaviour of the Second Shutdown System for an actuation scenario, from which the 3D temporal profiles of the heavy water drainage inside the Reflector Vessel can be obtained. This model was validated by comparing the computational results with experimental measurements performed in a real-size physical model built by INVAP during the early OPAL design engineering stages. Furthermore, detailed 3D Serpent Monte Carlo models are also available, which have already been validated with experimental data from reactor commissioning and operating cycles. In the present work the neutronic and thermohydraulic
A multi-physics analysis for the actuation of the SSS in opal reactor
Ferraro, Diego; Alberto, Patricio; Villarino, Eduardo; Doval, Alicia
2018-05-01
OPAL is a 20 MWth multi-purpose open-pool type Research Reactor located at Lucas Heights, Australia. It was designed, built and commissioned by INVAP between 2000 and 2006 and has been operated by the Australian Nuclear Science and Technology Organization (ANSTO), showing a very good overall performance. In November 2016, OPAL reached 10 years of continuous operation, becoming one of the most reliable and available reactors of its kind worldwide, with an unbeaten record of being fully operational 307 days a year. One of the enhanced safety features present in this state-of-the-art reactor is the availability of an independent, diverse and redundant Second Shutdown System (SSS), which consists of the drainage of the heavy water reflector contained in the Reflector Vessel. Since high-quality experimental data are available from the reactor commissioning and operation stages, and even from early component design validation stages, several models covering both neutronic and thermo-hydraulic approaches have been developed in recent years using advanced calculation tools and the novel capabilities to couple them. These advanced models were developed in order to assess the capability of such codes to simulate and predict complex behaviours and to perform highly detailed analyses. In this framework, INVAP developed a three-dimensional CFD model that represents the detailed hydraulic behaviour of the Second Shutdown System for an actuation scenario, from which the 3D temporal profiles of the heavy water drainage inside the Reflector Vessel can be obtained. This model was validated by comparing the computational results with experimental measurements performed in a real-size physical model built by INVAP during the early OPAL design engineering stages. Furthermore, detailed 3D Serpent Monte Carlo models are also available, which have already been validated with experimental data from reactor commissioning and operating cycles. In the present work the neutronic and thermohydraulic models, available for
The cell method for electrical engineering and multiphysics problems an introduction
Alotto, Piergiorgio; Repetto, Maurizio; Rosso, Carlo
2013-01-01
This book presents a numerical scheme for the solution of field problems governed by partial differential equations: the cell method. The technique lends itself naturally to the solution of multiphysics problems with several interacting phenomena. The Cell Method, based on a space-time tessellation, is intimately related to the work of Tonti and to his ideas of classification diagrams or, as they are nowadays called, Tonti diagrams: a graphical representation of the problem's equations made possible by a suitable selection of a space-time framework relating physical variables to each other. The main features of the cell method are presented and links with many other discrete numerical methods (finite integration techniques, finite difference time domain, finite volumes, mimetic finite differences, etc.) are discussed. After outlining the theoretical basis of the method, a set of physical problems which have been solved with the cell method is described. These single and multiphysics problems stem from the aut...
Multiphysics simulation by design for electrical machines, power electronics and drives
Rosu, Marius; Lin, Dingsheng; Ionel, Dan M; Popescu, Mircea; Blaabjerg, Frede; Rallabandi, Vandana; Staton, David
2018-01-01
This book combines the knowledge of experts from both academia and the software industry to present theories of multiphysics simulation by design for electrical machines, power electronics, and drives. The comprehensive design approach described within supports new applications required by technologies sustaining high drive efficiency. The highlighted framework considers the electric machine at the heart of the entire electric drive. The book also emphasizes the simulation by design concept--a concept that frames the entire highlighted design methodology, which is described and illustrated by various advanced simulation technologies. Multiphysics Simulation by Design for Electrical Machines, Power Electronics and Drives begins with the basics of electrical machine design and manufacturing tolerances. It also discusses fundamental aspects of the state of the art design process and includes examples from industrial practice. It explains FEM-based analysis techniques for electrical machine design--providing deta...
Multiphysics Model Development and the Core Analysis for In Situ Breeding and Burning Reactor
Directory of Open Access Journals (Sweden)
Shengyi Si
2013-01-01
The in situ breeding and burning reactor (ISBBR), which makes use of the outstanding breeding capability of metallic pellets and the excellent irradiation-resistant performance of SiCf/SiC ceramic composite cladding, can achieve the design goals of an ultralong cycle and ultrahigh burnup while maintaining a stable radial power distribution during the cycle life without refueling and shuffling. Since the characteristics of the fuel pellet and cladding are different from those of the traditional fuel rod with ceramic pellets and metallic cladding, the multiphysics behaviors in the ISBBR are also quite different. A computer code, named TANG, to model the specific multiphysics behaviors in the ISBBR has been developed. The primary calculation results provided by TANG demonstrate that the ISBBR has an excellent comprehensive GEN-IV performance and great development potential.
A Multi-physics Approach to Understanding Low Porosity Soils and Reservoir Rocks
Prasad, M.; Mapeli, C.; Livo, K.; Hasanov, A.; Schindler, M.; Ou, L.
2017-12-01
We present recent results on our multiphysics approach to rock physics, in which we evaluate geophysical measurements by simultaneously measuring petrophysical properties or imaging strains. In this paper, we present simultaneously measured acoustic and electrical anisotropy data as functions of pressure. Similarly, we present strains and strain localization images acquired simultaneously with acoustic measurements, as well as NMR T2 relaxations on pressurized fluids and on rocks saturated with these pressurized fluids. Such multiphysics experiments allow us to constrain rock physics models and to assign appropriate causative mechanisms during their development. They also allow us to decouple various effects, for example fluid versus pressure, on geophysical measurements. We show applications to reservoir characterization as well as CO2 sequestration.
A posteriori error analysis of multiscale operator decomposition methods for multiphysics models
International Nuclear Information System (INIS)
Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T
2008-01-01
Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples
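As a point of reference for the kind of estimate involved (the generic goal-oriented argument for a single linear problem, not the specific operator-decomposition estimates derived in the paper), suppose $A u = f$ is solved approximately by $u_h$ and the quantity of interest is $Q(u) = (\psi, u)$. Introducing the adjoint (generalized Green's function) problem gives

$$ A^{*}\varphi = \psi, \qquad Q(u) - Q(u_h) = (A^{*}\varphi,\, u - u_h) = (\varphi,\, f - A u_h), $$

so the error in the computed quantity is the residual of the numerical solution weighted by the adjoint solution; operator decomposition adds further terms that account for the transfer of information between the component solves.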
Multi-physics modeling in electrical engineering. Application to a magneto-thermo-mechanical model
International Nuclear Information System (INIS)
Journeaux, Antoine
2013-01-01
The modeling of multi-physics problems in electrical engineering is presented, with an application to the numerical computation of vibrations within the end windings of large turbo-generators. This study is divided into four parts: the imposition of current density, the computation of local forces, the transfer of data between disconnected meshes, and the computation of multi-physics problems using weak coupling. Firstly, the representation of current density within numerical models is presented. The process is decomposed into two stages: the construction of the initial current density, and the determination of a divergence-free field. The complexity of the geometries makes analytical methods impossible, so a method based on an electrokinetic problem and a fully geometrical method are tested. The geometrical method produces results closer to the real current density than the electrokinetic approach. Methods to compute forces are numerous, and this study focuses on the virtual work principle and the Laplace force, following the recommendations of the literature. The Laplace force is highly accurate but is applicable only if the permeability is uniform. The virtual work principle is finally preferred, as it appears to be the most general way to compute local forces. Mesh-to-mesh data transfer methods are developed to compute multi-physics models using multiple meshes adapted to the subproblems and multiple computational codes. The interpolation method, a locally conservative projection, and an orthogonal projection are compared. The interpolation method is fast but highly diffusive, while the orthogonal projections are highly accurate. The locally conservative method produces results similar to the orthogonal projection but avoids the assembly of linear systems. The numerical computation of multi-physics problems using multiple meshes and projections is then presented. However, for a given class of problems, there is not a unique coupling
International Nuclear Information System (INIS)
Donkor, M. O.
2013-06-01
A computational fluid dynamics (CFD) technique was adopted to investigate the hydrodynamics of gold leaching tanks. The COMSOL Multiphysics 3.4 code provided the platform for modelling and simulation of the flow pattern of the gold leaching process. The impeller motion was integrated in the geometry using a simplified numerical method. The k-ε model was used to solve the Reynolds-averaged Navier-Stokes equations, and velocity distributions in the vertical and horizontal sections of the tank were obtained. It was found that the flow distribution in the simulated flow field was consistent with the characteristic down-pumping flow pattern of axial impellers. The convergence of the iterative procedure was tested and reasonable predictions were achieved for an industrial reactor. There were significant variations in velocity magnitude, with the impeller discharge region recording the highest values. The CFD modelling was consistent with the tracer test results and demonstrated the use of the reactor's active volume. The CFD results showed good agreement with information from the literature. Because CFD is capable of predicting the complete velocity distribution and of simulating the tracer experiment in a tank, it provides a good alternative for carrying out residence time distribution (RTD) studies. CFD modelling is a useful and informative tool for analyzing the problematic hydrodynamics of gold leaching tanks and for the design of theoretical corrective measures, and can be extended to other plants such as water treatment and oil processing plants. (author)
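For context on the residence time distribution (RTD) analysis mentioned above, the sketch below shows how an RTD and the mean residence time can be extracted from a tracer response curve; it is a minimal illustration in Python/NumPy, and the time and concentration values are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical tracer response at the tank outlet (time in s, concentration
# in arbitrary units); purely illustrative values.
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 300.0, 360.0, 420.0])
c = np.array([0.00, 0.15, 0.42, 0.55, 0.38, 0.20, 0.08, 0.02])

def trapz(y, x):
    """Composite trapezoid rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# RTD: E(t) = c(t) / integral of c(t) dt; its first moment is the mean
# residence time, which can be compared with the nominal value V/Q.
E = c / trapz(c, t)
t_mean = trapz(t * E, t)
print(f"mean residence time ~ {t_mean:.0f} s")
```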
AC losses in horizontally parallel HTS tapes for possible wireless power transfer applications
Shen, Boyang; Geng, Jianzhao; Zhang, Xiuchang; Fu, Lin; Li, Chao; Zhang, Heng; Dong, Qihuan; Ma, Jun; Gawith, James; Coombs, T. A.
2017-12-01
This paper presents the concept of using horizontally parallel HTS tapes, together with an AC loss study and an investigation of possible wireless power transfer (WPT) applications. An example of three parallel HTS tapes was proposed, whose AC losses were studied both experimentally, using the electrical method, and by simulation, using the 2D H-formulation on the FEM platform of COMSOL Multiphysics. The electromagnetic induction around the three parallel tapes was monitored in the COMSOL simulation. The electromagnetic induction and AC losses generated by a conventional three-turn coil were simulated as well, and then compared to the case of three parallel tapes carrying the same AC transport current. The analysis demonstrates that HTS parallel tapes could potentially be used in wireless power transfer systems, with lower total AC losses than conventional HTS coils.
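For reference, the 2D H-formulation used in such COMSOL models is usually closed with an E-J power law for the superconductor; a generic statement (with the critical electric field E_c, critical current density J_c and power index n treated as generic parameters, not values from this study) is

$$ \nabla \times \left( \rho \, \nabla \times \mathbf{H} \right) = -\mu_0 \mu_r \, \frac{\partial \mathbf{H}}{\partial t}, \qquad \mathbf{J} = \nabla \times \mathbf{H}, \qquad \rho = \frac{E_c}{J_c} \left( \frac{|\mathbf{J}|}{J_c} \right)^{n-1}, $$

and the AC loss per cycle follows by integrating the local dissipation over the conductor cross-section and one period of the transport current:

$$ Q = \int_{0}^{1/f} \int_{\Omega} \mathbf{E} \cdot \mathbf{J} \; \mathrm{d}\Omega \, \mathrm{d}t . $$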
Multiphysical Simulation of PT-CT Contact with Outer Boundary Condition
Energy Technology Data Exchange (ETDEWEB)
Chang, Se-Myong [Kunsan National Univ., Gunsan (Korea, Republic of); Kim, Hyoung Tae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
The present study reports preliminary calculation results for the ICSP activities, in which the COMSOL Multiphysics code is used to simulate plastic deformation of a pressure tube as a result of the interaction of stress and temperature. It is shown that the thermal stress model of COMSOL is capable of simulating the multiple heat transfer modes (including radiation heat transfer and heat conduction) and the stress-strain response in the simplified 2-D problem. The benchmark test result for radiation heat transfer is in good agreement with the analytical solution for the concentric configuration of PT (pressure tube) and CT (calandria tube). In this paper, the authors performed an open computation of these multi-physical phenomena by changing the outer boundary condition of the CT according to the experimental result of the ICSP. A series of simulations has been performed based on the benchmark test proposed by IAEA/ICSP. The unsteady multi-physics was treated with several numerical models in COMSOL. The comparison with the CATHENA code shows good agreement as the accuracy of the numerical method (Gaussian quadrature) is increased. The open computation for the validation of this numerical code is still ongoing, and the temperature inside and outside the PT shows very good agreement.
Optimization of an implicit constrained multi-physics system for motor wheels of electric vehicle
International Nuclear Information System (INIS)
Lei, Fei; Du, Bin; Liu, Xin; Xie, Xiaoping; Chai, Tian
2016-01-01
In this paper, an implicit constrained multi-physics model of a motor wheel for an electric vehicle is built and then optimized. A novel optimization approach is proposed to solve the compliance problem between implicit constraints and stochastic global optimization. Firstly, the multi-physics model of the motor wheel is built from the theories of structural mechanics, electromagnetism and thermal physics. Then, implicit constraints are applied from the vehicle performances and magnetic characteristics. Implicit constrained optimization is carried out by a series of unconstrained optimizations and verifications. In practice, sequentially updated subspaces are designed to completely substitute the original design space in local areas. In each subspace, a solution is obtained and is then verified against the implicit constraints. Optimal solutions which satisfy the implicit constraints are accepted as final candidates. The final global optimal solution is selected from those candidates. Discussions are carried out to identify the differences between optimal solutions of the unconstrained problem and of different implicit constrained problems. Results show that the implicit constraints have significant influences on the optimal solution and that the proposed approach is effective in finding the optima. - Highlights: • An implicit constrained multi-physics model is built for sizing a motor wheel. • Vehicle dynamic performances are applied as implicit constraints for the nonlinear system. • An efficient novel optimization is proposed to explore the constrained design space. • The motor wheel is optimized to achieve maximum efficiency on vehicle dynamics. • Influences of implicit constraints on vehicle performances are compared and analyzed.
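A minimal sketch of the general strategy described above (a sequence of unconstrained sub-optimizations over shrinking local subspaces, with candidates screened against implicit, simulation-based constraints) is given below; the objective, the constraint check and all numerical values are placeholders for illustration, not the authors' motor-wheel model.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Placeholder cost (e.g., stand-in for negative motor efficiency)
    return float(np.sum((x - 1.5) ** 2))

def implicit_constraints_ok(x):
    # Stand-in for the expensive simulations (vehicle dynamics, magnetics)
    # that define the implicit constraints.
    return bool(np.all(x > 0.5)) and objective(x) < 5.0

rng = np.random.default_rng(0)
candidates = []
center, radius = np.ones(3), 1.0
for _ in range(10):                                # sequentially updated subspaces
    x0 = center + rng.uniform(-radius, radius, size=3)
    res = minimize(objective, x0, method="Nelder-Mead")   # unconstrained step
    if implicit_constraints_ok(res.x):             # verify implicit constraints
        candidates.append((res.fun, res.x))
        center, radius = res.x, 0.5 * radius       # shrink the local subspace

best = min(candidates, key=lambda c: c[0]) if candidates else None
print(best)
```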
Multiphysical Simulation of PT-CT Contact with Outer Boundary Condition
International Nuclear Information System (INIS)
Chang, Se-Myong; Kim, Hyoung Tae
2016-01-01
The present study reports preliminary calculation results for the ICSP activities, in which the COMSOL Multiphysics code is used to simulate plastic deformation of a pressure tube as a result of the interaction of stress and temperature. It is shown that the thermal stress model of COMSOL is capable of simulating the multiple heat transfer modes (including radiation heat transfer and heat conduction) and the stress-strain response in the simplified 2-D problem. The benchmark test result for radiation heat transfer is in good agreement with the analytical solution for the concentric configuration of PT (pressure tube) and CT (calandria tube). In this paper, the authors performed an open computation of these multi-physical phenomena by changing the outer boundary condition of the CT according to the experimental result of the ICSP. A series of simulations has been performed based on the benchmark test proposed by IAEA/ICSP. The unsteady multi-physics was treated with several numerical models in COMSOL. The comparison with the CATHENA code shows good agreement as the accuracy of the numerical method (Gaussian quadrature) is increased. The open computation for the validation of this numerical code is still ongoing, and the temperature inside and outside the PT shows very good agreement
Writing parallel programs that work
CERN. Geneva
2012-01-01
Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of existing serial programs. Once the limitations are known, the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker: Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI), working on the auto-parallelizing compiler (KAP), and was involved in th...
Preliminary Multiphysics Analyses of HFIR LEU Fuel Conversion using COMSOL
Energy Technology Data Exchange (ETDEWEB)
Freels, James D [ORNL; Bodey, Isaac T [ORNL; Arimilli, Rao V [ORNL; Curtis, Franklin G [ORNL; Ekici, Kivanc [ORNL; Jain, Prashant K [ORNL
2011-06-01
The research documented herein was performed by several individuals across multiple organizations. We have previously acknowledged our funding for the project, but another common thread among the authors of this document, and hence the research performed, is the analysis tool COMSOL. The research has been divided into categories to allow the COMSOL analysis to be performed independently to the extent possible. As will be seen herein, the research has progressed to the point where it is expected that next year (2011) a large fraction of the research will require collaboration of our efforts as we progress almost exclusively into three-dimensional (3D) analysis. To the extent possible, we have tried to segregate the development effort into two-dimensional (2D) analysis in order to arrive at techniques and methodology that can be extended to 3D models in a timely manner. The Research Reactors Division (RRD) of ORNL has contracted with the University of Tennessee, Knoxville (UTK) Mechanical, Aerospace and Biomedical Engineering Department (MABE) to perform a significant fraction of this research. This group has been chosen due to their expertise and long-term commitment in using COMSOL and also because the participating students are able to work onsite on a part-time basis due to the close proximity of UTK with the ORNL campus. The UTK research has been governed by a statement of work (SOW) which clearly defines the specific tasks reported herein on the perspective areas of research. Ph.D. student Isaac T. Bodey has focused on heat transfer, fluid flow, modeling, and meshing issues and has been aided by his major professor Dr. Rao V. Arimilli and is the primary contributor to Section 2 of this report. Ph.D student Franklin G. Curtis has been focusing exclusively on fluid-structure interaction (FSI) due to the mechanical forces acting on the plate caused by the flow and has also been aided by his major professor Dr. Kivanc Ekici and is the primary contributor to Section
Parallel PDE-Based Simulations Using the Common Component Architecture
International Nuclear Information System (INIS)
McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia
2006-01-01
The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications
Pattern-Driven Automatic Parallelization
Directory of Open Access Journals (Sweden)
Christoph W. Kessler
1996-01-01
This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
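As an illustration of the kind of transformation such a system automates, the sketch below hand-parallelizes one recognized pattern (a dot-product reduction) by splitting it across worker processes and combining the partial sums. It uses Python's multiprocessing purely for illustration; the system described in the article targets Fortran codes on distributed-memory message-passing machines.

```python
from multiprocessing import Pool
import numpy as np

def partial_dot(args):
    a, b = args
    return float(np.dot(a, b))            # recognized "dot product" pattern

def parallel_dot(a, b, nproc=4):
    # Split the reduction across processes, then combine the partial sums.
    a_chunks = np.array_split(a, nproc)
    b_chunks = np.array_split(b, nproc)
    with Pool(nproc) as pool:
        partials = pool.map(partial_dot, list(zip(a_chunks, b_chunks)))
    return sum(partials)

if __name__ == "__main__":
    x = np.arange(1_000_000, dtype=float)
    y = np.ones_like(x)
    print(parallel_dot(x, y))
```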
Synchronization Techniques in Parallel Discrete Event Simulation
Lindén, Jonatan
2018-01-01
Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...
Energy Technology Data Exchange (ETDEWEB)
Kim, Seung Jun [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-07-17
The current study aims to predict the steady state power of a generic solution vessel and to develop a corresponding heat transfer coefficient correlation for a Moly99 production facility by conducting a fully coupled multi-physics simulation. A prediction of steady state power for the current application is inherently interconnected between the thermal hydraulic characteristics (i.e. multiphase computational fluid dynamics solved by ANSYS-Fluent 17.2) and the corresponding neutronic behavior (i.e. particle transport solved by MCNP6.2) in the solution vessel. Thus, the development of a coupling methodology is vital to understanding the system behavior for a variety of system designs and postulated operating scenarios. In this study, we report on the k-effective (keff) calculation for the baseline solution vessel configuration with a selected solution concentration using MCNP K-code modeling. The associated correlations of thermal properties (e.g. density, viscosity, thermal conductivity, specific heat) at the selected solution concentration are developed based on existing experimental measurements in the open literature. The numerical coupling methodology between multiphase CFD and MCNP is successfully demonstrated, and the detailed coupling procedure is documented. In addition, improved coupling methods capturing realistic physics in the solution vessel thermal-neutronic dynamics are proposed and tested further (i.e. dynamic height adjustment, mull-cell approach). As a key outcome of the current study, a multi-physics coupling methodology between MCFD and MCNP is demonstrated and tested for four different operating conditions. Those operating conditions are determined based on the neutron source strength at a fixed geometry condition. The steady state powers for the generic solution vessel at various operating conditions are reported, and a generalized correlation of the heat transfer coefficient for the current application is discussed. The assessment of multi-physics
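The coupling between a thermal-hydraulics solver and a Monte Carlo neutronics solver in studies of this kind is commonly organized as a Picard (fixed-point) iteration with under-relaxation. The sketch below shows only that generic pattern with scalar placeholder feedback models; it is not the ANSYS-Fluent/MCNP interface used in the report.

```python
# Generic Picard coupling loop with under-relaxation between a neutronics
# power estimate and a thermal-hydraulics temperature estimate. Both solver
# calls are placeholders standing in for MCNP and multiphase CFD.
def neutronics_power(T):                   # power falls with temperature (assumption)
    return 100.0 / (1.0 + 2.0e-3 * (T - 300.0))

def thermal_hydraulics_temperature(P):     # temperature rises with power (assumption)
    return 300.0 + 1.5 * P

P, omega = 50.0, 0.5                       # initial guess and relaxation factor
for it in range(100):
    T = thermal_hydraulics_temperature(P)
    P_new = neutronics_power(T)
    if abs(P_new - P) < 1e-6:
        break
    P = (1.0 - omega) * P + omega * P_new  # under-relaxed update
print(f"converged after {it} iterations: P = {P:.3f}, T = {T:.1f}")
```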
Modelling organs, tissues, cells and devices using Matlab and Comsol multiphysics
Dokos, Socrates
2017-01-01
This book presents a theoretical and practical overview of computational modeling in bioengineering, focusing on a range of applications including electrical stimulation of neural and cardiac tissue, implantable drug delivery, cancer therapy, biomechanics, cardiovascular dynamics, as well as fluid-structure interaction for modelling of organs, tissues, cells and devices. It covers the basic principles of modeling and simulation with ordinary and partial differential equations using MATLAB and COMSOL Multiphysics numerical software. The target audience primarily comprises postgraduate students and researchers, but the book may also be beneficial for practitioners in the medical device industry.
Study of stability of dc glow discharges with the use of Comsol Multiphysics software
Energy Technology Data Exchange (ETDEWEB)
Almeida, P G C; Benilov, M S; Faria, M J [Departamento de Fisica, Universidade da Madeira, Largo do Municipio, 9000 Funchal (Portugal)
2011-10-19
Stability of different axially symmetric modes of current transfer in dc glow discharges is investigated in the framework of the linear stability theory with the use of Comsol Multiphysics software. Conditions of current-controlled microdischarges in xenon are treated as an example. Both real and complex eigenvalues have been detected, meaning that perturbations can vary with time both monotonically and with oscillations. In general, results given by the linear stability theory confirm intuitive concepts developed in the literature and conform to the experiment. On the other hand, suggestions are provided for further experimental and theoretical work.
Advanced graphical user interface for multi-physics simulations using AMST
Hoffmann, Florian; Vogel, Frank
2017-07-01
Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code is the lack of a graphical user interface (GUI), meaning that all pre-processing has to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.
Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem
Energy Technology Data Exchange (ETDEWEB)
Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Grudzinski, J. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, C. H. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-12-21
This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.
Energy Technology Data Exchange (ETDEWEB)
Bonaccorsi, Th
2007-09-15
A Material Testing Reactor (MTR) makes it possible to irradiate material samples under intense neutron and photonic fluxes. These experiments are carried out in experimental devices located in the reactor core or at its periphery (reflector). Available physics simulation tools usually treat only one physics field in a very precise way. Multi-physics simulations of irradiation experiments therefore require a sequential use of several calculation codes and data exchanges between these codes: this corresponds to a coupling of problems. In order to facilitate multi-physics simulations, this thesis sets up a data model based on data-processing objects, called Technological Entities. This data model is common to all of the physics fields. It permits defining the geometry of an irradiation device in a parametric way and associating material information with it. Numerical simulations are encapsulated into interfaces providing the ability to call specific functionalities with the same command (to initialize data, to launch calculations, to post-process, to get results, ...). Thus, once encapsulated, numerical simulations can be re-used for various studies. This data model is developed as a SALOME platform component. The first application case made it possible to perform neutronic simulations (OSIRIS reactor and RJH) coupled with fuel behavior simulations. In a next step, thermal hydraulics could also be taken into account. In addition to the improvement of the calculation accuracy due to the coupling of physical phenomena, the time spent in the development phase of the simulation is largely reduced, and the possibilities of uncertainty treatment are under consideration. (author)
International Nuclear Information System (INIS)
Cacuci, Dan Gabriel; Badea, Madalina Corina
2014-01-01
Highlights: • We applied the PMCMPS methodology to a paradigm neutron diffusion model. • We underscore the main steps in applying PMCMPS to treat very large coupled systems. • PMCMPS reduces the uncertainties in the optimally predicted responses and model parameters. • PMCMPS is for sequentially treating coupled systems that cannot be treated simultaneously. - Abstract: This work presents paradigm applications to reactor physics of the innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS)” developed by Cacuci (2014). This methodology enables the assimilation of experimental and computational information and computes optimally predicted responses and model parameters with reduced predicted uncertainties, taking fully into account the coupling terms between the multi-physics systems, but using only the computational resources that would be needed to perform predictive modeling on each system separately. The paradigm examples presented in this work are based on a simple neutron diffusion model, chosen so as to enable closed-form solutions with clear physical interpretations. These paradigm examples also illustrate the computational efficiency of the PMCMPS, which enables the assimilation of additional experimental information, with a minimal increase in computational resources, to reduce the uncertainties in predicted responses and best-estimate values for uncertain model parameters, thus illustrating how very large systems can be treated without loss of information in a sequential rather than simultaneous manner
Massimino, G.; Colombo, A.; D'Alessandro, L.; Procopio, F.; Ardito, R.; Ferrera, M.; Corigliano, A.
2018-05-01
In this paper a complete multiphysics modelling via the finite element method (FEM) of an air-coupled array of piezoelectric micromachined ultrasonic transducers (PMUT) and its experimental validation are presented. Two numerical models are described for the single transducer, axisymmetric and 3D, with the following features: the presence of fabrication-induced residual stresses, which determine a non-linear initial deformed configuration of the diaphragm and a substantial fundamental mode frequency shift; and the multiple coupling between different physics, namely electro-mechanical coupling for the piezoelectric model, thermo-acoustic-structural interaction and thermo-acoustic-pressure interaction for wave propagation in the surrounding fluid. The model for the single transducer is then extended to the full set of PMUTs belonging to the silicon die in a 4 × 4 array configuration. The results of the numerical multiphysics models are compared with experimental ones in terms of the initial static pre-deflection, the spectrum of the diaphragm central point, and the sound intensity at 3.5 cm along the vertical axis of the diaphragm.
Advanced multi-physics simulation capability for very high temperature reactors
International Nuclear Information System (INIS)
Lee, Hyun Chul; Tak, Nam Il; Jo Chang Keun; Noh, Jae Man; Cho, Bong Hyun; Cho, Jin Woung; Hong, Ser Gi
2012-01-01
The purpose of this research is to develop methodologies and computer codes for high-fidelity multi-physics analysis of very high temperature gas-cooled reactors (VHTRs). The research project was performed through the Korea-US I-NERI program. The main research topics were the development of methodologies for high-fidelity 3-D whole core transport calculation, development of the DeCART code for VHTR reactor physics analysis, generation of a VHTR-specific 190-group cross-section library for the DeCART code, development of the DeCART/CORONA coupled code system for neutronics/thermo-fluid multi-physics analysis, and benchmark analysis against various benchmark problems derived from the PMR200 reactor. The methodologies and code systems will be utilized as key technologies in the Nuclear Hydrogen Development and Demonstration program. Export of the code systems is expected in the near future, and the code systems developed in this project are expected to contribute to the development and export of nuclear hydrogen production systems
A Multi-Physics simulation of the Reactor Core using CUPID/MASTER
International Nuclear Information System (INIS)
Lee, Jae Ryong; Cho, Hyoung Kyu; Yoon, Han Young; Cho, Jin Young; Jeong, Jae Jun
2011-01-01
KAERI has been developing a component-scale thermal hydraulics code, CUPID. The aim of the code is multi-dimensional, multi-physics and multi-scale thermal hydraulics analysis. In our previous papers, the CUPID code proved capable of multidimensional thermal hydraulic analysis, having been validated against various conceptual problems and experimental data. For the numerical closure, it adopts a three-dimensional, transient, two-phase and three-field model, and includes physical models and correlations of the interfacial mass, momentum, and energy transfer. For multi-scale analysis, CUPID is in the process of being merged with the system-scale thermal hydraulic code MARS. In the present paper, a multi-physics simulation was performed by coupling CUPID with the three-dimensional neutron kinetics code MASTER. MASTER is merged into CUPID as a dynamic link library (DLL). The APR1400 reactor core during a control rod drop/ejection accident was simulated as an example, adopting a porous media approach to represent the fuel assemblies. The following sections present the numerical modeling of the reactor core, the coupling of the kinetics code, and the simulation results
Advanced Multiphysics Thermal-Hydraulics Models for the High Flux Isotope Reactor
Energy Technology Data Exchange (ETDEWEB)
Jain, Prashant K [ORNL; Freels, James D [ORNL
2015-01-01
Engineering design studies to determine the feasibility of converting the High Flux Isotope Reactor (HFIR) from using highly enriched uranium (HEU) to low-enriched uranium (LEU) fuel are ongoing at Oak Ridge National Laboratory (ORNL). This work is part of an effort sponsored by the US Department of Energy (DOE) Reactor Conversion Program. HFIR is a very high flux, pressurized, light-water-cooled and moderated, flux-trap type research reactor. HFIR's current missions are to support neutron scattering experiments, isotope production, and materials irradiation, including neutron activation analysis. Advanced three-dimensional multiphysics models of HFIR fuel were developed in COMSOL software for safety basis (worst case) operating conditions. Several types of physics, including multilayer heat conduction, conjugate heat transfer, turbulent flow (RANS model) and structural mechanics, were combined and solved for HFIR's inner and outer fuel elements. Alternate design features of the new LEU fuel were evaluated using these multiphysics models. This work led to a new, preliminary reference LEU design that combines a permanent absorber in the lower unfueled region of all of the fuel plates, a burnable absorber in the inner element side plates, and a relocated and reshaped (but still radially contoured) fuel zone. Preliminary results of estimated thermal safety margins are presented. Fuel design studies and model enhancement continue.
Conductance Thin Film Model of Flexible Organic Thin Film Device using COMSOL Multiphysics
Carradero-Santiago, Carolyn; Vedrine-Pauléus, Josee
We developed a virtual model to analyze the electrical conductivity of multilayered thin films placed above a conducting graphene layer and a flexible polyethylene terephthalate (PET) substrate. The organic layers comprise poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) as a hole-conducting layer, poly(3-hexylthiophene-2,5-diyl) (P3HT) as a p-type layer, and phenyl-C61-butyric acid methyl ester (PCBM) as an n-type layer, with aluminum as a top conductor. COMSOL Multiphysics was the software we used to develop the virtual model and analyze potential variations and conductivity through the thin-film layers. COMSOL Multiphysics software allows simulation and modeling of physical phenomena represented by differential equations such as heat transfer, fluid flow, electromagnetism, and structural mechanics. In this work, using the AC/DC electric currents module, we defined the geometry of the model and the properties for each of the six layers: PET/graphene/PEDOT:PSS/P3HT/PCBM/aluminum. We analyzed the model with varying thicknesses of the graphene and active layers (P3HT/PCBM). This simulation allowed us to analyze the electrical conductivity and to visualize the model under varying voltage potential, or bias across the plates, which is useful for applications in solar cell devices.
Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.
2010-10-01
This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). This method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm (that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as a boundary condition on the pipe surface to reconstruct the magnetic field involved) for computing the sensitivity of the EMF. Along with an in-depth discussion of factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors were studied experimentally by installing a baffle at the inflow port of the EMF. Results of the dry calibration on an actual EMF were compared against flow-rig calibration; excellent agreement (within 0.3%) between dry calibration and flow-rig tests verifies the multiphysical computation of the fields and the robustness of the method. As it requires no actual flow, the dry calibration is particularly useful for calibrating large-diameter EMFs, where conventional flow-rig methods are often costly and difficult to implement.
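For orientation, dry calibration builds on the classical weight-function description of the induced electrode voltage; in Bevir's virtual-current form, with j_v the virtual current density produced by driving a unit current between the electrodes, it can be written as

$$ U \;=\; \int_{V} \left( \mathbf{B} \times \mathbf{j}_v \right) \cdot \mathbf{v} \; \mathrm{d}V , $$

so that once B is reconstructed from boundary measurements and j_v from the electrode geometry, the flowmeter sensitivity for a given velocity profile v follows by numerical integration (this is the generic textbook statement, not the specific formulation detailed in the paper).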
Multi-physics Model for the Aging Prediction of a Vanadium Redox Flow Battery System
International Nuclear Information System (INIS)
Merei, Ghada; Adler, Sophie; Magnor, Dirk; Sauer, Dirk Uwe
2015-01-01
Highlights: • Presents a multi-physics model of a vanadium redox-flow battery. • This model is essential for aging prediction. • It is applicable to VRB systems of different power and capacity ratings. • Good results compared with current research in this field. - Abstract: The all-vanadium redox-flow battery is an attractive candidate to compensate for the fluctuations of non-dispatchable renewable energy generation. While several models for vanadium redox batteries have been described, no model adequate for aging prediction has yet been published. Therefore, the present paper presents a multi-physics model which determines all parameters that are essential for an aging prediction. In a following paper, the corresponding aging model of the vanadium redox flow battery (VRB) is described. The model combines existing models for the mechanical losses and temperature development with new approaches for the battery's side reactions. The model was implemented in Matlab/Simulink. The modeling results presented in the paper prove to be consistent with the experimental results of other research groups
Productive Parallel Programming: The PCN Approach
Directory of Open Access Journals (Sweden)
Ian Foster
1992-01-01
Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.
International Nuclear Information System (INIS)
Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which
International Nuclear Information System (INIS)
Wagner, J.C.; Mosher, S.W.; Evans, T.M.; Peplow, D.E.; Turner, J.A.
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method
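A schematic of the domain-decomposition idea (each processor owns a subset of the tally regions and the partial tallies are combined by a global reduction) is sketched below using mpi4py and NumPy; it illustrates only the partitioning pattern, not the ORNL implementation, and the scoring model is invented.

```python
# Run with e.g.: mpiexec -n 4 python dd_tally_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_cells = 64
my_cells = np.array_split(np.arange(n_cells), size)[rank]   # this rank's cells

rng = np.random.default_rng(seed=rank)
local_tally = np.zeros(n_cells)
for _ in range(100_000):
    cell = rng.choice(my_cells)                 # particle scores in a local cell
    local_tally[cell] += rng.exponential()      # hypothetical track-length score

global_tally = np.zeros(n_cells)
comm.Reduce(local_tally, global_tally, op=MPI.SUM, root=0)
if rank == 0:
    print(global_tally[:8] / (100_000 * size))
```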
Morse, H Stephen
1994-01-01
Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Using a Linux Cluster for Parallel Simulations of an Active Magnetic Regenerator Refrigerator
DEFF Research Database (Denmark)
Petersen, T.F.; Pryds, N.; Smith, A.
2006-01-01
This paper describes the implementation of a Comsol Multiphysics model on a Linux computer cluster. The Magnetic Refrigerator (MR) is a special type of refrigerator with potential to reduce the energy consumption of household refrigeration by a factor of two or more. To conduct numerical analysis.... The coupled set of equations and the transient convergence towards the final steady state mean that the model has an excessive solution time. To make parametric studies practical, the developed model was implemented on a cluster to allow parallel simulations, which has decreased the solution time...
Czech Academy of Sciences Publication Activity Database
Ferfecki, P.; Zapoměl, Jaroslav; Kozánek, Jan
2017-01-01
Roč. 104, February (2017), s. 1-11 ISSN 0965-9978 R&D Projects: GA ČR GA15-06621S Institutional support: RVO:61388998 Keywords : magnetorheological squeeze film dampers * magnetorheological oils * closed form formulas * multiphysical problem Subject RIV: JR - Other Machinery OBOR OECD: Mechanical engineering Impact factor: 3.000, year: 2016
Perkó, Z.
2015-01-01
This thesis presents novel adjoint and spectral methods for the sensitivity and uncertainty (S&U) analysis of multi-physics problems encountered in the field of reactor physics. The first part focuses on the steady state of reactors and extends the adjoint sensitivity analysis methods well
Parallel imaging microfluidic cytometer.
Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching
2011-01-01
By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.
Computer-Aided Parallelizer and Optimizer
Jin, Haoqiang
2011-01-01
The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
Parallelization of the preconditioned IDR solver for modern multicore computer systems
Bessonov, O. A.; Fedoseyev, A. I.
2012-10-01
This paper presents the analysis, parallelization and optimization approach for the large sparse matrix solver CNSPACK on modern multicore microprocessors. CNSPACK is an advanced solver successfully used for the coupled solution of stiff problems arising in multiphysics applications such as CFD, semiconductor transport, kinetic and quantum problems. It employs an iterative IDR algorithm with ILU preconditioning (user-chosen ILU preconditioning order). CNSPACK has been successfully used during the last decade for solving problems in several application areas, including fluid dynamics and semiconductor device simulation. However, there has been a dramatic change in processor architectures and computer system organization in recent years. Because of this, performance criteria and methods have been revisited, and the solver and preconditioner have been parallelized using the OpenMP environment. Results of the successful implementation for efficient parallelization are presented for advanced computer systems (Intel Core i7-9xx or two-processor Xeon 55xx/56xx).
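The solution pattern described (a Krylov iteration wrapped around an incomplete-LU preconditioner) can be sketched compactly with SciPy; since SciPy does not ship an IDR(s) solver, BiCGStab stands in for the Krylov method here, and the test matrix is an illustrative 2D Laplacian rather than a CNSPACK problem.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, bicgstab

# Assemble a sparse 2D Laplacian as a stand-in test system.
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.csc_matrix(sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n)))
b = np.ones(A.shape[0])

# Incomplete LU factorization used as the preconditioner.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator(A.shape, matvec=ilu.solve)

# BiCGStab plays the role of the preconditioned Krylov iteration in this sketch.
x, info = bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info = {info}",
      "residual =", np.linalg.norm(A @ x - b))
```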
Introduction to parallel programming
Brawer, Steven
1989-01-01
Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race
Fox, Geoffrey C; Messina, Guiseppe C
2014-01-01
A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
Energy Technology Data Exchange (ETDEWEB)
Merzari, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Obabko, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States); Tautges, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferencz, Robert Mark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whitesides, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-12-21
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.
Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality
Energy Technology Data Exchange (ETDEWEB)
Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis
2015-03-01
This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.
Directory of Open Access Journals (Sweden)
E Holzbecher
2016-03-01
In a classical paper Henry set up a conceptual model for simulating saltwater intrusion into coastal aquifers. Up to now the problem has been taken up by software developers and modellers as a benchmark for codes simulating coupled flow and transport in porous media. The Henry test case has been treated using different numerical methods based on various formulations of differential equations. We compare several of these approaches using multiphysics software. We model the problem using Finite Elements, utilizing the primitive variables and the streamfunction approach, both with and without using the Oberbeck-Boussinesq assumption. We compare directly coupled solvers with segregated solver strategies. Changing finite element orders and mesh refinement, we find that models based on the streamfunction converge 2-4 times faster than runs based on primitive variables. Concerning the solution strategy, we find an advantage of Picard iterations compared to monolithic Newton iterations.
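For reference, the coupled system behind the Henry problem in primitive variables is variable-density Darcy flow plus salt transport; with the Oberbeck-Boussinesq simplification in the flow equation and generic symbols (k permeability, μ viscosity, φ porosity, D dispersion coefficient, c normalized salt mass fraction) it reads

$$ \nabla \cdot \mathbf{q} = 0, \qquad \mathbf{q} = -\frac{k}{\mu}\left( \nabla p + \rho(c)\, g\, \nabla z \right), \qquad \rho(c) = \rho_0 + (\rho_s - \rho_0)\, c, $$

$$ \phi\, \frac{\partial c}{\partial t} + \mathbf{q} \cdot \nabla c = \nabla \cdot \left( \phi\, D\, \nabla c \right) . $$

The streamfunction variant mentioned in the abstract replaces the two flow equations by a single equation for $\psi$, with $\mathbf{q} = (\partial \psi / \partial z,\, -\partial \psi / \partial x)$ in 2D.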
DEFF Research Database (Denmark)
Lepech, Michael; Geiker, Mette; Michel, Alexander
This paper looks to address the grand challenge of integrating construction materials engineering research within a multi-scale, inter-disciplinary research and management framework for sustainable concrete infrastructure. The ultimate goal is to drive sustainability-focused innovation and adoption...... cycles in the broader architecture, engineering, construction (AEC) industry. Specifically, a probabilistic design framework for sustainable concrete infrastructure and a multi-physics service life model for reinforced concrete are presented as important points of integration for innovation between...... design, consists of concrete service life models and life cycle assessment (LCA) models. Both types of models (service life and LCA) are formulated stochastically so that the service life and time(s) to repair, as well as total sustainability impact, are described by a probability distribution. A central...
Kilbane, J.; Polzin, K. A.
2014-01-01
An annular linear induction pump (ALIP) that could be used for circulating liquid-metal coolant in a fission surface power reactor system is modeled in the present work using the computational COMSOL Multiphysics package. The pump is modeled using a two-dimensional, axisymmetric geometry and solved under conditions similar to those used during experimental pump testing. Real, nonlinear, temperature-dependent material properties can be incorporated into the model for both the electrically-conducting working fluid in the pump (NaK-78) and structural components of the pump. The intricate three-phase coil configuration of the pump is implemented in the model to produce an axially-traveling magnetic wave that is qualitatively similar to the measured magnetic wave. The model qualitatively captures the expected feature of a peak in efficiency as a function of flow rate.
Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software
Energy Technology Data Exchange (ETDEWEB)
Tong, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Iaccarino, Gianluca [Stanford Univ., CA (United States); Mittal, Akshay [Stanford Univ., CA (United States)
2013-10-08
In this project we proposed to develop an innovative uncertainty quantification methodology that captures the best of the two competing approaches in UQ, namely, intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics multi-module simulation model, in a way that physics code developers for different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.
Contribution to the study of multi-physical phenomena in cementitious materials
International Nuclear Information System (INIS)
Bary, B.
2010-09-01
This document is a synthesis of the applied research studies undertaken by the author during ten years, first at the University of Marne-La-Vallee during the period 1999-2002, then at the CEA. These studies concern the modeling and the numerical simulations of the cementitious materials behavior subjected on the one hand to moderate thermomechanical and hydric loadings, and on the other hand to chemical attacks due to the migration of calcium, carbonate and sulfate ions. The developed approaches may be viewed as multi-physical in the sense that the models used for describing the behavior couple various fields and phenomena such as mechanics, thermal, hydric and ionic transfers, and chemistry. In addition, analytical up-scaling techniques are applied to estimate the physical properties associated with these phenomena (mechanical, hydraulic and diffusive parameters) as a function of the microstructure and the hydric state of the material. (author)
Numerical methods for reliability and safety assessment multiscale and multiphysics systems
Hami, Abdelkhalak
2015-01-01
This book offers unique insight on structural safety and reliability by combining computational methods that address multiphysics problems, involving multiple equations describing different physical phenomena, and multiscale problems, involving discrete sub-problems that together describe important aspects of a system at multiple scales. The book examines a range of engineering domains and problems using dynamic analysis, nonlinear methods, error estimation, finite element analysis, and other computational techniques. This book also: · Introduces novel numerical methods · Illustrates new practical applications · Examines recent engineering applications · Presents up-to-date theoretical results · Offers perspective relevant to a wide audience, including teaching faculty/graduate students, researchers, and practicing engineers
Developing a multi-physics solver in APOLLO3 and applications to cross section homogenization
International Nuclear Information System (INIS)
Dugan, Kevin-James
2016-01-01
Multi-physics coupling is becoming of large interest in the nuclear engineering and computational science fields. The ability to obtain accurate solutions to realistic models is important to the design and licensing of novel reactor designs, especially in design basis accident situations. The physical models involved in calculating accident behavior in nuclear reactors include: neutron transport, thermal conduction/convection, thermo-mechanics in fuel and support structure, and fuel stoichiometry, among others. However, this thesis focuses on the coupling between two models, neutron transport and thermal conduction/convection. The goal of this thesis is to develop a multi-physics solver for simulating accidents in nuclear reactors. The focus is both on the simulation environment and on the data treatment used in such simulations. This work discusses the development of a multi-physics framework based around the Jacobian-Free Newton-Krylov (JFNK) method. The framework includes linear and nonlinear solvers, along with interfaces to existing numerical codes that solve neutron transport and thermal hydraulics models (APOLLO3 and MCTH respectively) through the computation of residuals. A new formulation for the neutron transport residual is explored, which reduces the solution size and search space by a large factor; instead of the residual being based on the angular flux, it is based on the fission source. The question of whether using a fundamental mode distribution of the neutron flux for cross section homogenization is sufficiently accurate during fast transients is also explored. It is shown that, in an infinite homogeneous medium, solutions obtained using homogenized cross sections produced with a fundamental mode flux differ significantly from a reference solution. The error is remedied by using an alternative weighting flux taken from a time dependent calculation; either a time-integrated flux or an asymptotic solution. The time-integrated flux comes from the multi-physics solution of the
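As a rough illustration of the residual-based JFNK coupling described above, the sketch below assembles two toy "single-physics" residuals into one global residual and solves it matrix-free with SciPy's `newton_krylov`. The discretizations, coefficients, and boundary values are invented for illustration; they are not the APOLLO3 or MCTH interfaces, nor the fission-source residual formulation of the thesis.

```python
# Minimal JFNK coupling sketch: two toy "physics" residuals assembled into one
# global residual and solved matrix-free. The residuals are illustrative
# placeholders only.
import numpy as np
from scipy.optimize import newton_krylov

N = 50  # cells in a 1D toy problem

def neutronics_residual(phi, T):
    # toy "transport": diffusion-like balance with temperature feedback on absorption
    r = np.empty_like(phi)
    r[1:-1] = (phi[:-2] - 2.0 * phi[1:-1] + phi[2:]
               - 0.01 * (1.0 + 0.001 * T[1:-1]) * phi[1:-1] + 0.01)
    r[0] = phi[0] - 1.0        # boundary values held fixed
    r[-1] = phi[-1] - 1.0
    return r

def thermal_residual(T, phi):
    # toy "conduction": heat balance with a fission-like source proportional to phi
    r = np.empty_like(T)
    r[1:-1] = T[:-2] - 2.0 * T[1:-1] + T[2:] + 0.5 * phi[1:-1]
    r[0] = T[0] - 300.0
    r[-1] = T[-1] - 300.0
    return r

def coupled_residual(u):
    phi, T = u[:N], u[N:]
    return np.concatenate([neutronics_residual(phi, T), thermal_residual(T, phi)])

u0 = np.concatenate([np.ones(N), 300.0 * np.ones(N)])
u = newton_krylov(coupled_residual, u0, f_tol=1e-8)
print("max flux:", u[:N].max(), "max temperature:", u[N:].max())
```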
Audigier, Chloé; Mansi, Tommaso; Delingette, Hervé; Rapaka, Saikiran; Passerini, Tiziano; Mihalef, Viorel; Jolly, Marie-Pierre; Pop, Raoul; Diana, Michele; Soler, Luc; Kamen, Ali; Comaniciu, Dorin; Ayache, Nicholas
2017-09-01
We aim at developing a framework for the validation of a subject-specific multi-physics model of liver tumor radiofrequency ablation (RFA). The RFA computation becomes subject specific after several levels of personalization: geometrical and biophysical (hemodynamics, heat transfer and an extended cellular necrosis model). We present a comprehensive experimental setup combining multimodal, pre- and postoperative anatomical and functional images, as well as the interventional monitoring of intra-operative signals: the temperature and delivered power. To exploit this dataset, an efficient processing pipeline is introduced, which copes with image noise, variable resolution and anisotropy. The validation study includes twelve ablations from five healthy pig livers: a mean point-to-mesh error between predicted and actual ablation extent of 5.3 ± 3.6 mm is achieved. This enables an end-to-end preclinical validation framework that considers the available dataset.
Energy Technology Data Exchange (ETDEWEB)
Qiu, Yuefeng, E-mail: yuefeng.qiu@kit.edu; Lu, Lei; Fischer, Ulrich
2015-10-15
Highlights: • Integrated approach for neutronics, thermal and structural analyses was developed. • MCNP5/6, TRIPOLI-4 were coupled with CFX, Fluent and ANSYS Workbench. • A novel meshing approach has been proposed for describing MC geometry. - Abstract: Coupled multi-physics analyses on fusion reactor devices require high-fidelity neutronic models and flexible, accurate data exchange between the various calculation codes. An integrated coupling approach has been developed to enable the conversion of CAD, mesh, or hybrid geometries for the Monte Carlo (MC) codes MCNP5/6 and TRIPOLI-4, and the translation of nuclear heating data for the CFD codes Fluent and CFX and the structural mechanics software ANSYS Workbench. The coupling approach has been implemented based on the SALOME platform with CAD modeling, mesh generation and data visualization capabilities. A novel meshing approach has been developed for generating suitable meshes for MC geometry descriptions. Verification calculations for several application cases have shown the coupling approach to be reliable and efficient.
International Nuclear Information System (INIS)
Yue Liyang; Wang Zengbo; Li Lin
2012-01-01
Light can interact differently with thin-film contaminants and particle contaminants because of their different surface morphologies. In the case of dry laser cleaning of small transparent particles, it is well known that particles can function like mini-lenses, causing a localized near-field hot-spot effect during the cleaning process. This paper looks into a special, yet important, phenomenon of dry laser cleaning of particles trapped in micro-sized slots. The effects of slot size, particle size and particle aggregation state on the cleaning process have been theoretically investigated, based on a coupled electromagnetic-thermal-mechanical multiphysics modelling and simulation approach. The study is important for the development and optimization of laser cleaning processes for contamination removal from cracks and slots. (paper)
Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-12-31
This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.
Selection of High Performance Alloy for Gas Turbine Blade Using Multiphysics Analysis
Directory of Open Access Journals (Sweden)
H Khawaja
2016-09-01
With the extensive increase in the utilization of energy resources in the modern era, the need for energy extraction from various resources has become more pronounced in recent years. Comprehensive efforts have therefore been made around the globe in the technological development of turbomachines, in which energy is extracted from energized fluids. This development has given the aviation industry a power boost through better performing engines. Meanwhile, the structural conformability requirements relative to the functional requirements have also increased with the advent of newer, better performing materials. There is thus a need to study material behavior with the aim of selecting the best possible material for a given application. In this work a gas turbine blade of a small turbofan engine, for which geometry and aerodynamic data were available, was analyzed for its structural behavior in the proposed mission envelope, where the engine turbine is subjected to high thermal, inertial and aerodynamic loads. Multiphysics Finite Element (FE) linear stress analysis was carried out on the turbine blade. The results revealed the upper limit of Ultimate Tensile Strength (UTS) for the blade. Based on the limiting factor, high performance alloys were selected from the literature. The two most recommended alloy categories for gas turbine blades are NIMONIC and INCONEL, from which a total of 21 types of INCONEL alloys and 12 types of NIMONIC alloys, available on a commercial basis, were analyzed individually to meet the structural requirements. After applying the selection criteria, four alloys were finalized from the NIMONIC and INCONEL alloys for further analysis. On the basis of the stress-strain behavior of the finalized alloys, Multiphysics FE nonlinear stress analysis was then carried out for the selection of an individual alloy by imposing a restriction of an Ultimate Factor of Safety (UFOS) of 1.33 and the yield strength. The final selection is made keeping in view other factors
Energy Technology Data Exchange (ETDEWEB)
Fiorina, Carlo, E-mail: carlo.fiorina@psi.ch [Paul Scherrer Institut, Nuclear Energy and Safety Department, Laboratory for Reactor Physics and Systems Behaviour – PSI, Villigen 5232 (Switzerland); Clifford, Ivor [Paul Scherrer Institut, Nuclear Energy and Safety Department, Laboratory for Reactor Physics and Systems Behaviour – PSI, Villigen 5232 (Switzerland); Aufiero, Manuele [LPSC-IN2P3-CNRS/UJF/Grenoble INP, 53 avenue des Martyrs, 38026 Grenoble Cedex (France); Mikityuk, Konstantin [Paul Scherrer Institut, Nuclear Energy and Safety Department, Laboratory for Reactor Physics and Systems Behaviour – PSI, Villigen 5232 (Switzerland)
2015-12-01
Highlights: • Development of a new multi-physics solver based on OpenFOAM{sup ®}. • Tight coupling of thermal-hydraulics, thermal-mechanics and neutronics. • Combined use of traditional RANS and porous-medium models. • Mesh for neutronics deformed according to the predicted displacement field. • Use of three unstructured meshes, adaptive time step, parallel computing. - Abstract: The FAST group at the Paul Scherrer Institut has been developing a code system for reactor analysis for many years. For transient analysis, this code system is currently based on a state-of-the-art coupled TRACE-PARCS routine. This work presents an attempt to supplement the FAST code system with a novel solver characterized by tight coupling between the different equations, parallel computing capabilities, adaptive time-stepping and more accurate treatment of some of the phenomena involved in a reactor transient. The new solver is based on OpenFOAM{sup ®}, an open-source C++ library for the solution of partial differential equations using finite-volume discretization. It couples together a multi-scale fine/coarse mesh sub-solver for thermal-hydraulics, a multi-group diffusion sub-solver for neutronics, a displacement-based sub-solver for thermal-mechanics and a finite-difference model for the temperature field in the fuel. It is targeted toward the analysis of pin-based reactors (e.g., liquid metal fast reactors or light water reactors) or homogeneous reactors (e.g., fast-spectrum molten salt reactors). This paper presents each “single-physics” sub-solver and the overall coupling strategy, using the sodium-cooled fast reactor as a test case, and essential code verification tests are described.
International Nuclear Information System (INIS)
Fiorina, Carlo; Clifford, Ivor; Aufiero, Manuele; Mikityuk, Konstantin
2015-01-01
Highlights: • Development of a new multi-physics solver based on OpenFOAM"®. • Tight coupling of thermal-hydraulics, thermal-mechanics and neutronics. • Combined use of traditional RANS and porous-medium models. • Mesh for neutronics deformed according to the predicted displacement field. • Use of three unstructured meshes, adaptive time step, parallel computing. - Abstract: The FAST group at the Paul Scherrer Institut has been developing a code system for reactor analysis for many years. For transient analysis, this code system is currently based on a state-of-the-art coupled TRACE-PARCS routine. This work presents an attempt to supplement the FAST code system with a novel solver characterized by tight coupling between the different equations, parallel computing capabilities, adaptive time-stepping and more accurate treatment of some of the phenomena involved in a reactor transient. The new solver is based on OpenFOAM"®, an open-source C++ library for the solution of partial differential equations using finite-volume discretization. It couples together a multi-scale fine/coarse mesh sub-solver for thermal-hydraulics, a multi-group diffusion sub-solver for neutronics, a displacement-based sub-solver for thermal-mechanics and a finite-difference model for the temperature field in the fuel. It is targeted toward the analysis of pin-based reactors (e.g., liquid metal fast reactors or light water reactors) or homogeneous reactors (e.g., fast-spectrum molten salt reactors). This paper presents each “single-physics” sub-solver and the overall coupling strategy, using the sodium-cooled fast reactor as a test case, and essential code verification tests are described.
Parallel Atomistic Simulations
Energy Technology Data Exchange (ETDEWEB)
HEFFELFINGER,GRANT S.
2000-01-18
Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
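Of the three decompositions listed above, the replicated-data scheme is the simplest to sketch: every rank stores all positions, computes a subset of the pairwise forces, and a global sum produces identical full force arrays everywhere. The toy Lennard-Jones loop below (no cutoff, no periodic boundaries, mpi4py assumed available) only illustrates that pattern and is not drawn from any production code.

```python
# Replicated-data parallel force evaluation: every rank holds all positions,
# computes forces for a strided subset of atom pairs, and an Allreduce sums the
# partial force arrays. Run with e.g. `mpiexec -n 4 python md_rd.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_atoms = 64
rng = np.random.default_rng(0)              # same seed -> identical replicated positions
pos = rng.uniform(0.0, 10.0, (n_atoms, 3))

local_forces = np.zeros_like(pos)
for i in range(rank, n_atoms, size):         # each rank owns a stride of i-indices
    for j in range(i + 1, n_atoms):
        rij = pos[i] - pos[j]
        r2 = np.dot(rij, rij)
        inv_r6 = 1.0 / r2 ** 3
        f = 24.0 * (2.0 * inv_r6 ** 2 - inv_r6) / r2 * rij   # LJ force, eps = sigma = 1
        local_forces[i] += f
        local_forces[j] -= f

forces = np.zeros_like(pos)
comm.Allreduce(local_forces, forces, op=MPI.SUM)   # every rank ends with the full forces
if rank == 0:
    print("total force (should be ~0):", forces.sum(axis=0))
```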
Performance Analysis of Parallel Mathematical Subroutine library PARCEL
International Nuclear Information System (INIS)
Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio
2000-01-01
The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute for easy use of typical parallelized mathematical codes in any application problem on distributed parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. The performance results for the linear equation routines exhibit good parallelization efficiency on vector, as well as scalar, parallel computers. A comparison of the efficiency results with the PETSc (Portable Extensible Toolkit for Scientific Computation) library is also reported. (author)
Parallel programming with Easy Java Simulations
Esquembre, F.; Christian, W.; Belloni, M.
2018-01-01
Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface, together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
CERN. Geneva
2016-01-01
The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...
Parallelism in matrix computations
Gallopoulos, Efstratios; Sameh, Ahmed H
2016-01-01
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...
The kpx, a program analyzer for parallelization
International Nuclear Information System (INIS)
Matsuyama, Yuji; Orii, Shigeo; Ota, Toshiro; Kume, Etsuo; Aikawa, Hiroshi.
1997-03-01
The kpx is a program analyzer, developed as a common technological basis for promoting parallel processing. The kpx consists of three tools. The first is ktool, that shows how much execution time is spent in program segments. The second is ptool, that shows parallelization overhead on the Paragon system. The last is xtool, that shows parallelization overhead on the VPP system. The kpx, designed to work for any FORTRAN code on any UNIX computer, is confirmed to work well after testing on Paragon, SP2, SR2201, VPP500, VPP300, Monte-4, SX-4 and T90. (author)
DEFF Research Database (Denmark)
Sitchinava, Nodar; Zeh, Norbert
2012-01-01
We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(psort(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psort(N) is the parallel I/O complexity of sorting N elements using P processors.
Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole
2012-07-01
Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
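To make the SENSE idea above concrete, the toy script below unfolds a one-dimensional image undersampled by a factor R = 2: each aliased pixel is the coil-weighted superposition of two true pixels, recovered by a small least-squares solve per pixel. The image, coil sensitivities, and noiseless data are synthetic assumptions for illustration, not a clinical SENSE or GRAPPA implementation.

```python
# Toy SENSE unaliasing for acceleration factor R = 2 on a synthetic 1D "image".
import numpy as np

N, R, n_coils = 128, 2, 4
x = np.linspace(0, 1, N)
image = np.exp(-((x - 0.35) / 0.1) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.05) ** 2)

# smooth, spatially varying coil sensitivities (assumed known exactly here)
sens = np.stack([np.exp(-((x - c) / 0.6) ** 2) for c in (0.0, 0.33, 0.66, 1.0)])

# undersampling by R folds pixel p onto pixel p + N/R in each coil image
coil_images = sens * image
aliased = coil_images[:, : N // R] + coil_images[:, N // R :]

recon = np.zeros(N)
for p in range(N // R):
    S = sens[:, [p, p + N // R]]          # n_coils x R sensitivity matrix
    y = aliased[:, p]                     # aliased measurements at this pixel
    recon[[p, p + N // R]] = np.linalg.lstsq(S, y, rcond=None)[0]

print("max reconstruction error:", np.abs(recon - image).max())
```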
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
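Two of the patterns named above, reduction and prefix scan, are sketched below as a serial emulation of the classic tree-based (Blelloch) up-sweep/down-sweep, so the data movement that a parallel implementation would distribute across threads is visible. This is an illustrative sketch, not material from the presentation.

```python
# Reduction and exclusive prefix scan as parallel patterns, emulated serially.
def tree_reduce(a):
    """Pairwise (tree) reduction; each level could be executed concurrently."""
    a = list(a)
    while len(a) > 1:
        a = [a[i] + a[i + 1] for i in range(0, len(a) - 1, 2)] + ([a[-1]] if len(a) % 2 else [])
    return a[0]

def exclusive_scan(a):
    """Blelloch exclusive prefix scan (power-of-two length assumed for brevity)."""
    a = list(a)
    n = len(a)
    d = 1
    while d < n:                      # up-sweep: build partial sums
        for i in range(d * 2 - 1, n, d * 2):
            a[i] += a[i - d]
        d *= 2
    a[n - 1] = 0
    d = n // 2
    while d >= 1:                     # down-sweep: distribute prefixes
        for i in range(d * 2 - 1, n, d * 2):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a

data = [3, 1, 7, 0, 4, 1, 6, 3]
print(tree_reduce(data))     # 25
print(exclusive_scan(data))  # [0, 3, 4, 11, 11, 15, 16, 22]
```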
Application Portable Parallel Library
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to the variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops an application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
A multi-physics code system based on ANC9, VIPRE-W and BOA for CIPS evaluation
International Nuclear Information System (INIS)
Zhang, B.; Sung, Y.; Secker, J.; Beard, C.; Hilton, P.; Wang, G.; Oelrich, R.; Karoutas, Z.; Sung, Y.
2011-01-01
This paper summarizes the development of a multi-physics code system for evaluation of Crud Induced Power Shift (CIPS) phenomenon experienced in some Pressurized Water Reactors (PWR). CIPS is an unexpected change in reactor core axial power distribution, caused by boron compounds in crud deposited in the high power fuel assemblies undergoing subcooled boiling. As part of the Consortium for Advanced Simulation of Light Water Reactors (CASL) sponsored by the US Department of Energy (DOE), this paper describes the initial linkage and application of a multi-physics code system ANC9/VIPRE-W/BOA for evaluating changes in core power distributions due to boron deposited in crud. The initial linkage of the code system along with the application results will be the base for the future CASL development. (author)
A multi-physics code system based on ANC9, VIPRE-W and BOA for CIPS evaluation
Energy Technology Data Exchange (ETDEWEB)
Zhang, B.; Sung, Y.; Secker, J.; Beard, C.; Hilton, P.; Wang, G.; Oelrich, R.; Karoutas, Z.; Sung, Y. [Westinghouse Electric Company LLC, Pittsburgh (United States)
2011-07-01
This paper summarizes the development of a multi-physics code system for evaluation of Crud Induced Power Shift (CIPS) phenomenon experienced in some Pressurized Water Reactors (PWR). CIPS is an unexpected change in reactor core axial power distribution, caused by boron compounds in crud deposited in the high power fuel assemblies undergoing subcooled boiling. As part of the Consortium for Advanced Simulation of Light Water Reactors (CASL) sponsored by the US Department of Energy (DOE), this paper describes the initial linkage and application of a multi-physics code system ANC9/VIPRE-W/BOA for evaluating changes in core power distributions due to boron deposited in crud. The initial linkage of the code system along with the application results will be the base for the future CASL development. (author)
International Nuclear Information System (INIS)
Fiorina, C.; Mikityuk, K.
2015-01-01
A new multi-physics solver for nuclear reactor analysis, named GeN-Foam (Generalized Nuclear Foam), has been developed by the FAST group at the Paul Scherrer Institut. It is based on OpenFOAM and has been developed for the multi-physics transient analyses of pin-based (e.g., liquid metal Fast Reactors, Light Water Reactors) or homogeneous (e.g., fast spectrum Molten Salt Reactors) nuclear reactors. It includes solutions of coarse or fine mesh thermal-hydraulics, thermal-mechanics and neutron diffusion. In particular, thermal-hydraulics solution can combine on the same mesh both a traditional RANS model and a porous medium model, depending on the desired degree of approximation for each region. In case the active reactor core is modeled as a porous medium, a simple sub-solver computes the sub-scale radial temperature profiles in fuel and cladding. The mesh used for neutronics calculations is deformed according to the displacement field predicted by the thermal-mechanics solver, thus allowing for a direct prediction of expansion-related feedback effects in Fast Reactors. To limit computational requirements, GeN-Foam permits the use of three different unstructured meshes for thermal-hydraulics, thermal-mechanics and neutron diffusion. For the same reason, an adaptive time step is employed. The different equations can be solved altogether or selectively included. In this work, GeN-Foam is applied to the analysis of the European Sodium Fast Reactor (ESFR). In particular, a 3-D model of the ESFR core is set up employing a coarse-mesh porous-medium approach for the thermal-hydraulics. The reactor steady-state and different accidental transients are investigated to offer an overview of GeN-Foam use and capabilities, as well as to preliminarily investigate the impact of a relatively accurate thermal-mechanic treatment on the predicted ESFR behavior. A code-to-code benchmark against the TRACE system code is performed to verify the adequacy of the results provided by the new
Parallel discrete event simulation
Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.
1991-01-01
In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation
Parallel reservoir simulator computations
International Nuclear Information System (INIS)
Hemanth-Kumar, K.; Young, L.C.
1995-01-01
The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90
Zhang, Jingyi
Ferroelectric (FE) and closely related antiferroelectric (AFE) materials have unique electromechanical properties that promote various applications in the area of capacitors, sensors, and generators (FE) and high density energy storage (AFE). These smart materials with extensive applications have drawn wide interest in the industrial and scientific world because of their reliability and tunable properties. However, reliability issues change their paradigms and require guidance from detailed mechanistic theory as the materials' applications are pushed toward better performance. A host of modeling work has been dedicated to studying the macro-structural behavior and microstructural evolution in FE and AFE materials under various conditions. This thesis is focused on direct observation of domain evolution under multiphysics loading for both FE and AFE materials. Landau-Devonshire time-dependent phase field models were built for both materials and were simulated in the finite element software COMSOL. In the FE model, a dagger-shaped 90 degree switched domain was observed at a preexisting crack tip under pure mechanical loading. A polycrystal structure was tested under the same conditions, and the blocking of the growth of the dagger-shaped switched domain by grain orientation differences and/or grain boundaries was directly observed. An AFE ceramic model was developed using two-sublattice theory; this model was used to investigate the mechanism of the energy efficiency increase with self-confined loading observed in experimental tests. Consistent results were found in simulation, and careful investigation of the calculation results confirmed that the origin of the energy density increase lies in three aspects: a self-confinement-induced inner compression field as the cause of the increase of the critical field, fringe leak as the source of the elevated saturation polarization, and an uneven defect distribution as the reason for critical field shifting and phase transition speed. Another important affecting aspect in polycrystalline materials is the
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Energy Technology Data Exchange (ETDEWEB)
1991-10-23
An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
Massively parallel mathematical sieves
Energy Technology Data Exchange (ETDEWEB)
Montry, G.R.
1989-01-01
The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
International Nuclear Information System (INIS)
Ivanov, K.; Avramova, M.
2007-01-01
Current trends in nuclear power generation and regulation, as well as the design of next generation reactor concepts, along with continuing progress in computer technology, stimulate the development, qualification and application of multi-physics multi-scale coupled code systems. The efforts have been focused on extending the analysis capabilities by coupling models which simulate different phenomena or system components, as well as on refining the scale and level of detail of the coupling. This paper reviews the progress made in this area and outlines the remaining challenges. The discussion is illustrated with examples based on neutronics/thermohydraulics coupling in reactor core modeling. In both fields, recent advances and developments are towards more physics-based high-fidelity simulations, which require the implementation of improved and flexible coupling methodologies. First, the progress in coupling different physics codes, along with advances in multi-level techniques for coupled code simulations, is discussed. Second, the issues related to the consistent qualification of coupled multi-physics and multi-scale code systems for design and safety evaluation are presented. The increased importance of uncertainty and sensitivity analysis is discussed, along with approaches to propagate uncertainty quantification between the codes. The upcoming OECD LWR Uncertainty Analysis in Modeling (UAM) benchmark is the first international activity to address this issue and is described in the paper. Finally, the remaining challenges with multi-physics coupling are outlined. (authors)
Energy Technology Data Exchange (ETDEWEB)
Ivanov, K.; Avramova, M. [Pennsylvania State Univ., University Park, PA (United States)
2007-07-01
Current trends in nuclear power generation and regulation, as well as the design of next generation reactor concepts, along with continuing progress in computer technology, stimulate the development, qualification and application of multi-physics multi-scale coupled code systems. The efforts have been focused on extending the analysis capabilities by coupling models which simulate different phenomena or system components, as well as on refining the scale and level of detail of the coupling. This paper reviews the progress made in this area and outlines the remaining challenges. The discussion is illustrated with examples based on neutronics/thermohydraulics coupling in reactor core modeling. In both fields, recent advances and developments are towards more physics-based high-fidelity simulations, which require the implementation of improved and flexible coupling methodologies. First, the progress in coupling different physics codes, along with advances in multi-level techniques for coupled code simulations, is discussed. Second, the issues related to the consistent qualification of coupled multi-physics and multi-scale code systems for design and safety evaluation are presented. The increased importance of uncertainty and sensitivity analysis is discussed, along with approaches to propagate uncertainty quantification between the codes. The upcoming OECD LWR Uncertainty Analysis in Modeling (UAM) benchmark is the first international activity to address this issue and is described in the paper. Finally, the remaining challenges with multi-physics coupling are outlined. (authors)
International Nuclear Information System (INIS)
Horrein, L.; Bouscayrol, A.; Cheng, Y.; El Fassi, M.
2015-01-01
Highlights: • Internal Combustion Engine (ICE) dynamical and static models. • Organization of the ICE model using Energetic Macroscopic Representation. • Description of the distribution of the chemical, thermal and mechanical power. • Implementation of the ICE model in a global vehicle model. - Abstract: In the simulation of new vehicles, the Internal Combustion Engine (ICE) is generally modeled by a static map. This model yields the mechanical power and the fuel consumption. But some studies require the heat energy from the ICE to be considered (e.g., waste heat recovery, thermal regulation of the cabin). A dynamical multi-physical model of a diesel engine is therefore developed to account for its heat energy. This model is organized using Energetic Macroscopic Representation (EMR) so that it can be interconnected with other models of vehicle subsystems. An experimental validation is provided. Moreover, a multi-physical quasi-static model is also derived. According to different modeling aims, a comparison of the dynamical and the quasi-static models is discussed for the simulation of a thermal vehicle. These multi-physical models, with different simulation time requirements, provide a good basis for studying the effects of thermal energy on vehicle behavior, including the possibilities of waste heat recovery.
Two-Step Multi-Physics Analysis of an Annular Linear Induction Pump for Fission Power Systems
Geng, Steven M.; Reid, Terry V.
2016-01-01
One of the key technologies associated with fission power systems (FPS) is the annular linear induction pump (ALIP). ALIPs are used to circulate liquid-metal fluid for transporting thermal energy from the nuclear reactor to the power conversion device. ALIPs designed and built to date for FPS project applications have not performed up to expectations. A unique, two-step approach was taken toward the multi-physics examination of an ALIP using ANSYS Maxwell 3D and Fluent. This multi-physics approach was developed so that engineers could investigate design variations that might improve pump performance. Of interest was to determine if simple geometric modifications could be made to the ALIP components with the goal of increasing the Lorentz forces acting on the liquid-metal fluid, which in turn would increase pumping capacity. The multi-physics model first calculates the Lorentz forces acting on the liquid metal fluid in the ALIP annulus. These forces are then used in a computational fluid dynamics simulation as (a) internal boundary conditions and (b) source functions in the momentum equations within the Navier-Stokes equations. The end result of the two-step analysis is a predicted pump pressure rise that can be compared with experimental data.
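The hand-off between the electromagnetic and fluid steps described above amounts to inserting the computed Lorentz force density as a source in the momentum balance. In a generic incompressible form this reads (a standard textbook statement, not necessarily the exact equation set used in the Maxwell 3D/Fluent models):

```latex
\mathbf{f}_L = \mathbf{J} \times \mathbf{B}, \qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}_L ,
```

where J is the induced current density, B the magnetic flux density, and f_L the body-force term that the CFD step receives from the electromagnetic solution.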
Abdeljabbar Kharrat, Nourhene; Plateaux, Régis; Miladi Chaabane, Mariem; Choley, Jean-Yves; Karra, Chafik; Haddar, Mohamed
2018-05-01
The present work tackles the modeling of multi-physics systems by applying a topological approach while proceeding with a new methodology that introduces a topological modification to the structure of systems. A comparison with Magos' methodology is then made; their common ground is the use of connectivity within systems. The comparison and analysis of the different types of modeling show the importance of the topological methodology through the integration of the topological modification into the topological structure of a multi-physics system. In order to validate this methodology, the case of a Pogo-stick is studied. The first step consists in generating a topological graph of the system. The connectivity step then takes into account the contact with the ground. During the last step of this research, the MGS language (Modeling of General System) is used to model the system through equations. Finally, the results are compared to those obtained with MODELICA. This proposed methodology may therefore be generalized to model multi-physics systems that can be considered as a set of local elements.
Survey on present status and trend of parallel programming environments
International Nuclear Information System (INIS)
Takemiya, Hiroshi; Higuchi, Kenji; Honma, Ichiro; Ohta, Hirofumi; Kawasaki, Takuji; Imamura, Toshiyuki; Koide, Hiroshi; Akimoto, Masayuki.
1997-03-01
This report intends to provide useful information on software tools for parallel programming through a survey of the parallel programming environments of the following six parallel computers: Fujitsu VPP300/500, NEC SX-4, Hitachi SR2201, Cray T94, IBM SP, and Intel Paragon, all of which are installed at the Japan Atomic Energy Research Institute (JAERI). In addition, the present status of R&D on parallel software, including parallel languages, compilers, debuggers, performance evaluation tools, and integrated tools, is reported. This survey has been made as part of our project to develop basic software for a parallel programming environment, designed on the concept of STA (Seamless Thinking Aid to programmers). (author)
Parallel framework for topology optimization using the method of moving asymptotes
DEFF Research Database (Denmark)
Aage, Niels; Lazarov, Boyan Stefanov
2013-01-01
and simple to implement linear solvers and optimization algorithms. However, to ensure generality, the code is developed to be easily extendable in terms of physical models as well as in terms of solution methods, without compromising the parallel scalability. The widely used Method of Moving Asymptotes......The complexity of problems attacked in topology optimization has increased dramatically during the past decade. Examples include fully coupled multiphysics problems in thermo-elasticity, fluid-structure interaction, Micro-Electro Mechanical System (MEMS) design and large-scale three dimensional...... optimization algorithm is parallelized and included as a fundamental part of the code. The capabilities of the presented approaches are demonstrated on topology optimization of a Stokes flow problem with target outflow constraints as well as the minimum compliance problem with a volume constraint from linear...
International Nuclear Information System (INIS)
Heggarty, J.W.
1999-06-01
response to the challenge of achieving parallel R-matrix computation. The primary objective was to develop parallel codes, targeted at multicomputers, that are capable of performing R-matrix calculations hitherto intractable using classic supercomputers. In particular, Fortran implementations of two internal region methods (the R-matrix Floquet method and the two-dimensional R-matrix propagation method) and three external region methods (the Light-Walker propagation method, the Baluja, Burke and Morgan propagation method and the Variable Phase Method) from four widely utilised R-matrix packages were investigated to ascertain whether, in these cases, parallel R-matrix computation was practicable and, if so, to determine the most effective way to port such codes to contemporary multicomputers. When attempting to develop the parallel codes, a number of computer aided automatic parallelization tools were investigated. These were found to be inadequate. Consequently, a parallelization approach was developed to provide simple guidelines for manual parallelization. This parallelization approach proved effective and efficient parallel versions of the five R-matrix codes were successfully developed. (author)
Interfacial mixing in high-energy-density matter with a multiphysics kinetic model
Haack, Jeffrey R.; Hauck, Cory D.; Murillo, Michael S.
2017-12-01
We have extended a recently developed multispecies, multitemperature Bhatnagar-Gross-Krook model [Haack et al., J. Stat. Phys. 168, 822 (2017), 10.1007/s10955-017-1824-9], to include multiphysics capabilities that enable modeling of a wider range of physical conditions. In terms of geometry, we have extended from the spatially homogeneous setting to one spatial dimension. In terms of the physics, we have included an atomic ionization model, accurate collision physics across coupling regimes, self-consistent electric fields, and degeneracy in the electronic screening. We apply the model to a warm dense matter scenario in which the ablator-fuel interface of an inertial confinement fusion target is heated, but for larger length and time scales and for much higher temperatures than can be simulated using molecular dynamics. Relative to molecular dynamics, the kinetic model greatly extends the temperature regime and the spatiotemporal scales over which we are able to model. In our numerical results we observe hydrogen from the ablator material jetting into the fuel during the early stages of the implosion and compare the relative size of various diffusion components (Fickean diffusion, electrodiffusion, and barodiffusion) that drive this process. We also examine kinetic effects, such as anisotropic distributions and velocity separation, in order to determine when this problem can be described with a hydrodynamic model.
Experimental multiphysical characterization of an SMA driven, camber morphing owl wing section
Stroud, Hannah R.; Leal, Pedro B. C.; Hartl, Darren J.
2018-03-01
In the context of aerospace engineering, morphing structures are useful in their ability to change the outer mold line (OML) while improving or maintaining certain aerodynamic performance metrics. Skin-based morphing is of particular interest in that it minimizes installation volume. Shape memory alloys (SMAs) have a high force to volume ratio that makes them a suitable choice for skin-based morphing. Because the thermomechanical properties of SMAs are coupled, strain can be generated via a temperature variation; this phenomenon is used as the actuation method. Therefore, it is necessary to determine the interaction of the system not only with aerodynamic loads, but with thermal loads as well. This paper describes the wind tunnel testing and in situ thermomechanical analysis of an SMA actuated, avian inspired morphing wing. The morphing wing is embedded with two SMA composite actuators and consists of a foam core enveloped in a fiberglass-epoxy composite. As the SMA wire is heated, the actuator contracts, morphing the wing from the original owl OML to a highly cambered, high lift OML. Configuration characteristics are analyzed in situ using simultaneous three dimensional digital image correlation (DIC) and infrared thermography, thereby coupling strain and thermal measurements. This method of testing allows for the nonintrusive, multiphysical data acquisition of each actuator separately and the system as a whole.
Development of a three dimension multi-physics code for molten salt fast reactor
International Nuclear Information System (INIS)
Cheng Maosong; Dai Zhimin
2014-01-01
The Molten Salt Reactor (MSR) was selected as one of the six innovative nuclear reactor concepts by the Generation IV International Forum (GIF). The circulating fuel in the can-type molten salt fast reactor makes the neutronics and thermo-hydraulics of the reactor strongly coupled and different from those of traditional solid-fuel reactors. In the present paper, a new coupling model is presented that physically describes the inherent relations between the neutron flux, the delayed neutron precursors, the heat transfer and the turbulent flow. Based on the model, and integrating nuclear data processing, CAD modeling, structured and unstructured mesh technology, data analysis and visualization, a three-dimensional steady-state simulation code system (MSR3DS) for the can-type molten salt fast reactor has been developed and validated. In order to demonstrate the ability of the code, the three-dimensional distributions of the velocity, the neutron flux, the delayed neutron precursors and the temperature were obtained for the simplified MOlten Salt Advanced Reactor Transmuter (MOSART) using this code. The results indicate that the MSR3DS code can provide a feasible description of the multi-physics coupling phenomena in can-type molten salt fast reactors. Furthermore, the code can well predict the flow effect of the fuel salt and the transport effect of turbulent diffusion. (authors)
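The key coupling feature noted above, that delayed neutron precursors are advected with the circulating fuel salt, is commonly written as a transport balance of the following form (a generic formulation from the molten salt reactor literature, not necessarily the exact equations solved in MSR3DS):

```latex
\frac{\partial C_i}{\partial t} + \nabla\cdot\left(\mathbf{u}\,C_i\right)
  = \nabla\cdot\left(\frac{\nu_t}{Sc_t}\,\nabla C_i\right)
    + \beta_i \sum_{g} \nu\Sigma_{f,g}\,\phi_g
    - \lambda_i C_i ,
```

where C_i is the concentration of precursor group i, u the salt velocity, the turbulent-diffusion term carries a Schmidt-number closure, beta_i and lambda_i are the delayed fraction and decay constant, and phi_g is the group flux; the advection and turbulent-diffusion terms are what distinguish circulating-fuel from solid-fuel reactors.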
Final report on LDRD project : coupling strategies for multi-physics applications.
Energy Technology Data Exchange (ETDEWEB)
Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian; Hooper, Russell Warren; Pawlowski, Roger P.
2007-11-01
Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of the LDRD have been both theoretical and in code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling versus a successive substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now, leveraging existing functionality to do this. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, and we have also built into NOX the capability to handle Jacobian-Free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact of this LDRD is that we have shown how, and have delivered strategies for, enabling strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.
Propagation of neutron-reaction uncertainties through multi-physics models of novel LWR's
Directory of Open Access Journals (Sweden)
Hernandez-Solis Augusto
2017-01-01
The novel design of the renewable boiling water reactor (RBWR) allows a breeding ratio greater than unity and thus aims to provide a self-sustained fuel cycle. The neutron reactions that compose the different microscopic cross-sections and angular distributions are uncertain, so when they are employed in the determination of the spatial distribution of the neutron flux in a nuclear reactor, a methodology should be employed to account for the associated uncertainties. In this work, the Total Monte Carlo (TMC) method is used to propagate the different neutron-reaction (as well as angular distribution) covariances that are part of the TENDL-2014 nuclear data (ND) library. The main objective is to propagate them through coupled neutronic and thermal-hydraulic models in order to assess the uncertainty of important safety parameters related to multi-physics, such as the peak cladding temperature along the axial direction of an RBWR fuel assembly. The objective of this study is to quantify the impact that the ND covariances of important nuclides such as U-235, U-238, Pu-239 and the thermal scattering of hydrogen in H2O have on the deterministic safety analysis of novel nuclear reactor designs.
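Schematically, the TMC workflow above runs the full coupled model once per randomly sampled nuclear-data file and reads the nuclear-data uncertainty off the spread of the outputs. In the sketch below, `run_coupled_model` is a hypothetical placeholder (here a toy surrogate) standing in for an actual coupled neutronic/thermal-hydraulic calculation driven by one random TENDL-style library.

```python
# Schematic Total Monte Carlo (TMC) loop: one coupled-model run per random
# nuclear-data sample; the output spread estimates the ND-induced uncertainty.
import random
import statistics

def run_coupled_model(nd_sample_id: int) -> float:
    """Hypothetical placeholder for a coupled neutronics/thermal-hydraulics run
    returning, e.g., the peak cladding temperature [K] for one random ND file."""
    random.seed(nd_sample_id)            # toy surrogate: deterministic per sample
    return 1100.0 + random.gauss(0.0, 25.0)

n_samples = 300                          # number of random ND files
pct = [run_coupled_model(i) for i in range(n_samples)]

mean = statistics.fmean(pct)
std = statistics.stdev(pct)
print(f"peak cladding temperature: {mean:.1f} K +/- {std:.1f} K (1 sigma, ND only)")
```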
Using COMSOL Multiphysics Software to Analyze the Thin Film Resistance Model of a Conductor on PET
Carradero-Santiago, Carolyn; Merced-Sanabria, Milzaida; Vedrine-Pauléus, Josee
2015-03-01
In this research work, we will develop a virtual model to analyze the electrical conductivity of a thin film with three layers: one of graphene or a conducting metal film, polyethylene terephthalate (PET), and poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS). COMSOL Multiphysics will be the software used to develop the virtual model and analyze the thin-film layers. COMSOL software allows simulation and modelling of physical phenomena represented by differential equations, such as those of heat transfer, fluid movement, electromagnetism and structural mechanics. In this work, we will define the geometry of the model; in this case we want three layers: PET, the conducting layer, and PEDOT:PSS. We will then add the materials and assign PET as the lower layer, the conductor as the middle layer, and PEDOT:PSS as the upper layer. We will analyze the model with varying thickness of the top conducting layer. This simulation will allow us to analyze the electrical conductivity and to visualize the model under varying voltage potential, or bias, across the plates.
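For orientation, the in-plane resistance of such a stack can be estimated by hand before any finite-element run: each conducting layer contributes a sheet resistance rho/t, and layers carrying current side by side combine in parallel. The resistivities, thicknesses, and strip dimensions in the sketch below are illustrative assumptions, not values from this work.

```python
# Back-of-the-envelope check for a thin-film stack: sheet resistance of each
# conducting layer and their parallel combination, for a strip of length L and
# width W. All material data below are illustrative assumptions only.
def sheet_resistance(resistivity_ohm_m: float, thickness_m: float) -> float:
    """Sheet resistance R_s = rho / t, in ohms per square."""
    return resistivity_ohm_m / thickness_m

def strip_resistance(r_sheet: float, length_m: float, width_m: float) -> float:
    """In-plane resistance of a rectangular strip: R = R_s * (L / W)."""
    return r_sheet * length_m / width_m

# assumed layer data: (name, resistivity [ohm*m], thickness [m]); PET is treated
# as an insulating substrate that carries negligible current.
layers = [
    ("metal/graphene film", 2.4e-7, 50e-9),
    ("PEDOT:PSS",           1.0e-4, 100e-9),
]

L, W = 10e-3, 2e-3   # 10 mm x 2 mm strip
resistances = [strip_resistance(sheet_resistance(rho, t), L, W) for _, rho, t in layers]
parallel = 1.0 / sum(1.0 / r for r in resistances)

for (name, _, _), r in zip(layers, resistances):
    print(f"{name:>20s}: {r:12.1f} ohm")
print(f"{'parallel stack':>20s}: {parallel:12.1f} ohm")
```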
Varghese, Julian
This research work has contributed in various ways to help develop a better understanding of textile composites and materials with complex microstructures in general. An instrumental part of this work was the development of an object-oriented framework that made it convenient to perform multiscale/multiphysics analyses of advanced materials with complex microstructures such as textile composites. In addition to the studies conducted in this work, this framework lays the groundwork for continued research of these materials. This framework enabled a detailed multiscale stress analysis of a woven DCB specimen that revealed the effect of the complex microstructure on the stress and strain energy release rate distribution along the crack front. In addition to implementing an oxidation model, the framework was also used to implement strategies that expedited the simulation of oxidation in textile composites so that it would take only a few hours. The simulation showed that the tow architecture played a significant role in the oxidation behavior in textile composites. Finally, a coupled diffusion/oxidation and damage progression analysis was implemented that was used to study the mechanical behavior of textile composites under mechanical loading as well as oxidation. A parametric study was performed to determine the effect of material properties and the number of plies in the laminate on its mechanical behavior. The analyses indicated a significant effect of the tow architecture and other parameters on the damage progression in the laminates.
Directory of Open Access Journals (Sweden)
Jiazhou Wu
2018-06-01
Full Text Available A three-dimensional multiphysical transient model was developed to investigate keyhole formation, weld pool dynamics, and mass transfer in laser welding of dissimilar materials. The coupling of heat transfer, fluid flow, keyhole free surface evolution, and solute diffusion between dissimilar metals was simulated. The adaptive heat source model was used to trace the change of keyhole shape, and the Rayleigh scattering of the laser beam was considered. The keyhole wall was calculated using the volume-of-fluid equation, primarily considering the recoil pressure induced by metal evaporation, surface tension, and hydrostatic pressure. Fluid flow, diffusion, and keyhole formation were considered simultaneously in the mass transport processes. Welding experiments of 304L stainless steel and industrial pure titanium TA2 were performed to verify the simulation results. It is shown that spatters are formed during the welding process. The thickness of the intermetallic reaction layer between the two metals and the diffusion of elements in the weld are calculated, which are important criteria for welding quality. The simulation results correspond well with the experimental results.
Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei
2013-08-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
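For reference, a commonly used form of the Tait equation of state for water is sketched below; the constants are typical literature values and may differ from those used in the BEARCLAW implementation.

```python
def tait_pressure(rho, rho0=998.0, B=3.31e8, gamma=7.15, p0=1.0e5):
    """Tait equation of state for water: p = B[(rho/rho0)**gamma - 1] + p0.

    Constants are typical literature values; the solver's values may differ."""
    return B * ((rho / rho0) ** gamma - 1.0) + p0

# A modest 5% compression already produces a shock-scale pressure
print(tait_pressure(1.05 * 998.0) / 1e6, "MPa")
```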
MahmoodPoorDehkordy, F.; Briggs, M. A.; Day-Lewis, F. D.; Bagtzoglou, A. C.
2017-12-01
Although hyporheic zones are often modeled at the reach scale as homogeneous "boxes" of exchange, heterogeneity caused by variations of pore sizes and connectivity is not uncommon. This heterogeneity leads to the creation of more- and less-mobile zones of hydraulic exchange that influence reactive solute transport processes. Whereas fluid sampling is generally sensitive to more-mobile zones, geoelectrical measurement is sensitive to ionic tracer dynamics in both less- and more-mobile zones. Heterogeneity in pore connectivity leads to a lag between fluid and bulk electrical conductivity (EC) resulting in a hysteresis loop, observed during tracer breakthrough tests, that contains information about the less-mobile porosity attributes of the medium. Here, we present a macro-scale model of solute transport and electrical conduction developed using COMSOL Multiphysics. The model is used to simulate geoelectrical monitoring of ionic transport for bed sediments based on (1) a stochastic sand-and-cobble mixture and (2) a dune feature with strong permeability layering. In both of these disparate sediment types, hysteresis between fluid and bulk EC is observed, and depends in part on fluid flux rate through the model domain. Using the hysteresis loop, the ratio of less-mobile to mobile porosity and mass-transfer coefficient are estimated graphically. The results indicate the presence and significance of less-mobile porosity in the hyporheic zones and demonstrate the capability of the proposed model to detect heterogeneity in flow processes and estimate less-mobile zone parameters.
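A minimal dual-domain (mobile/less-mobile) mass-transfer sketch, with purely illustrative parameters, shows how a lag between fluid and bulk electrical conductivity arises and how the hysteresis loop described above is traced; it is not the COMSOL model itself.

```python
import numpy as np

# Dual-domain mass-transfer sketch: a mobile pore domain flushed by a tracer pulse
# exchanges solute with a less-mobile domain at a first-order rate.
# All parameter values are illustrative.
theta_m, theta_lm = 0.30, 0.10     # mobile / less-mobile porosity
alpha = 2.0e-4                     # mass-transfer rate coefficient [1/s]
tau_flush = 3600.0                 # flushing time scale of the mobile domain [s]
dt, nstep = 10.0, 6000
c_m = c_lm = 0.0
fluid_ec, bulk_ec = [], []
for i in range(nstep):
    c_in = 1.0 if i * dt < 2.0e4 else 0.0                     # tracer pulse
    dc_m = (c_in - c_m) / tau_flush - alpha * (c_m - c_lm) * theta_lm / theta_m
    dc_lm = alpha * (c_m - c_lm)
    c_m += dt * dc_m
    c_lm += dt * dc_lm
    fluid_ec.append(c_m)                                      # fluid sampling sees mobile water only
    bulk_ec.append((theta_m * c_m + theta_lm * c_lm) / (theta_m + theta_lm))
# Plotting bulk_ec against fluid_ec traces the hysteresis loop used above to
# estimate the less-mobile porosity ratio and the mass-transfer coefficient.
```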
Sensitivity analysis of VERA-CS and FRAPCON coupling in a multiphysics environment
International Nuclear Information System (INIS)
Blakely, Cole; Zhang, Hongbin; Ban, Heng
2018-01-01
Highlights: •VERA-CS and FRAPCON coupling. •Uncertainty quantification and sensitivity analysis for coupled VERA-CS and FRAPCON simulations in a multiphysics environment LOTUS. -- Abstract: A demonstration and description of the LOCA Toolkit for US light water reactors (LOTUS) is presented. Through LOTUS, the core simulator VERA-CS developed by CASL is coupled with the fuel performance code FRAPCON. The coupling is performed with consistent uncertainty propagation, with all model inconsistencies being well-documented. Monte Carlo sampling is performed on a single 17 × 17 fuel assembly with a three-cycle depletion case. Both uncertainty quantification (UQ) and sensitivity analysis (SA) are used at multiple states within the simulation to elucidate the behavior of minimum departure from nucleate boiling ratio (MDNBR), maximum fuel centerline temperature (MFCT), and gap conductance at peak power (GCPP). The SA metrics used are the Pearson correlation coefficient, Sobol sensitivity indices, and the density-based, delta moment-independent measures. Results for MDNBR show consistency among all SA measures, as well as for all states throughout the fuel lifecycle. MFCT results contain consistent rankings between SA measures, but show differences throughout the lifecycle. GCPP exhibits predominantly linear relations at low and high burnup, but highly nonlinear relations at intermediate burnup due to abrupt shifts between models. Such behavior is largely undetectable to traditional regression or variance-based methods and demonstrates the utility of density-based methods.
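The contrast between correlation-based and variance-based measures mentioned above can be illustrated with a toy response containing a nonlinear term; the sketch below uses illustrative data (not LOTUS output) to compute Pearson coefficients and a crude binned first-order index.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative inputs/response: the second input enters quadratically, so a
# correlation-based measure misses it while a variance-based measure does not.
x = rng.normal(size=(5000, 3))
y = x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * rng.normal(size=5000)

pearson = [np.corrcoef(x[:, j], y)[0, 1] for j in range(3)]

def first_order_index(xj, y, bins=40):
    """Crude binned estimate of Var[E(y|x_j)] / Var(y) (first-order Sobol-like index)."""
    edges = np.quantile(xj, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(xj, edges[1:-1]), 0, bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
    weight = np.array([(idx == b).mean() for b in range(bins)])
    return float(np.sum(weight * (cond_mean - y.mean()) ** 2) / y.var())

sobol_like = [first_order_index(x[:, j], y) for j in range(3)]
print("Pearson:", np.round(pearson, 2), "first-order:", np.round(sobol_like, 2))
```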
A multiphysics and multiscale model for low frequency electromagnetic direct-chill casting
International Nuclear Information System (INIS)
Košnik, N; Guštin, A Z; Mavrič, B; Šarler, B
2016-01-01
Simulation and control of macrosegregation, deformation and grain size in low frequency electromagnetic (EM) direct-chill casting (LFEMC) is important for downstream processing. Accordingly, a multiphysics and multiscale model is developed for the solution of the Lorentz force, temperature, velocity, concentration, deformation and grain structure of LFEMC-processed aluminum alloys, with focus on axisymmetric billets. The mixture equations with the lever rule, a linearized phase diagram, and a stationary thermoelastic solid phase are assumed, together with the EM induction equation for the field imposed by the coil. An explicit diffuse-approximate meshless solution procedure [1] is used for solving the EM field, and the explicit local radial basis function collocation method [2] is used for solving the coupled transport phenomena and thermomechanics fields. Pressure-velocity coupling is performed by the fractional step method [3]. The point automata method with a modified KGT model is used to estimate the grain structure [4] in a post-processing mode. Thermal, mechanical, EM and grain structure outcomes of the model are demonstrated. A systematic study of the complicated influences of the process parameters, including the intensity and frequency of the electromagnetic field, can be investigated with the model. The meshless solution framework, with the simplest physical models implemented, will be further extended by including more sophisticated microsegregation and grain structure models, as well as a more realistic solid and solid-liquid phase rheology. (paper)
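As a minimal illustration of radial basis function collocation of the kind referenced above, the sketch below solves a 1D Poisson problem with a global Gaussian-RBF collocation system; the shape parameter is an illustrative choice, and the local/diffuse-approximate variants cited in the abstract operate on small stencils instead of one dense global matrix.

```python
import numpy as np

# Global Gaussian-RBF collocation for u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0.
# Accuracy and conditioning trade off with the shape parameter eps.
n = 25
x = np.linspace(0.0, 1.0, n)
eps = 10.0
phi = lambda r: np.exp(-(eps * r) ** 2)
phi_xx = lambda r: (4.0 * eps**4 * r**2 - 2.0 * eps**2) * np.exp(-(eps * r) ** 2)

R = x[:, None] - x[None, :]
A = phi_xx(R)                                     # interior rows: collocate the PDE operator
A[0, :], A[-1, :] = phi(R[0, :]), phi(R[-1, :])   # boundary rows: Dirichlet values
b = -np.pi**2 * np.sin(np.pi * x)
b[0] = b[-1] = 0.0

coeffs = np.linalg.solve(A, b)
u = phi(R) @ coeffs
print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
```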
International Nuclear Information System (INIS)
Yang, Xiaobin; Li, Xiuhong; He, Yafeng; Wang, Xiaojun; Xu, Bo
2017-01-01
Highlights: • The differential equation including temperature and magnetic field was derived for a long cylindrical superconductor. • Thermal stress and electromagnetic stress were studied at the same time under pulse field magnetizing. • The distributions of the magnetic field, the temperature and the stresses are studied and compared for two pulse fields of different duration. • The role thermal stress and electromagnetic stress play in the process of pulse field magnetizing is discussed. - Abstract: A multiphysics model for the numerical computation of the stresses, trapped field and temperature distribution of an infinitely long superconducting cylinder is proposed, based on which the stresses, including the thermal stresses and the mechanical stresses due to the Lorentz force, and the trapped fields in the superconductor subjected to pulsed magnetic fields are analyzed. By comparing the results under pulsed magnetic fields with different pulse durations, it is found that both the mechanical stress due to the electromagnetic force and the thermal stress due to the temperature gradient contribute to the total stress level in the superconductor. For pulsed magnetic fields with short durations, the thermal stress is the dominant contribution to the total stress, because the heat generated by AC loss builds up a significant temperature gradient in such short durations. However, for a pulsed field with a long duration the gradients of temperature and flux, as well as the maximal tensile stress, are much smaller. The results of this paper are meaningful for the design and manufacture of superconducting permanent magnets.
Li, Hua; Wang, Xiaogui; Yan, Guoping; Lam, K. Y.; Cheng, Sixue; Zou, Tao; Zhuo, Renxi
2005-03-01
In this paper, a novel multiphysics mathematical model is developed for simulation of the swelling equilibrium of ionized temperature-sensitive hydrogels with the volume phase transition, termed the multi-effect-coupling thermal-stimulus (MECtherm) model. This model consists of the steady-state Nernst-Planck equation, the Poisson equation and a swelling equilibrium governing equation based on Flory's mean-field theory, in which two types of polymer-solvent interaction parameters, as functions of temperature and polymer-network volume fraction, are specified with or without consideration of the hydrogen bond interaction. In order to examine the MECtherm model, which consists of nonlinear partial differential equations, a meshless Hermite-Cloud method is used for numerical solution of the one-dimensional swelling equilibrium of thermal-stimulus responsive hydrogels immersed in a bathing solution. The computed results are in very good agreement with experimental data for the variation of volume swelling ratio with temperature. The influences of the salt concentration and initial fixed-charge density on the variations of the volume swelling ratio of the hydrogels, the mobile ion concentrations and the electric potential of both the interior hydrogel and the exterior bathing solution are discussed in detail.
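For reference, the steady-state Nernst-Planck and Poisson equations named above can be written in their standard form (the paper's specific formulation may carry additional terms):

```latex
% Steady-state Nernst-Planck equation for each mobile ion species k
\nabla \cdot \left[ D_k \left( \nabla c_k + \frac{z_k F}{R T}\, c_k \nabla \psi \right) \right] = 0,
% coupled to the Poisson equation for the electric potential
\nabla^2 \psi = -\frac{F}{\varepsilon} \left( \sum_k z_k c_k + z_f c_f \right),
```

where c_k, z_k and D_k are the concentration, valence and diffusivity of ion species k, psi is the electric potential, and z_f c_f is the fixed-charge density of the hydrogel network.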
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
Energy Technology Data Exchange (ETDEWEB)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
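A minimal Jacobian-free Newton-Krylov example with a GMRES inner solver is sketched below for a single nonlinear diffusion problem; the conservative domain-decomposition interface treatment and the stochastic forcing of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Steady nonlinear diffusion d/dx( k(u) du/dx ) = 0 on [0, 1], u(0)=1, u(1)=0,
# with k(u) = 1 + u**2, solved by Jacobian-free Newton-Krylov with a GMRES
# inner solver. Only the JfNK machinery is illustrated here.
n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def residual(u_interior):
    u = np.concatenate(([1.0], u_interior, [0.0]))   # apply Dirichlet boundary values
    k = 1.0 + u**2
    k_face = 0.5 * (k[:-1] + k[1:])                  # face-averaged conductivity
    flux = k_face * np.diff(u) / h
    return np.diff(flux) / h                         # residual at the interior nodes

u0 = np.linspace(1.0, 0.0, n)[1:-1]                  # linear initial guess
u = newton_krylov(residual, u0, method='gmres', f_tol=1e-9)
print("midpoint value:", u[(n - 2) // 2])
```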
A templated approach for multi-physics modeling of hybrid energy systems in Modelica
Energy Technology Data Exchange (ETDEWEB)
Greenwood, Michael Scott [ORNL]; Cetiner, Sacit M. [ORNL]; Harrison, Thomas J. [ORNL]; Fugate, David [Oak Ridge National Laboratory (ORNL)]
2018-01-01
A prototypical hybrid energy system (HES) couples a primary thermal power generator (i.e., nuclear power plant) with one or more additional subsystems beyond the traditional balance of plant electricity generation system. The definition and architecture of an HES can be adapted based on the needs and opportunities of a given local market. For example, locations in need of potable water may be best served by coupling a desalination plant to the HES. A location near an oil refinery may have a need for emission-free hydrogen production. The flexible, multidomain capabilities of Modelica are being used to investigate the dynamics (e.g., thermal hydraulics and electrical generation/consumption) of such a hybrid system. This paper examines the simulation infrastructure created to enable the coupling of multiphysics subsystem models for HES studies. A demonstration of a tightly coupled nuclear hybrid energy system implemented using the Modelica based infrastructure is presented for two representative cases. An appendix is also included providing a step-by-step procedure for using the template-based infrastructure.
Directory of Open Access Journals (Sweden)
Jiajia Zheng
2014-02-01
Full Text Available A novel magnetorheological (MR) damper with a multistage piston and independent input currents is designed and analyzed. The equivalent magnetic circuit model is investigated along with the relation between the magnetic induction density in the working gap and the input currents of the electromagnetic coils. The finite element method (FEM) is used to analyze the distribution of the magnetic field through the MR fluid region. Considering the real situation, coupling equations are presented to analyze the electromagnetic-thermal-flow coupling problems. The software COMSOL is used to analyze the multiphysics, that is, the electromagnetics, thermal dynamics, and fluid mechanics. A measurement index involving the total damping force, the dynamic range, and the induction time needed for the magnetic coil is put forward to evaluate the performance of the novel multistage MR damper. The simulation results show that it is promising for applications under high velocity and works better when more electromagnetic coils are applied with input currents separately. Besides, in order to reduce energy consumption, it is recommended to apply more electromagnetic coils with relatively low currents based on the analysis of the pressure drop along the annular gap.
Design of a Modular Monolithic Implicit Solver for Multi-Physics Applications
Carton De Wiart, Corentin; Diosady, Laslo T.; Garai, Anirban; Burgess, Nicholas; Blonigan, Patrick; Ekelschot, Dirk; Murman, Scott M.
2018-01-01
The design of a modular multi-physics high-order space-time finite-element framework is presented together with its extension to allow monolithic coupling of different physics. One of the main objectives of the framework is to perform efficient high-fidelity simulations of capsule/parachute systems. This problem requires simulating multiple physics including, but not limited to, the compressible Navier-Stokes equations, the dynamics of a moving body with mesh deformations and adaptation, the linear shell equations, non-reflective boundary conditions and wall modeling. The solver is based on high-order space-time finite-element methods. Continuous, discontinuous and C1-discontinuous Galerkin methods are implemented, allowing one to discretize various physical models. Tangent and adjoint sensitivity analysis are also targeted in order to conduct gradient-based optimization, error estimation, mesh adaptation, and flow control, adding another layer of complexity to the framework. The decisions made to tackle these challenges are presented. The discussion focuses first on the "single-physics" solver and later on its extension to the monolithic coupling of different physics. The implementation of different physics modules, relevant to the capsule/parachute system, is also presented. Finally, examples of coupled computations are presented, paving the way to the simulation of the full capsule/parachute system.
Energy Technology Data Exchange (ETDEWEB)
Yang, Xiaobin, E-mail: yangxb@lzu.edu.cn; Li, Xiuhong; He, Yafeng; Wang, Xiaojun; Xu, Bo
2017-04-15
Highlights: • The differential equation including temperature and magnetic field was derived for a long cylindrical superconductor. • Thermal stress and electromagnetic stress were studied at the same time under pulse field magnetizing. • The distributions of the magnetic field, the temperature and the stresses are studied and compared for two pulse fields of different duration. • The role thermal stress and electromagnetic stress play in the process of pulse field magnetizing is discussed. - Abstract: A multiphysics model for the numerical computation of the stresses, trapped field and temperature distribution of an infinitely long superconducting cylinder is proposed, based on which the stresses, including the thermal stresses and the mechanical stresses due to the Lorentz force, and the trapped fields in the superconductor subjected to pulsed magnetic fields are analyzed. By comparing the results under pulsed magnetic fields with different pulse durations, it is found that both the mechanical stress due to the electromagnetic force and the thermal stress due to the temperature gradient contribute to the total stress level in the superconductor. For pulsed magnetic fields with short durations, the thermal stress is the dominant contribution to the total stress, because the heat generated by AC loss builds up a significant temperature gradient in such short durations. However, for a pulsed field with a long duration the gradients of temperature and flux, as well as the maximal tensile stress, are much smaller. The results of this paper are meaningful for the design and manufacture of superconducting permanent magnets.
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)
Parallelism and array processing
International Nuclear Information System (INIS)
Zacharov, V.
1983-01-01
Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)
Portable parallel programming in a Fortran environment
International Nuclear Information System (INIS)
May, E.N.
1989-01-01
Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network-based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a "nearly realistic" lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs
Parallel magnetic resonance imaging
International Nuclear Information System (INIS)
Larkman, David J; Nunes, Rita G
2007-01-01
Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
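The core SENSE unfolding step mentioned above reduces, for each group of aliased pixels, to a small least-squares solve with the coil sensitivities; the sketch below uses synthetic sensitivities and omits the noise covariance and regularization used in practice.

```python
import numpy as np

# SENSE-style unfolding for one pair of superimposed pixels (reduction factor R = 2).
# Sensitivities and data are synthetic; a real reconstruction repeats this solve
# for every pixel group and weights it by the measured noise covariance.
S = np.array([[0.9, 0.2],      # coil 1 sensitivity at the two aliased locations
              [0.3, 0.8],      # coil 2
              [0.5, 0.5]])     # coil 3
true_pixels = np.array([1.0, 0.4])
aliased = S @ true_pixels + 0.01 * np.random.default_rng(0).normal(size=3)

rho, *_ = np.linalg.lstsq(S, aliased, rcond=None)    # least-squares unfolding
print(rho)    # approximately recovers [1.0, 0.4]
```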
Energy Technology Data Exchange (ETDEWEB)
Deb, M.K.; Kennon, S.R.
1998-04-01
A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigations into parallel object-oriented (OO) numerics. The basic goal was to research and utilize the emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had the responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and guidance on OO technologies was Sandia's main expertise in this venture. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.
Parallel single-cell analysis microfluidic platform
van den Brink, Floris Teunis Gerardus; Gool, Elmar; Frimat, Jean-Philippe; Bomer, Johan G.; van den Berg, Albert; le Gac, Severine
2011-01-01
We report a PDMS microfluidic platform for parallel single-cell analysis (PaSCAl) as a powerful tool to decipher the heterogeneity found in cell populations. Cells are trapped individually in dedicated pockets, and thereafter, a number of invasive or non-invasive analysis schemes are performed.
The STAPL Parallel Graph Library
Harshvardhan; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence
2013-01-01
This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable
Atomistic to Continuum Multiscale and Multiphysics Simulation of NiTi Shape Memory Alloy
Gur, Sourav
(transformation temperature, phase fraction evolution kinetics due to temperature) are also demonstrated herein. Next, to couple and transfer the statistical information of the length-scale-dependent phase transformation process, multiscale/multiphysics methods are used. Here, the computational difficulty arises from the fact that the representative governing equations (i.e. different sub-methods such as molecular dynamics simulations, phase field simulations and continuum-level constitutive/material models) are only valid or can be implemented over a limited range of spatiotemporal scales. Therefore, in the present study, a wavelet-based multiscale coupling method is used, where simulation results (phase fraction evolution kinetics) from different sub-methods are linked in a concurrent multiscale coupling fashion. These multiscale/multiphysics simulation results are then used to develop/modify the macro/continuum-scale thermo-mechanical constitutive relations for NiTi SMA. Finally, the improved material model is used to model new devices, such as thermal diodes and smart dampers.
A self-taught artificial agent for multi-physics computational model personalization.
Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin
2016-12-01
Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. The full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Then Vito was applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and having a faster (up to seven times) convergence rate. Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model. Copyright © 2016. Published by Elsevier B.V.
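The reformulation as reinforcement learning can be illustrated with a toy tabular Q-learning agent that tunes a single parameter of a stand-in forward model; this is only a caricature of the idea, not the authors' Vito implementation, and all parameters are illustrative.

```python
import numpy as np

# Toy tabular Q-learning personalization of one parameter of a stand-in model.
# State: binned misfit between model output and the target observation.
# Actions: nudge the parameter down or up.
rng = np.random.default_rng(0)
target = 0.7
forward_model = np.tanh                    # stand-in computational model
actions = (-0.1, +0.1)
n_states = 21

def state_of(theta):
    misfit = np.clip(forward_model(theta) - target, -1.0, 1.0)
    return int(round((misfit + 1.0) / 2.0 * (n_states - 1)))

Q = np.zeros((n_states, len(actions)))
for episode in range(300):                 # off-line phase: explore the model
    theta = rng.uniform(-2.0, 2.0)
    for _ in range(30):
        s = state_of(theta)
        a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
        theta += actions[a]
        reward = -abs(forward_model(theta) - target)
        Q[s, a] += 0.2 * (reward + 0.9 * Q[state_of(theta)].max() - Q[s, a])

theta = 0.0                                # on-line phase: greedy personalization
for _ in range(50):
    theta += actions[int(Q[state_of(theta)].argmax())]
print("personalized parameter:", round(theta, 2),
      "model output:", round(float(forward_model(theta)), 3))
```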
Multiphysics modeling of two-phase film boiling within porous corrosion deposits
Energy Technology Data Exchange (ETDEWEB)
Jin, Miaomiao, E-mail: mmjin@mit.edu; Short, Michael, E-mail: hereiam@mit.edu
2016-07-01
Porous corrosion deposits on nuclear fuel cladding, known as CRUD, can cause multiple operational problems in light water reactors (LWRs). CRUD can cause accelerated corrosion of the fuel cladding, increase radiation fields and hence greater exposure risk to plant workers once activated, and induce a downward axial power shift causing an imbalance in core power distribution. In order to facilitate a better understanding of CRUD's effects, such as localized high cladding surface temperatures related to accelerated corrosion rates, we describe an improved, fully-coupled, multiphysics model to simulate heat transfer, chemical reactions and transport, and two-phase fluid flow within these deposits. Our new model features a reformed assumption of 2D, two-phase film boiling within the CRUD, correcting earlier models' assumptions of single-phase coolant flow with wick boiling under high heat fluxes. This model helps to better explain observed experimental values of the effective CRUD thermal conductivity. Finally, we propose a more complete set of boiling regimes, or a more detailed mechanism, to explain recent CRUD deposition experiments by suggesting the new concept of double dryout specifically in thick porous media with boiling chimneys. - Highlights: • A two-phase model of CRUD's effects on fuel cladding is developed and improved. • This model eliminates the formerly erroneous assumption of wick boiling. • Higher fuel cladding temperatures are predicted when accounting for two-phase flow. • Double-peaks in thermal conductivity vs. heat flux in experiments are explained. • A “double dryout” mechanism in CRUD is proposed based on the model and experiments.
Parallel-Processing Test Bed For Simulation Software
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
K.I.S.S. Parallel Coding (lecture 2)
CERN. Geneva
2018-01-01
K.I.S.S.ing parallel computing means, finally, loving it. Parallel computing will be approached in a theoretical and experimental way, using the most advanced and used C API: OpenMP. OpenMP is an open source project constantly developed and updated to hide the awful complexity of parallel coding in an awesome interface. The result is a tool which leaves plenty of space for clever solutions and terrific results in terms of efficiency and performance maximisation.
Directory of Open Access Journals (Sweden)
Khaled Sadek
2009-10-01
Full Text Available In this paper, the reliability of capacitive shunt RF MEMS switches has been investigated using three-dimensional (3D) coupled multiphysics finite element (FE) analysis. The coupled field analysis involved three consecutive multiphysics interactions. The first interaction is characterized as a two-way sequential electromagnetic (EM)-thermal field coupling. The second interaction represented a one-way sequential thermal-structural field coupling. The third interaction portrayed a two-way sequential structural-electrostatic field coupling. An automated substructuring algorithm was utilized to reduce the computational cost of the complicated coupled multiphysics FE analysis. The results of the substructured FE model with coupled field analysis are shown to be in good agreement with the outcome of previously published experimental and numerical studies. The current numerical results indicate that the pull-in voltage and the buckling temperature of the RF switch are functions of the microfabrication residual stress state, the switch operational frequency and the surrounding packaging temperature. Furthermore, the current results point out that by introducing proper mechanical approaches such as corrugated switches and through-holes in the switch membrane, it is possible to achieve reliable pull-in voltages at various operating temperatures. The performed analysis also shows that by controlling the mean and gradient residual stresses, generated during microfabrication, in conjunction with the proposed mechanical approaches, the power handling capability of RF MEMS switches can be increased at a wide range of operational frequencies. These design features of RF MEMS switches are of particular importance in applications where a high RF power (frequencies above 10 GHz) and large temperature variations are expected, such as in satellites and airplane condition monitoring.
Cacace, Mauro; Jacquey, Antoine B.
2017-09-01
Theory and numerical implementation describing groundwater flow and the transport of heat and solute mass in fully saturated fractured rocks with elasto-plastic mechanical feedbacks are developed. In our formulation, fractures are considered as being of lower dimension than the hosting deformable porous rock and we consider their hydraulic and mechanical apertures as scaling parameters to ensure continuous exchange of fluid mass and energy within the fracture-solid matrix system. The coupled system of equations is implemented in a new simulator code that makes use of a Galerkin finite-element technique. The code builds on a flexible, object-oriented numerical framework (MOOSE, Multiphysics Object Oriented Simulation Environment) which provides an extensive scalable parallel and implicit coupling to solve for the multiphysics problem. The governing equations of groundwater flow, heat and mass transport, and rock deformation are solved in a weak sense (either by classical Newton-Raphson or by free Jacobian inexact Newton-Krylow schemes) on an underlying unstructured mesh. Nonlinear feedbacks among the active processes are enforced by considering evolving fluid and rock properties depending on the thermo-hydro-mechanical state of the system and the local structure, i.e. degree of connectivity, of the fracture system. A suite of applications is presented to illustrate the flexibility and capability of the new simulator to address problems of increasing complexity and occurring at different spatial (from centimetres to tens of kilometres) and temporal scales (from minutes to hundreds of years).
Directory of Open Access Journals (Sweden)
M. Cacace
2017-09-01
Full Text Available Theory and numerical implementation describing groundwater flow and the transport of heat and solute mass in fully saturated fractured rocks with elasto-plastic mechanical feedbacks are developed. In our formulation, fractures are considered as being of lower dimension than the hosting deformable porous rock and we consider their hydraulic and mechanical apertures as scaling parameters to ensure continuous exchange of fluid mass and energy within the fracture–solid matrix system. The coupled system of equations is implemented in a new simulator code that makes use of a Galerkin finite-element technique. The code builds on a flexible, object-oriented numerical framework (MOOSE, Multiphysics Object Oriented Simulation Environment), which provides an extensive scalable parallel and implicit coupling to solve for the multiphysics problem. The governing equations of groundwater flow, heat and mass transport, and rock deformation are solved in a weak sense (either by classical Newton–Raphson or by free Jacobian inexact Newton–Krylow schemes) on an underlying unstructured mesh. Nonlinear feedbacks among the active processes are enforced by considering evolving fluid and rock properties depending on the thermo-hydro-mechanical state of the system and the local structure, i.e. degree of connectivity, of the fracture system. A suite of applications is presented to illustrate the flexibility and capability of the new simulator to address problems of increasing complexity and occurring at different spatial (from centimetres to tens of kilometres) and temporal scales (from minutes to hundreds of years).
SPINning parallel systems software
International Nuclear Information System (INIS)
Matlin, O.S.; Lusk, E.; McCune, W.
2002-01-01
We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin
Parallel programming with Python
Palach, Jan
2014-01-01
A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
Parallel processing of structural integrity analysis codes
International Nuclear Information System (INIS)
Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.
1996-01-01
Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. The first category, such as codes used for harmonic analysis and mechanistic fuel performance codes, does not require the parallelisation of individual modules. The second category of codes, such as conventional FEM codes, requires parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), parallel active column solver and substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab
International Nuclear Information System (INIS)
Huang, J H; Wang, X J; Wang, J
2016-01-01
The primary purpose of this paper is to propose a mathematical model of PLZT ceramic with coupled multi-physics fields, e.g. thermal, electric, mechanical and light field. To this end, the coupling relationships of multi-physics fields and the mechanism of some effects resulting in the photostrictive effect are analyzed theoretically, based on which a mathematical model considering coupled multi-physics fields is established. According to the analysis and experimental results, the mathematical model can explain the hysteresis phenomenon and the variation trend of the photo-induced voltage very well and is in agreement with the experimental curves. In addition, the PLZT bimorph is applied as an energy transducer for a photovoltaic–electrostatic hybrid actuated micromirror, and the relation of the rotation angle and the photo-induced voltage is discussed based on the novel photostrictive mathematical model. (paper)
Energy Technology Data Exchange (ETDEWEB)
Bonaccorsi, Th
2007-09-15
A Material Testing Reactor (MTR) makes it possible to irradiate material samples under intense neutron and photonic fluxes. These experiments are carried out in experimental devices localised in the reactor core or in periphery (reflector). Available physics simulation tools only treat, most of the time, one physics field in a very precise way. Multi-physic simulations of irradiation experiments therefore require a sequential use of several calculation codes and data exchanges between these codes: this corresponds to problems coupling. In order to facilitate multi-physic simulations, this thesis sets up a data model based on data-processing objects, called Technological Entities. This data model is common to all of the physics fields. It permits defining the geometry of an irradiation device in a parametric way and to associate information about materials to it. Numerical simulations are encapsulated into interfaces providing the ability to call specific functionalities with the same command (to initialize data, to launch calculations, to post-treat, to get results,... ). Thus, once encapsulated, numerical simulations can be re-used for various studies. This data model is developed in a SALOME platform component. The first application case made it possible to perform neutronic simulations (OSIRIS reactor and RJH) coupled with fuel behavior simulations. In a next step, thermal hydraulics could also be taken into account. In addition to the improvement of the calculation accuracy due to the physical phenomena coupling, the time spent in the development phase of the simulation is largely reduced and the possibilities of uncertainty treatment are under consideration. (author)
Parallelization methods study of thermal-hydraulics codes
International Nuclear Information System (INIS)
Gaudart, Catherine
2000-01-01
The variety of parallelization methods and machines leads to a wide selection for programmers. In this study we suggest, in an industrial context, some solutions drawn from the experience acquired with different parallelization methods. The study concerns several scientific codes which simulate a large variety of thermal-hydraulics phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to the whole set of applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes imposed itself as the natural candidate. On this particular part several parallelization methods have been used. From these developments one could estimate the work necessary for a non-specialist programmer to parallelize his application, and the impact of the development constraints. The different parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI and the communication libraries MPI and PVM. In order to test several methods on different applications and to respect the constraint of minimizing the modifications in the codes, a tool called SPS (Server of Parallel Solvers) was developed. We propose to describe the different constraints on the optimization of codes in an industrial context, to present the solutions provided by the tool SPS, to show the development of the linear solver part with the tested parallelization methods and lastly to compare the results against the imposed criteria. (author) [fr
Parallel Fast Legendre Transform
Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.
1998-01-01
We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were
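For orientation, the O(N^2) reference computation that the Driscoll-Healy algorithm accelerates is the quadrature-based discrete Legendre transform sketched below (illustrative, not the parallel implementation discussed above).

```python
import numpy as np
from numpy.polynomial import legendre

# O(N^2) reference discrete Legendre transform via Gauss-Legendre quadrature:
#   c_l = (2l + 1)/2 * sum_k w_k f(x_k) P_l(x_k).
# The Driscoll-Healy algorithm computes the same coefficients asymptotically faster.
N = 16
x, w = legendre.leggauss(N)
f = np.exp(x)                                      # sample function values at the nodes

coeffs = np.empty(N)
for l in range(N):
    P_l = legendre.legval(x, [0] * l + [1])        # P_l evaluated at the quadrature nodes
    coeffs[l] = 0.5 * (2 * l + 1) * np.sum(w * f * P_l)

# The degree-(N-1) expansion reproduces f at the nodes to machine precision
print(np.max(np.abs(legendre.legval(x, coeffs) - f)))
```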
Practical parallel programming
Bauer, Barr E
2014-01-01
This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.
Parallel hierarchical radiosity rendering
Energy Technology Data Exchange (ETDEWEB)
Carter, Michael [Iowa State Univ., Ames, IA (United States)
1993-07-01
In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
Parallel universes beguile science
2007-01-01
A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.
Energy Technology Data Exchange (ETDEWEB)
2017-04-04
A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
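The serial reference algorithm being parallelized is short; the sketch below shows standard k-means++ D²-weighted seeding in plain NumPy, with the distance update marked as the step the GPU/OpenMP/XMT versions distribute. It is an illustration of the algorithm, not the project's C++ code.

```python
import numpy as np

def kmeans_pp_seeds(data, k, rng):
    """Serial reference k-means++ seed selection (D^2-weighted sampling).

    The parallel versions described above distribute the distance update and the
    weighted sampling across GPU threads, OpenMP threads, or XMT hardware streams."""
    n = data.shape[0]
    seeds = [data[rng.integers(n)]]                       # first seed: uniform at random
    d2 = np.full(n, np.inf)
    for _ in range(k - 1):
        # distance of every point to its nearest seed so far (the parallelizable step)
        d2 = np.minimum(d2, ((data - seeds[-1]) ** 2).sum(axis=1))
        seeds.append(data[rng.choice(n, p=d2 / d2.sum())])
    return np.array(seeds)

rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 2))
print(kmeans_pp_seeds(points, 4, rng))
```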
International Nuclear Information System (INIS)
Gardes, D.; Volkov, P.
1981-01-01
A 5x3 cm² (timing only) and a 15x5 cm² (timing and position) parallel plate avalanche counter (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr
Parallel hierarchical global illumination
Energy Technology Data Exchange (ETDEWEB)
Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)
1997-10-08
Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2018-04-01
Appropriate choice of physics options among the many physics parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary due to geographical location, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble that comprises 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we not only evaluate temperature, but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No specific ensemble member showed the best performance explicitly for all events and all variables, considering all evaluation metrics. This study also found that the choice of planetary boundary layer (PBL) scheme had the largest influence, the radiation scheme had a moderate influence, and the microphysics scheme had the least influence on temperature simulations. The PBL and microphysics schemes were found to be more sensitive than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of the Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play a significant role, the Noah LSM showed better performance than the CLM4 and Noah-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although invariably region-specific, our results will be useful to WRF users investigating heatwave dynamics elsewhere.
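A 27-member ensemble of this kind is typically the Cartesian product of three options in each of three physics categories; the enumeration below is illustrative, and the scheme names are common WRF choices rather than necessarily the exact set used in the study.

```python
from itertools import product

# Illustrative enumeration of a 27-member physics ensemble
# (3 PBL x 3 radiation x 3 microphysics options).
pbl_schemes = ["YSU", "MYJ", "MYNN"]
radiation_schemes = ["RRTMG", "CAM", "RRTM/Dudhia"]
microphysics_schemes = ["WSM6", "Thompson", "Morrison"]

ensemble = [{"pbl": p, "radiation": r, "microphysics": m}
            for p, r, m in product(pbl_schemes, radiation_schemes, microphysics_schemes)]
print(len(ensemble), "members; first member:", ensemble[0])
```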
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement
Directory of Open Access Journals (Sweden)
Bo Yang
2016-07-01
Full Text Available A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All the above frequencies present a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s²), which verifies the multifunctional sensing characteristics of the hair sensor. Besides, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to the air flow and a II-shaped hair post can increase the sensitivity to the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can largely eliminate the influence of temperature on the measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post is significantly beneficial for improving the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by
Multi-scale and multi-physics model of the uterine smooth muscle with mechanotransduction.
Yochum, Maxime; Laforêt, Jérémy; Marque, Catherine
2018-02-01
Preterm labor is an important public health problem. However, the efficiency of the uterine muscle during labor is complex and still poorly understood. This work is a first step towards a model of the uterine muscle, including its electrical and mechanical components, to reach a better understanding of uterine synchronization. This model is proposed to investigate, by simulation, the possible role of mechanotransduction in the global synchronization of the uterus. The electrical diffusion indeed explains the local propagation of contractile activity, while tissue stretching may play a role in the synchronization of distant parts of the uterine muscle. This work proposes a multi-physics (electrical, mechanical) and multi-scale (cell, tissue, whole uterus) model, which is applied to a realistic 3D uterus mesh. This model includes electrical components at different scales: generation of action potentials at the cell level, electrical diffusion at the tissue level. It then links these electrical events to the mechanical behavior at the cellular level (via the intracellular calcium concentration) by simulating the force generated by each active cell. It thus computes an estimation of the intra uterine pressure (IUP) by integrating the forces generated by each active cell at the whole-uterus level, as well as the stretching of the tissue (by using a viscoelastic law for the tissue behavior). It finally includes, at the cellular level, stretch-activated channels (SACs) that create a feedback loop between the mechanical and the electrical behavior (mechanotransduction). The simulation of different activated regions of the uterus, which in this first "proof of concept" case are electrically isolated, permits the activation of inactive regions through the stretching (induced by the electrically active regions) computed at the whole-organ scale. This allows us to evidence the role of mechanotransduction in the global synchronization of the uterus. The
Johnson, S.; Chiaramonte, L.; Cruz, L.; Izadi, G.
2016-12-01
Advances in the accuracy and fidelity of numerical methods have significantly improved our understanding of coupled processes in unconventional reservoirs. However, such multi-physics models are typically characterized by many parameters and require exceptional computational resources to evaluate systems of practical importance, making these models difficult to use for field analyses or uncertainty quantification. One approach to remove these limitations is through targeted complexity reduction and field data constrained parameterization. For the latter, a variety of field data streams may be available to engineers and asset teams, including micro-seismicity from proximate sites, well logs, and 3D surveys, which can constrain possible states of the reservoir as well as the distributions of parameters. We describe one such workflow, using the Argos multi-physics code and requisite geomechanical analysis to parameterize the underlying models. We illustrate with a field study involving a constraint analysis of various field data and details of the numerical optimizations and model reduction to demonstrate how complex models can be applied to operation design in hydraulic fracturing operations, including selection of controllable completion and fluid injection design properties. The implication of this work is that numerical methods are mature and computationally tractable enough to enable complex engineering analysis and deterministic field estimates and to advance research into stochastic analyses for uncertainty quantification and value of information applications.
Konishi, Toshifumi; Yamane, Daisuke; Matsushima, Takaaki; Masu, Kazuya; Machida, Katsuyuki; Toshiyoshi, Hiroshi
2014-01-01
This paper reports the design and evaluation results of a capacitive CMOS-MEMS sensor that consists of the proposed sensor circuit and a capacitive MEMS device implemented on the circuit. To design a capacitive CMOS-MEMS sensor, a multi-physics simulation of the electromechanical behavior of both the MEMS structure and the sensing LSI was carried out simultaneously. In order to verify the validity of the design, we applied the capacitive CMOS-MEMS sensor to a MEMS accelerometer implemented by the post-CMOS process onto a 0.35-µm CMOS circuit. The experimental results of the CMOS-MEMS accelerometer exhibited good agreement with the simulation results within the input acceleration range between 0.5 and 6 G (1 G = 9.8 m/s2), corresponding to output voltages between 908.6 and 915.4 mV, respectively. Therefore, we have confirmed that our capacitive CMOS-MEMS sensor and the multi-physics simulation will be a beneficial method for realizing integrated CMOS-MEMS technology.
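A rough back-of-the-envelope check (a derived illustration, not a figure from the paper): if the response over the quoted range is approximately linear, the implied small-signal sensitivity is on the order of 1 mV/G.

```python
# Hypothetical sensitivity estimate from the numbers quoted in the abstract:
# 0.5 G -> 908.6 mV and 6 G -> 915.4 mV, assuming an approximately linear response.
v_low, v_high = 908.6e-3, 915.4e-3   # output voltages [V]
g_low, g_high = 0.5, 6.0             # input accelerations [G]

sensitivity = (v_high - v_low) / (g_high - g_low)   # V per G
print(f"approximate sensitivity: {sensitivity * 1e3:.2f} mV/G")   # ~1.24 mV/G
```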
Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong
2015-10-01
This paper presents multi-physics modeling of an MR absorber considering the magnetic hysteresis to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is used and implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core loss (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A group of impulsive tests was performed for the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.
Multi-physics design and analyses of long life reactors for lunar outposts
Schriener, Timothy M.
event of a launch abort accident. Increasing the amount of fuel in the reactor core, and hence its operational life, would be possible by launching the reactor unfueled and fueling it on the Moon. Such a reactor would, thus, not be subject to launch criticality safety requirements. However, loading the reactor with fuel on the Moon presents a challenge, requiring special designs of the core and the fuel elements, which lend themselves to fueling on the lunar surface. This research investigates examples of both a solid core reactor that would be fueled at launch as well as an advanced concept which could be fueled on the Moon. Increasing the operational life of a reactor fueled at launch is exercised for the NaK-78 cooled Sectored Compact Reactor (SCoRe). A multi-physics design and analyses methodology is developed which iteratively couples together detailed Monte Carlo neutronics simulations with 3-D Computational Fluid Dynamics (CFD) and thermal-hydraulics analyses. Using this methodology the operational life of this compact, fast spectrum reactor is increased by reconfiguring the core geometry to reduce neutron leakage and parasitic absorption, for the same amount of HEU in the core, and meeting launch safety requirements. The multi-physics analyses determine the impacts of the various design changes on the reactor's neutronics and thermal-hydraulics performance. The option of increasing the operational life of a reactor by loading it on the Moon is exercised for the Pellet Bed Reactor (PeBR). The PeBR uses spherical fuel pellets and is cooled by He-Xe gas, allowing the reactor core to be loaded with fuel pellets and charged with working fluid on the lunar surface. The performed neutronics analyses ensure the PeBR design achieves a long operational life, and develops safe launch canister designs to transport the spherical fuel pellets to the lunar surface. The research also investigates loading the PeBR core with fuel pellets on the Moon using a transient Discrete
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
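A minimal sketch of the two-phase scheme this claim describes, using Python's multiprocessing pool as a stand-in for the plurality of processors; the one-dimensional grid, the interval "objects" and the bounding test are simplified placeholders, not the patented implementation.

```python
# Sketch of the two-phase parallel grid population described above (illustrative only):
# phase 1 - processes share the objects and find which grid portions bound each one;
# phase 2 - each process owns one grid portion and populates it with its objects.
from multiprocessing import Pool

N = 4                                   # number of processes / grid portions
GRID_MIN, GRID_MAX = 0.0, 100.0
PORTION_WIDTH = (GRID_MAX - GRID_MIN) / N

# Toy "objects": (id, start, end) intervals along one axis.
objects = [(i, i * 2.0, i * 2.0 + 5.0) for i in range(40)]

def portions_for_object(obj):
    """Phase 1: return (portion_index, object) for every portion the object overlaps."""
    oid, lo, hi = obj
    first = int(lo // PORTION_WIDTH)
    last = min(int(hi // PORTION_WIDTH), N - 1)
    return [(p, obj) for p in range(first, last + 1)]

def populate_portion(args):
    """Phase 2: one process builds the object list for its own grid portion."""
    portion, objs = args
    return portion, sorted(o[0] for o in objs)

if __name__ == "__main__":
    with Pool(N) as pool:
        # Phase 1: the objects are divided among the worker processes.
        pairs = [p for chunk in pool.map(portions_for_object, objects) for p in chunk]
        # Regroup by portion, then phase 2: one distinct portion per process.
        by_portion = {p: [] for p in range(N)}
        for p, obj in pairs:
            by_portion[p].append(obj)
        populated = dict(pool.map(populate_portion, list(by_portion.items())))
    print(populated[0][:5])   # ids of the first few objects in portion 0
```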
Ultrascalable petaflop parallel supercomputer
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY
2010-07-20
A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
DEFF Research Database (Denmark)
Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert
Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...
PARALLEL MOVING MECHANICAL SYSTEMS
Directory of Open Access Journals (Sweden)
Florian Ion Tiberius Petrescu
2014-09-01
Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest such systems: fast, solid and precise. The work outlines a few main elements of Stewart platforms. It begins with the geometry of the platform and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile platform is then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements which translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform as part 7) and one fixed part.
Xyce parallel electronic simulator.
Energy Technology Data Exchange (ETDEWEB)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.
2010-05-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.
Betchov, R
2012-01-01
Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Parallel Prediction of Stock Volatility
Directory of Open Access Journals (Sweden)
Priscilla Jenq
2017-10-01
Full Text Available Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time, and if these highs and lows fluctuate wildly, then it is considered a highly volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns that they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we will use historic stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we will use the S&P 500 as a benchmark to compute the risk. We will also use artificial neural networks as a tool to predict volatilities for a specific time frame that will be set when we configure this neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results. In fact, we may be able to find the best configuration of a neural network to compute volatilities. We will implement this system using the parallel approach. The system can be used as a tool for investors to allocate and hedge assets.
Design Patterns: establishing a discipline of parallel software engineering
CERN. Geneva
2010-01-01
Many-core processors present us with a software challenge. We must turn our serial code into parallel code. To accomplish this wholesale transformation of our software ecosystem, we must define established practice in parallel programming and then develop tools to support that practice. This leads to design patterns supported by frameworks optimized at runtime with advanced autotuning compilers. In this talk I provide an update of my ongoing research with the ParLab at UC Berkeley to realize this vision. In particular, I will describe our draft parallel pattern language, our early experiments with software frameworks, and the associated runtime optimization tools. About the speaker: Tim Mattson is a parallel programmer (Ph.D. Chemistry, UCSC, 1985). He does linear algebra, finds oil, shakes molecules, solves differential equations, and models electrons in simple atomic systems. He has spent his career working with computer scientists to make sure the needs of parallel applications programmers are met. Tim has ...
Multi-physics and multi-scale characterization of shale anisotropy
Sarout, J.; Nadri, D.; Delle Piane, C.; Esteban, L.; Dewhurst, D.; Clennell, M. B.
2012-12-01
Shales are the most abundant sedimentary rock type in the Earth's shallow crust. In the past decade or so, they have attracted increased attention from the petroleum industry as reservoirs, as well as more traditionally for their sealing capacity for hydrocarbon/CO2 traps or underground waste repositories. The effectiveness of both fundamental and applied shale research is currently limited by (i) the extreme variability of physical, mechanical and chemical properties observed for these rocks, and by (ii) the scarce data currently available. The variability in observed properties is poorly understood due to many factors that are often irrelevant for other sedimentary rocks. The relationships between these properties and the petrophysical measurements performed at the field and laboratory scales are not straightforward, translating to a scale dependency typical of shale behaviour. In addition, the complex and often anisotropic micro-/meso-structures of shales give rise to a directional dependency of some of the measured physical properties that are tensorial by nature such as permeability or elastic stiffness. Currently, fundamental understanding of the parameters controlling the directional and scale dependency of shale properties is far from complete. Selected results of a multi-physics laboratory investigation of the directional and scale dependency of some critical shale properties are reported. In particular, anisotropic features of shale micro-/meso-structures are related to the directional-dependency of elastic and fluid transport properties: - Micro-/meso-structure (μm to cm scale) characterization by electron microscopy and X-ray tomography; - Estimation of elastic anisotropy parameters on a single specimen using elastic wave propagation (cm scale); - Estimation of the permeability tensor using the steady-state method on orthogonal specimens (cm scale); - Estimation of the low-frequency diffusivity tensor using NMR method on orthogonal specimens (example
Linear parallel processing machines I
Energy Technology Data Exchange (ETDEWEB)
Von Kunze, M
1984-01-01
As is well known, non-context-free grammars for generating formal languages happen to be of a certain intrinsic computational power that presents serious difficulties to efficient parsing algorithms as well as for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB → A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automata and their 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.
Parallelization and automatic data distribution for nuclear reactor simulations
Energy Technology Data Exchange (ETDEWEB)
Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
Parallelization and automatic data distribution for nuclear reactor simulations
International Nuclear Information System (INIS)
Liebrock, L.M.
1997-01-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed
Resistor Combinations for Parallel Circuits.
McTernan, James P.
1978-01-01
To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
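The underlying rule is easy to reproduce: for resistors in parallel, 1/R_total = Σ 1/R_i, and the tables collect combinations whose total happens to be a whole number. A small illustrative search along those lines (the candidate values are arbitrary, not taken from the article):

```python
# Illustrative search for parallel resistor pairs whose total resistance is a whole number.
# 1/R_total = 1/R1 + 1/R2  =>  R_total = R1*R2 / (R1 + R2)
from itertools import combinations

candidate_values = range(10, 200, 10)          # arbitrary resistor values in ohms

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

whole_number_pairs = [
    (r1, r2, parallel(r1, r2))
    for r1, r2 in combinations(candidate_values, 2)
    if parallel(r1, r2).is_integer()
]

for r1, r2, total in whole_number_pairs[:5]:
    print(f"{r1} || {r2} = {total:.0f} ohms")     # e.g. 20 || 30 = 12 ohms
```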
Parallel External Memory Graph Algorithms
DEFF Research Database (Denmark)
Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari
2010-01-01
In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
Parallel inter channel interaction mechanisms
International Nuclear Information System (INIS)
Jovic, V.; Afgan, N.; Jovic, L.
1995-01-01
Parallel channel interactions are examined. Results of phenomenon analysis and the mechanisms of parallel channel interaction are presented from experimental research on nonstationary flow regimes in three parallel vertical channels, under adiabatic conditions, for single-phase fluid and two-phase mixture flow. (author)
International Nuclear Information System (INIS)
Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G
2007-01-01
The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
A Parallel Butterfly Algorithm
Poulson, Jack
2014-02-04
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Fast parallel event reconstruction
CERN. Geneva
2010-01-01
On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
DEFF Research Database (Denmark)
Lepech, M.; Michel, Alexander; Geiker, Mette
2016-01-01
Using a newly developed multi-physics transport, corrosion, and cracking model, which models these phenomena as coupled physiochemical processes, the role of HPFRCC crack control and formation in regulating steel reinforcement corrosion is investigated. This model describes the transport of water and chemical species, the electric potential distribution in the HPFRCC, the electrochemical propagation of steel corrosion, and the role of microcracks in the HPFRCC material. Numerical results show that the reduction in anode and cathode size on the reinforcing steel surface, due to multiple crack formation...... and widespread depassivation, are the mechanism behind experimental results of HPFRCC steel corrosion studies found in the literature. Such results provide an indication of the fundamental mechanisms by which steel reinforced HPFRCC materials may be more durable than traditional reinforced concrete and other......
Pawar, Sumedh; Sharma, Atul
2018-01-01
This work presents a mathematical model and solution methodology for a multiphysics engineering problem: arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electrodynamics, simulated by solution of the coupled Navier-Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated by excellent agreement with published results. The developed mathematical model and the user-defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.
International Nuclear Information System (INIS)
García-Salaberri, Pablo A.; Vera, Marcos
2016-01-01
A multiphysics model for liquid-feed Direct Methanol Fuel Cells is presented. The model accounts for two-dimensional (2D) across-the-channel anisotropic mass and charge transport in the anode and cathode Gas Diffusion Layers (GDLs), including the effect of GDL assembly compression and electrical contact resistances at the Bipolar Plate (BPP) and membrane interfaces. A one-dimensional (1D) across-the-membrane model is used to describe local species diffusion through the microporous layers, methanol/water crossover, proton transport, and electrochemical reactions, thereby coupling both GDL sub-models. The 2D/1D model is extended to the third dimension and supplemented with 1D descriptions of the flow channels to yield a 3D/1D + 1D model that is successfully validated. A parametric study is then conducted on the 2D/1D model to examine the effect of operating conditions on cell performance. The results show that an optimum methanol concentration exists that maximizes power output due to the trade-off between anode polarization and cathode mixed overpotential. For fixed methanol concentration, cell performance is largely affected by the oxygen supply rate, cell temperature, and liquid/gas saturation levels. There is also an optimal GDL compression due to the trade-off between ohmic and concentration losses, which strongly depends on BPP material and, more weakly, on the actual operating conditions. - Highlights: • A multiphysics model for liquid-feed DMFCs is presented. • GDL anisotropic transport, assembly compression, and ohmic contact resistances are considered. • The model is successfully validated against previous experimental data. • Optimum methanol concentrations, GDL compressions, and operating temperatures are reported. • Oxygen-starved conditions with spontaneous hydrogen evolution in the anode are also considered.
International Nuclear Information System (INIS)
DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.
2010-01-01
The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement
Domain decomposition parallel computing for transient two-phase flow of nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)
2016-05-15
KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal hydraulics analysis of Light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message passing interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods with the METIS library. For effective memory management, the Compressed sparse row (CSR) format is adopted, which is one of the methods to represent a sparse asymmetric matrix. The CSR format stores only the non-zero values and their positions (row and column). By performing the verification for the fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for a fine mesh system, three different scales of mesh system are considered: 40000 meshes for the coarse mesh system, 320000 meshes for the mid-size mesh system, and 2560000 meshes for the fine mesh system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: diagonal and incomplete LU preconditioners. To enhance parallel performance, hybrid OpenMP and MPI parallel computing for the pressure solver was examined. It is revealed that the scalability of the hybrid calculation was enhanced for multi-core parallel computation.
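A small illustration of the Compressed Sparse Row idea mentioned above (not CUPID code): only the non-zero values, their column indices, and the row start offsets are stored, and a matrix-vector product only touches those stored entries.

```python
# Illustrative CSR (Compressed Sparse Row) storage of a sparse asymmetric matrix.
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[4.0, 0.0, 0.0, 1.0],
                  [0.0, 3.0, 0.0, 0.0],
                  [2.0, 0.0, 5.0, 0.0]])

A = csr_matrix(dense)
print(A.data)     # [4. 1. 3. 2. 5.]   non-zero values only
print(A.indices)  # [0 3 1 0 2]        column index of each stored value
print(A.indptr)   # [0 2 3 5]          where each row starts within `data`

# A sparse matrix-vector product only visits the stored non-zeros:
x = np.ones(4)
print(A @ x)      # [5. 3. 7.]
```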
SALOME. A software integration platform for multi-physics, pre-processing and visualisation
International Nuclear Information System (INIS)
Bergeaud, Vincent; Lefebvre, Vincent
2010-01-01
In order to ease the development of applications integrating simulation codes, CAD modelers and post-processing tools, CEA and EDF R and D have invested in the SALOME platform, a tool dedicated to the environment of scientific codes. The platform comes in the shape of a toolbox which offers functionalities for CAD, meshing, code coupling, visualization, and GUI development. These tools can be combined to create integrated applications that make the scientific codes easier to use and well interfaced with their environment, be it other codes, CAD and meshing tools, or visualization software. Many projects in CEA and EDF R and D now use SALOME, bringing technical coherence to the software suites of our institutions. (author)
Parallel Polarization State Generation.
She, Alan; Capasso, Federico
2016-05-17
The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
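A toy numerical contrast (not the authors' implementation) between the serial picture, a product of polarization matrices applied in sequence, and the parallel picture, a weighted sum of matrices whose outputs are beam-combined. The Mueller matrices below are standard textbook polarizer matrices; the weights are illustrative stand-ins for the modulation applied to the spatially separated components.

```python
# Serial (matrix product) vs parallel (weighted matrix sum) composition of
# polarization elements acting on a Stokes vector. Weights are illustrative.
import numpy as np

S_in = np.array([1.0, 1.0, 0.0, 0.0])          # horizontally polarized input (Stokes)

# Ideal linear polarizers at 0 deg and 45 deg (standard Mueller matrices).
P0 = 0.5 * np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]], dtype=float)
P45 = 0.5 * np.array([[1, 0, 1, 0],
                      [0, 0, 0, 0],
                      [1, 0, 1, 0],
                      [0, 0, 0, 0]], dtype=float)

# Serial architecture: elements traversed one after another -> product of matrices.
S_serial = P45 @ P0 @ S_in

# Parallel architecture: beam split, components modulated independently,
# then recombined -> weighted sum of matrices.
w0, w45 = 0.7, 0.3
S_parallel = (w0 * P0 + w45 * P45) @ S_in

print("serial:  ", S_serial)
print("parallel:", S_parallel)
```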
Out-of-order parallel discrete event simulation for electronic system-level design
Chen, Weiwei
2014-01-01
This book offers readers a set of new approaches, tools and techniques for facing the challenges of parallelization in embedded system design. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifyin
About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems
Directory of Open Access Journals (Sweden)
Loredana MOCEAN
2009-01-01
Full Text Available In recent years, efforts have been made to delineate a stable and unitary frame in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of these efforts. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.
Sight Application Analysis Tool
Energy Technology Data Exchange (ETDEWEB)
Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-09-17
The scale and complexity of scientific applications makes it very difficult to optimize, debug and extend them to support new capabilities. We have developed a tool that supports developers’ efforts to understand the logical flow of their applications and interactions between application components and hardware in a way that scales with application complexity and parallelism.
Parallel and distributed processing in two SGBDS: A case study
Francisco Javier Moreno; Nataly Castrillón Charari; Camilo Taborda Zuluaga
2017-01-01
Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that allow applying these characteristics are some Data Base Management Systems (DBMS), such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study where we evaluate the performance of an SQL query in two of these DBMS. The evaluation is done through various forms of data distribution in a computer network with different degrees of parallelism. ...
Parallel Framework for Cooperative Processes
Directory of Open Access Journals (Sweden)
Mitică Craus
2005-01-01
Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and the work should be possible to split between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
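A bare-bones master-slave skeleton in the spirit described above (work split into cycles, message passing between a master and its slaves); this is an illustrative sketch using Python's multiprocessing rather than the framework's own API, and the per-task "work" is a placeholder for one unit of an ACO or SNF cycle.

```python
# Minimal master-slave skeleton; the task payload is a placeholder for real cycle work.
from multiprocessing import Process, Queue

def slave(task_q: Queue, result_q: Queue) -> None:
    while True:
        task = task_q.get()
        if task is None:                 # poison pill: master tells the slave to stop
            break
        result_q.put(task * task)        # placeholder for one unit of cycle work

if __name__ == "__main__":
    n_slaves, n_cycles, tasks_per_cycle = 4, 3, 8
    task_q, result_q = Queue(), Queue()
    slaves = [Process(target=slave, args=(task_q, result_q)) for _ in range(n_slaves)]
    for s in slaves:
        s.start()

    for cycle in range(n_cycles):        # the algorithm runs in cycles
        for task in range(tasks_per_cycle):   # master splits the cycle's work
            task_q.put(task)
        results = [result_q.get() for _ in range(tasks_per_cycle)]
        print(f"cycle {cycle}: {sorted(results)}")

    for _ in slaves:                     # shut the slaves down
        task_q.put(None)
    for s in slaves:
        s.join()
```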
International Nuclear Information System (INIS)
Bodey, Isaac T.; Curtis, Franklin G.; Arimilli, Rao V.; Ekici, Kivanc; Freels, James D.
2015-01-01
The findings presented in this report are the results of a five-year effort led by the RRD Division of ORNL, which is focused on research and development toward the conversion of the High Flux Isotope Reactor (HFIR) fuel from high-enriched uranium (HEU) to low-enriched uranium (LEU). This report focuses on the tasks accomplished by the University of Tennessee Knoxville (UTK) team from the Department of Mechanical, Aerospace, and Biomedical Engineering (MABE), which provided expert support in multiphysics modeling of complex problems associated with the LEU conversion of the HFIR reactor. The COMSOL software was used as the main computational modeling tool, whereas Solidworks was also used in support of computer-aided-design (CAD) modeling of the proposed LEU fuel design. The UTK research has been governed by a statement of work (SOW), which was updated annually to clearly define the specific tasks reported herein. Ph.D. student Isaac T. Bodey has focused on heat transfer and fluid flow modeling issues and has been aided by his major professor, Dr. Rao V. Arimilli. Ph.D. student Franklin G. Curtis has been focusing on modeling the fluid-structure interaction (FSI) phenomena caused by the mechanical forces acting on the fuel plates, which in turn affect the fluid flow between the fuel plates; ultimately, the heat transfer is also affected by the FSI changes. Franklin Curtis has been aided by his major professor, Dr. Kivanc Ekici. M.Sc. student Adam R. Travis has focused on two major areas of research: (1) accurate CAD modeling of the proposed LEU plate design, and (2) reduction of the model complexity and dimensionality through interdimensional coupling of the fluid flow and heat transfer for the HFIR plate geometry. Adam Travis is also aided by his major professor, Dr. Kivanc Ekici. We must note that the UTK team, and particularly the graduate students, have been in very close collaboration with Dr. James D. Freels (ORNL technical monitor and mentor) and have
Clark, Martyn; Samaniego, Luis; Freer, Jim
2014-05-01
Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic
Energy Technology Data Exchange (ETDEWEB)
Bodey, Isaac T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curtis, Franklin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Arimilli, Rao V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ekici, Kivanc [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Freels, James D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-11-01
The findings presented in this report are the results of a five-year effort led by the RRD Division of ORNL, which is focused on research and development toward the conversion of the High Flux Isotope Reactor (HFIR) fuel from high-enriched uranium (HEU) to low-enriched uranium (LEU). This report focuses on the tasks accomplished by the University of Tennessee Knoxville (UTK) team from the Department of Mechanical, Aerospace, and Biomedical Engineering (MABE), which provided expert support in multiphysics modeling of complex problems associated with the LEU conversion of the HFIR reactor. The COMSOL software was used as the main computational modeling tool, whereas Solidworks was also used in support of computer-aided-design (CAD) modeling of the proposed LEU fuel design. The UTK research has been governed by a statement of work (SOW), which was updated annually to clearly define the specific tasks reported herein. Ph.D. student Isaac T. Bodey has focused on heat transfer and fluid flow modeling issues and has been aided by his major professor, Dr. Rao V. Arimilli. Ph.D. student Franklin G. Curtis has been focusing on modeling the fluid-structure interaction (FSI) phenomena caused by the mechanical forces acting on the fuel plates, which in turn affect the fluid flow between the fuel plates; ultimately, the heat transfer is also affected by the FSI changes. Franklin Curtis has been aided by his major professor, Dr. Kivanc Ekici. M.Sc. student Adam R. Travis has focused on two major areas of research: (1) accurate CAD modeling of the proposed LEU plate design, and (2) reduction of the model complexity and dimensionality through interdimensional coupling of the fluid flow and heat transfer for the HFIR plate geometry. Adam Travis is also aided by his major professor, Dr. Kivanc Ekici. We must note that the UTK team, and particularly the graduate students, have been in very close collaboration with Dr. James D. Freels (ORNL technical monitor and mentor) and have
Parallel interactive data analysis with PROOF
International Nuclear Information System (INIS)
Ballintijn, Maarten; Biskup, Marek; Brun, Rene; Canal, Philippe; Feichtinger, Derek; Ganis, Gerardo; Kickinger, Guenter; Peters, Andreas; Rademakers, Fons
2006-01-01
The Parallel ROOT Facility, PROOF, enables the analysis of much larger data sets on a shorter time scale. It exploits the inherent parallelism in data of uncorrelated events via a multi-tier architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. The system provides transparent and interactive access to gigabytes today. Being part of the ROOT framework PROOF inherits the benefits of a performant object storage system and a wealth of statistical and visualization tools. This paper describes the data analysis model of ROOT and the latest developments on closer integration of PROOF into that model and the ROOT user environment, e.g. support for PROOF-based browsing of trees stored remotely, and the popular TTree::Draw() interface. We also outline the ongoing developments aimed to improve the flexibility and user-friendliness of the system
A parallel robot to assist vitreoretinal surgery
Energy Technology Data Exchange (ETDEWEB)
Nakano, Taiga; Sugita, Naohiko; Mitsuishi, Mamoru [University of Tokyo, School of Engineering, Tokyo (Japan); Ueta, Takashi; Tamaki, Yasuhiro [University of Tokyo, Graduate School of Medicine, Tokyo (Japan)
2009-11-15
This paper describes the development and evaluation of a parallel prototype robot for vitreoretinal surgery, where physiological hand tremor limits performance. The manipulator was specifically designed to meet requirements such as size, precision, and sterilization; it has a six-degree-of-freedom parallel architecture and provides positioning accuracy with micrometer resolution within the eye. The manipulator is controlled by an operator with a "master manipulator" consisting of multiple joints. Results of the in vitro experiments revealed that, when compared to the manual procedure, a higher stability and accuracy of tool positioning could be achieved using the prototype robot. This microsurgical system that we have developed has superior operability as compared to the traditional manual procedure and has sufficient potential to be used clinically for vitreoretinal surgery. (orig.)
Impact analysis on a massively parallel computer
International Nuclear Information System (INIS)
Zacharia, T.; Aramayo, G.A.
1994-01-01
Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper
Parallel Relational Universes – experiments in modularity
DEFF Research Database (Denmark)
Pagliarini, Luigi; Lund, Henrik Hautop
2015-01-01
We here describe Parallel Relational Universes, an artistic method used for the psychological analysis of group dynamics. The design of the artistic system, which mediates group dynamics, emerges from our studies of modular playware and remixing playware. Inspired by remixing modular playware, where users remix samples in the form of physical and functional modules, we created an artistic instantiation of such a concept with the Parallel Relational Universes, allowing arts alumni to remix artistic expressions. Here, we report the data that emerged from a first pre-test, run with gymnasium alumni. We then report both the artistic and the psychological findings. We discuss possible variations of such an instrument. Between an art piece and a psychological test, at a first cognitive analysis, it seems to be a promising research tool...
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
DEFF Research Database (Denmark)
Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.
2015-01-01
The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti...... about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which......
Parallel consensual neural networks.
Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H
1997-01-01
A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
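A schematic rendering of the consensual idea (not the original PCNN code): several stage classifiers each see a differently transformed copy of the input, and their outputs are combined with weights to reach a consensual decision. Scikit-learn MLPs stand in for the stage neural networks, and the transforms and weighting rule are simplified placeholders.

```python
# Schematic of a consensual ensemble of stage networks on transformed inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# Simple data transforms standing in for the PCNN input transforms.
transforms = [lambda a: a, lambda a: a ** 3, lambda a: np.tanh(a)]

# Train one stage network per transformed copy of the data.
stages = []
for t in transforms:
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(t(X), y)
    stages.append((t, clf))

# Weight each stage (here crudely, by its training accuracy) and form the consensus.
weights = np.array([clf.score(t(X), y) for t, clf in stages])
weights /= weights.sum()

probs = sum(w * clf.predict_proba(t(X)) for w, (t, clf) in zip(weights, stages))
consensus = probs.argmax(axis=1)
print("consensual training accuracy:", (consensus == y).mean())
```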
An Expert System for the Development of Efficient Parallel Code
Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.
A Parallel Particle Swarm Optimizer
National Research Council Canada - National Science Library
Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D
2003-01-01
.... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...
Patterns for Parallel Software Design
Ortega-Arjona, Jorge Luis
2010-01-01
Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin
DEFF Research Database (Denmark)
Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo
2013-01-01
... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral ...
An Automatic Instruction-Level Parallelization of Machine Code
Directory of Open Access Journals (Sweden)
MARINKOVIC, V.
2018-02-01
Full Text Available Prevailing multicores and novel manycores have made a great challenge of the modern day: parallelization of embedded software that is still written as sequential. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed, which uses the register names after SSA to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. The sequential consistency is verified and the validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.
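As a rough illustration of the scheduling stage described above (the distribution of already-identified independent code blocks over cores), the sketch below uses a longest-processing-time greedy assignment. It is only an assumption standing in for METIS partitioning; block names and cycle costs are invented.

```python
# Hedged sketch: once independent blocks are found (via SSA-renamed registers in
# the paper), they must be spread over cores with good load balance.  A greedy
# longest-processing-time heuristic stands in here for the METIS partitioner.
import heapq

def schedule_blocks(block_costs, num_cores):
    """Assign independent blocks (cost = estimated cycles) to cores."""
    heap = [(0, core) for core in range(num_cores)]   # (accumulated load, core id)
    heapq.heapify(heap)
    assignment = {}
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(heap)              # least-loaded core so far
        assignment[block] = core
        heapq.heappush(heap, (load + cost, core))
    return assignment

blocks = {"B0": 120, "B1": 80, "B2": 75, "B3": 40, "B4": 35, "B5": 20}
print(schedule_blocks(blocks, num_cores=2))
```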
Directory of Open Access Journals (Sweden)
Sulin Garro Acón
2012-11-01
Full Text Available In this study, the heat transfer of three desktop-computer heat sinks was analyzed. The objective of these heat sinks is to avoid overheating of the computer's processing unit and the corresponding reduction in the unit's service life. The heat sinks were modeled using COMSOL Multiphysics with the actual dimensions of the devices, and heat generation was modeled with a point source. The heat sink designs were then modified to achieve a lower temperature at the hottest location on the heat sink. The result was a temperature reduction in the range of 5-78 kelvin, obtained by making feasible design variations such as reducing the thickness of the heat exchanger fins and increasing their number. This demonstrates that there is room to develop improved heat sink designs that do not require more material but rather better engineering. The work began as part of the course CM-4101 Modelización y Simulación.
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
International Nuclear Information System (INIS)
Richard, Joshua; Galloway, Jack; Fensin, Michael; Trellue, Holly
2015-01-01
Highlights: • A modular mapping methodology for neutronic-thermal hydraulic nuclear reactor multiphysics, SMITHERS, has been developed. • Written in Python, SMITHERS takes a novel object-oriented approach for facilitating data transitions between solvers. This approach enables near-instant compatibility with existing MCNP/MONTEBURNS input decks. • It also allows for coupling with thermal-hydraulic solvers of various levels of fidelity. • Two BWR and PWR test problems are presented for verifying correct functionality of the SMITHERS code routines. - Abstract: A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. Additionally, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers. The mapping methodology was specifically developed to be flexible enough such that it could successfully integrate preexisting depletion solver case files with different thermal-hydraulic solvers. This approach allows the user to tailor the selection of a ...
The multi-physics, user-friendly gas-dynamics code Visual Tsunami 2.0
International Nuclear Information System (INIS)
Debonnel, C. S.; Trubov, L.; Zeballos, C. A.; Peterson, P. F.
2007-01-01
Since the early 1990's, the series of simulation code known as TSUNAMI has been the main tool employed to explore gas dynamics phenomena in thick-liquid protected inertial fusion target chambers. The applicability and user-friendliness of the code was recently extended through a set of MATLAB pre- and post-processing tools and graphical user interfaces [1]. Geometry, initial, and boundary conditions can be specified from within AutoCAD through a set of in-house AutoLISP graphical user interfaces. A novel MATLAB core was recently developed and tested, and is now routinely used with the user-friendly pre- and post-processors [2]. An overview of Visual Tsunami 2.0, the latest version of the code, is presented here. (authors)
PARALLEL IMPORT: REALITY FOR RUSSIA
Directory of Open Access Journals (Sweden)
Т. А. Сухопарова
2014-01-01
Full Text Available The problem of parallel import is an urgent question at present. Legalization of parallel import in Russia is expedient. This statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.
Parallel beam dynamics simulation of linear accelerators
International Nuclear Information System (INIS)
Qiang, Ji; Ryne, Robert D.
2002-01-01
In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies
The Galley Parallel File System
Nieuwejaar, Nils; Kotz, David
1996-01-01
Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
Parallelization of the FLAPW method
International Nuclear Information System (INIS)
Canning, A.; Mannstadt, W.; Freeman, A.J.
1999-01-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer
Parallelization of the FLAPW method
Canning, A.; Mannstadt, W.; Freeman, A. J.
2000-08-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
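The distribution strategy described in the two FLAPW records above, dividing the plane-wave components of each state among processors, can be illustrated with a small MPI sketch. This is not the FLAPW code; the array sizes, the toy coefficients, and the use of mpi4py are assumptions chosen to show the slice-and-reduce pattern.

```python
# Illustrative sketch of distributing plane-wave coefficients over ranks and
# recombining partial results with an allreduce (the same pattern used for
# overlaps and matrix elements in a distributed scheme).  Requires mpi4py;
# run e.g. `mpirun -n 4 python flapw_sketch.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_pw = 1_000_000                                        # toy number of plane-wave components
local = np.array_split(np.arange(n_pw), size)[rank]     # this rank's slice of components

# Toy coefficients for the local slice of one electronic state.
coeff = 1.0 / (1.0 + local.astype(float))

# Each rank computes a partial norm; the allreduce recombines the full dot product.
local_norm_sq = np.dot(coeff, coeff)
norm_sq = comm.allreduce(local_norm_sq, op=MPI.SUM)
if rank == 0:
    print("norm of the distributed state vector:", np.sqrt(norm_sq))
```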
Novel Parallel Numerical Methods for Radiation and Neutron Transport
International Nuclear Information System (INIS)
Brown, P N
2001-01-01
In many of the multiphysics simulations performed at LLNL, transport calculations can take up 30 to 50% of the total run time. If Monte Carlo methods are used, the percentage can be as high as 80%. Thus, a significant core competence in the formulation, software implementation, and solution of the numerical problems arising in transport modeling is essential to Laboratory and DOE research. In this project, we worked on developing scalable solution methods for the equations that model the transport of photons and neutrons through materials. Our goal was to reduce the transport solve time in these simulations by means of more advanced numerical methods and their parallel implementations. These methods must be scalable, that is, the time to solution must remain constant as the problem size grows and additional computer resources are used. For iterative methods, scalability requires that (1) the number of iterations to reach convergence is independent of problem size, and (2) that the computational cost grows linearly with problem size. We focused on deterministic approaches to transport, building on our earlier work in which we performed a new, detailed analysis of some existing transport methods and developed new approaches. The Boltzmann equation (the underlying equation to be solved) and various solution methods have been developed over many years. Consequently, many laboratory codes are based on these methods, which are in some cases decades old. For the transport of x-rays through partially ionized plasmas in local thermodynamic equilibrium, the transport equation is coupled to nonlinear diffusion equations for the electron and ion temperatures via the highly nonlinear Planck function. We investigated the suitability of traditional-solution approaches to transport on terascale architectures and also designed new scalable algorithms; in some cases, we investigated hybrid approaches that combined both
A Tool for Performance Modeling of Parallel Programs
Directory of Open Access Journals (Sweden)
J.A. González
2003-01-01
Full Text Available Current performance prediction analytical models try to characterize the performance behavior of actual machines through a small set of parameters. In practice, substantial deviations are observed. These differences are due to factors as memory hierarchies or network latency. A natural approach is to associate a different proportionality constant with each basic block, and analogously, to associate different latencies and bandwidths with each "communication block". Unfortunately, to use this approach implies that the evaluation of parameters must be done for each algorithm. This is a heavy task, implying experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms. Software support is required. We present a compiler that takes as source a C program annotated with complexity formulas and produces as output an instrumented code. The trace files obtained from the execution of the resulting code are analyzed with an interactive interpreter, giving us, among other information, the values of those parameters.
NonLinear Parallel OPtimization Tool, Phase I
National Aeronautics and Space Administration — CU Aerospace, in partnership with the University of Illinois, proposes the further development of a new sparse nonlinear programming architecture that exploits...
International Nuclear Information System (INIS)
Gomez Torres, Armando Miguel
2011-01-01
This doctoral thesis describes the methodological development of coupled neutron-kinetics/thermal-hydraulics codes for the design and safety analysis of reactor systems taking into account the feedback mechanisms on the fuel rod level, according to different approaches. A central part of this thesis is the development and validation of a high fidelity simulation tool, DYNSUB, which results from the ''two-way-coupling'' of DYN3D-SP3 and SUBCHANFLOW. It allows the determination of local safety parameters through a detailed description of the core behavior under stationary and transient conditions at fuel rod level.
Energy Technology Data Exchange (ETDEWEB)
Pesaran, A.; Kim, G.; Santhanagopalan, S.; Yang, C.
2015-04-21
Battery performance, cost, and safety must be further improved for larger market share of HEVs/PEVs and penetration into the grid. Significant investment is being made to develop new materials, fine tune existing ones, improve cell and pack designs, and enhance manufacturing processes to increase performance, reduce cost, and make batteries safer. Modeling, simulation, and design tools can play an important role by providing insight on how to address issues, reducing the number of build-test-break prototypes, and accelerating the development cycle of generating products.
Parallel Ada benchmarks for the SVMS
Collard, Philippe E.
1990-01-01
The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with the version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed that would measure Ada tasking efficiency on parallel architectures as well as determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools in the development of the SVMS architecture.
Ariane, Mostapha; Kassinos, Stavros; Velaga, Sitaram; Alexiadis, Alessio
2018-04-01
In this paper, the mass transfer coefficient (permeability) of boundary layers containing motile cilia is investigated by means of discrete multi-physics. The idea is to understand the main mechanisms of mass transport occurring in a ciliated-layer; one specific application being inhaled drugs in the respiratory epithelium. The effect of drug diffusivity, cilia beat frequency and cilia flexibility is studied. Our results show the existence of three mass transfer regimes. A low frequency regime, which we called shielding regime, where the presence of the cilia hinders mass transport; an intermediate frequency regime, which we have called diffusive regime, where diffusion is the controlling mechanism; and a high frequency regime, which we have called convective regime, where the degree of bending of the cilia seems to be the most important factor controlling mass transfer in the ciliated-layer. Since the flexibility of the cilia and the frequency of the beat changes with age and health conditions, the knowledge of these three regimes allows prediction of how mass transfer varies with these factors. Copyright © 2018 Elsevier Ltd. All rights reserved.
Multiphysics control of a two-fluid coaxial atomizer supported by electric-charge on the liquid jet
Machicoane, Nathanael; Osuna, Rodrigo; Aliseda, Alberto
2017-11-01
We present an experimental setup to investigate multiphysics control strategies on atomization of a laminar fluid stream by a coaxial turbulent jet. Spray control (i.e. driving the droplet size distribution and the spatio-temporal location of the droplets towards a desired objective) has many potential engineering applications, but requires a mechanistic understanding of the processes that control droplet formation and transport (primary and secondary instabilities, turbulent transport, hydrodynamic and electric forces on the droplets, ...). We characterize experimentally the break-up dynamics in a canonical coaxial atomizer, and the spray structure (droplet size, location, and velocity as a function of time) in a series of open loop conditions with harmonic forcing of the gas swirl ratio, liquid injection rate, the electric field strength at the nozzle and along the spray development region. The effects of these actuators are characterized for different gas Reynolds numbers ranging from 10^4 to 10^6. This open-loop characterization of the injector will be used to develop reduced order models for feedback control, as well as to validate assumptions underlying an adjoint-based computational control strategy. This work is part of a large-scale project funded by an ONR MURI to provide fundamental understanding of the mechanisms for feedback control of sprays.
Sun, S.; Kou, J.; Yu, B.
2011-01-01
The temporal discretization scheme is one important ingredient of an efficient simulator for two-phase flow in fractured porous media. The application of a single-scale temporal scheme is restricted by the rapid changes of the pressure and saturation in the fractured system with capillarity. In this paper, we propose a multi-scale time splitting strategy to simulate multi-scale multi-physics processes of two-phase flow in fractured porous media. We use the multi-scale time schemes for both the pressure and saturation equations; that is, a large time-step size is employed for the matrix domain, along with a small time-step size being applied in the fractures. The total time interval is partitioned into four temporal levels: the first level is used for the pressure in the entire domain, the second level matching rapid changes of the pressure in the fractures, the third level treating the response gap between the pressure and the saturation, and the fourth level applied for the saturation in the fractures. This method can reduce the computational cost arising from the implicit solution of the pressure equation. Numerical examples are provided to demonstrate the efficiency of the proposed method.
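A simplified two-level version of the splitting idea described above (the paper uses four temporal levels) is sketched below: the matrix is advanced with a large time step while the fracture unknowns are sub-stepped with a smaller one. The update functions are placeholders, not the paper's discretization.

```python
# Two-level time-splitting sketch under stated assumptions: placeholder updates
# stand in for the matrix pressure solve and the fracture saturation solve.
def advance_matrix(state, dt):
    state["p_matrix"] += dt * 0.1          # placeholder slow pressure update
    return state

def advance_fracture(state, dt):
    state["s_fracture"] += dt * 1.0        # placeholder fast saturation update
    return state

state = {"p_matrix": 0.0, "s_fracture": 0.0}
T, dt_matrix, n_sub = 1.0, 0.25, 10        # fractures use dt_matrix / n_sub
t = 0.0
while t < T - 1e-12:
    state = advance_matrix(state, dt_matrix)
    for _ in range(n_sub):                 # small steps resolve fast fracture dynamics
        state = advance_fracture(state, dt_matrix / n_sub)
    t += dt_matrix
print(state)
```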
Li, Yingkun; Chen, Xiong; Xu, Jinsheng; Zhou, Changsheng; Musa, Omer
2018-05-01
In this paper, numerical investigation of ignition transient in a dual pulse solid rocket motor has been conducted. An in-house code has been developed in order to solve multi-physics governing equations, including unsteady compressible flow, heat conduction and structural dynamic. The simplified numerical models for solid propellant ignition and combustion have been added. The conventional serial staggered algorithm is adopted to simulate the fluid structure interaction problems in a loosely-coupled manner. The accuracy of the coupling procedure is validated by the behavior of a cantilever panel subjected to a shock wave. Then, the detailed flow field development, flame propagation characteristics, pressure evolution in the combustion chamber, and the structural response of metal diaphragm are analyzed carefully. The burst-time and burst-pressure of the metal diaphragm are also obtained. The individual effects of the igniter's mass flow rate, metal diaphragm thickness and diameter on the ignition transient have been systemically compared. The numerical results show that the evolution of the flow field in the combustion chamber, the temperature distribution on the propellant surface and the pressure loading on the metal diaphragm surface present a strong three-dimensional behavior during the initial ignition stage. The rupture of metal diaphragm is not only related to the magnitude of pressure loading on the diaphragm surface, but also to the history of pressure loading. The metal diaphragm thickness and diameter have a significant effect on the burst-time and burst-pressure of metal diaphragm.
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
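The per-cycle rendezvous identified above as a bottleneck can be shown with a minimal MPI sketch: histories are tracked independently, but every cycle ends with a collective operation that rebuilds the global fission source and the k-eff estimate. The history model and numbers are toy assumptions, not the author's program.

```python
# Minimal sketch of the cycle-end synchronization in a parallel criticality
# calculation, assuming a toy fission-site model.  Requires mpi4py; run with
# e.g. `mpirun -n 8 python mc_cycles.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
rng = np.random.default_rng(rank)

histories_per_rank, cycles = 10_000, 5
k_eff = 1.0
for cycle in range(cycles):
    # Embarrassingly parallel part: each rank tracks its own histories.
    local_fission_sites = rng.poisson(lam=k_eff, size=histories_per_rank).sum()

    # Rendezvous: collect the global fission source and renormalize the population.
    total_sites = comm.allreduce(local_fission_sites, op=MPI.SUM)
    k_eff = total_sites / (histories_per_rank * size)
    if rank == 0:
        print(f"cycle {cycle}: k-eff estimate = {k_eff:.4f}")
```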
Parallel integer sorting with medium and fine-scale parallelism
Dagum, Leonardo
1993-01-01
Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
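A rough sequential illustration of the barrel-sort idea follows: keys are first binned into per-processor value ranges ("barrels") so each processor can sort its barrel independently, and concatenating the sorted barrels yields the result. Plain Python stands in for the message-passing distribution step; the binning rule is an assumption for illustration.

```python
# Sketch of range-partitioned ("barrel") sorting: distribute keys by value range,
# sort each barrel locally, then concatenate the barrels in order.
import random

def barrel_sort(keys, num_procs):
    lo, hi = min(keys), max(keys)
    width = (hi - lo) / num_procs or 1                  # avoid zero width if all keys equal
    barrels = [[] for _ in range(num_procs)]
    for k in keys:                                      # the "distribution" phase
        idx = min(int((k - lo) / width), num_procs - 1)
        barrels[idx].append(k)
    return [k for barrel in barrels for k in sorted(barrel)]   # independent local sorts

data = [random.randint(0, 10_000) for _ in range(1000)]
assert barrel_sort(data, num_procs=8) == sorted(data)
```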
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
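A hedged sketch of the template comparison described in this patent abstract: each node's checkpoint is cut into blocks, block checksums are compared against a previously stored template, and only the blocks that differ need to be transmitted and stored. Block size, hashing choice, and helper names are illustrative assumptions, not the patented implementation.

```python
# Delta checkpointing sketch: compare block checksums to a template and keep
# only the changed blocks.
import hashlib

BLOCK = 4096

def checksums(data: bytes):
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest() for i in range(0, len(data), BLOCK)]

def delta_blocks(current: bytes, template_sums):
    """Return (index, block) pairs for blocks whose checksum differs from the template."""
    delta = []
    for i, chk in enumerate(checksums(current)):
        if i >= len(template_sums) or chk != template_sums[i]:
            delta.append((i, current[i * BLOCK:(i + 1) * BLOCK]))
    return delta

template = b"A" * (8 * BLOCK)
checkpoint = b"A" * (3 * BLOCK) + b"B" * BLOCK + b"A" * (4 * BLOCK)   # one block changed
print("blocks to store:", [i for i, _ in delta_blocks(checkpoint, checksums(template))])
```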
Directory of Open Access Journals (Sweden)
Piotr Bała
2001-01-01
Full Text Available After at least a decade of parallel tool development, parallelization of scientific applications remains a significant undertaking. Typically parallelization is a specialized activity supported only partially by the programming tool set, with the programmer involved with parallel issues in addition to sequential ones. The details of concern range from algorithm design down to low-level data movement details. The aim of parallel programming tools is to automate the latter without sacrificing performance and portability, allowing the programmer to focus on algorithm specification and development. We present our use of two similar parallelization tools, Pfortran and Cray's Co-Array Fortran, in the parallelization of the GROMOS96 molecular dynamics module. Our parallelization started from the GROMOS96 distribution's shared-memory implementation of the replicated algorithm, but used little of that existing parallel structure. Consequently, our parallelization was close to starting with the sequential version. We found the intuitive extensions to Pfortran and Co-Array Fortran helpful in the rapid parallelization of the project. We present performance figures for both the Pfortran and Co-Array Fortran parallelizations showing linear speedup within the range expected by these parallelization methods.
A parallel solution for high resolution histological image analysis.
Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J
2012-10-01
This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several Gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitalization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge. The image processing that these slides are subject to is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, and they cover low and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested by using the following parallel computing architectures: distributed memory with massive parallel processors and two networks, INFINIBAND and Myrinet, composed of 17 and 1024 nodes respectively. The parallel framework proposed is flexible, high performance solution and it shows that the efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Information criteria for quantifying loss of reversibility in parallelized KMC
Energy Technology Data Exchange (ETDEWEB)
Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu
2017-01-01
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
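To make the reversibility metric above concrete, a simplified finite Markov-chain analogue can be used: with stationary distribution pi and transition matrix P, the entropy production rate is sum over i, j of pi_i P_ij log(P_ij / P_ji), and it vanishes exactly when detailed balance holds. This textbook analogue is an assumption for illustration, not the paper's a posteriori estimator for parallel KMC.

```python
# Entropy production rate of a finite Markov chain: zero for a reversible chain,
# positive for a driven (irreversible) one.  Transition matrices are toy examples.
import numpy as np

def entropy_production_rate(P):
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()                                   # stationary distribution
    rate = 0.0
    for i in range(len(P)):
        for j in range(len(P)):
            if i != j and P[i, j] > 0 and P[j, i] > 0:
                rate += pi[i] * P[i, j] * np.log(P[i, j] / P[j, i])
    return rate

reversible = np.array([[0.5, 0.5], [0.5, 0.5]])
driven = np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]])   # cyclic driving
print(entropy_production_rate(reversible))   # ~0: detailed balance holds
print(entropy_production_rate(driven))       # > 0: the chain is irreversible
```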
Parallel education: what is it?
Amos, Michelle Peta
2017-01-01
In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...
Balanced, parallel operation of flashlamps
International Nuclear Information System (INIS)
Carder, B.M.; Merritt, B.T.
1979-01-01
A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests
Mesh-based parallel code coupling interface
Energy Technology Data Exchange (ETDEWEB)
Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)
2001-04-01
MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code-owners with the facility to combine different simulation tools in one environment. Thereby new solutions for multidisciplinary problems will be created. This opens new application dimensions for existent simulation tools. This Book of Abstracts gives a short overview of ongoing activities in industry and research - all presented at the 2nd MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.)
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
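The constraint-aggregation difficulty mentioned above is typical of smooth max approximations. A Kreisselmeier-Steinhauser (KS) aggregate is a common way to collapse many path constraints g_i(x) <= 0 into one; a sharper parameter rho tracks max(g) more closely but worsens conditioning, which is the usual source of tight-tolerance convergence trouble. The sketch below is a generic illustration, not necessarily the authors' exact aggregation.

```python
# KS aggregation: a smooth overestimate of max(g) with the standard bound
# max(g) <= KS <= max(g) + ln(m)/rho for m constraints.
import numpy as np

def ks_aggregate(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    g_max = g.max()                       # shift for numerical stability
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

constraints = np.array([-0.30, -0.05, -0.01, -0.20])
for rho in (10.0, 50.0, 200.0):
    print(rho, ks_aggregate(constraints, rho))   # approaches max(g) = -0.01 as rho grows
```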
Energy Technology Data Exchange (ETDEWEB)
Aldemir, Tunc [The Ohio State Univ., Columbus, OH (United States); Denning, Richard [The Ohio State Univ., Columbus, OH (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States); Unwin, Stephen [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-01-23
Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.
International Nuclear Information System (INIS)
Aldemir, Tunc; Denning, Richard; Catalyurek, Umit; Unwin, Stephen
2015-01-01
Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.
A Parallel Saturation Algorithm on Shared Memory Architectures
Ezekiel, Jonathan; Siminiceanu
2007-01-01
Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.
Beam dynamics simulations using a parallel version of PARMILA
International Nuclear Information System (INIS)
Ryne, R.D.
1996-01-01
The computer code PARMILA has been the primary tool for the design of proton and ion linacs in the United States for nearly three decades. Previously it was sufficient to perform simulations with of order 10000 particles, but recently the need to perform high resolution halo studies for next-generation, high intensity linacs has made it necessary to perform simulations with of order 100 million particles. With the advent of massively parallel computers such simulations are now within reach. Parallel computers already make it possible, for example, to perform beam dynamics calculations with tens of millions of particles, requiring over 10 GByte of core memory, in just a few hours. Also, parallel computers are becoming easier to use thanks to the availability of mature, Fortran-like languages such as Connection Machine Fortran and High Performance Fortran. We will describe our experience developing a parallel version of PARMILA and the performance of the new code
Beam dynamics simulations using a parallel version of PARMILA
International Nuclear Information System (INIS)
Ryne, Robert
1996-01-01
The computer code PARMILA has been the primary tool for the design of proton and ion linacs in the United States for nearly three decades. Previously it was sufficient to perform simulations with of order 10000 particles, but recently the need to perform high resolution halo studies for next-generation, high intensity linacs has made it necessary to perform simulations with of order 100 million particles. With the advent of massively parallel computers such simulations are now within reach. Parallel computers already make it possible, for example, to perform beam dynamics calculations with tens of millions of particles, requiring over 10 GByte of core memory, in just a few hours. Also, parallel computers are becoming easier to use thanks to the availability of mature, Fortran-like languages such as Connection Machine Fortran and High Performance Fortran. We will describe our experience developing a parallel version of PARMILA and the performance of the new code. (author)
Workspace Analysis for Parallel Robot
Directory of Open Access Journals (Sweden)
Ying Sun
2013-05-01
Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps expanding. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the reverse solution of the kinematics and the limitation of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of increasing or decreasing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
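A rough sketch of the reverse-solution workspace test described above, reduced to a planar parallel mechanism for brevity: a platform position belongs to the position workspace if every leg length returned by the inverse kinematics stays within its stroke limits. The geometry, limits, and grid scan are invented illustrative values, not the paper's mechanism.

```python
# Boundary search by brute-force feasibility scan: the edge of the 'inside'
# region approximates the workspace boundary of a toy planar parallel robot.
import numpy as np

base = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])        # fixed-base joints
platform_offsets = np.array([[-0.1, -0.1], [0.1, -0.1], [0.0, 0.1]])
L_MIN, L_MAX = 0.3, 0.9                                       # actuator stroke limits

def inside_workspace(p):
    """Inverse solution: leg i length = |base_i - (p + offset_i)|; all must be in range."""
    legs = np.linalg.norm(base - (p + platform_offsets), axis=1)
    return np.all((legs >= L_MIN) & (legs <= L_MAX))

xs = ys = np.linspace(-0.5, 1.5, 81)
count = sum(inside_workspace(np.array([x, y])) for x in xs for y in ys)
print("grid points inside the position workspace:", count)
```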
"Feeling" Series and Parallel Resistances.
Morse, Robert A.
1993-01-01
Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
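A quick worked example to accompany the classroom activity above: resistances add directly in series, while in parallel the reciprocals add. The straw values are illustrative.

```python
# Series vs. parallel combination of resistances.
def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# Two 100-ohm "straws" end to end vs. side by side:
print(series(100, 100))    # 200 ohms - twice as hard to "blow through"
print(parallel(100, 100))  # 50.0 ohms - twice as easy
```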
Parallel encoders for pixel detectors
International Nuclear Information System (INIS)
Nikityuk, N.M.
1991-01-01
A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs
Massively Parallel Finite Element Programming
Heister, Timo
2010-01-01
Today\\'s large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
Event monitoring of parallel computations
Directory of Open Access Journals (Sweden)
Gruzlikov Alexander M.
2015-06-01
Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences
Massively Parallel Finite Element Programming
Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang
2010-01-01
Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
The STAPL Parallel Graph Library
Harshvardhan,
2013-01-01
This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
Highly parallel machines and future of scientific computing
International Nuclear Information System (INIS)
Singh, G.S.
1992-01-01
The computing requirements of large-scale scientific computing have always been ahead of what state-of-the-art hardware could supply in the form of supercomputers of the day. And for any single-processor system, the limit to the increase in computing power was realized a few years back. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The author concludes the paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs
Declarative Parallel Programming in Spreadsheet End-User Development
DEFF Research Database (Denmark)
Biermann, Florian
2016-01-01
Spreadsheets are first-order functional languages and are widely used in research and industry as a tool to conveniently perform all kinds of computations. Because cells on a spreadsheet are immutable, there are possibilities for implicit parallelization of spreadsheet computations. In this literature study, we provide an overview of the publications on spreadsheet end-user programming and declarative array programming to inform further research on parallel programming in spreadsheets. Our results show that there is a clear overlap between spreadsheet programming and array programming and we can directly apply results from functional array programming to a spreadsheet model of computations.
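The implicit parallelism noted above can be sketched directly: because cells are immutable, cells that do not depend on one another can be evaluated concurrently. The toy cell model, dependency levels, and thread pool below are illustrative assumptions, not a real spreadsheet engine.

```python
# Level-by-level parallel evaluation of a tiny "spreadsheet" dependency graph.
from concurrent.futures import ThreadPoolExecutor

# cell -> (dependencies, formula taking the value dict)
sheet = {
    "A1": ((),           lambda v: 2),
    "A2": ((),           lambda v: 3),
    "B1": (("A1", "A2"), lambda v: v["A1"] + v["A2"]),
    "B2": (("A1",),      lambda v: v["A1"] * 10),
    "C1": (("B1", "B2"), lambda v: v["B1"] - v["B2"]),
}

values, remaining = {}, dict(sheet)
with ThreadPoolExecutor() as pool:
    while remaining:
        # All cells whose dependencies are already computed form one parallel level.
        ready = [c for c, (deps, _) in remaining.items() if all(d in values for d in deps)]
        results = pool.map(lambda c: (c, remaining[c][1](values)), ready)
        values.update(dict(results))
        for c in ready:
            del remaining[c]
print(values)   # {'A1': 2, 'A2': 3, 'B1': 5, 'B2': 20, 'C1': -15}
```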
Exploiting Symmetry on Parallel Architectures.
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
Parallel algorithms for continuum dynamics
International Nuclear Information System (INIS)
Hicks, D.L.; Liebrock, L.M.
1987-01-01
Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors
COMSOL-PHREEQC: a tool for high performance numerical simulation of reactive transport phenomena
International Nuclear Information System (INIS)
Nardi, Albert; Vries, Luis Manuel de; Trinchero, Paolo; Idiart, Andres; Molinero, Jorge
2012-01-01
Document available in extended abstract form only. Comsol Multiphysics (COMSOL, from now on) is a powerful Finite Element software environment for the modelling and simulation of a large number of physics-based systems. The user can apply variables, expressions or numbers directly to solid and fluid domains, boundaries, edges and points, independently of the computational mesh. COMSOL then internally compiles a set of equations representing the entire model. The availability of extremely powerful pre- and post-processors makes COMSOL a numerical platform well known and extensively used in many branches of science and engineering. On the other hand, PHREEQC is a freely available computer program for simulating chemical reactions and transport processes in aqueous systems. It is perhaps the most widely used geochemical code in the scientific community and is openly distributed. The program is based on equilibrium chemistry of aqueous solutions interacting with minerals, gases, solid solutions, exchangers, and sorption surfaces, but also includes the capability to model kinetic reactions with rate equations that are user-specified in a very flexible way by means of Basic statements directly written in the input file. Here we present COMSOL-PHREEQC, a software interface able to communicate and couple these two powerful simulators by means of a Java interface. The methodology is based on the Sequential Non-Iterative Approach (SNIA), where PHREEQC is compiled as a dynamic subroutine (iPhreeqc) that is called by the interface to solve the geochemical system at every element of the finite element mesh of COMSOL. The numerical tool has been extensively verified by comparison with computed results of 1D, 2D and 3D benchmark examples solved with other reactive transport simulators. COMSOL-PHREEQC is parallelized so that CPU time can be highly optimized in multi-core processors or clusters. Then, fully 3D detailed reactive transport problems can be readily simulated by means of
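The operator-splitting structure described above is what makes the chemistry step trivially parallel across mesh elements. The following is a minimal, hedged sketch of a Sequential Non-Iterative Approach loop; solve_chemistry() is a stand-in for the per-element geochemical call and is not the real IPhreeqc API, and the transport step is a toy finite-difference substitute for the COMSOL solve.

    # Minimal SNIA sketch: one transport step, then an independent chemistry
    # call per mesh element (parallelized here with a process pool).
    import numpy as np
    from multiprocessing import Pool

    nx, dt, D, dx = 100, 1.0, 1e-9, 0.01

    def transport_step(c):
        """Explicit finite-difference diffusion step (stand-in for the transport solve)."""
        cn = c.copy()
        cn[1:-1] += dt * D / dx ** 2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        return cn

    def solve_chemistry(ci):
        """Stand-in for the per-element geochemistry call (NOT the IPhreeqc API)."""
        return max(ci - 1e-6, 0.0)       # e.g. a simple precipitation sink

    if __name__ == "__main__":
        c = np.full(nx, 1e-3)            # initial concentration field
        with Pool() as pool:
            for step in range(10):
                c = transport_step(c)                        # operator 1: transport
                c = np.array(pool.map(solve_chemistry, c))   # operator 2: chemistry, element-wise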
Parallel Libraries to support High-Level Programming
DEFF Research Database (Denmark)
Larsen, Morten Nørgaard
and the Microsoft .NET framework. Normally, one would not directly think of the .NET framework when talking about scientific applications, but Microsoft has, in the last couple of versions of .NET, introduced a number of tools for writing parallel and high-performance code. The first section examines how programmers can...
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-08-12
Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
Parallel embedded systems: where real-time and low-power meet
DEFF Research Database (Denmark)
Karakehayov, Zdravko; Guo, Yu
2008-01-01
This paper introduces a combination of models and proofs for optimal power management via Dynamic Frequency Scaling and Dynamic Voltage Scaling. The approach is suitable for systems on a chip or microcontrollers where processors run in parallel with embedded peripherals. We have developed a software tool, called CASTLE, to provide computer assistance in the design process of energy-aware embedded systems. The tool considers single processor and parallel architectures. An example shows an energy reduction of 23% when the tool allocates two microcontrollers for parallel execution.
Parallel Implicit Algorithms for CFD
Keyes, David E.
1998-01-01
The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
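As a hedged illustration of the "Newton" and "Krylov" ingredients only, the sketch below uses SciPy's matrix-free Newton-Krylov solver on a small 1-D nonlinear model problem; the Jacobian is never formed explicitly, mirroring the matrix-vector-product-only access described above. The Schwarz domain-decomposition preconditioner and the parallel layer are omitted, and this is not the project's PETSc implementation.

    # Hedged sketch: matrix-free Newton-Krylov on a 1-D Bratu-like problem.
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 50

    def residual(u):
        """Nonlinear residual with zero Dirichlet boundary values."""
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1]
        r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] + (1.0 / n) ** 2 * np.exp(u[1:-1])
        return r

    u = newton_krylov(residual, np.zeros(n), method="gmres")
    print("max |residual| =", np.abs(residual(u)).max())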
Second derivative parallel block backward differentiation type ...
African Journals Online (AJOL)
Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...
A Parallel Approach to Fractal Image Compression
Lubomir Dedera
2004-01-01
The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.
Massmann, J.; Nagel, T.; Bilke, L.; Böttcher, N.; Heusermann, S.; Fischer, T.; Kumar, V.; Schäfers, A.; Shao, H.; Vogel, P.; Wang, W.; Watanabe, N.; Ziefle, G.; Kolditz, O.
2016-12-01
As part of the German site selection process for a high-level nuclear waste repository, different repository concepts in the geological candidate formations rock salt, clay stone and crystalline rock are being discussed. An open assessment of these concepts using numerical simulations requires physical models capturing the individual particularities of each rock type and associated geotechnical barrier concept to a comparable level of sophistication. In a joint work group of the Helmholtz Centre for Environmental Research (UFZ) and the German Federal Institute for Geosciences and Natural Resources (BGR), scientists of the UFZ are developing and implementing multiphysical process models while BGR scientists apply them to large scale analyses. The advances in simulation methods for waste repositories are incorporated into the open-source code OpenGeoSys. Here, recent application-driven progress in this context is highlighted. A robust implementation of visco-plasticity with temperature-dependent properties into a framework for the thermo-mechanical analysis of rock salt will be shown. The model enables the simulation of heat transport along with its consequences on the elastic response as well as on primary and secondary creep or the occurrence of dilatancy in the repository near field. Transverse isotropy, non-isothermal hydraulic processes and their coupling to mechanical stresses are taken into account for the analysis of repositories in clay stone. These processes are also considered in the near field analyses of engineered barrier systems, including the swelling/shrinkage of the bentonite material. The temperature-dependent saturation evolution around the heat-emitting waste container is described by different multiphase flow formulations. For all mentioned applications, we illustrate the workflow from model development and implementation, over verification and validation, to repository-scale application simulations using methods of high performance computing.
Directory of Open Access Journals (Sweden)
Yanjuan Wang
2017-10-01
In this paper, the endothermic methanol decomposition reaction is used to obtain syngas by transforming middle and low temperature solar energy into chemical energy. A two-dimensional multiphysics coupling model of a middle and low temperature (150-300 °C) solar receiver/reactor was developed, which couples the momentum equation in the porous catalyst bed, the governing mass conservation with chemical reaction, and energy conservation incorporating conduction/convection/radiation heat transfer. The complex thermochemical conversion process of the middle and low temperature solar receiver/reactor (MLTSRR) system was analyzed. The numerical finite element method (FEM) model was validated by comparing it with the experimental data and a good agreement was obtained, revealing that the numerical FEM model is reliable. The characteristics of chemical reaction, coupled heat transfer, the components of reaction products, and the temperature fields in the receiver/reactor were also revealed and discussed. The effects of the annulus vacuum space and the glass tube on the performance of the solar receiver/reactor were further studied. It was revealed that when the direct normal irradiation increases from 200 W/m2 to 800 W/m2, the theoretical efficiency of solar energy transformed into chemical energy can reach 0.14-0.75. When the methanol feeding rate is 13 kg/h and the solar flux increases from 500 W/m2 to 1000 W/m2, methanol conversion can fall by 6.8-8.9% with air in the annulus, and methanol conversion can decrease by 21.8-28.9% when the glass is removed from the receiver/reactor.
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP
International Nuclear Information System (INIS)
Downar, Thomas; Seker, Volkan
2013-01-01
Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local 'hot' spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP
Energy Technology Data Exchange (ETDEWEB)
Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Seker, Volkan [Univ. of Michigan, Ann Arbor, MI (United States)
2013-04-30
Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.
Rubus: A compiler for seamless and extensible parallelism.
Directory of Open Access Journals (Sweden)
Muhammad Adnan
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, whereas for a matrix multiplication benchmark the average execution speedup of 84
Parallel fabrication of macroporous scaffolds.
Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal
2018-07-01
Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.
Parallel plasma fluid turbulence calculations
International Nuclear Information System (INIS)
Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.
1994-01-01
The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated
Evaluating parallel optimization on transputers
Directory of Open Access Journals (Sweden)
A.G. Chalmers
2003-12-01
The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
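One component that is naturally parallel in such quasi-Newton methods is the set of independent function evaluations needed for a finite-difference gradient. The hedged sketch below (a process pool standing in for transputers, BFGS standing in for Davidon-Fletcher-Powell, and a toy objective) shows that pattern; it is not the paper's implementation.

    # Sketch: parallel finite-difference gradient feeding a quasi-Newton optimiser.
    import numpy as np
    from multiprocessing import Pool
    from scipy.optimize import minimize

    def f(x):
        return float(np.sum((x - np.arange(len(x))) ** 2) + np.sum(np.cos(x)))

    def _shifted_eval(args):
        x, i, h = args
        xp = x.copy()
        xp[i] += h
        return f(xp)

    def parallel_grad(x, h=1e-6):
        """Forward-difference gradient; the n shifted evaluations run in parallel."""
        with Pool() as pool:
            fp = pool.map(_shifted_eval, [(x, i, h) for i in range(len(x))])
        return (np.array(fp) - f(x)) / h

    if __name__ == "__main__":
        res = minimize(f, np.zeros(8), jac=parallel_grad, method="BFGS")
        print(res.x)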
Energy Technology Data Exchange (ETDEWEB)
Liu, R. [Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong (China); Zhou, W., E-mail: wenzzhou@cityu.edu.hk [Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong (China); Shen, P. [Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong (China); Prudil, A. [Fuel and Fuel Channel Safety Branch, Canadian Nuclear Laboratories, Chalk River, Ontario (Canada); Chan, P.K. [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario (Canada)
2015-12-15
Highlights: • LWR fuel performance modeling capability developed. • Fully coupled multiphysics studies for enhanced thermal conductivity UO{sub 2}–BeO fuel. • UO{sub 2}–BeO fuel decreases fuel temperature and lessens thermal stresses. • UO{sub 2}–BeO fuel facilitates a reduction in PCMI. • Reactor safety can be improved for UO{sub 2}–BeO fuel. - Abstract: Commercial light water reactor fuel UO{sub 2} has a low thermal conductivity that leads to the development of a large temperature gradient across the fuel pellet, limiting the reactor operational performance due to effects that include thermal stresses causing pellet cladding interaction and the release of fission product gases. This study presents the development of a modeling and simulation capability for enhanced thermal conductivity UO{sub 2}–BeO fuel behavior in a light water reactor, using self-defined, fully coupled multiple-physics models based on the framework of COMSOL Multiphysics. Almost all the related physical models are considered, including heat generation and conduction, species diffusion, thermomechanics (thermal expansion, elastic strain, densification, and fission product swelling strain), grain growth, fission gas production and release, gap heat transfer, mechanical contact, gap/plenum pressure with plenum volume, cladding thermal and irradiation creep and oxidation. All the phenomenological models and material properties are implemented into the COMSOL Multiphysics finite-element platform with a 2D axisymmetric geometry of a fuel pellet and cladding. From our simulation results, UO{sub 2}–BeO enhanced thermal conductivity nuclear fuel would decrease fuel temperatures and facilitate a reduction in pellet cladding interaction by lessening the thermal stresses that result in fuel cracking, relocation, and swelling, so that the safety of the reactor would be improved.
Parallel artificial liquid membrane extraction
DEFF Research Database (Denmark)
Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine
2013-01-01
This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.
Parallel algorithms for mapping pipelined and parallel computations
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
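As a hedged illustration of the mapping problem above, the sketch below assigns m pipelined modules to n processors as contiguous groups so that the bottleneck (maximum per-processor load) is minimised, using binary search on the bottleneck plus a greedy feasibility check. This is a standard technique chosen for brevity, not necessarily the paper's O(nm log m) algorithm.

    # Contiguous mapping of pipelined modules onto processors, minimising the
    # bottleneck load (binary search on the answer + greedy feasibility check).
    def min_bottleneck_partition(loads, n_procs):
        def feasible(cap):
            groups, current = 1, 0
            for w in loads:
                if w > cap:
                    return False
                if current + w > cap:
                    groups += 1
                    current = 0
                current += w
            return groups <= n_procs

        lo, hi = max(loads), sum(loads)
        while lo < hi:
            mid = (lo + hi) // 2
            if feasible(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    module_loads = [4, 2, 7, 1, 5, 3, 6]              # hypothetical per-module work
    print(min_bottleneck_partition(module_loads, 3))  # -> 13, e.g. [4,2,7] | [1,5,3] | [6]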
International Nuclear Information System (INIS)
Miyamoto, Akira; Sato, Etsuko; Sato, Ryo; Inaba, Kenji; Hatakeyama, Nozomu
2014-01-01
In collaboration with experimental experts we have reported in the present conference (Hatakeyama, N. et al., “Experiment-integrated multi-scale, multi-physics computational chemistry simulation applied to corrosion behaviour of BWR structural materials”) the results of multi-scale multi-physics computational chemistry simulations applied to the corrosion behaviour of BWR structural materials. At the macro-scale, a macroscopic simulator of the anode polarization curve was developed to solve the spatially one-dimensional electrochemical equations on the material surface at the continuum level in order to understand the corrosion behaviour of the typical BWR structural material SUS304. The experimental anode polarization behaviours of each pure metal were reproduced by fitting all the rates of electrochemical reactions, and then the anode polarization curve of SUS304 was calculated by using the same parameters and found to reproduce the experimental behaviour successfully. At the meso-scale, a kinetic Monte Carlo (KMC) simulator was applied to an actual-time simulation of the morphological corrosion behaviour under the influence of an applied voltage. At the micro-scale, an ultra-accelerated quantum chemical molecular dynamics (UA-QCMD) code was applied to various metallic oxide surfaces of Fe2O3, Fe3O4 and Cr2O3, modelled together with water molecules and dissolved metallic ions on the surfaces, and the dissolution and segregation behaviours were then successfully simulated dynamically by using UA-QCMD. In this paper we describe details of the multi-scale, multi-physics computational chemistry method, especially the UA-QCMD method. This method is approximately 10,000,000 times faster than conventional first-principles molecular dynamics methods based on density-functional theory (DFT), and the accuracy was also validated for various metals and metal oxides compared with DFT results. To assure multi-scale multi-physics computational chemistry simulation based on the UA-QCMD method for
DEFF Research Database (Denmark)
Khan, Mohammad Rezwan; Kær, Søren Knudsen
2016-01-01
The research is focused on the development of a three-dimensional cell-level multiphysics battery thermal model. The primary aim is to represent the cooling mechanism inside the unit cell battery pack. It is accomplished through the coupling of heat transfer and computational fluid dynamics (CFD) physics. A lumped value of heat generation (HG) inside the battery cell is used. It stems from an isothermal calorimeter experiment. HG depends on the current rate and the corresponding operating temperature. It is demonstrated that the developed model provides a deeper understanding of the thermal spatio-temporal behavior of a Li-ion battery in different operating conditions.
DEFF Research Database (Denmark)
Andersen, Søren Bøgh; Santos, Ilmar F.; Fuerst, Axel
2015-01-01
This paper presents an improved completely interconnected procedure for estimating the losses, cooling flows, fluid characteristics and temperature distribution in a gearless mill drive using real life data. The presented model is part of a larger project building a multi-physics model combining ... iteratively according to the heat flux transferred to the fluid, is modeled as a lumped model with two nodes interconnected by 11 channels and one pump. The flow model is based on Bernoulli's energy equation and solved by the Newton-Raphson method. All the results from the three physical areas have been verified ...
Cellular automata a parallel model
Mazoyer, J
1999-01-01
Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.
High temporal resolution functional MRI using parallel echo volumar imaging
International Nuclear Information System (INIS)
Rabrait, C.; Ciuciu, P.; Ribes, A.; Poupon, C.; Dehaine-Lambertz, G.; LeBihan, D.; Lethimonnier, F.; Le Roux, P.
2008-01-01
Purpose: To combine parallel imaging with 3D single-shot acquisition (echo volumar imaging, EVI) in order to acquire high temporal resolution volumar functional MRI (fMRI) data. Materials and Methods: An improved EVI sequence was associated with parallel acquisition and field of view reduction in order to acquire a large brain volume in 200 msec. Temporal stability and functional sensitivity were increased through optimization of all imaging parameters and Tikhonov regularization of parallel reconstruction. Two human volunteers were scanned with parallel EVI in a 1.5 T whole-body MR system, while submitted to a slow event-related auditory paradigm. Results: Thanks to parallel acquisition, the EVI volumes display a low level of geometric distortions and signal losses. After removal of low-frequency drifts and physiological artifacts, activations were detected in the temporal lobes of both volunteers and voxel-wise hemodynamic response functions (HRF) could be computed. On these HRFs, different habituation behaviors in response to sentence repetition could be identified. Conclusion: This work demonstrates the feasibility of high temporal resolution 3D fMRI with parallel EVI. Combined with advanced estimation tools, this acquisition method should prove useful to measure neural activity timing differences or study the nonlinearities and non-stationarities of the BOLD response. (authors)
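As a hedged illustration of the regularized parallel-reconstruction step mentioned above, the sketch below solves the Tikhonov-regularized least-squares unfolding used in SENSE-type reconstruction, (S^H S + λI) x = S^H y, for one group of aliased voxels with synthetic coil sensitivities; it is a generic textbook formulation, not the authors' reconstruction code.

    # Tikhonov-regularised SENSE-type unfolding for one aliased voxel group.
    import numpy as np

    def sense_unfold(S, y, lam=0.01):
        """S: (n_coils, R) coil sensitivities, y: (n_coils,) folded coil signals."""
        A = S.conj().T @ S + lam * np.eye(S.shape[1])
        return np.linalg.solve(A, S.conj().T @ y)

    rng = np.random.default_rng(0)
    n_coils, R = 8, 2                          # acceleration factor R = 2
    S = rng.standard_normal((n_coils, R)) + 1j * rng.standard_normal((n_coils, R))
    x_true = np.array([1.0 + 0.5j, 0.3 - 0.2j])
    y = S @ x_true + 0.01 * rng.standard_normal(n_coils)
    print(sense_unfold(S, y))                  # close to x_true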
Parallel adaptation of a vectorised quantumchemical program system
International Nuclear Information System (INIS)
Van Corler, L.C.H.; Van Lenthe, J.H.
1987-01-01
Supercomputers, like the CRAY 1 or the Cyber 205, have had, and still have, a marked influence on Quantum Chemistry. Vectorization has led to a considerable increase in the performance of Quantum Chemistry programs. However, clock-cycle times more than a factor of 10 smaller than those of the present supercomputers are not to be expected. Therefore future supercomputers will have to depend on parallel structures. Recently, the first examples of such supercomputers have been installed. To be prepared for this new generation of (parallel) supercomputers one should consider the concepts one wants to use and the kind of problems one will encounter during implementation of existing vectorized programs on those parallel systems. The authors implemented four important parts of a large quantumchemical program system (ATMOL), i.e. integrals, SCF, 4-index and Direct-CI, in the parallel environment at ECSEC (Rome, Italy). This system offers simulated parallelism on the host computer (IBM 4381) and real parallelism on at most 10 attached processors (FPS-164). Quantumchemical programs usually handle large amounts of data and very large, often sparse matrices. The transfer of that many data can cause problems concerning communication and overhead, in view of which shared memory and shared disks must be considered. The strategy and the tools that were used to parallelise the programs are shown. Also, some examples are presented to illustrate effectiveness and performance of the system in Rome for these types of calculations
A solution for automatic parallelization of sequential assembly code
Directory of Open Access Journals (Sweden)
Kovačević Đorđe
2013-01-01
Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software solution for the parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer which reads sequential assembly code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing. After that, static single assignment is performed. Based on the data flow graph, the parallelization algorithm distributes instructions across different cores. Once the sequential code is parallelized by the parallelization algorithm, registers are allocated with the linear allocation algorithm, and the final result is assembler code distributed across the cores. In the paper we evaluate the speedup of the matrix multiplication example, which was processed by the assembly-code parallelizer. The result is almost linear speedup of code execution, which increases with the number of cores. The speedup on two cores is 1.99, while on 16 cores the speedup is 13.88.
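To illustrate the core idea described above (building a data-dependence graph over instructions and then distributing independent instructions across cores), the following toy Python sketch list-schedules three-address instructions onto a fixed number of cores. It is a simplified model for illustration only, not the paper's MIPS back end or register allocator.

    # Toy dependence-graph scheduling of three-address instructions onto cores.
    # Each instruction is a (dest, src1, src2) triple.
    instrs = [("t1", "a", "b"), ("t2", "c", "d"), ("t3", "t1", "t2"),
              ("t4", "a", "c"), ("t5", "t3", "t4")]

    def schedule(instrs, n_cores=2):
        producers = {dst: i for i, (dst, _, _) in enumerate(instrs)}
        deps = {i: {producers[s] for s in (s1, s2) if s in producers}
                for i, (_, s1, s2) in enumerate(instrs)}
        done, cycles = set(), []
        while len(done) < len(instrs):
            ready = [i for i in range(len(instrs))
                     if i not in done and deps[i] <= done]
            issue = ready[:n_cores]              # fill the available cores
            cycles.append(issue)
            done |= set(issue)
        return cycles

    for cycle, issued in enumerate(schedule(instrs)):
        print(f"cycle {cycle}: instructions {issued}")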
Parallelizing AT with MatlabMPI
International Nuclear Information System (INIS)
2011-01-01
The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, which set up the necessary pre-requisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate incredibly efficient speed increments per processor in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well-understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
Time-resolved terahertz spectroscopy in a parallel-plate waveguide
DEFF Research Database (Denmark)
Cooke, David; Jepsen, Peter Uhd
2009-01-01
The parallel plate waveguide (PPWG), formed by two conducting parallel plates separated by a distance on the order of the wavelength of the propagating light, has shown itself to be a near ideal terahertz interconnect exhibiting low loss and dispersionless propagation.[1] It is also a useful tool...
Parallel Harmony Search Based Distributed Energy Resource Optimization
Energy Technology Data Exchange (ETDEWEB)
Ceylan, Oguzhan [ORNL; Liu, Guodong [ORNL; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)
2015-01-01
This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile during a day as photovoltaics (PVs) output or electrical vehicles (EVs) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.
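The sketch below shows the shape of a basic harmony-search loop with candidate harmonies evaluated in parallel by a process pool; the objective function is a generic stand-in, not the paper's three-phase unbalanced feeder model, and the parameter values are arbitrary.

    # Hedged sketch: basic harmony search with parallel objective evaluation.
    import numpy as np
    from multiprocessing import Pool

    def objective(x):
        return float(np.sum((x - 0.5) ** 2))          # stand-in "voltage deviation"

    def harmony_search(dim=6, hms=10, hmcr=0.9, par=0.3, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        memory = rng.uniform(0.0, 1.0, size=(hms, dim))
        with Pool() as pool:
            scores = np.array(pool.map(objective, memory))
            for _ in range(iters):
                batch = []
                for _ in range(4):                     # improvise a small batch per iteration
                    new = rng.uniform(0.0, 1.0, dim)
                    use_mem = rng.random(dim) < hmcr   # harmony memory consideration
                    new[use_mem] = memory[rng.integers(hms), use_mem]
                    pitch = rng.random(dim) < par      # pitch adjustment
                    new[pitch] += rng.normal(0.0, 0.05, pitch.sum())
                    batch.append(np.clip(new, 0.0, 1.0))
                for cand, s in zip(batch, pool.map(objective, batch)):  # parallel evaluation
                    worst = int(np.argmax(scores))
                    if s < scores[worst]:
                        memory[worst], scores[worst] = cand, s
        best = int(np.argmin(scores))
        return memory[best], scores[best]

    if __name__ == "__main__":
        print(harmony_search())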
War and peace: morphemes and full forms in a noninteractive activation parallel dual-route model.
Baayen, H; Schreuder, R
This article introduces a computational tool for modeling the process of morphological segmentation in visual and auditory word recognition in the framework of a parallel dual-route model. Copyright 1999 Academic Press.
Next Generation Parallelization Systems for Processing and Control of PDS Image Node Assets
Verma, R.
2017-06-01
We present next-generation parallelization tools to help Planetary Data System (PDS) Imaging Node (IMG) better monitor, process, and control changes to nearly 650 million file assets and over a dozen machines on which they are referenced or stored.
Parallel Sparse Matrix - Vector Product
DEFF Research Database (Denmark)
Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd
This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
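As a hedged illustration of the row-block decomposition behind such a routine, the sketch below splits a CSR matrix into blocks of rows and computes each block's product concurrently with a thread pool; this is a Python/SciPy stand-in for the report's MPI and MPI-OpenMP implementations, not the report's code.

    # Row-block-parallel sparse matrix-vector product (SciPy CSR + thread pool).
    import numpy as np
    import scipy.sparse as sp
    from concurrent.futures import ThreadPoolExecutor

    n, density, n_workers = 10000, 1e-3, 4
    A = sp.random(n, n, density=density, format="csr", random_state=0)
    x = np.ones(n)

    bounds = np.linspace(0, n, n_workers + 1, dtype=int)

    def block_spmv(k):
        lo, hi = bounds[k], bounds[k + 1]
        return A[lo:hi, :] @ x            # each worker owns a contiguous block of rows

    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        y = np.concatenate(list(ex.map(block_spmv, range(n_workers))))

    assert np.allclose(y, A @ x)          # same result as the serial product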
[Falsified medicines in parallel trade].
Muckenfuß, Heide
2017-11-01
The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. Europe-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.
The parallel adult education system
DEFF Research Database (Denmark)
Wahlgren, Bjarne
2015-01-01
for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...
Where are the parallel algorithms?
Voigt, R. G.
1985-01-01
Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
Parallel imaging with phase scrambling.
Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel
2015-04-01
Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x^2 - y^2 shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
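The effect that makes this work, quadratic image-space phase spreading the object's energy across k-space, can be illustrated with the toy 1-D NumPy sketch below (arbitrary object and phase strength; this is not the published reconstruction, which additionally windows the convolved chirp to form the low-resolution calibration image).

    # Toy 1-D illustration: quadratic phase spreads energy across k-space.
    import numpy as np

    n = 256
    x = np.arange(n) - n // 2
    obj = (np.abs(x) < 40).astype(complex)           # toy 1-D object
    alpha = 2e-3                                     # quadratic-phase strength (arbitrary)
    scrambled = obj * np.exp(1j * alpha * x ** 2)    # phase imparted, e.g. by a shim offset

    k_plain = np.fft.fftshift(np.fft.fft(obj))
    k_scram = np.fft.fftshift(np.fft.fft(scrambled))
    outer = np.abs(x) > n // 8                       # outside the central quarter of k-space
    for name, k in [("plain", k_plain), ("scrambled", k_scram)]:
        frac = np.sum(np.abs(k[outer]) ** 2) / np.sum(np.abs(k) ** 2)
        print(f"{name}: fraction of energy outside the centre = {frac:.3f}")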
Parallel plate transmission line transformer
Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.
2011-01-01
A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the
Matpar: Parallel Extensions for MATLAB
Springer, P. L.
1998-01-01
Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.
Massively parallel quantum computer simulator
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray
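The kernel such simulators distribute over many processors is the application of a gate to a state vector of 2^n amplitudes. The minimal serial NumPy sketch below shows that kernel for a single-qubit gate; it is an illustrative toy, not the authors' parallel code or its data layout.

    # Minimal state-vector kernel: apply a 2x2 unitary to one qubit of an n-qubit state.
    import numpy as np

    def apply_1q_gate(state, gate, target, n_qubits):
        """Apply a 2x2 unitary to qubit `target` of an n-qubit state vector."""
        psi = state.reshape([2] * n_qubits)
        psi = np.moveaxis(psi, target, 0)            # bring the target axis to the front
        psi = np.tensordot(gate, psi, axes=([1], [0]))
        psi = np.moveaxis(psi, 0, target)
        return psi.reshape(-1)

    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                   # |000>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    for q in range(n):
        state = apply_1q_gate(state, H, q, n)        # uniform superposition
    print(np.allclose(np.abs(state) ** 2, 1.0 / 2 ** n))   # True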
International Nuclear Information System (INIS)
Bean, J.E.; Sanchez, M.; Arguello, J.G.
2012-01-01
Document available in extended abstract form only. Because, until recently, U.S. efforts had been focused on the volcanic tuff site at Yucca Mountain, radioactive waste disposal in U.S. clay/shale formations has not been considered for many years. However, advances in multi-physics computational modeling and research into clay mineralogy continue to improve the scientific basis for assessing nuclear waste repository performance in such formations. Disposal of high-level radioactive waste (HLW) in suitable clay/shale formations is attractive because the material is essentially impermeable and self-sealing, conditions are chemically reducing, and sorption tends to prevent radionuclide transport. Vertically and laterally extensive shale and clay formations exist in multiple locations in the contiguous 48 states. This paper describes an emerging massively parallel (MP) high performance computing (HPC) capability - SIERRA Mechanics - that is applicable to the simulation of coupled-physics processes occurring within a potential clay/shale repository for disposal of HLW within the U.S. The SIERRA Mechanics code development project has been underway at Sandia National Laboratories for approximately the past decade under the auspices of the U.S. Department of Energy's Advanced Scientific Computing (ASC) program. SIERRA Mechanics was designed and developed from its inception to run on the latest and most sophisticated massively parallel computing hardware, with the capability to span the hardware range from single workstations to systems with thousands of processors. The foundation of SIERRA Mechanics is the SIERRA tool-kit, which provides finite element application-code services such as: (1) mesh and field data management, both parallel and distributed; (2) transfer operators for mapping field variables from one mechanics application to another; (3) a solution controller for code coupling; and (4) included third party libraries (e.g., solver libraries, communications
Experiments with parallel algorithms for combinatorial problems
G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens
1985-01-01
In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines
The numerical parallel computing of photon transport
International Nuclear Information System (INIS)
Huang Qingnan; Liang Xiaoguang; Zhang Lifa
1998-12-01
The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the structural features of parallel computers, using the strategy of 'divide and conquer', adjusting the algorithm structure of the program, dissolving the data relationships, finding parallelizable ingredients and creating large-grain parallel subtasks, the sequential computing of photon transport is efficiently transformed into parallel and vector computing. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained
A Set of Annotation Interfaces for Alignment of Parallel Corpora
Directory of Open Access Journals (Sweden)
Singh Anil Kumar
2014-09-01
Annotation interfaces for parallel corpora which fit in well with other tools can be very useful. We describe a set of annotation interfaces which fulfill this criterion. This set includes a sentence alignment interface, two different word or word group alignment interfaces and an initial version of a parallel syntactic annotation alignment interface. These tools can be used for manual alignment, or they can be used to correct automatic alignments. Manual alignment can be performed in combination with certain kinds of linguistic annotation. Most of these interfaces use a representation called the Shakti Standard Format that has been found to be very robust and has been used for large and successful projects. It ties together the different interfaces, so that the data created by them is portable across all tools which support this representation. The existence of a query language for data stored in this representation makes it possible to build tools that allow easy search and modification of annotated parallel data.
DEFF Research Database (Denmark)
Pugnale, Alberto; Holst, Malene Kirstine; Kirkegaard, Poul Henning
2010-01-01
This paper aims to discuss recent approaches in using computer tools more and more frequently as supports for the conceptual design phase of the architectural project. The present state-of-the-art about software as conceptual design tool could be summarized in two parallel tendencies. On the one hand, the main software houses are trying to introduce powerful and effective user-friendly applications in the world of building designers, that are more and more able to fit their specific requirements; on the other hand, some groups of expert users with a basic programming knowledge seem to deal with the problem of software as conceptual design tool by means of 'scripting', in other words by self-developing codes able to solve specific and well defined design problems. Starting with a brief historical recall and the discussion of relevant researches and practical experiences, this paper investigates...
Distributed and parallel approach for handle and perform huge datasets
Konopko, Joanna
2015-12-01
Big Data refers to the dynamic, large and disparate volumes of data that come from many different sources (tools, machines, sensors, mobile devices), uncorrelated with each other. It requires new, innovative and scalable technology to collect, host and analytically process the vast amount of data. A proper architecture for a system that processes huge data sets is needed. In this paper, the comparison of distributed and parallel system architectures is presented on the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). This paper also analyzes the problem of extracting and handling valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed that could be used to solve the analyzed problem of storing and processing Big Data.
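To illustrate the MapReduce style contrasted above with a DBMS-style aggregation (an SQL GROUP BY), here is a hedged toy sketch: a map phase emitting local word counts in parallel and a reduce phase merging them by key. It is a stand-in for Hadoop MapReduce, not the platform itself.

    # Toy MapReduce-style word count: parallel map phase, key-grouped reduce phase.
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool

    documents = ["big data needs scalable tools",
                 "parallel data tools scale",
                 "mapreduce and parallel dbms both scale"]

    def map_phase(doc):
        return Counter(doc.split())          # local (word, count) pairs

    def reduce_phase(c1, c2):
        return c1 + c2                       # merge partial counts by key

    if __name__ == "__main__":
        with Pool() as pool:
            partials = pool.map(map_phase, documents)
        totals = reduce(reduce_phase, partials, Counter())
        print(totals.most_common(3))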
Parallel and distributed processing in two SGBDS: A case study
Directory of Open Access Journals (Sweden)
Francisco Javier Moreno
2017-04-01
Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that provide these capabilities are some Database Management Systems (DBMS), such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study in which we evaluate the performance of an SQL query in two of these DBMS. The evaluation is done through various forms of data distribution in a computer network with different degrees of parallelism. Results: The tests of the SQL query revealed the performance differences between the two DBMS analyzed. However, more thorough testing and a wider variety of queries are needed. Conclusions: The differences in performance between the two DBMSs analyzed show that when evaluating this aspect, it is necessary to consider the particularities of each DBMS and the degree of parallelism of the queries.
Parallel dispatch: a new paradigm of electrical power system dispatch
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jun Jason; Wang, Fei-Yue; Wang, Qiang; Hao, Dazhi; Yang, Xiaojing; Gao, David Wenzhong; Zhao, Xiangyang; Zhang, Yingchen
2018-01-01
Modern power systems are evolving into sociotechnical systems with massive complexity, whose real-time operation and dispatch go beyond human capability. Thus, the need for developing and applying new intelligent power system dispatch tools is of great practical significance. In this paper, we introduce the overall business model of power system dispatch, the top-level design approach of an intelligent dispatch system, and the parallel intelligent technology with its dispatch applications. We expect that a new dispatch paradigm, namely the parallel dispatch, can be established by incorporating various intelligent technologies, especially the parallel intelligent technology, to enable secure operation of complex power grids, extend system operators' capabilities, suggest optimal dispatch strategies, and provide decision-making recommendations according to power system operational goals.
Parallel grid generation algorithm for distributed memory computers
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines are indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
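As a hedged, simplified illustration of algebraic grid generation of this kind, the NumPy sketch below blends every interior grid line between an inner (body) and an outer boundary curve; each such block can be generated independently, which is what makes the approach need so little interprocessor communication. The boundary curves are hypothetical and the linear blending is a much simpler rule than the paper's homotopic formulation.

    # Simplified algebraic grid: blend interior lines between two boundary curves.
    import numpy as np

    ni, nj = 41, 21
    s = np.linspace(0.0, 1.0, ni)                # streamwise parameter
    t = np.linspace(0.0, 1.0, nj)                # blending parameter between boundaries

    # hypothetical inner (body) and outer boundary curves
    x_in, y_in = s, 0.1 * np.sin(np.pi * s)
    x_out, y_out = s, np.ones_like(s)

    # algebraic blend of every interior grid line between the two boundaries
    X = (1.0 - t)[None, :] * x_in[:, None] + t[None, :] * x_out[:, None]
    Y = (1.0 - t)[None, :] * y_in[:, None] + t[None, :] * y_out[:, None]
    print(X.shape, Y.shape)                      # (41, 21) structured grid block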
Parallel computing for data science with examples in R, C++ and CUDA
Matloff, Norman
2015-01-01
Parallel Computing for Data Science: With Examples in R, C++ and CUDA is one of the first parallel computing books to concentrate exclusively on parallel data structures, algorithms, software tools, and applications in data science. It includes examples not only from the classic "n observations, p variables" matrix format but also from time series, network graph models, and numerous other structures common in data science. The examples illustrate the range of issues encountered in parallel programming. With the main focus on computation, the book shows how to compute on three types of platfor
Structural synthesis of parallel robots
Gogu, Grigore
This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators. This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1. Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...
GPU Parallel Bundle Block Adjustment
Directory of Open Access Journals (Sweden)
ZHENG Maoteng
2017-09-01
Full Text Available To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to reduce the number of iterations needed to solve the normal equations. A brand-new bundle adjustment workflow is developed to exploit GPU parallel computing. Our method avoids the storage and inversion of the large normal matrix and computes the normal matrix in real time. The proposed method not only greatly reduces the memory requirement of the normal matrix, but also greatly improves the efficiency of bundle adjustment, while achieving the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be completed in only 1.5 minutes while achieving sub-pixel accuracy.
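A hedged sketch of the underlying linear-algebra idea: a Jacobi-preconditioned conjugate gradient solve of the normal equations, applied matrix-free so the full normal matrix is never formed, stored, or inverted. The dense random Jacobian below is only a stand-in; a real bundle adjustment works with a sparse, GPU-resident Jacobian and a structure-exploiting preconditioner.

```python
# Jacobi-preconditioned CG for J^T J x = J^T r, applying J^T (J p) on the fly
# instead of forming the normal matrix.
import numpy as np

def pcg_normal_equations(J, r, tol=1e-10, max_iter=200):
    rhs = J.T @ r
    diag = np.einsum("ij,ij->j", J, J)       # diagonal of J^T J (Jacobi preconditioner)
    x = np.zeros(J.shape[1])
    res = rhs.copy()
    z = res / diag
    p = z.copy()
    rz = res @ z
    for _ in range(max_iter):
        Ap = J.T @ (J @ p)                    # matrix-free application of J^T J
        alpha = rz / (p @ Ap)
        x += alpha * p
        res -= alpha * Ap
        if np.linalg.norm(res) < tol * np.linalg.norm(rhs):
            break
        z = res / diag
        rz_new = res @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
J = rng.normal(size=(500, 60))
r = rng.normal(size=500)
x = pcg_normal_equations(J, r)
print(np.allclose(J.T @ J @ x, J.T @ r, atol=1e-6))   # True
```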
Energy Technology Data Exchange (ETDEWEB)
Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay S. [Argonne National Lab. (ANL), Argonne, IL (United States); Rahaman, Ronald O. [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-02-29
SHARP, developed under the NEAMS Reactor Product Line, is an advanced modeling and simulation toolkit for the analysis of advanced nuclear reactors. SHARP currently comprises three physics modules: neutronics, thermal hydraulics, and structural mechanics. SHARP empowers designers to produce accurate results for modeling physical phenomena that have been identified as important for nuclear reactor analysis. SHARP can use existing physics codes and take advantage of existing infrastructure capabilities in the MOAB framework and the coupling driver/solver library, the Coupled Physics Environment (CouPE), which builds on the widely used, scalable PETSc library. This report demonstrates the coupled-physics simulation capability of SHARP by introducing the demonstration example called sahex, in advance of the SHARP release expected by March 2016. sahex consists of 6 fuel pins with cladding, 1 control rod, sodium coolant, and an outer duct wall that encloses all the other components. This example is carefully chosen as a proof of concept on the way to more complex demonstration examples such as an EBR-II assembly and the ABTR full core. The workflow of preparing the input files, running the case, and analyzing the results is demonstrated in this report. Moreover, an extension of the sahex model called sahex_core, which adds six homogenized neighboring assemblies to the fully heterogeneous sahex model, is presented to test homogenization capabilities in both Nek5000 and PROTEUS. Some preliminary information on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and configure/install with optimal flags in an architecture-aware fashion, is also covered in this report. Step-by-step instructions are provided to help users create their own cases. Further details on these processes will be provided in the SHARP user manual that will accompany the first release.
International Nuclear Information System (INIS)
Yu, Y. Q.; Shemon, E. R.; Mahadevan, Vijay S.; Rahaman, Ronald O.
2016-01-01
SHARP, developed under the NEAMS Reactor Product Line, is an advanced modeling and simulation toolkit for the analysis of advanced nuclear reactors. SHARP currently comprises three physics modules: neutronics, thermal hydraulics, and structural mechanics. SHARP empowers designers to produce accurate results for modeling physical phenomena that have been identified as important for nuclear reactor analysis. SHARP can use existing physics codes and take advantage of existing infrastructure capabilities in the MOAB framework and the coupling driver/solver library, the Coupled Physics Environment (CouPE), which builds on the widely used, scalable PETSc library. This report demonstrates the coupled-physics simulation capability of SHARP by introducing the demonstration example called sahex, in advance of the SHARP release expected by March 2016. sahex consists of 6 fuel pins with cladding, 1 control rod, sodium coolant, and an outer duct wall that encloses all the other components. This example is carefully chosen as a proof of concept on the way to more complex demonstration examples such as an EBR-II assembly and the ABTR full core. The workflow of preparing the input files, running the case, and analyzing the results is demonstrated in this report. Moreover, an extension of the sahex model called sahex_core, which adds six homogenized neighboring assemblies to the fully heterogeneous sahex model, is presented to test homogenization capabilities in both Nek5000 and PROTEUS. Some preliminary information on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and configure/install with optimal flags in an architecture-aware fashion, is also covered in this report. Step-by-step instructions are provided to help users create their own cases. Further details on these processes will be provided in the SHARP user manual that will accompany the first release.
Cpl6: The New Extensible, High-Performance Parallel Coupler for the Community Climate System Model
Energy Technology Data Exchange (ETDEWEB)
Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brian; Bettge, Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun
2005-03-24
Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used, state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.
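The standardized component interface the abstract describes can be sketched schematically. The class and method names below are invented for illustration and are not the cpl6/Model Coupling Toolkit Fortran90 API; a toy sequential driver stands in for the concurrent execution and parallel data movement that cpl6 actually performs.

```python
# Schematic coupler pattern: components expose a uniform init/run/finalize
# interface and exchange named fields only through the central coupler.
from abc import ABC, abstractmethod

class Component(ABC):
    @abstractmethod
    def init(self): ...
    @abstractmethod
    def run(self, step, imports):
        """Advance one coupling interval; return exported fields."""
    @abstractmethod
    def finalize(self): ...

class Atmosphere(Component):
    def init(self): self.t_air = 288.0
    def run(self, step, imports):
        self.t_air += 0.01 * (imports.get("sst", self.t_air) - self.t_air)
        return {"t_air": self.t_air}
    def finalize(self): pass

class Ocean(Component):
    def init(self): self.sst = 290.0
    def run(self, step, imports):
        self.sst += 0.001 * (imports.get("t_air", self.sst) - self.sst)
        return {"sst": self.sst}
    def finalize(self): pass

def couple(components, n_steps):
    fields = {}
    for c in components: c.init()
    for step in range(n_steps):
        for c in components:        # toy sequential loop; cpl6 runs components in parallel
            fields.update(c.run(step, fields))
    for c in components: c.finalize()
    return fields

print(couple([Atmosphere(), Ocean()], n_steps=10))
```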
A tandem parallel plate analyzer
International Nuclear Information System (INIS)
Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.
1996-11-01
Through a new modification of the parallel plate analyzer, second-order focusing is obtained at an arbitrary injection angle. An analyzer of this kind with a small injection angle has the advantage of a small operating voltage compared to the Proca and Green analyzer, where the injection angle is 30 degrees. The newly proposed analyzer will therefore be very useful for precise energy measurements of high-energy particles in the MeV range. (author)
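For orientation, the standard uniform-field relations for a simple parallel-plate mirror (not the modified geometry proposed in the paper) already show why the injection angle matters. For a particle of kinetic energy $W$ and charge $q$ entering the gap (plate spacing $d$, plate voltage $V$, field $E = V/d$) at angle $\theta$ to the entrance plate, elementary projectile kinematics give

$$
x(\theta)=\frac{2W}{qE}\,\sin 2\theta,
\qquad
y_{\max}=\frac{W\sin^{2}\theta}{qE}=\frac{W\,d\,\sin^{2}\theta}{qV}.
$$

Setting $\mathrm{d}x/\mathrm{d}\theta=0$ recovers the classic first-order focus at $\theta=45^\circ$, while keeping the trajectory inside the gap requires $V \gtrsim (W/q)\sin^{2}\theta$. The required plate voltage therefore scales as $\sin^{2}\theta$, which is why obtaining second-order focusing at a small injection angle, as proposed here, permits a much lower operating voltage than the 30-degree Proca and Green configuration.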