Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; Wissink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
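The decision of where to refine is typically driven by a local error or feature indicator. As a minimal sketch of the idea (not any particular code's criterion), the following hypothetical example flags cells for refinement wherever the solution gradient exceeds a threshold, so that only the steep front pays for fine resolution:

```python
import numpy as np

def flag_cells(u, dx, threshold):
    """Flag cells whose local gradient magnitude exceeds a threshold.

    u: 1D array of cell-centered solution values.
    Returns a boolean mask marking cells that need refinement.
    """
    grad = np.abs(np.gradient(u, dx))
    return grad > threshold

# Example: a steep transition layer near x = 0.5 on an otherwise
# smooth profile, mimicking a flame front or contact surface.
x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.01)
flags = flag_cells(u, dx=x[1] - x[0], threshold=5.0)

# Only the handful of cells around the front are flagged; the rest of
# the domain keeps its coarse resolution.
print(flags.sum(), "of", flags.size, "cells flagged")
```

In a real AMR code the flagged cells would then be clustered into patches (structured AMR) or trigger element subdivision (unstructured AMR); the threshold and indicator here are illustrative assumptions only.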
Adaptive Mesh Refinement in CTH
This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor, and massively parallel platforms. Approximate factor-of-three improvements in memory and performance over comparable-resolution non-adaptive calculations have been demonstrated for a number of problems.
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a production-level software package that applies the Adaptive Mesh Refinement methodology to numerical partial differential equations. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation on two grid configurations from a real application. Counts of lines of code for both implementations are also provided.
Adaptive Mesh Refinement for Storm Surge
Mandli, Kyle T
2014-01-01
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run.
Adaptive mesh refinement for storm surge
Mandli, Kyle T.
2014-03-01
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run.
Parallel object-oriented adaptive mesh refinement
Balsara, D.; Quinlan, D.J.
1997-04-01
In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties both in expressing the basic single-grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly, and in requiring that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single-grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
GRChombo: Numerical relativity with adaptive mesh refinement
Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran
2015-12-01
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial 'many-boxes-in-many-boxes' mesh hierarchies and massive parallelism through the message passing interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
Fully implicit adaptive mesh refinement MHD algorithm
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as Fast Adaptive Composite grid (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.
Visualization of Scalar Adaptive Mesh Refinement Data
VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-12-06
Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.
GRChombo : Numerical Relativity with Adaptive Mesh Refinement
Clough, Katy; Finkel, Hal; Kunesch, Markus; Lim, Eugene A; Tunyasuvunakool, Saran
2015-01-01
Numerical relativity has undergone a revolution in the past decade. With a well-understood mathematical formalism, and full control over the gauge modes, it is now entering an era in which the science can be properly explored. In this work, we introduce GRChombo, a new numerical relativity code written to take full advantage of modern parallel computing techniques. GRChombo's features include full adaptive mesh refinement with block structured Berger-Rigoutsos grid generation which supports non-trivial "many-boxes-in-many-boxes" meshing hierarchies, and massive parallelism through the Message Passing Interface (MPI). GRChombo evolves the Einstein equation with the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. We show that GRChombo passes all the standard "Apples-to-Apples" code comparison tests. We also show that it can stably and accurately evolve vacuum black hole spacetimes such as binary black hole mergers, and non-vacuum spacetimes such as scalar collapses into black holes.
Elliptic Solvers for Adaptive Mesh Refinement Grids
Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.
1999-06-03
We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular, we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other, more AMR-specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.
Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries
Philip, B.
2000-07-24
Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory confirming that the convergence rates of FAC and AFAC are independent of the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory has yet to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.
AMR++: Object-Oriented Parallel Adaptive Mesh Refinement
Quinlan, D.; Philip, B.
2000-02-02
Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches toward software development. This paper will present the details of our object-oriented work on the simplification of the use of adaptive mesh refinement on applications with complex geometries for both serial and distributed memory parallel computation. We will present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach we will present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the required grid generation work to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. This paper will focus on the management of complexity within development of the AMR++ library which forms a part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Beckingsale, D. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Gaudin, W. P. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Hornung, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Herdman, J. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Jarvis, S. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
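The coarsen and refine operators mentioned in the abstract are, at their core, simple data-parallel reductions and replications over patch data. As an illustrative sketch (not the paper's GPU kernels), the following NumPy example shows a conservative 2x2 averaging restriction and a piecewise-constant prolongation on a 2D patch:

```python
import numpy as np

def coarsen(fine):
    """Coarsen a 2D patch by averaging each 2x2 block of fine zones
    into one coarse zone (conservative restriction)."""
    ny, nx = fine.shape
    return fine.reshape(ny // 2, 2, nx // 2, 2).mean(axis=(1, 3))

def refine(coarse):
    """Refine a 2D patch by piecewise-constant injection: each coarse
    zone is copied into the 2x2 fine zones it covers."""
    return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

patch = np.arange(16.0).reshape(4, 4)
c = coarsen(patch)          # 2x2 coarse patch
f = refine(c)               # back to 4x4, now piecewise constant

# The coarsen/refine pair is conservative: the mean value is preserved.
assert np.isclose(patch.mean(), c.mean())
assert np.isclose(c.mean(), f.mean())
```

On a GPU, both operations map naturally onto one thread per coarse zone, which is what makes them good candidates for data-parallel implementation; higher-order (e.g. linear) prolongation would replace the plain injection shown here.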
PARAMESH V4.1: Parallel Adaptive Mesh Refinement
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; de Fainchtein, Rosalinda; Packer, Charles
2011-06-01
PARAMESH is a package of Fortran 90 subroutines designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain, with spatial resolution varying to satisfy the demands of the application. These sub-grid blocks form the nodes of a tree data-structure (quad-tree in 2D or oct-tree in 3D). Each grid block has a logically cartesian mesh. The package supports 1, 2 and 3D models. PARAMESH is released under the NASA-wide Open-Source software license.
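The tree-of-blocks organization described above can be sketched in a few lines. This is a hypothetical, minimal 2D (quad-tree) illustration of the idea, not PARAMESH's actual Fortran 90 data structures: every block carries its own small logically Cartesian mesh of fixed size, and refining a block creates four half-size children.

```python
class Block:
    """One node of a PARAMESH-style quad-tree (2D). Each block carries
    its own logically Cartesian mesh of nzones x nzones zones; refining
    a block creates four half-size children, one level deeper."""

    def __init__(self, x0, y0, size, level=0, nzones=8):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.nzones = nzones          # zones per side, same on every block
        self.children = []

    def refine(self):
        h = self.size / 2
        self.children = [
            Block(self.x0,     self.y0,     h, self.level + 1, self.nzones),
            Block(self.x0 + h, self.y0,     h, self.level + 1, self.nzones),
            Block(self.x0,     self.y0 + h, h, self.level + 1, self.nzones),
            Block(self.x0 + h, self.y0 + h, h, self.level + 1, self.nzones),
        ]

    def leaves(self):
        """Leaf blocks are the ones that actually hold solution data."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = Block(0.0, 0.0, 1.0)
root.refine()                 # level 1: four blocks cover the domain
root.children[0].refine()     # level 2 in one corner only
print(len(root.leaves()))     # prints 7
```

Because every block has the same zone count, spatial resolution doubles with each level while the per-block workload stays uniform, which is what makes the tree easy to distribute across processors; the 3D case simply uses eight children (an oct-tree).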
The Nonlinear Sigma Model With Distributed Adaptive Mesh Refinement
Liebling, Steven L.
2004-01-01
An adaptive mesh refinement (AMR) scheme is implemented in a distributed environment using Message Passing Interface (MPI) to find solutions to the nonlinear sigma model. Previous work studied behavior similar to black hole critical phenomena at the threshold for singularity formation in this flat space model. This work is a follow-up describing extensions to distribute the grid hierarchy and presenting tests showing the correctness of the model.
Texture-based volume rendering of adaptive mesh refinement data
Kähler, R.; Hege, H.
2002-01-01
Many phenomena in nature and engineering happen simultaneously on rather diverse spatial and temporal scales. In other words, they exhibit a multi-scale character. A special numerical multilevel technique associated with a particular hierarchical data structure is adaptive mesh refinement (AMR). This scheme achieves locally very high spatial and temporal resolutions. Due to its popularity, many scientists are in need of interactive visualization tools for AMR data. In this article, we present...
A Diffusion Synthetic Acceleration Method for Block Adaptive Mesh Refinement.
Ward, R. C. (Robert C.); Baker, R. S. (Randall S.); Morel, J. E. (Jim E.)
2005-01-01
A prototype two-dimensional Diffusion Synthetic Acceleration (DSA) method on a Block-based Adaptive Mesh Refinement (BAMR) transport mesh has been developed. The Block-Adaptive Mesh Refinement Diffusion Synthetic Acceleration (BAMR-DSA) method was tested in the PARallel TIme-Dependent SN (PARTISN) deterministic transport code. The BAMR-DSA equations are derived by differencing the DSA equation using a vertex-centered diffusion discretization that is diamond-like and may be characterized as 'partially' consistent. The derivation of a diffusion discretization that is fully consistent with diamond transport differencing on a BAMR mesh does not appear to be possible. However, despite being partially consistent, the BAMR-DSA method is effective for many applications. The BAMR-DSA solver was implemented and tested in two dimensions for rectangular (XY) and cylindrical (RZ) geometries. Testing results confirm that a partially consistent BAMR-DSA method will introduce instabilities for extreme cases, e.g., scattering ratios approaching 1.0 with optically thick cells, but for most realistic problems the BAMR-DSA method provides effective acceleration. The initial approach of storing the BAMR-DSA equations in a full matrix and solving them by LU decomposition has been extended to include Compressed Sparse Row (CSR) storage and a Conjugate Gradient (CG) solver. The CSR and CG methods provide significantly more efficient storage and faster solution.
Block-structured adaptive mesh refinement - theory, implementation and application
Deiterding, Ralf [ORNL
2011-01-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid --FAC-- algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
Block-structured Adaptive Mesh Refinement - Theory, Implementation and Application
Deiterding Ralf
2011-12-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
Enzo: An Adaptive Mesh Refinement Code for Astrophysics
Bryan, Greg L; O'Shea, Brian W; Abel, Tom; Wise, John H; Turk, Matthew J; Reynolds, Daniel R; Collins, David C; Wang, Peng; Skillman, Samuel W; Smith, Britton; Harkness, Robert P; Bordner, James; Kim, Ji-hoon; Kuhlen, Michael; Xu, Hao; Goldbaum, Nathan; Hummels, Cameron; Kritsuk, Alexei G; Tasker, Elizabeth; Skory, Stephen; Simpson, Christine M; Hahn, Oliver; Oishi, Jeffrey S; So, Geoffrey C; Zhao, Fen; Cen, Renyue; Li, Yuan
2013-01-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in 1, 2, and 3 dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically-thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
Hydrodynamical Adaptive Mesh Refinement Simulations of Disk Galaxies
Gibson, Brad K; Sanchez-Blazquez, Patricia; Teyssier, Romain; House, Elisa L; Brook, Chris B; Kawata, Daisuke
2008-01-01
To date, fully cosmological hydrodynamic disk simulations to redshift zero have only been undertaken with particle-based codes, such as GADGET, Gasoline, or GCD+. In light of the (supposed) limitations of traditional implementations of smoothed particle hydrodynamics (SPH), or at the very least, their respective idiosyncrasies, it is important to explore approaches to galaxy formation that are complementary to the SPH paradigm. We present the first high-resolution cosmological disk simulations to redshift zero using an adaptive mesh refinement (AMR)-based hydrodynamical code, in this case, RAMSES. We analyse the temporal and spatial evolution of the simulated stellar disks' vertical heating, velocity ellipsoids, stellar populations, vertical and radial abundance gradients (gas and stars), assembly/infall histories, warps/lopsidedness, disk edges/truncations (gas and stars), ISM physics implementations, and compare and contrast these properties with our sample of cosmological SPH disks, generated with GCD+. These prelim...
A Spectral Adaptive Mesh Refinement Method for the Burgers equation
Nasr Azadani, Leila; Staples, Anne
2013-03-01
Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems involve a wide range of scales that vary in time and space. Resolving all the scales numerically requires high grid resolutions: the smaller the scales, the higher the resolution must be. However, small scales usually form in only a small portion of the domain or during a specific period of time. AMR is an efficient method for solving such problems, allowing high grid resolution where and when it is needed while minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudo-spectral methods, we apply AMR in Fourier space. The spectral AMR (SAMR) method we present here is applied to the Burgers equation, and the results are compared with those obtained using standard solution methods on a fine mesh.
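The idea of refining in Fourier space can be illustrated with a toy sketch. This is not the authors' SAMR code: the solver, the tail-energy indicator, and all sizes and thresholds below are illustrative assumptions, showing only how a pseudo-spectral Burgers solver might grow its set of retained modes on demand.

```python
import numpy as np

def rhs(u_hat, nu):
    """Spectral right-hand side of Burgers: u_t = -u u_x + nu u_xx on [0, 2*pi)."""
    k = np.arange(len(u_hat))
    du = 1j * k * u_hat
    du[-1] = 0.0                      # drop the ill-defined Nyquist derivative
    u = np.fft.irfft(u_hat)
    ux = np.fft.irfft(du)
    return -np.fft.rfft(u * ux) - nu * k**2 * u_hat

def tail_heavy(u_hat, frac=1e-8):
    """Refinement indicator: the top quarter of the retained modes still
    carries a non-negligible share of the spectral energy."""
    e = np.abs(u_hat) ** 2
    return e[-len(e) // 4:].sum() > frac * e.sum()

# Illustrative driver: forward Euler in time, doubling the retained modes
# (one refinement at most, to keep the explicit step stable) when needed.
n, nu, dt = 64, 0.05, 1e-3
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u_hat = np.fft.rfft(np.sin(x))
for step in range(200):
    u_hat = u_hat + dt * rhs(u_hat, nu)
    if tail_heavy(u_hat) and len(u_hat) < 64:
        m = len(u_hat)
        # Zero-pad the spectrum; the factor 2 compensates numpy's
        # length-dependent irfft normalization as the grid doubles.
        u_hat = np.concatenate([u_hat, np.zeros(m - 1)]) * 2.0
u = np.fft.irfft(u_hat)
```

The appeal of the approach described in the abstract is visible even in this sketch: extra modes are paid for only after the solution develops small scales.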
3D Compressible Melt Transport with Adaptive Mesh Refinement
Dannberg, Juliane; Heister, Timo
2015-04-01
Melt generation and migration have been the subject of numerous investigations, but their typical time and length scales are vastly different from those of mantle convection, which makes it difficult to study these processes in a unified framework. The equations that describe coupled Stokes-Darcy flow were derived long ago and have been successfully implemented and applied in numerical models (Keller et al., 2013). However, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. In addition, previous models neglect the compressibility of both the solid and the fluid phase. However, experiments have shown that the melt density change from the depth of melt generation to the surface leads to a volume increase of up to 20%. Considering these volume changes in both phases also ensures self-consistency of models that strive to link melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We describe our extension of the finite-element mantle convection code ASPECT (Kronbichler et al., 2012) that allows for solving additional equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects. We evaluate the functionality and potential of this method using a series of simple model setups and benchmarks, comparing results of the compressible and incompressible formulation and
Direct numerical simulation of bubbles with parallelized adaptive mesh refinement
The study of two-phase thermal-hydraulics is a major topic in nuclear engineering, for both the safety and the efficiency of nuclear facilities. In addition to experiments, numerical modeling helps pinpoint where bubbles appear and how they behave, in the core as well as in the steam generators. This work presents the finest scale of representation of two-phase flows, direct numerical simulation of bubbles. We use the 'Di-phasic Low Mach Number' equation model, which is particularly suited to low-Mach-number flows, that is, flows whose velocity is much lower than the speed of sound; this is typical of nuclear thermal-hydraulics conditions. Because we study bubbles, we capture the front between the vapor and liquid phases with a downward flux-limiting numerical scheme. The specific discrete analysis technique this work introduces is well-balanced parallel adaptive mesh refinement (AMR). With AMR, we refine the coarse grid on a batch of patches in order to locally increase precision in the areas that matter most and to capture fine changes in the front location and topology. We show that patch-based AMR is well suited to parallel computing. We use a variety of physical examples: forced advection, heat transfer, phase changes represented by a Stefan model, as well as the combination of all these models. We present the results of these numerical simulations, as well as the speed-up relative to equivalent non-AMR simulations and to serial computation of the same problems. This document is made up of an abstract and the slides of the presentation. (author)
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
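The recursive tag-and-refine step described above can be sketched in a few lines. This is an illustration, not the paper's code: the `front` indicator below is a hypothetical stand-in for the Richardson-extrapolation error estimate, and the bisection rule is the simplest possible one.

```python
import math

def refine(cells, indicator, tol, max_level=3):
    """Recursively bisect 1D cells whose refinement indicator exceeds tol.

    Each cell is a tuple (x_left, x_right, level). `indicator` maps a
    cell's endpoints to a scalar error estimate.
    """
    out = []
    for (xl, xr, lev) in cells:
        if lev < max_level and indicator(xl, xr) > tol:
            xm = 0.5 * (xl + xr)
            out += refine([(xl, xm, lev + 1), (xm, xr, lev + 1)],
                          indicator, tol, max_level)
        else:
            out.append((xl, xr, lev))
    return out

def front(xl, xr):
    # Jump of a steep tanh profile across the cell: large only near x = 0.5.
    return abs(math.tanh(40.0 * (xr - 0.5)) - math.tanh(40.0 * (xl - 0.5)))

base = [(i / 8.0, (i + 1) / 8.0, 0) for i in range(8)]
mesh = refine(base, front, tol=0.1)
# Cells near the front at x = 0.5 reach the maximum level; smooth regions stay coarse.
```

The tree of parent/child cells implied by the recursion is exactly the data structure whose management cost the paper reports to be small.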
Adaptive mesh refinement and adjoint methods in geophysics simulations
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria for adaptation are most suitable. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
Lomov, I; Pember, R; Greenough, J; Liu, B
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses a hierarchical, structured grid approach first developed by (Berger and Oliger 1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by (Colella et al. 1993), refined by (Greenough et al. 1995), and extended to the solution of solid mechanics problems with significant strength by (Lomov and Rubin 2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
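The coarse-advance / fine-subcycle / synchronize cycle described above is generic to Berger-Oliger AMR and can be sketched schematically. This is not the Geodyn/Raptor implementation: `step_fn` and `sync_fn` are hypothetical callbacks standing in for the Godunov advance and the conservative average-down.

```python
def advance(levels, lev, t, dt, step_fn, sync_fn, ratio=2):
    """Schematic Berger-Oliger recursive advance.

    levels[lev] holds the state on refinement level lev. The current
    level takes one step of size dt; the next finer level then takes
    `ratio` sub-steps of size dt/ratio to reach the same time, after
    which the coarse state is replaced by the synchronized fine data.
    """
    levels[lev] = step_fn(levels[lev], t, dt)
    if lev + 1 < len(levels):
        for k in range(ratio):
            advance(levels, lev + 1, t + k * dt / ratio, dt / ratio,
                    step_fn, sync_fn, ratio)
        levels[lev] = sync_fn(levels[lev], levels[lev + 1])
    return levels

# Toy check: each level's state simply records how far in time it advanced.
levels = advance([0.0, 0.0, 0.0], 0, 0.0, 1.0,
                 step_fn=lambda u, t, dt: u + dt,
                 sync_fn=lambda coarse, fine: fine)
# levels -> [1.0, 1.0, 1.0]: every level reaches the same time before syncing.
```

The recursion makes the key invariant explicit: a coarse step never completes until all finer levels have caught up to its new time.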
SIP-CESE MHD model of solar wind with adaptive mesh refinement of hexahedral meshes
Feng, Xueshang; Xiang, Changqing; Zhong, Dingkun; Zhou, Yufen; Yang, Liping; Ma, Xiaopeng
2014-07-01
Solar-interplanetary space involves many features, such as discontinuities and the heliospheric current sheet, with spatial scales many orders of magnitude smaller than the system size. Scalable, massively parallel, block-based adaptive mesh refinement (AMR) promises to resolve the different temporal and spatial scales on which solar-wind plasma evolves throughout the vast solar-interplanetary space with far fewer cells, while still providing sufficient resolution. Here, we carry out the AMR implementation of our Solar-Interplanetary space-time conservation element and solution element (CESE) magnetohydrodynamic model (SIP-CESE MHD model) using a six-component grid system (Feng et al., 2007, 2010). The AMR realization of the SIP-CESE MHD model is carried out directly on hexahedral meshes with the aid of the parallel AMR package PARAMESH available at http://sourceforge.net/projects/paramesh/. At the same time, the topology of the magnetic field expansion factor and the minimum angular separation (at the photosphere) between an open field foot point and its nearest coronal-hole boundary are merged into the model in order to determine the volumetric heating source terms. Our numerical results for the validation study of the solar-wind background of Carrington rotation 2060 show overall good agreement in the solar corona and in interplanetary space with observations from the Solar and Heliospheric Observatory (SOHO) and spacecraft data from OMNI.
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Zhang, Wei-Qun (KIPAC, Menlo Park); MacFadyen, Andrew I. (Princeton, Inst. Advanced Study)
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
A Parallel Algorithm for Adaptive Local Refinement of Tetrahedral Meshes Using Bisection
LinBo Zhang
2009-01-01
Local mesh refinement is one of the key steps in the implementation of adaptive finite element methods. This paper presents a parallel algorithm for distributed memory parallel computers for adaptive local refinement of tetrahedral meshes using bisection. This algorithm is used in PHG, Parallel Hierarchical Grid (http://lsec.cc.ac.cn/phg/), a toolbox under active development for parallel adaptive finite element solution of partial differential equations. The algorithm proposed is characterized by allowing simultaneous refinement of submeshes to arbitrary levels before synchronization between submeshes, and without the need for a central coordinator process for managing new vertices. Using the concept of canonical refinement, a simple proof is given that the resulting mesh is independent of the mesh partitioning, which is useful for better understanding the behaviour of the bisection refinement procedure. AMS subject classifications: 65Y05, 65N50
GAMER: a GPU-Accelerated Adaptive Mesh Refinement Code for Astrophysics
Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong
2009-01-01
We present the newly developed code GAMER (GPU-accelerated Adaptive MEsh Refinement), which adopts a novel approach to improve the performance of adaptive mesh refinement (AMR) astrophysical simulations by a large factor with the use of the graphics processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing TVD scheme for the hydrodynamic solver, and a multi-level relaxation scheme for ...
An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods
Li, Zhilin; Song, Peng
2012-01-01
An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube of |φ(x,y)|≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. The AMR methods could obtain solutions with ...
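The tube-based tagging described in this abstract is straightforward to sketch. This is an illustration, not the authors' solver: the circular level set and the grid size below are invented for the example, and only the tagging step is shown, not the multigrid solve.

```python
import numpy as np

def tag_tube(phi, delta):
    """Flag cells inside the narrow tube |phi| <= delta around the zero
    level set of phi; only these cells receive a finer Cartesian mesh."""
    return np.abs(phi) <= delta

# Example: circular interface phi = sqrt(x^2 + y^2) - 0.5 on a 32x32 grid.
x, y = np.meshgrid(np.linspace(-1.0, 1.0, 32), np.linspace(-1.0, 1.0, 32))
phi = np.hypot(x, y) - 0.5
tags = tag_tube(phi, delta=0.1)
# Only a thin annulus of cells is tagged, so the fine mesh stays small.
```

Because the interface is a one-dimensional curve in a two-dimensional domain, the tagged fraction shrinks as the grid is refined, which is the source of the method's savings.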
Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions
In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria, for thin regions near a confining wall or plane of symmetry and for general configurations, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane against a critical value. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need for thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
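A minimal sketch of the distance-oriented criterion, assuming the reference plane is given by a point and a unit normal; the function name, arguments, and the threshold value are illustrative, not taken from the paper.

```python
def needs_refinement(cell_size, center, plane_point, plane_normal, r_crit):
    """Distance-oriented thin-region criterion (schematic).

    Refine an interfacial cell when the ratio of its size to the distance
    from its mass center to a reference plane (e.g. a confining wall or a
    plane of symmetry) exceeds the critical value r_crit.
    plane_normal is assumed to be a unit vector.
    """
    d = abs(sum((c - p) * n for c, p, n in zip(center, plane_point, plane_normal)))
    # A cell sitting on the plane itself is always refined.
    return d == 0.0 or cell_size / d > r_crit

# A 0.1-wide interfacial cell whose center sits 0.15 above the wall z = 0
# is only about 1.5 cells away from it, so it is flagged for refinement.
flagged = needs_refinement(0.1, (0.4, 0.2, 0.15), (0.0, 0.0, 0.0),
                           (0.0, 0.0, 1.0), r_crit=0.5)
# flagged -> True
```

As the abstract notes, this test needs no film-thickness reconstruction: a point-to-plane distance and one division suffice.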
Cosmological Shocks in Adaptive Mesh Refinement Simulations and the Acceleration of Cosmic Rays
Skillman, Samuel W.; O'Shea, Brian W.; Hallman, Eric J.; Burns, Jack O.; Norman, Michael L.
2008-01-01
We present new results characterizing cosmological shocks within adaptive mesh refinement N-Body/hydrodynamic simulations that are used to predict non-thermal components of large-scale structure. This represents the first study of shocks using adaptive mesh refinement. We propose a modified algorithm for finding shocks from those used on unigrid simulations that reduces the shock frequency of low Mach number shocks by a factor of ~3. We then apply our new technique to a large, (512 Mpc/h)^3, ...
Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak
2006-01-01
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this ...
Hornung, R.D. [Duke Univ., Durham, NC (United States)]
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
Relativistic Vlasov-Maxwell modelling using finite volumes and adaptive mesh refinement
Wettervik, Benjamin Svedung; Siminos, Evangelos; Fülöp, Tünde
2016-01-01
The dynamics of collisionless plasmas can be modelled by the Vlasov-Maxwell system of equations. An Eulerian approach is needed to accurately describe processes that are governed by high energy tails in the distribution function, but is of limited efficiency for high dimensional problems. The use of an adaptive mesh can reduce the scaling of the computational cost with the dimension of the problem. Here, we present a relativistic Eulerian Vlasov-Maxwell solver with block-structured adaptive mesh refinement in one spatial and one momentum dimension. The discretization of the Vlasov equation is based on a high-order finite volume method. A flux corrected transport algorithm is applied to limit spurious oscillations and ensure the physical character of the distribution function. We demonstrate a factor-of-five speed-up from the use of an adaptive mesh in a typical scenario involving laser-plasma interaction in the self-induced transparency regime.
Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal method. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reducing computational effort relative to uniform fine mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine gradients of node surface flux and to explore the possibility of node refinements in appropriate regions and directions of the problem. The benefit of the approach is reducing computational effort relative to uniform fine mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniati, Francesco; Martin, Daniel F.
2011-01-01
We present the implementation of a three-dimensional, second order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic-Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the St...
Fromang, S.; Hennebelle, P.; Teyssier, R.
2006-01-01
In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. The algorithm is based on a previous work in which the MUSCL--Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Through a series of test problems, we illustrate the performances of this new code using two diffe...
A Patch-based Partitioner for Structured Adaptive Mesh Refinement : Implementation and Evaluation
Vakili, Abbas
2008-01-01
To increase the speed of computer simulations we solve partial differential equations (PDEs) using structured adaptive mesh refinement (SAMR). During the execution of an SAMR application, finer grids are superimposed dynamically on coarser grids where a more accurate solution is needed in the computational domain. To further decrease the computation time, we use parallel computers and divide the computational work between the processors. This gives rise to a challenging load balancing problem. The ...
Greg L. Bryan
2002-01-01
As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo, designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10^12 in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.
Adaptive mesh refinement with spectral accuracy for magnetohydrodynamics in two space dimensions
We examine the accuracy of high-order spectral element methods, with and without adaptive mesh refinement (AMR), in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang (OT) vortex, made up of a magnetic X-point centred on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit and uses the Elsaesser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere in the fluid context (Rosenberg et al 2006 J. Comput. Phys. 215 59-80); the code allows both statically and dynamically refined grids. Tests of the algorithm using analytic solutions are described, and comparisons of the OT solutions with pseudo-spectral computations are performed. We demonstrate for moderate Reynolds numbers that the algorithms using both static and refined grids reproduce the pseudo-spectral solutions quite well. We show that low-order truncation, even with a comparable number of global degrees of freedom, fails to correctly model some strong (sup-norm) quantities in this problem, even though it adequately satisfies the weak (integrated) balance diagnostics
A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction
Herrnstein, A
2005-09-08
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO{sub 2} concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No
Adaptive Mesh Refinement with the PLUTO Code for Astrophysical Fluid Dynamics
Mignone, A; Tzeferacos, P; van Straalen, B; Colella, P; Bodo, G
2011-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell-center in a dimensionally unsplit fashion using the Corner Transport Upwind (CTU) method. Time stepping relies on a characteristic tracing step where piecewise parabolic method (PPM), weighted essentially non-oscillatory (WENO) or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange mu...
Single-Pass GPU-Raycasting for Structured Adaptive Mesh Refinement Data
Kaehler, Ralf
2012-01-01
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology such simulations now can capture spatial scales ten orders of magnitude apart and more. The irregular locations and extensions of the refined regions in the SAMR scheme and the fact that different resolution levels partially overlap, poses a challenge for GPU-based direct volume rendering methods. kD-trees have proven to be advantageous to subdivide the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting sche...
A high order special relativistic hydrodynamic code with space-time adaptive mesh refinement
Zanotti, Olindo
2013-01-01
We present a high order one-step ADER-WENO finite volume scheme with space-time adaptive mesh refinement (AMR) for the solution of the special relativistic hydrodynamics equations. By adopting a local discontinuous Galerkin predictor method, a high order one-step time discretization is obtained, with no need for Runge-Kutta sub-steps. This turns out to be particularly advantageous in combination with space-time adaptive mesh refinement, which has been implemented following a "cell-by-cell" approach. As in existing second order AMR methods, the present higher order AMR algorithm also features time-accurate local time stepping (LTS), where grids on different spatial refinement levels are allowed to use different time steps. We also compare two different Riemann solvers for the computation of the numerical fluxes at the cell interfaces. The new scheme has been validated over a sample of numerical test problems in one, two and three spatial dimensions, exploring its ability to resolve the propagation of relativ...
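The time-accurate local time stepping (LTS) that recurs in these AMR schemes can be sketched in a few lines. The Python below is a hypothetical illustration, not code from any of the cited papers: each level advances with a step shrunk by the refinement ratio r, and one coarse step recursively triggers r sub-steps on the next finer level, so all levels reach the same physical time at synchronization points.

```python
def advance_hierarchy(levels, l, dt, r=2, log=None):
    """Recursively advance an AMR level hierarchy with time-accurate
    local time stepping (hypothetical sketch).

    levels : list of per-level clocks (stand-ins for the real solution)
    l      : index of the level being advanced
    dt     : time step on level l; level l+1 takes r steps of dt/r
    """
    if log is None:
        log = []
    levels[l] += dt           # stand-in for the actual hydro update on level l
    log.append((l, dt))       # record which level stepped and by how much
    if l + 1 < len(levels):
        for _ in range(r):    # subcycle the finer level
            advance_hierarchy(levels, l + 1, dt / r, r, log)
    return log
```

With three levels and r = 2, one root step of dt produces 1, 2, and 4 updates on levels 0, 1, and 2 respectively, and all three level clocks agree afterwards.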
Block Structured Adaptive Mesh and Time Refinement for Hybrid, Hyperbolic + N-body Systems
Miniati, F; Miniati, Francesco; Colella, Phillip
2006-01-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov method for hydrodynamics; a symmetric, time-centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
Development of a Godunov method for Maxwell's equations with Adaptive Mesh Refinement
Barbas, Alfonso; Velarde, Pedro
2015-11-01
In this paper we present a second order 3D method for Maxwell's equations based on a Godunov scheme with Adaptive Mesh Refinement (AMR). To achieve this, we apply a limiter, based on a characteristic-fields decomposition, which better preserves extrema and boundary conditions. Although more complex, the resulting method is competitive with FDTD in computer time consumption and accuracy thanks to simplifications in the boundary conditions. AMR allows us to simulate systems with a sharp step in material properties with negligible rebounds, and also large domains with accuracy at small wavelengths.
Maric, Tomislav; Marschall, Holger; Bothe, Dieter
2013-01-01
A new parallelized unsplit geometrical Volume of Fluid (VoF) algorithm with support for arbitrary unstructured meshes and dynamic local Adaptive Mesh Refinement (AMR), as well as for two- and three-dimensional computations, is developed. The geometrical VoF algorithm supports arbitrary unstructured meshes in order to enable computations involving flow domains of arbitrary geometrical complexity. The implementation of the method is done within the framework of the OpenFOAM library for Computation...
Radiation hydrodynamics including irradiation and adaptive mesh refinement with AZEuS. I. Methods
Ramsey, J P
2014-01-01
Aims. The importance of radiation to the physical structure of protoplanetary disks cannot be overstated. However, protoplanetary disks evolve with time, and so to understand disk evolution, and by association disk structure, one should solve the combined and time-dependent equations of radiation hydrodynamics. Methods. We implement a new implicit radiation solver in the AZEuS adaptive mesh refinement magnetohydrodynamics fluid code. Based on a hybrid approach that combines frequency-dependent ray-tracing for stellar irradiation with non-equilibrium flux limited diffusion, we solve the equations of radiation hydrodynamics while preserving the directionality of the stellar irradiation. The implementation permits simulations in Cartesian, cylindrical, and spherical coordinates, on both uniform and adaptive grids. Results. We present several hydrostatic and hydrodynamic radiation tests which validate our implementation on uniform and adaptive grids as appropriate, including benchmarks specifically designed for ...
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniati, Francesco
2011-01-01
We present the implementation of a three-dimensional, second order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic-Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a ...
A new adaptive mesh refinement data structure with an application to detonation
Ji, Hua; Lien, Fue-Sang; Yee, Eugene
2010-11-01
A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the use of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobarry, R. de Fainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 126 (2000) 330] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures, which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure, which was required in previous approaches for the organization of the AMR data, is no longer needed for this new data structure. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
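The index arithmetic that lets such a cell-indexed structure dispense with an explicit tree can be illustrated directly. The Python sketch below is hypothetical (it assumes a refinement ratio of 2 and a key of the form (i, j, k, level); names are not from the paper): parent, children, and face neighbors all follow from the stored indices, and an ordinary hash table, a dict here, holds the cell data.

```python
def parent(key):
    """Parent cell of (i, j, k, level), assuming refinement ratio 2."""
    i, j, k, lev = key
    return (i // 2, j // 2, k // 2, lev - 1)

def children(key):
    """The eight child cells on the next finer level."""
    i, j, k, lev = key
    return [(2 * i + di, 2 * j + dj, 2 * k + dk, lev + 1)
            for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]

def neighbors(key):
    """The six face neighbors on the same level."""
    i, j, k, lev = key
    return [(i - 1, j, k, lev), (i + 1, j, k, lev),
            (i, j - 1, k, lev), (i, j + 1, k, lev),
            (i, j, k - 1, lev), (i, j, k + 1, lev)]

# A plain hash table keyed by the stored indices replaces the tree.
cells = {(4, 2, 7, 3): "cell data"}
```

Because every query is pure integer arithmetic on the key, connectivity lookups become hash-table probes rather than pointer chases.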
Fromang, S; Teyssier, R
2006-01-01
In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. The algorithm is based on a previous work in which the MUSCL-Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Through a series of test problems, we illustrate the performance of this new code using two different MHD Riemann solvers (Lax-Friedrich and Roe) and the need for Adaptive Mesh Refinement capabilities in some cases. Finally, we show its versatility by applying it to two completely different astrophysical situations well studied in past years: the growth of the magnetorotational instability in the shearing box and the collapse of magnetized cloud cores. We have implemented this new Godunov scheme to solve the ideal MHD equations in the AMR code RAMSES. It results in a powerful tool that can be applied to a grea...
Fromang, S.; Hennebelle, P.; Teyssier, R.
2006-10-01
Aims. In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. Methods. The algorithm is based on a previous work in which the MUSCL-Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Results. Through a series of test problems, we illustrate the performance of this new code using two different MHD Riemann solvers (Lax-Friedrich and Roe) and the need for Adaptive Mesh Refinement capabilities in some cases. Finally, we show its versatility by applying it to two completely different astrophysical situations well studied in past years: the growth of the magnetorotational instability in the shearing box and the collapse of magnetized cloud cores. Conclusions. We have implemented a new Godunov scheme to solve the ideal MHD equations in the AMR code RAMSES. We have shown that it results in a powerful tool that can be applied to a great variety of astrophysical problems, ranging from galaxy formation in the early universe to high resolution studies of molecular cloud collapse in our galaxy.
GAMER: A Graphic Processing Unit Accelerated Adaptive-Mesh-Refinement Code for Astrophysics
Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong
2010-02-01
We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.
GAMER: a GPU-Accelerated Adaptive Mesh Refinement Code for Astrophysics
Schive, Hsi-Yu; Chiueh, Tzihong
2009-01-01
We present the newly developed code, GAMER (GPU-accelerated Adaptive MEsh Refinement code), which has adopted a novel approach to improve the performance of adaptive mesh refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing TVD scheme for the hydrodynamic solver, and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is made to diminish by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is...
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.
Single-pass GPU-raycasting for structured adaptive mesh refinement data
Kaehler, Ralf; Abel, Tom
2013-01-01
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology, such simulations can now capture spatial scales ten orders of magnitude apart and more. The irregular locations and extensions of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven to be advantageous to subdivide the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present the first single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D-textures, which allows complete rays to be sampled adaptively, entirely on the GPU without any CPU interaction. We discuss two different data storage strategies to access the grid data on the GPU and apply them to several datasets to demonstrate the benefits of the proposed method.
Fiacconi, Davide; Ripamonti, Emanuele; Colpi, Monica
2012-01-01
Collisional ring galaxies are the outcome of nearly axisymmetric high-speed encounters between a disc and an intruder galaxy. We investigate the properties of collisional ring galaxies as a function of the impact parameter, the initial relative velocity and the inclination angle. We employ new adaptive mesh refinement simulations to trace the evolution with time of both stars and gas, taking into account star formation and supernova feedback. Axisymmetric encounters produce circular primary rings followed by smaller secondary rings, while off-centre interactions produce asymmetric rings with displaced nuclei. We propose an analytical treatment of the disc warping induced by an inclination angle greater than zero. The star formation history of our models is mainly influenced by the impact parameter: axisymmetric collisions induce impulsive short-lived starburst episodes, whereas off-centre encounters produce long-lived star formation. We compute synthetic colour maps of our models and we find that rings have a...
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time-centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
Martizzi, Davide; Moore, Ben
2014-01-01
A large sample of cosmological hydrodynamical zoom-in simulations with Adaptive Mesh Refinement (AMR) is analysed to study the properties of simulated Brightest Cluster Galaxies (BCGs). Following the formation and evolution of BCGs requires modeling an entire galaxy cluster, because the BCG properties are largely influenced by the state of the gas in the cluster and by interactions and mergers with satellites. BCG evolution is also deeply influenced by the presence of gas heating sources such as Active Galactic Nuclei (AGNs) that prevent catastrophic cooling of large amounts of gas. We show that AGN feedback is one of the most important mechanisms in shaping the properties of BCGs at low redshift by analysing our statistical sample of simulations with and without AGN feedback. When AGN feedback is included BCG masses, sizes, star formation rates and kinematic properties are closer to those of the observed systems. Some small discrepancies are observed only for the most massive BCGs, an effect that might be du...
Schaal, Kevin; Chandrashekar, Praveen; Pakmor, Rüdiger; Klingenberg, Christian; Springel, Volker
2015-01-01
Solving the Euler equations of ideal hydrodynamics as accurately and efficiently as possible is a key requirement in many astrophysical simulations. It is therefore important to continuously advance the numerical methods implemented in current astrophysical codes, especially also in light of evolving computer technology, which favours certain computational approaches over others. Here we introduce the new adaptive mesh refinement (AMR) code TENET, which employs a high-order Discontinuous Galerkin (DG) scheme for hydrodynamics. The Euler equations in this method are solved in a weak formulation with a polynomial basis by means of explicit Runge-Kutta time integration and Gauss-Legendre quadrature. This approach offers significant advantages over commonly employed finite volume (FV) solvers. In particular, the higher order capability renders it computationally more efficient, in the sense that the same precision can be obtained at significantly less computational cost. Also, the DG scheme inherently conserves a...
Lichtenberg, Tim
2015-01-01
The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. Numerical approaches provide a method to tackle the weaknesses of current planet formation models and are an important tool to close gaps in poorly constrained areas. We present a global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code ENZO. With this setup, we explore the impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison, and study the effects of varying the disk radius. Adopting a marginally stable disk profile (Q_init=1), we validate the...
Zanotti, O.; Dumbser, M.; Fambri, F.
2016-05-01
We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime Discontinuous-Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.
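Step (ii), scattering a DG polynomial onto 2N+1 sub-cell averages, can be made concrete in one dimension. The Python below is an illustrative sketch only, assuming a monomial basis on [0, 1] rather than the scheme's actual nodal basis: each sub-cell average is computed exactly from the antiderivative of the polynomial.

```python
def subcell_averages(coeffs, nsub=None):
    """Scatter u(x) = sum_k c_k x^k on [0, 1] onto nsub equal sub-cell
    averages (default 2N+1 for a degree-N polynomial). Hypothetical
    sketch: the average over [a, b] is (P(b) - P(a)) / (b - a), with
    P the exact antiderivative of u."""
    N = len(coeffs) - 1
    if nsub is None:
        nsub = 2 * N + 1
    P = lambda x: sum(c * x**(k + 1) / (k + 1) for k, c in enumerate(coeffs))
    edges = [s / nsub for s in range(nsub + 1)]
    return [(P(edges[s + 1]) - P(edges[s])) / (edges[s + 1] - edges[s])
            for s in range(nsub)]
```

For u(x) = x (N = 1, so 3 sub-cells) the averages are the sub-cell midpoints 1/6, 1/2, 5/6; a finite volume update on these averages can then replace the troubled DG polynomial.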
Hybrid Characteristics: 3D radiative transfer for parallel adaptive mesh refinement hydrodynamics
Rijkhorst, E J; Dubey, A; Mellema, G R; Rijkhorst, Erik-Jan; Plewa, Tomasz; Dubey, Anshu; Mellema, Garrelt
2005-01-01
We have developed a three-dimensional radiative transfer method designed specifically for use with parallel adaptive mesh refinement hydrodynamics codes. This new algorithm, which we call hybrid characteristics, introduces a novel form of ray tracing that can be classified neither as long nor as short characteristics, but which applies the underlying principles of both, i.e. efficient execution through interpolation and parallelizability. Primary applications of the hybrid characteristics method are radiation hydrodynamics problems that take into account the effects of photoionization and heating due to point sources of radiation. The method is implemented in the hydrodynamics package FLASH. The ionization, heating, and cooling processes are modelled using the DORIC ionization package. Upon comparison with the long characteristics method, we find that our method calculates the column density with a similarly high accuracy and produces sharp and well defined shadows. We show the quality of the new algorithm ...
The Numerical Simulation of Ship Waves Using Cartesian Grid Methods with Adaptive Mesh Refinement
Dommermuth, Douglas G; Beck, Robert F; O'Shea, Thomas T; Wyatt, Donald C; Olson, Kevin; MacNeice, Peter
2014-01-01
Cartesian-grid methods with Adaptive Mesh Refinement (AMR) are ideally suited for simulating the breaking of waves, the formation of spray, and the entrainment of air around ships. As a result of the Cartesian-grid formulation, minimal input is required to describe the ship's geometry. A surface panelization of the ship hull is used as input to automatically generate a three-dimensional model. No three-dimensional gridding is required. The AMR portion of the numerical algorithm automatically clusters grid points near the ship in regions where wave breaking, spray formation, and air entrainment occur. Away from the ship, where the flow is less turbulent, the mesh is coarser. The numerical computations are implemented using parallel algorithms. Together, the ease of input and usage, the ability to resolve complex free-surface phenomena, and the speed of the numerical algorithms provide a robust capability for simulating the free-surface disturbances near a ship. Here, numerical predictions, with and without AMR,...
Henshaw, W; Schwendeman, D
2007-11-15
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
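The load-balancing step described above, a modified bin-packing algorithm that spreads the computational work of the grids evenly over the processors, can be sketched with a standard greedy heuristic. The Python below is a hypothetical illustration (the function name and details are assumptions, not from the paper): grids are assigned, heaviest first, to the currently least-loaded processor, tracked with a min-heap.

```python
import heapq

def partition_grids(work, nproc):
    """Greedy longest-processing-time bin packing (illustrative sketch
    of the kind of modified bin-packing used for AMR load balancing).

    work  : per-grid work estimates (e.g. cell counts)
    nproc : number of processors
    returns a dict mapping grid id -> processor rank
    """
    heap = [(0.0, p) for p in range(nproc)]  # (current load, rank)
    heapq.heapify(heap)
    owner = {}
    for gid, w in sorted(enumerate(work), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)        # least-loaded processor
        owner[gid] = p
        heapq.heappush(heap, (load + w, p))
    return owner
```

For the work list [5, 3, 3, 2, 2, 1] on two processors, this yields two bins of total load 8 each, i.e. a perfectly even split.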
Northrup, Scott A.
A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete LU type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural
Development of a scalable gas-dynamics solver with adaptive mesh refinement
Korkut, Burak
There are various computational physics areas in which Direct Simulation Monte Carlo (DSMC) and Particle in Cell (PIC) methods are being employed. The accuracy of results from such simulations depends on the fidelity of the physical models being used. The computationally demanding nature of these problems makes them ideal candidates to make use of modern supercomputers. The software developed to run such simulations also needs special attention so that maintainability and extensibility keep pace with recent numerical methods and programming paradigms. Suited for gas-dynamics problems, a software package called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement) has recently been developed and written in C++ and MPI. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases for which there is no axial or radial symmetry present that could only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters where the AMR capability captured this in an automated way. The back flow for ions was found to occur when CEX and momentum-exchange (MEX) collisions are present and is strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained and it was found that the inclusion of the electric field modeling is the most important factor in determining its shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case and it was found that the ion velocity in the back flow region appears to be
EMMA: an adaptive mesh refinement cosmological simulation code with radiative transfer
Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre
2015-11-01
EMMA is a cosmological simulation code aimed at investigating the reionization epoch. It handles simultaneously collisionless and gas dynamics, as well as radiative transfer physics using a moment-based description with the M1 approximation. Field quantities are stored and computed on an adaptive three-dimensional mesh and the spatial resolution can be dynamically modified based on physically motivated criteria. Physical processes can be coupled at all spatial and temporal scales. We also introduce a new and optional approximation to handle radiation: the light is transported at the resolution of the non-refined grid and only once the dynamics has been fully updated, whereas thermo-chemical processes are still tracked on the refined elements. Such an approximation reduces the overheads induced by the treatment of radiation physics. A suite of standard tests is presented and passed by EMMA, providing a validation for its future use in studies of the reionization epoch. The code is parallel and is able to use graphics processing units (GPUs) to accelerate hydrodynamics and radiative transfer calculations. Depending on the optimizations and the compilers used to generate the CPU reference, global GPU acceleration factors between ×3.9 and ×16.9 can be obtained. Vectorization and transfer operations currently prevent better GPU performance and we expect that future optimizations and hardware evolution will lead to greater accelerations.
Hatori, Tomoharu; Ito, Atsushi M.; Nunami, Masanori; Usui, Hideyuki; Miura, Hideaki
2016-08-01
We propose a numerical method to determine the artificial viscosity in magnetohydrodynamics (MHD) simulations with the adaptive mesh refinement (AMR) method, where the artificial viscosity is adaptively changed according to the resolution level of the AMR hierarchy. Although the suitable value of the artificial viscosity depends on the governing equations and the model of the target problem, it can be determined by von Neumann stability analysis. By means of the new method, the "level-by-level artificial viscosity method," MHD simulations of Rayleigh-Taylor instability (RTI) are carried out with the AMR method. The validity of the level-by-level artificial viscosity method is confirmed by comparing the linear growth rates of RTI between the AMR simulations and simple simulations with a uniform grid and uniform artificial viscosity whose resolution matches that of the highest level of the AMR simulation. Moreover, in the nonlinear phase of RTI, the secondary instability is clearly observed, and the hierarchical data structure of the AMR calculation is visualized: high-resolution regions float up like terraced fields. In applications of the method to general fluid simulations, the growth of small structures can be sufficiently reproduced, while the divergence of numerical solutions can be suppressed.
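The level-by-level idea can be illustrated with a toy calculation. Assuming, for the sketch only, an explicit-diffusion von Neumann limit of the form ν Δt/Δx² ≤ 1/2 and a refinement ratio of 2 in both space and time (neither taken from the paper), holding the stability number fixed across levels determines the viscosity on each refinement level:

```python
# Illustrative sketch (assumptions, not the paper's MHD analysis):
# artificial viscosity per AMR level, chosen so the von Neumann
# stability number sigma = nu*dt/dx^2 is the same on every level.

def level_viscosity(nu0, dx0, dt0, level, ratio=2):
    dx = dx0 / ratio**level       # grid spacing on this refinement level
    dt = dt0 / ratio**level       # timestep on this refinement level
    sigma = nu0 * dt0 / dx0**2    # base-level stability number
    return sigma * dx**2 / dt     # viscosity that keeps sigma constant
```

With Δt scaling like Δx, the resulting viscosity shrinks linearly with the grid spacing, so finer levels are less diffusive without violating the stability limit.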
Teyssier, Romain; Fromang, Sébastien; Dormy, Emmanuel
2006-10-01
We propose to extend the well-known MUSCL-Hancock scheme for Euler equations to the induction equation modeling the magnetic field evolution in kinematic dynamo problems. The scheme is based on an integral form of the underlying conservation law which, in our formulation, results in a “finite-surface” scheme for the induction equation. This naturally leads to the well-known “constrained transport” method, with an additional continuity requirement on the magnetic field representation. The second ingredient in the MUSCL scheme is the predictor step that ensures second order accuracy both in space and time. We explore specific constraints that the mathematical properties of the induction equations place on this predictor step, showing that three possible variants can be considered. We show that the most aggressive formulations (referred to as C-MUSCL and U-MUSCL) reach the same level of accuracy as the other one (referred to as Runge Kutta), at a lower computational cost. More interestingly, these two schemes are compatible with the adaptive mesh refinement (AMR) framework, and the method has been implemented in the AMR code RAMSES, offering a novel and efficient second order scheme for the induction equation. We have tested it by solving two kinematic dynamo problems in the low diffusion limit. The construction of this scheme for the induction equation constitutes a step towards solving the full MHD set of equations using an extension of our current methodology.
Multigroup radiation hydrodynamics with flux-limited diffusion and adaptive mesh refinement
González, Matthias; Commerçon, Benoît; Masson, Jacques
2015-01-01
Radiative transfer plays a key role in the star formation process. Due to a high computational cost, radiation-hydrodynamics simulations performed up to now have mainly been carried out in the grey approximation. In recent years, multi-frequency radiation-hydrodynamics models have started to emerge, in an attempt to better account for the large variations of opacities as a function of frequency. We wish to develop an efficient multigroup algorithm for the adaptive mesh refinement code RAMSES which is suited to heavy proto-stellar collapse calculations. Due to prohibitive timestep constraints of an explicit radiative transfer method, we constructed a time-implicit solver based on a stabilised bi-conjugate gradient algorithm, and implemented it in RAMSES under the flux-limited diffusion approximation. We present a series of tests which demonstrate the high performance of our scheme in dealing with frequency-dependent radiation-hydrodynamic flows. We also present a preliminary simulation of a three-dimensional p...
De Colle, Fabio; Lopez-Camara, Diego; Ramirez-Ruiz, Enrico
2011-01-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in Gamma-Ray Burst sources. The SRHD equations are solved using finite volume conservative solvers. The correct implementation of the algorithms is verified by one-dimensional (1D) shock tube and multidimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with $\\rho \\propto r^{-k}$, bridging between the relativistic and Newtonian phases, as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to non-relativistic speeds in one-dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, toge...
Schaal, Kevin; Bauer, Andreas; Chandrashekar, Praveen; Pakmor, Rüdiger; Klingenberg, Christian; Springel, Volker
2015-11-01
Solving the Euler equations of ideal hydrodynamics as accurately and efficiently as possible is a key requirement in many astrophysical simulations. It is therefore important to continuously advance the numerical methods implemented in current astrophysical codes, especially in light of evolving computer technology, which favours certain computational approaches over others. Here we introduce the new adaptive mesh refinement (AMR) code TENET, which employs a high-order discontinuous Galerkin (DG) scheme for hydrodynamics. The Euler equations in this method are solved in a weak formulation with a polynomial basis by means of explicit Runge-Kutta time integration and Gauss-Legendre quadrature. This approach offers significant advantages over commonly employed second-order finite-volume (FV) solvers. In particular, the higher order capability renders it computationally more efficient, in the sense that the same precision can be obtained at significantly less computational cost. Also, the DG scheme inherently conserves angular momentum in regions where no limiting takes place, and it typically produces much smaller numerical diffusion and advection errors than an FV approach. A further advantage lies in a more natural handling of AMR refinement boundaries, where a fall-back to first order can be avoided. Finally, DG requires no wide stencils at high order, and offers an improved data locality and a focus on local computations, which is favourable for current and upcoming highly parallel supercomputers. We describe the formulation and implementation details of our new code, and demonstrate its performance and accuracy with a set of two- and three-dimensional test problems. The results confirm that DG schemes have a high potential for astrophysical applications.
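The Gauss-Legendre quadrature underpinning such DG schemes is easy to demonstrate in isolation: an n-point rule integrates polynomials up to degree 2n-1 exactly, which is the property that lets the weak-form integrals of a polynomial basis be evaluated without quadrature error. A minimal NumPy sketch (not TENET's implementation):

```python
import numpy as np

# n-point Gauss-Legendre rule on [-1, 1]: exact for polynomials of
# degree <= 2n - 1, the property DG weak-form integrals rely on.
nodes, weights = np.polynomial.legendre.leggauss(3)

# 3 points suffice to integrate x^4 exactly: the true value of
# the integral of x^4 over [-1, 1] is 2/5.
integral = float(np.sum(weights * nodes**4))
```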
Adaptive mesh refinement and automatic remeshing in crystal plasticity finite element simulations
In finite element simulations dedicated to the modelling of microstructure evolution, the mesh has to be fine enough to: (i) accurately describe the geometry of the constituents; (ii) capture local strain gradients stemming from the heterogeneity in material properties. In this paper, 3D polycrystalline aggregates are discretized into unstructured meshes and a level set framework is used to represent the grain boundaries. The crystal plasticity finite element method is used to simulate the plastic deformation of these aggregates. A mesh sensitivity analysis based on the deformation energy distribution shows that the predictions are, on average, more sensitive near grain boundaries. An anisotropic mesh refinement strategy based on the level set description is introduced and it is shown that it offers a good compromise between accuracy requirements on the one hand and computation time on the other. As the aggregates deform, mesh distortion inevitably occurs and ultimately causes the breakdown of the simulations. An automatic remeshing tool is used to periodically reconstruct the mesh and appropriate transfer of state variables is performed. It is shown that the diffusion related to data transfer is not significant. Finally, remeshing is performed repeatedly in a highly resolved 500-grain polycrystal subjected to about 90% thickness reduction in rolling. The predicted texture is compared with the experimental data and with the predictions of a standard Taylor model.
Maric, Tomislav; Bothe, Dieter
2013-01-01
A new parallelized unsplit geometrical Volume of Fluid (VoF) algorithm, with support for arbitrary unstructured meshes and dynamic local Adaptive Mesh Refinement (AMR) as well as for two- and three-dimensional computations, is developed. The geometrical VoF algorithm supports arbitrary unstructured meshes in order to enable computations involving flow domains of arbitrary geometrical complexity. The implementation of the method is done within the framework of the OpenFOAM library for Computational Continuum Mechanics (CCM) using the C++ programming language with modern policy based design for high program code modularity. The development of the geometrical VoF algorithm significantly extends the method base of the OpenFOAM library by geometrical volumetric flux computation for two-phase flow simulations. For the volume fraction advection, a novel unsplit geometrical algorithm is developed, which inherently sustains volume conservation utilizing unique Lagrangian discrete trajectories located in the mesh points. ...
Radiation diffusion for multi-fluid Eulerian hydrodynamics with adaptive mesh refinement
Block-structured meshes provide the ability to concentrate grid points and computational effort in interesting regions of a flow field, without sacrificing the efficiency and low memory requirements of a regular grid. We describe an algorithm for simulating radiation diffusion on such a mesh, coupled to multi-fluid gasdynamics. Conservation laws are enforced by using locally conservative difference schemes along with explicit synchronization operations between different levels of refinement. In unsteady calculations each refinement level is advanced at its own optimal timestep. Particular attention is given to the appropriate coupling between the fluid energy and the radiation field, the behavior of the discretization at sharp interfaces, and the form of synchronization between levels required for energy conservation in the diffusion process. Two- and three-dimensional examples are presented, including parallel calculations performed on an IBM SP-2.
Ravindran, Prashaanth
The unstable nature of detonation waves is a result of the critical relationship between the hydrodynamic shock and the chemical reactions sustaining the shock. A perturbative analysis of the critical point is quite challenging due to the multiple spatio-temporal scales involved along with the non-linear nature of the shock-reaction mechanism. The author's research attempts to provide detailed resolution of the instabilities at the shock front. Another key aspect of the present research is to develop an understanding of the causality between the non-linear dynamics of the front and the eventual breakdown of the sub-structures. An accurate numerical simulation of detonation waves requires a very efficient solution of the Euler equations in conservation form with detailed, non-equilibrium chemistry. The difference in the flow and reaction length scales results in very stiff source terms, requiring the problem to be solved with adaptive mesh refinement. For this purpose, Berger-Colella's block-structured adaptive mesh refinement (AMR) strategy has been developed and applied to time-explicit finite volume methods. The block-structured technique uses a hierarchy of parent-child sub-grids, integrated recursively over time. One novel approach to partition the problem within a large supercomputer was the use of modified Peano-Hilbert space filling curves. The AMR framework was merged with CLAWPACK, a package providing finite volume numerical methods tailored for wave-propagation problems. The stiffness problem is bypassed by using a first-order Godunov or a second-order Strang splitting technique, where the flow variables and source terms are integrated independently. A linearly explicit fourth-order Runge-Kutta integrator is used for the flow, and an ODE solver was used to overcome the numerical stiffness. Second-order spatial resolution is obtained by using a second-order Roe-HLL scheme with the inclusion of numerical viscosity to stabilize the solution near the discontinuity.
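The splitting strategy described above can be sketched generically: in second-order Strang splitting, the stiff source operator S and the flow operator F are composed as S(Δt/2) ∘ F(Δt) ∘ S(Δt/2) within each timestep. The toy exact sub-solvers below stand in for the Roe-HLL flux update and the chemistry ODE solver; they are illustrative assumptions, not the thesis code:

```python
import math

# Sketch of second-order Strang splitting: the flow update and the
# stiff source update are integrated independently within a timestep.

def strang_step(u, dt, flow, source):
    u = source(u, 0.5 * dt)   # stiff reactions, half step (e.g. ODE solver)
    u = flow(u, dt)           # hydrodynamic update, full step
    u = source(u, 0.5 * dt)   # reactions, second half step
    return u

# Toy exact sub-solvers for u' = a*u (flow) and u' = b*u (source).
# Because these two operators commute, the split step is exact here;
# for non-commuting operators the splitting error is O(dt^2) per step.
a, b = -1.0, -10.0
flow = lambda u, dt: u * math.exp(a * dt)
source = lambda u, dt: u * math.exp(b * dt)
u1 = strang_step(1.0, 0.01, flow, source)   # exact answer: exp((a+b)*0.01)
```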
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.
We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.
2015-01-01
A both space and time adaptive algorithm is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reactions part in the partial differential equations are integrated separately with implicit scheme...
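In such an operator splitting, the linear diffusion part is typically advanced implicitly so the timestep is not limited by the finest mesh spacing. A minimal backward-Euler sketch on a 1D grid with zero-Dirichlet boundaries (an illustrative stand-in; the paper's actual discretization of the Purkinje system is not specified here):

```python
import numpy as np

# One backward-Euler step of u_t = D*u_xx on a 1D grid with
# zero-Dirichlet boundaries: solve (I - dt*D*L) u_new = u_old,
# where L is the standard second-difference operator.

def implicit_diffusion_step(u, dt, dx, D=1.0):
    n = len(u)
    r = D * dt / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * r      # diagonal of I - dt*D*L
        if i > 0:
            A[i, i - 1] = -r         # sub-diagonal
        if i < n - 1:
            A[i, i + 1] = -r         # super-diagonal
    return np.linalg.solve(A, u)
```

Because the scheme is implicit, it remains stable for any timestep (a dense solve is used here for brevity; a tridiagonal solver would be the idiomatic choice at scale).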
Adaptive Mesh Refinement Cosmological Simulations of Cosmic Rays in Galaxy Clusters
Skillman, Samuel William
2013-12-01
Galaxy clusters are unique astrophysical laboratories that contain many thermal and non-thermal phenomena. In particular, they are hosts to cosmic shocks, which propagate through the intracluster medium as a by-product of structure formation. It is believed that at these shock fronts, magnetic field inhomogeneities in a compressing flow may lead to the acceleration of cosmic ray electrons and ions. These relativistic particles decay and radiate through a variety of mechanisms, and have observational signatures in radio, hard X-ray, and Gamma-ray wavelengths. We begin this dissertation by developing a method to find shocks in cosmological adaptive mesh refinement simulations of structure formation. After describing the evolution of shock properties through cosmic time, we make estimates for the amount of kinetic energy processed and the total number of cosmic ray protons that could be accelerated at these shocks. We then use this method of shock finding and a model for the acceleration of and radio synchrotron emission from cosmic ray electrons to estimate the radio emission properties in large scale structures. By examining the time-evolution of the radio emission with respect to the X-ray emission during a galaxy cluster merger, we find that the relative timing of the enhancements in each are important consequences of the shock dynamics. By calculating the radio emission expected from a given mass galaxy cluster, we make estimates for future large-area radio surveys. Next, we use a state-of-the-art magnetohydrodynamic simulation to follow the electron acceleration in a massive merging galaxy cluster. We use the magnetic field information to calculate not only the total radio emission, but also create radio polarization maps that are compared to recent observations. We find that we can naturally reproduce Mpc-scale radio emission that resemble many of the known double radio relic systems. Finally, motivated by our previous studies, we develop and introduce a
Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang
2015-10-01
For numerical simulation of detonation, computational cost using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide us further insight into the high performance of the parallel AMR&WENO method.
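The Hilbert space-filling curve used for workload balancing maps multi-dimensional cell indices to a one-dimensional key; sorting cells by that key and cutting the sorted list into equal contiguous chunks gives each MPI rank a spatially compact set of cells. A 2D sketch of the standard index computation (the paper's 3D variant is not reproduced here):

```python
# Sketch: 2D Hilbert index of cell (x, y) on an n-by-n grid, n a power
# of two. Sorting cells by this key and splitting the sorted list into
# equal chunks yields spatially compact partitions for load balancing.

def hilbert_index(n, x, y):
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:              # rotate/reflect the quadrant so the
            if rx == 1:          # sub-curves connect end to end
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Consecutive keys belong to neighbouring cells, so contiguous chunks of the sorted order stay geometrically local, which keeps inter-rank communication surfaces small.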
Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.
2007-01-01
We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.
Sui, Yi; Spelt, Peter D. M.; Ding, Hang
2010-11-01
Diffuse Interface (DI) methods are employed widely for the numerical simulation of two-phase flows, even with moving contact lines. In a DI method, the interface thickness should be as thin as possible to simulate spreading phenomena under realistic flow conditions, so a fine grid is required, beyond the reach of current methods that employ a uniform grid. Here we have integrated a DI method, originally based on a uniform mesh, with a block-based adaptive mesh refinement method, so that only the regions near the interface are resolved by a fine mesh. The performance of the present method is tested by simulations including drop deformation in shear flow, Rayleigh-Taylor instability, and drop spreading on a flat surface. The results show that the present method can give accurate results with much smaller computational cost, compared to the original DI method based on a uniform mesh. Based on the present method, simulation of drop spreading is carried out with a Cahn number of 0.001 and the contact line region is well resolved. The flow field near the contact line, the contact line speed as well as the apparent contact angle are investigated in detail and compared with previous analytical work.
The interaction of a supernova shock with an interstellar cloud can be idealized to the problem of the interaction of a strong planar shock with a dense spherical inhomogeneity surrounded by a less dense fluid: the intercloud medium (ICM). This deceptively simple problem actually represents an extremely complex set of nonlinear hydrodynamic flows encompassing a rich set of shock/shock interaction phenomena. The authors have, for the first time, applied local adaptive mesh refinement (AMR) techniques with second-order Godunov methods, as developed recently by Berger and Colella, to astrophysical gas dynamic scenarios in order to address this complex nonlinear problem. With AMR and a second-order Godunov method, they are able to evolve the hydrodynamics of highly complex, multiply shock-distorted structures to a high degree of accuracy with a total of not more than 80,000 grid cells. A comparable calculation with a fixed grid would need >1,500,000 grid cells to achieve a similar accuracy. Clearly, adaptive mesh refinement techniques hold a major advantage in calculating complex compressible gas dynamic flows one to two orders of magnitude more rapidly than standard techniques.
Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.
2016-08-01
This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
Generic Mesh Refinement On GPU
Boubekeur, Tamy; Schlick, Christophe
2005-01-01
Many recent publications have shown that a large variety of computations involved in computer graphics can be moved from the CPU to the GPU by a clever use of vertex or fragment shaders. Nonetheless, there is still one kind of algorithm that is hard to translate from CPU to GPU: mesh refinement techniques. The main reason for this is that vertex shaders available on current graphics hardware do not allow the generation of additional vertices on a mesh stored in grap...
Cunningham, Andrew J; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W
2007-01-01
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh refinement code for ideal magnetohydrodynamics. The code provides several high resolution, shock capturing schemes which are constructed to maintain conserved quantities of the flow in a finite volume sense. Divergence free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of such topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across co-located grids of different resolution. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
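The constrained-transport property referred to above can be verified in miniature: if face-centered fields are derived from a node-centered vector potential by a discrete curl, the cell-centered discrete divergence vanishes to machine precision, which is the invariant constrained transport preserves under evolution. A 2D sketch (the staggering convention is an illustrative assumption; AstroBEAR's actual operators, including the AMR prolongation and restriction, are more involved):

```python
import numpy as np

# Sketch: build face-centered (bx, by) from a node-centered vector
# potential Az via a discrete curl; the cell-centered discrete
# divergence then vanishes identically.

def face_fields_from_potential(Az, dx, dy):
    bx = (Az[:, 1:] - Az[:, :-1]) / dy    # bx on x-faces, shape (nx+1, ny)
    by = -(Az[1:, :] - Az[:-1, :]) / dx   # by on y-faces, shape (nx, ny+1)
    return bx, by

def discrete_div(bx, by, dx, dy):
    # cell-centered divergence from the face fluxes, shape (nx, ny)
    return ((bx[1:, :] - bx[:-1, :]) / dx
            + (by[:, 1:] - by[:, :-1]) / dy)

rng = np.random.default_rng(0)
Az = rng.standard_normal((9, 9))          # nodes of an 8x8 cell grid
bx, by = face_fields_from_potential(Az, dx=0.1, dy=0.1)
div = discrete_div(bx, by, dx=0.1, dy=0.1)   # zero up to round-off
```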
Lyman alpha Radiative Transfer in Cosmological Simulations using Adaptive Mesh Refinement
Laursen, Peter; Sommer-Larsen, Jesper
2008-01-01
A numerical code for solving various Lyman alpha (Lya) radiative transfer (RT) problems is presented. The code is suitable for an arbitrary, three-dimensional distribution of Lya emissivity, gas temperature, density, and velocity field. Capable of handling Lya RT in an adaptively refined grid-based structure, it enables detailed investigation of the effects of clumpiness of the interstellar (or intergalactic) medium. The code is tested against various geometrically and physically idealized configurations for which analytical solutions exist, and subsequently applied to three "Lyman-break galaxies", extracted from high-resolution cosmological simulations at redshift z = 3.6. Proper treatment of the Lya scattering reveals a diversity of surface brightness (SB) and line profiles. Specifically, for a given galaxy the maximum observed SB can vary by an order of magnitude, and the total flux by a factor of 3 - 6, depending on the viewing angle. This may provide an explanation for differences in observed properties ...
Amaziane Brahim
2014-07-01
In this paper, we consider adaptive numerical simulation of miscible displacement problems in porous media, which are modeled by single phase flow equations. A vertex-centred finite volume method is employed to discretize the coupled system: the Darcy flow equation and the diffusion-convection concentration equation. The convection term is approximated with a Godunov scheme over the dual finite volume mesh, whereas the diffusion-dispersion term is discretized by piecewise linear conforming finite elements. We introduce two kinds of indicators, both of them of residual type. The first is related to the time discretization and is local in time: thus, at each time, it provides appropriate information for the choice of the next time step. The second is related to the space discretization and is local with respect to both the time and space variables; the idea is that at each time it is an efficient tool for mesh adaptivity. An error estimation procedure evaluates where additional refinement is needed and grid generation procedures dynamically create or remove fine-grid patches as resolution requirements change. The method was implemented in the software MELODIE, developed by the French Institute for Radiological Protection and Nuclear Safety (IRSN, Institut de Radioprotection et de Sûreté Nucléaire). The algorithm is then used to simulate the evolution of radionuclide migration from the waste packages through a heterogeneous disposal, demonstrating its capability to capture complex behavior of the resulting flow.
Lopez-Camara, D.; Lazzati, Davide [Department of Physics, NC State University, 2401 Stinson Drive, Raleigh, NC 27695-8202 (United States); Morsony, Brian J. [Department of Astronomy, University of Wisconsin-Madison, 2535 Sterling Hall, 475 N. Charter Street, Madison, WI 53706-1582 (United States); Begelman, Mitchell C., E-mail: dlopezc@ncsu.edu [JILA, University of Colorado, 440 UCB, Boulder, CO 80309-0440 (United States)
2013-04-10
We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.
Lyα RADIATIVE TRANSFER IN COSMOLOGICAL SIMULATIONS USING ADAPTIVE MESH REFINEMENT
A numerical code for solving various Lyα radiative transfer (RT) problems is presented. The code is suitable for an arbitrary, three-dimensional distribution of Lyα emissivity, gas temperature, density, and velocity field. Capable of handling Lyα RT in an adaptively refined grid-based structure, it enables detailed investigation of the effects of clumpiness of the interstellar (or intergalactic) medium. The code is tested against various geometrically and physically idealized configurations for which analytical solutions exist, and subsequently applied to three different simulated high-resolution 'Lyman-break galaxies', extracted from high-resolution cosmological simulations at redshift z = 3.6. Proper treatment of the Lyα scattering reveals a diversity of surface brightness (SB) and line profiles. Specifically, for a given galaxy the maximum observed SB can vary by an order of magnitude, and the total flux by a factor of 3-6, depending on the viewing angle. This may provide an explanation for differences in observed properties of high-redshift galaxies, and in particular a possible physical link between Lyman-break galaxies and regular Lyα emitters.
Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow
Henshaw, W D; Schwendeman, D W
2005-08-30
We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.
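A second-order predictor-corrector update of the kind used here for the Newton-Euler equations can be illustrated with a generic Heun-type step. This is a minimal sketch, not the authors' implementation; the constant-force translation example is invented for illustration.

```python
import numpy as np

def heun_step(f, t, y, dt):
    """One second-order predictor-corrector (Heun) step for dy/dt = f(t, y)."""
    y_pred = y + dt * f(t, y)                             # predictor: explicit Euler
    return y + 0.5 * dt * (f(t, y) + f(t + dt, y_pred))   # corrector: trapezoidal rule

# Illustrative example: translation of a rigid body under a constant force,
# m * dv/dt = F, integrated to t = 1 with dt = 0.1.
m, F = 2.0, np.array([1.0, 0.0])
f = lambda t, v: F / m
v = np.array([0.0, 0.0])
for _ in range(10):
    v = heun_step(f, 0.0, v, 0.1)
# For this linear problem the scheme is exact: v = (F/m) * t = [0.5, 0.0]
```

In the paper's setting, `f` would return the accelerations computed from the surface stress exerted by the fluid at the current body position.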
Mesh Adaptation and Shape Optimization on Unstructured Meshes Project
National Aeronautics and Space Administration — In this SBIR CRM proposes to implement the entropy adjoint method for solution adaptive mesh refinement into the Loci/CHEM unstructured flow solver. The scheme will...
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March, 2011 to meet the urgent need of simulating tsunami waves at near-shore from Tohoku 2011 and achieved over 75% of the potential speed-up on an eight core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantage of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the Tohoku 2011 Tsunami waves inundating the Sendai airport and Fukushima Nuclear Power Plants, over which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions about the wave heights and travel time of the tsunami waves. © 2011 IEEE.
Zanotti, Olindo; Dumbser, Michael
2015-01-01
We present a new numerical tool for solving the special relativistic ideal MHD equations that is based on the combination of the following three key features: (i) a one-step ADER discontinuous Galerkin (DG) scheme that allows for an arbitrary order of accuracy in both space and time, (ii) an a posteriori subcell finite volume limiter that is activated to avoid spurious oscillations at discontinuities without destroying the natural subcell resolution capabilities of the DG finite element framework and finally (iii) a space-time adaptive mesh refinement (AMR) framework with time-accurate local time-stepping. The divergence-free character of the magnetic field is instead taken into account through the so-called 'divergence-cleaning' approach. The convergence of the new scheme is verified up to 5th order in space and time and the results for a sample of significant numerical tests including shock tube problems, the RMHD rotor problem and the Orszag-Tang vortex system are shown. We also consider a simple case of t...
woptic: Optical conductivity with Wannier functions and adaptive k-mesh refinement
Assmann, E.; Wissgott, P.; Kuneš, J.; Toschi, A.; Blaha, P.; Held, K.
2016-05-01
We present an algorithm for the adaptive tetrahedral integration over the Brillouin zone of crystalline materials, and apply it to compute the optical conductivity, dc conductivity, and thermopower. For these quantities, whose contributions are often localized in small portions of the Brillouin zone, adaptive integration is especially relevant. Our implementation, the woptic package, is tied into the WIEN2WANNIER framework and allows including a local many-body self energy, e.g. from dynamical mean-field theory (DMFT). Wannier functions and dipole matrix elements are computed with the DFT package WIEN2k and Wannier90. For illustration, we show DFT results for fcc-Al and DMFT results for the correlated metal SrVO3.
woptic: optical conductivity with Wannier functions and adaptive k-mesh refinement
Assmann, E; Kuneš, J; Toschi, A; Blaha, P; Held, K
2015-01-01
We present an algorithm for the adaptive tetrahedral integration over the Brillouin zone of crystalline materials, and apply it to compute the optical conductivity, dc conductivity, and thermopower. For these quantities, whose contributions are often localized in small portions of the Brillouin zone, adaptive integration is especially relevant. Our implementation, the woptic package, is tied into the wien2wannier framework and allows including a many-body self energy, e.g. from dynamical mean-field theory (DMFT). Wannier functions and dipole matrix elements are computed with the DFT package Wien2k and Wannier90. For illustration, we show DFT results for fcc-Al and DMFT results for the correlated metal SrVO$_3$.
Autotuning of Adaptive Mesh Refinement PDE Solvers on Shared Memory Architectures
Nogina, Svetlana
2012-01-01
Many multithreaded, grid-based, dynamically adaptive solvers for partial differential equations permanently have to traverse subgrids (patches) of different and changing sizes. The parallel efficiency of this traversal depends on the interplay of the patch size, the architecture used, the operations triggered throughout the traversal, and the grain size, i.e. the size of the subtasks the patch is broken into. We propose an oracle mechanism delivering grain sizes on-the-fly. It takes historical runtime measurements for different patch and grain sizes as well as the traversal's operations into account, and it yields reasonable speedups. Neither magic configuration settings nor an expensive pre-tuning phase are necessary. It is an autotuning approach. © 2012 Springer-Verlag.
Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.
2006-09-01
This paper describes a hybrid finite-difference method for the large-eddy simulation of compressible flows with low-numerical dissipation and structured adaptive mesh refinement (SAMR). A conservative flux-based approach is described with an explicit centered scheme used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. Three-dimensional numerical simulations of a Richtmyer-Meshkov instability are presented.
An Adaptive Mesh Algorithm: Mesh Structure and Generation
Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. The additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch, and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time.
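The flag-and-refine cycle described here (raise resolution where the solution has sharp local structure, re-adjusting cyclically) can be sketched in 1D with a simple jump-based indicator in the locally refined style. The test function and threshold are illustrative assumptions, not from the report.

```python
import numpy as np

def refine_flagged(x, u_fn, tol):
    """One locally refined AMR cycle in 1D: flag cells whose solution jump
    exceeds tol, then bisect the flagged cells."""
    u = u_fn(x)
    jump = np.abs(np.diff(u))                  # crude error indicator per cell
    flagged = np.where(jump > tol)[0]          # cells needing refinement
    midpoints = 0.5 * (x[flagged] + x[flagged + 1])
    return np.sort(np.concatenate([x, midpoints]))

# Steep tanh front at x = 0.5: points should cluster near the front.
x = np.linspace(0.0, 1.0, 11)
u = lambda x: np.tanh(40.0 * (x - 0.5))
for _ in range(3):                             # cyclically adjust the mesh
    x = refine_flagged(x, u, tol=0.2)
```

After three cycles the mesh is several levels deep around x = 0.5 while the smooth regions keep the original coarse spacing, which is the intended behavior of all three AMR variants mentioned above.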
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Cartesian anisotropic mesh adaptation for compressible flow
Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This paper discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for compressible flow. This technique, developed for laminar flow by Ham, Lien and Strong, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this paper the method will be applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. The convection scheme used is the Advection Upstream Splitting Method (AUSM+), and the refinement/coarsening criteria are based on work done by Ham et al. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code is presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant. (author)
Schartmann, M; Burkert, A; Gillessen, S; Genzel, R; Pfuhl, O; Eisenhauer, F; Plewa, P M; Ott, T; George, E M; Habibi, M
2015-01-01
The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-$\\gamma$ data, (3) a detailed comparison to the observed high-quality position-velocity diagrams and the evolution of the total Brackett-$\\gamma$ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scen...
Electrostatic PIC with adaptive Cartesian mesh
Kolobov, Vladimir I
2016-01-01
We describe an initial implementation of an electrostatic Particle-in-Cell (ES-PIC) module with adaptive Cartesian mesh in our Unified Flow Solver framework. Challenges of PIC method with cell-based adaptive mesh refinement (AMR) are related to a decrease of the particle-per-cell number in the refined cells with a corresponding increase of the numerical noise. The developed ES-PIC solver is validated for capacitively coupled plasma, its AMR capabilities are demonstrated for simulations of streamer development during high-pressure gas breakdown. It is shown that cell-based AMR provides a convenient particle management algorithm for exponential multiplications of electrons and ions in the ionization events.
Electrostatic PIC with adaptive Cartesian mesh
Kolobov, Vladimir; Arslanbekov, Robert
2016-05-01
We describe an initial implementation of an electrostatic Particle-in-Cell (ES-PIC) module with adaptive Cartesian mesh in our Unified Flow Solver framework. Challenges of PIC method with cell-based adaptive mesh refinement (AMR) are related to a decrease of the particle-per-cell number in the refined cells with a corresponding increase of the numerical noise. The developed ES-PIC solver is validated for capacitively coupled plasma, its AMR capabilities are demonstrated for simulations of streamer development during high-pressure gas breakdown. It is shown that cell-based AMR provides a convenient particle management algorithm for exponential multiplications of electrons and ions in the ionization events.
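The particle-management idea described here, where cell-based AMR keeps the particle-per-cell count under control as ionization events multiply electrons and ions, might be sketched in 1D as follows. The geometry, particle distribution, and threshold are invented for illustration and are not taken from the paper.

```python
import numpy as np

def adapt_cells(edges, particles, max_ppc):
    """One AMR pass: split any cell holding more than max_ppc particles.
    Illustrative 1D stand-in for cell-based AMR particle management."""
    counts, _ = np.histogram(particles, bins=edges)
    new_edges = [edges[0]]
    for i, c in enumerate(counts):
        if c > max_ppc:                                   # too crowded: refine this cell
            new_edges.append(0.5 * (edges[i] + edges[i + 1]))
        new_edges.append(edges[i + 1])
    return np.array(new_edges)

rng = np.random.default_rng(0)
# Exponential multiplication concentrates particles near x = 1 (a streamer head).
particles = 1.0 - rng.exponential(0.05, size=2000).clip(0.0, 1.0)
edges = np.linspace(0.0, 1.0, 5)
for _ in range(4):                                        # repeated adaptation passes
    edges = adapt_cells(edges, particles, max_ppc=200)
counts, _ = np.histogram(particles, bins=edges)
```

Each pass halves the crowded cells, so the mesh deepens exactly where the particle density grows, which is the "convenient particle management" property the abstract refers to.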
Details of tetrahedral anisotropic mesh adaptation
Jensen, Kristian Ejlebjerg; Gorman, Gerard
2016-04-01
We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without the use of any for-loops, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality, 2. face to edge swapping is unnecessary, and 3. optimising for the Vassilevski functional tends to give a little higher value for the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.
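The first observation (a single edge split per element) can be illustrated in 2D by bisecting only the longest edge of a flagged triangle. This sketch omits the conformity propagation to neighboring elements and the metric-based quality functionals the paper actually optimises.

```python
import numpy as np

def longest_edge_bisect(tri, pts):
    """Split a triangle by bisecting its longest edge only: a 2D sketch of the
    'single edge split per element' rule (neighbor conformity not handled)."""
    i, j, k = tri
    candidates = [(i, j, k), (j, k, i), (k, i, j)]        # (edge endpoints, opposite vertex)
    a, b, c = max(candidates, key=lambda e: np.linalg.norm(pts[e[0]] - pts[e[1]]))
    pts = np.vstack([pts, 0.5 * (pts[a] + pts[b])])       # midpoint of the longest edge
    m = len(pts) - 1
    return [(a, m, c), (m, b, c)], pts

pts = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 1.0]])
children, pts = longest_edge_bisect((0, 1, 2), pts)

def area(t):
    p, q, r = (pts[v] for v in t)
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
```

Splitting the longest edge keeps the two children as well-shaped as possible for a single-split rule, which is why this restriction tends to improve mesh quality rather than degrade it.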
Serial and parallel dynamic adaptation of general hybrid meshes
Kavouklis, Christos
The Navier-Stokes equations are a standard mathematical representation of viscous fluid flow. Their numerical solution in three dimensions remains a computationally intensive and challenging task, despite recent advances in computer speed and memory. A strategy to increase the accuracy of Navier-Stokes simulations, while keeping the use of computing resources to a minimum, is local refinement of the associated computational mesh in regions of large solution gradients and coarsening in regions where the solution does not vary appreciably. In this work we consider adaptation of general hybrid meshes for Computational Fluid Dynamics (CFD) applications. Hybrid meshes are composed of four types of elements; hexahedra, prisms, pyramids and tetrahedra, and have proven to be a promising technology for accurately resolving fluid flow around complex geometries. The first part of this dissertation is concerned with the design and implementation of a serial scheme for the adaptation of general three dimensional hybrid meshes. We have defined 29 refinement types, for all four kinds of elements. The core of the present adaptation scheme is an iterative algorithm that flags mesh edges for refinement, so that the adapted mesh is conformal. Of primary importance is the design of a suitable dynamic data structure that facilitates refinement and coarsening operations and furthermore minimizes memory requirements. A special dynamic list is defined for mesh elements, in contrast with the usual tree structures. It contains only elements of the current adaptation step and minimal information that is utilized to reconstruct parent elements when the mesh is coarsened. In the second part of this work, a new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid meshes is presented. Partitioning of a hybrid mesh reduces to partitioning of the corresponding dual graph. Communication among processors is based on the faces of the interpartition boundary. The distributed
宫翔飞; 张树道; 江松
2006-01-01
A high-order WENO scheme is used to compute the fluid dynamics equations, and the AMR (adaptive mesh refinement) method is used to increase the local resolution of the flow field. In computations that use a Level Set function to mark material interfaces, the GFM (ghost fluid method) is applied to treat the interfaces. We attempt to combine the AMR technique with interface-tracking techniques in numerical simulations, and compare the results of different simulations.
Adaptive mesh strategies for the spectral element method
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
Core, X.
2002-02-01
The isobar approximation for the system of the balance equations of mass, momentum, energy and chemical species is a suitable approximation to represent low Mach number reactive flows. In this approximation, which neglects acoustic phenomena, the mixture is hydrodynamically incompressible and the thermodynamic effects lead to a uniform compression of the system. We present a novel numerical scheme for this approximation. An incremental projection method, which uses the original form of the mass balance equation, discretizes the Navier-Stokes equations in time. Spatial discretization is achieved through a finite volume approach on a MAC-type staggered mesh. A higher-order upwind (decentered) scheme is used to compute the convective fluxes. We associate with this discretization a local mesh refinement method, based on the Flux Interface Correction technique. A first application concerns a forced flow with variable density which mimics a combustion problem. The second application is natural convection, first with small temperature variations and then beyond the limit of validity of the Boussinesq approximation. Finally, we treat a third application which is a laminar diffusion flame. For each of these test problems, we demonstrate the robustness of the proposed numerical scheme, notably with respect to spatial variations of the density. We analyze the gain in accuracy obtained with the local mesh refinement method. (author)
Parallel adaptation of general three-dimensional hybrid meshes
Kavouklis, Christos; Kallinderis, Yannis
2010-05-01
A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements; tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. Additional complexity imposed by the presence of multiple types of elements affects especially data migration, updates of local data structures and interpartition data structures. Efficient partition of hybrid meshes has been accomplished by transforming them to suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced which yields savings in computation time and relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.
Evolutions in 3D numerical relativity using fixed mesh refinement
Schnetter, E; Hawke, I; Schnetter, Erik; Hawley, Scott H.; Hawke, Ian
2004-01-01
We present results of 3D numerical simulations using a finite difference code featuring fixed mesh refinement (FMR), in which a subset of the computational domain is refined in space and time. We apply this code to a series of test cases including a robust stability test, a nonlinear gauge wave and an excised Schwarzschild black hole in an evolving gauge. We find that the mesh refinement results are comparable in accuracy, stability and convergence to unigrid simulations with the same effective resolution. At the same time, the use of FMR reduces the computational resources needed to obtain a given accuracy. Particular care must be taken at the interfaces between coarse and fine grids to avoid a loss of convergence at high resolutions. This FMR system, "Carpet", is a driver module in the freely available Cactus computational infrastructure, and is able to endow existing Cactus simulation modules ("thorns") with FMR with little or no extra effort.
Mesh refinement of riser simulation with the aid of gamma transmission
Vertical circulating fluidized bed reactors (CFBR), in which the particulate and gaseous phases flow upward (riser), have been widely used in gasification, combustion, and fluid catalytic cracking (FCC) processes. The efficiency of these biphasic (gas-solid) reactors depends largely on their hydrodynamic characteristics, which show different behaviors in the axial and radial directions. The axial distribution of solids shows a higher concentration at the base, becoming more dilute toward the top. Radially, the solids concentration is characterized as core-annular: the central region is highly dilute, consisting of dispersed particles and fluid. In the present work, a two-dimensional (2D) geometry was developed and simulated with computational fluid dynamics (CFD) techniques to predict the gas-solid flow in the riser of a CFBR through transient modeling, based on the kinetic theory of granular flow. Refining the computational mesh provides more information on the parameters studied, but may increase the processing time of the simulations. A minimum number of cells for the mesh construction was obtained by testing five meshes. The hydrodynamic parameters were validated using a 241Am gamma source and a NaI(Tl) detector. The numerical results were consistent with the experimental data, indicating that refining the computational mesh in a controlled manner improves the approximation to the expected results. (author)
Grid adaptation using chimera composite overlapping meshes
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
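The trilinear interpolation used to communicate between the overset grids can be sketched for a single donor cell in local coordinates. This is a generic sketch of trilinear interpolation, not the chimera implementation itself; the test function is invented for illustration.

```python
import numpy as np

def trilinear(f, x, y, z):
    """Trilinear interpolation of a cell's 8 corner values f[i, j, k]
    (i, j, k in {0, 1}) at local coordinates (x, y, z) in [0, 1]^3."""
    c00 = f[0, 0, 0] * (1 - x) + f[1, 0, 0] * x           # interpolate along x
    c10 = f[0, 1, 0] * (1 - x) + f[1, 1, 0] * x
    c01 = f[0, 0, 1] * (1 - x) + f[1, 0, 1] * x
    c11 = f[0, 1, 1] * (1 - x) + f[1, 1, 1] * x
    c0 = c00 * (1 - y) + c10 * y                          # then along y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z                          # finally along z

# Trilinear interpolation reproduces any multilinear function exactly:
g = lambda x, y, z: 1.0 + 2.0 * x - 3.0 * y + 0.5 * z
corners = np.fromfunction(g, (2, 2, 2))                   # donor-cell corner values
value = trilinear(corners, 0.25, 0.5, 0.75)               # equals g(0.25, 0.5, 0.75)
```

In an overset setting, a boundary point of one grid is located inside a donor cell of the other grid, its local coordinates are computed from the cell geometry, and the formula above supplies the interpolated solution value.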
Grid adaption using Chimera composite overlapping meshes
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
Evolutions in 3D numerical relativity using fixed mesh refinement
We present results of 3D numerical simulations using a finite difference code featuring fixed mesh refinement (FMR), in which a subset of the computational domain is refined in space and time. We apply this code to a series of test cases including a robust stability test, a nonlinear gauge wave and an excised Schwarzschild black hole in an evolving gauge. We find that the mesh refinement results are comparable in accuracy, stability and convergence to unigrid simulations with the same effective resolution. At the same time, the use of FMR reduces the computational resources needed to obtain a given accuracy. Particular care must be taken at the interfaces between coarse and fine grids to avoid a loss of convergence at higher resolutions, and we introduce the use of 'buffer zones' as one resolution of this issue. We also introduce a new method for initial data generation, which enables higher order interpolation in time even from the initial time slice. This FMR system, 'Carpet', is a driver module in the freely available Cactus computational infrastructure, and is able to endow generic existing Cactus simulation modules ('thorns') with FMR with little or no extra effort
Todarello, Giovanni; Vonck, Floris; Bourasseau, Sébastien; Peter, Jacques; Désidéri, Jean-Antoine
2016-05-01
A new goal-oriented mesh adaptation method for finite volume/finite difference schemes is extended from the structured mesh framework to a more suitable setting for adaptation of unstructured meshes. The method is based on the total derivative of the goal with respect to volume mesh nodes that is computable after the solution of the goal discrete adjoint equation. The asymptotic behaviour of this derivative is assessed on regularly refined unstructured meshes. A local refinement criterion is derived from the requirement of limiting the first order change in the goal that an admissible node displacement may cause. Mesh adaptations are then carried out for classical test cases of 2D Euler flows. Efficiency and local density of the adapted meshes are presented. They are compared with those obtained with a more classical mesh adaptation method in the framework of finite volume/finite difference schemes [46]. Results are very close although the present method only makes usage of the current grid.
A multilevel adaptive mesh generation scheme using Kd-trees
Alfonso Limon
2009-04-01
We introduce a mesh refinement strategy for PDE-based simulations that benefits from a multilevel decomposition. Using Harten's MRA in terms of the Schröder-Pander linear multiresolution analysis [20], we are able to bound discontinuities in $\mathbb{R}$. This MRA is extended to $\mathbb{R}^n$ in terms of n-orthogonal linear transforms and utilized to identify cells that contain a codimension-one discontinuity. These refinement cells become leaf nodes in a balanced Kd-tree such that a local dyadic MRA is produced in $\mathbb{R}^n$, while maintaining a minimal computational footprint. The nodes in the tree form an adaptive mesh whose density increases in the vicinity of a discontinuity.
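The cell-flagging step of such a linear MRA can be sketched in 1D: predict each fine-level value linearly from the coarse level and flag the cells where the detail coefficient is large. This is a Harten-style sketch with an illustrative grid and threshold, not the authors' Kd-tree implementation.

```python
import numpy as np

def flag_discontinuities(u, threshold):
    """Flag coarse cells where the linear-MRA detail coefficient is large:
    detail[i] = u[2i+1] - midpoint prediction from u[2i] and u[2i+2]."""
    coarse = u[::2]                                        # coarse level: even samples
    prediction = 0.5 * (coarse[:-1] + coarse[1:])          # linear prediction at odd points
    detail = np.abs(u[1:-1:2] - prediction)                # detail coefficients
    return np.where(detail > threshold)[0]                 # indices of flagged coarse cells

x = np.linspace(0.0, 1.0, 65)
u = np.where(x < 0.5, 0.0, 1.0)                            # codimension-one jump at x = 0.5
flags = flag_discontinuities(u, threshold=0.1)
# flags == [15]: the jump is localized to a single coarse cell out of 32
```

Where the function is smooth the linear prediction is accurate and the details are tiny, so only the cells crossing the discontinuity are flagged; in the paper those cells become the leaf nodes of the Kd-tree.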
Yan, Su; Arslanbekov, Robert R; Kolobov, Vladimir I; Jin, Jian-Ming
2016-01-01
A discontinuous Galerkin time-domain (DGTD) method based on dynamically adaptive Cartesian meshes (ACM) is developed for a full-wave analysis of electromagnetic fields in dispersive media. Hierarchical Cartesian grids offer simplicity close to that of structured grids and the flexibility of unstructured grids while being highly suited for adaptive mesh refinement (AMR). The developed DGTD-ACM achieves a desired accuracy by refining non-conformal meshes near material interfaces to reduce stair-casing errors without sacrificing the high efficiency afforded with uniform Cartesian meshes. Moreover, DGTD-ACM can dynamically refine the mesh to resolve the local variation of the fields during propagation of electromagnetic pulses. A local time-stepping scheme is adopted to alleviate the constraint on the time-step size due to the stability condition of the explicit time integration. Simulations of electromagnetic wave diffraction over conducting and dielectric cylinders and spheres demonstrate that the proposed meth...
Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement
Guzik, Stephen M. [Colorado State Univ., Fort Collins, CO (United States). Dept. of Mechanical Engineering; Weisgraber, Todd H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Colella, Phillip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Alder, Berni J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2013-12-10
A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
An Adaptive Mesh Algorithm: Mapping the Mesh Variables
Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-25
Both thermodynamic and kinematic variables must be mapped. The kinematic variables are defined on a separate kinematic mesh; it is the dual mesh to the thermodynamic mesh. The map of the kinematic variables is done by calculating the contributions of kinematic variables on the old thermodynamic mesh, mapping the kinematic variable contributions onto the new thermodynamic mesh and then synthesizing the mapped kinematic variables on the new kinematic mesh. In this document the map of the thermodynamic variables will be described.
Führer, Thomas; Melenk, Jens Markus; Praetorius, Dirk; Rieder, Alexander
2014-01-01
We propose and analyze an overlapping Schwarz preconditioner for the $p$ and $hp$ boundary element method for the hypersingular integral equation in 3D. We consider surface triangulations consisting of triangles. The condition number is bounded uniformly in the mesh size $h$ and the polynomial order $p$. The preconditioner handles adaptively refined meshes and is based on a local multilevel preconditioner for the lowest order space. Numerical experiments on different geometries illustrate its...
Multigrid solution strategies for adaptive meshing problems
Mavriplis, Dimitri J.
1995-01-01
This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
Papadakis, A. P.; Georghiou, G. E.; Metaxas, A. C.
2008-12-01
A new adaptive mesh generator has been developed and used in the analysis of high-pressure gas discharges, such as avalanches and streamers, reducing computational times and computer memory needs significantly. The new adaptive mesh generator uses normalized error indicators, varying from 0 to 1, to guarantee optimal mesh resolution for all carriers involved in the analysis. Furthermore, it uses h- and r-refinement techniques such as mesh jiggling, edge swapping and node addition/removal to develop an element quality improvement algorithm that improves the mesh quality significantly, and a fast and accurate algorithm for interpolation between meshes. Finally, the mesh generator is applied in the characterization of the transition from a single electron to the avalanche and streamer discharges in high-voltage, high-pressure gas discharges for dc 1 mm gaps, RF 1 cm point-plane gaps and parallel-plate 40 MHz configurations, in ambient atmospheric air.
Local mesh refinement for incompressible fluid flow with free surfaces
Terasaka, H.; Kajiwara, H.; Ogura, K. [Tokyo Electric Power Company (Japan)]; and others
1995-09-01
A new local mesh refinement (LMR) technique has been developed and applied to incompressible fluid flows with free surface boundaries. The LMR method embeds patches of fine grid in arbitrary regions of interest. Hence, more accurate solutions can be obtained with a lower number of computational cells. This method is very suitable for the simulation of free surface movements because free surface flow problems generally require a finer computational grid to obtain adequate results. By using this technique, one can place finer grids only near the surfaces, and therefore greatly reduce the total number of cells and computational costs. This paper introduces LMR3D, a three-dimensional incompressible flow analysis code. Numerical examples calculated with the code demonstrate well the advantages of the LMR method.
Development and verification of unstructured adaptive mesh technique with edge compatibility
In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is suppression of gas entrainment (GE) phenomena at a gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish an accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of the GE phenomena in the JSFR, namely an unstructured adaptive mesh technique which can apply fine cells dynamically to the region where the GE occurs. In this paper, as a part of the development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional adaptive mesh technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The two-dimensional unstructured adaptive mesh technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the two-dimensional unstructured adaptive mesh technique succeeds in providing a high-precision solution, even though a poor-quality distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on a structured mesh with a larger number of cells. (author)
Strategies for hp-adaptive Refinement
In the hp-adaptive version of the finite element method for solving partial differential equations, the grid is adaptively refined in both h, the size of the elements, and p, the degree of the piecewise polynomial approximation over the element. The selection of which elements to refine is determined by a local a posteriori error indicator, and is well established. But the determination of whether the element should be refined by h or p is still open. In this paper, we describe several strategies that have been proposed for making this determination. A numerical example to illustrate the effectiveness of these strategies will be presented.
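One family of strategies for the open h-vs-p question above estimates the local smoothness of the solution from the decay of its spectral (e.g. Legendre) coefficients on an element: fast decay suggests smoothness and favors p-refinement, slow decay favors h-refinement. The sketch below is a generic heuristic of this kind, not any specific strategy from the paper; the `decay_tol` threshold and the adjacent-pair envelope (which hides parity zeros of even/odd functions) are illustrative assumptions.

```python
import numpy as np

def choose_refinement(u_vals, x_vals, p, decay_tol=1.0):
    """Hypothetical h-vs-p decision rule: fit degree-p Legendre
    coefficients on the element (mapped to [-1, 1]) and estimate the
    decay rate of their magnitudes. Fast decay -> smooth -> 'p';
    slow decay -> low regularity -> 'h'."""
    coeffs = np.polynomial.legendre.legfit(x_vals, u_vals, p)
    mags = np.abs(coeffs) + 1e-30
    env = np.maximum(mags[:-1], mags[1:])   # envelope over adjacent pairs
    k = np.arange(env.size)
    # least-squares slope of log|a_k| versus k estimates the decay rate
    slope = np.polyfit(k, np.log(env), 1)[0]
    return "p" if slope < -decay_tol else "h"

x = np.linspace(-1.0, 1.0, 50)
print(choose_refinement(np.exp(x), x, p=6))   # smooth exponential
print(choose_refinement(np.abs(x), x, p=6))   # kink at the origin
```

For the smooth exponential the coefficients decay roughly exponentially and the rule selects p-refinement; the kinked |x| has slowly decaying coefficients and triggers h-refinement.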
Stochastic domain decomposition for time dependent adaptive mesh generation
Bihlo, Alexander; Walsh, Emily J
2015-01-01
The efficient generation of meshes is an important component in the numerical solution of problems in physics and engineering. Of interest are situations where global mesh quality and a tight coupling to the solution of the physical partial differential equation (PDE) is important. We consider parabolic PDE mesh generation and present a method for the construction of adaptive meshes in two spatial dimensions using stochastic domain decomposition that is suitable for an implementation in a multi- or many-core environment. Methods for mesh generation on periodic domains are also provided. The mesh generator is coupled to a time dependent physical PDE and the system is evolved using an alternating solution procedure. The method uses the stochastic representation of the exact solution of a parabolic linear mesh generator to find the location of an adaptive mesh along the (artificial) subdomain interfaces. The deterministic evaluation of the mesh over each subdomain can then be obtained completely independently us...
Applications of automatic mesh generation and adaptive methods in computational medicine
Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)
1995-12-31
Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
An adaptive mesh finite volume method for the Euler equations of gas dynamics
Mungkasi, Sudi
2016-06-01
The Euler equations have been used to model gas dynamics for decades. They consist of mathematical equations for the conservation of mass, momentum, and energy of the gas. For a large time value, the solution may contain discontinuities, even when the initial condition is smooth. A standard finite volume numerical method is not able to give accurate solutions to the Euler equations around discontinuities. Therefore we solve the Euler equations using an adaptive mesh finite volume method. In this paper, we present a new construction of the adaptive mesh finite volume method with an efficient computation of the refinement indicator. The adaptive method takes action automatically at around places having inaccurate solutions. Inaccurate solutions are reconstructed to reduce the error by refining the mesh locally up to a certain level. On the other hand, if the solution is already accurate, then the mesh is coarsened up to another certain level to minimize computational efforts. We implement the numerical entropy production as the mesh refinement indicator. As a test problem, we take the Sod shock tube problem. Numerical results show that the adaptive method is more promising than the standard one in solving the Euler equations of gas dynamics.
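As a toy illustration of flagging cells for refinement in the Sod problem, the sketch below uses jumps of a physical entropy function between neighboring cells as a stand-in indicator. The paper's actual indicator is the numerical entropy *production* of the scheme, which involves the time-stepping residual and is not reproduced here; the threshold fraction is also an illustrative assumption.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def refinement_flags(rho, p, frac=0.1):
    """Simplified stand-in for an entropy-based refinement indicator:
    flag cells whose jump in the entropy function
    eta = rho * log(p / rho**GAMMA) against a neighbour exceeds
    frac times the largest jump in the domain."""
    eta = rho * np.log(p / rho**GAMMA)
    jump = np.abs(np.diff(eta))
    indicator = np.zeros_like(rho)
    indicator[:-1] = np.maximum(indicator[:-1], jump)  # jump to the right
    indicator[1:] = np.maximum(indicator[1:], jump)    # jump to the left
    return indicator > frac * indicator.max()

# Sod shock tube initial data: only the cells at the diaphragm are flagged.
n = 100
left = np.arange(n) < n // 2
rho = np.where(left, 1.0, 0.125)
p = np.where(left, 1.0, 0.1)
flags = refinement_flags(rho, p)
print(np.where(flags)[0])
```

In an adaptive solver the flagged cells would be subdivided and, as the shock and contact move, previously flagged regions would be coarsened again.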
Adaptive Mesh Redistribution Method Based on Godunov's Scheme
Azarenok, Boris N.; Ivanenko, Sergey A.; Tang, Tao
2003-01-01
In this work, a detailed description of an efficient adaptive mesh redistribution algorithm based on Godunov's scheme is presented. After each mesh iteration, a second-order finite-volume flow solver is used to update the flow parameters at the new time level directly, without using interpolation. Numerical experiments are performed to demonstrate the efficiency and robustness of the proposed adaptive mesh algorithm in one and two dimensions.
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
Highlights: • A powerful hp-SEM refinement approach for the PN neutron transport equation is presented. • The method provides great geometrical flexibility and lower computational cost. • Arbitrary high-order and non-uniform meshes can be used. • Both a posteriori and a priori local error estimation approaches are employed. • Highly accurate results are compared against other common adaptive and uniform grids. - Abstract: In this work we present the adaptive hp-SEM approach, obtained by combining the Spectral Element Method (SEM) with adaptive hp refinement. SEM nodal discretization and adaptive hp grid refinement for the even-parity Boltzmann neutron transport equation yield a powerful grid refinement approach with highly accurate solutions. A computer code has been developed to solve the multi-group neutron transport equation in one-dimensional geometry using even-parity transport theory. The spatial dependence of the flux is represented via the SEM with Lobatto orthogonal polynomials. Two common error estimation approaches, a posteriori and a priori, have been implemented. The combination of SEM nodal discretization and adaptive hp grid refinement leads to highly accurate solutions. The efficiency of coarser meshes and the significant reduction in program runtime, in comparison with other common refinement methods and uniform meshing approaches, are tested on several well-known transport benchmarks.
A high-precision simulation method for gas-liquid two-phase flows on unstructured meshes has been developed as a part of numerical studies on a gas entrainment phenomenon in the sodium-cooled fast reactor (JSFR). In this study, a two-dimensional unstructured adaptive mesh algorithm is developed because an adaptive mesh technique is necessary to simulate the local gas entrainment phenomenon accurately in the large-sized JSFR. In the proposed two-dimensional adaptive mesh algorithm, each cell is isotropically subdivided to reduce distortions of the mesh. In addition, a connection cell is formed to eliminate the edge incompatibility between refined and non-refined cells. When forming connection cells, the pattern of each connection cell is determined by the subdivision conditions of neighboring cells. After checking the developed two-dimensional unstructured adaptive mesh manipulations (subdivision and merging of cells and construction of connection cells), the present adaptive mesh algorithm is verified by solving the well-known driven cavity problem. As a result, the present unstructured adaptive mesh algorithm succeeds in reproducing the vortical flow field in the cavity using a relatively small number of cells. (author)
Adaptive mesh generation for non-steady state heat transport problems
The paper deals with the problem of mesh generation for two-dimensional finite element modeling. The general objective of the work is the development of a mesh adaptation method and its application to non-steady state heat transport processes. The main feature of the method is generation of a new triangular or quadrilateral mesh at each iteration of the adaptation procedure. This generation is performed on the basis of information obtained from the previous iteration. The adaptation is based on the evaluation of the solution curvature, which is approximated using second spatial derivatives. The discrete Hessian of the solution is applied to generate the relevant discrete metric, which is then interpolated over the whole domain. The metric is defined by three parameters: stretching of elements in two orthogonal directions and the angle of the directions with respect to the coordinate system. Thus, the mesh can be refined or stretched in selected parts of the domain and in a selected direction. The general idea of the developed adaptation method applied to steady state problems is described, and its application to non-steady state heat transport processes is presented in this paper. Mesh adaptation in non-steady state processes presents several difficulties, among which the decision of when re-meshing should be done and the transfer of information from the old mesh to the new mesh are the most important. An example of application of the mesh adaptation method to processes characterized by fast changes of the heat transfer coefficient in the third kind boundary conditions and by strong inhomogeneity of heat transport is described. (author)
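The three metric parameters described in this abstract (two stretching factors and an orientation angle) can be illustrated by an eigendecomposition of the discrete Hessian. The sketch below is a generic construction under an assumed error-equidistribution scaling (target interpolation error eps with eps ~ |lambda_i| h_i^2), with illustrative parameter values; it is not the paper's exact procedure.

```python
import numpy as np

def hessian_metric(H, eps=1e-2, h_min=1e-3, h_max=1.0):
    """Derive element sizes along the Hessian eigenvectors and the
    orientation angle of the first axis from a 2x2 discrete Hessian H."""
    lam, R = np.linalg.eigh(0.5 * (H + H.T))        # symmetrize, decompose
    h = np.sqrt(eps / np.maximum(np.abs(lam), 1e-12))  # eps ~ |lam| h^2
    h = np.clip(h, h_min, h_max)                    # practical size bounds
    angle = np.arctan2(R[1, 0], R[0, 0])            # first-axis orientation
    return h, angle

# Solution curving sharply in x and mildly in y -> strongly anisotropic
# element sizes: small spacing across the sharp direction, large along it.
H = np.array([[100.0, 0.0], [0.0, 1.0]])
h, angle = hessian_metric(H)
print(h)
```

Interpolating (h, angle) over the domain gives the continuous metric field from which the new mesh is generated.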
Kinetic Solvers with Adaptive Mesh in Phase Space
Arslanbekov, Robert R; Kolobov, Vladimir I; Frolova, Anna A.
2013-01-01
An Adaptive Mesh in Phase Space (AMPS) methodology has been developed for solving multi-dimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a tree of trees data structure. The mesh in r-space is automatically generated around embedded boundaries and dynamically adapted to local solution properties. The mesh in v-space is created on-the-fly for each cell in r-space. Mappings between neighboring v-s...
Ralf Deiterding
2011-01-01
Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.
A thread-parallel algorithm for anisotropic mesh adaptation
Rokos, Georgios; Gorman, Gerard J.; Southern, James; Kelly, Paul H. J.
2013-01-01
Anisotropic mesh adaptation is a powerful way to directly minimise the computational cost of mesh based simulation. It is particularly important for multi-scale problems where the required number of floating-point operations can be reduced by orders of magnitude relative to more traditional static mesh approaches. Increasingly, finite element and finite volume codes are being optimised for modern multi-core architectures. Typically, decomposition methods implemented through the Message Passin...
Kinetic Solvers with Adaptive Mesh in Phase Space
Arslanbekov, Robert R; Frolova, Anna A
2013-01-01
An Adaptive Mesh in Phase Space (AMPS) methodology has been developed for solving multi-dimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a tree of trees data structure. The mesh in r-space is automatically generated around embedded boundaries and dynamically adapted to local solution properties. The mesh in v-space is created on-the-fly for each cell in r-space. Mappings between neighboring v-space trees are implemented for the advection operator in configuration space. We have developed new algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the full Boltzmann collision integral with dynamically adaptive mesh in velocity space: importance sampling, multi-point projection method, and the variance reduction method. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic...
Numerical modeling of seismic waves using frequency-adaptive meshes
Hu, Jinyin; Jia, Xiaofeng
2016-08-01
An improved modeling algorithm using frequency-adaptive meshes is applied to meet the computational requirements of all seismic frequency components. It automatically adopts coarse meshes for low-frequency computations and fine meshes for high-frequency computations. The grid intervals are adaptively calculated based on a smooth inversely proportional function of grid size with respect to the frequency. In regular grid-based methods, a uniform or non-uniform mesh is used for frequency-domain wave propagators and is fixed for all frequencies. A too coarse mesh results in inaccurate high-frequency wavefields and unacceptable numerical dispersion; on the other hand, an overly fine mesh may cause storage and computational burdens as well as invalid propagation angles of low-frequency wavefields. Experiments on the Padé generalized screen propagator indicate that the adaptive mesh effectively overcomes these drawbacks of regular fixed-mesh methods, accurately computing the wavefield and its propagation angle in a wide frequency band. Several synthetic examples also demonstrate its feasibility for seismic modeling and migration.
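A minimal sketch of the idea of choosing the grid interval as an inversely proportional function of frequency, clipped to practical bounds so that low frequencies get coarse meshes and high frequencies fine ones. The velocity, points-per-wavelength count, and bounds are illustrative assumptions, not values from the paper.

```python
def grid_interval(freq_hz, velocity=1500.0, points_per_wavelength=4.0,
                  dx_min=5.0, dx_max=50.0):
    """Grid interval (m) inversely proportional to frequency:
    dx = wavelength / points_per_wavelength, clipped to [dx_min, dx_max].
    All parameter values here are illustrative."""
    dx = velocity / (points_per_wavelength * freq_hz)
    return min(dx_max, max(dx_min, dx))

# Low frequencies saturate at the coarse bound; higher frequencies
# get progressively finer spacing.
for f in (2.0, 10.0, 60.0):
    print(f, grid_interval(f))
```

A practical propagator would recompute dx per frequency component and interpolate the model onto each mesh.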
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
Some observations on mesh refinement schemes applied to shock wave phenomena
Quirk, James J.
1995-01-01
This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.
Dynamic mesh refinement for discrete models of jet electro-hydrodynamics
Lauricella, Marco; Pisignano, Dario; Succi, Sauro
2015-01-01
Nowadays, several models of unidimensional fluid jets exploit discrete element methods. In some cases, as for models aiming at describing the electrospinning nanofabrication process of polymer fibers, discrete element methods suffer from a non-constant resolution of the jet representation. We develop a dynamic mesh-refinement method for the numerical study of the electro-hydrodynamic behavior of charged jets using discrete element methods. To this purpose, we import ideas and techniques from the string method originally developed in the framework of free-energy landscape simulations. The mesh-refined discrete element method is demonstrated for the case of electrospinning applications.
A Hybrid Advection Scheme for Conserving Angular Momentum on a Refined Cartesian Mesh
Byerly, Zachary D; Tohline, Joel E; Marcello, Dominic C
2014-01-01
We test a new "hybrid" scheme for simulating dynamical fluid flows in which cylindrical components of the momentum are advected across a rotating Cartesian coordinate mesh. This hybrid scheme allows us to conserve angular momentum to machine precision while capitalizing on the advantages offered by a Cartesian mesh, such as a straightforward implementation of mesh refinement. Our test focuses on measuring the real and imaginary parts of the eigenfrequency of unstable axisymmetric modes that naturally arise in massless polytropic tori having a range of different aspect ratios, and quantifying the uncertainty in these measurements. Our measured eigenfrequencies show good agreement with the results obtained from the linear stability analysis of Kojima (1986) and from nonlinear hydrodynamic simulations performed on a cylindrical coordinate mesh by Woodward et al. (1994). When compared against results conducted with a traditional Cartesian advection scheme, the hybrid scheme achieves qualitative convergence at the...
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
Dimensional reduction as a tool for mesh refinement and tracking singularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach efficiently the possible singularity. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow faithfully the solution beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class super computer. (authors)
Kinetic solvers with adaptive mesh in phase space.
Arslanbekov, Robert R; Kolobov, Vladimir I; Frolova, Anna A
2013-12-01
An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a "tree of trees" (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems. PMID:24483578
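The "tree of trees" idea can be sketched schematically: a spatial tree refined near an embedded boundary, whose leaves each carry their own velocity-space tree refined near a local feature. The 1D setting, the indicators, and the drift velocity below are our illustrative simplifications, not the paper's data structure.

```python
# Schematic "tree of trees": a binary r-space tree whose leaves each carry
# their own v-space tree, refined where a (hypothetical) indicator fires.
class Tree:
    def __init__(self, lo, hi, depth=0):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.children = None

    def refine(self, indicator, max_depth):
        if self.depth < max_depth and indicator(self.lo, self.hi):
            mid = 0.5 * (self.lo + self.hi)
            self.children = (Tree(self.lo, mid, self.depth + 1),
                             Tree(mid, self.hi, self.depth + 1))
            for c in self.children:
                c.refine(indicator, max_depth)

    def leaves(self):
        if self.children is None:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# r mesh refined around an embedded boundary at r = 0.3
r_tree = Tree(0.0, 1.0)
r_tree.refine(lambda lo, hi: lo <= 0.3 <= hi, max_depth=5)

# each r leaf gets a v mesh, created on the fly and refined near a
# hypothetical local drift velocity v0
for leaf in r_tree.leaves():
    v0 = leaf.lo - 0.4
    leaf.v_tree = Tree(-1.0, 1.0)
    leaf.v_tree.refine(lambda lo, hi: lo <= v0 <= hi, max_depth=4)

n_r = len(r_tree.leaves())                                  # spatial cells
n_phase = sum(len(l.v_tree.leaves()) for l in r_tree.leaves())  # phase cells
```

The point of the structure is visible in the cell counts: refinement concentrates cells near the boundary and the local drift, so the phase-space cell count stays far below that of a uniform grid at the same finest resolution.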
An adaptive p-refinement strategy applied to nodal expansion method in 3D Cartesian geometry
Highlights: • An adaptive p-refinement approach is developed and implemented successfully in ACNEM. • The proposed strategy enhances the accuracy with regard to the uniform zeroth order solution. • Improvement of results is gained with less computation time relative to the uniform high order solution. - Abstract: The aim of this work is to develop a coarse mesh treatment strategy using an adaptive polynomial (p) refinement approach for the average current nodal expansion method in order to solve the neutron diffusion equation. For performing the adaptive solution process, an a posteriori error estimation scheme, i.e., the flux gradient, has been utilized for finding probable numerical errors. High net leakage in a node indicates a flux gradient between neighboring nodes and may identify the source of errors in the coarse mesh calculation. Therefore, the relative Cartesian directional net leakage of nodes is considered as an assessment criterion for mesh refinement in a sub-domain. In our proposed approach, the zeroth order nodal expansion solution is used along coarse meshes as large as fuel assemblies to treat neutron populations. Coarse nodes with high directional net leakage may be chosen for implementing higher order polynomial expansion in the corresponding direction, i.e. X and/or Y and/or Z Cartesian directions. Using this strategy, the computational cost and time are reduced relative to the uniform high order polynomial solution. In order to demonstrate the efficiency of this approach, a computer program, APNEC (Adaptive P-refinement Nodal Expansion Code), has been developed for solving the neutron diffusion equation using various orders of the average current nodal expansion method in 3D rectangular geometry. Some well-known benchmarks are investigated to compare the uniform and adaptive solutions. Results demonstrate the superiority of our proposed strategy in enhancing the accuracy of the solution without using a uniform high order solution throughout the domain and
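The leakage-driven order-selection step described in this abstract can be sketched in a few lines. The threshold and the order formula below are assumptions of this sketch, not APNEC's actual criterion; only the overall idea (raise the expansion order per Cartesian direction where relative net leakage is high) comes from the abstract.

```python
# Illustrative leakage-driven p-refinement selection: raise the polynomial
# order only in directions with large relative net leakage. The tolerance
# and order formula are assumptions, not APNEC's published criterion.
def select_orders(net_leakage, base_order=0, max_order=4, tol=0.1):
    """net_leakage maps node id -> {'x': Lx, 'y': Ly, 'z': Lz} (relative
    directional net leakages); returns per-node, per-direction orders."""
    orders = {}
    for node, leak in net_leakage.items():
        orders[node] = {
            d: base_order + min(int(abs(L) / tol), max_order - base_order)
            for d, L in leak.items()
        }
    return orders

# a node with strong x-leakage gets a higher x-order; y and z stay coarse
orders = select_orders({'A1': {'x': 0.35, 'y': 0.05, 'z': 0.0}})
```

Keeping the order at zero in quiet directions is what produces the cost saving over a uniformly high-order expansion.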
Optimality of multilevel preconditioners for local mesh refinement in three dimensions
Aksoylu, Burak
2010-01-01
In this article, we establish optimality of the Bramble-Pasciak-Xu (BPX) norm equivalence and optimality of the wavelet modified (or stabilized) hierarchical basis (WHB) preconditioner in the setting of local 3D mesh refinement. In the analysis of WHB methods, a critical first step is to establish the optimality of BPX norm equivalence for the refinement procedures under consideration. While the available optimality results for the BPX norm have been constructed primarily in the setting of uniformly refined meshes, a notable exception is the local 2D red-green result due to Dahmen and Kunoth. The purpose of this article is to extend this original 2D optimality result to the local 3D red-green refinement procedure introduced by Bornemann-Erdmann-Kornhuber (BEK), and then to use this result to extend the WHB optimality results from the quasiuniform setting to local 2D and 3D red-green refinement scenarios. The BPX extension is reduced to establishing that locally enriched finite element subspaces allow for the ...
Adaptive anisotropic meshing for steady convection-dominated problems
Nguyen, Hoa [Tulane University]; Gunzburger, Max [Florida State University]; Ju, Lili [University of South Carolina]; Burkardt, John [Florida State University]
2009-01-01
Obtaining accurate solutions for convection–diffusion equations is challenging due to the presence of layers when convection dominates the diffusion. To solve this problem, we design an adaptive meshing algorithm which optimizes the alignment of anisotropic meshes with the numerical solution. Three main ingredients are used. First, the streamline upwind Petrov–Galerkin method is used to produce a stabilized solution. Second, an adapted metric tensor is computed from the approximate solution. Third, optimized anisotropic meshes are generated from the computed metric tensor by an anisotropic centroidal Voronoi tessellation algorithm. Our algorithm is tested on a variety of two-dimensional examples and the results show that the algorithm is robust in detecting layers and efficient in avoiding non-physical oscillations in the numerical approximation.
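The second ingredient above, computing an adapted metric tensor from the approximate solution, is commonly done from the solution Hessian. The clipping recipe below is a standard choice, not necessarily the authors' exact construction; it shows how curvature concentrated across a layer translates into an anisotropic sizing prescription.

```python
# Sketch: build an SPD metric tensor from a (2x2) solution Hessian. Unit
# vectors of M prescribe direction-dependent mesh sizes h = 1/sqrt(lambda).
# The eps/clipping recipe is a common convention, assumed here.
import numpy as np

def hessian_metric(H, eps=1e-3, h_min=1e-3, h_max=1.0):
    """Return M = V |L| V^T from the symmetrized Hessian H."""
    w, V = np.linalg.eigh(0.5 * (H + H.T))              # symmetrize, decompose
    lam = np.clip(np.abs(w) + eps, 1.0 / h_max**2, 1.0 / h_min**2)
    return V @ np.diag(lam) @ V.T                       # SPD by construction

# boundary-layer-like solution u = exp(-y/delta): curvature across the layer
delta = 0.05
H = np.array([[0.0, 0.0], [0.0, 1.0 / delta**2]])      # analytic Hessian at y=0
M = hessian_metric(H)
h_x = 1.0 / np.sqrt(M[0, 0])   # prescribed size along the layer (coarse)
h_y = 1.0 / np.sqrt(M[1, 1])   # prescribed size across the layer (fine)
```

A mesh generator (such as the anisotropic CVT algorithm in the abstract) then equidistributes elements in the geometry induced by M, producing thin elements aligned with the layer.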
Dynamic mesh refinement for discrete models of jet electro-hydrodynamics
Lauricella, Marco; Pontrelli, Giuseppe; Pisignano, Dario; Succi, Sauro
2015-01-01
Nowadays, several models of unidimensional fluid jets exploit discrete element methods. In some cases, as for models aiming at describing the electrospinning nanofabrication process of polymer fibers, discrete element methods suffer from a non-constant resolution of the jet representation. We develop a dynamic mesh-refinement method for the numerical study of the electro-hydrodynamic behavior of charged jets using discrete element methods. To this purpose, we import ideas and techniques from the s...
MECHANICAL DYNAMICS ANALYSIS OF PM GENERATOR USING H-ADAPTIVE REFINEMENT
AJAY KUMAR; SANJAY MARWAHA; ANUPAMA MARWAHA
2010-01-01
This paper describes the dynamic analysis of a permanent magnet (PM) rotor generator using COMSOL Multiphysics, a Finite Element Analysis (FEA) based package, and Simulink, a system simulation program. A model of the PM rotor generator is developed for its mechanical dynamics and the computation of torque resulting from magnetic force. For the model, the mesh is constructed using first order Lagrange quadratic elements, and an h-adaptive refinement technique based upon bank bisection is used for improving ac...
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and simulation of the power plant as a whole, or even of the driver, is not yet possible. Despite the rapid progress in computer power, past and anticipated, one must consider the use of the most advanced numerical techniques if we are to reach our goal expeditiously. One of the difficulties of these simulations resides in the disparity of scales, in time and in space, which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g., fluid dynamics simulations) is the mesh refinement technique. The authors discuss the challenges posed by the implementation of this technique in plasma simulations (due to the presence of particles and electromagnetic waves) and present the prospects for and projected benefits of its application to heavy ion fusion, in particular to the simulation of the ion source and the final beam propagation in the chamber. A collaboration project is under way at LBNL between the Applied Numerical Algorithms Group (ANAG) and the HIF group to couple the Adaptive Mesh Refinement (AMR) library CHOMBO, developed by the ANAG group, to the Particle-In-Cell accelerator code WARP, developed by the HIF-VNL. The authors describe their progress and present their initial findings.
The Stratified Ocean Model with Adaptive Refinement (SOMAR)
Santilli, Edward; Scotti, Alberto
2015-06-01
A computational framework for the evolution of non-hydrostatic, baroclinic flows encountered in regional and coastal ocean simulations is presented, which combines the flexibility of Adaptive Mesh Refinement (AMR) with a suite of numerical tools specifically developed to deal with the high degree of anisotropy of oceanic flows and their attendant numerical challenges. This framework introduces a semi-implicit update of the terms that give rise to buoyancy oscillations, which permits a stable integration of the Navier-Stokes equations when a background density stratification is present. The lepticity of each grid in the AMR hierarchy, which serves as a useful metric for anisotropy, is used to select one of several different efficient Poisson-solving techniques. In this way, we compute the pressure over the entire set of AMR grids without resorting to the hydrostatic approximation, which can degrade the structure of internal waves whose dynamics may have large-scale significance. We apply the modeling framework to three test cases, for which numerical or analytical solutions are known that can be used to benchmark the results. In all the cases considered, the model achieves an excellent degree of congruence with the benchmark, while at the same time achieving a substantial reduction of the computational resources needed.
Particle systems for adaptive, isotropic meshing of CAD models.
Bronson, Jonathan R; Levine, Joshua A; Whitaker, Ross T
2012-10-01
We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181
Adaptive radial basis function mesh deformation using data reduction
Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.
2016-09-01
Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulations, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBF's), but the parallel efficiency reduces due to the limited
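The greedy (data-reduction) control-point selection underlying this family of methods can be sketched in 1D: keep adding the boundary point with the worst interpolation error until a tolerance is met. The Wendland C2 basis and all parameters below are illustrative choices, not the paper's configuration.

```python
# Sketch of greedy RBF control-point selection for mesh deformation: the
# boundary displacement field is approximated from a growing control set,
# re-selecting the worst-fitted point each pass. Illustrative only.
import numpy as np

def phi(r, R=1.0):
    """Wendland C2 compactly supported radial basis function."""
    x = np.clip(r / R, 0.0, 1.0)
    return (1.0 - x) ** 4 * (4.0 * x + 1.0)

def greedy_rbf(xb, db, tol):
    """xb: boundary point coords, db: prescribed displacements.
    Returns selected control-point indices and the final max error."""
    sel = [int(np.argmax(np.abs(db)))]            # seed: largest displacement
    while True:
        pts = xb[sel]
        A = phi(np.abs(pts[:, None] - pts[None, :]))
        w = np.linalg.solve(A, db[sel])           # RBF weights on control set
        err = np.abs(phi(np.abs(xb[:, None] - pts[None, :])) @ w - db)
        worst = int(np.argmax(err))
        if err[worst] <= tol or len(sel) == len(xb):
            return sel, float(err[worst])
        sel.append(worst)                         # add the worst-fitted point

xb = np.linspace(0.0, 1.0, 41)
db = np.sin(np.pi * xb)        # smooth prescribed boundary displacement
sel, max_err = greedy_rbf(xb, db, tol=1e-2)
```

The adaptive variant in the abstract essentially re-runs this selection during the simulation whenever the tracked boundary error exceeds the user-specified criterion.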
White Dwarf Mergers on Adaptive Meshes I. Methodology and Code Verification
Katz, Max P; Calder, Alan C; Swesty, F Douglas; Almgren, Ann S; Zhang, Weiqun
2015-01-01
The Type Ia supernova progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf merger scenario, which has the potential to naturally explain many of the observed characteristics of Type Ia supernovae. To date there have been relatively few self-consistent simulations of merging white dwarf systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hy...
ROAMing terrain (Real-time Optimally Adapting Meshes)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.; Miller, M.C.; Aldrich, C.; Mineev, M.
1997-07-01
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
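The priority-queue-driven refinement at the heart of ROAM can be reduced to a toy loop: repeatedly split the triangle with the largest error until a triangle budget is reached. The halving of error per split is an assumption for illustration, and the real algorithm also runs a merge queue and reuses priorities frame to frame; this sketch shows only why greedy max-error splitting hits a triangle count directly while equalizing error.

```python
# Toy version of ROAM's split queue: greedily split the highest-error
# triangle until the budget is met. Error halving per split is assumed.
import heapq

def refine(root_error, budget):
    """Greedy max-error refinement; returns the sorted leaf errors."""
    heap = [(-root_error, 0)]          # max-heap via negation; (error, depth)
    count = 1
    while count < budget:
        err, depth = heapq.heappop(heap)
        child_err = -err * 0.5         # assumed geometric error decay
        heapq.heappush(heap, (-child_err, depth + 1))
        heapq.heappush(heap, (-child_err, depth + 1))
        count += 1                     # each split adds one net triangle
    return sorted(-e for e, _ in heap)

errs = refine(root_error=16.0, budget=8)
```

Note how the greedy strategy leaves all leaves at (here) identical error: the budgeted mesh is balanced in error, which is the property the guaranteed error bounds build on.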
ADAPTIVE MODEL REFINEMENT FOR THE IONOSPHERE AND THERMOSPHERE
National Aeronautics and Space Administration — D'Amato, Anthony M.; Ridley, Aaron J.; Bernstein, Dennis S. Abstract: Mathematical models of...
Adaptive mesh generation for image registration and segmentation
Fogtmann, Mads; Larsen, Rasmus
This paper deals with the problem of generating quality tetrahedral meshes for image registration. From an initial coarse mesh the approach matches the mesh to the image volume by combining red-green subdivision and mesh evolution through mesh-to-image matching regularized with a mesh quality...
Comprehensive adaptive mesh refinement in wrinkling prediction analysis
Selman, A.; Meinders, T.; Huetink, J.; van den Boogaard, F.E.
2002-01-01
In a challenging task, a discretisation error indicator and indicators for contact-free wrinkling and wrinkling with contact are brought together into a comprehensive approach to wrinkling prediction analysis in thin sheet metal forming processes.
Manguoglu, Murat; Takizawa, Kenji; Sameh, Ahmed H.; Tezduyar, Tayfun E.
2009-10-01
Computation of incompressible flows in arterial fluid mechanics, especially because it involves fluid-structure interaction, poses significant numerical challenges. Iterative solution of the fluid mechanics part of the equation systems involved is one of those challenges, and we address that in this paper, with the added complication of having boundary layer mesh refinement with thin layers of elements near the arterial wall. As test case, we use matrix data from stabilized finite element computation of a bifurcating middle cerebral artery segment with aneurysm. It is well known that solving linear systems that arise in incompressible flow computations consume most of the time required by such simulations. For solving these large sparse nonsymmetric systems, we present effective preconditioning techniques appropriate for different stages of the computation over a cardiac cycle.
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimate for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost comparing with uniform grid refinement.
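The adjoint-based error localization described in this abstract can be shown in miniature: each element's contribution to the functional error is estimated as the residual weighted by the adjoint, and the worst fraction of elements is flagged for h-refinement. The marking fraction and array shapes below are illustrative assumptions, not the paper's CPR formulation.

```python
# Miniature adjoint-weighted residual indicator for output-based adaptation:
# eta_k = |psi_k * r_k| localizes the functional error; flag the worst
# elements. The 25% marking fraction is an illustrative assumption.
import numpy as np

def adapt_flags(residual, adjoint, frac=0.25):
    """Flag the top `frac` of elements by adjoint-weighted residual."""
    eta = np.abs(residual * adjoint)         # per-element error contribution
    cutoff = np.quantile(eta, 1.0 - frac)
    return eta >= cutoff, float(eta.sum())   # flags, global error estimate

residual = np.array([1.0, 0.1, 0.01, 2.0])  # element 0: large raw residual
adjoint = np.array([0.01, 1.0, 10.0, 0.5])  # ...but negligible output sensitivity
flags, estimate = adapt_flags(residual, adjoint)
```

The point of the adjoint weighting is visible in element 0: its raw residual is large, but its contribution to the chosen output is tiny, so a purely residual-based indicator would waste refinement there.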
Schnieders, Michael J; Fenn, Timothy D; Pande, Vijay S
2011-04-12
Refinement of macromolecular models from X-ray crystallography experiments benefits from prior chemical knowledge at all resolutions. As the quality of the prior chemical knowledge from quantum or classical molecular physics improves, in principle so will the resulting structural models. Due to limitations in computer performance and electrostatic algorithms, commonly used macromolecular X-ray crystallography refinement protocols have had limited support for rigorous molecular physics in the past. For example, electrostatics is often neglected in favor of nonbonded interactions based on a purely repulsive van der Waals potential. In this work we present advanced algorithms for desktop workstations that open the door to X-ray refinement of even the most challenging macromolecular data sets using state-of-the-art classical molecular physics. First we describe theory for particle mesh Ewald (PME) summation that consistently handles the symmetry of all 230 space groups, replicates of the unit cell such that the minimum image convention can be used with a real space cutoff of any size, and the combination of space group symmetry with replicates. An implementation of symmetry-accelerated PME for the polarizable atomic multipole optimized energetics for biomolecular applications (AMOEBA) force field is presented. Relative to a single CPU core performing calculations on a P1 unit cell, our AMOEBA engine called Force Field X (FFX) accelerates energy evaluations by more than a factor of 24 on an 8-core workstation with a Tesla GPU coprocessor for 30 structures that contain 240 000 atoms on average in the unit cell. The benefit of AMOEBA electrostatics evaluated with PME for macromolecular X-ray crystallography refinement is demonstrated via re-refinement of 10 crystallographic data sets that range in resolution from 1.7 to 4.5 Å. Beginning from structures obtained by local optimization without electrostatics, further optimization using AMOEBA with PME electrostatics improved
Improving Tropical Cyclone Track and Intensity in a Global Model with Local Mesh Refinement
Zarzycki, C. M.; Jablonowski, C.
2014-12-01
Even with recent improvements in general circulation model (GCM) resolution, tropical cyclones (TCs) are typically underresolved, resulting in fewer or weaker storms than observed. In an effort to alleviate these issues, the use of limited area models (LAMs) allowing for higher resolutions has become popular. However, LAMs require lateral boundary conditions and typically lack two-way communication with the exterior domain. Variable-resolution GCMs can serve as the bridge between traditional global models and high-resolution LAMs. These models can reach 10 km or finer resolution in low-latitude ocean basins where TCs are prevalent. They do so while maintaining global continuity, therefore eliminating the need for externally-forced and possibly numerically and physically inconsistent boundary conditions required by LAMs. Recent developments allow the Community Atmosphere Model's (CAM) Spectral Element (SE) dynamical core to be run on unstructured, statically-nested, variable-resolution grids. We present deterministic CAM-SE model simulations of TCs during recent summers and compare the model's prediction of storm track and intensity to other global and regional models as well as observations. The simulations are run on a 55 km global cubed-sphere grid with additional refinement to 13 km over the Atlantic and Eastern Pacific Oceans. Forecasts are integrated for eight days and the period of analysis spans three months (August, September, and October) during 2012 and 2013. We compare these simulations to identically initialized model runs without mesh refinement to demonstrate the impact of high resolution on TC behavior in CAM. We also investigate cyclone genesis and whether locally high resolution in a global model leads to improved forecast skill at longer lead times. In addition, the impact of the localized refined patch on the remainder of the coarser global solution during the simulation period is discussed.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
The Geometry of r-adaptive meshes generated using Optimal Transport Methods
C. J. Budd; Russell, R. D.; Walsh, E.
2014-01-01
The principles of mesh equidistribution and alignment play a fundamental role in the design of adaptive methods, and a metric tensor M and mesh metric are useful theoretical tools for understanding a method's level of mesh alignment, or anisotropy. We consider a mesh redistribution method based on the Monge-Ampère equation, which combines equidistribution of a given scalar density function with optimal transport. It does not involve explicit use of a metric tensor M, although such a tensor mus...
LOAD AWARE ADAPTIVE BACKBONE SYNTHESIS IN WIRELESS MESH NETWORKS
Yuan Yuan; Zheng Baoyu
2009-01-01
Wireless Mesh Networks (WMNs) are envisioned to support the wired backbone with a wireless Backbone Network (BNet) for providing internet connectivity to large-scale areas. With a wide range of internet-oriented applications with different Quality of Service (QoS) requirements, large-scale WMNs should have good scalability and large bandwidth. In this paper, a Load Aware Adaptive Backbone Synthesis (LAABS) algorithm is proposed to automatically balance the traffic flow in the WMNs. The BNet will dynamically split into smaller sizes or merge into bigger ones according to statistical load information of the Backbone Nodes (BNs). Simulation results show LAABS generates moderate BNet size and converges quickly, thus providing a scalable and stable BNet to facilitate traffic flow.
Problem-adapted mesh generation with FEM-features
Werner, Horst; Weber, Christian; Schilke, Martin
2000-01-01
Today, automatic meshing of CAD geometry is the most common method of FEM mesh generation. However, to get results of acceptable accuracy with universal meshing algorithms it is necessary to use rather small-sized elements, which leads to high memory and CPU time consumption. Furthermore, the irregularity of automatically generated meshes makes it difficult to create well-defined local areas with different material properties. A solution for this problem is the application of predefined build...
A trigonal nodal SP3 method with mesh refinement capabilities - Development and verification
The neutronics model of the nodal reactor dynamics code DYN3D, developed for 3D analyses of steady states and transients in Light-Water Reactors, has been extended by a simplified P3 (SP3) neutron transport option to overcome the limitations of the diffusion approach in regions with significant anisotropy effects. To provide a method applicable to reactors with hexagonal fuel assemblies and to furthermore allow flexible mesh refinement, the nodal SP3 method has been developed on the basis of a flux expansion in triangular-z geometry. In this paper, the derivation of the trigonal SP3 method is presented in a condensed form and a verification of the methodology on quasi-pin level is performed by means of two single-assembly test examples. The corresponding pin-wise few-group cross sections were obtained by the deterministic lattice code HELIOS. The power distributions were calculated using both the trigonal DYN3D diffusion and SP3 solvers and compared to the HELIOS reference solutions. Close to regions with non-negligible flux gradients, e.g., caused by the presence of a strong absorbing material, the power distribution calculated by DYN3D-SP3 shows a significant improvement in comparison to the diffusion method. (authors)
Numerical relativity simulations of neutron star merger remnants using conservative mesh refinement
Dietrich, Tim; Ujevic, Maximiliano; Bruegmann, Bernd
2015-01-01
We study equal and unequal-mass neutron star mergers by means of new numerical relativity simulations in which the general relativistic hydrodynamics solver employs an algorithm that guarantees mass conservation across the refinement levels of the computational mesh. We consider eight binary configurations with total mass $M=2.7\\,M_\\odot$, mass ratios $q=1$ and $q=1.16$, and four different equations of state (EOSs), and one configuration with a stiff EOS, $M=2.5M_\\odot$ and $q=1.5$. We focus on the post-merger dynamics and study the merger remnant, dynamical ejecta and the post-merger gravitational wave spectrum. Although most of the merger remnants form a hypermassive neutron star collapsing to a black hole + disk system on dynamical timescales, stiff EOSs can eventually produce a stable massive neutron star. Ejecta are mostly emitted around the orbital plane, favored by large mass ratios and softer EOSs. The post-merger wave spectrum is mainly characterized by non-axisymmetric oscillations of the remnant. The st...
Goal based mesh adaptivity for fixed source radiation transport calculations
Highlights: ► Derives an anisotropic goal based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the numerical scheme chosen forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved which solves the neutron transport equation in terms of the response variables, in this case the detector response. The methods presented can be applied to general finite element solvers; however, the derivation of the residuals is dependent on the underlying finite element scheme, which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed, they are combined using a novel approach: the two metrics are combined by forming the minimum ellipsoid that covers both error metrics, rather than taking the maximum ellipsoid that is contained within the metrics. Another novel approach used within this paper is the construction of the residual. The residual, used to form the goal based error metrics, is calculated from the subgrid scale correction which is inherent in the underlying spatial discretisation employed.
Three-dimensional modeling and highly refined mesh generation of the aorta artery and its tunics
This paper describes strategies and techniques to perform modeling and automatic mesh generation of the aorta artery and its tunics (adventitia, media and intima walls), using open source codes. The models were constructed in the Blender package and Python scripts were used to export the data necessary for the mesh generation in TetGen. The strategies proposed are able to provide meshes of complicated and irregular volumes, with a large number of mesh elements involved (12,000,000 tetrahedrons approximately). These meshes can be used to perform computational simulations by Finite Element Method (FEM).
Algebraic turbulence modeling for unstructured and adaptive meshes
Mavriplis, Dimitri J.
1990-01-01
An algebraic turbulence model based on the Baldwin-Lomax model has been implemented for use on unstructured grids. The implementation is based on the use of local background structured turbulence meshes. At each time step, flow variables are interpolated from the unstructured mesh onto the background structured meshes, the turbulence model is executed on these meshes, and the resulting eddy viscosity values are interpolated back to the unstructured mesh. Modifications to the algebraic model were required to enable the treatment of more complicated flows, such as confluent boundary layers and wakes. The model is used in conjunction with an efficient unstructured multigrid finite-element Navier-Stokes solver in order to compute compressible turbulent flows on fully unstructured meshes. Solutions about single and multiple element airfoils are obtained and compared with experimental data.
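The interpolate / evaluate / interpolate-back pattern described in this abstract can be sketched in a few lines. The following is an illustrative 1-D sketch, not the paper's implementation: the function name, the constants (`kappa`, `a_plus`, `u_tau`, `nu`), and the restriction to a wall-normal line are all assumptions; only the overall background-mesh pattern and a Baldwin-Lomax-style inner-layer mixing length are taken from the abstract.

```python
import numpy as np

def eddy_viscosity_on_background(y_unstructured, vorticity_unstructured,
                                 n_background=64, kappa=0.41, a_plus=26.0,
                                 u_tau=1.0, nu=1e-5):
    """Sketch of the background-mesh strategy: interpolate vorticity from
    an 'unstructured' point set onto a regular wall-normal line, evaluate
    an algebraic (Baldwin-Lomax inner-layer) eddy viscosity there, and
    interpolate the result back.  All parameter values are illustrative."""
    # 1. Regular background mesh along the wall-normal direction
    y_bg = np.linspace(y_unstructured.min(), y_unstructured.max(), n_background)
    # 2. Interpolate flow data (here: vorticity magnitude) onto it
    order = np.argsort(y_unstructured)
    w_bg = np.interp(y_bg, y_unstructured[order], vorticity_unstructured[order])
    # 3. Algebraic model on the structured line: nu_t = l^2 |omega|,
    #    l = kappa * y * (1 - exp(-y+/A+))  (van Driest damping)
    y_plus = y_bg * u_tau / nu
    l_mix = kappa * y_bg * (1.0 - np.exp(-y_plus / a_plus))
    nu_t_bg = l_mix**2 * np.abs(w_bg)
    # 4. Interpolate eddy viscosity back to the original points
    return np.interp(y_unstructured, y_bg, nu_t_bg)
```

A real implementation would place many such background meshes around each wall component and wake, but the data flow per time step is exactly the three interpolation/evaluation stages above.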
Optimal panel-clustering in the presence of anisotropic mesh refinement
Graham, I.G.; Grasedyck, L; Hackbusch, W.; Sauter, S.
2008-01-01
In this paper we consider the numerical solution of discrete boundary integral equations on polyhedral surfaces in three dimensions. When the solution contains typical edge singularities, highly stretched meshes are preferred to uniform meshes, since they reduce the number of degrees of freedom needed to obtain a fixed accuracy. The classical panel-clustering method can still be applied in the presence of such highly stretched meshes. However, we will show that the savings in computation time...
h-Adaptive Mesh Generation using Electric Field Intensity Value as a Criterion (in Japanese)
Toyonaga, Kiyomi; Cingoski, Vlatko; Kaneda, Kazufumi; Yamashita, Hideo
1994-01-01
A suitable fine mesh division is essential to obtaining accurate solutions in two-dimensional electric field analysis, and generating one requires technical knowledge. In electric field problems, analysts are usually interested in the electric field intensity and its distribution. In order to obtain the electric field intensity with high accuracy, we have developed an adaptive mesh generator that uses the electric field intensity value as a criterion.
A spatially adaptive grid-refinement approach has been investigated to solve the even-parity Boltzmann transport equation. A residual-based a posteriori error estimation scheme has been utilized for checking the approximate solutions for various finite element grids. The local particle balance has been considered as an error assessment criterion. To implement the adaptive approach, a computer program ADAFENT (adaptive finite elements for neutron transport) has been developed to solve the second order even-parity Boltzmann transport equation using the K+ variational principle for slab geometry. The program has a core K+ module which employs Lagrange polynomials as spatial basis functions for the finite element formulation and Legendre polynomials for the directional dependence of the solution. The core module is called by the adaptive grid generator to determine local gradients and residuals to explore the possibility of grid refinements in appropriate regions of the problem. The a posteriori error estimation scheme has been implemented in the outer grid-refining iteration module. Numerical experiments indicate that local errors are large in regions where the flux gradients are large. A comparison of the spatially adaptive grid-refinement approach with the uniform meshing approach for various benchmark cases confirms its superiority in greatly enhancing the accuracy of the solution without increasing the number of unknown coefficients. A reduction in the local errors of the order of 10² has been achieved using the new approach in some cases.
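The mark-and-refine loop that such a posteriori schemes follow can be illustrated with a deliberately simple 1-D sketch. This is not the ADAFENT code: the slope-jump indicator, the tolerance, and all names are illustrative assumptions; only the pattern of estimating a local residual and bisecting the cells where it is large matches the approach described.

```python
import numpy as np

def adapt_mesh_1d(f, x, tol=0.05, max_iter=20):
    """Generic a posteriori refinement sketch: use the jump in the slope of
    the piecewise-linear interpolant as a cheap residual indicator, scaled
    by cell width, and bisect every cell whose indicator exceeds tol."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        slopes = np.diff(fx) / np.diff(x)
        jump = np.abs(np.diff(slopes))          # one jump per interior node
        indicator = np.zeros(len(x) - 1)        # one indicator per cell
        indicator[:-1] = np.maximum(indicator[:-1], jump)  # jump at right node
        indicator[1:] = np.maximum(indicator[1:], jump)    # jump at left node
        indicator *= np.diff(x)
        marked = np.where(indicator > tol)[0]
        if marked.size == 0:
            break
        midpoints = 0.5 * (x[marked] + x[marked + 1])
        x = np.sort(np.concatenate([x, midpoints]))
    return x
```

Running this on a function with a steep internal layer concentrates the nodes there while leaving the smooth regions at the initial coarse spacing, which is the qualitative behavior the abstract reports.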
MECHANICAL DYNAMICS ANALYSIS OF PM GENERATOR USING H-ADAPTIVE REFINEMENT
AJAY KUMAR
2010-03-01
This paper describes the dynamic analysis of a permanent magnet (PM) rotor generator using COMSOL Multiphysics, a Finite Element Analysis (FEA) based package, and Simulink, a system simulation program. A model of the PM rotor generator is developed for its mechanical dynamics and the computation of torque resulting from magnetic force. For the model, the mesh is constructed using Lagrange quadratic elements, and an h-adaptive refinement technique based upon bisection is used to improve the accuracy of the model. The effect of the rotor moment of inertia (MI) on the winding resistance and winding inductance has been studied using Simulink. It is shown that the system MI has a significant effect on the optimal winding resistance and inductance needed to achieve steady-state operation in the shortest period of time.
The GeoClaw software for depth-averaged flows with adaptive refinement
Berger, Marsha J.; George, David L.; LeVeque, Randall J.; Mandli, Kyle T.
2011-09-01
Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw.
The GeoClaw software for depth-averaged flows with adaptive refinement
Berger, Marsha J; LeVeque, Randall J; Mandli, Kyle
2010-01-01
Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude--longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis, dam break problems, and storm surge. Documentation and download information is available at www.clawpack.org/geoclaw
A zoomable and adaptable hidden fine-mesh approach to solving advection-dispersion equations
A zoomable and adaptable hidden fine-mesh approach (ZAHFMA), which can be used with either finite element or finite difference methods, is proposed to solve the advection-dispersion equation. The approach is based on automatically adapting and zooming a hidden fine mesh in the region where the sharp front is located. Preliminary results indicate that ZAHFMA used with finite element methods can handle advection-dispersion problems with Peclet numbers ranging from 0 to ∞. 5 refs., 2 figs
Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schofield, Samuel P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-06-21
A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation, with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-06-01
A new anisotropic hr-adaptive mesh technique, based on a discontinuous Galerkin/control volume discretization on unstructured meshes, has been applied to the modelling of multiscale transport phenomena. Unlike existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes.
Geostrophic balance preserving interpolation in mesh adaptive shallow-water ocean modelling
Maddison, James R; Farrell, Patrick E
2010-01-01
The accurate representation of geostrophic balance is an essential requirement for numerical modelling of geophysical flows. Significant effort is often put into the selection of accurate or optimal balance representation by the discretisation of the fundamental equations. The issue of accurate balance representation is particularly challenging when applying dynamic mesh adaptivity, where there is potential for additional imbalance injection when interpolating to new, optimised meshes. In the context of shallow-water modelling, we present a new method for preservation of geostrophic balance when applying dynamic mesh adaptivity. This approach is based upon interpolation of the Helmholtz decomposition of the Coriolis acceleration. We apply this in combination with a discretisation for which states in geostrophic balance are exactly steady solutions of the linearised equations on an f-plane; this method guarantees that a balanced and steady flow on a donor mesh remains balanced and steady after interpolation on...
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; focus on aerodynamic databases for parametric and optimization studies, requiring (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10⁵ mesh generations; (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
METHOD FOR ADAPTIVE MESH GENERATION BASED ON GEOMETRICAL FEATURES OF 3D SOLID
HUANG Xiaodong; DU Qungui; YE Bangyan
2006-01-01
In order to provide guidance for specifying the element size dynamically during adaptive finite element mesh generation, adaptive criteria are firstly defined according to the relationships between the geometrical features and the elements of the 3D solid. Various modes based on different datum geometrical elements, such as vertex, curve, surface, and so on, are then designed for generating locally refined mesh. With the guidance of the defined criteria, different modes are automatically selected and applied to the appropriate datum objects to program the element size in local special areas. As a result, the control information of element size is successfully programmed covering the entire domain based on the geometrical features of the 3D solid. A new algorithm based on Delaunay triangulation is then developed for generating 3D adaptive finite element mesh, in which the element size is dynamically specified to capture the geometrical features and suitable tetrahedron facets are selected to locate interior nodes continuously. As a result, an adaptive mesh with good-quality elements is generated. Examples show that the proposed method can be successfully applied to automatic adaptive finite element mesh generation based on the geometrical features of 3D solids.
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result suffers geometrical distortion and chordal error. Parts manufactured with such a file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to reduce this distortion globally, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is most suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
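Of the three schemes compared above, triangular midpoint subdivision is the simplest to state: each triangle is split into four by joining its edge midpoints. A minimal sketch follows (the function name is hypothetical; real STL processing would also merge duplicated vertices and recompute facet normals):

```python
def midpoint_subdivide(triangles):
    """One pass of triangular midpoint subdivision: every triangle, given
    as three (x, y, z) vertex tuples, is split into four sub-triangles by
    connecting its edge midpoints.  Each pass multiplies the facet count
    by four, which is why file size grows quickly with refinement."""
    def mid(p, q):
        return tuple((a + b) / 2.0 for a, b in zip(p, q))
    out = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out
```

Because each pass quadruples the triangle count while keeping the vertices on the original flat facets, midpoint subdivision refines the mesh without changing its shape; smoothing schemes such as Loop or modified butterfly additionally move vertices.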
Adaptive scheduling in cellular access, wireless mesh and IP networks
Nieminen, Johanna
2011-01-01
Networking scenarios in the future will be complex and will include fixed networks and hybrid Fourth Generation (4G) networks, consisting of both infrastructure-based and infrastructureless, wireless parts. In such scenarios, adaptive provisioning and management of network resources becomes of critical importance. Adaptive mechanisms are desirable since they enable a self-configurable network that is able to adjust itself to varying traffic and channel conditions. The operation of adaptive me...
Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaptation phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
On a Bisection Algorithm that Produces Conforming Locally Refined Simplicial Meshes
Hannukainen, A.; Korotov, S.; Křížek, Michal
Berlin : Springer-Verlag Berlin Heidelberg, 2008 - (Lirkov, I.; Margenov, S.; Waśniewski, J.), s. 571-579 ISBN 978-3-540-78825-6. ISSN 0302-9743. [LSSC 2007. International Conference on Large-Scale Scientific Computations /6./. Sozopol (BG), 05.06.2007-09.06.2007] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : finite element method * mesh density function * convergence Subject RIV: BA - General Mathematics
Adaptive local refinement and multi-level methods for simulating multiphasic flows
This thesis describes some numerical and mathematical aspects of incompressible multiphase flow simulations with a diffuse-interface Cahn-Hilliard / Navier-Stokes model (interfaces have a small but positive thickness). The space discretization is performed with a Galerkin formulation and the finite element method. The presence of different scales in the system (interfaces have a very small thickness compared to the characteristic lengths of the domain) suggests the use of a local adaptive refinement method. The algorithm that is introduced handles the non-conformities of the generated meshes implicitly, producing conforming finite element approximation spaces. It consists in refining basis functions instead of cells. The refinement of a basis function is made possible by the conceptual existence of a nested sequence of uniformly refined grids, from which 'parent-child' relationships are deduced, linking the basis functions of two consecutive refinement levels. Moreover, it is shown how this method can be exploited to build multigrid preconditioners: from a composite finite element approximation space, it is possible to rebuild, by 'coarsening', a sequence of auxiliary nested spaces which fits into the abstract multigrid framework. Concerning the time discretization, the study begins with the Cahn-Hilliard system. A semi-implicit scheme is proposed to remedy convergence failures of the Newton method used to solve this (nonlinear) system; it guarantees the decrease of the discrete free energy, ensuring the stability of the scheme. The existence and convergence of discrete solutions towards the weak solution of the system are shown. The study continues by providing an unconditionally stable time discretization of the complete Cahn-Hilliard / Navier-Stokes model. An important point is that this discretization does not strongly couple the Cahn-Hilliard and Navier-Stokes systems, allowing the two systems to be solved independently.
Finite element model for linear-elastic mixed mode loading using adaptive mesh strategy
(author not listed)
2008-01-01
An adaptive mesh finite element model has been developed to predict the crack propagation direction as well as to calculate the stress intensity factors (SIFs) under linear-elastic assumptions for mixed mode loading applications. The finite element mesh is generated using the advancing front method. In order to suit the requirements of the fracture analysis, the generation of the background mesh and the construction of singular elements have been added to the developed program. The adaptive remeshing process is carried out based on an a posteriori stress error norm scheme to obtain an optimal mesh. Previous works of the authors have proposed techniques for adaptive mesh generation of 2D cracked models. Facilitated by the singular elements, the displacement extrapolation technique is employed to calculate the SIFs. The fracture is modeled by the splitting node approach and the trajectory follows the successive linear extensions of each crack increment. The SIF values for two different case studies were estimated and validated by direct comparison with other researchers' work.
We present long-term-stable and convergent evolutions of head-on black-hole collisions and extraction of gravitational waves generated during the merger and subsequent ring-down. The new ingredients in this work are the use of fixed mesh-refinement and dynamical singularity excision techniques. We are able to carry out head-on collisions with large initial separations and demonstrate that our excision infrastructure is capable of accommodating the motion of the individual black holes across the computational domain as well as their merger. We extract gravitational waves from these simulations using the Zerilli-Moncrief formalism and find the ring-down radiation to be, as expected, dominated by the l=2, m=0 quasinormal mode. The total radiated energy is about 0.1% of the total Arnowitt-Deser-Misner mass of the system
A dynamic mesh refinement technique for Lattice Boltzmann simulations on octree-like grids
Neumann, Philipp
2012-04-27
In this contribution, we present our new adaptive Lattice Boltzmann implementation within the Peano framework, with special focus on nanoscale particle transport problems. With the continuum hypothesis no longer holding on these small scales, new physical effects - such as Brownian fluctuations - need to be incorporated. We explain the overall layout of the application, including memory layout and access, and briefly review the adaptive algorithm. The scheme is validated by different benchmark computations in two and three dimensions. An extension to dynamically changing grids and a spatially adaptive approach to fluctuating hydrodynamics, allowing for the thermalisation of the fluid in particular regions of interest, is proposed. Both dynamic adaptivity and adaptive fluctuating hydrodynamics are validated separately in simulations of particle transport problems. The application of this scheme to an oscillating particle in a nanopore illustrates the importance of Brownian fluctuations in such setups. © 2012 Springer-Verlag.
3D Simulation of Flow with Free Surface Based on Adaptive Octree Mesh System
Li Shaowu; Zhuang Qian; Huang Xiaoyun; Wang Dong
2015-01-01
The technique of adaptive tree meshes is an effective way to reduce computational cost through automatic adjustment of cell size according to necessity. In the present study, a 2D numerical N-S solver based on the adaptive quadtree mesh system was extended to a 3D one, in which a spatially adaptive octree mesh system and a multiple particle level set method were adopted for convenience in dealing with the air-water-structure multiple-medium coexisting domain. The stretching process of a dumbbell was simulated and the results indicate that the meshes are well adapted to the free surface. The collapsing process of a water column impinging on a circular cylinder was simulated and, from the results, it can be seen that the processes of fluid splitting and merging are properly simulated. The interaction of second-order Stokes waves with a square cylinder was simulated and the obtained drag force is consistent with the result given by Morison's wave force formula with the coefficient values of the stable drag component and the inertial force component being set as 2.54.
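The core of an adaptive quadtree (the 2-D analogue of the octree used above) can be sketched compactly: refine any cell that the free surface, represented here by a signed-distance level set `phi`, may cross. The refinement criterion and all names below are illustrative assumptions, not the paper's algorithm:

```python
def refine_quadtree(x0, y0, size, phi, max_depth, depth=0):
    """Sketch of adaptive quadtree meshing near an interface: a square
    cell is subdivided while the zero level set of `phi` may cross it
    (signed distance at the centre no larger than the cell half-diagonal)
    and the maximum depth is not reached.  Returns the leaf cells as
    (x0, y0, size, depth) tuples."""
    cx, cy = x0 + size / 2.0, y0 + size / 2.0
    half_diag = (size / 2.0) * 2**0.5
    if depth < max_depth and abs(phi(cx, cy)) <= half_diag:
        h = size / 2.0
        leaves = []
        for dx in (0.0, h):          # recurse into the four quadrants
            for dy in (0.0, h):
                leaves += refine_quadtree(x0 + dx, y0 + dy, h, phi,
                                          max_depth, depth + 1)
        return leaves
    return [(x0, y0, size, depth)]   # leaf: away from interface or max depth
```

Extending this to an octree simply replaces the four quadrants with eight octants and the half-diagonal with the half space-diagonal; the adaptivity logic is unchanged.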
STABILIZED FEM FOR CONVECTION-DIFFUSION PROBLEMS ON LAYER-ADAPTED MESHES
Hans-Görg Roos
2009-01-01
The application of a standard Galerkin finite element method to convection-diffusion problems leads to oscillations in the discrete solution; therefore, stabilization seems to be necessary. We discuss several recent stabilization methods, especially their combination with a Galerkin method on layer-adapted meshes. The supercloseness results obtained allow an improvement of the discrete solution using recovery techniques.
Towards a large-scale scalable adaptive heart model using shallow tree meshes
Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf
2015-10-01
Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.
The geometry of r-adaptive meshes generated using optimal transport methods
Budd, C. J.; Russell, R. D.; Walsh, E.
2015-02-01
The principles of mesh equidistribution and alignment play a fundamental role in the design of adaptive methods, and a metric tensor and mesh metric are useful theoretical tools for understanding a method's level of mesh alignment, or anisotropy. We consider a mesh redistribution method based on the Monge-Ampère equation which combines equidistribution of a given scalar density function with optimal transport. It does not involve explicit use of a metric tensor, although such a tensor must exist for the method, and an interesting question to ask is whether or not the alignment produced by the metric gives an anisotropic mesh. For model problems with a linear feature and with a radially symmetric feature, we derive the exact form of the metric, which involves expressions for its eigenvalues and eigenvectors. The eigenvectors are shown to be orthogonal and tangential to the feature, and the ratio of the eigenvalues (corresponding to the level of anisotropy) is shown to depend, both locally and globally, on the value of the density function and the amount of curvature. We thereby demonstrate how the optimal transport method produces an anisotropic mesh along a given feature while equidistributing a suitably chosen scalar density function. Numerical results are given to verify these results and to demonstrate how the analysis is useful for problems involving more complex features, including for a non-trivial time-dependent nonlinear PDE which evolves narrow and curved reaction fronts.
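The equidistribution principle discussed above is easiest to see in one dimension, where the redistributed mesh is simply the inverse of the cumulative density: each cell carries the same integral of the density function. A minimal sketch (function name and quadrature resolution are assumptions; the multidimensional Monge-Ampère machinery is not represented):

```python
import numpy as np

def equidistribute(rho, n_cells, a=0.0, b=1.0, n_quad=2000):
    """1-D equidistribution: choose nodes x_0..x_n on [a, b] so that each
    cell carries the same integral of the density rho, by inverting the
    cumulative integral of rho (trapezoidal quadrature)."""
    s = np.linspace(a, b, n_quad)
    # cumulative integral of rho on the fine quadrature grid
    cum = np.concatenate(
        [[0.0], np.cumsum(0.5 * (rho(s[1:]) + rho(s[:-1])) * np.diff(s))])
    # equal increments of cumulative density -> mesh nodes
    targets = np.linspace(0.0, cum[-1], n_cells + 1)
    return np.interp(targets, cum, s)
```

Where the density is large (e.g. near a sharp feature), equal increments of the cumulative integral are reached over short distances, so the nodes cluster there, which is exactly the equidistribution behavior the abstract analyzes in higher dimensions.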
Essentials of finite element modeling and adaptive refinement
Dow, John O
2012-01-01
Finite Element Analysis is a very popular, computer-based tool that uses a complex system of points called nodes to make a grid called a "mesh." The mesh contains the material and structural properties that define how the structure will react to certain loading conditions, allowing virtual testing and analysis of stresses or changes applied to the material or component design. This groundbreaking text extends the usefulness of finite element analysis by helping both beginners and advanced users alike. It simplifies, improves, and extends the finite element method while at the same t...
Multi-scale mesh saliency with local adaptive patches for viewpoint selection
Nouri, Anass; Charrier, Christophe; Lézoray, Olivier
2015-01-01
Our visual attention is attracted to specific areas of 3D objects (represented by meshes). This visual attention depends on the degree of saliency exposed by these areas. In this paper, we propose a novel multi-scale approach for detecting salient regions. To do so, we define a local surface descriptor based on patches of adaptive size filled in with a local height field. The single-scale saliency of a vertex is defined as its degree measure in the mesh with ed...
Marty, Nicolas C. M.; Tournassat, Christophe; Burnol, André; Giffaut, Eric; Gaucher, Eric C.
2009-01-01
Large quantities of cements and concretes need to be incorporated in geological disposal facilities for long-lived radwaste. An alkaline plume diffusing from an aged concrete (pH ˜ 12.5) through argillite-type rocks has been modelled, considering the feedback of porosity variations on transport properties, using the reactive transport code TOUGHREACT. The mineralogical composition of the argillite is modified at the interface with the concrete. Diffusion of cementitious elements leads to rapid and strong porosity occlusion in the argillite. Numerical results show that both reaction rates and spatial refinement affect mineralogical transformation pathways. The variations in porosity and the extension of the zone affected by the alkaline perturbation are compared at different times. The major effects of mineral precipitation under kinetic constraints, rather than local equilibrium, are a delay in the porosity clogging and an increase in the extension of the alkaline perturbation in the clay formation. A similar delay in porosity occlusion also appears for the coarsest spatial resolutions. A simulation as representative as possible of the temporal and spatial scales of cementation processes must therefore be supported by more comparative data, such as long-term experimental investigations or natural analogues.
A Simple Fault-Tolerant Adaptive and Minimal Routing Approach in 3-D Meshes
WU Jie (吴杰)
2003-01-01
In this paper we propose a sufficient condition for minimal routing in 3-dimensional (3-D) meshes with faulty nodes. It is based on earlier work by the author on minimal routing in 2-dimensional (2-D) meshes. Unlike many traditional models that assume all nodes know the global fault distribution or only adjacent fault information, our approach is based on the concept of limited global fault information. First, we propose a fault model called the faulty cube, in which all faulty nodes in the system are contained in a set of faulty cubes. Fault information is then distributed to a limited number of nodes while remaining sufficient to support minimal routing. The limited fault information collected at each node is represented by a vector called the extended safety level. The extended safety level associated with a node can be used to determine the existence of a minimal path from that node to a given destination. Specifically, we study the existence of minimal paths at a given source node, limited distribution of fault information, minimal routing, and deadlock-free and livelock-free routing. Our results show that any minimal routing that is partially adaptive can be applied in our model as long as the destination node meets a certain condition. We also propose a dynamic planar-adaptive routing scheme that offers better fault tolerance and adaptivity than the planar-adaptive routing scheme in 3-D meshes. Our approach is the first attempt to address adaptive and minimal routing in 3-D meshes with faulty nodes using limited fault information.
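As an illustration of the minimal-path existence question this abstract studies (sketched here in the simpler 2D setting the work builds on, not the paper's extended-safety-level algorithm), one can search only over moves that strictly decrease the Manhattan distance to the destination, so any path found is necessarily minimal. The function name and fault-set representation are illustrative assumptions.

```python
from collections import deque

def minimal_path_exists(src, dst, faulty):
    """Does a minimal (shortest Manhattan) path from src to dst exist that
    avoids all faulty nodes?  Only moves that strictly decrease the
    distance to dst are explored, so any path found is minimal."""
    def monotone_moves(p):
        x, y = p
        if x != dst[0]:
            yield (x + (1 if dst[0] > x else -1), y)
        if y != dst[1]:
            yield (x, y + (1 if dst[1] > y else -1))

    seen, queue = {src}, deque([src])
    while queue:
        p = queue.popleft()
        if p == dst:
            return True
        for n in monotone_moves(p):
            if n not in faulty and n not in seen:
                seen.add(n)
                queue.append(n)
    return False

# two faulty nodes still leave room for a minimal detour
print(minimal_path_exists((0, 0), (3, 3), {(1, 1), (2, 1)}))             # True
# a full anti-diagonal barrier blocks every monotone path
print(minimal_path_exists((0, 0), (3, 3), {(i, 3 - i) for i in range(4)}))  # False
```

The second case shows why limited fault information matters: a convex fault region can be certified as avoidable locally, whereas an arbitrary barrier requires global knowledge to rule out all monotone paths.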
DISCONTINUITY-CAPTURING FINITE ELEMENT COMPUTATION OF UNSTEADY FLOW WITH ADAPTIVE UNSTRUCTURED MESH
DONG Genjin; LU Xiyun; ZHUANG Lixian
2004-01-01
A discontinuity-capturing finite element method (FEM) scheme is proposed. The unstructured-grid technique, combined with a new type of adaptive mesh approach, is developed for both compressible and incompressible unsteady flows, and exhibits the capability of accurately capturing shock waves and/or thin shear layers in unsteady viscous flow at high Reynolds number. In particular, a new testing variable, the disturbed kinetic energy E, is suggested and used in the adaptive mesh computation; it is universally applicable to the capturing of both shock waves and shear layers in inviscid flow and in viscous flow at high Reynolds number. Based on several calculated examples, this approach has proved to be effective and efficient for the calculation of compressible and incompressible flows.
Upper and lower bounds in limit analysis: adaptive meshing strategies and discontinuous loading
Muñoz Romero, José; Bonet Carbonell, Javier; Huerta, Antonio; Peraire Guitart, Jaume
2008-01-01
This is the pre-peer reviewed version of the following article: Muñoz, José J. [et al.]. Upper and lower bounds in limit analysis: adaptive meshing strategies and discontinuous loading. "International journal for numerical methods in engineering", Agost 2008, vol. 77, núm. 4, p. 471-501., which has been published in final form at http://www3.interscience.wiley.com/journal/121370765/abstract Peer Reviewed
TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh
Schnack, D.D.; Lottati, I.; Mikic, Z. [Science Applications International Corp., San Diego, CA (United States)]; and others
1995-07-01
The authors describe TRIM, an MHD code which uses a finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.
An adaptive hierarchical particle-mesh code with isolated boundary conditions
Gelato, Sergio; Chernoff, David F.; Wasserman, Ira
1996-01-01
This article describes a new, fully adaptive Particle-Multiple-Mesh numerical simulation code developed primarily for simulations of small regions (such as a group of galaxies) in a cosmological context. It integrates the equations of motion of a set of particles subject to their mutual gravitational interaction and to an arbitrary external field. The interactions are computed using a hierarchy of nested grids constructed anew at each integration step to enhance the spatial resolution in high...
Walko, R. L.; Medvigy, D.; Avissar, R.
2013-12-01
regular model grid or (2) estimate the essential elements of the convective response from lookup table entries that were previously generated for similar environments using method (1). Obviously, method (2) is extremely efficient while method (1) is computationally intensive, so the key is to construct clever algorithms that enable method (2) to be used as often as possible. The method is self-learning in that as a model simulation progresses, the lookup table can grow and the search algorithm for selecting the best table entries can adapt to the growing table. We demonstrate applications of this method on the variable-resolution hexagonal grid of the Ocean-Land-Atmosphere Model (OLAM) for both idealized and realistic environments.
A Schroedinger eigenvalue problem is solved for the 2D quantum simple harmonic oscillator using a finite element discretization of real space within which elements are adaptively spatially refined. We compare two competing methods of adaptively discretizing the real-space grid on which computations are performed, without modifying the standard polynomial basis set traditionally used in finite element interpolations: (i) an application of the Kelly error estimator, and (ii) a refinement based on the local potential level. When the performance of these methods is compared to standard uniform global refinement, we find that they significantly improve the total time spent in the eigensolver.
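The Kelly-type indicator this entry mentions is built from jumps of the solution gradient across element faces. A minimal 1D sketch for piecewise-linear elements (a generic illustration, not the study's actual implementation; resampling an exact function stands in for re-solving the FE problem):

```python
import numpy as np

def kelly_indicators(x, u):
    """Kelly-type error indicators for a 1D piecewise-linear solution:
    per-element, accumulate squared jumps of the gradient at the element
    faces, scaled by the element size h."""
    grad = np.diff(u) / np.diff(x)     # constant gradient on each element
    jumps = np.abs(np.diff(grad))      # gradient jump at interior nodes
    h = np.diff(x)
    eta = np.zeros(len(h))
    eta[:-1] += 0.5 * jumps ** 2       # each face contributes to both
    eta[1:] += 0.5 * jumps ** 2        # neighbouring elements
    return np.sqrt(h * eta)

def refine(x, eta, frac=0.3):
    """Bisect the elements with the largest indicators."""
    flagged = np.argsort(eta)[-max(1, int(frac * len(eta))):]
    mids = 0.5 * (x[flagged] + x[flagged + 1])
    return np.sort(np.concatenate([x, mids]))

x = np.linspace(0.0, 1.0, 17)
u = np.tanh(20 * (x - 0.5))            # steep interior layer
for _ in range(3):
    x = refine(x, kelly_indicators(x, u))
    u = np.tanh(20 * (x - 0.5))        # stand-in for a new FE solve
# elements end up concentrated around the layer at x = 0.5
```

The contrast drawn in the abstract is that a potential-based criterion would flag elements from the known potential alone, while the Kelly estimator (as above) reacts to the computed solution.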
Impact of space-time mesh adaptation on solute transport modeling in porous media
Esfandiar, Bahman; Porta, Giovanni; Perotto, Simona; Guadagnini, Alberto
2015-02-01
We implement a space-time grid adaptation procedure to efficiently improve the accuracy of numerical simulations of solute transport in porous media in the context of model parameter estimation. We focus on the Advection Dispersion Equation (ADE) for the interpretation of nonreactive transport experiments in laboratory-scale heterogeneous porous media. When compared to a numerical approximation based on a fixed space-time discretization, our approach is grounded in a joint automatic selection of the spatial grid and the time step to capture the main (space-time) system dynamics. Spatial mesh adaptation is driven by an anisotropic recovery-based error estimator which enables us to properly select the size, shape, and orientation of the mesh elements. Adaptation of the time step is performed through an ad hoc local reconstruction of the temporal derivative of the solution via a recovery-based approach. The impact of the proposed adaptation strategy on the ability to provide reliable estimates of the key parameters of an ADE model is assessed on the basis of experimental solute breakthrough data measured following tracer injection in a nonuniform porous system. Model calibration is performed in a Maximum Likelihood (ML) framework, relying on the representation of the ADE solution through a generalized Polynomial Chaos Expansion (gPCE). Our results show that the proposed anisotropic space-time grid adaptation leads to ML parameter estimates and to model results of markedly improved quality when compared to classical inversion approaches based on a uniform space-time discretization.
Adaptive resolution refinement for high-fidelity continuum parameterizations
Anderson, J.W.; Khamayseh, A. [Los Alamos National Lab., NM (United States)]; Jean, B.A. [Mississippi State Univ., Starkville, MS (United States)]
1996-10-01
This paper describes an algorithm that adaptively samples a parametric continuum so that a fidelity metric is satisfied. Using the divide-and-conquer strategy of adaptive sampling eliminates the guesswork of traditional uniform parameterization techniques. The space and time complexity of the parameterization is increased in a controllable manner so that a desired fidelity is obtained.
FINITE VOLUME METHODS AND ADAPTIVE REFINEMENT FOR GLOBAL TSUNAMI PROPAGATION AND LOCAL INUNDATION
David L. George
2006-01-01
The shallow water equations are a commonly accepted approximation governing tsunami propagation. Numerically capturing certain features of local tsunami inundation requires solving these equations in their physically relevant conservative form, as integral conservation laws for depth and momentum. This form of the equations presents challenges when trying to numerically model global tsunami propagation, so often the best numerical methods for the local inundation regime are not suitable for the global propagation regime. The different regimes of tsunami flow belong to different spatial scales as well, and require correspondingly different grid resolutions. The long wavelength of deep ocean tsunamis requires a large global-scale computing domain, yet near the shore the propagating energy is compressed and focused by bathymetry in unpredictable ways. This can lead to large variations in energy and run-up even over small localized regions. We have developed a finite volume method to deal with the diverse flow regimes of tsunamis. These methods are well suited for the inundation regime: they are robust in the presence of bores, steep gradients, and drying regions, and can capture the inundating shoreline and run-up features. Additionally, these methods are well-balanced, meaning that they can appropriately model global propagation. To deal with the disparate spatial scales, we have used adaptive refinement algorithms originally developed for gas dynamics, where steep variation is often highly localized at a given time but moves throughout the domain. These algorithms allow evolving Cartesian sub-grids that can move with the propagating waves and highly resolve local inundation of impacted areas in a single global-scale computation. Because the dry regions are part of the computing domain, simple rectangular Cartesian grids eliminate the need for complex shoreline-fitted mesh generation.
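The conservative finite-volume form this abstract insists on (integral conservation laws for depth and momentum) can be illustrated with a toy 1D shallow water solver using a Rusanov (local Lax-Friedrichs) flux. This is a generic sketch under simple assumptions (fixed boundary cells, dam-break data), not the well-balanced scheme of the paper:

```python
import numpy as np

g = 9.81  # gravitational acceleration

def swe_flux(h, hu):
    """Physical flux of the 1D shallow water equations, conservative form."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov_step(h, hu, dx, dt):
    """One finite-volume step with the Rusanov flux; updates interior cells
    from interface fluxes, so depth and momentum are conserved by design."""
    q = np.array([h, hu])
    f = swe_flux(h, hu)
    c = np.abs(hu / h) + np.sqrt(g * h)            # local wave speed estimate
    a = np.maximum(c[:-1], c[1:])                  # speed at each interface
    fnum = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * a * (q[:, 1:] - q[:, :-1])
    qn = q.copy()
    qn[:, 1:-1] -= dt / dx * (fnum[:, 1:] - fnum[:, :-1])
    return qn[0], qn[1]

# dam-break initial data on [0, 1]
N = 200
x = np.linspace(0, 1, N)
h = np.where(x < 0.5, 2.0, 1.0)
hu = np.zeros(N)
dx = x[1] - x[0]
for _ in range(100):
    dt = 0.4 * dx / (np.abs(hu / h) + np.sqrt(g * h)).max()  # CFL condition
    h, hu = rusanov_step(h, hu, dx, dt)
```

Because cell averages are only changed by flux differences, total depth is conserved exactly until the waves reach the frozen boundary cells, which is the property that makes such schemes suitable for capturing bores and run-up.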
Domain adaptation using stock market prices to refine sentiment dictionaries
Moore, Andrew; Rayson, Paul Edward; Young, Steven Eric
2016-01-01
As part of a larger project where we are examining the relationship and influence of news and social media on stock price, here we investigate the potential links between the sentiment of news articles about companies and stock price change of those companies. We describe a method to adapt sentiment word lists based on news articles about specific companies, in our case downloaded from the Guardian. Our novel approach here is to adapt word lists in sentiment classifiers for news articles base...
A three-dimensional nodal method with Channel-wise Intrinsic Axial Mesh Adaptation
Highlights: • CIAMA solves axial heterogeneity without iterative node re-homogenization. • CIAMA can easily resolve the control rod cusping problem. • CIAMA results show great potential for 3-D pin-by-pin calculation.
Abstract: In a conventional coarse mesh nodal method, the more accurate treatment of intra-nodal axial heterogeneity requires iterative axial node re-homogenization using axial flux profiles either reconstructed from the core-wise coarse mesh solution or obtained from a channel-wise axial fine-mesh calculation. In this paper a new nodal method formulation, using Channel-wise Intrinsic Axial Mesh Adaptation (CIAMA), is proposed to solve this problem in a more fundamental way. For a given transverse (radial) leakage, a rigorous sub-node heterogeneous calculation is performed along each axial channel with the explicit axial heterogeneity within each coarse axial node. However, the transverse leakage between the axial channels is still calculated on the basis of coarse axial nodes, using the axially averaged radial current in each coarse axial node. Since the coupling between the axial channels is through the coarse axial nodes, it is not necessary to match the boundaries of the axial sub-nodes of neighboring axial channels in order to incorporate the axial sub-node calculation as an intrinsic part of the whole-core global calculation. Therefore, in the CIAMA nodal method, each axial channel is allowed to have its own sub-nodes adapting to its own axial heterogeneity variation. The CIAMA method has been implemented in the commercial code EGRET, which is used to qualify CIAMA. Excellent results for modeling fuel grids and control rod movement are presented. Application of CIAMA to three-dimensional pin-by-pin core calculation is also discussed and demonstrated to work well.
FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle
Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.
2010-01-01
This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID, while the adaptation was performed using the Computational Fluid Dynamics (CFD) code FUN3D with a feature-based adaptation software tool. GridTool was developed by ViGYAN, Inc., while the last three software suites were developed by NASA Langley Research Center. The feature-based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary; it does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature-based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature-based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.
Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan
2016-09-01
This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes driven by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method possibly stepping in. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time to overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on simulations of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open-source finite-element-oriented programming language, as well as an outer 3D remeshing module.
Guo, Zhikui; Chen, Chao; Tao, Chunhui
2016-04-01
Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides on the Southwest Indian Ridge (SWIR), acquiring both gravity data and bathymetry data along the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and its gradient can be used to extract unknown density structure information (e.g., crust thickness) beneath the surface of the earth, but they contain the effect of all mass below the observation point. Therefore, how to obtain the accurate gravity and gravity-gradient effect of known density structure (e.g., terrain) has been a key issue. Using the bathymetry data or the ETOPO1 model (http://www.ngdc.noaa.gov/mgg/global/global.html) at full resolution to calculate the terrain effect would require too much computation time. We aim to develop an effective method that takes less time but still yields the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, based on the work of Tsoulis (2012). According to the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge global topography data and multi-beam bathymetry data. The local coarsening, or size of the mesh, depends on a user-defined accuracy and the terrain variation (Davis et al., 2011). To depict the terrain better, triangular and rectangular surface elements are used in the fine and coarse meshes, respectively. This strategy can also be applied in spherical coordinates at large-region and global scales. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), mantle Bouguer anomaly (MBA) and their vertical gradients on the SWIR. Further, we compared the result with previous results in the literature. Both synthetic model
Moving mesh generation with a sequential approach for solving PDEs
In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are simultaneously solved, and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution of the physical and mesh equations typically suffers from long computation times due to the highly nonlinear coupling between the two equations. Moreover, the extended system (physical and mesh equations) may be sensitive to tuning parameters such as a temporal relaxation factor. It is therefore useful to design a sequential approach that combines the advantages of the static adaptive grid method (local refinement by adding/deleting meshes at a discrete time level) with the efficiency of the dynamic adaptive grid method (or moving mesh method), where the number of meshes is not changed. For illustration, a phase change problem is solved with the decomposition algorithm.
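The equidistribution principle behind moving mesh methods (equal integral of the monitor function over every cell) can be sketched in 1D by inverting the cumulative integral of the monitor, in the style of de Boor's algorithm. The monitor choice and names here are illustrative, not the paper's formulation:

```python
import numpy as np

def equidistribute(x, monitor, n):
    """Return an n-point mesh that equidistributes the monitor function:
    the integral of the monitor is the same over every new cell."""
    m = monitor(x)
    # cumulative integral of the monitor via the trapezoidal rule
    cum = np.concatenate(
        [[0.0], np.cumsum(0.5 * (m[:-1] + m[1:]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n)
    # invert the (strictly increasing) cumulative integral
    return np.interp(targets, cum, x)

# arc-length monitor for a solution with a steep front at x = 0.5
du = lambda x: 30 / np.cosh(30 * (x - 0.5)) ** 2   # derivative of tanh front
monitor = lambda x: np.sqrt(1.0 + du(x) ** 2)

x0 = np.linspace(0.0, 1.0, 401)       # fine background mesh
xm = equidistribute(x0, monitor, 41)  # 41 nodes, clustered at the front
```

Because the node count is fixed and only node positions change, this is the r-refinement ("moving mesh") half of the comparison; the h-refinement half would instead add or delete nodes where the monitor is large.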
Adaptive Fault-Tolerant Routing in 2D Mesh with Cracky Rectangular Model
Yi Yang
2014-01-01
This paper focuses on routing in two-dimensional mesh networks. We propose a novel faulty-block model, the cracky rectangular block, for fault-tolerant adaptive routing. All faulty nodes and faulty links are surrounded by this type of block, which is a convex structure, in order to avoid routing livelock. Additionally, the model constructs an interior spanning forest for each block in order to keep in touch with the nodes inside the block. The block construction procedure is dynamic and fully distributed, and the construction algorithm is simple and easy to implement. The block is fully adaptive: it dynamically adjusts its scale in accordance with the situation of the network, whether faults emerge or recover, without shutting down the system. Based on this model, we also develop a distributed fault-tolerant routing algorithm, and we give a formal proof that messages always reach their destinations if and only if the destination nodes remain connected to the mesh network. The new model and routing algorithm therefore maximize the availability of nodes in the network, a noticeable overall improvement in the fault tolerance of the system.
Solution adaptive triangular meshes with application to the simulation of plasma equilibrium
A new discrete Laplace operator is constructed on a local mesh molecule, second-order accurate on symmetric cell regions, based on local Taylor series expansions. This discrete Laplacian is then compared to the one commonly used in the literature. A truncation error analysis of gradient and Laplace operators calculated at triangle centroids reveals that the maximum bounds of their truncation errors are minimized on equilateral triangles, for a fixed triangle perimeter. A new adaptive strategy on arbitrary triangular grids is developed in which a uniform grid is defined with respect to the solution surface, as opposed to the x,y plane. Departures from mesh uniformity arise from a spatially dependent mean curvature of the solution surface. The power of this new adaptive technique is applied to the problem of finding free-boundary plasma equilibria within the context of MHD. The geometry is toroidal, and axisymmetry in the toroidal direction is assumed. We are led to conclude that the grid should move, not towards regions of high curvature of magnetic flux, but rather towards regions of greater toroidal current density. This has a direct bearing on the accuracy with which the Grad-Shafranov equation is approximated.
Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow
Wood, William Alfred, III
production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature-clustering nature of the local, anisotropic unstructured adaption strategy, based upon a posteriori error estimation, that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting-mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing, since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.
This article presents a method for goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, keff, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for keff with directional dependence. General error estimators are derived for any given functional of the flux and applied to keff to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual, respectively. The Hessian is used as an approximation of the interpolation error in the solution, which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit of representing the flux from each energy group on a specifically optimised mesh. The keff goal-based adaptive method was applied to three examples, which illustrate the superior accuracy that can be obtained in criticality problems.
Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer
Meliani, Zakaria; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri
2016-01-01
(Abridged) We here continue our effort to model the behaviour of matter orbiting or accreting onto a generic black hole by developing a new numerical code employing advanced techniques geared to solve the equations of general-relativistic hydrodynamics. The new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of AMR techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to compute accurately the electromagnetic emissions from such accretion flows. We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry, performed either in 2D or 3D. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black hole binary interacting with the surrounding ...
Teyssier, R.; Fromang, S.; Dormy, E.
2006-01-01
We propose to extend the well-known MUSCL-Hancock scheme for Euler equations to the induction equation modeling the magnetic field evolution in kinematic dynamo problems. The scheme is based on an integral form of the underlying conservation law which, in our formulation, results in a ``finite-surface'' scheme for the induction equation. This naturally leads to the well-known ``constrained transport'' method, with additional continuity requirement on the magnetic field representation. The second ingredient in the MUSCL scheme is the predictor step that ensures second order accuracy both in space and time. We explore specific constraints that the mathematical properties of the induction equations place on this predictor step, showing that three possible variants can be considered. We show that the most aggressive formulations (referred to as C-MUSCL and U-MUSCL) reach the same level of accuracy as the other one (referred to as Runge-Kutta), at a lower computational cost. More interestingly, these two schemes a...
Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement
Layton, A.T. [Univ. of North Carolina, Dept. of Mathematics, Chapel Hill, North Carolina (United States)]. E-mail: layton@amath.unc.edu
2004-07-01
In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)
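The integral deferred correction idea described above (raise the order of a low-order base scheme by repeatedly solving an error equation driven by a quadrature of the residual) can be sketched for a fully explicit base scheme. This toy is not the paper's multi-implicit splitting; the node count, sweep count, and names are illustrative:

```python
import numpy as np

def idc_step(f, t0, y0, dt, nodes=4, sweeps=3):
    """One step of explicit integral deferred correction for y' = f(t, y).

    A provisional forward-Euler solution on equispaced subnodes is improved
    by `sweeps` correction sweeps; each sweep raises the formal order by one,
    using a polynomial quadrature of f along the previous iterate."""
    ts = t0 + np.linspace(0.0, dt, nodes)
    tau = ts - ts[0]                       # local times, for conditioning
    y = np.empty(nodes)
    y[0] = y0
    for m in range(nodes - 1):             # provisional forward Euler
        y[m + 1] = y[m] + (ts[m + 1] - ts[m]) * f(ts[m], y[m])
    for _ in range(sweeps):
        fy = np.array([f(t, v) for t, v in zip(ts, y)])
        # integrate the degree-(nodes-1) interpolant of f over each subinterval
        P = np.polyint(np.polyfit(tau, fy, nodes - 1))
        I = np.polyval(P, tau[1:]) - np.polyval(P, tau[:-1])
        ynew = np.empty(nodes)
        ynew[0] = y0
        for m in range(nodes - 1):         # Euler sweep on the error equation
            h = ts[m + 1] - ts[m]
            ynew[m + 1] = ynew[m] + h * (f(ts[m], ynew[m]) - f(ts[m], y[m])) + I[m]
        y = ynew
    return y[-1]

# y' = -y, y(0) = 1  ->  y(1) = exp(-1)
f = lambda t, y: -y
y, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    y = idc_step(f, t, y, dt)
    t += dt
err = abs(y - np.exp(-1.0))  # far smaller than forward Euler's error
```

In the multi-implicit variant of the paper, the Euler sweeps above would be replaced by explicit substeps for advection and separate implicit substeps for diffusion and reaction, with the correction procedure reducing the splitting error between them.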
An adaptive grid refinement strategy for the simulation of negative streamers
The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles, including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tip of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the decoupling of the grids for the continuity equations from those for the Poisson equation. Standard refinement methods, in which the refinement criterion is based on local error monitors, fail due to the pulled character of the streamer front, which propagates into a linearly unstable state. We present a refinement method which deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented; they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers.
Goal functional evaluations for phase-field fracture using PU-based DWR mesh adaptivity
Wick, Thomas
2016-06-01
In this study, a posteriori error estimation and goal-oriented mesh adaptivity are developed for phase-field fracture propagation. Goal functionals are computed with the dual-weighted residual (DWR) method, which is realized by a recently introduced novel localization technique based on a partition-of-unity (PU). This technique is straightforward to apply since the weak residual is used. The influence of neighboring cells is gathered by the PU. Consequently, neither strong residuals nor jumps over element edges are required. Therefore, this approach facilitates the application of the DWR method to coupled (nonlinear) multiphysics problems such as fracture propagation. These developments then allow for a systematic investigation of the discretization error for certain quantities of interest. Specifically, our focus on the relationship between the phase-field regularization and the spatial discretization parameter in terms of goal functional evaluations is novel.
van der Holst, B; Sokolov, I V; Powell, K G; Holloway, J P; Myra, E S; Stout, Q; Adams, M L; Morel, J E; Drake, R P
2011-01-01
We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block-adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux-limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures, and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) linearly advect the radiation in frequency-logarithm space, and (3) implicitly solve the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problem...
An adaptive hierarchical particle-mesh code with isolated boundary conditions
Gelato, S; Wasserman, I M; Gelato, Sergio; Chernoff, David F.; Wasserman, Ira
1996-01-01
This article describes a new, fully adaptive Particle-Multiple-Mesh numerical simulation code developed primarily for simulations of small regions (such as a group of galaxies) in a cosmological context. It integrates the equations of motion of a set of particles subject to their mutual gravitational interaction and to an arbitrary external field. The interactions are computed using a hierarchy of nested grids constructed anew at each integration step to enhance the spatial resolution in high-density regions of interest. Significant effort has gone into supporting isolated boundary conditions at the top grid level. This makes our method also applicable to non-cosmological problems, at the cost of some complications which we discuss. We point out the implications of some differences between our approach and those of other authors of similar codes, in particular with respect to the handling of the interface between regions of different spatial resolution. We present a selection of tests performed to verify the ...
6th International Meshing Roundtable '97
White, D.
1997-09-01
The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include, but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special-purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.
A combined procedure comprising a two-dimensional Delaunay mesh generation algorithm and an adaptive remeshing technique with a higher-order compressible flow solver is presented. A pseudo-code procedure is described for the adaptive remeshing technique. A flux-difference splitting scheme with a modified multidimensional dissipation for high-speed compressible flow analysis on unstructured meshes is proposed. The scheme eliminates nonphysical flow solutions such as the spurious bump of the carbuncle phenomenon observed at the bow shock of the flow over a blunt body and the oscillation in the odd-even grid perturbation in a straight duct for Quirk's odd-even decoupling test. The proposed scheme is further extended to achieve higher-order spatial and temporal solution accuracy. The performance of the combined procedure is evaluated on unstructured triangular meshes by solving several steady-state and transient high-speed compressible flow problems.
Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.
2012-05-08
This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and the ability to simulate a wide range of scales very well, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited. 3a. EULAG Advances EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates. In particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the
Padmanabhan, R.; Oliveira, M. C.; Baptista, A. J.; Alves, J. L.; Menezes, L. F.
2007-05-01
The springback phenomenon associated with the elastic properties of sheet metals makes the design of forming dies a complex task. Thus, to develop consistent algorithms for springback compensation, an accurate prediction of the amount of springback is mandatory. Numerical simulation using the finite element method is consensually the only feasible method to predict springback. However, springback prediction is a very complicated task and highly sensitive to various numerical parameters of the finite elements (FE), such as: type, order, integration scheme, shape and size, as well as the time integration formulae and the unloading strategy. All these numerical parameters make the numerical simulation of springback more sensitive to numerical tolerances than the forming operation. In the case of an unconstrained cylindrical bending, the in-plane to thickness FE size ratio is more relevant than the number of FE layers through-thickness for the numerical prediction of final stress and strain states, variables of paramount importance for an accurate springback prediction. The aim of the present work is to evaluate the influence of the refinement of a 3-D FE mesh, namely the in-plane mesh refinement and the number of through-thickness FE layers, in springback prediction. The selected example corresponds to the first stage of the "Numisheet'05 Benchmark #3", which consists basically of the sheet forming of a channel section in an industrial-scale channel draw die. The physical drawbeads are accurately taken into account in the numerical model in order to accurately reproduce their influence during the forming process simulation. FEM simulations were carried out with the in-house code DD3IMP. Solid finite elements were used. They are recommended for accuracy in FE springback simulation when the ratio between the tool radius and blank thickness is lower than 5-6. In the selected example the drawbead radius is 4.0 mm. The influence of the FE mesh refinement in springback prediction is...
Steger, J. L.; Dougherty, F. C.; Benek, J. A.
1983-01-01
A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
Turinsky, Paul
2015-02-09
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena that differ in prediction fidelity. If the highest-fidelity model is judged to always provide or exceed the desired fidelity, and if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and lower-fidelity models, then one can select the model fidelity that just provides the desired QoI accuracy. Assuming lower-fidelity models require fewer computational resources, computational efficiency can be realized in this manner, provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest-fidelity model determined using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower-fidelity thermal-hydraulic model was based upon the same equations as used for the highest-fidelity model but now with coarse spatial...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BASED MODELS
Erdal Özbay
2013-11-01
This study proposes the 3D modelling of real-world objects in a steady state from multiple point-cloud (PCL) depth scans taken with a sensing camera, together with the application of a smoothing algorithm. A polygon structure, constituted by the point-cloud coordinates (x, y, z) corresponding to the position of the 3D model in space and obtained by connecting nodal points via triangulation, is utilized for the representation of the 3D models. Gaussian smoothing and the developed methods are applied to the mesh formed by merging these polygons, and a new mesh simplification and augmentation algorithm is suggested for the 3D modelling. The mesh formed by merging the polygons can be displayed in a more compact, smooth and fluent way. The study shows that the applied triangulation and smoothing methods for 3D modelling yield fast and robust mesh structures compared with existing methods; moreover, no remeshing is necessary for refinement and reduction.
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)
1998-01-01
The work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines, applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000. Computing time has been secured on all these platforms. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not been previously used by the CFD community. Since the technique offers on-the-fly partitioning, it offers a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic, it permits the job to be repartitioned without global communication in cases where additional processors become available after the simulation has begun, or in cases where dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines, which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-10-01
An integrated method of advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems; it is based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids using a local nesting technique, the anisotropic hr-adaptive model has the advantage of being able to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.
We consider the iterative reconstruction of both the internal geometry and the values of an inhomogeneous acoustic refraction index through a piecewise constant approximation. In this context, we propose two enhancements intended to reduce the number of parameters used in reconstruction, while preserving accuracy. This is achieved through the use of geometrical information obtained from a previously developed defect localization method. The first enhancement consists of a preliminary selection of relevant parameters, while the second one is an adaptive refinement to enhance precision with a low number of parameters. Each of them is numerically illustrated.
Mesh Generation and Adaption for High Reynolds Number RANS Computations Project
National Aeronautics and Space Administration — This proposal offers to provide NASA with an automatic mesh generator for the simulation of aerodynamic flows using Reynolds-Averaged Navier-Stokes (RANS) models....
Mesh Generation and Adaption for High Reynolds Number RANS Computations Project
National Aeronautics and Space Administration — The innovation of our Phase II STTR program is to develop and provide to NASA automatic mesh generation software for the simulation of fluid flows using...
Zheng, H. W.; Shu, C.; Chew, Y. T.
2008-07-01
In this paper, an object-oriented, quadrilateral-mesh-based solution-adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured meshes. It is also extended to second-order accuracy by using MUSCL extrapolation. The node, edge and cell objects are arranged in such an object-oriented manner that each of them inherits from a basic object. A custom doubly linked list is designed to manage these objects, so that inserting new objects and removing existing ones (nodes, edges and cells) are independent of the number of objects, with O(1) complexity. In addition, cells at different levels are stored in separate lists, which avoids the recursive calculation of solutions on mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, as compared to other cell-edge adaptive methods, the separation of nodes reduces the memory required for redundant nodes, especially in cases where the number of levels is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include a vortex evolution problem, an interface-only problem on structured and unstructured meshes, an underwater bubble explosion, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that it is feasible to apply the method to compressible multi-fluid flows with a large density ratio (1000) and a strong shock wave (pressure ratio of 10,000) interacting with the interface.
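The O(1) object-management claim rests on a standard doubly linked list: given a pointer to a cell, insertion and removal only relink its two neighbours, regardless of list length. A minimal sketch, with class and field names that are illustrative assumptions rather than the paper's data structure:

```python
# Doubly linked list managing mesh objects (here, cells) with O(1)
# insert and remove, independent of how many objects exist.

class Cell:
    def __init__(self, cid, level=0):
        self.cid, self.level = cid, level
        self.prev = self.next = None

class CellList:
    def __init__(self):
        self.head = self.tail = None
        self.count = 0

    def insert(self, cell):
        """O(1): append at the tail."""
        if self.tail is None:
            self.head = self.tail = cell
        else:
            cell.prev, self.tail.next = self.tail, cell
            self.tail = cell
        self.count += 1

    def remove(self, cell):
        """O(1): relink the cell's neighbours around it."""
        if cell.prev: cell.prev.next = cell.next
        else:         self.head = cell.next
        if cell.next: cell.next.prev = cell.prev
        else:         self.tail = cell.prev
        cell.prev = cell.next = None
        self.count -= 1

cells = CellList()
a, b, c = Cell(0), Cell(1), Cell(2)
for cell in (a, b, c):
    cells.insert(cell)
cells.remove(b)  # e.g. coarsening removes a cell without any search
print(cells.count, cells.head.cid, cells.tail.cid)  # → 2 0 2
```

Keeping one such list per refinement level, as the abstract describes, then lets the solver sweep only the leaf cells of each level without recursing through non-leaf ancestors.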
Xuthos: A Discontinuous Galerkin Transport Solver For NEWT Based On Unstructured Triangular Meshes
A transport solver has been developed based on a Discontinuous Galerkin Finite Element formulation with linear and quadratic shape functions. The data structure is general enough to allow for spatial mesh adaptivity. The DGFEM formalism is particularly useful for seamlessly treating problems with locally refined meshes. In the multigroup context, the spatial meshes can be made group-dependent to account for the smoothness of each component of the multigroup scalar flux. Numerical results validate this approach.
A computational fluid dynamics model with anisotropic mesh adaptivity is used to investigate coolant flow and heat transfer in pebble bed reactors. A novel method for implicitly incorporating solid boundaries based on multi-fluid flow modelling is adopted. The resulting model is able to resolve and simulate flow and heat transfer in randomly packed beds, regardless of the actual geometry, starting off with arbitrarily coarse meshes. The model is initially evaluated using an orderly stacked square channel of channel-height-to-particle diameter ratio of unity for a range of Reynolds numbers. The model is then applied to the face-centred cubical geometry. Coolant flow and heat transfer patterns are investigated.
Li, Xianping
2010-01-01
Heterogeneous anisotropic diffusion problems arise in various areas of science and engineering including plasma physics, petroleum engineering, and image processing. Standard numerical methods can produce spurious oscillations when they are used to solve those problems. A common approach to avoid this difficulty is to design a proper numerical scheme and/or a proper mesh so that the numerical solution satisfies a discrete maximum principle (DMP), the discrete counterpart of the maximum principle satisfied by the continuous solution. A well-known mesh condition for DMP satisfaction by the linear finite element solution of isotropic diffusion problems is the non-obtuse angle condition, which requires the dihedral angles of mesh elements to be non-obtuse. In this paper, a generalization of the condition, the so-called anisotropic non-obtuse angle condition, is developed for the finite element solution of heterogeneous anisotropic diffusion problems. The new condition is essentially the same as the existing one except that the dihedral ...
Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory
2009-01-01
We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of the error to N_h^{-1/2}, which are the optimal asymptotics. The methodology is verified with numerical experiments.
Foks, Nathan Leon
The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis; boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grid and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects the data points according to a user specified proxy for its signal content. The approach is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility at representing oblique, or arbitrarily oriented structures. This flexibility makes unstructured meshes an ideal candidate for geometry based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, by using an unstructured representation of both the data and model parameters.
An hr-adaptive discontinuous Galerkin method for advection-diffusion problems
Antonietti, Paola F.; Houston, Paul
2009-01-01
We propose an adaptive mesh refinement strategy based on exploiting a combination of a pre-processing mesh re-distribution algorithm employing a harmonic mapping technique, and standard (isotropic) mesh subdivision for discontinuous Galerkin approximations of advection-diffusion problems. Numerical experiments indicate that the resulting adaptive strategy can efficiently reduce the computed discretization error by clustering the nodes in the computational mesh where the analytical solution un...
Multi-dimensional Upwind Fluctuation Splitting Scheme with Mesh Adaption for Hypersonic Viscous Flow
Wood, William Alfred
2001-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-orde...
Design of Finite Element Tools for Coupled Surface and Volume Meshes
Daniel Köster; Oliver Kriessl; Kunibert G. Siebert
2008-01-01
Many problems with underlying variational structure involve a coupling of volume with surface effects. A straightforward approach in a finite element discretization is to make use of the surface triangulation that is naturally induced by the volume triangulation. In an adaptive method one wants to facilitate "matching" local mesh modifications, i.e., local refinement and/or coarsening, of volume and surface mesh with standard tools such that the surface grid is always induced by the volume grid. We describe the concepts behind this approach for bisectional refinement and describe new tools incorporated in the finite element toolbox ALBERTA. We also present several important applications of the mesh coupling.
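The notion of a surface triangulation "induced by" a volume triangulation can be made concrete: a triangular face of a tetrahedral mesh lies on the surface exactly when it belongs to a single tetrahedron. The sketch below illustrates that extraction on a toy mesh; it is a generic illustration of the coupling idea, not ALBERTA's implementation.

```python
# Extract the induced surface mesh of a tetrahedral volume mesh:
# a face is a boundary (surface) face iff exactly one tetrahedron
# contains it. Interior faces are shared by two tetrahedra.

from collections import Counter

def boundary_faces(tets):
    faces = Counter()
    for a, b, c, d in tets:
        for tri in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
            faces[tuple(sorted(tri))] += 1          # orientation-agnostic key
    return sorted(f for f, n in faces.items() if n == 1)

# Two tetrahedra sharing the interior face (1, 2, 3):
tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
faces = boundary_faces(tets)
print(faces)
# → [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
```

After a local bisection of the volume mesh, rerunning (or locally updating) this extraction keeps the surface grid consistent with the refined volume grid, which is the invariant the abstract describes.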
Refined adaptive optics simulation with wide field of view for the E-ELT
Refined simulation tools for wide field AO systems (such as MOAO, MCAO or LTAO) on ELTs present new challenges. Increasing the number of degrees of freedom (which scales as the square of the telescope diameter) makes standard simulation codes unusable due to the huge number of operations to be performed at each step of the Adaptive Optics (AO) loop process. This computational burden requires new approaches in the computation of the DM voltages from WFS data. The classical matrix inversion and the matrix-vector multiplication have to be replaced by a cleverer iterative resolution of the Least Square or Minimum Mean Square Error criterion (based on sparse matrix approaches). Moreover, for this new generation of AO systems, the concepts themselves will become more complex: data fusion coming from multiple Laser and Natural Guide Stars (LGS / NGS) will have to be optimized, mirrors covering all the field of view associated to dedicated mirrors inside the scientific instrument itself will have to be coupled using split or integrated tomography schemes, differential pupil or/and field rotations will have to be considered, etc. All these new features should be carefully simulated, analysed and quantified in terms of performance before any implementation in AO systems. For those reasons I developed, in collaboration with ONERA, a full simulation code, based on iterative solution of linear systems with many parameters (using sparse matrices). On this basis, I introduced new concepts of filtering and data fusion (LGS / NGS) to effectively manage modes such as tip, tilt and defocus in the entire process of tomographic reconstruction. The code will also eventually help to develop and test complex control laws (multi-DM and multi-field) which have to manage a combination of adaptive telescope and post-focal instrument including dedicated deformable mirrors. The first application of this simulation tool has been studied in the framework of the EAGLE multi-object spectrograph
Bode, P; Bode, Paul; Ostriker, Jeremiah P.
2003-01-01
An improved implementation of an N-body code for simulating collisionless cosmological dynamics is presented. TPM (Tree-Particle-Mesh) combines the PM method on large scales with a tree code to handle particle-particle interactions at small separations. After the global PM forces are calculated, spatially distinct regions above a given density contrast are located; the tree code calculates the gravitational interactions inside these denser objects at higher spatial and temporal resolution. The new implementation includes individual particle time steps within trees, an improved treatment of tidal forces on trees, new criteria for higher force resolution and choice of time step, and parallel treatment of large trees. TPM is compared to P^3M and a tree code (GADGET) and is found to give equivalent results in significantly less time. The implementation is highly portable (requiring a Fortran compiler and MPI) and efficient on parallel machines. The source code can be found at http://astro.princeton.edu/~bode/TPM/
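The TPM step of locating "spatially distinct regions above a given density contrast" is, in essence, connected-component labelling on the PM grid. The sketch below shows that idea in 2-D with a flood fill; the grid values and threshold are illustrative assumptions, and the real code works in 3-D on the PM density field.

```python
# Locate spatially distinct overdense regions on a grid via flood fill.
# In TPM, each such region would be handed to the tree code for
# higher-resolution force calculation.

def dense_regions(density, threshold):
    rows, cols = len(density), len(density[0])
    seen, regions = set(), []
    for i in range(rows):
        for j in range(cols):
            if density[i][j] > threshold and (i, j) not in seen:
                stack, region = [(i, j)], []
                seen.add((i, j))
                while stack:                      # iterative flood fill
                    x, y = stack.pop()
                    region.append((x, y))
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if (0 <= nx < rows and 0 <= ny < cols
                                and density[nx][ny] > threshold
                                and (nx, ny) not in seen):
                            seen.add((nx, ny))
                            stack.append((nx, ny))
                regions.append(sorted(region))
    return regions

grid = [[0, 0, 5, 5],
        [0, 0, 0, 5],
        [7, 0, 0, 0],
        [7, 7, 0, 0]]
regions = dense_regions(grid, threshold=1)
print(len(regions))  # → 2 (two disconnected overdense clumps)
```

Each returned region is independent of the others, which is what allows TPM to evolve the trees in parallel and at individually chosen time steps.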
Armstrong, Jerawan C. [Los Alamos National Laboratory; Favorite, Jeffrey A. [Los Alamos National Laboratory
2012-06-20
The Levenberg-Marquardt (or simply Marquardt) and differential evolution (DE) optimization methods were recently applied to solve inverse transport problems. The Marquardt method is fast but convergence of the method is dependent on the initial guess. While it has been shown to work extremely well at finding an optimum independent of the initial guess, the DE method does not provide a global optimal solution in some problems. In this paper, we apply the Mesh Adaptive Direct Search (MADS) algorithm to solve the inverse problem of material interface location identification in one-dimensional spherical radiation source/shield systems, and we compare the results obtained by MADS to those obtained by Levenberg-Marquardt and DE.
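MADS belongs to the family of derivative-free pattern-search methods: poll the objective on a stencil around the incumbent, accept improvements, and shrink the mesh size after an unsuccessful poll. The following is a simplified GPS-style loop in that spirit, not the full MADS algorithm (which, among other things, randomizes the poll directions); all names and the toy objective are assumptions.

```python
# A minimal derivative-free pattern search: coordinate polls with mesh
# refinement on failure. Illustrates the mesh-adaptive idea behind MADS.

def pattern_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:          # successful poll: move the incumbent
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5              # unsuccessful poll: refine the mesh
            if step < tol:
                break
    return x, fx

# Toy smooth objective with minimum at (3, -1):
f = lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2
x, fx = pattern_search(f, [0.0, 0.0])
print(abs(x[0] - 3) < 1e-3 and abs(x[1] + 1) < 1e-3)  # → True
```

Because only objective values are needed, such methods apply directly to inverse transport problems where each evaluation is a forward transport solve and no gradients are available.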
Zhang, Hong
2016-01-01
Motivated by observations of saturation overshoot, this paper investigates numerical modeling of two-phase flow incorporating dynamic capillary pressure. The effects of the dynamic capillary coefficient, the infiltrating flux rate and the initial and boundary values are systematically studied using a travelling wave ansatz and efficient numerical methods. The travelling wave solutions may exhibit monotonic, non-monotonic or plateau-shaped behaviour. Special attention is paid to the non-monotonic profiles. The travelling wave results are confirmed by numerically solving the partial differential equation using an accurate adaptive moving mesh solver. Comparisons between the computed solutions using the Brooks-Corey model and the laboratory measurements of saturation overshoot verify the effectiveness of our approach.
Adaptive techniques in electrical impedance tomography reconstruction
We present an adaptive algorithm for solving the inverse problem in electrical impedance tomography. To strike a balance between the accuracy of the reconstructed images and the computational efficiency of the forward and inverse solvers, we propose to combine an adaptive mesh refinement technique with the adaptive Kaczmarz method. The iterative algorithm adaptively generates the optimal current patterns and a locally-refined mesh given the conductivity estimate and solves for the unknown conductivity distribution with the block Kaczmarz update step. Simulation and experimental results with numerical analysis demonstrate the accuracy and the efficiency of the proposed algorithm.
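The Kaczmarz step referred to above cyclically projects the iterate onto the hyperplane defined by one row (or block of rows) of the linear system. A minimal row-by-row sketch for a generic consistent system (not the EIT-specific block formulation):

```python
def kaczmarz(A, b, x0=None, sweeps=200):
    """Cyclic Kaczmarz iteration for a consistent system A x = b:
    each step projects x onto the hyperplane of one row,
    x <- x + (b_i - a_i.x) / ||a_i||^2 * a_i."""
    m, n = len(A), len(A[0])
    x = list(x0) if x0 else [0.0] * n
    for _ in range(sweeps):
        for i in range(m):
            ai = A[i]
            norm2 = sum(a * a for a in ai)
            if norm2 == 0.0:
                continue
            r = (b[i] - sum(a * xj for a, xj in zip(ai, x))) / norm2
            x = [xj + r * a for xj, a in zip(x, ai)]
    return x

# A small consistent system with solution (1, 1); the alternating
# projections converge linearly to it.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]
x = kaczmarz(A, b)
```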
Hydrodynamic simulations on a moving Voronoi mesh
Springel, Volker
2011-01-01
At the heart of any method for computational fluid dynamics lies the question of how the simulated fluid should be discretized. Traditionally, a fixed Eulerian mesh is often employed for this purpose, which in modern schemes may also be adaptively refined during a calculation. Particle-based methods on the other hand discretize the mass instead of the volume, yielding an approximately Lagrangian approach. It is also possible to achieve Lagrangian behavior in mesh-based methods if the mesh is allowed to move with the flow. However, such approaches have often been fraught with substantial problems related to the development of irregularity in the mesh topology. Here we describe a novel scheme that eliminates these weaknesses. It is based on a moving unstructured mesh defined by the Voronoi tessellation of a set of discrete points. The mesh is used to solve the hyperbolic conservation laws of ideal hydrodynamics with a finite volume approach, based on a second-order Godunov scheme with an exact Riemann solver. A...
Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion
Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning
2015-03-01
Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines that can provide services helping to facilitate faster information search. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for differential diagnosis of query cases including a textual description and several images. In the context of this campaign many approaches have been investigated showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or only text approaches is presented. The proposed method integrates text information contained in extracted MeSH (Medical Subject Headings) terms and visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide if a text or multi-modal approach should be used with 77.15% accuracy.
Surface meshing with curvature convergence
Li, Huibin
2014-06-01
Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the quality of the mesh. In practice, Delaunay refinement algorithms offer satisfactory solutions for high-quality mesh generation. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.
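On a triangle mesh, the Gaussian curvature measure discussed above is concentrated at vertices as the angle defect (2π minus the sum of incident triangle angles). A minimal pure-Python sketch for a closed mesh, where Gauss-Bonnet forces the defects to total 4π for sphere topology:

```python
import math

def angle(p, q, r):
    """Interior angle at vertex p of triangle (p, q, r)."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def angle_defects(verts, tris):
    """Discrete Gaussian curvature measure: 2*pi minus the sum of
    triangle angles incident at each vertex."""
    defects = [2.0 * math.pi] * len(verts)
    for a, b, c in tris:
        defects[a] -= angle(verts[a], verts[b], verts[c])
        defects[b] -= angle(verts[b], verts[c], verts[a])
        defects[c] -= angle(verts[c], verts[a], verts[b])
    return defects

# Regular tetrahedron: each vertex defect is pi, and the four defects
# sum to 4*pi, as Gauss-Bonnet requires for a topological sphere.
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
tris = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
total = sum(angle_defects(verts, tris))
```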
Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani;
2014-01-01
on the TCP protocol for reliability in data delivery. TCP is known to drop its throughput performance by several fold in the presence of even 1% or 2% packet losses, which are common in wireless systems. This will force DASH to settle at a much lower video resolution, thus reducing the user's quality...... of experience. We show that the use of FRANC, an adaptive network coding protocol that provides both low delay and high throughput to upper layers, as a reliability mechanism for TCP can significantly increase video quality. As part of our analysis, we benchmark the performance of various TCP versions......, including CUBIC, Reno, Veno, Vegas, and Westwood+, under different packet loss rates in wireless systems using a real testbed with Raspberry Pi devices. Our goal was to choose the most promising TCP version in terms of delay performance, in this case TCP Reno, and make a fair comparison between TCP running...
Hybrid direct and iterative solvers for h refined grids with singularities
Paszyński, Maciej R.
2015-04-27
This paper describes a hybrid direct and iterative solver for two and three dimensional h adaptive grids with point singularities. The point singularities are eliminated by using a sequential linear computational cost solver O(N) on CPU [1]. The remaining Schur complements are submitted to incomplete LU preconditioned conjugated gradient (ILUPCG) iterative solver. The approach is compared to the standard algorithm performing static condensation over the entire mesh and executing the ILUPCG algorithm on top of it. The hybrid solver is applied for two or three dimensional grids automatically h refined towards point or edge singularities. The automatic refinement is based on the relative error estimations between the coarse and fine mesh solutions [2], and the optimal refinements are selected using the projection based interpolation. The computational mesh is partitioned into sub-meshes with local point and edge singularities separated. This is done by using the following greedy algorithm.
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
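The hierarchical-surplus refinement that the paper improves upon can be illustrated in one dimension: a point's surplus is its function value minus the interpolant built from its parent support points, and children are spawned only where the surplus remains large. This is a 1D sketch with local linear interpolation; real sparse grids tensorize such hierarchies, and the paper replaces the surplus indicator with adjoint error estimates.

```python
def surplus_refine(f, tol=0.02, max_level=12):
    """1D adaptive hierarchical interpolation on [0, 1]: refine an
    interval's midpoint only if its surplus (value minus the linear
    interpolant of the interval endpoints) exceeds tol."""
    vals = {0.0: f(0.0), 1.0: f(1.0)}
    active = [(0.5, 0.0, 1.0, 1)]          # (point, lo, hi, level)
    while active:
        x, lo, hi, lvl = active.pop()
        interp = vals[lo] + (vals[hi] - vals[lo]) * (x - lo) / (hi - lo)
        fx = f(x)
        vals[x] = fx
        if abs(fx - interp) > tol and lvl < max_level:
            active.append(((lo + x) / 2.0, lo, x, lvl + 1))
            active.append(((x + hi) / 2.0, x, hi, lvl + 1))
    return vals

# A kink at x = 0.3 forces deep refinement near the kink only; the
# smooth (linear) regions stop refining immediately.
grid = surplus_refine(lambda x: abs(x - 0.3))
```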
Sarkis, C.; Silva, L.; Gandin, Ch-A.; Plapp, M.
2016-03-01
Dendritic growth is computed with automatic adaptation of an anisotropic and unstructured finite element mesh. The energy conservation equation is formulated for solid and liquid phases considering an interface balance that includes the Gibbs-Thomson effect. An equation for a diffuse interface is also developed by considering a phase field function with constant negative value in the liquid and constant positive value in the solid. Unknowns are the phase field function and a dimensionless temperature, as proposed by [1]. Linear finite element interpolation is used for both variables, and discretization stabilization techniques ensure convergence towards a correct non-oscillating solution. In order to perform quantitative computations of dendritic growth on a large domain, two additional numerical ingredients are necessary: automatic anisotropic unstructured adaptive meshing [2,3] and parallel implementations [4], both made available with the numerical platform used (CimLib) based on C++ developments. Mesh adaptation is found to greatly reduce the number of degrees of freedom. Results of phase field simulations for dendritic solidification of a pure material in two and three dimensions are shown and compared with reference work [1]. Discussion on algorithm details and the CPU time will be outlined.
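The Gibbs-Thomson effect included in the interface balance relates the interface temperature to the local curvature; in standard notation (assumed here, and ignoring anisotropy and kinetic undercooling):

```latex
T_{\Gamma} \;=\; T_m - \Gamma\,\kappa, \qquad \Gamma = \frac{\gamma\, T_m}{\rho\, L}
```

where T_m is the melting temperature, kappa the interface mean curvature, gamma the solid-liquid surface energy, rho the density, and L the latent heat per unit mass. This curvature undercooling is what regularizes the dendrite tip and must be resolved by the adapted mesh.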
Skutnik, Steven E.; Davis, David R.
2016-05-01
The use of passive gamma and neutron signatures from fission indicators is a common means of estimating used fuel burnup, enrichment, and cooling time. However, while characteristic fission product signatures such as 134Cs, 137Cs, 154Eu, and others are generally reliable estimators for used fuel burnup within the context where the assembly initial enrichment and the discharge time are known, in the absence of initial enrichment and/or cooling time information (such as when applying NDA measurements in a safeguards/verification context), these fission product indicators no longer yield a unique solution for assembly enrichment, burnup, and cooling time after discharge. Through the use of a new Mesh-Adaptive Direct Search (MADS) algorithm, it is possible to directly probe the shape of this "degeneracy space" characteristic of individual nuclides (and combinations thereof), both as a function of constrained parameters (such as the assembly irradiation history) and unconstrained parameters (e.g., the cooling time before measurement and the measurement precision for particular indicator nuclides). In doing so, this affords the identification of potential means of narrowing the uncertainty space of potential assembly enrichment, burnup, and cooling time combinations, thereby bounding estimates of assembly plutonium content. In particular, combinations of gamma-emitting nuclides with distinct half-lives (e.g., 134Cs with 137Cs and 154Eu) in conjunction with gross neutron counting (via 244Cm) are able to reasonably constrain the degeneracy space of possible solutions to a space small enough to perform useful discrimination and verification of fuel assemblies based on their irradiation history.
John Maltby
The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and the contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being.
John Maltby; Liz Day; Sophie Hall
2015-01-01
The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing r...
Determination of an Initial Mesh Density for Finite Element Computations via Data Mining
Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V
2001-07-23
Numerical analysis software packages that employ a coarse or otherwise inadequate initial mesh must undergo cumbersome and time-consuming mesh refinement studies to obtain solutions with acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an approximate initial finite element mesh density that avoids significant trial and error to start finite element computations. As an illustration of proof of concept, a square plate which is simply supported at its edges and is subjected to a concentrated load is employed for the test case. Although simplistic, the present study provides insight into addressing the above considerations.
A mesh density study for application to large deformation rolling process evaluation
When addressing large deformation through an elastic-plastic analysis the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly-refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies considering accuracy of results and computer resource needs as applied to large deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. Both mesh density and adaptive meshing on final results for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length are evaluated
McCorquodale, Peter; Ullrich, Paul A.; Johansen, Hans; Colella, Phillip
2015-06-16
We present a high-order finite-volume approach for solving the shallow-water equations on the sphere, using multiblock grids on the cubed-sphere. This approach combines a Runge--Kutta time discretization with a fourth-order accurate spatial discretization, and includes adaptive mesh refinement and refinement in time. Results of tests show fourth-order convergence for the shallow-water equations as well as for advection in a highly deformational flow. Hierarchical adaptive mesh refinement allows solution error to be achieved that is comparable to that obtained with uniform resolution of the most refined level of the hierarchy, but with many fewer operations.
Self-Avoiding Walks Over Adaptive Triangular Grids
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
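The geometric-embedding alternative mentioned in the first sentence is typified by the Hilbert curve. The standard bit-manipulation index for a power-of-two structured grid (unlike the paper's combinatorial walk over triangles) can be sketched as:

```python
def hilbert_index(n, x, y):
    """Position of cell (x, y) of an n x n grid (n a power of two)
    along a Hilbert space-filling curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:              # rotate/flip the quadrant so the
            if rx == 1:          # recursion lines up with the base motif
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Order all cells of a 4x4 grid along the curve; consecutive cells in
# this ordering are face neighbors, which is what makes space-filling
# orderings useful for partitioning and load balancing.
n = 4
order = sorted(((x, y) for x in range(n) for y in range(n)),
               key=lambda c: hilbert_index(n, c[0], c[1]))
```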
Mesh type tradeoffs in 2D hydrodynamic modeling of flooding with a Godunov-based flow solver
Kim, Byunghyun; Sanders, Brett F.; Schubert, Jochen E.; Famiglietti, James S.
2014-06-01
The effect of mesh type on the accuracy and computational demands of a two-dimensional Godunov-type flood inundation model is critically examined. Cartesian grids, constrained and unconstrained triangular grids, constrained quadrilateral grids, and mixed meshes are considered, with and without local time stepping (LTS), to determine the approach that maximizes computational efficiency defined as accuracy relative to computational effort. A mixed-mesh numerical scheme is introduced so all grids are processed by the same solver. Analysis focuses on a wide range of dam-break type test cases, where Godunov-type flood models have proven very successful. Results show that different mesh types excel under different circumstances. Cartesian grids are 2-3 times more efficient with relatively simple terrain features such as rectilinear channels that call for a uniform grid resolution, while unstructured grids are about twice as efficient in complex domains with irregular terrain features that call for localized refinements. The superior efficiency of locally refined, unstructured grids in complex terrain is attributable to LTS; the locally refined unstructured grid becomes less efficient using global time stepping. These results point to mesh-type tradeoffs that should be considered in flood modeling applications. A mixed mesh model formulation with LTS is recommended as a general purpose solver because the mesh type can be adapted to maximize computational efficiency.
2.5D induced polarization forward modeling using the adaptive finite-element method
Ye Yi-Xin; Li Yu-Guo; Deng Ju-Zhi; Li Ze-Lin
2014-01-01
The conventional finite-element (FE) method often uses a structured mesh, which is designed according to the user’s experience, and it is not sufficiently accurate and flexible to accommodate complex structures such as dipping interfaces and rough topography. We present an adaptive FE method for 2.5D forward modeling of induced polarization (IP). In the presented method, an unstructured triangulation mesh that allows for local mesh refinement and flexible description of arbitrary model geometries is used. Furthermore, the mesh refinement process is guided by dual error estimate weighting to bias the refinement towards elements that affect the solution at the receiver locations. After the final mesh is generated, the Jacobian matrix is used to obtain the IP response on 2D structure models. We validate the adaptive FE algorithm using a vertical contact model. The validation shows that the elements near the receivers are highly refined and the average relative error converges to 0.4% for the potentials and 1.2% for the IP response. This suggests that the numerical solution of the adaptive FE algorithm converges to an accurate solution with the refined mesh. Finally, the accuracy and flexibility of the adaptive FE procedure are also validated using more complex models.
Georg eLayher
2014-12-01
The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory representations.
Evaluation of Different Meshing Techniques for the Case of a Stented Artery.
Lotfi, Azadeh; Simmons, Anne; Barber, Tracie
2016-03-01
The formation and progression of in-stent restenosis (ISR) in bifurcated vessels may vary depending on the technique used for stenting. This study evaluates the effect of a variety of mesh styles on the accuracy and reliability of computational fluid dynamics (CFD) models in predicting these regions, using an idealized stented nonbifurcated model. The wall shear stress (WSS) and the near-stent recirculating vortices are used as determinants. The meshes comprise unstructured tetrahedral and polyhedral elements. The effects of local refinement, as well as higher-order elements such as prismatic inflation layers and an internal hexahedral core, have also been examined. The uncertainty associated with each mesh style was assessed through verification of calculations using the grid convergence index (GCI) method. The results obtained show that the only condition which allows a reliable comparison of uncertainty estimates between different meshing styles is that the monotonic convergence of grid solutions is in the asymptotic range. Comparisons show the superiority of a flow-adaptive polyhedral mesh over the commonly used adaptive and nonadaptive tetrahedral meshes in terms of resolving the near-stent flow features, GCI value, and prediction of WSS. More accurate estimation of hemodynamic factors was obtained using higher-order elements, such as hexahedral or prismatic grids. Incorporating these higher-order elements, however, was shown to introduce some degree of numerical diffusion at the transitional area between the two meshes, not necessarily translating into a high GCI value. Our data also confirmed the key role of local refinement in improving the performance and accuracy of nonadaptive meshes in predicting flow parameters in models of stented arteries. The results of this study can provide a guideline for modeling the biofluid domain in complex bifurcated arteries stented using various techniques. PMID:26784359
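The grid convergence index used above is Roache's GCI: from solutions on three systematically refined grids one estimates the observed order of accuracy and an error band for the fine-grid result. A sketch with the customary safety factor of 1.25 and manufactured second-order data (values illustrative):

```python
import math

def gci(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Observed order of accuracy p and fine-grid GCI from three grid
    levels with constant refinement ratio r (Roache's formulation)."""
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    e = abs((f_med - f_fine) / f_fine)      # relative fine/medium difference
    return p, fs * e / (r**p - 1.0)

# Manufactured data f(h) = 1 + 0.01*h^2 at h = 1, 2, 4: the procedure
# recovers the formal order p = 2 and a small fine-grid error band.
p, g = gci(1.01, 1.04, 1.16)
```

As the abstract notes, the resulting GCI is only meaningful when the three solutions converge monotonically in the asymptotic range.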
Korous, L.; Šolín, Pavel
2013-01-01
Roč. 95, č. 1 (2013), S425-S444. ISSN 0010-485X Institutional support: RVO:61388998 Keywords: numerical simulation * finite element method * hp-adaptivity Subject RIV: BA - General Mathematics Impact factor: 1.055, year: 2013
Adaptive numerical methods for partial differential equations
Colella, P. [Univ. of California, Berkeley, CA (United States)
1995-07-01
This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
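The flag-and-overlay idea can be illustrated in one dimension: cells whose error indicator (here an undivided difference, an assumed stand-in for the generalized error criterion) exceeds a tolerance are flagged, and contiguous flagged runs become the footprints of finer overlay grids:

```python
def flag_cells(u, tol):
    """Flag cells whose undivided difference with a neighbor exceeds tol."""
    n = len(u)
    return [i for i in range(n)
            if (i > 0 and abs(u[i] - u[i - 1]) > tol)
            or (i < n - 1 and abs(u[i + 1] - u[i]) > tol)]

def patches(flags):
    """Group flagged cell indices into contiguous refinement patches."""
    out = []
    for i in flags:
        if out and i == out[-1][1] + 1:
            out[-1] = (out[-1][0], i)
        else:
            out.append((i, i))
    return out

# A step profile: only the two cells adjacent to the jump are flagged,
# so a single fine patch is overlaid there.
u = [0.0] * 8 + [1.0] * 8
p = patches(flag_cells(u, 0.5))
```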
Refining the impact of TCF7L2 gene variants on type 2 diabetes and adaptive evolution
Helgason, Agnar; Pálsson, Snaebjörn; Thorleifsson, Gudmar;
2007-01-01
We recently described an association between risk of type 2 diabetes and variants in the transcription factor 7-like 2 gene (TCF7L2; formerly TCF4), with a population attributable risk (PAR) of 17%-28% in three populations of European ancestry. Here, we refine the definition of the TCF7L2 type 2 di...
Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems
Cotter, Simon L.
2013-01-01
Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
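The stochastic simulation used above to locate the support of the stationary density is typically Gillespie's algorithm. A minimal sketch for a birth-death process (rates, horizon, and seed chosen for illustration):

```python
import math
import random

def gillespie_birth_death(k, gamma, x0, t_end, rng):
    """Exact SSA trajectory of: 0 -> X at rate k, X -> 0 at rate gamma*x."""
    t, x = 0.0, x0
    states = [x]
    while t < t_end:
        a_birth = k
        a_death = gamma * x
        a_total = a_birth + a_death
        t += -math.log(1.0 - rng.random()) / a_total   # exponential wait
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
        states.append(x)
    return states

rng = random.Random(0)
traj = gillespie_birth_death(k=10.0, gamma=1.0, x0=0, t_end=200.0, rng=rng)
# The trajectory concentrates around the stationary mean k/gamma = 10,
# indicating where the Fokker-Planck mesh should be refined.
```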
Gheribi, Aimen E., E-mail: aimen.gheribi@polymtl.ca [CRCT - Centre for Research in Computational Thermochemistry, Department of Chemical Engineering, Ecole Polytechnique de Montreal, C.P. 6079, Succursale Centre-Ville, Montreal (Quebec), H3C 3A7 (Canada); Robelin, Christian [CRCT - Centre for Research in Computational Thermochemistry, Department of Chemical Engineering, Ecole Polytechnique de Montreal, C.P. 6079, Succursale Centre-Ville, Montreal (Quebec), H3C 3A7 (Canada); Digabel, Sebastien Le; Audet, Charles [GERAD and Department of Mathematics and Industrial Engineering, Ecole Polytechnique de Montreal, C.P. 6079, Succursale Centre-Ville, Montreal (Quebec), H3C 3A7 (Canada); Pelton, Arthur D. [CRCT - Centre for Research in Computational Thermochemistry, Department of Chemical Engineering, Ecole Polytechnique de Montreal, C.P. 6079, Succursale Centre-Ville, Montreal (Quebec), H3C 3A7 (Canada)
2011-09-15
Highlights: > Systematic search of low melting temperatures in multicomponent systems. > Calculation of eutectic in multicomponent systems. > The FactSage software and the direct search algorithm are used simultaneously. - Abstract: It is often of interest, for a multicomponent system, to identify the low melting compositions at which local minima of the liquidus surface occur. The experimental determination of these minima can be very time-consuming. An alternative is to employ the CALPHAD approach using evaluated thermodynamic databases containing optimized model parameters giving the thermodynamic properties of all phases as functions of composition and temperature. Liquidus temperatures are then calculated by Gibbs free energy minimization algorithms which access the databases. Several such large databases for many multicomponent systems have been developed over the last 40 years, and calculated liquidus temperatures are generally quite accurate. In principle, one could then search for local liquidus minima by simply calculating liquidus temperatures over a compositional grid. In practice, such an approach is prohibitively time-consuming for all but the simplest systems since the required number of grid points is extremely large. In the present article, the FactSage database computing system is coupled with the powerful Mesh Adaptive Direct Search (MADS) algorithm in order to search for and calculate automatically all liquidus minima in a multicomponent system. Sample calculations for a 4-component oxide system, a 7-component chloride system, and a 9-component ferrous alloy system are presented. It is shown that the algorithm is robust and rapid.
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
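The recursive Cartesian subdivision described above can be sketched with a quadtree that keeps splitting any cell whose sample points disagree about being inside the body (a circle here stands in for a general geometry; real cut-cell codes use polygon clipping rather than point sampling):

```python
def refine(x0, y0, size, inside, max_depth, depth=0):
    """Recursively split cells whose corner/center samples disagree
    about being inside the body, i.e. cells cut by the boundary."""
    pts = [(x0, y0), (x0 + size, y0), (x0, y0 + size),
           (x0 + size, y0 + size), (x0 + size / 2.0, y0 + size / 2.0)]
    flags = [inside(px, py) for px, py in pts]
    cut = any(flags) and not all(flags)
    if not cut or depth == max_depth:
        return [(x0, y0, size)]                  # leaf cell
    h = size / 2.0
    leaves = []
    for dx in (0, 1):
        for dy in (0, 1):
            leaves += refine(x0 + dx * h, y0 + dy * h, h,
                             inside, max_depth, depth + 1)
    return leaves

# Unit circle inside a [-2, 2]^2 root cell: the leaves form a tree whose
# smallest cells cluster along the circular boundary.
circle = lambda x, y: x * x + y * y < 1.0
leaves = refine(-2.0, -2.0, 4.0, circle, max_depth=5)
```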
Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media
Chueh, C.C.
2010-10-01
An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.
We present numerical simulations aimed at exploring the effects of varying the sub-grid physics parameters on the evolution and the properties of the galaxy formed in a low-mass dark matter halo (∼7 x 10^10 h^-1 Msun at redshift z = 0). The simulations are run within a cosmological setting with a nominal resolution of 218 pc comoving and are stopped at z = 0.43. For simulations that cannot resolve individual molecular clouds, we propose the criterion that the threshold density for star formation, nSF, should be chosen such that the column density of the star-forming cells equals the threshold value for molecule formation, N ∼ 10^21 cm^-2, or ∼8 Msun pc^-2. In all of our simulations, an extended old/intermediate-age stellar halo and a more compact younger stellar disk are formed, and in most cases, the halo's specific angular momentum is slightly larger than that of the galaxy, and sensitive to the SF/feedback parameters. We found that a non-negligible fraction of the halo stars are formed in situ in a spheroidal distribution.
Changes in the sub-grid physics parameters affect significantly and in a complex way the evolution and properties of the galaxy: (1) lower threshold densities nSF produce larger stellar effective radii Re, less peaked circular velocity curves Vc(R), and greater amounts of low-density and hot gas in the disk mid-plane; (2) when stellar feedback is modeled by temporarily switching off radiative cooling in the star-forming regions, Re increases (by a factor of ∼2 in our particular model), the circular velocity curve becomes flatter, and a complex multi-phase gaseous disk structure develops; (3) a more efficient local conversion of gas mass to stars, measured by a stellar particle mass distribution biased toward larger values, increases the strength of the feedback energy injection, driving outflows and inducing burstier SF histories; (4) if feedback is too strong, gas loss by galactic outflows, which are easier to produce in low-mass galaxies, interrupts SF, whose history becomes episodic; and (5) in all cases, the surface SF rate (SFR) versus the gas surface density correlation is steeper than the Kennicutt law but in agreement with observations in low surface brightness galaxies. The simulations exhibit two important shortcomings: the baryon fractions are higher, and the specific SFRs are much smaller, than observationally inferred values for redshifts ∼0.4-1. These shortcomings pose a major challenge to the SF/feedback physics commonly applied in the ΛCDM-based galaxy formation simulations.
Numerical simulation of H2/air detonation using unstructured mesh
Togashi, Fumiya; Löhner, Rainald; Tsuboi, Nobuyuki
2009-06-01
To explore the capability of unstructured meshes to simulate detonation wave propagation phenomena, numerical simulations of H2/air detonation using an unstructured mesh were conducted. Unstructured meshes have several advantages, such as easy mesh adaptation and flexibility for complicated configurations. To examine the resolution dependency of the unstructured mesh, several simulations varying the mesh size were conducted and compared with a computed result using a structured mesh. The results show that the unstructured mesh solution captures the detailed structure of the detonation wave as well as the structured mesh solution does. To capture the detailed detonation cell structure, the unstructured mesh simulations required at least twice, and ideally five times, the resolution of the structured mesh solution.
Adaptive Finite Element Methods for Continuum Damage Modeling
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. Time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinement in the accurate prediction of damage levels and failure time.
A new procedure for dynamic adaption of three-dimensional unstructured grids
Biswas, Rupak; Strawn, Roger
1993-01-01
A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
Adaptive Multilinear Tensor Product Wavelets.
Weiss, Kenneth; Lindstrom, Peter
2016-01-01
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells. PMID:26529742
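The "inverse transform reproduced via simple interpolation" idea is easiest to see in 1D. Below is a small illustrative sketch, not the paper's multilinear tensor-product construction, of interpolating wavelets built on linear B-splines: each odd sample is predicted by midpoint interpolation of its even neighbours, so only the nonzero detail coefficients need to be stored and reconstruction is exact.

```python
def forward(signal):
    """One level of the linear interpolating wavelet transform."""
    even = signal[::2]
    detail = [signal[2 * i + 1] - 0.5 * (even[i] + even[i + 1])
              for i in range(len(even) - 1)]
    return even, detail

def inverse(even, detail):
    """Exact reconstruction: linear interpolation plus stored details."""
    out = []
    for i, d in enumerate(detail):
        out.append(even[i])
        out.append(0.5 * (even[i] + even[i + 1]) + d)
    out.append(even[-1])
    return out

# Quadratic samples: constant (nonzero) details at this level.
samples = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0]
coarse, detail = forward(samples)
assert inverse(coarse, detail) == samples     # lossless round trip

# Piecewise-linear data yields all-zero details -> a sparse representation,
# the 1D analogue of dropping zero wavelet coefficients in the adaptive mesh.
_, d_lin = forward([0.0, 1.0, 2.0, 3.0, 4.0])
```

In the paper's setting the same prediction step becomes multilinear interpolation over quadtree/octree cells, which is why the adaptive mesh can evaluate the function on demand anywhere in the domain.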
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Unstructured Geometric Multigrid in Two and Three Dimensions on Complex and Graded Meshes
Brune, Peter R; Scott, L Ridgway
2011-01-01
The use of multigrid and related preconditioners with the finite element method is often limited by the difficulty of applying the algorithm effectively to a problem, especially when the domain has a complex shape or adaptive refinement. We introduce a simplification of a general topologically-motivated mesh coarsening algorithm for use in creating hierarchies of meshes for geometric unstructured multigrid methods. The connections between the guarantees of this technique and the quality criteria necessary for multigrid methods for non-quasi-uniform problems are noted. The implementation details, in particular those related to coarsening, remeshing, and interpolation, are discussed. Computational tests on pathological test cases from adaptive finite element methods show the performance of the technique.
Spherical geodesic mesh generation
Fung, Jimmy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kenamond, Mark Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burton, Donald E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-27
In ALE simulations with moving meshes, mesh topology has a direct influence on feature representation and code robustness. In three-dimensional simulations, modeling spherical volumes and features is particularly challenging for a hydrodynamics code. Calculations on traditional spherical meshes (such as spin meshes) often lead to errors and symmetry breaking. Although the underlying differencing scheme may be modified to rectify this, the differencing scheme may not be accessible. This work documents the use of spherical geodesic meshes to mitigate solution-mesh coupling. These meshes are generated notionally by connecting geodesic surface meshes to produce triangular-prismatic volume meshes. This mesh topology is fundamentally different from traditional mesh topologies and displays superior qualities such as topological symmetry. This work describes the geodesic mesh topology as well as motivating demonstrations with the FLAG hydrocode.
Adaptive finite element strategies for shell structures
Stanley, G.; Levit, I.; Stehlin, B.; Hurlbut, B.
1992-01-01
The present paper extends existing finite element adaptive refinement (AR) techniques to shell structures, which have heretofore been neglected in the AR literature. Specific challenges in applying AR to shell structures include: (1) physical discontinuities (e.g., stiffener intersections); (2) boundary layers; (3) sensitivity to geometric imperfections; (4) the sensitivity of most shell elements to mesh distortion, constraint definition and/or thinness; and (5) intrinsic geometric nonlinearity. All of these challenges but (5) are addressed here.
A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics
Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars
2016-08-01
We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with a primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Similar differences have been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
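Why CT maintains div B = 0 to machine precision can be shown in a few lines: if the face-centred field is defined as the discrete curl of a vector potential, the discrete divergence telescopes to zero identically, whatever the potential is. The sketch below is a minimal 2D fixed-grid illustration of that mechanism, not AREPO's unstructured moving-mesh scheme.

```python
# B = curl(Az ẑ) on a staggered grid: Az at nodes, Bx/By on faces.
# The per-cell discrete divergence then cancels exactly (to round-off).
import random

N, dx, dy = 8, 0.5, 0.25
# Arbitrary vector potential at mesh nodes.
Az = [[random.random() for _ in range(N + 1)] for _ in range(N + 1)]

# Face-centred magnetic field components from the discrete curl.
Bx = [[(Az[i][j + 1] - Az[i][j]) / dy for j in range(N)]
      for i in range(N + 1)]
By = [[-(Az[i + 1][j] - Az[i][j]) / dx for j in range(N + 1)]
      for i in range(N)]

# Discrete divergence over each cell: the Az terms telescope to zero.
div = [[(Bx[i + 1][j] - Bx[i][j]) / dx + (By[i][j + 1] - By[i][j]) / dy
        for j in range(N)] for i in range(N)]
max_div = max(abs(d) for row in div for d in row)
```

Running this for any random Az gives `max_div` at the level of floating-point round-off, which is the discrete analogue of the "to machine precision" claim in the abstract.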
Dębski Roman
2016-06-01
A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of "high-performance sailing". The algorithm uses a new discrete space of continuously differentiable functions called the multi-splines as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.
MHD simulations on an unstructured mesh
Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D
Sierra toolkit computational mesh conceptual model
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Adaptive Kinetic-Fluid Solvers for Heterogeneous Computing Architectures
Zabelok, Sergey; Kolobov, Vladimir
2015-01-01
This paper describes recent progress towards porting a Unified Flow Solver (UFS) to heterogeneous parallel computing. UFS is an adaptive kinetic-fluid simulation tool, which combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. The main challenge of porting UFS to graphics processing units (GPUs) comes from the dynamically adapted mesh, which causes irregular data access. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) module, and the Lattice Boltzmann Method (LBM) solver, all using octree Cartesian mesh with AMR. Double digit speedups on single GPU and good scaling for multi-GPU have been demonstrated.
Adaptive kinetic-fluid solvers for heterogeneous computing architectures
Zabelok, Sergey; Arslanbekov, Robert; Kolobov, Vladimir
2015-12-01
We show the feasibility and benefits of porting an adaptive multi-scale kinetic-fluid code to CPU-GPU systems. Challenges are due to the irregular data access for the adaptive Cartesian mesh, the vast difference in computational cost between kinetic and fluid cells, and the desire to evenly load all CPUs and GPUs during grid adaptation and algorithm refinement. Our Unified Flow Solver (UFS) combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. Using GPUs enables hybrid simulations of mixed rarefied-continuum flows with a million Boltzmann cells, each having a 24 × 24 × 24 velocity mesh. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using the discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) solver, and a mesoscopic solver based on the Lattice Boltzmann Method (LBM), all using an adaptive Cartesian mesh. Double digit speedups on a single GPU and good scaling for multiple GPUs have been demonstrated.
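The cell-by-cell kinetic/fluid selection in the two UFS abstracts above can be sketched with one widely used continuum breakdown indicator: the gradient-length-local Knudsen number, Kn_GLL = λ|∇ρ|/ρ. This is an assumed, illustrative choice, the abstracts do not state which criterion UFS uses, and the density field and threshold below are invented for the example.

```python
def select_solvers(rho, mean_free_path, dx, threshold=0.05):
    """Tag each interior cell of a 1D density field 'kinetic' or 'fluid'
    using the gradient-length-local Knudsen number Kn = lambda*|grad rho|/rho."""
    tags = []
    for i in range(1, len(rho) - 1):
        grad = abs(rho[i + 1] - rho[i - 1]) / (2.0 * dx)   # central difference
        kn_gll = mean_free_path * grad / rho[i]
        tags.append("kinetic" if kn_gll > threshold else "fluid")
    return tags

# Smooth regions flank a sharp, shock-like jump in the middle.
rho = [1.0] * 5 + [1.2, 2.5, 3.9, 4.0] + [4.0] * 4
tags = select_solvers(rho, mean_free_path=0.01, dx=0.1)
```

Only the cells inside the steep jump get the expensive kinetic solver; the smooth bulk stays with the fluid solver, which is exactly the cost structure (few costly kinetic cells, many cheap fluid cells) the load-balancing discussion refers to.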
Lewis R. W.
2006-11-01
This paper describes the application of adaptive mesh methods to the numerical simulation of one- and two-dimensional petroleum reservoir waterfloods. The method uses current information on the solution to adapt the mesh to the solution as the computation proceeds. It is shown that this leads to significant improvements in accuracy at a marginal increase in computational cost.
An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations
Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.
2016-08-01
In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
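The refine/coarsen criterion described above, selecting candidate elements by the locally largest density gradient, can be sketched in a few lines. The fractional thresholds and the 1D interface profile below are assumptions made for illustration; the paper's actual strategy operates on the LDG mesh.

```python
def mark_elements(rho, dx, refine_frac=0.1, coarsen_frac=0.05):
    """Flag each interior element 'refine', 'coarsen' or 'keep' based on
    where its density gradient sits relative to the largest gradient."""
    grads = [abs(rho[i + 1] - rho[i - 1]) / (2.0 * dx)
             for i in range(1, len(rho) - 1)]
    gmax = max(grads)
    marks = []
    for g in grads:
        if g > refine_frac * gmax:
            marks.append("refine")      # near the thin liquid-vapour interface
        elif g < coarsen_frac * gmax:
            marks.append("coarsen")     # far from the interface
        else:
            marks.append("keep")
    return marks

# A steep liquid-vapour-like interface in the middle of the domain.
rho = [1.0, 1.0, 1.0, 1.1, 5.0, 9.0, 9.1, 9.1, 9.1]
marks = mark_elements(rho, dx=0.1)
```

Elements straddling the interface are refined while the flat regions on either side are coarsened, concentrating resolution on the thin interface as the abstract describes.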
Multiple Staggered Mesh Ewald: Boosting the Accuracy of the Smooth Particle Mesh Ewald Method
Wang, Han; Fang, Jun
2016-01-01
The smooth particle mesh Ewald (SPME) method is the standard method for computing electrostatic interactions in molecular simulations. In this work, the multiple staggered mesh Ewald (MSME) method is proposed to boost the accuracy of the SPME method. Unlike the SPME, which achieves higher accuracy by refining the mesh, the MSME improves the accuracy by averaging the standard SPME forces computed on, e.g., $M$ staggered meshes. We prove, from a theoretical perspective, that the MSME is as accurate as the SPME, but uses $M^2$ times fewer mesh points in a certain parameter range. In the complementary parameter range, the MSME is as accurate as the SPME with twice the interpolation order. The theoretical conclusions are numerically validated both by a uniform and uncorrelated charge system, and by a three-point-charge water system that is widely used as a solvent for bio-macromolecules.
Automated hexahedral meshing of anatomic structures using deformable registration.
Grosland, Nicole M; Bafna, Ritesh; Magnotta, Vincent A
2009-02-01
This work introduces a novel method of automating the process of patient-specific finite element (FE) model development using a mapped mesh technique. The objective is to map a predefined mesh (template) of high quality directly onto a new bony surface (target) definition, thereby yielding a similar mesh with minimal user interaction. To bring the template mesh into correspondence with the target surface, a deformable registration technique based on the FE method has been adopted. The procedure has been made hierarchical allowing several levels of mesh refinement to be used, thus reducing the time required to achieve a solution. Our initial efforts have focused on the phalanx bones of the human hand. Mesh quality metrics, such as element volume and distortion were evaluated. Furthermore, the distance between the target surface and the final mapped mesh were measured. The results have satisfactorily proven the applicability of the proposed method. PMID:18688764
Coupling of non-conforming meshes in a component mode synthesis method
Akcay-Perdahcioglu, D.; Doreille, M.; Boer, de A.; Ludwig, T.
2013-01-01
A common mesh refinement-based coupling technique is embedded into a component mode synthesis method, Craig–Bampton. More specifically, a common mesh is generated between the non-conforming interfaces of the coupled structures, and the compatibility constraints are enforced on that mesh via L2-minim
An adaptive hybrid stress transition quadrilateral finite element method for linear elasticity
Huang, Feiteng; Xie, Xiaoping; Zhang, Chen-Song
2014-01-01
In this paper, we discuss an adaptive hybrid stress finite element method on quadrilateral meshes for linear elasticity problems. To deal with hanging nodes arising in the adaptive mesh refinement, we propose new transition types of hybrid stress quadrilateral elements with 5 to 7 nodes. In particular, we derive a priori error estimation for the 5-node transition hybrid stress element to show that it is free from Poisson-locking, in the sense that the error bound in the a priori estimate is i...
Liu, Rong
2009-01-01
Polygonal meshes are ubiquitous in geometric modeling. They are widely used in many applications, such as computer games, computer-aided design, animation, and visualization. One of the important problems in mesh processing and analysis is segmentation, where the goal is to partition a mesh into segments to suit the particular application at hand. In this thesis we study structural-level mesh segmentation, which seeks to decompose a given 3D shape into parts according to human intuition. We t...
HE JiFeng
2008-01-01
This paper presents a refinement calculus for service components. We model the behaviour of an individual service by a guarded design, which enables one to separate the responsibility of clients from the commitment made by the system, and to identify a component by a set of failures and divergences. Protocols are introduced to coordinate the interactions of a component with the external environment. We adopt the notion of process refinement to formalize the substitutivity of components, and provide a complete proof method based on the notion of simulations.
Effects of mesh style and grid convergence on numerical simulation accuracy of centrifugal pump
刘厚林; 刘明明; 白羽; 董亮
2015-01-01
In order to evaluate the effects of mesh generation techniques and grid convergence on pump performance in a centrifugal pump model, three widely used mesh styles, structured hexahedral, unstructured tetrahedral and hybrid prismatic/tetrahedral, were generated for a centrifugal pump model. Quantitative grid convergence was assessed based on a grid convergence index (GCI), which accounts for the degree of grid refinement. The structured, unstructured and hybrid meshes are found to show certain differences in the velocity distributions in the impeller as the grid cell number changes, and the simulation results deviate to different degrees from the experimental data. The GCI-value calculated for the structured meshes is lower than that for the unstructured and hybrid meshes. Meanwhile, the structured meshes are observed to capture more vortices in the impeller passage. Nevertheless, the hybrid meshes are found to have a larger low-velocity area at the outlet and more secondary vortices at a specified location than the structured and unstructured meshes.
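The GCI mentioned above has a standard three-grid formulation (Roache): from solutions on a coarse, medium and fine grid with constant refinement ratio r, one estimates the observed order of accuracy and bounds the fine-grid discretization error. The pump-head values below are hypothetical numbers chosen for the example, not data from the paper.

```python
import math

def gci(f1, f2, f3, r, Fs=1.25):
    """Observed order p and fine-grid GCI (as a fraction of f1) from three
    systematically refined solutions: f1 = finest, f3 = coarsest."""
    # Observed order of accuracy from the three solutions.
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    e21 = abs((f2 - f1) / f1)            # relative change, fine grid pair
    return p, Fs * e21 / (r ** p - 1.0)  # Fs = 1.25 safety factor (3 grids)

# Hypothetical head values from three grid levels with r = 2.
p, gci_fine = gci(f1=10.0, f2=10.2, f3=11.0, r=2.0)
```

Here the differences shrink by a factor of 4 under a refinement ratio of 2, so the observed order is 2 and the fine-grid GCI is about 0.83%, i.e. the fine-grid result is trusted to better than one percent.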
Massively parallel computation on anisotropic meshes
Digonnet, Hugues; Silva, Luisa; Coupez, Thierry
2013-01-01
In this paper, we present developments made to obtain efficient parallel computations on supercomputers with up to 8192 cores. While most massively parallel computations are demonstrated on regular grids, it is less common to see massively parallel computations using anisotropic adapted unstructured meshes. We present here the two main components developed to reach very large scale calculations, up to 10 billion unknowns, using a multigrid method over unstructured meshes running on 8192 cores. We first focus on...
Malgarinos, Ilias; Nikolopoulos, Nikolaos; Gavaises, Manolis
2015-11-01
This study presents the implementation of an interface sharpening scheme on the basis of the Volume of Fluid (VOF) method, as well as its application in a number of theoretical and real cases usually modelled in the literature. More specifically, the solution of an additional sharpening equation along with the standard VOF model equations is proposed, offering the advantage of "restraining" interface numerical diffusion, while also keeping a quite smooth induced velocity field around the interface. This sharpening equation is solved right after volume fraction advection; however, a novel method for its coupling with the momentum equation has been applied in order to save computational time. The advantages of the proposed sharpening scheme lie in the facts that a) it is mass conservative, so its application does not have a negative impact on one of the most important benefits of the VOF method, and b) it can be used in coarser grids as the suppression of the numerical diffusion is now grid independent. The coupling of the solved equation with an adaptive local grid refinement technique is used to further decrease computational time, while keeping high levels of accuracy in the area of maximum interest (the interface). The numerical algorithm is initially tested against two theoretical benchmark cases for interface tracking methodologies, followed by its validation for the case of a free-falling water droplet accelerated by gravity, as well as the normal impingement of a liquid droplet onto a flat substrate. Results indicate that the coupling of the interface sharpening equation with the HRIC discretization scheme used for the volume fraction flux term not only decreases the interface numerical diffusion, but also leaves the induced velocity field less perturbed by spurious velocities across the liquid-gas interface. With the use of the proposed algorithmic flow path, coarser grids can replace finer ones at a slight expense of accuracy.
Workshop on adaptive grid methods for fusion plasmas
Wiley, J.C. [Univ. of Texas, Austin, TX (United States)
1995-07-01
The author describes a general 'hp' finite element method with adaptive grids. The code was based on the work of Oden, et al. The term 'hp' refers to the method of spatial refinement (h) used in conjunction with the order of the polynomials employed in the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.
Goal-Oriented Adaptivity and Multilevel Preconditioning for the Poisson-Boltzmann Equation
Aksoylu, Burak; Cyr, Eric; Holst, Michael
2011-01-01
In this article, we develop goal-oriented error indicators to drive adaptive refinement algorithms for the Poisson-Boltzmann equation. Empirical results for the solvation free energy linear functional demonstrate that goal-oriented indicators are not sufficient on their own to lead to a superior refinement algorithm. To remedy this, we propose a problem-specific marking strategy using the solvation free energy computed from the solution of the linear regularized Poisson-Boltzmann equation. The convergence of the solvation free energy using this marking strategy, combined with goal-oriented refinement, compares favorably to adaptive methods using an energy-based error indicator. Due to the use of adaptive mesh refinement, it is critical to use multilevel preconditioning in order to maintain optimal computational complexity. We use variants of the classical multigrid method, which can be viewed as generalizations of the hierarchical basis multigrid and Bramble-Pasciak-Xu (BPX) preconditioners.
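Adaptive loops like the one above pair an error indicator with a marking strategy that decides which elements to refine. The paper's marking strategy is problem-specific (built on the solvation free energy), but the generic building block such loops start from is bulk ("Dörfler") marking, sketched here with invented per-element indicator values.

```python
def dorfler_mark(indicators, theta=0.5):
    """Return indices of the smallest set of elements whose combined error
    indicators exceed a fraction theta of the total estimated error."""
    order = sorted(range(len(indicators)),
                   key=lambda i: indicators[i], reverse=True)
    total = sum(indicators)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i]
        if acc >= theta * total:
            break
    return marked

eta = [0.05, 0.40, 0.10, 0.30, 0.15]   # hypothetical error indicators
marked = dorfler_mark(eta, theta=0.5)
```

With these values only the two dominant elements are marked; refining them (and re-estimating) is one iteration of the solve-estimate-mark-refine cycle, whose growing meshes are exactly why the multilevel preconditioning discussed in the abstract is needed.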
Refinement for administrative policies
Dekker, M.A.C.; Etalle, S.
2007-01-01
Flexibility of management is an important requisite for access control systems, as it allows users to adapt the access control system in accordance with practical requirements. This paper builds on earlier work where we defined administrative policies for a general class of RBAC models. We present a formal definition of administrative refinement and we show that there is an ordering for administrative privileges which yields administrative refinements of policies. We argue (by giving an examp...
An overview of petroleum refining in Spain is presented (by Repsol YPF) and some views on future trends are discussed. Spain depends heavily on imports. Sub-headings in the article cover: sources of crude imports, investments, and logistics and marketing; detailed data for each are shown diagrammatically. Tables show: (1) economic indicators (e.g. total GDP, vehicle numbers and inflation) for 1998-200; (2) crude oil imports for 1995-2000; (3) oil products balance for 1995-2000; (4) commodities demand, by product; (5) refining in Spain in terms of capacity per region; (6) outlets in Spain and other European countries in 2002 and (7) sales distribution channel by product
The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are discretized with a multigroup approach and the discrete ordinates method, respectively. A set of spatially coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time and memory consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is large in a given region, the mesh has to be refined (h-refinement) or the polynomial basis order increased (p-refinement). This paper addresses the choice between the two types of refinement. Two ways to estimate the error are compared on different benchmarks. Analyzing the differences, an hp-refinement method is proposed and tested. (author)
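The marking logic sketched in this abstract (h-refine where the error is large and the solution is non-smooth, p-refine where it is large but smooth) can be illustrated in a few lines. This is a minimal sketch only; the function name, thresholds, and the scalar smoothness indicator are assumptions for illustration, not taken from the SNATCH solver.

```python
def mark_hp(cells, error, smoothness, err_tol=1e-3, smooth_tol=0.9):
    """Return per-cell actions: 'h' (subdivide), 'p' (raise order), or None.

    `error` and `smoothness` are per-cell dictionaries; the smoothness
    indicator (hypothetical here) is assumed to be near 1 for smooth
    solutions and near 0 near steep gradients or discontinuities.
    """
    actions = {}
    for c in cells:
        if error[c] <= err_tol:
            actions[c] = None          # error acceptable: leave cell alone
        elif smoothness[c] >= smooth_tol:
            actions[c] = 'p'           # smooth solution: raise basis order
        else:
            actions[c] = 'h'           # non-smooth: subdivide the cell
    return actions
```

In a real solver the two branches would be followed by a mesh-update and basis-enrichment pass; here they simply label cells.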
Segmentation of branching vascular structures using adaptive subdivision surface fitting
Kitslaar, Pieter H.; van't Klooster, Ronald; Staring, Marius; Lelieveldt, Boudewijn P. F.; van der Geest, Rob J.
2015-03-01
This paper describes a novel method for segmentation and modeling of branching vessel structures in medical images using adaptive subdivision surface fitting. The method starts with a rough initial skeleton model of the vessel structure. A coarse triangular control mesh consisting of hexagonal rings and dedicated bifurcation elements is constructed from this skeleton. Special attention is paid to ensure that a topologically sound control mesh is created around the bifurcation areas. Then, a smooth tubular surface is obtained from this coarse mesh using a standard subdivision scheme. This subdivision surface is iteratively fitted to the image. During the fitting, the target update locations of the subdivision surface are obtained using a scanline search along the surface normals, finding the maximum gradient magnitude (of the imaging data). In addition to this surface fitting framework, we propose an adaptive mesh refinement scheme. In this step the coarse control mesh topology is updated based on the current segmentation result, enabling adaptation to varying vessel lumen diameters. This enhances the robustness and flexibility of the method and reduces the amount of prior knowledge needed to create the initial skeletal model. The method was applied to publicly available CTA data from the Carotid Bifurcation Algorithm Evaluation Framework, resulting in an average Dice index of 89.2% with the ground truth. Applications of the method to the complex vascular structure of a coronary artery tree in CTA and to MRI images were performed to show the versatility and flexibility of the proposed framework.
Verification of radiation transport codes with unstructured meshes
Confidence in the results of a radiation transport code requires that the code be verified against problems with known solutions. Such verification problems may be generated by means of the method of manufactured solutions. Previously we reported the application of this method to the verification of radiation transport codes for structured meshes, in particular the SCEPTRE code. We extend this work to verification with unstructured meshes and again apply it to SCEPTRE. We report on additional complexities for unstructured mesh verification of transport codes. Refinement of such meshes for error convergence studies is more involved, particularly for tetrahedral meshes. Furthermore, finite element integrations arising from the presence of the streaming operator exhibit different behavior for unstructured meshes than for structured meshes. We verify SCEPTRE with a combination of 'exact' and 'inexact' problems. Errors in the results are consistent with the discretizations, either being limited to roundoff error or displaying the expected rates of convergence with mesh refinement. We also observe behaviors in the results that were difficult to analyze and predict from a strictly theoretical basis, thereby yielding benefits from verification activities beyond demonstrating code correctness. (author)
Namiot, Dmitry
2015-01-01
With the advances in mobile computing technologies and the growth of the Net, mobile mesh networks are going through a set of important evolutionary steps. In this paper, we survey architectural aspects of mobile mesh networks and their use cases and deployment models. Also, we survey challenging areas of mobile mesh networks and describe our vision of promising mobile services. This paper presents a basic introductory material for Masters of Open Information Technologies Lab, interested in m...
Algorithm refinement for stochastic partial differential equations I. linear diffusion
Alexander, F J; Tartakovsky, D M
2002-01-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. Results from a variety of numerical experiments are presented for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a nonstochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except in particle regions away from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
Domingues M. O.
2013-12-01
We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with a mixed hyperbolic-parabolic correction is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.
Pertel, Michael J.
1992-01-01
A table of useful summation formulae is derived, together with a Mathematica package for producing them. The distance distribution in mesh routing networks is derived, and the mean and variance of the distance distribution are computed. A program for computing the distance distribution of any mesh is presented.
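The distance distribution of a rectangular mesh can be computed by brute force over all ordered node pairs. The sketch below assumes Manhattan hop distance, as is standard for mesh routing networks; the helper names are illustrative and unrelated to the Mathematica package mentioned in the abstract.

```python
from collections import Counter
from itertools import product

def mesh_distance_distribution(nx, ny):
    """Histogram of hop distances between all ordered node pairs of an
    nx-by-ny mesh, using the Manhattan (dimension-ordered routing) metric."""
    hist = Counter()
    nodes = list(product(range(nx), range(ny)))
    for (x1, y1), (x2, y2) in product(nodes, repeat=2):
        hist[abs(x1 - x2) + abs(y1 - y2)] += 1
    return hist

def mean_variance(hist):
    """Mean and variance of a distance histogram."""
    n = sum(hist.values())
    mean = sum(d * c for d, c in hist.items()) / n
    var = sum(c * (d - mean) ** 2 for d, c in hist.items()) / n
    return mean, var
```

For a 2x2 mesh the 16 ordered pairs split into 4 at distance 0, 8 at distance 1, and 4 at distance 2, giving a mean distance of 1.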
Three-dimensional h-adaptivity for the multigroup neutron diffusion equations
Wang, Yaqi
2009-04-01
Adaptive mesh refinement (AMR) has been shown to allow solving partial differential equations to significantly higher accuracy at reduced numerical cost. This paper presents a state-of-the-art AMR algorithm applied to the multigroup neutron diffusion equation for reactor applications. In order to follow the physics closely, energy group-dependent meshes are employed. We present a novel algorithm for assembling the terms coupling shape functions from different meshes and show how it can be made efficient by deriving all meshes from a common coarse mesh by hierarchic refinement. Our methods are formulated using conforming finite elements of any order, for any number of energy groups. The spatial error distribution is assessed with a generalization of an error estimator originally derived for the Poisson equation. Our implementation of this algorithm is based on the widely used Open Source adaptive finite element library deal.II and is made available as part of this library's extensively documented tutorial. We illustrate our methods with results for 2-D and 3-D reactor simulations using 2 and 7 energy groups, and using conforming finite elements of polynomial degree up to 6. © 2008 Elsevier Ltd. All rights reserved.
Scandalously Parallelizable Mesh Generation
Bortz, David
2011-01-01
We propose a novel approach which employs random sampling to generate an accurate non-uniform mesh for numerically solving Partial Differential Equation Boundary Value Problems (PDE-BVP's). From a uniform probability distribution U over a 1D domain, we sample M discretizations of size N where M>>N. The statistical moments of the solutions to a given BVP on each of the M ultra-sparse meshes provide insight into identifying highly accurate non-uniform meshes. Essentially, we use the pointwise mean and variance of the coarse-grid solutions to construct a mapping Q(x) from uniformly to non-uniformly spaced mesh-points. The error convergence properties of the approximate solution to the PDE-BVP on the non-uniform mesh are superior to a uniform mesh for a certain class of BVP's. In particular, the method works well for BVP's with locally non-smooth solutions. We present a framework for studying the sampled sparse-mesh solutions and provide numerical evidence for the utility of this approach as applied to a set of e...
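One way to realize the mapping Q(x) described in this abstract is to treat the pointwise variance (or any density derived from the coarse-grid solution statistics) as a mesh density function and invert its cumulative distribution, so that mesh points cluster where the density is large. The sketch below assumes a 1-D domain, strictly positive density samples, and piecewise-linear interpolation; it illustrates the inverse-CDF idea, not the authors' implementation.

```python
import bisect

def build_mapping(xs, density):
    """Return Q mapping uniform parameters u in [0,1] to non-uniformly
    spaced mesh points, via the inverse CDF of `density` sampled at `xs`.

    Assumes xs is sorted and density is strictly positive (hypothetical
    inputs; in the sampled-mesh setting density would come from the
    pointwise variance of coarse-grid solutions)."""
    # Cumulative trapezoidal integral of the density, normalized to a CDF.
    cdf = [0.0]
    for i in range(1, len(xs)):
        cdf.append(cdf[-1] + 0.5 * (density[i] + density[i - 1]) * (xs[i] - xs[i - 1]))
    total = cdf[-1]
    cdf = [c / total for c in cdf]

    def Q(u):
        # Locate the CDF interval containing u, then invert linearly.
        j = bisect.bisect_left(cdf, u)
        j = min(max(j, 1), len(xs) - 1)
        t = (u - cdf[j - 1]) / (cdf[j] - cdf[j - 1])
        return xs[j - 1] + t * (xs[j] - xs[j - 1])

    return Q
```

With a constant density the mapping reduces to the identity, i.e. a uniform mesh; peaks in the density pull mesh points toward them.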
Bucki, Marek; Payan, Yohan; 10.1016/j.media.2010.02.003
2010-01-01
Finite Element mesh generation remains an important issue for patient specific biomechanical modeling. While some techniques make automatic mesh generation possible, in most cases, manual mesh generation is preferred for better control over the sub-domain representation, element type, layout and refinement that it provides. Yet, this option is time consuming and not suited for intraoperative situations where model generation and computation time is critical. To overcome this problem we propose a fast and automatic mesh generation technique based on the elastic registration of a generic mesh to the specific target organ in conjunction with element regularity and quality correction. This Mesh-Match-and-Repair (MMRep) approach combines control over the mesh structure along with fast and robust meshing capabilities, even in situations where only partial organ geometry is available. The technique was successfully tested on a database of 5 pre-operatively acquired complete femora CT scans, 5 femoral heads partially...
Adaptive Anisotropic Petrov-Galerkin Methods for First Order Transport Equations
Dahmen, W.; Kutyniok, G.; Lim, W. -Q; Schwab, C.; Welper, G.
2016-01-01
This paper builds on recent developments of adaptive methods for linear transport equations based on certain stable variational formulations of Petrov-Galerkin type. The variational formulations allow us to employ meshes with cells of arbitrary aspect ratios. We develop a refinement scheme generating highly anisotropic partitions that is inspired by shearlet systems. We establish approximation rates for N-term approximations from corresponding piecewise polynomials for certain compact cartoon...
Optimal Throughput and Self-adaptability of Robust Real-Time IEEE 802.15.4 MAC for AMI Mesh Network
Shabani, Hikma; Mohamud Ahmed, Musse; Khan, Sheroz; Hameed, Shahab Ahmed; Hadi Habaebi, Mohamed
2013-12-01
A smart grid refers to a modernization of the electricity system that brings intelligence, reliability, efficiency and optimality to the power grid. To provide automated and widely distributed energy delivery, the smart grid will be characterized by a two-way flow of electricity and information between energy suppliers and their customers. Thus, the smart grid is a power grid that integrates data communication networks which provide the collected and analysed data at all levels in real time. The performance of communication systems is therefore vital for the success of the smart grid. Thanks to its low cost, low power, low data rate, short range, simplicity and license-free spectrum, ZigBee/IEEE 802.15.4 makes wireless sensor networks (WSNs) the most suitable wireless technology for smart grid applications. Unfortunately, almost all ZigBee channels overlap with wireless local area network (WLAN) channels, resulting in severe performance degradation due to interference. In order to improve the performance of communication systems, this paper proposes an optimal throughput and self-adaptability scheme for ZigBee/IEEE 802.15.4 in the smart grid.
Urogynecologic Surgical Mesh Implants
... be used for urogynecologic procedures, including repair of pelvic organ prolapse (POP) and stress urinary incontinence (SUI). It is ... associated with surgical mesh for transvaginal repair of pelvic organ prolapse 513(e) Proposed Order for Reclassification of Surgical ...
Botsch, Mario; Pauly, Mark; Alliez, Pierre; Levy, Bruno
2010-01-01
Geometry processing, or mesh processing, is a fast-growing area of research that uses concepts from applied mathematics, computer science, and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment, and classical computer-aided design, to biomedical computing, reverse engineering, and scientific computing. Over the last several years, triangle meshes have become increasingly popular,
Iqbal, Amer
2012-01-01
We establish a relation between the refined Hopf link invariant and the S-matrix of the refined Chern-Simons theory. We show that the refined open string partition function corresponding to the Hopf link, calculated using the refined topological vertex, when expressed in the basis of Macdonald polynomials gives the S-matrix of the refined Chern-Simons theory.
A parallel adaptive finite difference algorithm for petroleum reservoir simulation
Hoang, Hai Minh
2005-07-01
Adaptive finite difference methods for problems arising in the simulation of flow in porous media are considered. Such methods have proven useful for overcoming limitations of computational resources and improving the resolution of numerical solutions to a wide range of problems. Locally refining the computational mesh where it is needed to improve solution accuracy yields better resolution and a more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three parts. First we explain the methodology and intricacies of the adaptive finite difference scheme. We then extend a cell-centered finite difference discretization to a multilevel hierarchy of refined grids, and finally we deploy the algorithm on a parallel computer. The results in this work show that the approach presented is robust and stable, demonstrating increased solution accuracy due to local refinement and reduced consumption of computing resources. (Author)
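The Berger-Oliger-style time subcycling described above (each finer grid level advances at a smaller time step, then synchronizes with its parent once both reach the same time) has a simple recursive structure, sketched below. `advance` and `synchronize` are placeholder callbacks for the actual solver operations; all names and the fixed refinement ratio are illustrative assumptions, not taken from the thesis.

```python
def subcycle(levels, lev, t, dt, ratio=2,
             advance=lambda lev, t, dt: None,
             synchronize=lambda coarse, fine: None):
    """Advance grid level `lev` (and, recursively, all finer levels) by dt.

    `levels` is the total number of refinement levels; `ratio` is the
    time-step refinement ratio between consecutive levels.
    """
    advance(lev, t, dt)                  # one step on this level
    if lev + 1 < levels:
        for k in range(ratio):           # finer level takes `ratio` substeps
            subcycle(levels, lev + 1, t + k * dt / ratio, dt / ratio,
                     ratio, advance, synchronize)
        # Flux correction / conservation fix-up once times coincide.
        synchronize(lev, lev + 1)
```

With three levels and a ratio of 2, one coarse step triggers two steps on level 1 and four on level 2, matching the usual AMR work pattern.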
Tropical cyclone activity in nested regional and global grid-refined simulations
Hashimoto, Atsushi; Done, James M.; Fowler, Laura D.; Bruyère, Cindy L.
2016-07-01
The capacity of two different grid refinement methods—two-way limited area nesting and variable-mesh refinement—to capture Northwest Pacific Tropical Cyclone (TC) activity is compared in a suite of single-year continuous simulations. Simulations are conducted with and without regional grid refinement from approximately 100-20 km grid spacing over the Northwest Pacific. The capacity to capture smooth transitions between the two resolutions varies by grid refinement method. Nesting shows adverse influence of the nest boundary, with the boundary evident in seasonal average cloud patterns and precipitation, and contortions of the seasonal mean mid-latitude jet. Variable-mesh, on the other hand, reduces many of these effects and produced smoother cloud patterns and mid-latitude jet structure. Both refinement methods lead to increased TC frequency in the region of refinement compared to simulations without grid refinement, although nesting adversely affects TC tracks through the contorted mid-latitude jet. The variable-mesh approach leads to enhanced TC activity over the Southern Indian and Southwest Pacific basins, compared to a uniform mesh simulation. Nesting, on the other hand, does not appear to influence basins outside the region of grid refinement. This study provides evidence that variable mesh may bring benefits to seasonal TC simulation over traditional nesting, and demonstrates capacity of variable mesh refinement for regional climate simulation.
Godunov methods and adaptive algorithms for unsteady fluid dynamics
Bell, J.; Colella, P.; Trangenstein, J.; Welcome, M.
1988-06-01
Higher-order versions of Godunov's method have proven highly successful for high-Mach-number compressible flow. One goal of the research being described in this paper is to extend the range of applicability of these methods to more general systems of hyperbolic conservation laws such as magnetohydrodynamics, flow in porous media and finite deformations of elastic-plastic solids. A second goal is to apply Godunov methods to problems involving more complex physical and solution geometries than can be treated on a simple rectangular grid. This requires the introduction of various adaptive methodologies: global moving and body-fitted meshes, local adaptive mesh refinement, and front tracking. 11 refs., 6 figs.
Hidalgo, Victor Hugo; Luo, Xianwu; Escaler Puigoriol, Francesc Xavier; An, Yu; Valencia, Esteban Alejandro
2015-01-01
Commercial programs are widely used to generate unstructured and structured meshes for CFD simulations. However, grids and meshes based on free and open source software (FOSS) give researchers and engineers the possibility to adapt and improve the meshing process for special study cases with high Reynolds numbers, such as unsteady partial cavitating flows. In order to improve grid quality, the FOSS tool GMSH has been used to generate three types of grid: unstructured hexahedral mesh, hybrid mesh and st...
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding assess URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Algebraic mesh quality metrics
KNUPP,PATRICK
2000-04-24
Quality metrics for structured and unstructured mesh generation are placed within an algebraic framework to form a mathematical theory of mesh quality metrics. The theory, based on the Jacobian and related matrices, provides a means of constructing, classifying, and evaluating mesh quality metrics. The Jacobian matrix is factored into geometrically meaningful parts. A nodally-invariant Jacobian matrix can be defined for simplicial elements using a weight matrix derived from the Jacobian matrix of an ideal reference element. Scale- and orientation-invariant algebraic mesh quality metrics are defined. The singular value decomposition is used to study relationships between metrics. Equivalence of the element condition number and mean ratio metrics is proved. Condition number is shown to measure the distance of an element to the set of degenerate elements. Algebraic measures for skew, length ratio, shape, volume, and orientation are defined abstractly, with specific examples given. Combined metrics for shape-volume and shape-volume-orientation are algebraically defined and examples of such metrics are given. Algebraic mesh quality metrics are extended to non-simplicial elements. A series of numerical tests verify the theoretical properties of the metrics defined.
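As an illustration of the Jacobian-based framework, the condition-number quality of a 2-D triangle can be computed relative to an ideal equilateral reference element via a weight matrix, as the abstract describes. The sketch below uses the Frobenius norm and the common normalization quality = 2 / (|T|_F |T^{-1}|_F), which equals 1 for an equilateral triangle and tends to 0 as the element degenerates; the exact form is a standard convention assumed here, not reproduced from the paper.

```python
import math

def triangle_quality(p0, p1, p2):
    """Condition-number shape quality of triangle (p0, p1, p2) in [0, 1]."""
    # Jacobian A of the physical element: columns are the edge vectors.
    a11, a21 = p1[0] - p0[0], p1[1] - p0[1]
    a12, a22 = p2[0] - p0[0], p2[1] - p0[1]
    # Inverse of the equilateral reference Jacobian W = [[1, 1/2], [0, sqrt(3)/2]].
    w11, w12 = 1.0, -1.0 / math.sqrt(3.0)
    w21, w22 = 0.0, 2.0 / math.sqrt(3.0)
    # Weighted Jacobian T = A @ W^{-1} maps the ideal element to this one.
    t11 = a11 * w11 + a12 * w21
    t12 = a11 * w12 + a12 * w22
    t21 = a21 * w11 + a22 * w21
    t22 = a21 * w12 + a22 * w22
    det = t11 * t22 - t12 * t21
    if det <= 0.0:
        return 0.0                      # degenerate or inverted element
    frob2 = t11 ** 2 + t12 ** 2 + t21 ** 2 + t22 ** 2
    # In 2-D, |T^{-1}|_F = |T|_F / |det T|, so kappa_F = |T|_F^2 / det.
    return 2.0 * det / frob2
```

An equilateral triangle scores exactly 1, while a right isosceles triangle scores sqrt(3)/2 (about 0.866), reflecting its mild deviation from the ideal shape.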
GIZMO: A New Class of Accurate, Mesh-Free Hydrodynamic Simulation Methods
Hopkins, Philip F
2014-01-01
We present and study two new Lagrangian numerical methods for solving the equations of hydrodynamics, in a systematic comparison with moving-mesh, SPH, and non-moving grid methods. The new methods are designed to capture many advantages of both smoothed-particle hydrodynamics (SPH) and grid-based or adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume 'overlap.' We implement and test a parallel, second-order version of the method with coupled self-gravity & cosmological integration, in the code GIZMO: this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require 'artificial diffusion' terms; and allows fluid elements to move with the flow so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods a...
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
Documentation for MeshKit - Reactor Geometry (&mesh) Generator
Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-09-30
This report gives documentation for using MeshKit's Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in the modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne's SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v 2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of MeshKit source code, tools and other available algorithms is also presented for developers to extend and add new algorithms to MeshKit. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousand axially varying material properties of instrumentation pins and other interstitial meshes.
An hp-adaptivity and error estimation for hyperbolic conservation laws
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
(no author listed)
2011-01-01
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which models an analysis domain by a small number of large-sized subdomains and discretises subdomain boundaries only. In a subdomain, all fields of state variables including displacement, stress, velocity and acceleration are semi-analytical, and the kinetic energy, strain energy and energy error are all integrated semi-analytically. These advantages are taken in this study to develop an a posteriori h-hierarchical adaptive SBFEM for transient elastodynamic problems using a mesh refinement procedure which subdivides subdomains. Because only a small number of subdomains are subdivided, mesh refinement is very simple and efficient, and mesh mapping to transfer state variables from an old mesh to a new one is also very simple but accurate. Two 2D examples with stress wave propagation were modelled. The results show that the developed method is capable of capturing the propagation of steep stress regions and calculating accurate dynamic responses, using only a fraction of the degrees of freedom required by the adaptive finite element method.
韩玉琪; 张常贤
2015-01-01
The Navier-Stokes equations were solved with an adaptively-refined Cartesian grid approach; the grid was stored in a quad-tree data structure, and solid wall boundary conditions were introduced by a ghost body cell method. Once the geometry is specified, grid generation, refinement and the flow field solution proceed automatically. Supersonic flow around a NACA0012 airfoil with a shock wave and subsonic flow around double NACA0012 airfoils with a recirculation region were numerically simulated and compared with published results based on stretched Cartesian grids and unstructured grids. The results show that compressible viscous flows can be adequately simulated with an adaptively-refined Cartesian grid; the adaptive technique dramatically decreases the number of cells compared with a stretched Cartesian grid, but the current approach is less efficient at resolving the boundary layer than an unstructured grid, and needs further development.
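A minimal quad-tree cell refinement loop in the spirit of the adaptive Cartesian approach described above might look like the following. The class and function names are illustrative, and the refinement indicator `flag` stands in for a real flow-based sensor (e.g. a gradient or shock detector); this is a sketch of the data structure, not the solver in the paper.

```python
class Cell:
    """One square cell of a quad-tree Cartesian grid."""

    def __init__(self, x, y, size, level=0):
        self.x, self.y = x, y            # lower-left corner
        self.size, self.level = size, level
        self.children = []               # empty for a leaf cell

    def refine(self):
        """Split this leaf into four equal children."""
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h, self.level + 1)
                         for i in (0, 1) for j in (0, 1)]

def adapt(cell, flag, max_level=4):
    """Recursively refine leaf cells where the indicator `flag` is true."""
    if cell.children:
        for c in cell.children:
            adapt(c, flag, max_level)
    elif cell.level < max_level and flag(cell):
        cell.refine()
        for c in cell.children:          # newly created leaves may refine further
            adapt(c, flag, max_level)

def leaves(cell):
    """Collect the leaf cells, i.e. the active computational grid."""
    if not cell.children:
        return [cell]
    return [l for c in cell.children for l in leaves(c)]
```

A real solver would attach conserved variables to each leaf and rebuild face connectivity after each adaptation pass; the `max_level` cap prevents unbounded refinement.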
Mesh Resolution Effect on 3D RANS Turbomachinery Flow Simulations
Yershov, Sergiy
2016-01-01
The paper presents a study of the effect of mesh refinement on the numerical results of 3D RANS computations of turbomachinery flows. The CFD solver F, which is based on a second-order accurate ENO scheme, is used in this study. A simplified multigrid algorithm and local time stepping reduce the computational time. Flow computations are performed for a number of turbine and compressor cascades and stages. In all flow cases, successively refined meshes of H-type, with approximate orthogonalization near the solid walls, were generated. The results are compared in order to estimate both their mesh convergence and their ability to resolve the transonic flow pattern. It is concluded that for a thorough study of the fine phenomena of 3D turbomachinery flows it makes sense to use computational meshes with from several million up to several hundred million cells per single turbomachinery blade channel, while for industrial computations, a mesh of about or less than one mil...
A general boundary capability embedded in an orthogonal mesh
Hewett, D.W.; Yu-Jiuan Chen [Lawrence Livermore National Lab., CA (United States)]
1995-07-01
The authors describe how they hold onto orthogonal mesh discretization when dealing with curved boundaries. Special difference operators were constructed to approximate numerical zones split by the domain boundary; the operators are particularly simple for this rectangular mesh. The authors demonstrated that this simple numerical approach, termed Dynamic Alternating Direction Implicit, turned out to be considerably more efficient than more complex grid-adaptive algorithms that were tried previously.
Marquez, Maria Jose
2012-01-01
Calibration is nowadays one of the most important processes involved in the extraction of valuable data from measurements. The current availability of an optimum data cube measured from a heterogeneous set of instruments and surveys relies on a systematic and robust approach in the corresponding measurement analysis. In that sense, the inference of configurable instrument parameters can considerably increase the quality of the data obtained. This paper proposes a solution based on Bayesian inference for the estimation of the configurable parameters relevant to the signal to noise ratio. The information obtained by the resolution of this problem can be handled in a very useful way if it is considered as part of an adaptive loop for the overall measurement strategy, in such a way that the outcome of this parametric inference leads to an increase in the knowledge of a model comparison problem in the context of the measurement interpretation. The context of this problem is the multi-wavelength measurements coming...
Algorithm of Adaptive Path Planning for Automated Placement on Meshed Surface
熊文磊; 肖军; 王显峰; 李俊斐; 黄志军
2013-01-01
This paper analyzes the causes of prepreg distortion and discusses its influence on the placeability requirements of placement trajectories. A new algorithm for geodesic generation on meshed surfaces is proposed, based on the definition of a geodesic, with features such as high efficiency and high accuracy. Building on it, a path-planning algorithm is proposed that considers both the placeability of the prepreg and the strength distribution in the product, giving it the ability to adapt to surfaces. The algorithm first computes the maximum geodesic curvature allowed for the central path from the prepreg tape width and applies it to trajectory design, so that the resulting trajectory preserves good placeability while satisfying the strength-distribution requirements of the product. Finally, path planning for an S-shaped inlet was carried out with an SQL Server database and VC++; the discrete trajectory points were fitted to a curve in CATIA and placement trials were performed, verifying the validity and effectiveness of the geodesic-generation and trajectory-generation algorithms.
A parallel direct solver for the self-adaptive hp Finite Element Method
Paszyński, Maciej R.
2010-03-01
In this paper we present a new parallel multi-frontal direct solver dedicated to the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates, in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes memory usage by de-allocating partial LU factorizations computed during the elimination stage and recomputing them for the backward substitution stage, utilizing only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D Direct Current (DC) borehole resistivity measurement simulation problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on a highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p = 1 to p = 9. From the presented experiments it follows that the parallel solver scales well up to the maximum number of utilized processors. The limit for the solver's scalability is the maximum sequential part of the algorithm: the computation of the partial LU factorizations over the longest path from the root of the elimination tree down to the deepest leaf. © 2009 Elsevier Inc. All rights reserved.
Analysis and development of spatial hp-refinement methods for solving the neutron transport equation
The different neutronic parameters have to be calculated with higher accuracy in order to design 4th-generation reactor cores. As memory and computation time are limited, adaptive methods are a way to solve the neutron transport equation. The neutron flux, the solution of this equation, depends on energy, angle and space, and these variables are discretized successively: energy with a multigroup approach, treating the relevant quantities as constant within each group, and angle by a collocation method called the SN approximation. Once the energy and angle variables are discretized, a system of spatially-dependent hyperbolic equations has to be solved. Discontinuous finite elements are used to make the development of hp-refinement methods possible. The accuracy of the solution can thus be improved by spatial refinement (h-refinement), which subdivides a cell into sub-cells, or by order refinement (p-refinement), which increases the order of the polynomial basis. In this thesis, the properties of these methods are analyzed, showing the importance of the regularity of the solution for choosing the type of refinement. Two error estimators are used to drive the refinement process: the first requires strong regularity hypotheses (an analytical solution), whereas the second assumes only the minimal hypotheses required for the solution to exist. The two estimators are compared on benchmarks whose analytic solutions are known by the method of manufactured solutions, so that the behaviour of the solution with respect to regularity can be studied. This leads to an hp-refinement method using the two estimators. A comparison is then made with other existing methods on simplified as well as realistic benchmarks from nuclear cores. These adaptive methods considerably reduce the computational cost and memory footprint. To further improve these two points, an approach with energy-dependent meshes is proposed. Actually, as the
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R. [Sandia National Labs., Albuquerque, NM (United States)
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
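The SPMD decomposition described in the ALEGRA abstract, in which each processor receives a sub-mesh with approximately the same number of elements, can be sketched as a toy 1D partition (function and parameter names are invented; ALEGRA's actual partitioner handles unstructured 3D meshes):

```python
# Illustrative SPMD-style decomposition: split a list of element ids into
# nprocs contiguous sub-meshes whose sizes differ by at most one element.

def decompose(elements, nprocs):
    """Return nprocs chunks of near-equal size covering all elements in order."""
    n = len(elements)
    base, extra = divmod(n, nprocs)      # extra processors get one element more
    chunks, start = [], 0
    for p in range(nprocs):
        size = base + (1 if p < extra else 0)
        chunks.append(elements[start:start + size])
        start += size
    return chunks
```

Real decompositions also minimize the inter-processor boundary (communication), which this length-only sketch ignores.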
Wang, Xinheng
2008-01-01
Wireless telemedicine using GSM and GPRS technologies can only provide low-bandwidth connections, which makes it difficult to transmit images and video. Satellite or 3G wireless transmission provides greater bandwidth, but the running costs are high. Wireless local area networks (WLANs) appear promising, since they can supply high bandwidth at low cost. However, WLAN technology has limitations, such as coverage. A new wireless networking technology named the wireless mesh network (WMN) overcomes some of the limitations of the WLAN. A WMN combines the characteristics of both a WLAN and ad hoc networks, thus forming an intelligent, large-scale broadband wireless network. These features are attractive for telemedicine and telecare because of the ability to provide data, voice and video communications over a large area. One successful wireless telemedicine project which uses wireless mesh technology is the Emergency Room Link (ER-LINK) in Tucson, Arizona, USA. There are three key characteristics of a WMN: self-organization, including self-management and self-healing; dynamic changes in network topology; and scalability. What we may now see is a shift from mobile communication and satellite systems for wireless telemedicine to the use of wireless networks based on mesh technology, since the latter are very attractive in terms of cost, reliability and speed. PMID:19047448
Solving Fluid Flow Problems on Moving and Adaptive Overlapping Grids
Henshaw, W
2005-07-28
Solution of fluid dynamics problems on overlapping grids will be discussed. An overlapping grid consists of a set of structured component grids that cover a domain and overlap where they meet. Overlapping grids provide an effective approach for developing efficient and accurate approximations for complex, possibly moving geometry. Topics to be addressed include the reactive Euler equations, the incompressible Navier-Stokes equations and elliptic equations solved with a multigrid algorithm. Recent developments coupling moving grids and adaptive mesh refinement and preliminary parallel results will also be presented.
On Optimal Bilinear Quadrilateral Meshes
D'Azevedo, E.
1998-10-26
The novelty of this work is in presenting interesting error properties of two types of asymptotically optimal quadrilateral meshes for bilinear approximation. The first type of mesh has an error equidistributing property where the maximum interpolation error is asymptotically the same over all elements. The second type has faster than expected super-convergence property for certain saddle-shaped data functions. The super-convergent mesh may be an order of magnitude more accurate than the error equidistributing mesh. Both types of mesh are generated by a coordinate transformation of a regular mesh of squares. The coordinate transformation is derived by interpreting the Hessian matrix of a data function as a metric tensor. The insights in this work may have application in mesh design near known corner or point singularities.
On Optimal Bilinear Quadrilateral Meshes
D'Azevedo, E.
2000-03-17
The novelty of this work is in presenting interesting error properties of two types of asymptotically "optimal" quadrilateral meshes for bilinear approximation. The first type of mesh has an error equidistributing property where the maximum interpolation error is asymptotically the same over all elements. The second type has a faster than expected "super-convergence" property for certain saddle-shaped data functions. The "super-convergent" mesh may be an order of magnitude more accurate than the error equidistributing mesh. Both types of mesh are generated by a coordinate transformation of a regular mesh of squares. The coordinate transformation is derived by interpreting the Hessian matrix of a data function as a metric tensor. The insights in this work may have application in mesh design near corner or point singularities.
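The central device in the two D'Azevedo abstracts, interpreting the Hessian of a data function as a metric tensor, can be sketched numerically. The sketch below (function names invented) makes the metric positive definite by taking absolute eigenvalues, a standard device in metric-based meshing, and measures edge lengths in that metric; the papers' actual coordinate transformation is more involved:

```python
import numpy as np

# Sketch: turn a symmetric Hessian into an SPD metric and measure edges in it.
# The absolute-eigenvalue trick handles saddle-shaped data (indefinite Hessians).

def hessian_metric(H):
    """Return the SPD metric M = R |Lambda| R^T from a symmetric Hessian H."""
    lam, R = np.linalg.eigh(H)
    return R @ np.diag(np.abs(lam)) @ R.T

def metric_length(M, edge):
    """Length of an edge vector measured in the metric: sqrt(e^T M e)."""
    e = np.asarray(edge, dtype=float)
    return float(np.sqrt(e @ M @ e))
```

An adaptive mesher would then equidistribute metric edge lengths, which in Euclidean terms concentrates small elements where the Hessian (hence interpolation error) is large.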
Danilov, A. A.; Salamatova, V. Yu; Vassilevski, Yu V.
2012-12-01
Here, a workflow for efficient high-resolution numerical modeling of bioimpedance measurements is suggested that includes 3D image segmentation, adaptive mesh generation, finite-element discretization, and analysis of the simulation results. Using adaptive unstructured tetrahedral meshes makes it possible to decrease the number of mesh elements significantly while keeping model accuracy. The numerical results illustrate the current, potential, and sensitivity field distributions for a conventional Kubicek-like scheme of bioimpedance measurement, using a segmented geometric model of the human torso based on Visible Human Project data. The whole-body VHP man computational mesh contains 574 thousand vertices and 3.3 million tetrahedra.
Vickers, Trevor
1992-01-01
On the Refinement Calculus gives one view of the development of the refinement calculus and its attempt to bring together - among other things - Z specifications and Dijkstra's programming language. It is an excellent source of reference material for all those seeking the background and mathematical underpinnings of the refinement calculus.
Refining: restructuring for profit
This article examines the options for restructuring the under-performing downstream part of the oil industry to improve profitability for example by integrating the refining and marketing businesses. The future outlook for the refining industry, the shareholders, the emergence of independent downstream companies, and internal refining operations are discussed
Adaptive computational methods for SSME internal flow analysis
Oden, J. T.
1986-01-01
Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods), in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for the steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.
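Of the three families above, the h-method of item (2) is the easiest to sketch: elements whose local error indicator exceeds a tolerance are bisected. The toy 1D version below (names invented) illustrates the control flow only; real h-methods also manage hanging nodes and conformity:

```python
# Toy 1D h-refinement: bisect every interval element whose local error
# indicator exceeds the tolerance, leave the rest untouched.

def h_refine(elements, error, tol):
    """elements: list of (a, b) intervals; error: callable on an interval."""
    out = []
    for a, b in elements:
        if error((a, b)) > tol:
            m = 0.5 * (a + b)
            out.extend([(a, m), (m, b)])   # bisect the offending element
        else:
            out.append((a, b))
    return out
```

Iterating `h_refine` until no element is flagged yields a mesh whose local error is everywhere below the tolerance, the basic loop of any h-adaptive scheme.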
Notes on the Mesh Handler and Mesh Data Conversion
At the outset of the development of the thermal-hydraulic code (THC), efforts were made to utilize recent technology from computational fluid dynamics. Among many options, the unstructured mesh approach was adopted to alleviate the restrictions of the grid handling system. As a natural consequence, a mesh handler (MH) was developed to manipulate the complex mesh data from the mesh generator. The mesh generator Gambit was chosen at the beginning of the development of the code, but a new mesh generator, Pointwise, was later introduced to provide more flexible mesh generation capability. The open source code Paraview was chosen as a post-processor, which can handle unstructured as well as structured mesh data. The overall data processing system for THC is shown in Figure-1. There are various file formats for saving mesh data on permanent storage media; a couple of dozen file formats are found even in the above-mentioned programs. A competent mesh handler should be able to import or export mesh data in as many formats as possible. In reality, however, two aspects make this competence difficult to achieve. The first is the time and effort required to program the interface code, and the second, even more difficult one, is the fact that many mesh data file formats are proprietary. In this paper, some experience from the development of the format conversion programs is presented. The file formats involved are the Gambit neutral format, the Ansys-CFX grid file format, the VTK legacy file format, the Nastran format and CGNS
Gamra: Simple meshing for complex earthquakes
Landry, Walter; Barbot, Sylvain
2016-05-01
The static offsets caused by earthquakes are well described by elastostatic models with a discontinuity in the displacement along the fault. A traditional approach to model this discontinuity is to align the numerical mesh with the fault and solve the equations using finite elements. However, this distorted mesh can be difficult to generate and update. We present a new numerical method, inspired by the Immersed Interface Method (Leveque and Li, 1994), for solving the elastostatic equations with embedded discontinuities. This method has been carefully designed so that it can be used on parallel machines on an adapted finite difference grid. We have implemented this method in Gamra, a new code for earth modeling. We demonstrate the correctness of the method with analytic tests, and we demonstrate its practical performance by solving a realistic earthquake model to extremely high precision.
Gamra: Simple Meshes for Complex Earthquakes
Landry, Walter
2016-01-01
The static offsets caused by earthquakes are well described by elastostatic models with a discontinuity in the displacement along the fault. A traditional approach to model this discontinuity is to align the numerical mesh with the fault and solve the equations using finite elements. However, this distorted mesh can be difficult to generate and update. We present a new numerical method, inspired by the Immersed Interface Method, for solving the elastostatic equations with embedded discontinuities. This method has been carefully designed so that it can be used on parallel machines on an adapted finite difference grid. We have implemented this method in Gamra, a new code for earth modelling. We demonstrate the correctness of the method with analytic tests, and we demonstrate its practical performance by solving a realistic earthquake model to extremely high precision.
OPTIMIZING EUCALYPTUS PULP REFINING
Vail Manfredi
2004-01-01
This paper discusses the refining of bleached eucalyptus kraft pulp (BEKP). Pilot plant tests were carried out to optimize the refining process and to identify the effects of refining variables on final paper quality and process costs. The following parameters are discussed: pulp consistency, disk pattern design, refiner speed, energy input, refiner configuration (parallel or serial) and refining intensity. The effects of refining on pulp fibers were evaluated against pulp quality properties, such as physical strengths, bulk, opacity and porosity, as well as interactions with the papermaking process, such as paper machine runnability, paper breaks and refining control. The results showed that process optimization, considering pulp quality and refining costs, was obtained when eucalyptus pulp was refined at the lowest intensity and the highest pulp consistency possible. Changes in the operational refining conditions have the highest impact on total energy requirements (costs) without any significant effect on final paper properties. It was also observed that classical ways of controlling the industrial operation, such as those based on drainage measurements, do not represent the best alternative for maximizing either the final paper properties or the paper machine runnability.
Paszyński, Maciej R.
2013-04-01
This paper describes a direct solver algorithm for a sequence of finite element meshes that are h-refined towards one or several point singularities. For such a sequence of grids, the solver delivers linear computational cost O(N) in terms of CPU time and memory with respect to the number of unknowns N. The linear computational cost is achieved by utilizing the recursive structure provided by the sequence of h-adaptive grids with a special construction of the elimination tree that allows for reutilization of previously computed partial LU (or Cholesky) factorizations over the entire unrefined part of the computational mesh. The reutilization technique reduces the computational cost of the entire sequence of h-refined grids from O(N2) down to O(N). Theoretical estimates are illustrated with numerical results on two- and three-dimensional model problems exhibiting one or several point singularities. © 2013 Elsevier Ltd. All rights reserved.
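The reuse idea behind the O(N) result above, keeping partial factorizations valid over the unrefined part of the mesh and recomputing only along the path from a refined element to the root of the elimination tree, can be sketched with a toy tree (class and function names are invented; the real solver stores actual partial LU factors, not a boolean flag):

```python
# Sketch of factorization reuse over an elimination tree under h-refinement.
# "work" counts how many nodes actually get (re)factorized.

class EliminationNode:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self
        self.factorized = False     # stand-in for a cached partial LU factor

def factorize(node):
    """Post-order factorization; skips subtrees whose factors are still valid."""
    work = 0
    if node.factorized:
        return work                 # whole subtree reused at no cost
    for c in node.children:
        work += factorize(c)
    node.factorized = True          # stand-in for computing the partial LU
    return work + 1

def refine_leaf(leaf, k):
    """h-refine: attach k children to a leaf, invalidate the path to the root."""
    leaf.children = [EliminationNode(f"{leaf.name}.{i}") for i in range(k)]
    for c in leaf.children:
        c.parent = leaf
    n = leaf
    while n is not None:            # only the ancestors need re-factorization
        n.factorized = False
        n = n.parent
```

After one refinement, only the new children and the ancestors of the refined leaf are refactorized; every other subtree is reused, which is the mechanism that brings the cost of the whole refinement sequence down.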
Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme
McMahon, Sean
The purpose of this thesis was to develop a code that could be used to develop a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh whose smallest length scales equal 2-3 flamelet lengths. The code development linking Chemkin for chemical kinetics to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched the experimental observations; however, the simulation was unable to progress past the formation of the triple point.
Adaptive and Iterative Methods for Simulations of Nanopores with the PNP-Stokes Equations
Mitscha-Baude, Gregor; Tulzer, Gerhard; Heitzinger, Clemens
2016-01-01
We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computation overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equ...
VPN Mesh in Industrial Networking
Berndtsson, Andreas
2013-01-01
This thesis report describes the process and presents the results gained while evaluating available VPN mesh solutions and equipment for integration into industrial systems. The task was divided into several sub-steps: summarize the previous work done in the VPN mesh area, evaluate the available VPN mesh solutions, verify that the equipment of interest complies with the criteria set by ABB, and lastly verify that the equipment can be integrated transparently into already running systems. The resu...
Conforming restricted Delaunay mesh generation for piecewise smooth complexes
Engwirda, Darren
2016-01-01
A Frontal-Delaunay refinement algorithm for mesh generation in piecewise smooth domains is described. Built using a restricted Delaunay framework, this new algorithm combines a number of novel features, including: (i) a consistent, conforming restricted Delaunay representation for domains specified as a (non-manifold) collection of piecewise smooth surface patches and curve constraints, (ii) a `protection' strategy for domains containing 1-dimensional features that meet at sharply acute angle...
Planet-disc interaction on a freely moving mesh
Munoz, Diego J; Springel, Volker; Hernquist, Lars
2014-01-01
General-purpose, moving-mesh schemes for hydrodynamics have opened the possibility of combining the accuracy of grid-based numerical methods with the flexibility and automatic resolution adaptivity of particle-based methods. Due to their supersonic nature, Keplerian accretion discs are in principle a very attractive system for applying such freely moving mesh techniques. However, the high degree of symmetry of simple accretion disc models can be difficult to capture accurately by these methods, due to the generation of geometric grid noise and associated numerical diffusion, which is absent in polar grids. To explore these and other issues, in this work we study the idealized problem of two-dimensional planet-disc interaction with the moving-mesh code AREPO. We explore the hydrodynamic evolution of discs with planets through a series of numerical experiments that vary the planet mass, the disc viscosity and the mesh resolution, and compare the resulting surface density, vortensity field and tidal torque with ...
Mesh quality improvement for SciDAC applications
Accurate and efficient numerical solution of partial differential equations requires well-formed meshes that are non-inverted, smooth, well-shaped, oriented, and size-adapted. The Mesquite mesh quality improvement toolkit is a software library that applies optimization algorithms to create well-formed meshes via node movement. Mesquite can be run standalone using drivers or called directly from an application code. Mesquite can play an essential role in the SLAC accelerator design program as a component in automatic shape optimization software and in manufacturing defect-correction studies to smoothly deform meshes in response to geometric domain deformations guided by the optimization of design parameters. Mesquite has also been applied to problems in fusion, biology, and propellant burn studies
Synthesized Optimization of Triangular Mesh
HU Wenqiang; YANG Wenyu
2006-01-01
Triangular meshes are often used to describe geometric objects as computational models in digital manufacturing, so a mesh model with both uniform triangle shape and excellent geometric shape is desired. In practice, however, optimizing the triangle shape often conflicts with optimizing the geometric shape. In this paper, a synthesized optimization algorithm is presented that subdivides triangles to achieve a trade-off between the geometric and triangular shape optimization of the mesh model. The resulting mesh has both uniform triangle shape and excellent topology.
Adaptive Boundary Elements and Error Estimation for Elastic Problems
Jingguo Qu
2014-02-01
Traditionally, when elastic problems are solved, element grids must be plotted repeatedly and the computed results analyzed according to diverse precision requirements. To address the shortcomings of this process, a new method of error estimation based on the H-R adaptive boundary element method is suggested in this paper. Based on the discrete meshes generated during H-R adaptive refinement, the solution error is estimated from the interpolation residue. In addition, the method is easy to program, and is carried out by automatically creating new adaptive data files, so a great deal of pre- and post-processing can be saved. Its validity and effectiveness have been confirmed by a numerical example.
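The interpolation-residue estimate that drives the H-R adaptivity above can be illustrated with a 1D analogue (function names invented; the paper works with boundary elements, not intervals): on each element, compare the solution at the midpoint with the linear interpolant of the endpoint values, and flag elements where the discrepancy is large.

```python
# 1D sketch of an interpolation-residue error estimate.

def interpolation_residue(u, a, b):
    """|u(m) - linear interpolant at m|, with m the element midpoint."""
    m = 0.5 * (a + b)
    return abs(u(m) - 0.5 * (u(a) + u(b)))

def flag_elements(u, elements, tol):
    """Return the elements whose residue estimate exceeds tol."""
    return [e for e in elements if interpolation_residue(u, *e) > tol]
```

The residue vanishes exactly for linear data, so it measures only the part of the solution the element-wise interpolant cannot represent, which is what refinement should target.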
Oskay Kaya; Engin Olcucuoglu; Gaye Seker; Hakan Kulacoglu
2012-01-01
We present a case of immediate abdominal wall reconstruction with biologic mesh following the resection of a locally advanced colonic cancer. The tumor in the right colon did not respond to neoadjuvant chemotherapy. Surgical en bloc excision, including excision of the invasion into the abdominal wall, was achieved, and the defect was reconstructed with porcine dermal collagen mesh. The patient was discharged with no complications, and adaptation of the mesh was excellent at the six-month follow-up.
Tangle-Free Mesh Motion for Ablation Simulations
Droba, Justin
2016-01-01
Problems involving mesh motion (which should not be confused with moving mesh methods, a class of adaptive mesh redistribution techniques) are of critical importance in numerical simulations of the thermal response of melting and ablative materials. Ablation is the process by which material vaporizes or otherwise erodes due to strong heating. Accurate modeling of such materials is of the utmost importance in the design of passive thermal protection systems ("heatshields") for spacecraft, the layer of the vehicle that ensures survival of crew and craft during re-entry. In an explicit mesh motion approach, a complete thermal solve is performed first. Afterwards, the thermal response is used to determine surface recession rates, and these values are then used to generate boundary conditions for an a posteriori correction that updates the locations of the mesh nodes. Most often, linear elastic or biharmonic equations are used to model this material response, traditionally in a finite element framework so that complex geometries can be simulated. A simple scheme for moving the boundary nodes is to recede along the surface normals. However, for all but the simplest problem geometries, evolution in time under such a scheme will eventually bring the mesh to intersect and "tangle" with itself, causing failure. This presentation demonstrates a comprehensive scheme that analyzes the local geometry of each node, with help from user-provided clues, to eliminate the tangle and enable simulations on a wide class of difficult problem geometries. The method is demonstrated for linear elastic equations but is general enough to be adapted to other modeling equations. The presentation will explicate the inner workings of the tangle-free mesh motion algorithm for both two- and three-dimensional meshes. It will show abstract examples of the method's success, including a verification problem that demonstrates its accuracy and
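The naive recede-along-the-normal scheme that the abstract contrasts against can be sketched for a 2D polygon boundary (names invented; the presentation's tangle-resolution logic is far more elaborate, and this sketch will itself tangle at concave corners if stepped too far):

```python
import math

# Naive boundary motion: move each surface node along its outward vertex
# normal by rate*dt, negated so the surface recedes into the material.

def node_normals(poly):
    """Approximate outward vertex normals of a closed CCW 2D polygon."""
    n = len(poly)
    normals = []
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i - 1], poly[(i + 1) % n]
        tx, ty = x1 - x0, y1 - y0          # chord through the two neighbours
        length = math.hypot(tx, ty) or 1.0
        normals.append((ty / length, -tx / length))
    return normals

def recede(poly, rate, dt):
    """One explicit step: every node recedes a distance rate*dt inward."""
    return [(x - rate * dt * nx, y - rate * dt * ny)
            for (x, y), (nx, ny) in zip(poly, node_normals(poly))]
```

Repeated application shrinks a convex shape cleanly, but on non-convex geometries opposing fronts eventually cross, which is exactly the tangling failure the presented algorithm is designed to eliminate.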
The refined topological vertex
We define a refined topological vertex which depends in addition on a parameter, which physically corresponds to extending the self-dual graviphoton field strength to a more general configuration. Using this refined topological vertex we compute, using geometric engineering, a two-parameter (equivariant) instanton expansion of gauge theories which reproduce the results of Nekrasov. The refined vertex is also expected to be related to Khovanov knot invariants.
Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling
In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry), or internal boundaries (regions), of the domain. The objective is to obtain a mesh that has both a uniform interpolation error in any direction and element shapes of good quality. This is accomplished with the use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality, which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)
Solution of the incompressible Navier-Stokes equations on unstructured meshes.
Charlesworth, D. J.
2004-01-01
Since Patankar first developed the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, the incompressible Navier-Stokes equations have been solved using a variety of pressure-based methods. Over the last twenty years these methods have been refined and developed; however, the majority of this work has been based on the use of structured grids to mesh the fluid domain of interest. Unstructured grids offer considerable advantages over structured meshes when a...
Ye, Dexin; Lu, Ling; Joannopoulos, John D; Soljačić, Marin; Ran, Lixin
2016-03-01
A solid material possessing identical electromagnetic properties as air has yet to be found in nature. Such a medium of arbitrary shape would neither reflect nor refract light at any angle of incidence in free space. Here, we introduce nonscattering corrugated metallic wires to construct such a medium. This was accomplished by aligning the dark-state frequencies in multiple scattering channels of a single wire. Analytical solutions, full-wave simulations, and microwave measurement results on 3D printed samples show omnidirectional invisibility in any configuration. This invisible metallic mesh can improve mechanical stability, electrical conduction, and heat dissipation of a system, without disturbing the electromagnetic design. Our approach is simple, robust, and scalable to higher frequencies. PMID:26884208
This article focuses on recent developments in the US refining industry and presents a model for improving the performance of refineries based on the analysis of the refining industry by Cap Gemini Ernst and Young. The identification of refineries at risk of failing, the construction of pipelines for refinery products from Gulf State refineries, mergers and acquisitions, and poor financial performance are discussed. Current challenges concerning the stagnant demand for refinery products, environmental regulations, and shareholder value are highlighted. The structure of the industry, the creation of value in refining, and the search for business models are examined. The top 25 US companies and US refining business groups are listed.
Adaptive Parametrization of Multivariate B-splines for Image Registration
Hansen, Michael Sass; Glocker, Benjamin; Navab, Nassir;
2008-01-01
We present an adaptive parametrization scheme for dynamic mesh refinement in the application of parametric image registration. The scheme is based on a refinement measure ensuring that the control points give an efficient representation of the warp fields, in terms of minimizing the registration … cost function. In the current work we introduce multivariate B-splines as a novel alternative to the widely used tensor B-splines, enabling us to make efficient use of the derived measure. The multivariate B-splines of order n are C^(n-1) smooth and are based on Delaunay configurations of arbitrary 2D or 3… reside on a regular grid. In contrast, by efficient non-constrained placement of the knots, the multivariate B-splines are shown to give a good representation of inhomogeneous objects in natural settings. The wide applicability of the method is illustrated through its application on medical data and…
Aranha: a 2D mesh generator for triangular finite elements
A method for generating unstructured meshes for linear and quadratic triangular finite elements is described in this paper. Some topics on the C language data structure used in the development of the program Aranha are also presented. The applicability for adaptive remeshing is shown and finally several examples are included to illustrate the performance of the method in irregular connected planar domains. (author)
Arniza Ghazali; Nurul Hasanah Kamaludin; Mohd Ridzuan Hafiz Mohd Zukeri; Wan Rosli Wan Daud; Rushdan Ibrahim
2012-01-01
Desired pulp-based product properties can be achieved by adding filler to the pulp network. In exploring this, the fines co-generated when refining alkaline-peroxide-treated oil palm empty fruit bunches (EFB) were collected according to their passage and retention when subjected to stainless-steel square mesh wires of varying mesh sizes. Pulp networks incorporating fines produced from the synergy of low alkaline peroxide (AP) and low-energy refining show that blending 12% of ...
Goal-Oriented Self-Adaptive hp Finite Element Simulation of 3D DC Borehole Resistivity Simulations
Calo, Victor M.
2011-05-14
In this paper we present a goal-oriented self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh, by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including automatic hp goal-oriented adaptivity and the solver work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with shared computational domain, as well as its efficiency measurements. We apply the methodology described to the three-dimensional simulation of the borehole resistivity measurement of direct current through casing in the presence of invasion.
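The adaptive loop described in this abstract is driven by error indicators that select which elements to refine next. As an illustrative sketch only (not the authors' algorithm, which additionally chooses between h-, p- and hp-refinement per element), a common bulk-marking step can be written as:

```python
# Illustrative Dorfler-style bulk marking: refine the smallest set of
# elements that carries a given fraction of the total squared error.
def mark_elements(errors, fraction=0.3):
    order = sorted(range(len(errors)), key=lambda i: -errors[i])
    total = sum(e * e for e in errors)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += errors[i] ** 2
        if acc >= fraction * total:
            break
    return sorted(marked)

# Element 1 dominates the error, so it alone is marked.
print(mark_elements([0.1, 0.9, 0.2, 0.05]))  # → [1]
```

Raising `fraction` marks more elements per pass, trading fewer adaptive iterations against larger meshes per iteration.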
21 CFR 878.3300 - Surgical mesh.
2010-04-01
... GENERAL AND PLASTIC SURGERY DEVICES Prosthetic Devices § 878.3300 Surgical mesh. (a) Identification... acetabular and cement restrictor mesh used during orthopedic surgery. (b) Classification. Class II....
Unterweger, K.
2015-01-01
© Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D (Full Shallow Water Overland Flows) is our software of choice here, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex "real-world" scenarios.
Tetrahedral mesh for needle insertion
Syvertsen, Rolf Anders
2007-01-01
This Master’s thesis describes how to create a tetrahedral mesh for use in a needle insertion simulator. It also describes how the simulator itself can be built, and how to improve it to make it as realistic as possible. The medical simulator uses a haptic device, a haptic scene graph and a FEM for realistic soft tissue deformation and interaction. In this project a tetrahedral mesh is created from a polygon model, and then the mesh has been loaded into the HaptX haptic scene graph. The object...
Nanowire mesh solar fuels generator
Yang, Peidong; Chan, Candace; Sun, Jianwei; Liu, Bin
2016-05-24
This disclosure provides systems, methods, and apparatus related to a nanowire mesh solar fuels generator. In one aspect, a nanowire mesh solar fuels generator includes (1) a photoanode configured to perform water oxidation and (2) a photocathode configured to perform water reduction. The photocathode is in electrical contact with the photoanode. The photoanode may include a high surface area network of photoanode nanowires. The photocathode may include a high surface area network of photocathode nanowires. In some embodiments, the nanowire mesh solar fuels generator may include an ion conductive polymer infiltrating the photoanode and the photocathode in the region where the photocathode is in electrical contact with the photoanode.
An unstructured-mesh atmospheric model for nonhydrostatic dynamics: Towards optimal mesh resolution
Szmelter, Joanna; Zhang, Zhao; Smolarkiewicz, Piotr K.
2015-08-01
The paper advances the limited-area anelastic model (Smolarkiewicz et al. (2013) [45]) for investigation of nonhydrostatic dynamics in mesoscale atmospheric flows. New developments include the extension to a tetrahedral-based median-dual option for unstructured meshes and a static mesh adaptivity technique using an error indicator based on inherent properties of the Multidimensional Positive Definite Advection Transport Algorithm (MPDATA). The model employs semi-implicit nonoscillatory forward-in-time integrators for soundproof PDEs, built on MPDATA and a robust non-symmetric Krylov-subspace elliptic solver. Finite-volume spatial discretisation adopts an edge-based data structure. Simulations of stratified orographic flows and the associated gravity-wave phenomena in media with uniform and variable dispersive properties verify the advancement and demonstrate the potential of heterogeneous anisotropic discretisation with large variation in spatial resolution for study of complex stratified flows that can be computationally unattainable with regular grids.
OPTIMIZING EUCALYPTUS PULP REFINING
Vail Manfredi
2004-01-01
This paper discusses the refining of bleached eucalyptus kraft pulp (BEKP). Pilot plant tests were carried out to optimize the refining process and to identify the effects of refining variables on final paper quality and process costs. The following parameters are discussed: pulp consistency, disk pattern design, refiner speed, energy input, refiner configuration (parallel or serial) and refining intensity. The effects of refining on pulp fibers were evaluated against the pulp quality properties, such as physical strengths, bulk, opacity and porosity, as well as the interactions with the papermaking process, such as paper machine runnability, paper breaks and refining control. The results showed that process optimization, considering pulp quality and refining costs, was obtained when eucalyptus pulp is refined under the lowest intensity and the highest pulp consistency possible. Changes in the operational refining conditions will have the highest impact on total energy requirements (costs) without any significant effect on final paper properties. It was also observed that classical ways to control the industrial operation, such as those based on drainage measurements, do not represent the best alternative to maximize either the final paper properties or the paper machine runnability.
Challenges for Japanese refining
This article examines the importance of Japan in the Asian petroleum market and traces Japan's economic growth since the 1960s, the impact of the oil price shocks, and Japanese energy and oil demand. Overviews of Japan's refining industry and oil trade are presented with details given of refining capacity and major refiners, and growing environmental awareness and environmental programmes are considered. Plots of Japanese petroleum product demand (1985-2000) and the average sizes and number of refineries (1980-2000) are shown.
Checking Model Transformation Refinement
Büttner, Fabian; Egea, Marina; Guerra, Esther; Lara, Juan De
2013-01-01
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38883-5_15 Proceedings of 6th International Conference, ICMT 2013, Budapest, Hungary, June 18-19, 2013 Refinement is a central notion in computer science, meaning that some artefact S can be safely replaced by a refinement R, which preserves S’s properties. Having available techniques and tools to check transformation refinement would enable (a) the reasoning on whether a transformation correctly impl...
Toward Interoperable Mesh, Geometry and Field Components for PDE Simulation Development
Chand, K K; Diachin, L F; Li, X; Ollivier-Gooch, C; Seol, E S; Shephard, M; Tautges, T; Trease, H
2005-07-11
Mesh-based PDE simulation codes are becoming increasingly sophisticated and rely on advanced meshing and discretization tools. Unfortunately, it is still difficult to interchange or interoperate tools developed by different communities to experiment with various technologies or to develop new capabilities. To address these difficulties, we have developed component interfaces designed to support the information flow of mesh-based PDE simulations. We describe this information flow and discuss typical roles and services provided by the geometry, mesh, and field components of the simulation. Based on this delineation for the roles of each component, we give a high-level description of the abstract data model and set of interfaces developed by the Department of Energy's Interoperable Tools for Advanced Petascale Simulation (ITAPS) center. These common interfaces are critical to our interoperability goal, and we give examples of several services based upon these interfaces including mesh adaptation and mesh improvement.
Mersiline mesh in premaxillary augmentation.
Foda, Hossam M T
2005-01-01
Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in which Mersiline mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersiline mesh were used: the Mersiline roll technique, for the cases with central symmetric deficiencies, and the Mersiline packing technique, for the cases with asymmetric deficiencies. Premaxillary augmentation with Mersiline mesh proved to be simple technically, easy to perform, and not associated with any complications. Periodic follow-up evaluation for a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained with no clinically detectable resorption of the mesh implant. PMID:15959688
Parallel Programming Strategies for Irregular Adaptive Applications
Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance for such computations. In this work, we examine two typical irregular adaptive applications, Dynamic Remeshing and N-Body, under competing programming methodologies and across various parallel architectures. The Dynamic Remeshing application simulates flow over an airfoil, and refines localized regions of the underlying unstructured mesh. The N-Body experiment models two neighboring Plummer galaxies that are about to undergo a merger. Both problems demonstrate dramatic changes in processor workloads and interprocessor communication with time; thus, dynamic load balancing is a required component.
GENERATION OF IRREGULAR HEXAGONAL MESHES
Vlasov Aleksandr Nikolaevich
2012-07-01
Decomposition is performed in a constructive way and, as an option, it involves meshless representation. Further, this mapping method is used to generate the calculation mesh. In this paper, the authors analyze different cases of mapping onto simply connected and bi-connected canonical domains. They present forward and backward mapping techniques. The potential application of these techniques to the generation of nonuniform meshes within the framework of the asymptotic homogenization theory, to assess and project effective characteristics of heterogeneous materials (composites), is also demonstrated.
Improved AFEM algorithm for bioluminescence tomography based on dual-mesh alternation strategy
Wei Li; Heng Zhao; Xiaochao Qu; Yanbin Hou; Xueli Chen; Duofang Chen; Xiaowei He; Qitan Zhang; Jimin Liang
2012-01-01
Adaptive finite element method (AFEM) is broadly adopted to recover the internal source in biological tissues. In this letter, a novel dual-mesh alternation strategy (dual-mesh AFEM) is developed for bioluminescence tomography. By comprehensively considering the error estimation of the finite element method solution on each mesh, two different adaptive strategies, based on the error indicators of the reconstructed source and of the photon flux density, are used alternately in the process. Combined with the constantly adjusted permissible region in the adaptive process, the new algorithm can achieve a more accurate source location compared with the AFEM in previous experiments.
Wohlwill, Emil
2008-01-01
At the request of the editor of ELECTROCHEMICAL INDUSTRY, I herewith give some notes on the electrolytic method of gold refining, to supplement the article of Dr. Tuttle (Vol. I, page 157, January, 1903).
Image-driven mesh optimization
Lindstrom, P; Turk, G
2001-01-05
We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method--the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.
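The edge-swap operation mentioned in this abstract can be sketched in isolation. The helper below is a hypothetical illustration of the connectivity change only; the paper's method additionally evaluates an image-space difference metric before deciding whether to keep a swap:

```python
# Sketch of an edge swap between two triangles that share an edge:
# the shared diagonal is replaced by the diagonal joining the two
# opposite (apex) vertices. Vertex indices only; no geometry checks.
def edge_swap(tri_a, tri_b):
    shared = [v for v in tri_a if v in tri_b]
    assert len(shared) == 2, "triangles must share an edge"
    apex_a = next(v for v in tri_a if v not in shared)
    apex_b = next(v for v in tri_b if v not in shared)
    # The new diagonal joins the two apex vertices.
    return (apex_a, apex_b, shared[0]), (apex_a, apex_b, shared[1])

print(edge_swap((0, 1, 2), (1, 2, 3)))  # → ((0, 3, 1), (0, 3, 2))
```

A real implementation would also reject swaps that invert triangle orientation or degrade the rendered-image metric.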
Kim, D.; Ghanem, R. [State Univ. of New York, Buffalo, NY (United States)
1994-12-31
Multigrid solution technique to solve a material nonlinear problem in a visual programming environment using the finite element method is discussed. The nonlinear equation of equilibrium is linearized to incremental form using the Newton-Raphson technique; a multigrid solution technique is then used to solve the linear equations at each Newton-Raphson step. In the process, adaptive mesh refinement, which is based on the bisection of a pair of triangles, is used to form the grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables more intuitive understanding of the solution process and more effective use of resources.
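The outer Newton-Raphson linearization this abstract describes can be sketched minimally as follows. This is not the authors' code: the inner multigrid cycle is replaced here by a naive dense direct solve, purely to show where the linear solver plugs in at each Newton step.

```python
def solve_linear(A, b):
    # Stand-in for the inner multigrid solve: Gaussian elimination with
    # partial pivoting on a small dense system (illustrative only).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    # Outer Newton-Raphson loop: linearize, solve J dx = -r, update.
    x = list(x0)
    for _ in range(max_iter):
        r = residual(x)
        if max(abs(v) for v in r) < tol:
            break
        dx = solve_linear(jacobian(x), [-v for v in r])
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Toy system: x^2 = 2, y^2 = 3, started from (1, 1).
root = newton(lambda v: [v[0] ** 2 - 2.0, v[1] ** 2 - 3.0],
              lambda v: [[2.0 * v[0], 0.0], [0.0, 2.0 * v[1]]],
              [1.0, 1.0])
print(root)  # converges to [sqrt(2), sqrt(3)]
```

In the paper's setting the residual and Jacobian come from the incremental finite element equilibrium equations, and `solve_linear` would be a multigrid V-cycle over the bisection-refined grid hierarchy.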
Boiten, Eerke Albert
2016-01-01
"Big data" has become a major area of research and associated funding, as well as a focus of utopian thinking. In the still growing research community, one of the favourite optimistic analogies for data processing is that of the oil refinery, extracting the essence out of the raw data. Pessimists look for their imagery to the other end of the petrol cycle, and talk about the "data exhausts" of our society. Obviously, the refinement community knows how to do "refining". This paper explores...
Capelli, Silvia C; Hans-Beat Bürgi; Birger Dittrich; Simon Grabowsky; Dylan Jayatilaka
2014-01-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements,...
Vemaganti, Gururaja R.; Wieting, Allan R.
1990-01-01
A higher-order streamline upwinding Petrov-Galerkin finite element method is employed for high speed viscous flow analysis using structured and unstructured meshes. For a Mach 8.03 shock interference problem, successive mesh adaptation was performed using an adaptive remeshing method. Results from the finite element algorithm compare well with both experimental data and results from an upwind cell-centered method. Finite element results for a Mach 14.1 flow over a 24 degree compression corner compare well with experimental data and two other numerical algorithms for both structured and unstructured meshes.
Phase-field modelling of rapid solidification in alloy systems: Spontaneous grain refinement effects
Mullis, A. M.
2012-07-01
Phase-field modelling of rapid alloy solidification, in which the rejection of latent heat from the growing solid cannot be ignored, has lagged significantly behind the modelling of conventional casting practices which can be approximated as isothermal. This is in large part due to the fact that if realistic materials properties are adopted the ratio of the thermal to solute diffusivity (the Lewis number) is typically 10^3–10^4, leading to severe multi-scale problems. However, use of state-of-the-art numerical techniques such as local mesh adaptivity, implicit time-stepping and a non-linear multi-grid solver allows these difficulties to be overcome. Here we describe how the application of this model, formulated in the thin-interface limit, can help to explain the long-standing phenomenon of spontaneous grain refinement in deeply undercooled melts. We find that at intermediate undercoolings the operating point parameter, σ*, may collapse to zero, resulting in the growth of non-dendritic morphologies such as doublons and 'dendritic seaweed'. Further increases in undercooling then lead to the re-establishment of stable dendritic growth. We postulate that remelting of such seaweed structures gives rise to the low undercooling instance of grain refinement observed in alloys.
The Origins of Spontaneous Grain Refinement in Deeply Undercooled Metallic Melts
Andrew M. Mullis
2014-05-01
Phase-field modeling of rapid alloy solidification, in which the rejection of latent heat from the growing solid cannot be ignored, has lagged significantly behind the modeling of conventional casting practices which can be approximated as isothermal. This is in large part due to the fact that if realistic materials properties are adopted, the ratio of the thermal to solute diffusivity (the Lewis number) is typically 10^3–10^4, leading to severe multi-scale problems. However, use of state-of-the-art numerical techniques, such as local mesh adaptivity, implicit time-stepping and a non-linear multi-grid solver, allows these difficulties to be overcome. Here we describe how the application of such a model, formulated in the thin-interface limit, can help to explain the long-standing phenomenon of spontaneous grain refinement in deeply undercooled melts. We find that at intermediate undercoolings the operating point parameter, σ*, may collapse to zero, resulting in the growth of non-dendritic morphologies such as doublons and 'dendritic seaweed'. Further increases in undercooling then lead to the re-establishment of stable dendritic growth. We postulate that remelting of such seaweed structures gives rise to the low undercooling instance of grain refinement observed in alloys.
Method and system for mesh network embedded devices
Wang, Ray (Inventor)
2009-01-01
A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.
Optimal Cache-Oblivious Mesh Layouts
Bender, Michael A.; Kuszmaul, Bradley C.; Teng, Shang-Hua; Wang, Kebin
2007-01-01
A mesh is a graph that divides physical space into regularly-shaped regions. Mesh computations form the basis of many applications, e.g. finite-element methods, image rendering, and collision detection. In one important mesh primitive, called a mesh update, each mesh vertex stores a value and repeatedly updates this value based on the values stored in all neighboring vertices. The performance of a mesh update depends on the layout of the mesh in memory. This paper shows how to find a memory...
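The mesh-update primitive defined in this abstract is easy to sketch: every vertex repeatedly recomputes its value from its neighbours' values. A toy Jacobi-style version is below (illustrative only; the paper's contribution is the cache-oblivious memory layout, not this kernel):

```python
# Mesh update over an adjacency list: each vertex's new value is the
# average of its neighbours' current values (one full sweep per pass).
def mesh_update(adjacency, values, sweeps=1):
    vals = list(values)
    for _ in range(sweeps):
        vals = [
            sum(vals[j] for j in nbrs) / len(nbrs) if nbrs else vals[i]
            for i, nbrs in enumerate(adjacency)
        ]
    return vals

# 4-vertex path mesh: 0 - 1 - 2 - 3
adj = [[1], [0, 2], [1, 3], [2]]
print(mesh_update(adj, [0.0, 0.0, 0.0, 12.0]))  # → [0.0, 0.0, 6.0, 0.0]
```

Each sweep touches every vertex's neighbours, which is exactly why the memory layout of the vertex array dominates cache performance for large meshes.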
User Manual for the PROTEUS Mesh Tools
Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-06-01
This report describes the various mesh tools that are provided with the PROTEUS code giving both descriptions of the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
OAK-B135. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Tetrahedral meshing via maximal Poisson-disk sampling
Guo, Jianwei
2016-02-15
In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
Hybrid space–angle adaptivity for whole-core particle transport calculations
Highlights: • A hybrid space–angle refinement for whole-core transport calculations is developed. • Our method is implemented for response matrix and collision probability methods. • The adaptive method leads to substantial reduction of the response matrix dimensions. • A new approach for coupling surface angular flux is introduced. • The results show the ability of the algorithm to obtain a desired accuracy. - Abstract: Adaptive refinement is a powerful method for efficiently solving physical problems. In this paper we present a new coupled space–angle adaptive algorithm for neutron transport calculations. The scheme is specifically employed for the solution of the integral form of the transport equation based on the collision probability–response matrix method. The adaptive algorithm starts by first applying angular adaptivity and then projecting the solution onto the spatially refined mesh. An a posteriori error estimate is derived by utilizing the flux gradient approach based on the net current leakage of nodes. A new approach is used to enforce continuity of flux at the interface between nodes by raising the order of the spherical harmonics expansion of the entrance response matrix to the order of the spherical harmonics expansion of the outgoing angular flux at the neighboring node. Using an integral transport method within the node and refined space and angle variables, a new method for whole-core transport calculations is introduced. The validity of the developed adaptive strategy is assessed by a series of numerical experiments. Comparisons indicate that the space–angle adaptivity framework is capable of producing acceptable solutions with a smaller number of degrees of freedom (DOFs).
Grid refinement for entropic lattice Boltzmann models
Dorschner, B; Chikatamarla, S S; Karlin, I V
2016-01-01
We propose a novel multi-domain grid refinement technique with extensions to entropic incompressible, thermal and compressible lattice Boltzmann models. Its validity and accuracy are assessed by comparison to available direct numerical simulation and experiment for the simulation of isothermal, thermal and viscous supersonic flow. In particular, we investigate the advantages of grid refinement for the set-ups of turbulent channel flow, flow past a sphere, Rayleigh-Benard convection as well as the supersonic flow around an airfoil. Special attention is paid to analyzing the adaptive features of entropic lattice Boltzmann models for multi-grid simulations.
Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert
2015-11-15
The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.
Robust and efficient overset grid assembly for partitioned unstructured meshes
Roget, Beatrice, E-mail: broget@uwyo.edu; Sitaraman, Jayanarayanan, E-mail: jsitaram@uwyo.edu
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning. Another challenge arises because of the large variation in the type of mesh-block overlap and the resulting large load imbalance on multiple processors. Desirable traits for the grid assembly method are efficiency (requiring only a small fraction of the solver time), robustness (correct identification of all point types), and full automation (no user input required other than the mesh system). Additionally, the method should be scalable, which is an important challenge due to the inherent load imbalance. This paper describes a fully-automated grid assembly method, which can use two different donor search algorithms. One is based on the use of auxiliary grids and Exact Inverse Maps (EIM), and the other is based on the use of Alternating Digital Trees (ADT). The EIM method is demonstrated to be more efficient than the ADT method, while retaining robustness. An adaptive load re-balance algorithm is also designed and implemented, which considerably improves the scalability of the method.
Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis
Mavriplis, Dimitri J.; Pirzadeh, S.
1999-01-01
A complete "geometry to drag-polar" analysis capability for three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for the complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a workstation, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.
Evaluation of refinement calculi
The present report aims at making a first evaluation of refinement calculi having some potential for the development of distributed systems. Refinement constitutes an integral part of formal software development, with the aim of providing a framework within which an executable software system can be constructed from a high level specification by going through a number of provably correct development steps. Many formal methods have their own refinement calculi, represented by sets of rules and pragmatic guidelines for relating pairs of specifications in a way that captures the essential idea of formal software development: to systematically produce a program that satisfies its specification. Based on a survey of a large number of relevant refinement calculi, seven selected methods were evaluated with respect to identified criteria. The results from the evaluation can be utilized in any software development project where a selection of refinement calculi is required. The evaluation in the present report complements those provided in other research projects at the OECD Halden Reactor Project, in particular INT-FS, EVAL-FS, and VV-FT.
Connectivity editing for quadrilateral meshes
Peng, Chihan
2011-12-12
We propose new connectivity editing operations for quadrilateral meshes with the unique ability to explicitly control the location, orientation, type, and number of the irregular vertices (valence not equal to four) in the mesh while preserving sharp edges. We provide theoretical analysis on what editing operations are possible and impossible and introduce three fundamental operations to move and re-orient a pair of irregular vertices. We argue that our editing operations are fundamental, because they only change the quad mesh in the smallest possible region and involve the fewest irregular vertices (i.e., two). The irregular vertex movement operations are supplemented by operations for the splitting, merging, canceling, and aligning of irregular vertices. We explain how the proposed high-level operations are realized through graph-level editing operations such as quad collapses, edge flips, and edge splits. The utility of these mesh editing operations is demonstrated by improving the connectivity of quad meshes generated from state-of-the-art quadrangulation techniques.
Development of an adaptive hp-version finite element method for computational optimal control
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
Constancio, Silva
2006-07-01
In 2004, refining margins showed a clear improvement that persisted throughout the first three quarters of 2005. This enabled oil companies to post significantly higher earnings for their refining activity in 2004 compared to 2003, with the results of the first half of 2005 confirming this trend. As for petrochemicals, despite a steady rise in the naphtha price, higher cash margins enabled a turnaround in 2004 as well as a clear improvement in oil company financial performance that should continue in 2005, judging by the net income figures reported for the first half-year. Despite this favorable business environment, capital expenditure in refining and petrochemicals remained at a low level, especially investment in new capacity, but a number of projects are being planned for the next five years. (author)
Secure Routing in Wireless Mesh Networks
Sen, Jaydip
2011-01-01
Wireless mesh networks (WMNs) have emerged as a promising concept to meet the challenges in next-generation networks such as providing flexible, adaptive, and reconfigurable architecture while offering cost-effective solutions to the service providers. Unlike traditional Wi-Fi networks, with each access point (AP) connected to the wired network, in WMNs only a subset of the APs are required to be connected to the wired network. The APs that are connected to the wired network are called the Internet gateways (IGWs), while the APs that do not have wired connections are called the mesh routers (MRs). The MRs are connected to the IGWs using multi-hop communication. The IGWs provide access to conventional clients and interconnect ad hoc, sensor, cellular, and other networks to the Internet. However, most of the existing routing protocols for WMNs are extensions of protocols originally designed for mobile ad hoc networks (MANETs) and thus they perform sub-optimally. Moreover, most routing protocols for WMNs are des...
Investment rallied in 2007, and many distillation and conversion projects likely to reach the industrial stage were announced. With economic growth sustained in 2006 and still pronounced in 2007, oil demand remained strong - especially in emerging countries - and refining margins stayed high. Despite these favorable business conditions, tensions persisted in the refining sector, which has fallen far behind in terms of investment in refinery capacity. It will take renewed efforts over a long period to catch up. In view of recent events that have affected the economy in many countries (e.g. the sub-prime crisis), prudence remains advisable.
Efficient Packet Forwarding in Mesh Network
Kanrar, Soumen
2012-01-01
Wireless Mesh Network (WMN) is a multi-hop, low-cost, easily maintained and robust network providing reliable service coverage. WMNs consist of mesh routers and mesh clients. In this architecture, while static mesh routers form the wireless backbone, mesh clients access the network through mesh routers as well as by directly meshing with each other. Different from traditional wireless networks, a WMN is dynamically self-organized and self-configured. In other words, the nodes in the mesh network automatically establish and maintain network connectivity. Over the years, researchers have worked to reduce the redundancy of broadcast packets in wireless mesh networks; to provide reliable service coverage, the source node has to broadcast or flood the control packets. Redundant control packets consume the bandwidth of the wireless medium, significantly reduce the average throughput and consequently reduce the overall system performance. In this paper I study the optimization problem in...
Dancette, S.; Browet, A.; Martin, G.; Willemet, M.; Delannay, L.
2016-06-01
A new procedure for microstructure-based finite element modeling of polycrystalline aggregates is presented. The proposed method relies (i) on an efficient graph-based community detection algorithm for crystallographic data segmentation and feature contour extraction and (ii) on the generation of selectively refined meshes conforming to grain boundaries. It constitutes a versatile and close to automatic environment for meshing complex microstructures. The procedure is illustrated with polycrystal microstructures characterized by orientation imaging microscopy. Hot deformation of a Duplex stainless steel is investigated based on ex-situ EBSD measurements performed on the same region of interest before and after deformation. A finite element mesh representing the initial microstructure is generated and then used in a crystal plasticity simulation of the plane strain compression. Simulation results and experiments are in relatively good agreement, confirming a large potential for such directly coupled experimental and modeling analyses, which is facilitated by the present image-based meshing procedure.
Multiple scale mesh free analysis
Recent developments of mesh-free and multi-scale methods and their applications in applied mechanics are surveyed. Three major methodologies are reviewed. First, smoothed particle hydrodynamics (SPH) is discussed as a representative of a non-local kernel, strong-form collocation approach. Second, mesh-free Galerkin methods, which have been an active research area in recent years, are reviewed. Third, some applications of molecular dynamics (MD) in applied mechanics are discussed. The emphases of this survey are placed on simulations of finite deformations, fracture, shear bands, multi-scale methods, and nano-scale mechanics.
Dyja, Robert; van der Zee, Kristoffer G
2016-01-01
We present an adaptive methodology for the solution of (linear and) non-linear time dependent problems that is especially tailored for massively parallel computations. The basic concept is to solve for large blocks of space-time unknowns instead of marching sequentially in time. The methodology is a combination of a computationally efficient implementation of a parallel-in-space-time finite element solver coupled with a posteriori space-time error estimates and a parallel mesh generator. This methodology enables, in principle, simultaneous adaptivity in both space and time (within the block) domains. We explore this basic concept in the context of a variety of time-steppers including $\Theta$-schemes and Backward Differentiation Formulas. We specifically illustrate this framework with applications involving time dependent linear, quasi-linear and semi-linear diffusion equations. We focus on investigating how the coupled space-time refinement indicators for this class of problems affect spatial adaptivity. Final...
The major uncertainty characterizing the global energy landscape impacts particularly on transport, which remains the virtually-exclusive bastion of the oil industry. The industry must therefore respond to increasing demand for mobility against a background marked by the emergence of alternatives to oil-based fuels and the need to reduce emissions of pollutants and greenhouse gases (GHG). It is in this context that the 'Refining 2030' study conducted by IFP Energies Nouvelles (IFPEN) forecasts what the global supply and demand balance for oil products could be, and highlights the type and geographical location of the refinery investment required. Our study shows that the bulk of the refining investment will be concentrated in the emerging countries (mainly those in Asia), whilst the areas historically strong in refining (Europe and North America) face reductions in capacity. In this context, the drastic reduction in the sulphur specification of bunker oil emerges as a structural issue for European refining, in the same way as increasingly restrictive regulation of refinery CO2 emissions (quotas/taxation) and the persistent imbalance between gasoline and diesel fuels. (authors)
Copur, Yalcin
This study compares the modified kraft process, polysulfide pulping, one of the methods to obtain higher pulp yield, with the conventional kraft method. More specifically, the study focuses on the refining behavior of polysulfide pulp, an area with limited literature. Physical, mechanical and chemical properties of kraft and polysulfide pulps (4% elemental sulfur addition to the cooking digester) cooked under the same conditions were studied with regard to their behavior under various levels of PFI refining (0, 3000, 6000, 9000 revs.). Polysulfide (PS) pulping, compared to the kraft method, resulted in higher pulp yield and higher pulp kappa number. Polysulfide also gave pulp with higher tensile and burst indices. However, the strength of polysulfide pulp, tear index at a constant tensile index, was found to be 15% lower than that of the kraft pulp. Refining studies showed that the moisture holding ability of chemical pulps mostly depends on the chemical nature of the pulp. Refining effects such as fibrillation and fine content did not have a significant effect on the hygroscopic behavior of chemical pulp.
An adaptive finite element approach for neutron transport equation
Highlights: → Using a uniform grid gives high local residual errors. → Element refinement in regions where the flux gradient is large improves the accuracy of results. → It is not necessary to use a high density of elements throughout the problem domain. → The method provides great geometrical flexibility. → Implementation of different densities of elements lowers computational cost. - Abstract: In this paper, we develop an adaptive element refinement strategy that progressively refines the elements in appropriate regions of the domain to solve the even-parity Boltzmann transport equation. An a posteriori error approach is used to check the approximate solutions for various element sizes. The local balance of neutrons in elements is utilized as an error assessment. To implement the adaptive approach, a new neutron transport code, FEMPT (finite element modeling of particle transport), for arbitrary geometry has been developed. This code is based on even-parity spherical harmonics and the finite element method. A variational formulation is implemented for the even-parity neutron transport equation for the general case of anisotropic scattering and sources. High-order spherical harmonics expansions in angle and the finite element method in space are used as trial functions. This code can be used to solve the multi-group neutron transport equation in highly complex X-Y geometries with arbitrary boundary conditions. Due to the powerful element generation tools of FEMPT, the description of complicated 2D geometries becomes quite convenient. The numerical results show that the locally adaptive element refinement approach enhances the accuracy of the solution in comparison with the uniform meshing approach.
Fitting polynomial surfaces to triangular meshes with Voronoi Squared Distance Minimization
Nivoliers, Vincent
2011-12-01
This paper introduces Voronoi Squared Distance Minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function between the surface and the input mesh (SDM). This objective function is a generalization of Centroidal Voronoi Tessellation (CVT), and can be minimized by a quasi-Newton solver. VSDM naturally adapts the orientation of the mesh to best approximate the input, without estimating any differential quantities. Therefore it can be applied to triangle soups or surfaces with degenerate triangles, topological noise and sharp features. Applications of fitting quad meshes and polynomial surfaces to input triangular meshes are demonstrated.
Fitting polynomial surfaces to triangular meshes with Voronoi squared distance minimization
Nivoliers, Vincent
2012-11-06
This paper introduces Voronoi squared distance minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function between the surface and the input mesh (SDM). This objective function is a generalization of the one minimized by centroidal Voronoi tessellation, and can be minimized by a quasi-Newton solver. VSDM naturally adapts the orientation of the mesh elements to best approximate the input, without estimating any differential quantities. Therefore, it can be applied to triangle soups or surfaces with degenerate triangles, topological noise and sharp features. Applications of fitting quad meshes and polynomial surfaces to input triangular meshes are demonstrated.
A Numerical Study of Blowup in the Harmonic Map Heat Flow Using the MMPDE Moving Mesh Method
Haynes, R.D.; Huang, W.; Zegeling, P.A.
2013-01-01
The numerical solution of the harmonic heat map flow problems with blowup in finite or infinite time is considered using an adaptive moving mesh method. A properly chosen monitor function is derived so that the moving mesh method can be used to simulate blowup and produce accurate blowup profiles wh
Hu, Guanghui; Yi, Nianyu
2016-05-01
In this paper, we present an adaptive finite volume method for the steady Euler equations with a non-oscillatory k-exact reconstruction on unstructured meshes. The numerical framework includes a Newton method as an outer iteration to linearize the Euler equations, and a geometric multigrid method as an inner iteration to solve the derived linear system. A non-oscillatory k-exact reconstruction of the conservative solution in each element is proposed for the high-order and non-oscillatory behavior of the numerical solutions. The importance of handling curved boundaries in an appropriate way is also studied in the numerical experiments. The h-adaptive method is introduced to enhance the efficiency of the algorithm. The numerical tests show that quality solutions can be obtained smoothly with the proposed algorithm, i.e., the expected convergence order of the numerical solution under mesh refinement is reached, while a non-oscillatory shock structure is obtained. Furthermore, the mesh adaptive method with appropriate error indicators can effectively enhance the efficiency of the numerical method, while steady-state convergence and numerical accuracy are maintained.
Over recent years, the refining industry has had to grapple with a growing burden of environmental and safety regulations concerning not only its plants and other facilities, but also its end products. At the same time, it has had to bear the effects of the reduction of the special status that used to apply to petroleum, and the consequences of economic freedom, to which we should add, as specifically concerns the French market, the impact of energy policy and the pro-nuclear option. The result is a drop in heavy fuel oil from 36 million tonnes per year in 1973 to 6.3 million in 1992, and in home-heating fuel from 37 to 18 million tonnes per year. This fast-moving market is highly competitive. The French market in particular is wide open to imports, but the refining companies are still heavy exporters of those products with high added value, like lubricants, jet fuel, and lead-free gasolines. The competition has led the refining companies to commit themselves to quality, and to publicize their efforts in this direction. This is why the long-term perspectives for petroleum fuels are still wide open. This is supported by the expectation that the goal of economic efficiency is likely to soften the effects of the energy policy penalizing petroleum products, now that these have become competitive again. In the European context, with the challenge of environmental protection and the decline in heavy fuel outlets, French refining has to keep on improving the quality of its products and plants, which means major investments. The industry absolutely must return to a more normal level of profitability, in order to sustain this financial effort, and generate the prosperity of its high-performance plants and equipment.
For oil companies to invest in new refining and conversion capacity, favorable conditions over time are required. In other words, refining margins must remain high and demand sustained over a long period. That was the situation prevailing before the onset of the financial crisis in the second half of 2008. The economic conjuncture has taken a substantial turn for the worse since then and the forecasts for 2009 do not look bright. Oil demand is expected to decrease in the OECD countries and to grow much more slowly in the emerging countries. It is anticipated that refining margins will fall in 2009 - in 2008, they slipped significantly in the United States - as a result of increasingly sluggish demand, especially for light products. The next few months will probably be unfavorable to investment. In addition to a gloomy business outlook, there may also be a problem of access to sources of financing. As for investment projects, a mainstream trend has emerged in the last few years: a shift away from the regions that have historically been most active (the OECD countries) towards certain emerging countries, mostly in Asia or the Middle East. The new conjuncture will probably not change this trend.
Wireless mesh networked radios optimized for UGS applications
Calcutt, Wade; Williams, Jonathan; Jones, Barry
2010-04-01
Wireless mesh networked (WMN) radios have been applied to unattended ground sensor (UGS) applications for a number of years. However, adapting commercial off-the-shelf (COTS) WMN protocols and hardware for UGS applications has not yielded the desired performance because of compromises inherent in these existing radios. As a leading provider of UGS systems, McQ Inc. has been developing custom WMN protocols and radio hardware tailored specifically to the unique scenarios encountered in UGS deployments. This paper presents the McQ designs, the tradeoffs made in developing them, and test and performance results.
Towards automated crystallographic structure refinement with phenix.refine
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods.
describe the user's scheme. According to the mesh grid refinement options, GGTM introduces further coordinate values, which complete the input mesh grid. A loop over each cell is performed to determine the zone and the material to be attributed to the cell. The cell is ideally represented by its centre, and it is relatively simple to determine which material zone the cell belongs to. Material zones may have very complicated geometrical shapes in space thanks to the combinatorial geometry among volumes existing in GGTM. Moreover, the priority parameter associated with each material zone can easily resolve any overlapping situation among zones. Fixed neutron sources, if any, are adapted to the mesh refinement at the same time. As from version 5.0, GGTM can optionally calculate errors in volume values due to the staircase approximation of the geometry. GGTM considers a very refined uniform sub-grid for those single meshes cutting more than one material zone at zone interfaces, and works in the same way as previously described for the attribution of meshes to zones, applied to each single sub-mesh. This method lets users calculate exact material-zone volume values with great precision, independently of the geometry complexity, and lets GGTM automatically update material-zone densities to conserve mass. As for the plot programs DDM, DTM2 and DTM3, they do not perform any interpolation among cell values to draw contours when used as post-processors or to plot a fixed neutron source distribution; they simply attribute to each single mesh-grid cell the colour corresponding to the adopted value scale. This simple and fast method lets users faithfully reproduce transport results and overlay material, zone, body or mesh borders on the same plots without overcrowding them with too many lines. 3 - Restrictions on the complexity of the problem: Only a continuous space mesh grid can be generated by GGDM and GGTM and input to DDM, DTM2, DTM3, RVARSCL, COMPARE and MKSRC
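The cell-centre attribution with priority-based overlap resolution described above can be sketched as follows; the zone predicates, priorities, and names are invented for illustration and do not reflect GGTM's actual input format:

```python
# Sketch of GGTM-style cell-to-zone attribution: each cell is represented
# by its centre, tested against every material zone, and overlaps are
# resolved by the zone's priority parameter (highest priority wins).
def attribute_cells(cell_centres, zones):
    """zones: list of (name, priority, contains) where contains(x, y) -> bool.
    Returns the zone name attributed to each cell centre."""
    result = []
    for (x, y) in cell_centres:
        hits = [(prio, name) for name, prio, contains in zones if contains(x, y)]
        result.append(max(hits)[1] if hits else None)
    return result

# Two overlapping zones: a background that covers everything and a disc
# with higher priority that wins wherever both contain the cell centre.
zones = [
    ("background", 0, lambda x, y: True),
    ("disc",       1, lambda x, y: x * x + y * y <= 1.0),
]
labels = attribute_cells([(0.0, 0.0), (2.0, 0.0)], zones)
```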
Yan, Bo; Li, Yuguo; Liu, Ying
2016-07-01
In this paper, we present an adaptive finite element (FE) algorithm for direct current (DC) resistivity modeling in 2-D generally anisotropic conductivity structures. Our algorithm is implemented on an unstructured triangular mesh that readily accommodates complex structures such as topography and dipping layers. We implement a self-adaptive, goal-oriented grid refinement algorithm in which the finite element analysis is performed on a sequence of refined grids. The grid refinement process is guided by an a posteriori error estimator. The problem is formulated in terms of total potentials with mixed boundary conditions incorporated. This type of boundary condition is superior to Dirichlet conditions and, according to model calculations, improves numerical accuracy considerably. We have verified the adaptive finite element algorithm using a two-layered earth with azimuthal anisotropy. The FE algorithm with mixed boundary conditions achieves high accuracy. The relative error between the numerical and analytical solutions is less than 1% except in the vicinity of the current source location, where the relative error is up to 2.4%. A 2-D anisotropic model is used to demonstrate the effects of anisotropy upon the apparent resistivity in DC soundings.
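The adaptive refinement described above follows the usual solve-estimate-mark-refine cycle. A 1-D Python sketch under simplifying assumptions: sampling a function with a sharp internal layer stands in for the FE solve, and a midpoint interpolation-error proxy stands in for the paper's a posteriori estimator (everything here is illustrative):

```python
import math

def adapt(f, xs, n_cycles=5, frac=0.3):
    """Solve-estimate-mark-refine skeleton on a 1-D point set."""
    for _ in range(n_cycles):
        # estimate: midpoint interpolation error for each interval
        err = []
        for i in range(len(xs) - 1):
            xm = 0.5 * (xs[i] + xs[i + 1])
            err.append(abs(f(xm) - 0.5 * (f(xs[i]) + f(xs[i + 1]))))
        # mark: refine (roughly) the worst fraction of intervals
        cut = sorted(err, reverse=True)[max(0, int(frac * len(err)) - 1)]
        # refine: bisect every marked interval
        new_xs = []
        for i in range(len(xs) - 1):
            new_xs.append(xs[i])
            if err[i] >= cut:
                new_xs.append(0.5 * (xs[i] + xs[i + 1]))
        new_xs.append(xs[-1])
        xs = new_xs
    return xs

f = lambda x: math.atan(50.0 * (x - 0.5))       # sharp layer near x = 0.5
xs = adapt(f, [i / 10.0 for i in range(11)])
```

After a few cycles the points cluster around the sharp feature, which is the whole point of goal-oriented adaptivity: resolution goes where the error indicator says it is needed.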
Interactive graphical tools for three-dimensional mesh redistribution
Dobbs, L.A.
1996-03-01
Three-dimensional meshes modeling nonlinear problems such as sheet metal forming, metal forging, heat transfer during welding, the propagation of microwaves through gases, and automobile crashes require highly refined meshes in local areas to accurately represent areas of high curvature, stress, and strain. These locally refined areas develop late in the simulation and/or move during the course of the simulation, thus making it difficult to predict their exact location. This thesis is a systematic study of new tools scientists can use with redistribution algorithms to enhance the solution results and reduce the time to build, solve, and analyze nonlinear finite element problems. Participatory design techniques including Contextual Inquiry and Design were used to study and analyze the process of solving such problems. This study and analysis led to the in-depth understanding of the types of interactions performed by FEM scientists. Based on this understanding, a prototype tool was designed to support these interactions. Scientists participated in evaluating the design as well as the implementation of the prototype tool. The study, analysis, prototype tool design, and the results of the evaluation of the prototype tool are described in this thesis.
The Village Telco project: a reliable and practical wireless mesh telephony infrastructure
Gardner-Stephen Paul
2011-01-01
VoIP (Voice over IP) over mesh networks could be a potential solution to the high cost of making phone calls in most parts of Africa. The Village Telco (VT) is an easy-to-use and scalable VoIP-over-meshed-WLAN (Wireless Local Area Network) telephone infrastructure. It uses a mesh network of mesh potatoes to form a peer-to-peer network that relays telephone calls without landlines or cell phone towers. This paper discusses the Village Telco infrastructure, how it addresses the numerous difficulties associated with wireless mesh networks, and its efficient deployment for VoIP services in some communities around the globe. The paper also presents the architecture and functions of a mesh potato, a novel combined analog telephone adapter (ATA) and WiFi access point that routes calls. Lastly, the paper presents the results of preliminary tests conducted on a mesh potato. The preliminary results indicate very good performance and user acceptance of the mesh potatoes. The results prove that the infrastructure is deployable in severe and under-resourced environments as a means to make cheap phone calls and render Internet and IP-based services. As a result, the VT project contributes to bridging the digital divide in developing areas.
Evolutionary Optimization of a Geometrically Refined Truss
Hull, Patrick V.; Tinker, Michael L.; Dozier, Gerry
2005-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly, traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This paper presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution, to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this paper, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Evolutionary Optimization of a Geometrically Refined Truss
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
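The evolutionary toolset described above includes differential evolution. The classic DE/rand/1/bin loop can be sketched on a stand-in objective; the truss objective is replaced here by a simple "mass-like" quadratic, and all control parameters are illustrative choices, not the authors':

```python
import random

def differential_evolution(obj, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=1):
    """Classic DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, then keep the better of trial and parent."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [
                min(max(a[k] + F * (b[k] - c[k]), bounds[k][0]), bounds[k][1])
                if (rng.random() < CR or k == j_rand) else pop[i][k]
                for k in range(dim)
            ]
            if obj(trial) <= obj(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=obj)

# Stand-in "mass" objective with its optimum at (1, 2, 3).
mass = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2 + (x[2] - 3) ** 2
best = differential_evolution(mass, [(-5.0, 5.0)] * 3)
```

In a truss setting the decision variables would instead encode member areas or control-mesh coordinates, and the objective would wrap a finite element analysis with penalty terms for deflection and stress constraints.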
An adaptive finite element procedure for crack propagation analysis
ALSHOAIBI Abdulnaser M.; HADI M.S.A.; ARIFFIN A.K.
2007-01-01
This paper presents an adaptive mesh finite element method for analyzing 2D linear elastic fracture problems. The mesh is generated by the advancing front method, and the stress error norm is taken as the a posteriori error estimator for h-type adaptive refinement. The stress intensity factors are estimated by a displacement extrapolation technique. The near-crack-tip displacements used are obtained from specific nodes of natural six-noded quarter-point elements generated around the user-defined crack tip. The crack growth and its direction are determined from the calculated stress intensity factors; the maximum circumferential stress criterion is used for the latter. To evaluate the accuracy of the estimated stress intensity factors, four cases were tested: a compact tension specimen, a three-point bending specimen, a centrally cracked plate, and a double-edge-notched plate, and the results were compared with those from other studies. The crack trajectories of these test specimens are also illustrated.
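The maximum circumferential (tangential) stress criterion used to pick the growth direction has a closed-form kink angle in terms of the stress intensity factors, obtained by solving K_I sin θ + K_II (3 cos θ − 1) = 0 via the tan(θ/2) substitution. A small sketch (the sign convention, with positive K_II giving a negative kink angle, is one common choice):

```python
import math

def kink_angle(KI, KII):
    """Crack kink angle (radians) from the maximum circumferential stress
    criterion: KI*sin(t) + KII*(3*cos(t) - 1) = 0, solved in tan(t/2)."""
    if KII == 0.0:
        return 0.0  # pure mode I: the crack grows straight ahead
    t_half = (KI - math.sqrt(KI * KI + 8.0 * KII * KII)) / (4.0 * KII)
    return 2.0 * math.atan(t_half)

theta = kink_angle(0.0, 1.0)  # pure mode II loading
```

For pure mode II the criterion recovers the classical kink angle of about 70.5 degrees in magnitude.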
21st International Meshing Roundtable
Weill, Jean-Christophe
2013-01-01
This volume contains the articles presented at the 21st International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on October 7–10, 2012 in San Jose, CA, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics, and visualization.
22nd International Meshing Roundtable
Staten, Matthew
2014-01-01
This volume contains the articles presented at the 22nd International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on Oct 13-16, 2013 in Orlando, Florida, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics and visualization.
The moving mesh code Shadowfax
Vandenbroucke, Bert
2016-01-01
We introduce the moving mesh code Shadowfax, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public License. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare Shadowfax with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.
Confined helium on Lagrange meshes
Baye, Daniel
2015-01-01
The Lagrange-mesh method has the simplicity of a calculation on a mesh and can have the accuracy of a variational method. It is applied to the study of a confined helium atom. Two types of confinement are considered. Soft confinement by potentials is studied in perimetric coordinates. Hard confinement in impenetrable spherical cavities is studied in a system of rescaled perimetric coordinates varying over [0,1] intervals. Energies and mean values of the distances between electrons and between an electron and the helium nucleus are calculated. A high accuracy of 11 to 15 significant figures is obtained with small computing times. Pressures acting on the confined atom are also computed. For sphere radii smaller than 1, their relative accuracies are better than $10^{-10}$. For larger radii up to 10, they progressively decrease to $10^{-3}$, while still improving on the best literature results.
Practical implementation of tetrahedral mesh reconstruction in emission tomography
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2013-05-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio
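The boundary-preserving node elimination described above, which removes nodes in constant-intensity regions while keeping nodes near intensity boundaries, can be illustrated with a much-simplified 1-D sketch (the threshold and the 1-D setting are assumptions for illustration, not the paper's 3-D tetrahedral procedure):

```python
# Sketch of the node-thinning idea: on a regular point cloud carrying image
# intensities, drop nodes whose local neighbourhood is (near-)constant and
# keep nodes that sit on an intensity boundary.
def thin_nodes(xs, intensity, tol=0.05):
    kept = [xs[0]]                               # always keep the endpoints
    for i in range(1, len(xs) - 1):
        local_range = max(intensity[i - 1:i + 2]) - min(intensity[i - 1:i + 2])
        if local_range > tol:                    # keep: lies on a boundary
            kept.append(xs[i])
    kept.append(xs[-1])
    return kept

xs = [i / 100.0 for i in range(101)]
step = [0.0 if x < 0.5 else 1.0 for x in xs]     # sharp edge at x = 0.5
kept = thin_nodes(xs, step)
```

The retained cloud keeps high node density only where the intensity changes, which is the 1-D analogue of the boundary-preserving node motion and elimination used to optimize the tetrahedral mesh geometry.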
Bercea, Gheorghe-Teodor; McRae, Andrew T. T.; Ham, David A.; Mitchell, Lawrence; Rathgeber, Florian; Nardi, Luigi; Luporini, Fabio; Kelly, Paul H. J.
2016-01-01
We present a generic algorithm for numbering and then efficiently iterating over the data values attached to an extruded mesh. An extruded mesh is formed by replicating an existing mesh, assumed to be unstructured, to form layers of prismatic cells. Applications of extruded meshes include, but are not limited to, the representation of 3D high aspect ratio domains employed by geophysical finite element simulations. These meshes are structured in the extruded direction. The algorithm presented ...
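A common layout for extruded-mesh data numbers all layers of one base entity contiguously, so that iterating a vertical column touches contiguous memory; this is a minimal sketch of such a numbering and the column-wise iteration it enables (the function names are illustrative assumptions, not the authors' API):

```python
# Column-major numbering for an extruded mesh: the dof index of (base cell,
# layer) is base * n_layers + layer, so one base cell's vertical column of
# values is stored contiguously.
def dof_index(base_cell, layer, n_layers):
    return base_cell * n_layers + layer

def iterate_columns(n_base_cells, n_layers):
    """Visit every (cell, layer) pair column by column."""
    order = []
    for c in range(n_base_cells):
        for l in range(n_layers):
            order.append(dof_index(c, l, n_layers))
    return order

order = iterate_columns(n_base_cells=3, n_layers=4)
```

With this layout, the inner loop over layers in the extruded (structured) direction is a unit-stride sweep, which is what makes high-aspect-ratio geophysical meshes cheap to iterate.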
Mesh networked unattended ground sensors
Colling, Kent; Calcutt, Wade; Winston, Mark; Jones, Barry
2006-05-01
McQ has developed a family of low cost unattended ground sensors that utilize self-configured, mesh network communications for wireless sensing. Intended for use in an urban environment, the area monitored by the sensor system poses a communication challenge. A discussion into the sensor's communication performance and how it affects sensor installation and the operation of the system once deployed is presented.
Image meshing via hierarchical optimization
Hao XIE; Ruo-feng TONG
2016-01-01
Vector graphics, as a geometric representation of raster images, have many advantages, e.g., definition independence and ease of editing. A popular way to convert raster images into vector graphics is image meshing, the aim of which is to find a mesh that represents an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm that solves the problem from coarser levels to finer ones, providing initialization for each level from the coarser level below it. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes the problem much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly. Then an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.
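The coarse-to-fine initialization strategy described above can be sketched on a toy convex fitting problem; each level here is "solved" by a few gradient steps on a least-squares objective, standing in for the paper's convexified dictionary-learning formulation (the objective, level sizes, and step counts are all illustrative):

```python
import numpy as np

def fit_level(target, x0, steps=100, lr=0.4):
    """A few gradient steps on the convex objective ||x - target||^2,
    standing in for one level's linear least-squares solve."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * 2.0 * (x - target)
    return x

def hierarchical_fit(f, levels=4):
    """Coarse-to-fine: solve on a coarse grid, then upsample the result
    to initialize the next finer level."""
    x, prev_grid = None, None
    for lev in range(levels):
        n = 4 * 2 ** lev + 1
        grid = np.linspace(0.0, 1.0, n)
        target = f(grid)
        x0 = np.zeros(n) if x is None else np.interp(grid, prev_grid, x)
        x = fit_level(target, x0)
        prev_grid = grid
    return grid, x

grid, x = hierarchical_fit(lambda t: np.sin(2 * np.pi * t))
```

The warm start from the coarser level is what keeps each finer, larger problem cheap: the solver begins close to the solution instead of from scratch.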
Arniza Ghazali
2012-01-01
Desired pulp-based product properties can be achieved by adding filler to the pulp network. In exploring this, fines co-generated during refining of alkaline-peroxide-treated oil palm empty fruit bunches (EFB) were collected based on their passage and retention when subjected to stainless-steel square mesh wires of varying mesh sizes. Pulp networks incorporating fines produced by the synergy of low alkaline peroxide (AP) and low-energy refining show that blending 12% of the 400-mesh fines (P300/R400) with the normal 200-mesh pulp fraction enhanced paper tensile strength by 100% due to their favourable dimensions. This demonstrates the usefulness of fibrillar particles whose cell-wall collapsibility increases web density by increasing bonding ability and thus the strength of pulp-based products. Fines produced by a more extreme synergy between alkaline peroxide and degree of refining exhibit unique submicron fibrils and 'nano-CGF', also responsible for further augmentation of the EFB alkaline peroxide pulp network. Whether from the simple (low-AP, low-energy refining) or the extreme synergy of AP and refining, the co-generated fines are apparently suitable as natural fillers for augmenting the pulp network. Particularly for the simple AP and refining synergy, the recovery and utilization of the co-generated filler (CGF) was found to reduce turbidity by 74%, an improvement that will help reduce the complexity of whitewater generation in the pulping system.
SHARP/PRONGHORN Interoperability: Mesh Generation
Avery Bingham; Javier Ortensi
2012-09-01
Progress toward collaboration between the SHARP and MOOSE computational frameworks has been demonstrated through sharing of mesh generation and ensuring mesh compatibility of both tools with MeshKit. MeshKit was used to build a three-dimensional, full-core very high temperature reactor (VHTR) reactor geometry with 120-degree symmetry, which was used to solve a neutron diffusion critical eigenvalue problem in PRONGHORN. PRONGHORN is an application of MOOSE that is capable of solving coupled neutron diffusion, heat conduction, and homogenized flow problems. The results were compared to a solution found on a 120-degree, reflected, three-dimensional VHTR mesh geometry generated by PRONGHORN. The ability to exchange compatible mesh geometries between the two codes is instrumental for future collaboration and interoperability. The results were found to be in good agreement between the two meshes, thus demonstrating the compatibility of the SHARP and MOOSE frameworks. This outcome makes future collaboration possible.
Optimizing the geometrical accuracy of curvilinear meshes
Toulorge, Thomas; Remacle, Jean-François
2015-01-01
This paper presents a method to generate valid high order meshes with optimized geometrical accuracy. The high order meshing procedure starts with a linear mesh, which is subsequently curved without regard to the validity of the high order elements. An optimization procedure is then used to both untangle invalid elements and optimize the geometrical accuracy of the mesh. Standard measures of the distance between curves are considered to evaluate the geometrical accuracy in planar two-dimensional meshes, but they prove computationally too costly for optimization purposes. A fast estimate of the geometrical accuracy, based on Taylor expansions of the curves, is introduced. An unconstrained optimization procedure based on this estimate is shown to yield significant improvements in the geometrical accuracy of high order meshes, as measured by the standard Hausdorff distance between the geometrical model and the mesh. Several examples illustrate the beneficial impact of this method on CFD solutions, with a part...
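The Hausdorff distance used above to measure geometric accuracy can be estimated between dense point samples of the model curve and the mesh with a brute-force double loop; a minimal sketch:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point samples:
    the larger of the two directed distances max_a min_b |a - b|."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# Distance between a sampled unit semicircle and its chord (the x-axis
# segment [-1, 1]): the farthest pairing involves the apex (0, 1).
arc = [(math.cos(t * math.pi / 100), math.sin(t * math.pi / 100)) for t in range(101)]
chord = [(-1 + 2 * k / 100, 0.0) for k in range(101)]
d = hausdorff(arc, chord)
```

The brute-force version is quadratic in the number of samples, which is exactly why the paper replaces it with a cheap Taylor-expansion estimate inside the optimization loop and reserves the true Hausdorff distance for final quality measurement.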
Down sharply in 2002, refining margins showed a clear improvement in the first half-year of 2003. As a result, the earnings reported by oil companies for financial year 2002 were significantly lower than in 2001, but the prospects are brighter for 2003. In the petrochemicals sector, slow demand and higher feedstock prices eroded margins in 2002, especially in Europe and the United States. The financial results for the first part of 2003 seem to indicate that sector profitability will not improve before 2004. (author)
Benazzi, E.; Alario, F
2004-07-01
In 2003, refining margins showed a clear improvement that continued throughout the first three quarters of 2004. Oil companies posted significantly higher earnings in 2003 compared to 2002, with the results of first quarter 2004 confirming this trend. Due to higher feedstock prices, the implementation of new capacity and more intense competition, the petrochemicals industry was not able to boost margins in 2003. In such difficult business conditions, aggravated by soaring crude prices, the petrochemicals industry is not likely to see any improvement in profitability before the second half of 2004. (author)
Benazzi, E
2003-07-01
Biologic mesh for abdominal wall reconstruction
King KS
2014-11-01
Kathryn S King,1 Frank P Albino,2 Parag Bhanot3 1School of Medicine, Georgetown University Hospital, Washington, DC, USA; 2Department of Plastic Surgery, 3Department of General Surgery, Georgetown University Hospital, Washington, DC, USA Background: Mesh reinforcement significantly decreases rates of recurrence following ventral hernia repair. Historically, biologic mesh was touted as superior in the setting of infection; however, selecting the appropriate mesh for a given clinical scenario is often a matter of debate. The purpose of this review is to highlight a number of the more commonly used biologic mesh products with a review of outcomes from the current literature. Methods: Outcomes following abdominal wall reconstruction using biologic mesh were reviewed for acellular cadaveric human dermis, cross-linked porcine dermis, non-cross-linked porcine dermis, porcine small intestine submucosa, acellular bovine pericardium, and acellular bovine dermal mesh. Studies with rigorous methods, adequate patient samples, and sufficient follow-up were selected for review. Results: Hernia recurrence rates following biologic mesh reinforcement vary widely. Porcine small intestine submucosa and bovine pericardium were associated with the lowest hernia recurrence rates. Porcine cross-linked dermal mesh products resulted in higher rates of adhesion formation and lower rates of tissue incorporation compared to non-cross-linked porcine mesh. Conclusion: Successful ventral hernia repair can be achieved with acceptable complication rates for each of the reviewed mesh products. Biologic meshes have an advantage over synthetic mesh in contaminated wounds, but their use may not be cost-effective in all patient populations. Those with and/or at high risk for wound complications may also undergo repair with biologic mesh. Keywords: biologic mesh, ventral hernia repair, acellular dermal matrix
Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)
2015-07-14
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptation and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project
Petroleum refining industry in China
The oil refining industry in China has faced rapid growth in oil imports of increasingly sour grades of crude with which to satisfy growing domestic demand for a slate of lighter and cleaner finished products sold at subsidized prices. At the same time, the world petroleum refining industry has been moving from one that serves primarily local and regional markets to one that serves global markets for finished products, as world refining capacity utilization has increased. Globally, refined product markets are likely to experience continued globalization until refining investments significantly expand capacity in key demand regions. We survey the oil refining industry in China in the context of the world market for heterogeneous crude oils and growing world trade in refined petroleum products.
Macromolecular crystallographic structure refinement
Afonine, Pavel V.
2015-04-01
Model refinement is a key step in crystallographic structure determination that ensures the final atomic structure of a macromolecule represents the measured diffraction data as well as possible. Several decades of effort have gone into developing methods and computational tools to streamline this step. In this manuscript we provide a brief overview of the major milestones in crystallographic computing and methods development pertinent to structure refinement.
Atlantic Basin refining profitability
A review of the profitability margins of oil refining in the Atlantic Basin was presented. Petroleum refiners face the continuous challenge of balancing supply with demand. It would appear that the profitability margins in the Atlantic Basin will increase significantly in the near future because of shrinking supply surpluses. Refinery capacity utilization has reached higher levels than ever before. The American Petroleum Institute reported that in August 1997, U.S. refineries used 99 per cent of their capacity for several weeks in a row. U.S. gasoline inventories have also declined as the industry has focused on reducing capital costs. This is further evidence that supply and demand are tightly balanced. Some of the reasons for tightening supplies were reviewed. It was predicted that U.S. gasoline demand will continue to grow in the near future. Gasoline demand has not declined as expected because new vehicles are not any more fuel efficient today than they were a decade ago. Although federally-mandated fuel efficiency standards were designed to lower gasoline consumption, they may actually have prevented consumption from falling. Atlantic margins were predicted to continue moving up because of the supply and demand evidence: high capacity utilization rates, low operating inventories, limited capacity addition resulting from lower capital spending, continued U.S. gasoline demand growth, and steady total oil demand growth. 11 figs
Mesh Plug Repair of Inguinal Hernia; Single Surgeon Experience
Ahmet Serdar Karaca
2013-10-01
Aim: Mesh repair of inguinal hernia is shown to be an effective and reliable method. In this study, a single surgeon's experience performing inguinal hernia repair with the plug-mesh method is reported. Material and Method: For 587 patients undergoing plug-mesh repair of inguinal hernia, preoperative age, body mass index and comorbid disease were recorded. For all patients, preoperative and postoperative hernia classification, duration of operation, antibiotic use, perioperative complications, early and late postoperative complications, infection, recurrence rates, time to return to normal daily activity and postoperative pain on a verbal pain scale were evaluated, together with long-term pain. The presence of wound infection was assessed by the presence of purulent discharge from the incision. Pain status was measured on a visual analog scale. Results: 587 patients underwent primary inguinal hernia mesh plug repair. Of these, 439 (74%) completed follow-up. Patients' ages ranged from 18 to 86, with a mean of 47±18.07 years. The follow-up period ranged from 3 to 55 months, with a mean of 28.2±13.4 months. Mean duration of surgery was 35.07±4.00 min (range 22-52 min). Complications comprised recurrence in 2 patients (0.5%), hematoma in 6 patients (1.4%), infection in 11 patients (2.5%) and long-term groin pain in 4 patients (0.9%). Discussion: In our experience, plug-mesh repair of primary inguinal hernia is safe and effective and can be used with low recurrence and complication rates.
European refiners re-adjust margins strategy
Gonzalez, R.G. [ed.
1996-05-01
Refiners in Europe are adjusting operating strategies to reflect the volatility of tight operating margins. From the unexpected availability of quality crudes (e.g., Brent, 0.3% sulfur) to the role of government in refinery planning, the European refining industry is positioning itself to reverse the past few years of steadily declining profitability. Unlike the expected increases in US gasoline demand, European gasoline consumption is not expected to increase, and heavy fuel oil consumption is also declining. However, diesel fuel consumption is expected to increase, even though diesel processing capacity has recently decreased (i.e., more imports). Some of the possible strategies that European refiners may adopt to improve margins and reduce volatility include: increasing conversion capacity to supply growing demand for middle distillates and LPG; alleviating refinery cash flow problems with alliances; and directing discretionary investment toward retail merchandising (unless there is a clear trend toward a widening of the sweet-sour crude price differential).
Jennings Jason
2010-01-01
Laparoscopic inguinal herniorrhaphy via a transabdominal preperitoneal (TAPP) approach using polypropylene mesh (Mesh) and staples is an accepted technique. Mesh induces a localised inflammatory response that may extend to, and involve, adjacent abdominal and pelvic viscera such as the appendix. We present an interesting case of suspected Mesh-induced appendicitis treated successfully with laparoscopic appendicectomy, without Mesh removal, in an elderly gentleman who presented with symptoms and signs of acute appendicitis 18 months after laparoscopic inguinal hernia repair. Possible mechanisms for Mesh-induced appendicitis are briefly discussed.
The moving mesh code Shadowfax
Vandenbroucke, Bert; De Rijcke, Sven
2016-01-01
We introduce the moving mesh code Shadowfax, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public License. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic ...
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods, which use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
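The refine/unrefine cycle described above can be sketched on a toy 1D interpolation problem. This is a minimal illustration, not the paper's method: the error indicator (midpoint deviation from the linear interpolant), the model solution, and both tolerances are assumptions chosen for the sketch.

```python
import math

def f(x):
    # model solution with a sharp interior layer at x = 0.5
    return math.tanh(20.0 * (x - 0.5))

def indicator(a, b):
    # interpolation-error indicator for the interval [a, b]:
    # how far the midpoint value deviates from the linear interpolant
    m = 0.5 * (a + b)
    return abs(f(m) - 0.5 * (f(a) + f(b)))

def adapt(nodes, refine_tol=1e-3, coarsen_tol=1e-5, max_passes=20):
    for _ in range(max_passes):
        changed = False
        # refine: split any interval whose indicator exceeds refine_tol
        new_nodes = [nodes[0]]
        for a, b in zip(nodes, nodes[1:]):
            if indicator(a, b) > refine_tol:
                new_nodes.append(0.5 * (a + b))
                changed = True
            new_nodes.append(b)
        nodes = new_nodes
        # unrefine: drop an interior node if the merged interval stays accurate
        kept = [nodes[0]]
        i = 1
        while i < len(nodes) - 1:
            if indicator(kept[-1], nodes[i + 1]) < coarsen_tol:
                changed = True       # skip nodes[i]: merge its two intervals
            else:
                kept.append(nodes[i])
            i += 1
        kept.append(nodes[-1])
        nodes = kept
        if not changed:
            return nodes
    return nodes

mesh = adapt([i / 10 for i in range(11)])
widths = [b - a for a, b in zip(mesh, mesh[1:])]
# the smallest cells cluster near the layer at x = 0.5
print(len(mesh), min(widths), max(widths))
```

The point of the sketch is the economy the abstract describes: cells are concentrated where the indicator is large and removed where the solution is smooth, so accuracy is delivered with far fewer degrees of freedom than a uniformly fine grid.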
Buntemeyer, Lars; Peters, Thomas; Klassen, Mikhail; Pudritz, Ralph E
2015-01-01
We present an algorithm for solving the radiative transfer problem on massively parallel computers using adaptive mesh refinement and domain decomposition. The solver is based on the method of characteristics which requires an adaptive raytracer that integrates the equation of radiative transfer. The radiation field is split into local and global components which are handled separately to overcome the non-locality problem. The solver is implemented in the framework of the magneto-hydrodynamics code FLASH and is coupled by an operator splitting step. The goal is the study of radiation in the context of star formation simulations with a focus on early disc formation and evolution. This requires a proper treatment of radiation physics that covers both the optically thin as well as the optically thick regimes and the transition region in particular. We successfully show the accuracy and feasibility of our method in a series of standard radiative transfer problems and two 3D collapse simulations resembling the ear...
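The method of characteristics mentioned above integrates the transfer equation along rays. A minimal sketch of the per-cell formal solution follows; it is not the FLASH/operator-splitting implementation, just the standard constant-property update along one characteristic, which behaves correctly in both the optically thin and optically thick limits the abstract highlights.

```python
import math

def integrate_ray(i0, cells):
    """March the formal solution of radiative transfer along one characteristic.

    Each cell is (kappa, source, ds): absorption coefficient, source
    function, and path length through the cell. For constant properties
    the exact per-cell update is
        I_out = I_in * exp(-kappa*ds) + S * (1 - exp(-kappa*ds)),
    which is well behaved in both the thin and thick regimes.
    """
    intensity = i0
    for kappa, source, ds in cells:
        att = math.exp(-kappa * ds)
        intensity = intensity * att + source * (1.0 - att)
    return intensity

# optically thick run: intensity saturates to the (uniform) source function
thick = integrate_ray(0.0, [(50.0, 1.0, 1.0)] * 5)
# optically thin run: intensity barely departs from its boundary value
thin = integrate_ray(0.0, [(1e-4, 1.0, 1.0)] * 5)
print(thick, thin)
```

Because the exponential is evaluated exactly rather than Taylor-expanded, the same update handles the transition region between the two regimes without a regime switch, which is the property the solver above needs for disc-formation problems.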
Mesh sensitivity effects on fatigue crack growth by crack-tip blunting and re-sharpening
Tvergaard, Viggo
2007-01-01
Crack-tip blunting under tensile loads and re-sharpening of the crack-tip during unloading is one of the basic mechanisms for fatigue crack growth in ductile metals. Based on an elastic–perfectly plastic material model, crack growth computations have been continued up to 700 full cycles by using remeshing at several stages of the plastic deformation, with studies of the effect of overloads or compressive underloads. Recent published analyses for the first two cycles have shown folding of the crack surface in compression, leading to something that looks like striations. The influence of mesh refinement is used to study the possibility of this type of behaviour within the present method. Even with much refined meshes, no indication of crack surface folding is found here.
Towards Perceptual Quality Evaluation of Dynamic Meshes
Torkhani, Fakhri; Wang, Kai; Montanvert, Annick
2011-01-01
In practical applications, it is common that a 3D mesh undergoes some lossy operations. Since the end users of 3D meshes are often human beings, it is thus important to derive metrics that can faithfully assess the perceptual distortions induced by these operations. Like in the case of image quality assessment, metrics based on mesh geometric distances (e.g. Hausdorff distance and root mean squared error) cannot correctly predict the visual quality degradation. Recently, several perceptually-...
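The two geometric baselines the abstract names as poor perceptual predictors are easy to state concretely. The sketch below computes both on small vertex sets; it is a brute-force illustration (no spatial acceleration structure, vertex correspondence assumed for RMSE), not any metric from the paper.

```python
import math

def rmse(verts_a, verts_b):
    # root mean squared error between corresponding vertices
    assert len(verts_a) == len(verts_b)
    sq = sum(sum((x - y) ** 2 for x, y in zip(p, q))
             for p, q in zip(verts_a, verts_b))
    return math.sqrt(sq / len(verts_a))

def hausdorff(pts_a, pts_b):
    # symmetric Hausdorff distance between two point sets (brute force)
    def one_sided(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(one_sided(pts_a, pts_b), one_sided(pts_b, pts_a))

a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(0, 0, 0), (1, 0, 0), (0, 1, 0.5)]   # one vertex displaced
print(rmse(a, b), hausdorff(a, b))
```

Both numbers respond only to the size of the displacement, not to where it falls on the surface or how visually salient it is, which is exactly why perceptually-motivated metrics are needed.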
Unstructured Polyhedral Mesh Thermal Radiation Diffusion
Palmer, T.S.; Zika, M.R.; Madsen, N.K.
2000-07-27
Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.
Unstructured Polyhedral Mesh Thermal Radiation Diffusion
Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module
Association Discovery Protocol for Hybrid Wireless Mesh Networks
Adjih, Cédric; Cho, Song Yean; Jacquet, Philippe
2006-01-01
Wireless mesh networks (WMNs) consist of two kinds of nodes: mesh routers which form the backbones of WMNs and mesh clients which associate with mesh routers to access networks. Because of the discrepancy between mesh routers and mesh clients, WMNs have a hybrid structure. Their hybrid structure presents an opportunity to integrate WMNs with different networks such as wireless LAN, Bluetooth and sensor networks through bridging functions in mesh routers. Because of the ability to integrate va...
On LAGOON nose landing gear CFD/CAA computation over unstructured mesh using a ZDES approach.
De La Puente, F.; Sanders, L.; Vuillot, F.
2014-01-01
This paper is part of ONERA's effort to compute the noise generated around landing gears, an effort demonstrated through studies on a variety of configurations, such as those included in BANC-II (Benchmark problems for Airframe Noise Computations). In this case, the addressed geometry is the LAGOON baseline nose landing gear. In the present computation, a refined unstructured mesh is generated to resolve the boundary layer down to a y+ of around one. The simulation of the flow was per...
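Resolving the boundary layer "down to a y+ of around one" dictates the first-cell height, and a standard flat-plate estimate shows why such meshes get so fine. The sketch below uses a textbook skin-friction correlation and illustrative air-at-sea-level numbers; it is a sizing heuristic, not ONERA's meshing procedure.

```python
def first_cell_height(y_plus, u_inf, rho, mu, length):
    # flat-plate estimate of the wall-normal first-cell height for a target y+
    re = rho * u_inf * length / mu              # length-based Reynolds number
    cf = 0.026 / re ** (1.0 / 7.0)              # empirical skin-friction correlation
    tau_w = 0.5 * cf * rho * u_inf ** 2         # wall shear stress
    u_tau = (tau_w / rho) ** 0.5                # friction velocity
    return y_plus * mu / (rho * u_tau)          # y = y+ * nu / u_tau

# illustrative numbers: ~70 m/s approach speed, ~1 m landing-gear scale, sea-level air
h = first_cell_height(y_plus=1.0, u_inf=70.0, rho=1.225, mu=1.8e-5, length=1.0)
print(h)   # a few micrometres
```

A first cell a few micrometres tall on a metre-scale geometry is the price of wall-resolved turbulence treatment, which is why hybrid approaches such as ZDES are attractive for airframe noise problems.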
Delaunay mesh generation for an unstructured-grid ocean general circulation model
Legrand, S.; Legat, V.; E. Deleersnijder
2000-01-01
An incremental method is presented to automatically generate boundary-fitted Delaunay triangulations of the global ocean. The method takes into account the Earth's curvature and allows local mesh refinement in order to resolve topological or dynamical features such as mid-ocean ridges or western boundary currents. Crucial issues such as the node insertion process, the boundary integrity problem and the creation of inner nodes are explained. Finally, the quality of the generated triangulations is discussed.
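Incremental node insertion of the Bowyer-Watson kind is driven by a single geometric test: does the new node fall inside a triangle's circumcircle? A sketch of that determinant predicate in the plane follows; it ignores the Earth-curvature handling of the paper and, like any floating-point predicate, would need robust arithmetic in production.

```python
def in_circumcircle(a, b, c, p):
    """True if point p lies strictly inside the circumcircle of triangle abc.

    Standard 3x3 determinant predicate; assumes a, b, c are in
    counter-clockwise order. In incremental Delaunay insertion, every
    triangle whose circumcircle contains the new node is deleted and
    the resulting cavity is retriangulated around the node.
    """
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # CCW; circumcircle centred at (0.5, 0.5)
print(in_circumcircle(*tri, (0.5, 0.5)))     # inside
print(in_circumcircle(*tri, (2.0, 2.0)))     # far outside
```

Points exactly on the circle give a zero determinant, the degenerate case that motivates exact or adaptive-precision predicates in serious mesh generators.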
The complexity of resolution refinements
Buresh-Oppenheim, Joshua; Pitassi, Toniann
2007-01-01
Resolution is the most widely studied approach to propositional theorem proving. In developing efficient resolution-based algorithms, dozens of variants and refinements of resolution have been studied from both the empirical and analytic sides. The most prominent of these refinements are: DP (ordered), DLL (tree), semantic, negative, linear and regular resolution. In this paper, we characterize and study these six refinements of resolution. We give a nearly complete chara...
Interoperable mesh components for large-scale, distributed-memory simulations
Devine, K.; Diachin, L.; Kraftcheck, J.; Jansen, K. E.; Leung, V.; Luo, X.; Miller, M.; Ollivier-Gooch, C.; Ovcharenko, A.; Sahni, O.; Shephard, M. S.; Tautges, T.; Xie, T.; Zhou, M.
2009-07-01
SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications.
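The "abstract data model and programming interface" idea above can be suggested with a toy sketch: entities are opaque handles, and applications work through type and adjacency queries rather than concrete storage. This is loosely in the spirit of component interfaces such as ITAPS iMesh; the names and shape here are invented for illustration and do not match any real API.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    """Toy abstract mesh data model: entity handles plus query methods.

    Applications never see the storage layout; they iterate entities by
    kind and request downward adjacencies, so the implementation behind
    the interface can be swapped (serial, distributed, ...) unchanged.
    """
    coords: list = field(default_factory=list)   # vertex handle -> (x, y)
    faces: list = field(default_factory=list)    # face handle -> vertex handles

    def add_vertex(self, xy):
        self.coords.append(tuple(xy))
        return len(self.coords) - 1              # opaque handle

    def add_face(self, verts):
        self.faces.append(tuple(verts))
        return len(self.faces) - 1

    def entities(self, kind):
        n = len(self.coords) if kind == "vertex" else len(self.faces)
        return range(n)

    def adjacencies(self, face):
        return self.faces[face]                  # downward adjacency: face -> vertices

m = Mesh()
v = [m.add_vertex(p) for p in [(0, 0), (1, 0), (0, 1), (1, 1)]]
m.add_face((v[0], v[1], v[2]))
m.add_face((v[1], v[3], v[2]))
print(len(list(m.entities("face"))), m.adjacencies(0))
```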