Issues in adaptive mesh refinement
Energy Technology Data Exchange (ETDEWEB)
Dai, William Wenlong [Los Alamos National Laboratory]
2009-01-01
In this paper, we present an approach for patch-based adaptive mesh refinement (AMR) for multi-physics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, and management of patches. Among the special features of this patch-based AMR are symmetry preserving, efficiency of refinement, a special implementation of flux correction, and patch management in parallel computing environments. Here, higher efficiency of refinement means fewer unnecessarily refined cells for a given set of cells to be refined. To demonstrate the capability of the AMR framework, hydrodynamics simulations with many levels of refinement are shown in both two and three dimensions.
Adaptive Mesh Refinement in CTH
International Nuclear Information System (INIS)
Crawford, David
1999-01-01
This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
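The isotropic 2:1 refinement and unrefinement described above can be sketched in miniature (a hedged illustration only, not CTH's actual data structures; the function names are invented for this example):

```python
# Refining a 2D block splits every cell in two along each axis, so each
# parent cell gets 2x2 children carrying the parent's value; unrefining
# averages the four children back into one coarse cell.
def refine(block):
    """2:1 isotropic refinement of a 2D block of cell values."""
    return [[v for v in row for _ in (0, 1)] for row in block for _ in (0, 1)]

def unrefine(fine):
    """Inverse operation: average each 2x2 group of children."""
    ny, nx = len(fine) // 2, len(fine[0]) // 2
    return [[(fine[2 * j][2 * i] + fine[2 * j][2 * i + 1]
              + fine[2 * j + 1][2 * i] + fine[2 * j + 1][2 * i + 1]) / 4.0
             for i in range(nx)] for j in range(ny)]

coarse = [[1.0, 2.0], [3.0, 4.0]]
fine = refine(coarse)   # 4x4 block; unrefine(fine) recovers the original
```

Because refinement copies parent values and unrefinement averages, the round trip is exact for this piecewise-constant prolongation; production codes use higher-order interpolation, but the 2:1 index arithmetic is the same.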
Adaptive mesh refinement in Titanium
Energy Technology Data Exchange (ETDEWEB)
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical partial differential equations at the production level. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.
Wrinkling prediction with adaptive mesh refinement
Selman, A.; Meinders, Vincent T.; van den Boogaard, Antonius H.; Huetink, Han
2000-01-01
An adaptive mesh refinement procedure for wrinkling prediction analyses is presented. First the critical values are determined using Hutchinson's bifurcation functional. A wrinkling risk factor is then defined and used to determine areas of potential wrinkling risk. Finally, a mesh refinement is
Adaptive mesh refinement for storm surge
Mandli, Kyle T.
2014-03-01
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Numerical models currently exist that can resolve the details of coastal regions but are often too costly to run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike, along with observed tide gauge data and the computational cost of each model run. © 2014 Elsevier Ltd.
Local adaptive mesh refinement for shock hydrodynamics
International Nuclear Information System (INIS)
Berger, M.J.; Colella, P. (Lawrence Livermore Laboratory, Livermore, California 94550)
1989-01-01
The aim of this work is the development of an automatic, adaptive mesh refinement strategy for solving hyperbolic conservation laws in two dimensions. There are two main difficulties in doing this. The first problem is due to the presence of discontinuities in the solution and the effect on them of discontinuities in the mesh. The second problem is how to organize the algorithm to minimize memory and CPU overhead. This is an important consideration and will continue to be important as more sophisticated algorithms that use data structures other than arrays are developed for use on vector and parallel computers. copyright 1989 Academic Press, Inc.
Visualization Tools for Adaptive Mesh Refinement Data
Energy Technology Data Exchange (ETDEWEB)
Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki,Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-05-09
Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently extending VisIt, an open-source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.
Parallel adaptive mesh refinement for electronic structure calculations
Energy Technology Data Exchange (ETDEWEB)
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1996-12-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
Adaptive mesh refinement for shocks and material interfaces
Energy Technology Data Exchange (ETDEWEB)
Dai, William Wenlong [Los Alamos National Laboratory]
2010-01-01
There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats cells one by one and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. However, patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve symmetries of physics problems. In this paper, we present an approach for patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, a special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.
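The clustering step that turns flagged cells into rectangular patches can be illustrated with a simplified bisection scheme in the spirit of Berger-Rigoutsos clustering (an assumption-laden sketch, not the paper's algorithm; `min_fill` and all names are invented):

```python
import numpy as np

def cluster(flags, min_fill=0.7):
    """Cluster flagged cells into bounding-box patches.

    Illustrative bisection: if the bounding box of the flagged cells is
    filled densely enough, accept it as a patch; otherwise split along
    the longer axis and recurse on each half.
    """
    ys, xs = np.nonzero(flags)
    if len(xs) == 0:
        return []
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    area = (y1 - y0) * (x1 - x0)
    if len(xs) / area >= min_fill or area == 1:
        return [(y0, y1, x0, x1)]
    patches = []
    if (y1 - y0) >= (x1 - x0):           # split along the longer axis
        mid = (y0 + y1) // 2
        cuts = ((y0, mid, x0, x1), (mid, y1, x0, x1))
    else:
        mid = (x0 + x1) // 2
        cuts = ((y0, y1, x0, mid), (y0, y1, mid, x1))
    for a, b, c, d in cuts:
        part = np.zeros_like(flags)
        part[a:b, c:d] = flags[a:b, c:d]
        patches += cluster(part, min_fill)
    return patches

flags = np.zeros((8, 8), dtype=bool)
flags[1:3, 1:3] = True   # one small flagged cluster
flags[5:7, 5:8] = True   # another, far away
patches = cluster(flags)  # two tight patches, not one wasteful box
```

The "efficiency of refinement" the abstract mentions corresponds to the `min_fill` threshold here: a higher threshold yields tighter patches with fewer unnecessarily refined cells, at the price of more patches to manage.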
Simulation of nonpoint source contamination based on adaptive mesh refinement
Kourakos, G.; Harter, T.
2014-12-01
Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of ~10^5-6) of spatially and temporally heterogeneous sources such as fields, crops, and dairies, while the received contaminants emerge, at significantly uncertain time lags, at a large array of discharge surfaces such as public supply, domestic and irrigation wells, and streams. To support decision making in such complex regimes, several approaches have been developed, which can be grouped into 3 categories: i) index methods, ii) regression methods, and iii) physically based methods. Among the three, physically based methods are considered more accurate, but at a higher computational cost. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km^2) groundwater basins. First we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality we combine adaptive mesh refinement with the nonlinear solution for unconfined flow. Starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity appears higher, resulting in an optimal grid. Secondly we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, where the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.
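The streamline decomposition mentioned above, in which the 3D transport problem becomes many independent 1D problems, can be sketched for a single streamline (a minimal illustration with invented names and parameters, not the authors' code):

```python
# First-order upwind advection of a contaminant along one streamline,
# with a constant inflow concentration at the upstream end. In the
# streamline model, thousands of such 1D problems run independently.
def advect_1d(c, velocity, dx, dt, nsteps, inflow=1.0):
    """Explicit upwind scheme; stable for CFL = velocity*dt/dx <= 1."""
    cfl = velocity * dt / dx
    assert 0 < cfl <= 1.0, "explicit upwind needs CFL <= 1"
    c = list(c)
    for _ in range(nsteps):
        new = c[:]
        for i in range(len(c)):
            upstream = inflow if i == 0 else c[i - 1]
            new[i] = c[i] - cfl * (c[i] - upstream)
        c = new
    return c

# A contaminant front entering a clean 10-cell streamline:
profile = advect_1d([0.0] * 10, velocity=1.0, dx=1.0, dt=0.5, nsteps=40)
```

After enough steps the front has swept the whole streamline and the profile approaches the inflow concentration monotonically; the uncertain "time lags" of the abstract correspond to how long each streamline takes to deliver mass to its discharge point.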
Block-structured Adaptive Mesh Refinement - Theory, Implementation and Application
Directory of Open Access Journals (Sweden)
Deiterding Ralf
2011-12-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included, and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI. The load-balancing strategy has to be defined carefully in order to achieve a good speedup in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM
International Nuclear Information System (INIS)
Miniati, Francesco; Martin, Daniel F.
2011-01-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of Stokes' theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
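The constrained-transport idea, updating face-centered fields from edge EMFs so that the discrete divergence is preserved to round-off, can be demonstrated on a 2D staggered grid (an illustrative sketch under assumed names and shapes, not CHARM's implementation):

```python
import numpy as np

# Face-centered fields on a 2D staggered grid; the z-EMF lives at cell
# corners. Updating B from the circulation of the EMF (a discrete Stokes
# theorem) keeps the discrete divergence at round-off, whatever the EMF is.
nx = ny = 8
dx = dy = 1.0
bx = np.zeros((ny, nx + 1))    # B_x on x-faces
by = np.zeros((ny + 1, nx))    # B_y on y-faces
emf = np.random.default_rng(0).standard_normal((ny + 1, nx + 1))

def divergence(bx, by):
    """Discrete div B, one value per cell."""
    return (bx[:, 1:] - bx[:, :-1]) / dx + (by[1:, :] - by[:-1, :]) / dy

# CT update: dBx/dt = -dEz/dy on x-faces, dBy/dt = +dEz/dx on y-faces.
dt = 0.1
bx_new = bx - dt * (emf[1:, :] - emf[:-1, :]) / dy
by_new = by + dt * (emf[:, 1:] - emf[:, :-1]) / dx
```

The two finite differences of the same corner EMF cancel term by term inside every cell, so `divergence(bx_new, by_new)` stays at machine zero even for a random EMF; this is the property the reflux-curl synchronization has to maintain across refinement boundaries.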
ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS
International Nuclear Information System (INIS)
Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Tasker, Elizabeth
2014-01-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
Direct numerical simulation of bubbles with parallelized adaptive mesh refinement
International Nuclear Information System (INIS)
Talpaert, A.
2015-01-01
The study of two-phase thermal-hydraulics is a major topic in nuclear engineering, for both the safety and the efficiency of nuclear facilities. In addition to experiments, numerical modeling helps to know precisely where bubbles appear and how they behave, in the core as well as in the steam generators. This work presents the finest scale of representation of two-phase flows, Direct Numerical Simulation of bubbles. We use the 'Di-phasic Low Mach Number' equation model. It is particularly adapted to low-Mach-number flows, that is to say flows whose velocity is much slower than the speed of sound; this is very typical of nuclear thermal-hydraulics conditions. Because we study bubbles, we capture the front between the vapor and liquid phases thanks to a downward flux-limiting numerical scheme. The specific discrete analysis technique this work introduces is well-balanced parallel Adaptive Mesh Refinement (AMR). With AMR, we refine the coarse grid on a set of patches in order to locally increase precision in areas which matter more, and capture fine changes in the front location and its topology. We show that patch-based AMR is well suited to parallel computing. We use a variety of physical examples: forced advection, heat transfer, phase changes represented by a Stefan model, as well as the combination of all these models. We present the results of these numerical simulations, as well as the speedup compared to equivalent non-AMR simulations and to serial computation of the same problems. This document is made up of an abstract and the slides of the presentation. (author)
Parallel adaptive mesh refinement techniques for plasticity problems
Energy Technology Data Exchange (ETDEWEB)
Barry, W.J. [Carnegie Mellon Univ., Pittsburgh, PA (United States). Dept. of Civil and Environmental Engineering]; Jones, M.T. [Virginia Polytechnic Institute and State Univ., Blacksburg, VA (United States). Dept. of Electrical and Computer Engineering]; Plassmann, P.E. [Argonne National Lab., IL (United States)]
1997-12-31
The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way of solving such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. We explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.
Resolution Convergence in Cosmological Hydrodynamical Simulations Using Adaptive Mesh Refinement
Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim
2018-03-01
We have explored the evolution of gas distributions in cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, in order to understand the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions and the final resolution of the simulation. Lower-initial-resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of the initial conditions is therefore an important factor, even with a fixed maximum spatial resolution. The power spectrum of gas in AMR simulations diverges strongly from the fixed-grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution and to mimic the natural evolution of dark-matter-only simulations. Although the impact of the hold-back falls with increasing spatial and initial-condition resolution, the offsets in star formation remain down to a spatial resolution of 1 kpc. These offsets are of order 10-20%, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of the hold-back on the star formation rate.
Hydrodynamics in full general relativity with conservative adaptive mesh refinement
East, William E.; Pretorius, Frans; Stephens, Branson C.
2012-06-01
There is great interest in numerical relativity simulations involving matter due to the likelihood that binary compact objects involving neutron stars will be detected by gravitational wave observatories in the coming years, as well as to the possibility that binary compact object mergers could explain short-duration gamma-ray bursts. We present a code designed for simulations of hydrodynamics coupled to the Einstein field equations targeted toward such applications. This code has recently been used to study eccentric mergers of black hole-neutron star binaries. We evolve the fluid conservatively using high-resolution shock-capturing methods, while the field equations are solved in the generalized-harmonic formulation with finite differences. In order to resolve the various scales that may arise, we use adaptive mesh refinement (AMR) with grid hierarchies based on truncation error estimates. A noteworthy feature of this code is the implementation of the flux correction algorithm of Berger and Colella to ensure that the conservative nature of fluid advection is respected across AMR boundaries. We present various tests to compare the performance of different limiters and flux calculation methods, as well as to demonstrate the utility of AMR flux corrections.
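The Berger-Colella flux correction cited above can be reduced to a one-interface sketch: after the fine level takes its substeps, the adjacent coarse cell must be corrected to use the time average of the fine fluxes at the coarse-fine boundary, or conservation is lost (the numbers and names below are invented for illustration, not from this code):

```python
# One coarse cell sits immediately upstream of a refined region; the
# refinement ratio is 2 in space and time.
def reflux_demo():
    dt_c, dx_c, r = 0.5, 1.0, 2
    u_c = 1.0
    flux_coarse = 1.0           # interface flux the coarse solver used
    flux_fine = [0.8, 1.0]      # interface flux at each of the r fine steps
    # Coarse update with its own (inconsistent) outgoing flux:
    u_c -= dt_c / dx_c * flux_coarse
    # Reflux: swap in the time average of the fine fluxes, which is what
    # actually crossed the interface, restoring discrete conservation.
    u_c += dt_c / dx_c * (flux_coarse - sum(flux_fine) / r)
    return u_c

u_after = reflux_demo()
```

With these made-up numbers the time-averaged fine flux is 0.9, so the coarse cell receives a correction of +0.05 relative to its uncorrected update; in a real AMR hierarchy this correction is accumulated in flux registers along every coarse-fine boundary.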
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library that permits efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is greatly simplified: one primarily specifies the serial single-grid application and obtains the parallel, self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers, which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
Conforming to interface structured adaptive mesh refinement: 3D algorithm and implementation
Nagarajan, Anand; Soghrati, Soheil
2018-03-01
A new non-iterative mesh generation algorithm named conforming to interface structured adaptive mesh refinement (CISAMR) is introduced for creating 3D finite element models of problems with complex geometries. CISAMR transforms a structured mesh composed of tetrahedral elements into a conforming mesh with low element aspect ratios. The construction of the mesh begins with the structured adaptive mesh refinement of elements in the vicinity of material interfaces. An r-adaptivity algorithm is then employed to relocate selected nodes of nonconforming elements, followed by face-swapping a small fraction of them to eliminate tetrahedrons with high aspect ratios. The final conforming mesh is constructed by sub-tetrahedralizing remaining nonconforming elements, as well as tetrahedrons with hanging nodes. In addition to studying the convergence and analyzing element-wise errors in meshes generated using CISAMR, several example problems are presented to show the ability of this method for modeling 3D problems with intricate morphologies.
Radiation transport code with adaptive Mesh Refinement: acceleration techniques and applications
International Nuclear Information System (INIS)
Velarde, Pedro; Garcia-Fernandez, Carlos; Portillo, David; Barbas, Alfonso
2011-01-01
We present a study of acceleration techniques for solving Sn radiation transport equations with Adaptive Mesh Refinement (AMR). Both DSA and TSA are considered, taking into account the influence of the interaction between different levels of the mesh structure and the order of approximation in angle. A hybrid method is proposed in order to obtain a better convergence rate and lower computation times. Some examples are presented, relevant to ICF and X-ray secondary sources. (author)
Energy Technology Data Exchange (ETDEWEB)
Hornung, R.D. [Duke Univ., Durham, NC (United States)
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
International Nuclear Information System (INIS)
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of node-surface fluxes and explore the possibility of node refinements in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of solution unknowns. Some well-known benchmarks have been investigated and improvements are reported.
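A gradient-based a posteriori refinement flag of the general kind described above can be sketched as follows (a hedged illustration; the tolerance, the names, and the exact criterion are assumptions, not ANRNE-2D's actual scheme):

```python
# Mark a node for refinement when its flux differs from any neighbour's
# by more than a relative tolerance; large relative differences stand in
# for large surface net leakage in this toy criterion.
def flag_nodes(flux, tol=0.2):
    flagged = set()
    ny, nx = len(flux), len(flux[0])
    for j in range(ny):
        for i in range(nx):
            for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                jj, ii = j + dj, i + di
                if 0 <= jj < ny and 0 <= ii < nx:
                    a, b = flux[j][i], flux[jj][ii]
                    if abs(a - b) / max(abs(a), abs(b), 1e-30) > tol:
                        flagged.add((j, i))
    return flagged

flux = [[1.0, 1.0, 1.0],
        [1.0, 2.0, 1.0],
        [1.0, 1.0, 1.0]]
marked = flag_nodes(flux)   # the peaked node and its four neighbours
```

Only the region around the flux peak gets refined, which is the point of the adaptive strategy: accuracy comparable to uniform fine-mesh modeling without refining the smooth corners of the domain.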
A parallel adaptive mesh refinement algorithm for predicting turbulent non-premixed combusting flows
International Nuclear Information System (INIS)
Gao, X.; Groth, C.P.T.
2005-01-01
A parallel adaptive mesh refinement (AMR) algorithm is proposed for predicting turbulent non-premixed combusting flows characteristic of gas turbine engine combustors. The Favre-averaged Navier-Stokes equations governing mixture and species transport for a reactive mixture of thermally perfect gases in two dimensions, the two transport equations of the k-ω turbulence model, and the time-averaged species transport equations, are all solved using a fully coupled finite-volume formulation. A flexible block-based hierarchical data structure is used to maintain the connectivity of the solution blocks in the multi-block mesh and facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. This AMR approach allows for anisotropic mesh refinement and the block-based data structure readily permits efficient and scalable implementations of the algorithm on multi-processor architectures. Numerical results for turbulent non-premixed diffusion flames, including cold- and hot-flow predictions for a bluff body burner, are described and compared to available experimental data. The numerical results demonstrate the validity and potential of the parallel AMR approach for predicting complex non-premixed turbulent combusting flows. (author)
Fromang, S.; Hennebelle, P.; Teyssier, R.
2006-01-01
In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. The algorithm is based on a previous work in which the MUSCL--Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Through a series of test problems, we illustrate the performances of this new code using two diffe...
Self-gravitational Magnetohydrodynamics with Adaptive Mesh Refinement for Protostellar Collapse
Matsumoto, Tomoaki
2006-01-01
A new numerical code, called SFUMATO, for solving self-gravitational magnetohydrodynamics (MHD) problems using adaptive mesh refinement (AMR) is presented. A block-structured grid is adopted as the grid of the AMR hierarchy. The total variation diminishing (TVD) cell-centered scheme is adopted as the MHD solver, with hyperbolic cleaning of divergence error of the magnetic field also implemented. The self-gravity is solved by a multigrid method composed of (1) full multigrid (FMG)-cycle on the...
Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver
Czech Academy of Sciences Publication Activity Database
Kůs, Pavel; Šístek, Jakub
2017-01-01
Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737
Adaptive Mesh Refinement for the Immersed Boundary Lattice Green's Function method
Mengaldo, Gianmarco; Colonius, Tim
2017-11-01
The immersed boundary lattice Green's function (IBLGF) method, recently developed by Liska and Colonius, is a scalable numerical framework for solving incompressible flows on unbounded domains. It uses an immersed boundary method based on a second-order mimetic finite volume scheme, used in conjunction with an adaptive block refinement approach, achieved via lattice Green's functions, whose purpose is to limit the computational domain to the vortical regions that dominate the flow evolution - e.g., regions in proximity to the immersed body surface and in its wake. The method, as it stands, is competitive for low Reynolds number flows, as the staggered Cartesian mesh employed cannot be stretched or refined locally. In this talk we address this issue by presenting the development of adaptive mesh refinement (AMR) capabilities in the IBLGF method. As we shall see, this new feature and the adaptive block refinement already present in the code help overcome the limitations in simulating high Reynolds number flows, an issue that is endemic to the vast majority of immersed boundary-based methods. Supported by ONR-N00014-16-1-2734.
Directory of Open Access Journals (Sweden)
Greg L. Bryan
2002-01-01
As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10^12 in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.
International Nuclear Information System (INIS)
Penner, Joyce E; Andronova, Natalia; Oehmke, Robert C; Brown, Jonathan; Stout, Quentin F; Jablonowski, Christiane; Leer, Bram van; Powell, Kenneth G; Herzog, Michael
2007-01-01
One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.
PARAMESH: A Parallel, Adaptive Mesh Refinement Toolkit and Performance of the ASCI/FLASH code
Olson, K. M.; MacNeice, P.; Fryxell, B.; Ricker, P.; Timmes, F. X.; Zingale, M.
1999-12-01
We describe a package of routines known as PARAMESH which enables a user to easily convert an existing serial, uniform grid code to a parallel code with adaptive mesh refinement. The package does this through the use of a block-structured form of AMR in combination with a tree data structure for distributing blocks to processors. We also describe some of the applications which have been developed using PARAMESH, with special emphasis on the ASCI/FLASH code. Performance results are also discussed for a variety of parallel architectures.
A new adaptive mesh refinement data structure with an application to detonation
Ji, Hua; Lien, Fue-Sang; Yee, Eugene
2010-11-01
A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the usage of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobarry, R. deFainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 126 (2000) 330] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures, which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure, which was required in previous approaches for the organization of the AMR data, is no longer needed. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
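The index arithmetic described in the abstract can be sketched in a few lines. The helper names below are illustrative, not taken from the CSAMR code, and a plain Python dictionary stands in for the hash table keyed directly by cell indices:

```python
from itertools import product

def parent(level, i, j, k):
    """Parent cell on the next-coarser level (integer index halving)."""
    return (level - 1, i // 2, j // 2, k // 2)

def children(level, i, j, k):
    """The 8 child cells on the next-finer level (index doubling)."""
    return [(level + 1, 2 * i + di, 2 * j + dj, 2 * k + dk)
            for di, dj, dk in product((0, 1), repeat=3)]

def face_neighbors(level, i, j, k):
    """Same-level face neighbors obtained by pure index offsets."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [(level, i + di, j + dj, k + dk) for di, dj, dk in offsets]

# A plain dict serves as the hash table: the entry keys are the cell
# indices themselves, so no tree traversal is needed to reach a cell's
# relatives -- only index arithmetic plus one hash lookup.
cells = {(1, 2, 3, 1): {"rho": 1.0}}
assert parent(1, 2, 3, 1) == (0, 1, 1, 0)
```

Because parent, children, and neighbor keys are computable, connectivity never has to be stored per cell, which is the source of the low memory footprint claimed in the abstract.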
BISICLES - A Scalable Finite-Volume Adaptive Mesh Refinement Ice Sheet Model
Martin, D. F.; Cornford, S. L.; Ranken, D. F.; Le Brocq, A. M.; Gladstone, R. M.; Payne, A. J.; Ng, E. G.; Lipscomb, W. H.
2012-04-01
Understanding the changing behavior of land ice sheets is essential for accurate projection of sea-level change. The dynamics of ice sheets span a wide range of scales. Localized regions such as grounding lines and ice streams require extremely fine (better than 1 km) resolution to correctly capture the dynamics. Resolving such features using a uniform computational mesh would be prohibitively expensive. Conversely, there are large regions where such fine resolution is unnecessary and represents a waste of computational resources. This makes ice sheets a prime candidate for adaptive mesh refinement (AMR), in which finer spatial resolution is added only where needed, enabling the efficient use of computing resources. The Berkeley ISICLES (BISICLES) project is a collaboration among the Lawrence Berkeley and Los Alamos National Laboratories in the U.S. and the University of Bristol in the U.K. We are constructing a high-performance scalable AMR ice sheet model using the Chombo parallel AMR framework. The placement of refined meshes can easily adapt dynamically to follow the changing and evolving features of the ice sheets. We also use a vertically-integrated treatment of the momentum equation based on that of Schoof and Hindmarsh (2010), which permits additional computational efficiency. Using Chombo enables us to take advantage of existing scalable multigrid-based AMR elliptic solvers and PPM-based AMR hyperbolic solvers. Linking to the existing Glimmer-CISM community ice sheet model as an alternative dynamical core allows use of many features of the existing Glimmer-CISM model, including a coupler to CESM. We present results showing the effectiveness of our approach, both for simple benchmark problems which validate our approach, and for application to regional and continental-scale ice-sheet modeling.
Gerya, T.; Duretz, T.; May, D. A.
2012-04-01
We present a new 2D adaptive mesh refinement (AMR) algorithm based on stress-conservative finite differences formulated for a non-uniform rectangular staggered grid. The refinement approach is based on repetitive cell splitting organized via a quad-tree construction (every parent cell is split into 4 daughter cells of equal size). Irrespective of the level of resolution, every cell has 5 staggered nodes (2 horizontal velocities, 2 vertical velocities and 1 pressure) for which the respective governing equations, boundary conditions and interpolation equations are formulated. The connectivity of the grid is achieved via cross-indexing of grid cells and basic nodal points located in their corners: four corner nodes are indexed for every cell and up to 4 surrounding cells are indexed for every node. The accuracy of the approach depends critically on the formulation of the stencil used at the "hanging" velocity nodes located at the boundaries between different levels of resolution. The most accurate results are obtained for the scheme based on the volume flux balance across the resolution boundary combined with stress-based interpolation of the velocity orthogonal to the boundary. We tested this new approach with a number of 2D variable viscosity analytical solutions. Our tests demonstrate that the adaptive staggered grid formulation has convergence properties similar to those obtained with a standard, non-adaptive staggered grid formulation, and that the measured convergence rates are insensitive to whether the transition in grid resolution crosses sharp viscosity contrast interfaces. We compared various grid refinement strategies based on the distribution of different field variables such as viscosity, density and velocity. According to these tests the refinement allows for significant (0.5-1 order of magnitude) increase in the computational accuracy at the same
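The quad-tree splitting rule in the abstract (every parent cell split into 4 equal daughter cells) can be sketched as follows; the class and field names are hypothetical, not the authors' code:

```python
# Toy quad-tree cell for 2D AMR: each split replaces a leaf with
# 4 equal-size daughter cells one level finer.
class Cell:
    def __init__(self, x, y, size, level=0):
        self.x, self.y = x, y      # lower-left corner
        self.size = size           # edge length of the square cell
        self.level = level
        self.children = None       # None => leaf cell

    def split(self):
        """Split this leaf into 4 daughter cells of equal size."""
        h = self.size / 2.0
        self.children = [
            Cell(self.x + dx * h, self.y + dy * h, h, self.level + 1)
            for dy in (0, 1) for dx in (0, 1)
        ]
        return self.children

root = Cell(0.0, 0.0, 1.0)
daughters = root.split()       # 4 cells of size 0.5 at level 1
```

In the actual method each such cell additionally carries its 5 staggered nodes (2 horizontal velocities, 2 vertical velocities, 1 pressure), which this sketch omits.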
Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows
Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.
2017-12-01
Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden of requiring such high-resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.
3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks
International Nuclear Information System (INIS)
Samtaney, S.; Jardin, S.C.; Colella, P.; Martin, D.F.
2003-01-01
We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises the single-fluid MHD equations with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.
On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields
Energy Technology Data Exchange (ETDEWEB)
Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.
2011-06-27
Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms
Energy Technology Data Exchange (ETDEWEB)
Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak
2006-01-31
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures has become available that promises to achieve extremely high sustained performance for a wide range of applications, and these are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.
Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement
Shervani-Tabar, Navid; Vasilyev, Oleg V.
2016-11-01
This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus, preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
Direct numerical simulation of bubbles with adaptive mesh refinement with distributed algorithms
International Nuclear Information System (INIS)
Talpaert, Arthur
2017-01-01
This PhD work presents the implementation of the simulation of two-phase flows in conditions of water-cooled nuclear reactors, at the scale of individual bubbles. To achieve this, we study several models for thermal-hydraulic flows and focus on a technique for capturing the thin interface between liquid and vapour phases. We review possible techniques for Adaptive Mesh Refinement (AMR) and provide algorithmic and computational tools adapted to patch-based AMR, whose aim is to locally improve the precision in regions of interest. More precisely, we introduce a patch-covering algorithm designed with balanced parallel computing in mind. This approach lets us finely capture changes located at the interface, as we show for advection test cases as well as for models with hyperbolic-elliptic coupling. The computations we present also include the simulation of the incompressible Navier-Stokes system, which models the shape changes of the interface between two non-miscible fluids. (author) [fr
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
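The block-local refinement described above, where one to four child blocks are spawned independently as the indicator triggers, might look schematically like this (a toy sketch with hypothetical names, not the AMR-MLBFS implementation):

```python
# Spawn between 0 and 4 child blocks of a square parent block,
# deciding each quadrant independently from a refinement indicator
# (e.g., proximity of the quadrant centre to the phase interface).
def spawn_children(block_origin, block_size, indicator):
    """indicator(cx, cy) -> bool: refine the quadrant centred at (cx, cy)?"""
    x0, y0 = block_origin
    h = block_size / 2.0
    children = []
    for qy in (0, 1):
        for qx in (0, 1):
            cx = x0 + qx * h + h / 2.0   # quadrant centre
            cy = y0 + qy * h + h / 2.0
            if indicator(cx, cy):
                children.append((x0 + qx * h, y0 + qy * h, h))
    return children  # anywhere from 0 to 4 child blocks

# Refine only quadrants whose centre lies left of an interface at x = 0.5:
kids = spawn_children((0.0, 0.0), 1.0, lambda cx, cy: cx < 0.5)
```

Spawning quadrants independently, rather than all four at once, is what lets the refined region hug the phase interface and keeps the total mesh size down.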
International Nuclear Information System (INIS)
Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.
2011-01-01
Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics: extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.
International Nuclear Information System (INIS)
Hummels, Cameron B.; Bryan, Greg L.
2012-01-01
We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find that a gas disk forms in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy to the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.
Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths
Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.
2018-04-01
We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
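The idea of a multipole acceptance criterion (MAC) mentioned above can be illustrated with a minimal monopole tree-walk using the simple geometric opening-angle criterion (accept a node when node_size/distance < theta). This is a sketch only; FLASH's tree solver and its approximate partial error MAC are considerably more elaborate:

```python
import math

# A tree node carries its centre of mass, total mass, and cube size.
class Node:
    def __init__(self, com, mass, size, children=()):
        self.com = com              # (x, y, z) centre of mass
        self.mass = mass
        self.size = size            # edge length of the node's cube
        self.children = list(children)

def potential(node, pos, theta=0.5):
    """Gravitational potential at pos (G = 1) via a MAC-guided tree-walk."""
    dx = [p - c for p, c in zip(pos, node.com)]
    r = math.sqrt(sum(d * d for d in dx)) or 1e-30
    # Accept the node (monopole approximation) if it is a leaf or
    # subtends a small enough opening angle; otherwise open it.
    if not node.children or node.size / r < theta:
        return -node.mass / r
    return sum(potential(child, pos, theta) for child in node.children)

leaves = [Node((0.4, 0.0, 0.0), 1.0, 0.1),
          Node((0.6, 0.0, 0.0), 1.0, 0.1)]
root = Node((0.5, 0.0, 0.0), 2.0, 1.0, leaves)
phi_far = potential(root, (100.0, 0.0, 0.0))   # root accepted: -2/99.5
phi_near = potential(root, (0.55, 0.0, 0.0))   # root opened, leaves summed
```

A smaller theta opens more nodes, trading computational cost for accuracy, which is exactly the trade-off the paper's MAC comparison quantifies.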
Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers
Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.
2014-01-01
This paper explores the implementation of the MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.
Rastigejev, Y.; Semakin, A. N.
2013-12-01
Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as adverse effect of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the described above difficulty we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses multi-grid iterative solver that naturally takes advantage of a multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical
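A one-dimensional cartoon of wavelet-based refinement flagging, in the spirit of the WAMR approach above, tags coarse cells whose detail coefficient exceeds a threshold eps. The function name and the use of a plain half-difference (a proxy for the Haar detail coefficient) are illustrative assumptions, not the actual WAMR algorithm:

```python
def tag_for_refinement(values, eps):
    """Return indices of coarse cells whose detail coefficient > eps.

    Adjacent fine-grid samples are paired; the half-difference across a
    pair plays the role of the Haar wavelet detail coefficient, so only
    regions with steep gradients (e.g., a pollution plume edge) are
    tagged for finer resolution.
    """
    tags = []
    for i in range(0, len(values) - 1, 2):
        detail = abs(values[i] - values[i + 1]) / 2.0
        if detail > eps:
            tags.append(i // 2)
    return tags

# A sharp front produces a large detail coefficient only locally,
# so only the middle coarse cell is flagged.
field = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
flagged = tag_for_refinement(field, 0.1)
```

Thresholding wavelet details like this is what lets the adaptive grid stay coarse over smooth regions while resolving plume edges at fine scales.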
Energy Technology Data Exchange (ETDEWEB)
De Colle, Fabio; Ramirez-Ruiz, Enrico [Astronomy and Astrophysics Department, University of California, Santa Cruz, CA 95064 (United States); Granot, Jonathan [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Lopez-Camara, Diego, E-mail: fabio@ucolick.org [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Ap. 70-543, 04510 D.F. (Mexico)
2012-02-20
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^{-k}, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the
De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico
2012-02-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
Donmez, Orhan
We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) and model an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The general relativistic hydrodynamic equations are solved numerically using High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D, and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. For this purpose, we couple the flux part of the general relativistic hydrodynamic equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply this code to accretion disk problems around the black hole using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, shock waves are created which destroy the accretion disk.
Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.
We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.
Ying, Wenjun; Henriquez, Craig S
2015-01-01
An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating efficiency and accuracy of the adaptive algorithm are presented.
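The splitting described above can be sketched in a few lines. The following is a minimal, hypothetical 1D illustration (not the authors' code): a backward-Euler step for the linear diffusion part, a pointwise backward-Euler step with Newton iteration for the nonlinear reaction part (a cubic Nagumo-type term is assumed here for concreteness), combined in Strang order.

```python
import numpy as np

def implicit_diffusion(v, dt, dx, D):
    """One backward-Euler step of v_t = D v_xx on a 1D cable
    with no-flux ends, solved as a linear system."""
    n = v.size
    r = D * dt / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    # no-flux (Neumann) boundaries: mirror the ghost cells
    A[0, 0] = 1.0 + r
    A[-1, -1] = 1.0 + r
    return np.linalg.solve(A, v)

def implicit_reaction(v, dt):
    """Backward-Euler step of the pointwise reaction v_t = f(v),
    here an assumed cubic nonlinearity, via Newton iteration."""
    f = lambda u: u * (1.0 - u) * (u - 0.1)
    df = lambda u: -3.0 * u**2 + 2.2 * u - 0.1
    u = v.copy()
    for _ in range(20):
        u = u - (u - v - dt * f(u)) / (1.0 - dt * df(u))
    return u

def strang_step(v, dt, dx, D):
    """Strang splitting: half reaction, full diffusion, half reaction."""
    v = implicit_reaction(v, dt / 2)
    v = implicit_diffusion(v, dt, dx, D)
    v = implicit_reaction(v, dt / 2)
    return v
```

Because both sub-steps are implicit, the combined step inherits their stability, which is what permits the larger timesteps mentioned in the abstract.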
A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms
Hasbestan, Jaber J.; Senocak, Inanc
2017-12-01
Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three-dimensional visualization. Lewiner et al. [1] provide a concise review of pointerless approaches to generate an octree. The use of a hash table and a Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
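The two key concepts mentioned in the abstract can be illustrated compactly. In the sketch below (an assumption-laden illustration, not the authors' implementation), a Z-order (Morton) key is built by interleaving the bits of the integer cell coordinates, parent/child relations reduce to bit shifts, and the "tree" becomes nothing more than a hash set of keys:

```python
def morton3(x, y, z, level):
    """Interleave the bits of integer cell coordinates (x, y, z)
    at a given refinement level into a Z-order (Morton) key."""
    key = 0
    for b in range(level):
        key |= ((x >> b) & 1) << (3 * b)
        key |= ((y >> b) & 1) << (3 * b + 1)
        key |= ((z >> b) & 1) << (3 * b + 2)
    # prepend a level marker so keys from different levels never collide
    return key | (1 << (3 * level))

def parent(key):
    """Parent of an octree node: drop the three lowest interleaved bits."""
    return key >> 3

def children(key):
    """The eight children of an octree node."""
    return [(key << 3) | i for i in range(8)]

# A pointerless octree is then just a hash set of Morton keys:
tree = {morton3(0, 0, 0, 1)}                  # a root-level cell
tree |= set(children(morton3(0, 0, 0, 1)))    # refine it once
```

No pointers are stored: any node, its parent, and its children are reachable in O(1) via key arithmetic and hash lookups, which is the memory and traversal advantage the abstract attributes to pointerless methods.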
Semplice, Matteo; Loubère, Raphaël
2018-02-01
In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
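The detection step can be sketched generically. The routine below (a hedged illustration with assumed tolerances, not the paper's detector) flags cells of a candidate solution that contain NaNs, lose positivity, or create new extrema relative to the previous solution's local neighbourhood; flagged cells would then be recomputed with the more dissipative first-order scheme:

```python
import numpy as np

def detect_troubled(candidate, previous, eps=1e-12):
    """Flag cells of a 1D candidate solution that violate a posteriori
    admissibility criteria: NaN/Inf values, loss of positivity (e.g. of
    density), or new extrema relative to the local neighbourhood."""
    flagged = np.zeros(candidate.size, dtype=bool)
    flagged |= ~np.isfinite(candidate)        # absence of NaNs required
    flagged |= candidate < eps                # positivity required
    # discrete maximum principle: new values should stay within the
    # min/max of the previous solution over the cell and its neighbours
    lo = np.minimum.reduce([np.roll(previous, 1), previous, np.roll(previous, -1)])
    hi = np.maximum.reduce([np.roll(previous, 1), previous, np.roll(previous, -1)])
    tol = 1e-8 * (hi - lo + eps)              # assumed relaxation tolerance
    flagged |= (candidate < lo - tol) | (candidate > hi + tol)
    return flagged
```

The essential property, as in the paper, is that the criteria are checked on the computed candidate rather than being enforced a priori by the reconstruction.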
Multigrid for refined triangle meshes
Energy Technology Data Exchange (ETDEWEB)
Shapira, Yair
1997-02-01
A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
Directory of Open Access Journals (Sweden)
Amaziane Brahim
2014-07-01
In this paper, we consider adaptive numerical simulation of miscible displacement problems in porous media, which are modeled by single phase flow equations. A vertex-centred finite volume method is employed to discretize the coupled system: the Darcy flow equation and the diffusion-convection concentration equation. The convection term is approximated with a Godunov scheme over the dual finite volume mesh, whereas the diffusion-dispersion term is discretized by piecewise linear conforming finite elements. We introduce two kinds of indicators, both of them of residual type. The first one is related to time discretization and is local with respect to the time discretization: thus, at each time, it provides appropriate information for the choice of the next time step. The second is related to space discretization and is local with respect to both the time and space variables: at each time it serves as an efficient tool for mesh adaptivity. An error estimation procedure evaluates where additional refinement is needed and grid generation procedures dynamically create or remove fine-grid patches as resolution requirements change. The method was implemented in the software MELODIE, developed by the French Institute for Radiological Protection and Nuclear Safety (IRSN, Institut de Radioprotection et de Sûreté Nucléaire). The algorithm is then used to simulate the evolution of radionuclide migration from the waste packages through a heterogeneous disposal, demonstrating its capability to capture complex behavior of the resulting flow.
Angelidis, Dionysios; Sotiropoulos, Fotis
2015-11-01
The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high-fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrain. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind-tunnel-scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.
2015-04-09
Adaptive mesh refinement for idealized tropical cyclone problems in a spectral element shallow water model
Hendricks, Eric A. (Marine Meteorology Division)
Mesh refinement is examined for idealized tropical cyclone (TC) simulations in a spectral element f-plane shallow water model. The SMR simulations have varying sizes of ...
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Energy Technology Data Exchange (ETDEWEB)
Miniatii, Francesco; Martin, Daniel
2011-05-24
We present the implementation of a three-dimensional, second order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise Parabolic Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout. Subject headings: cosmology: theory - methods: numerical
Deng, Xiaolong; Dong, Haibo
2017-11-01
Developing a high-fidelity, high-efficiency numerical method for bio-inspired flow problems with flow-structure interaction is important for understanding related physics and developing many bio-inspired technologies. To simulate a fast-swimming big fish with multiple finlets or fish schooling, we need fine grids and/or a big computational domain, which are big challenges for 3-D simulations. In the current work, based on the 3-D finite-difference sharp-interface immersed boundary method for incompressible flows (Mittal et al., JCP 2008), we developed an octree-like Adaptive Mesh Refinement (AMR) technique to enhance the computational ability and increase the computational efficiency. The AMR is coupled with a multigrid acceleration technique and an MPI+OpenMP hybrid parallelization. In this work, different AMR layers are treated separately; synchronization is performed in the buffer regions, and iterations are performed until the solution converges. Each big region is calculated by an MPI process which then uses multiple OpenMP threads for further acceleration, so that the communication cost is reduced. With these acceleration techniques, various canonical and bio-inspired flow problems with complex boundaries can be simulated accurately and efficiently. This work is supported by the MURI Grant Number N00014-14-1-0533 and NSF Grant CBET-1605434.
Energy Technology Data Exchange (ETDEWEB)
Rasia, Elena [Department of Physics, University of Michigan, 450 Church Street, Ann Arbor, MI 48109 (United States); Lau, Erwin T.; Nagai, Daisuke; Avestruz, Camille [Department of Physics, Yale University, New Haven, CT 06520 (United States); Borgani, Stefano [Dipartimento di Fisica dell' Università di Trieste, Sezione di Astronomia, via Tiepolo 11, I-34131 Trieste (Italy); Dolag, Klaus [University Observatory Munich, Scheiner-Str. 1, D-81679 Munich (Germany); Granato, Gian Luigi; Murante, Giuseppe; Ragone-Figueroa, Cinthia [INAF, Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131, Trieste (Italy); Mazzotta, Pasquale [Dipartimento di Fisica, Università di Roma Tor Vergata, via della Ricerca Scientifica, I-00133, Roma (Italy); Nelson, Kaylea, E-mail: rasia@umich.edu [Department of Astronomy, Yale University, New Haven, CT 06520 (United States)
2014-08-20
Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.
Mesh Adaptation and Shape Optimization on Unstructured Meshes, Phase I
National Aeronautics and Space Administration — In this SBIR CRM proposes to implement the entropy adjoint method for solution adaptive mesh refinement into the Loci/CHEM unstructured flow solver. The scheme will...
Autotuning of Adaptive Mesh Refinement PDE Solvers on Shared Memory Architectures
Nogina, Svetlana
2012-01-01
Many multithreaded, grid-based, dynamically adaptive solvers for partial differential equations permanently have to traverse subgrids (patches) of different and changing sizes. The parallel efficiency of this traversal depends on the interplay of the patch size, the architecture used, the operations triggered throughout the traversal, and the grain size, i.e. the size of the subtasks the patch is broken into. We propose an oracle mechanism delivering grain sizes on-the-fly. It takes historical runtime measurements for different patch and grain sizes as well as the traversal's operations into account, and it yields reasonable speedups. Neither magic configuration settings nor an expensive pre-tuning phase are necessary. It is an autotuning approach. © 2012 Springer-Verlag.
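A minimal sketch of such an oracle, under assumptions of our own (per-patch-size averaging of measured runtimes and a small exploration probability; the paper's mechanism is more elaborate), might look like this:

```python
import random
from collections import defaultdict

class GrainSizeOracle:
    """Sketch of an autotuning oracle: for each patch size it remembers
    the average runtime measured per candidate grain size and mostly
    returns the best one, occasionally exploring alternatives."""

    def __init__(self, candidates, explore=0.1):
        self.candidates = candidates
        self.explore = explore
        # history[patch_size][grain] -> [total_runtime, sample_count]
        self.history = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))

    def suggest(self, patch_size):
        """Deliver a grain size on-the-fly for the next traversal."""
        timings = self.history[patch_size]
        untried = [g for g in self.candidates if timings[g][1] == 0]
        if untried or random.random() < self.explore:
            return random.choice(untried or self.candidates)
        # exploit: pick the grain size with the best average runtime
        return min(self.candidates, key=lambda g: timings[g][0] / timings[g][1])

    def record(self, patch_size, grain, runtime):
        """Feed back a runtime measurement from a completed traversal."""
        entry = self.history[patch_size][grain]
        entry[0] += runtime
        entry[1] += 1
```

Measurements accumulate during normal execution, so no pre-tuning phase is needed, matching the abstract's claim.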
Curved mesh generation and mesh refinement using Lagrangian solid mechanics
Energy Technology Data Exchange (ETDEWEB)
Persson, P.-O.; Peraire, J.
2008-12-31
We propose a method for generating well-shaped curved unstructured meshes using a nonlinear elasticity analogy. The geometry of the domain to be meshed is represented as an elastic solid. The undeformed geometry is the initial mesh of linear triangular or tetrahedral elements. The external loading results from prescribing a boundary displacement to be that of the curved geometry, and the final configuration is determined by solving for the equilibrium configuration. The deformations are represented using piecewise polynomials within each element of the original mesh. When the mesh is sufficiently fine to resolve the solid deformation, this method guarantees non-intersecting elements even for highly distorted or anisotropic initial meshes. We describe the method and the solution procedures, and we show a number of examples of two and three dimensional simplex meshes with curved boundaries. We also demonstrate how to use the technique for local refinement of non-curved meshes in the presence of curved boundaries.
An Adaptive Mesh Algorithm: Mesh Structure and Generation
Energy Technology Data Exchange (ETDEWEB)
Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. The additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh that is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.
Advanced numerical methods in mesh generation and mesh adaptation
Energy Technology Data Exchange (ETDEWEB)
Lipnikov, Konstantine [Los Alamos National Laboratory; Danilov, A [MOSCOW, RUSSIA; Vassilevski, Y [MOSCOW, RUSSIA; Agonzal, A [UNIV OF LYON
2010-01-01
Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to build simplicial meshes efficiently. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge
Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar
2017-11-01
A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.
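The quadtree-based adaptation described above can be illustrated with a toy structure. The sketch below is an assumption-laden miniature (not the authors' code): cells split into four children wherever a user-supplied gradient-magnitude function exceeds a tolerance, mimicking refinement near a solid-liquid interface where concentration gradients are highest.

```python
class QuadCell:
    """A quadtree cell over [x, x+h] x [y, y+h]; leaves carry the field."""
    def __init__(self, x, y, h, level):
        self.x, self.y, self.h, self.level = x, y, h, level
        self.children = []

    def refine(self):
        """Split into four equal children (one quadtree level deeper)."""
        h2 = self.h / 2
        self.children = [QuadCell(self.x + i * h2, self.y + j * h2,
                                  h2, self.level + 1)
                         for i in (0, 1) for j in (0, 1)]

def adapt(cell, grad, refine_tol, max_level):
    """Recursively refine cells where the gradient magnitude (supplied
    by `grad`, a function of the cell centre) exceeds refine_tol,
    down to max_level -- e.g. near a solid-liquid interface."""
    g = grad(cell.x + cell.h / 2, cell.y + cell.h / 2)
    if not cell.children and g > refine_tol and cell.level < max_level:
        cell.refine()
    for c in cell.children:
        adapt(c, grad, refine_tol, max_level)

def leaves(cell):
    """Collect the leaf cells, i.e. the active computational mesh."""
    if not cell.children:
        return [cell]
    return [l for c in cell.children for l in leaves(c)]
```

Coarsening, as in the paper, would be the inverse operation: merging four sibling leaves when the local gradient drops below a second, lower threshold.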
Solution adaptive mesh using moving mesh method
International Nuclear Information System (INIS)
Tilak, A.S.; Tong, A.Y.; Liao, G.
2004-01-01
This work deals with a mesh adaptation strategy to enhance the accuracy of numerical solutions of partial differential equations. This was achieved economically by employing the Moving Grid Finite Difference Method. The method was reformulated as a first-order div-curl system. This system was then solved using the Least Square Finite Element Method (LSFEM). The reformulation of the method has two desirable effects. Firstly, it eliminates the expensive gradient computation in the original method, and secondly, it allows the method to be employed for mesh adaptation with dynamic boundaries. A 2-D general finite element code implementing the mesh adaptation method based on LSFEM, capable of analyzing self-adjoint problems in elasticity and heat transfer with a variety of boundary conditions, sources or sinks, was developed and thoroughly validated. The code was used to analyze and adapt the mesh for problems in heat transfer and elasticity. The method was found to perform satisfactorily in all test cases. (author)
International Nuclear Information System (INIS)
Torej, Allen J.; Rizwan-Uddin
2001-01-01
The nodal integral method (NIM) has been developed for several problems, including the Navier-Stokes equations, the convection-diffusion equation, and the multigroup neutron diffusion equations. The coarse-mesh efficiency of the NIM is not fully realized in problems characterized by a wide range of spatial scales. However, the combination of adaptive mesh refinement (AMR) capability with the NIM can recover the coarse mesh efficiency by allowing high degrees of resolution in specific localized areas where it is needed and by using a lower resolution everywhere else. Furthermore, certain features of the NIM can be fruitfully exploited in the application of the AMR process. In this paper, we outline a general approach to couple nodal schemes with AMR and then apply it to the convection-diffusion (energy) equation. The development of the NIM with AMR capability (NIM-AMR) is based on the well-known Berger-Oliger method for structured AMR. In general, the main components of all AMR schemes are 1. the solver; 2. the level-grid hierarchy; 3. the selection algorithm; 4. the communication procedures; 5. the governing algorithm. The first component, the solver, consists of the numerical scheme for the governing partial differential equations and the algorithm used to solve the resulting system of discrete algebraic equations. In the case of the NIM-AMR, the solver is the iterative approach to the solution of the set of discrete equations obtained by applying the NIM. Furthermore, in the NIM-AMR, the level-grid hierarchy (the second component) is based on the Hierarchical Adaptive Mesh Refinement (HAMR) system [6], and hence, the details of the hierarchy are omitted here. In the selection algorithm, regions of the domain that require mesh refinement are identified. The criterion to select regions for mesh refinement can be based on the magnitude of the gradient or on the Richardson truncation error estimate. Although an excellent choice for the selection criterion, the Richardson
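The Richardson truncation error estimate mentioned as a selection criterion can be sketched generically. In the Berger-Oliger style estimator below (a hedged illustration with an assumed 1D solver interface, not tied to the NIM), the solution is advanced two steps on the current grid and one double-size step on a 2x coarsened grid; the scaled difference approximates the local truncation error and cells exceeding a tolerance are flagged for refinement:

```python
import numpy as np

def richardson_flags(step, u, dt, order, tol):
    """Berger-Oliger-style local truncation error estimate for a 1D
    solution array: compare two fine steps against one coarse step.
    `step(u, dt)` advances the solution by one time step of size dt;
    `order` is the scheme's formal order of accuracy."""
    fine = step(step(u, dt), dt)            # two steps on the fine grid
    coarse = step(u[::2], 2 * dt)           # one step on a 2x coarser grid
    # interpolate the coarse result back onto the fine grid
    x_f = np.arange(u.size, dtype=float)
    coarse_on_fine = np.interp(x_f, x_f[::2], coarse)
    # Richardson scaling: the difference overestimates the one-step
    # error by a factor 2^(order+1) - 2 for an order-p scheme
    err = np.abs(fine - coarse_on_fine) / (2 ** (order + 1) - 2)
    return err > tol                        # cells flagged for refinement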
DEFF Research Database (Denmark)
Nicholas, Paul; Stasiuk, David; Nørgaard, Esben
2015-01-01
This paper describes the development of a modelling approach for the design and fabrication of an incrementally formed, stressed skin metal structure. The term incremental forming refers to a progression of localised plastic deformation to impart 3D form onto a 2D metal sheet, directly from 3D...... design data. A brief introduction presents this fabrication concept, as well as the context of structures whose skin plays a significant structural role. Existing research into ISF privileges either the control of forming parameters to minimise geometric deviation, or the more accurate measurement...... of the impact of the forming process at the scale of the grain. But to enhance structural performance for architectural applications requires that both aspects are considered synthetically. We demonstrate a mesh-based approach that incorporates critical parameters at the scales of structure, element...
Energy Technology Data Exchange (ETDEWEB)
Schartmann, M.; Ballone, A.; Burkert, A. [Universitäts-Sternwarte München, Scheinerstraße 1, D-81679 München (Germany); Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M., E-mail: mschartmann@swin.edu.au [Max-Planck-Institut für extraterrestrische Physik, Postfach 1312, Giessenbachstr., D-85741 Garching (Germany)
2015-10-01
The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and perform a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high-resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position-velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apocenter and well within the disks of young stars.
Directory of Open Access Journals (Sweden)
S Kaennakham
2016-09-01
The interaction between discretization error and modeling error has led to some doubts about adopting Solution Adaptive Grid (SAG) strategies with LES. Existing SAG approaches contain undesired aspects that make them complicated and less convenient to apply to real engineering applications. In this work, a new refinement algorithm is proposed, aiming to enhance the efficiency of the SAG methodology in terms of simplicity of definition, reduced reliance on user judgment, suitability for standard Smagorinsky LES, and computational affordability. The construction of a new refinement variable as a function of the Taylor scale, corresponding to the kinetic energy balance requirement of the Smagorinsky SGS model, is presented. The numerical study has been carried out for a turbulent plane jet in two dimensions. It is found that result quality can be effectively improved, with a significant reduction in CPU time compared to fixed-grid cases.
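A refinement test built on the Taylor scale can be sketched as follows. The abstract does not give the paper's exact refinement variable; the sketch below uses the standard isotropic-turbulence definition of the Taylor microscale, λ = sqrt(10 ν k / ε), and the function names and the size-ratio criterion are assumptions:

```python
import math

def taylor_scale(nu, k, eps):
    """Taylor microscale, lambda = sqrt(10 * nu * k / eps)
    (standard isotropic-turbulence estimate from viscosity nu,
    turbulent kinetic energy k and dissipation rate eps)."""
    return math.sqrt(10.0 * nu * k / eps)

def needs_refinement(cell_size, nu, k, eps, ratio=1.0):
    """Refine where the local grid spacing exceeds the Taylor scale,
    i.e. where the mesh is too coarse relative to the scales the
    Smagorinsky SGS model assumes to be resolved."""
    return cell_size > ratio * taylor_scale(nu, k, eps)
```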
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG, since refinement occurs in a more load-balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Trajectory Optimization Based on Multi-Interval Mesh Refinement Method
Directory of Open Access Journals (Sweden)
Ningbo Li
2017-01-01
In order to improve the optimization accuracy and convergence rate for trajectory optimization of an air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method is introduced. This method makes the mesh endpoints converge to the practical nonsmooth points and decreases the overall number of collocation points, improving the convergence rate and computational efficiency. The trajectory is divided into four phases according to the working time of the engine and the handover from midcourse to terminal guidance, and the optimization model is then built. The multi-interval mesh refinement Radau pseudospectral method, with different numbers of collocation points in each mesh interval, is used to solve the trajectory optimization model, and is compared with the traditional h-method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.
Mesh Generation via Local Bisection Refinement of Triangulated Grids
2015-06-01
DSTO–TR–3095. This report provides a comprehensive implementation of an unstructured mesh generation method… their behaviour is critically linked to Maubach's method and the data structures N and T. The top-level mesh refinement algorithm is also presented.
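Local bisection refinement in the newest-vertex style associated with Maubach's method can be sketched for 2-D triangles as follows; the vertex-ordering convention (refinement edge opposite vertex 0) is an illustrative assumption, not the report's data structures N and T:

```python
def bisect(tri):
    """Bisect a triangle across its refinement edge.

    A triangle is a tuple of three (x, y) vertices, ordered so that
    the edge between vertices 1 and 2 (opposite vertex 0) is the
    refinement edge, as in newest-vertex bisection.
    """
    v0, v1, v2 = tri
    mid = ((v1[0] + v2[0]) / 2.0, (v1[1] + v2[1]) / 2.0)
    # The new midpoint becomes vertex 0 ("newest vertex") of both
    # children, which fixes their next refinement edge automatically
    # and guarantees a shape-regular family of triangles.
    return (mid, v0, v1), (mid, v2, v0)
```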
Energy Technology Data Exchange (ETDEWEB)
Core, X.
2002-02-01
The isobar approximation of the system of balance equations of mass, momentum, energy and chemical species is a suitable approximation for representing low-Mach-number reactive flows. In this approximation, which neglects acoustic phenomena, the mixture is hydrodynamically incompressible and the thermodynamic effects lead to a uniform compression of the system. We present a novel numerical scheme for this approximation. An incremental projection method, which uses the original form of the mass balance equation, discretizes the Navier-Stokes equations in time. Spatial discretization is achieved through a finite volume approach on a MAC-type staggered mesh. A higher-order upwind scheme is used to compute the convective fluxes. With this discretization we associate a local mesh refinement method based on the Flux Interface Correction technique. A first application concerns a forced flow with variable density which mimics a combustion problem. The second application is natural convection, first with small temperature variations and then beyond the limit of validity of the Boussinesq approximation. Finally, we treat a third application, a laminar diffusion flame. For each of these test problems, we demonstrate the robustness of the proposed numerical scheme, notably with respect to spatial variations of density. We analyze the gain in accuracy obtained with the local mesh refinement method. (author)
Parallel adaptive simulations on unstructured meshes
International Nuclear Information System (INIS)
Shephard, M S; Jansen, K E; Sahni, O; Diachin, L A
2007-01-01
This paper discusses methods being developed by the ITAPS center to support the execution of parallel adaptive simulations on unstructured meshes. The paper first outlines the ITAPS approach to the development of interoperable mesh, geometry and field services to support the needs of SciDAC applications in these areas. The paper then demonstrates the ability of unstructured adaptive meshing methods built on such interoperable services to effectively solve important physics problems. Attention is then focused on ITAPS's developing ability to solve adaptive unstructured mesh problems on massively parallel computers.
Unstructured mesh adaptivity for urban flooding modelling
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying fronts while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes are placed around buildings and steep regions when the flood water reaches them. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at
Mesh refinement of riser simulation with the aid of gamma transmission
International Nuclear Information System (INIS)
Lima Filho, Hilario J.B. de; Benachour, Mohand; Dantas, Carlos C.; Brito, Marcio F.P.; Santos, Valdemir A. dos
2013-01-01
Circulating fluidized bed reactors (CFBR) of the vertical type, in which the particulate and gaseous phases flow upward (riser), have been widely used in gasification, combustion and fluid catalytic cracking (FCC) processes. The efficiency of these two-phase (gas-solid) reactors depends largely on their hydrodynamic characteristics, which show different behaviors in the axial and radial directions. The axial distribution of solids shows a higher concentration at the base, becoming more dilute toward the top. Radially, the solids concentration is characterized as core-annular, in which the central region is highly dilute, consisting of dispersed particles and fluid. In the present work, a two-dimensional (2D) geometry was developed through computational fluid dynamics (CFD) simulations to predict the gas-solid flow in the riser of a CFBR through transient modeling based on the kinetic theory of granular flow. The refinement of computational meshes provides larger amounts of information on the parameters studied, but may increase the processing time of the simulations. A minimum number of cells for the mesh construction was obtained by testing five meshes. The validation of the hydrodynamic parameters was performed using a 241Am gamma source and a NaI(Tl) detector. The numerical results were consistent with the experimental data, indicating that refining the computational mesh in a controlled manner improves the approximation to the expected results. (author)
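The mesh-independence test described above (finding a minimum cell count by comparing successively finer meshes) can be sketched as follows; the function name, the relative tolerance and the stopping rule are illustrative assumptions, not the authors' procedure:

```python
def minimum_mesh(results, tol=0.01):
    """Pick the coarsest adequate mesh from a grid-convergence study.

    results: list of (n_cells, monitored_value) pairs sorted by
    n_cells. Returns the first cell count whose monitored value
    differs by less than `tol` (relative) from the next finer mesh;
    falls back to the finest mesh tested.
    """
    for (n, v), (_, v_fine) in zip(results, results[1:]):
        if abs(v - v_fine) / abs(v_fine) < tol:
            return n
    return results[-1][0]
```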
Floating shock fitting via Lagrangian adaptive meshes
Vanrosendale, John
1995-01-01
In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe-scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.
Anisotropic mesh adaptation for marine ice-sheet modelling
Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier
2017-04-01
Improving forecasts of the ice-sheet contribution to sea-level rise requires, amongst others, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component for obtaining reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target in the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method, as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and it has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables makes it possible to reduce the number of mesh nodes by more than one order of magnitude, for the same numerical accuracy, compared to a uniform mesh
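The way an interpolation-error-based metric is built and combined for several variables can be illustrated with a minimal diagonal-metric sketch. The equidistribution argument (linear interpolation error scales like h² |u''|), the function names, and the restriction to diagonal metrics are assumptions for illustration; this is not Elmer/Ice or MMG code:

```python
def metric_from_error(hess_diag, eps):
    """Diagonal metric eigenvalues from (diagonal) Hessian entries:
    equidistributing an interpolation error eps over edges of length
    h, with error ~ h^2 |H|, gives lambda_i = |H_ii| / eps."""
    return [abs(h) / eps for h in hess_diag]

def intersect(m1, m2):
    """Metric intersection for diagonal metrics: keep the most
    restrictive (largest) eigenvalue per direction. This is how
    metrics for different variables are combined."""
    return [max(a, b) for a, b in zip(m1, m2)]

def mesh_size(metric):
    """Target edge length per principal direction: h_i = lambda_i^(-1/2)."""
    return [lam ** -0.5 for lam in metric]
```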
Mesh refinement for uncertainty quantification through model reduction
International Nuclear Information System (INIS)
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space into smaller elements where a lower-degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process, since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random-space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random-space element, the cascade of activity to higher-degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are most needed. For the Kraichnan–Orszag system, the prototypical system for studying discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
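The refinement decision sketched in this abstract (monitoring the cascade of activity to higher-degree chaos terms within a random-space element) can be illustrated as follows. This is a generic sketch with hypothetical names; the paper's criterion monitors the cascade through a reduced model rather than through the raw coefficients:

```python
def tail_energy_fraction(coeffs, cutoff):
    """Fraction of spectral energy carried by polynomial-chaos
    coefficients of degree >= cutoff (mean term excluded). A large
    fraction signals slow convergence, e.g. a discontinuity, inside
    the current random-space element."""
    total = sum(c * c for c in coeffs[1:])  # exclude the mean
    tail = sum(c * c for c in coeffs[cutoff:])
    return tail / total if total > 0 else 0.0

def split_element(a, b):
    """Bisect the 1-D random-space element [a, b]; on each child a
    lower-degree expansion is usually adequate."""
    mid = 0.5 * (a + b)
    return (a, mid), (mid, b)
```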
Evolving a puncture black hole with fixed mesh refinement
International Nuclear Information System (INIS)
Imbiriba, Breno; Baker, John; Centrella, Joan; Meter, James R. van; Choi, Dae-Il; Fiske, David R.; Brown, J. David; Olson, Kevin
2004-01-01
We present an algorithm for treating mesh refinement interfaces in numerical relativity. We discuss the behavior of the solution near such interfaces located in the strong-field regions of dynamical black hole spacetimes, with particular attention to the convergence properties of the simulations. In our applications of this technique to the evolution of puncture initial data with vanishing shift, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult and wave extraction is meaningful
Local mesh refinement for incompressible fluid flow with free surfaces
Energy Technology Data Exchange (ETDEWEB)
Terasaka, H.; Kajiwara, H.; Ogura, K. [Tokyo Electric Power Company (Japan)] [and others]
1995-09-01
A new local mesh refinement (LMR) technique has been developed and applied to incompressible fluid flows with free surface boundaries. The LMR method embeds patches of fine grid in arbitrary regions of interest. Hence, more accurate solutions can be obtained with a lower number of computational cells. This method is very suitable for the simulation of free surface movements because free surface flow problems generally require a finer computational grid to obtain adequate results. By using this technique, one can place finer grids only near the surfaces, and therefore greatly reduce the total number of cells and computational costs. This paper introduces LMR3D, a three-dimensional incompressible flow analysis code. Numerical examples calculated with the code demonstrate well the advantages of the LMR method.
International Nuclear Information System (INIS)
Liu, Hao
2016-01-01
This Ph.D. work takes place within the framework of studies on Pellet-Cladding mechanical Interaction (PCI), which occurs in the fuel rods of pressurized water reactors. This manuscript focuses on automatic mesh refinement to simulate this phenomenon more accurately while maintaining acceptable computational time and memory space for industrial calculations. An automatic mesh refinement strategy based on the combination of the Local Defect Correction (LDC) multigrid method with the Zienkiewicz and Zhu a posteriori error estimator is proposed. The estimated error is used to detect the zones to be refined, where the local sub-grids of the LDC method are generated. Several stopping criteria are studied to end the refinement process when the solution is accurate enough or when the refinement no longer improves the global solution accuracy. Numerical results for elastic 2D test cases with pressure discontinuity show the efficiency of the proposed strategy. Automatic mesh refinement for unilateral contact problems is then considered. The strategy previously introduced can be easily adapted to multi-body refinement by estimating the solution error on each body separately. Post-processing is often necessary to ensure the conformity of the refined areas with respect to the contact boundaries. A variety of numerical experiments with elastic contact (with or without friction, with or without an initial gap) confirms the efficiency and adaptability of the proposed strategy. (author) [fr
Error sensitivity to refinement: a criterion for optimal grid adaptation
Luchini, Paolo; Giannetti, Flavio; Citro, Vincenzo
2017-12-01
Most indicators used for automatic grid refinement are suboptimal, in the sense that they do not really minimize the global solution error. This paper is concerned with a new indicator, related to the sensitivity map of global stability problems, suitable for an optimal grid refinement that minimizes the global solution error. The new criterion is derived from the properties of the adjoint operator and provides a map of the sensitivity of the global error (or its estimate) to a local mesh refinement. Examples are presented for both a scalar partial differential equation and for the system of Navier-Stokes equations. In the latter case, we also present a grid-adaptation algorithm, based on the new estimator and on the FreeFem++ software, that improves the accuracy of the solution by almost two orders of magnitude by redistributing the nodes of the initial computational mesh.
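The general idea of weighting a local error measure by an adjoint sensitivity can be illustrated with a minimal dual-weighted indicator; this is a generic sketch of the adjoint-weighting idea, with hypothetical names, not the paper's exact estimator:

```python
def refinement_sensitivity(residual, adjoint):
    """Dual-weighted indicator: the local residual weighted by the
    adjoint (sensitivity) solution estimates how much each cell
    contributes to the error in the global quantity of interest."""
    return [abs(r * z) for r, z in zip(residual, adjoint)]

def cells_to_refine(indicator, frac=0.2):
    """Select the fraction of cells with the largest indicator
    values (a simple fixed-fraction marking strategy)."""
    n = max(1, int(frac * len(indicator)))
    order = sorted(range(len(indicator)),
                   key=indicator.__getitem__, reverse=True)
    return sorted(order[:n])
```

Note how a cell with a modest residual but a large adjoint value can outrank a cell with a large residual in an insensitive region, which is exactly what residual-only indicators miss.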
Local multigrid mesh refinement in view of nuclear fuel 3D modelling in pressurised water reactors
International Nuclear Information System (INIS)
Barbie, L.
2013-01-01
The aim of this study is to improve the performance, in terms of memory space and computational time, of the current modelling of Pellet-Cladding mechanical Interaction (PCI), a complex phenomenon which may occur during high power rises in pressurised water reactors. Among mesh refinement methods - methods dedicated to the efficient treatment of local singularities - a local multigrid approach was selected because it enables the use of a black-box solver while handling few degrees of freedom at each level. The Local Defect Correction (LDC) method, well suited to a finite element discretization, was first analysed and verified in linear elasticity, on configurations resulting from the PCI, since its use in solid mechanics is not widespread. Various strategies concerning the implementation of the multilevel algorithm were also compared. Coupling the LDC method with the Zienkiewicz-Zhu a posteriori error estimator, in order to automatically detect the zones to be refined, was then tested. The performance obtained on two-dimensional and three-dimensional cases is very satisfactory, since the proposed algorithm is more efficient than h-adaptive refinement methods. Lastly, the LDC algorithm was extended to nonlinear mechanics. Space/time refinement as well as transmission of the initial conditions during the re-meshing step were investigated. The first results obtained are encouraging and show the interest of using the LDC method for PCI modelling. (author) [fr
Unstructured Adaptive Meshes: Bad for Your Memory?
Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob
2003-01-01
This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.
Development and verification of unstructured adaptive mesh technique with edge compatibility
International Nuclear Information System (INIS)
Ito, Kei; Ohshima, Hiroyuki; Kunugi, Tomoaki
2010-01-01
In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is the suppression of gas entrainment (GE) phenomena at the gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have been developing an important technique for the simulation of the GE phenomena in JSFR: an unstructured adaptive mesh technique which can apply fine cells dynamically to the region where the GE occurs in JSFR. In this paper, as a part of this development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional adaptive mesh technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The two-dimensional unstructured adaptive mesh technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the two-dimensional unstructured adaptive mesh technique succeeds in providing a high-precision solution, even though a poor-quality, distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on a structured mesh with a larger number of cells. (author)
Automatic off-body overset adaptive Cartesian mesh method based on an octree approach
International Nuclear Information System (INIS)
Péron, Stéphanie; Benoit, Christophe
2013-01-01
This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, that automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture accurately the flow features.
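The octree-to-Cartesian-blocks idea described above can be illustrated in two dimensions with a quadtree, where each leaf becomes one structured block; function names and the refinement test are assumptions for illustration, not the paper's algorithm:

```python
def build_quadtree(box, needs_refine, depth, max_depth):
    """Recursively subdivide box = (x0, y0, x1, y1) and return the
    leaf boxes; each leaf would define one structured Cartesian
    block (2-D analogue of the octree leaf nodes).

    needs_refine(box, depth) is the caller-supplied refinement
    indicator; max_depth bounds the recursion.
    """
    if depth == max_depth or not needs_refine(box, depth):
        return [box]
    x0, y0, x1, y1 = box
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    leaves = []
    for child in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                  (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        leaves += build_quadtree(child, needs_refine, depth + 1, max_depth)
    return leaves
```

Controlling the point count per adaptation, as the paper does, would amount to tuning the indicator threshold until the leaf count matches a budget.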
Large Eddy Simulation for round jet in cross-flow using Local Mesh Refinement
Cevheri, Mehtap; Stoesser, Thorsten
2013-11-01
The aim of this research is the simulation of near field multi-phase plumes in cross-flows to understand the physical processes of oil spill in Gulf of Mexico. Since this is a multi-phase and multi-scale problem, a local mesh refinement (LMR) technique has been coupled to the multi-grid method to solve the unsteady, incompressible Navier-Stokes problem on a Cartesian grid with staggered variable arrangement. Wall-Adapting Local Eddy Viscosity (WALE) subgrid model has been used to simulate the turbulent flow. In this current study, the verification of the developed code will be presented before the simulation of multi-phase plumes. The accuracy of local mesh refinement and the subgrid model are presented with two test cases: moderate Reynolds number turbulent channel flow and a round turbulent jet into a laminar cross-flow. For the first test case, turbulence statistics for the fully developed turbulent flow are compared with the DNS data. For the second test case, a simulation with a 3.3 velocity ratio and 6930 jet Reynolds number is tested and compared with the experimental and other computational data.
Goal-oriented mesh adaptivity for multi-dimension SPN equations
International Nuclear Information System (INIS)
Turcksin, B.; Ragusa, J. C.
2009-01-01
We present a mesh adaptivity strategy that incorporates, via adjoint calculations, the relative importance of computational regions towards a quantity of interest, a technique known as goal-oriented mesh adaptivity. In this approach, the mesh refinement is still dictated by the error in the forward solution, but it is also weighted by the importance of the solution towards a given goal. Quantities of interest may include reaction rates in a subdomain, pointwise flux values, etc. We have implemented goal-oriented mesh adaptivity for the multi-dimensional (2-D and 3-D) SPN equations and propose some numerical results demonstrating the superiority of the goal-oriented approach over the standard adaptivity technique. (authors)
Applications of automatic mesh generation and adaptive methods in computational medicine
Energy Technology Data Exchange (ETDEWEB)
Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)
1995-12-31
Important problems exist in computational medicine that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state-of-the-art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications, we present a general-purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
A simple nodal force distribution method in refined finite element meshes
International Nuclear Information System (INIS)
Park, Jai Hak; Shin, Kyu In; Lee, Dong Won; Cho, Seungyon
2017-01-01
In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to accurately define the geometry. After mesh refinement, equivalent nodal forces should be calculated at the nodes of the refined mesh. If field variables and material properties are available at the integration points of each element, then accurate equivalent nodal forces can be calculated using an adequate numerical integration. In certain circumstances, however, equivalent nodal forces cannot be calculated because field variable data are not available. In this study, a very simple nodal force distribution method is proposed. Nodal forces of the original finite element mesh are distributed to the nodes of the refined mesh so as to satisfy the equilibrium conditions. The effect of element size is also considered in determining the magnitudes of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify its accuracy and effectiveness. The results confirm that an accurate stress field can be obtained from refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6% in models with 8-node hexahedral elements, and less than 1% in models with 20-node hexahedral elements or 10-node tetrahedral elements.
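As a one-dimensional illustration of the idea, the sketch below distributes each coarse nodal force onto the two bracketing nodes of the refined mesh using lever-rule weights, which preserves both the total force and the total moment. This is only a minimal sketch: the function name and the linear weighting are assumptions for illustration, and the paper's element-size weighting is not reproduced.

```python
def redistribute_forces(coarse_nodes, forces, fine_nodes):
    # spread each coarse nodal force onto the two bracketing fine nodes with
    # linear (lever-rule) weights, preserving total force and total moment
    fine_forces = [0.0] * len(fine_nodes)
    for xc, f in zip(coarse_nodes, forces):
        for k in range(len(fine_nodes) - 1):
            a, b = fine_nodes[k], fine_nodes[k + 1]
            if a <= xc <= b:
                t = 0.0 if b == a else (xc - a) / (b - a)
                fine_forces[k] += (1.0 - t) * f
                fine_forces[k + 1] += t * f
                break
    return fine_forces
```

When a coarse node coincides with a fine node, its force transfers unchanged; forces at new midside nodes appear only for coarse nodes that fall strictly inside a fine cell.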
Energy Technology Data Exchange (ETDEWEB)
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, to demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes in terms of mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.
2017-04-01
In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in the relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100 times more accurate than uniform refinement for the same amount of computational effort on a 67-group deep penetration shielding problem.
Directory of Open Access Journals (Sweden)
Ralf Deiterding
2011-01-01
Numerical simulation can be key to understanding the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding, as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular, dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded, geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends, are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.
Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport
International Nuclear Information System (INIS)
Howell, L H
2004-01-01
Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and to ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on the implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, the accuracy of the AMR solution, and the scalar speedup of the AMR algorithm compared to a uniform, fully refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, in both single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common.
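The 2:1 relationship between neighboring refinement levels that such block-structured frameworks maintain can be sketched in one dimension. This is a hypothetical illustration, not BoxLib or Raptor code: leaves of a binary interval tree over the unit domain are refined until no two adjacent cells differ by more than one level.

```python
from fractions import Fraction

def span(leaf):
    # leaf = (level, index); returns its exact interval on the unit domain
    level, i = leaf
    h = Fraction(1, 2 ** level)
    return i * h, (i + 1) * h

def refine(leaves, leaf):
    # replace a leaf cell by its two children one level finer
    level, i = leaf
    leaves.remove(leaf)
    leaves.add((level + 1, 2 * i))
    leaves.add((level + 1, 2 * i + 1))

def adjacent(a, b):
    (a0, a1), (b0, b1) = span(a), span(b)
    return a1 == b0 or b1 == a0

def enforce_2to1(leaves):
    # repeatedly refine any leaf whose neighbor is more than one level finer
    changed = True
    while changed:
        changed = False
        for a in list(leaves):
            for b in list(leaves):
                if (a in leaves and b in leaves and adjacent(a, b)
                        and a[0] - b[0] > 1):
                    refine(leaves, b)  # b is the coarser of the pair
                    changed = True
    return leaves
```

The quadratic neighbor search keeps the sketch short; production frameworks use the grid hierarchy itself to find neighbors.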
Adaptive-mesh zoning by the equipotential method
Energy Technology Data Exchange (ETDEWEB)
Winslow, A.M.
1981-04-01
An adaptive mesh method is proposed for the numerical solution of differential equations which causes the mesh lines to move closer together in regions where higher resolution in some physical quantity T is desired. A coefficient D > 0 is introduced into the equipotential zoning equations, where D depends on the gradient of T. The equations are inverted, leading to nonlinear elliptic equations for the mesh coordinates with source terms which depend on the gradient of D. A functional form of D is proposed.
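A minimal one-dimensional analogue of this clustering mechanism can be sketched as a weighted relaxation: in steady state each cell size becomes proportional to 1/D, so nodes drift toward regions where D (and hence the gradient of T) is large. The weight function and iteration count below are assumptions for illustration, not Winslow's formulation.

```python
import math

def relax_mesh(xs, D, iters=500):
    # Jacobi relaxation: each interior node moves to the D-weighted average
    # of its neighbors; at convergence the cell size is proportional to 1/D,
    # so mesh points concentrate where D is large
    for _ in range(iters):
        new = xs[:]
        for i in range(1, len(xs) - 1):
            wl = D(0.5 * (xs[i - 1] + xs[i]))   # weight on the left cell
            wr = D(0.5 * (xs[i] + xs[i + 1]))   # weight on the right cell
            new[i] = (wl * xs[i - 1] + wr * xs[i + 1]) / (wl + wr)
        xs = new
    return xs

# an assumed weight that is large near x = 0.5, mimicking a steep gradient in T
D = lambda x: 1.0 + 10.0 * math.exp(-50.0 * (x - 0.5) ** 2)
mesh = relax_mesh([i / 20 for i in range(21)], D)
```

Endpoints are held fixed, and the weighted average keeps each node strictly between its neighbors, so the mesh cannot tangle.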
AbouEisha, Hassan M.
2017-07-13
We consider a class of two-and three-dimensional h-refined meshes generated by an adaptive finite element method. We introduce an element partition tree, which controls the execution of the multi-frontal solver algorithm over these refined grids. We propose and study algorithms with polynomial computational cost for the optimization of these element partition trees. The trees provide an ordering for the elimination of unknowns. The algorithms automatically optimize the element partition trees using extensions of dynamic programming. The construction of the trees by the dynamic programming approach is expensive. These generated trees cannot be used in practice, but rather utilized as a learning tool to propose fast heuristic algorithms. In this first part of our paper we focus on the dynamic programming approach, and draw a sketch of the heuristic algorithm. The second part will be devoted to a more detailed analysis of the heuristic algorithm extended for the case of hp-adaptive
r-Adaptive mesh generation for shell finite element analysis
International Nuclear Information System (INIS)
Cho, Maenghyo; Jun, Seongki
2004-01-01
An r-adaptive method, or moving grid technique, relocates a grid so that it becomes concentrated in a desired region. This concentration improves the accuracy and efficiency of finite element solutions. We apply the r-adaptive method to the computational mesh of shell surfaces, which is initially regular and uniform. The r-adaptive method, given by Liao and Anderson [Appl. Anal. 44 (1992) 285], aggregates the grid in regions with a relatively high weight function without any grid tangling. The stress error estimator is calculated on the initial uniform mesh to provide a weight function. However, since the r-adaptive method moves the grid, shell surface geometry errors, such as curvature error and mesh distortion error, can increase. Therefore, to represent the exact geometry of a shell surface and to prevent surface geometric errors, we use Naghdi's shell theory and express the shell surface by a B-spline patch. In addition, by using a nine-node element, which is relatively less sensitive to mesh distortion, we try to diminish the mesh distortion error in the application of the r-adaptive method. In the numerical examples, it is shown that the values of the error estimator for a cylinder, hemisphere, and torus over the whole domain can be reduced effectively by using the mesh generated by the r-adaptive method. The reductions of the estimated relative errors are also demonstrated. In particular, a new functional is proposed to construct an adjusted mesh configuration by considering a mesh distortion measure as well as the stress error function. The proposed weight function provides a reliable mesh adaptation method once a parameter value in the weight function is properly chosen.
Block-structured adaptive meshes and reduced grids for atmospheric general circulation models.
Jablonowski, Christiane; Oehmke, Robert C; Stout, Quentin F
2009-11-28
Adaptive mesh refinement techniques offer a flexible framework for future variable-resolution climate and weather models since they can focus their computational mesh on certain geographical areas or atmospheric events. Adaptive meshes can also be used to coarsen a latitude-longitude grid in polar regions. This allows for the so-called reduced grid setups. A spherical, block-structured adaptive grid technique is applied to the Lin-Rood finite-volume dynamical core for weather and climate research. This hydrostatic dynamics package is based on a conservative and monotonic finite-volume discretization in flux form with vertically floating Lagrangian layers. The adaptive dynamical core is built upon a flexible latitude-longitude computational grid and tested in two- and three-dimensional model configurations. The discussion is focused on static mesh adaptations and reduced grids. The two-dimensional shallow water setup serves as an ideal testbed and allows the use of shallow water test cases like the advection of a cosine bell, moving vortices, a steady-state flow, the Rossby-Haurwitz wave or cross-polar flows. It is shown that reduced grid configurations are viable candidates for pure advection applications but should be used moderately in nonlinear simulations. In addition, static grid adaptations can be successfully used to resolve three-dimensional baroclinic waves in the storm-track region.
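A reduced grid of the kind described here can be sketched by coarsening the longitude count by powers of two toward the poles, so the physical zonal spacing stays bounded as the meridians converge. The threshold and the power-of-two rule below are illustrative assumptions, not the Lin-Rood implementation.

```python
import math

def reduced_grid(n_lat, n_lon_eq):
    # longitude counts per latitude ring: halve the equatorial count toward
    # the poles so the zonal spacing 2*pi*R*cos(lat)/n stays within roughly
    # a factor of two of the equatorial spacing
    rings = []
    for j in range(n_lat):
        lat = -math.pi / 2 + (j + 0.5) * math.pi / n_lat
        n = n_lon_eq
        while n > 4 and n_lon_eq * math.cos(lat) < n / 2:
            n //= 2
        rings.append(n)
    return rings
```

Because every ring count divides the equatorial count, flux exchange between neighboring rings stays simple, which is one reason power-of-two coarsening is popular for reduced grids.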
Adaptive upscaling with the dual mesh method
Energy Technology Data Exchange (ETDEWEB)
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values can differ for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
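The direction dependence of averaged properties that motivates the Dual Mesh Method can be seen in the classical layered-medium example: the effective permeability is the arithmetic mean for flow parallel to the layers and the harmonic mean for flow across them. The sketch below is a textbook illustration of that contrast, not the paper's algorithm.

```python
def upscale_layered(perms):
    # effective permeability of equally thick layers: arithmetic mean for
    # flow parallel to the layering, harmonic mean for flow across it
    n = len(perms)
    k_parallel = sum(perms) / n
    k_perpendicular = n / sum(1.0 / k for k in perms)
    return k_parallel, k_perpendicular
```

For layers of permeability 1 and 100 this gives about 50.5 parallel to the layers but only about 1.98 across them: a single a priori average cannot represent both directions.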
International Nuclear Information System (INIS)
Vay, J.-L.; Colella, P.; McCorquodale, P.; Van Straalen, B.; Friedman, A.; Grote, D.P.
2002-01-01
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and simulation of the power plant as a whole, or even of the driver, is not yet possible. Despite the rapid progress in computer power, past and anticipated, one must consider the use of the most advanced numerical techniques if we are to reach our goal expeditiously. One of the difficulties of these simulations resides in the disparity of scales, in time and in space, which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g., fluid dynamics simulations) is the mesh refinement technique. The authors discuss the challenges posed by the implementation of this technique in plasma simulations (due to the presence of particles and electromagnetic waves), and present the prospects for and projected benefits of its application to heavy ion fusion, in particular to the simulation of the ion source and the final beam propagation in the chamber. A collaboration project is under way at LBNL between the Applied Numerical Algorithms Group (ANAG) and the HIF group to couple the Adaptive Mesh Refinement (AMR) library Chombo, developed by the ANAG group, to the Particle-In-Cell accelerator code WARP, developed by the HIF-VNL. The authors describe their progress and present their initial findings.
Energy Technology Data Exchange (ETDEWEB)
Vay, J.-L.; Colella, P.; McCorquodale, P.; Van Straalen, B.; Friedman, A.; Grote, D.P.
2002-05-01
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and simulation of the power plant as a whole, or even of the driver, is not yet possible. Despite the rapid progress in computer power, past and anticipated, one must consider the use of the most advanced numerical techniques if we are to reach our goal expeditiously. One of the difficulties of these simulations resides in the disparity of scales, in time and in space, which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g., fluid dynamics simulations) is the mesh refinement technique. The authors discuss the challenges posed by the implementation of this technique in plasma simulations (due to the presence of particles and electromagnetic waves), and present the prospects for and projected benefits of its application to heavy ion fusion, in particular to the simulation of the ion source and the final beam propagation in the chamber. A collaboration project is under way at LBNL between the Applied Numerical Algorithms Group (ANAG) and the HIF group to couple the Adaptive Mesh Refinement (AMR) library Chombo, developed by the ANAG group, to the Particle-In-Cell accelerator code WARP, developed by the HIF-VNL. The authors describe their progress and present their initial findings.
Solution adaptive meshes with a hyperbolic grid generator
Klopfer, G. H.
An alternative numerical procedure to generate solution-adaptive grids for use in CFD simulations is developed analytically and demonstrated. The approach is based on the hyperbolic generation scheme of Steger and Chausee (1980), with terms added to achieve line clustering while fulfilling orthogonality and smoothness requirements. The formulation of the method is outlined, and adapted meshes for a shock-shock interaction at freestream Mach number 8.03 and Reynolds number 387,500 are presented graphically. The hyperbolic approach is shown to be significantly faster than comparable elliptic-grid methods and capable of producing an arbitrarily high degree of clustering on structured meshes.
Adaptive meshes in ecosystem modelling: a way forward?
Popova, E. E.; Ham, D. A.; Srokosz, M. A.; Piggott, M. D.
2009-04-01
The need to resolve physical processes occurring on many different length scales has led to the development of ocean flow models based on unstructured and adaptive meshes. However, thus far, models of biological processes have been based on fixed, structured grids which lack the ability to dynamically focus resolution on areas of developing small-scale structure. Here we present the initial results of coupling a four-component biological model to the 3D non-hydrostatic, finite-element, adaptive-grid ocean model ICOM (the Imperial College Ocean Model). Mesh adaptivity automatically resolves fine-scale physical or biological features as they develop, optimising computational cost by reducing resolution where it is not required. Experiments are carried out within the framework of a horizontally uniform water column. The vertical physical processes in the top 500 m are represented by a two-equation turbulence model. The physical model is coupled to a four-component biological model, which includes generic phytoplankton, zooplankton, nitrate and particulate organic matter (detritus). The coupled model is set up to represent idealised oligotrophic conditions, typical of subtropical gyres. A stable annual cycle is achieved after a number of years of integration. We compare results obtained on a fully adaptive mesh with ones using a high-resolution static mesh, and assess the computational efficiency of the adaptive approach for modelling ecosystem processes such as the dynamics of the phytoplankton spring bloom, the formation of the subsurface chlorophyll maximum and nutrient supply to the photic zone.
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
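A particle-splitting step of the kind used in APR can be sketched as follows: a parent particle is replaced by four children that conserve mass and momentum, with the child smoothing length kept above half the parent's, in line with the robustness constraint cited above. The stencil spacing and ratio values are illustrative assumptions, not the paper's refinement patterns.

```python
def split_particle(pos, mass, h, vel, eps=0.35, alpha=0.6):
    # split a 2-D parent particle into four children on a square stencil;
    # eps scales the stencil by the parent smoothing length h, and alpha
    # sets the child smoothing length (alpha > 0.5 per the robustness
    # constraint cited in the abstract)
    assert alpha > 0.5
    x, y = pos
    children = []
    for sx, sy in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
        children.append({
            "pos": (x + sx * eps * h, y + sy * eps * h),
            "mass": mass / 4.0,
            "h": alpha * h,
            "vel": vel,
        })
    return children
```

Because the stencil is symmetric and the children share the parent velocity, the split conserves mass, momentum and the center of mass exactly.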
Adaptive unstructured meshes for finite element ocean modelling
Power, P. W.; Pain, C. C.; Piggott, M. D.; Marshall, D. P.; Fang, F.; Umpleby, A. P.; de Oliveira, C. R. E.; Goddard, A. J. H.
2003-04-01
Flow in the world's oceans occurs at a wide range of spatial scales, from micrometres to megametres. In particular, regions of intense flow are often highly localised, for example Western Boundary Currents. Conventional numerical ocean models generally use static meshes. The Imperial College Ocean Model (ICOM) uses advanced finite element methods to evolve the mesh to follow regions of intense flow, where high resolution may be required; coarser resolution can be used in other areas of the flow domain. Evolution of the unstructured mesh is achieved by the use of a variety of error norms which control a self-adaptive anisotropic meshing algorithm. The objective of this work is a reduction in computational cost, ensuring areas of fine resolution are used only where and when they are required. In this work we present examples of an error measure being used to obtain high-quality solutions to a set of benchmark problems, for example flow over a seamount and a wind-driven gyre, while using a minimal number of elements. The long-term objective of this work is to define a rigorous self-adaptive technique for use in an oceanographic context, and we present plans for the implementation of a sensitivity-based error measure.
Comprehensive adaptive mesh refinement in wrinkling prediction analysis
Selman, A.; Meinders, Vincent T.; Huetink, Han; van den Boogaard, Antonius H.
2002-01-01
A discretisation error indicator, a contact-free wrinkling indicator and a wrinkling-with-contact indicator are, in a challenging task, brought together and used in a comprehensive approach to wrinkling prediction analysis in thin sheet metal forming processes.
On adaptive mesh refinement in wrinkling prediction analysis
Selman, A.; Meinders, Vincent T.; van den Boogaard, Antonius H.; Huetink, Han
2002-01-01
The Hutchinson approach has been successfully used by a number of researchers for wrinkling prediction in thin sheet metal forming processes. However, the Hutchinson approach is limited to regions of the sheet that are free of any contact. Therefore, a new wrinkling indicator that can be used in the contact
ADAPTIVE MODEL REFINEMENT FOR THE IONOSPHERE AND THERMOSPHERE
National Aeronautics and Space Administration — Adaptive Model Refinement for the Ionosphere and Thermosphere. Anthony M. D'Amato, Aaron J. Ridley, and Dennis S. Bernstein. Abstract: Mathematical models of...
Mesh adaptation technique for Fourier-domain fluorescence lifetime imaging
International Nuclear Information System (INIS)
Soloviev, Vadim Y.
2006-01-01
A novel adaptive mesh technique in the Fourier domain is introduced for problems in fluorescence lifetime imaging. A dynamic adaptation of the three-dimensional scheme, based on the finite volume formulation, reduces computational time and balances the ill-posed nature of the inverse problem. Light propagation in the medium is modeled by the telegraph equation, while the lifetime reconstruction algorithm is derived from the Fredholm integral equation of the first kind. The stability and computational efficiency of the method are demonstrated by image reconstruction of two spherical fluorescent objects embedded in a tissue phantom.
Mesh refinement study and experimental validation for stretch bending of sheet metals
Raupach, M.; Kreissl, S.; Vuaille, L.; Möller, T.; Friebe, H.; Volk, W.
2017-09-01
For sheet metal parts with small radii and large bending angles, sheet metal forming simulations reach their application limits. Alternatives are complex shell formulations and volume elements. For volume elements, the necessary number of elements over the thickness is important, and valid values depending on discrete radii are not available. Therefore, in this work a convergence study is performed using the example of an angular stretch bending test with a radius-to-thickness ratio of 1. Simulations are performed for various states of mesh refinement, and the results are presented, analysed and discussed with regard to convergence behaviour, to determine the necessary number of elements in the thickness direction. Recommendations for suitable validation variables are derived. Based on the refinement study, a simulation model for an experimental validation is developed. The experiments are carried out in a sheet metal forming machine. Experimental angular stretch bending tests with a punch radius of 1 mm are performed until failure, and the strain distribution on the top side of the sheet is measured. Finally, simulation and experiments are compared based on the surface strain.
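A convergence study of this kind typically reports an observed order of accuracy from solutions on three systematically refined meshes. The following Richardson-style estimate is a standard formula, shown here as a generic sketch rather than this study's actual post-processing.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    # observed order of accuracy from solutions on three meshes that are
    # systematically refined by the constant ratio r: the error behaves as
    # C*h^p, so successive solution differences shrink by the factor r^p
    return (math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine))
            / math.log(r))
```

Values near the formal order of the scheme indicate the asymptotic range has been reached; a drifting estimate signals that the coarsest mesh is still under-resolved.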
New Capabilities for Adaptive Mesh Simulation Use within FORWARD
Mathews, N.; Flyer, N.; Gibson, S. E.; Kucera, T. A.; Manchester, W.
2016-12-01
The multiscale nature of the solar corona can pose challenges to numerical simulations. Adaptive meshes are often used to resolve fine-scale structures, such as the chromospheric-coronal interface found in prominences and the transition region as a whole. FORWARD is a SolarSoft IDL package designed as a community resource for creating a broad range of synthetic coronal observables from numerical models and comparing them to data. However, to date its interface with numerical simulations has been limited to regular grids. We will present a new adaptive-grid interface to FORWARD that will enable efficient synthesis of solar observations. This is accomplished through the use of hierarchical IDL structures designed to enable finding nearest-neighbor points quickly on non-uniform grids, which facilitates line-of-sight integrations that can adapt to the unequally spaced mesh. We will demonstrate this capability for the Alfven-Wave driven SOlar wind Model (AWSOM), part of the Space Weather Modeling Framework (SWMF). In addition, we will use it in the context of a prominence-cavity model, highlighting new capabilities in FORWARD that allow treatment of continuum absorption as well as EUV line emission via dual populations (chromosphere-corona).
Directory of Open Access Journals (Sweden)
Humin Lei
2017-01-01
An adaptive mesh iteration method based on a Hermite pseudospectral scheme is described for trajectory optimization. The method uses the Legendre-Gauss-Lobatto points as interpolation points; the state equations are then approximated by Hermite interpolating polynomials. The method allows for changes in both the number of mesh points and the number of mesh intervals, and produces significantly smaller meshes for a given accuracy tolerance. The derived relative error estimate is used to trade the number of mesh points against the number of mesh intervals. The adaptive mesh iteration method is applied successfully to trajectory optimization examples for a Maneuverable Reentry Research Vehicle, and the simulation results show that the method has many advantages.
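The Legendre-Gauss-Lobatto points used as interpolation points are the endpoints ±1 together with the roots of P'_n, the derivative of the degree-n Legendre polynomial. The sketch below (a generic Newton iteration, not the paper's code) finds them from Chebyshev-Gauss-Lobatto initial guesses.

```python
import math

def legendre(n, x):
    # P_n and P_n' at x via the three-term recurrence (valid for |x| < 1)
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    dp = n * (x * p1 - p0) / (x * x - 1.0)
    return p1, dp

def lgl_nodes(n, iters=50):
    # endpoints plus the interior roots of P_n', located by Newton iteration
    # from Chebyshev-Gauss-Lobatto initial guesses; P_n'' comes from the
    # Legendre ODE: (1 - x^2) P_n'' = 2 x P_n' - n (n + 1) P_n
    nodes = [-1.0]
    for i in range(1, n):
        x = -math.cos(math.pi * i / n)
        for _ in range(iters):
            p, dp = legendre(n, x)
            d2p = (2.0 * x * dp - n * (n + 1) * p) / (1.0 - x * x)
            x -= dp / d2p
        nodes.append(x)
    nodes.append(1.0)
    return nodes
```

For n = 4 this recovers the known nodes -1, -sqrt(3/7), 0, sqrt(3/7), 1.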
Choudhary, Aniruddha
modification, an error equidistribution strategy to perform r-refinement (i.e., mesh node relocation) is employed. This technique is applied to 1D and 2D inviscid flow problems where the exact (i.e., analytic) solution is available. For mesh adaptation based upon TE, about an order of magnitude improvement in discretization error levels is observed when compared with the uniform mesh.
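An error-equidistribution r-refinement step of the kind described can be sketched in one dimension: integrate a (strictly positive) error density over the domain, then invert the cumulative integral so that every cell carries an equal share of the total. The sampling resolution and the demonstration error density below are assumptions for illustration.

```python
def equidistribute(n, err, samples=2000):
    # r-refinement by error equidistribution: place n+1 nodes on [0, 1] so
    # every cell carries the same share of the integrated error density;
    # err must be strictly positive (add a floor in practice)
    xs = [i / samples for i in range(samples + 1)]
    W = [0.0]
    for i in range(samples):  # cumulative trapezoidal integral of err
        W.append(W[-1] + 0.5 * (err(xs[i]) + err(xs[i + 1])) / samples)
    mesh, k = [], 0
    for j in range(n + 1):    # invert the cumulative integral
        target = W[-1] * (j / n)
        while k < samples and W[k + 1] < target:
            k += 1
        t = (target - W[k]) / (W[k + 1] - W[k])
        mesh.append(xs[k] + t / samples)
    return mesh
```

Cells end up small where the error density is large, which is exactly the node-relocation behaviour the equidistribution strategy aims for.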
Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes
International Nuclear Information System (INIS)
Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.
2005-01-01
An inverse method for blade design based on the Navier-Stokes equations on adaptive unstructured meshes has been developed. Unlike methods based on the inviscid equations, the effect of viscosity is taken into account directly. The pressure (or pressure loading) is prescribed, and the design method then computes the blade shape that would accomplish the target pressure distribution. The method is implemented using a cell-centered finite volume scheme, which solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh method based on grid subdivision and local adaptation is utilized to increase the accuracy. (author)
Adapting to life: simulating an ecosystem within an unstructured adaptive mesh ocean model
Hill, J.; Piggott, M. D.; Popova, E. E.; Ham, D. A.; Srokosz, M. A.
2010-12-01
Ocean oligotrophic gyres are characterised by low rates of primary production. Nevertheless, their great area, covering roughly a third of the Earth's surface and probably constituting the largest ecosystem on the planet, means that they play a crucial role in global biogeochemistry. Current models give values of primary production two orders of magnitude lower than those observed, which is thought to be due to the non-resolution of sub-mesoscale phenomena that play a significant role in nutrient supply in such areas. However, which aspects of sub-mesoscale processes are responsible for the observed higher productivity is an open question. Existing models are limited by two opposing requirements: the need for high enough spatial resolution to resolve fully the processes involved (down to order 1 km), and the need to realistically simulate the full gyre. No model can currently satisfy both of these constraints. Here, we detail Fluidity-ICOM, a non-hydrostatic, finite-element, unstructured-mesh ocean model. Adaptive mesh techniques allow us to focus resolution where and when we require it. We present the first steps towards performing a full North Atlantic simulation by showing that adaptive mesh techniques can be used in conjunction with both turbulence parametrisations and ecosystem models in pseudo-1D water columns. We show that the model can successfully reproduce the annual variation of the mixed layer depth at key locations within the North Atlantic gyre, with adaptive meshing producing more accurate results than the fixed-mesh simulations with fewer degrees of freedom. Moreover, the model is capable of reproducing the key behaviour of the ecosystem in those locations.
Goal based mesh adaptivity for fixed source radiation transport calculations
International Nuclear Information System (INIS)
Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.S.; Goffin, M.A.; Merton, S.R.; Warner, P.
2013-01-01
Highlights: ► Derives an anisotropic goal-based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the chosen numerical scheme forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal-based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal-based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved, which solves the neutron transport equation in terms of the response variables, in this case the detector response. The methods presented can be applied to general finite element solvers; however, the derivation of the residuals is dependent on the underlying finite element scheme, which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed, they are combined using a novel approach: the two metrics are combined by forming the minimum ellipsoid that covers both error metrics, rather than taking the maximum ellipsoid that is contained within the metrics. Another novel approach used within this paper is the construction of the residual. The residual, used to form the goal-based error metrics, is calculated from the subgrid scale correction which is inherent in the underlying spatial discretisation employed.
An adaptively refined XFEM with virtual node polygonal elements for dynamic crack problems
Teng, Z. H.; Sun, F.; Wu, S. C.; Zhang, Z. B.; Chen, T.; Liao, D. M.
2018-02-01
By introducing the shape functions of virtual node polygonal (VP) elements into the standard extended finite element method (XFEM), a conforming elemental mesh can be created for the cracking process. Moreover, an adaptively refined meshing with a quadtree structure only at the growing crack tip is proposed, without inserting hanging nodes into the transition region. A novel dynamic crack growth method, termed VP-XFEM, is thus formulated in the framework of fracture mechanics. To verify the newly proposed VP-XFEM, both quasi-static and dynamic crack problems are investigated in terms of computational accuracy, convergence, and efficiency. The results show that the present VP-XFEM achieves good agreement in stress intensity factor and crack growth path with exact solutions or experiments. Furthermore, better accuracy, convergence, and efficiency are obtained for the different models, in contrast to standard XFEM and mesh-free methods. Therefore, VP-XFEM provides a suitable alternative to XFEM for engineering applications.
A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry
Almarouf, Mohamad Abdulilah Alhusain Alali
2017-02-25
We present an embedded ghost-fluid method for the numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by a second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high solution gradients. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well.
Adaptive and ubiquitous video streaming over Wireless Mesh Networks
Directory of Open Access Journals (Sweden)
Malik Mubashir Hassan
2016-10-01
In recent years, the dramatic improvement in the scalability of the H.264/MPEG-4 standard and the growing demand for new multimedia services have spurred research on scalable video streaming over wireless networks in both industry and academia. Video streaming applications are increasingly being deployed in Wireless Mesh Networks (WMNs). However, robust streaming of video over WMNs poses many challenges due to the varying nature of wireless networks. Bit errors, packet losses and burst packet losses are very common in such networks and severely influence the perceived video quality at the receiving end. Therefore, a carefully designed error recovery scheme must be employed. In this paper, we propose an interactive and ubiquitous streaming scheme for Scalable Video Coding (SVC) based video streaming over WMNs towards heterogeneous receivers. Intelligently taking advantage of path diversity, the proposed scheme first calculates the quality of all candidate paths and then, based on path quality, adaptively decides the size and level of error protection for all packets in order to combat the effect of losses on the perceived quality of the reconstructed video at the receiving end. Our experimental results show that the proposed streaming approach can react to varying channel conditions with less degradation in video quality.
Numerical study of Taylor bubbles with adaptive unstructured meshes
Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry
2014-11-01
The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the ``heavy oils'' found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a ``volume of fluid''-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.
Adaptive mesh generation for image registration and segmentation
DEFF Research Database (Denmark)
Fogtmann, Mads; Larsen, Rasmus
2013-01-01
measure. The method was tested on a T1 weighted MR volume of an adult brain and showed a 66% reduction in the number of mesh vertices compared to a red-subdivision strategy. The deformation capability of the mesh was tested by registration to five additional T1-weighted MR volumes....
h-Adaptive Mesh Generation using Electric Field Intensity Value as a Criterion (in Japanese)
Toyonaga, Kiyomi; Cingoski, Vlatko; Kaneda, Kazufumi; Yamashita, Hideo
1994-01-01
Fine mesh divisions are essential to obtain an accurate solution in two-dimensional electric field analysis, and generating a suitably fine mesh division requires technical knowledge. In electric field problems, analysts are usually interested in the electric field intensity and its distribution. In order to obtain the electric field intensity with high accuracy, we have developed an adaptive mesh generator that uses the electric field intensity value as a criterion.
Numerical analysis of dependence between adapted mesh and assumed error indicator
Kucwaj, Jan
2018-01-01
The paper considers the influence of the assumed error indicator on the final adapted mesh. As the error threshold values are tightened during the adaptive procedure, it turns out that the final mesh depends on the assumed error indicator. Both standard error estimates and an error indicator proposed by the author were used. The proposed error indicator is based on applying a hierarchically generalized finite difference method (FDM). In the case of the proposed error indicator, the final adapted mesh is the best adapted to the exact solution.
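The dependence on the choice of indicator can be illustrated with a minimal sketch: a one-pass h-refinement loop driven by a stand-in indicator. The gradient-jump measure below is chosen for illustration only; it is not the paper's FDM-based indicator.

```python
# Sketch: one pass of indicator-driven h-refinement on a 1D mesh.
# The indicator (jump of one-sided slopes at a cell's left face) is a
# hypothetical stand-in for any cell-wise error indicator.

def gradient_jump_indicator(x, u):
    """Per-cell indicator: jump between the slope of this cell and the
    slope of its left neighbour (zero jump assumed at the left boundary)."""
    eta = []
    for i in range(len(x) - 1):
        left = (u[i] - u[i - 1]) / (x[i] - x[i - 1]) if i > 0 else 0.0
        right = (u[i + 1] - u[i]) / (x[i + 1] - x[i])
        eta.append(abs(right - left))
    return eta

def refine(x, u, f, threshold=0.5):
    """Bisect every cell whose indicator exceeds threshold * max(eta)."""
    eta = gradient_jump_indicator(x, u)
    cut = threshold * max(eta)
    new_x = [x[0]]
    for i, e in enumerate(eta):
        if e > cut:                      # flagged cell: insert its midpoint
            new_x.append(0.5 * (x[i] + x[i + 1]))
        new_x.append(x[i + 1])
    return new_x, [f(xi) for xi in new_x]

# Usage: u(x) = |x| has a kink at 0; the pass inserts a node only there.
f = abs
x = [i / 4 - 1 for i in range(9)]        # uniform mesh on [-1, 1]
u = [f(xi) for xi in x]
x2, u2 = refine(x, u, f)
```

A different indicator (e.g. a second-difference measure) flags a different set of cells on the same data, which is exactly the dependence the paper studies.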
Energy Technology Data Exchange (ETDEWEB)
Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schofield, Samuel P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-06-21
A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation, with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor-series-based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases with moving interfaces are presented to demonstrate the method's potential usefulness for arbitrary Lagrangian-Eulerian (ALE) methods.
International Nuclear Information System (INIS)
Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y
2006-01-01
In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Finite Element approach for Density Functional Theory calculations on locally refined meshes
Energy Technology Data Exchange (ETDEWEB)
Fattebert, J; Hornung, R D; Wissink, A M
2007-02-23
We present a quadratic Finite Element approach to discretize the Kohn-Sham equations on structured non-uniform meshes. A multigrid FAC preconditioner is proposed to iteratively solve the equations by an accelerated steepest descent scheme. The method was implemented using SAMRAI, a parallel software infrastructure for general AMR applications. Examples of applications to small nanocluster calculations are presented.
Welch, J. A.; Kópházi, J.; Owens, A. R.; Eaton, M. D.
2017-10-01
In this paper a method is presented for the application of energy-dependent spatial meshes applied to the multigroup, second-order, even-parity form of the neutron transport equation using Isogeometric Analysis (IGA). The computation of the inter-group regenerative source terms is based on conservative interpolation by Galerkin projection. The use of Non-Uniform Rational B-splines (NURBS) from the original computer-aided design (CAD) model allows for efficient implementation and calculation of the spatial projection operations while avoiding the complications of matching different geometric approximations faced by traditional finite element methods (FEM). The rate-of-convergence was verified using the method of manufactured solutions (MMS) and found to preserve the theoretical rates when interpolating between spatial meshes of different refinements. The scheme's numerical efficiency was then studied using a series of two-energy group pincell test cases where a significant saving in the number of degrees-of-freedom can be found if the energy group with a complex variation in the solution is refined more than an energy group with a simpler solution function. Finally, the method was applied to a heterogeneous, seven-group reactor pincell where the spatial meshes for each energy group were adaptively selected for refinement. It was observed that by refining selected energy groups a reduction in the total number of degrees-of-freedom for the same total L2 error can be obtained.
Directory of Open Access Journals (Sweden)
Kai Guo
2018-01-01
A coupled Lattice Boltzmann-Volume Penalization (LBM-VP) method with local mesh refinement is presented to simulate flows past obstacles in this article. Based on the finite-difference LBM, local mesh refinement is incorporated into the LBM to improve computing efficiency. The volume penalization method is introduced into the LBM through an external forcing term. In the LBM-VP method, the processes of interpolating velocities at the boundary points and distributing the force density to the Eulerian points near the boundaries are unnecessary. Performing the LBM-VP at a given point requires only the variables of that point, which means the whole procedure can be conducted in parallel and the overall computing efficiency can be improved. To verify the presented method, flows past a single circular cylinder, a pair of cylinders in tandem arrangement, and a NACA-0012 airfoil are investigated. Good agreement between the present results and the data in the previous literature is achieved, which demonstrates the accuracy and effectiveness of the present method for solving flow-past-obstacle problems.
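The penalization forcing term described above can be sketched in a few lines. The mask `chi`, the permeability `eta` and the explicit update are the conventional volume-penalization ingredients, not details taken from this article.

```python
# Sketch of the volume-penalization idea: inside the obstacle (mask chi = 1)
# a forcing term drives the velocity toward the obstacle velocity, with
# permeability eta (eta -> 0 enforces no-slip ever more strongly).

def penalization_force(u, chi, u_solid=0.0, eta=1e-3):
    """f_i = -(chi_i / eta) * (u_i - u_solid); zero outside the obstacle."""
    return [-(c / eta) * (ui - u_solid) for ui, c in zip(u, chi)]

def step(u, chi, dt=1e-4, eta=1e-3):
    """One explicit Euler update applying only the penalization force."""
    f = penalization_force(u, chi, eta=eta)
    return [ui + dt * fi for ui, fi in zip(u, f)]

# Usage: a 1D velocity field with a solid region in the middle.
u = [1.0] * 10
chi = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
for _ in range(100):
    u = step(u, chi)
# The velocity decays toward zero inside the obstacle, unchanged outside.
```

Because the force is evaluated point-locally, the update needs no boundary interpolation or force spreading, which is the parallel-friendliness the abstract highlights.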
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to an STL file with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
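Of the three schemes compared, the triangular midpoint subdivision is the simplest to sketch: each facet is split 1-to-4 by its edge midpoints. The snippet below is the textbook form of that split, with STL parsing omitted.

```python
# Sketch of general triangular midpoint subdivision: each triangle is
# replaced by four via its edge midpoints. Vertices are (x, y, z) tuples.

def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

def subdivide(triangles):
    """One 1-to-4 midpoint subdivision pass over a list of triangles."""
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# Usage: two passes turn 1 facet into 16 coplanar facets.
mesh = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
for _ in range(2):
    mesh = subdivide(mesh)
```

Note that, unlike the butterfly or Loop schemes, midpoint subdivision leaves every new vertex on the original facet plane, so it refines the tessellation without moving the surface.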
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. © 2002 John Wiley and Sons, Ltd.
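The node relocation underlying such moving-mesh methods can be illustrated by de Boor-style equidistribution of a monitor function in 1D. The arc-length monitor used below is a common stand-in; the paper builds its monitor from the interpolation error instead.

```python
# Sketch of the equidistribution principle: nodes are relocated so that each
# cell carries an equal share of a monitor function (here the arc-length
# monitor sqrt(1 + u'(x)^2), approximated by chord lengths).
import math

def equidistribute(x, u, n_sweeps=30):
    n = len(x)
    for _ in range(n_sweeps):
        # Cell-wise monitor integral approximated on the current mesh.
        m = [math.hypot(x[i + 1] - x[i], u(x[i + 1]) - u(x[i]))
             for i in range(n - 1)]
        total = sum(m)
        cum = [0.0]
        for mi in m:
            cum.append(cum[-1] + mi)
        # Invert the cumulative monitor to place nodes at equal increments.
        new_x = [x[0]]
        for k in range(1, n - 1):
            target = total * k / (n - 1)
            j = next(i for i in range(n - 1) if cum[i + 1] >= target)
            frac = (target - cum[j]) / (cum[j + 1] - cum[j])
            new_x.append(x[j] + frac * (x[j + 1] - x[j]))
        new_x.append(x[-1])
        x = new_x
    return x

# Usage: a sharp front at x = 0.5 attracts nodes; flat regions stay coarse.
front = lambda x: math.tanh(50 * (x - 0.5))
x = [i / 20 for i in range(21)]
x = equidistribute(x, front)
```

The same node count thus yields cells two orders of magnitude smaller at the front than away from it, which is the efficiency argument made in the abstract.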
Adaptive Meshing for Bi-directional Information Flows
DEFF Research Database (Denmark)
Nicholas, Paul; Zwierzycki, Mateusz; Stasiuk, David
2016-01-01
This paper describes a mesh-based modelling approach that supports the multiscale design of a panelised, thin-skinned metal structure. The term multi-scale refers to the decomposition of a design modelling problem into distinct but interdependent models associated with particular scales, and the ...
Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness
Rallabhandi, Sriram K.; Park, Michael A.
2017-01-01
The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete-adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as inclusion of propagation specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.
A dynamic mesh refinement technique for Lattice Boltzmann simulations on octree-like grids
Neumann, Philipp
2012-04-27
In this contribution, we present our new adaptive Lattice Boltzmann implementation within the Peano framework, with special focus on nanoscale particle transport problems. With the continuum hypothesis not holding anymore on these small scales, new physical effects - such as Brownian fluctuations - need to be incorporated. We explain the overall layout of the application, including memory layout and access, and shortly review the adaptive algorithm. The scheme is validated by different benchmark computations in two and three dimensions. An extension to dynamically changing grids and a spatially adaptive approach to fluctuating hydrodynamics, allowing for the thermalisation of the fluid in particular regions of interest, is proposed. Both dynamic adaptivity and adaptive fluctuating hydrodynamics are validated separately in simulations of particle transport problems. The application of this scheme to an oscillating particle in a nanopore illustrates the importance of Brownian fluctuations in such setups. © 2012 Springer-Verlag.
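The octree-like dynamic adaptivity described above can be sketched with a quadtree whose leaves refine near a tracked particle and merge back when the refinement criterion clears. This is only an illustration of the grid logic; the Peano framework's actual storage and traversal scheme is considerably more involved.

```python
# Sketch: a quadtree with dynamic refinement and coarsening. A leaf splits
# into four children when flagged; four unflagged leaf children merge back.

class Cell:
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def leaves(self):
        if not self.children:
            return [self]
        return [l for c in self.children for l in c.leaves()]

    def adapt(self, needs_refinement, max_level=4):
        if not self.children:
            if needs_refinement(self) and self.level < max_level:
                h = self.size / 2
                self.children = [Cell(self.x + i * h, self.y + j * h, h,
                                      self.level + 1)
                                 for i in (0, 1) for j in (0, 1)]
        else:
            for c in self.children:
                c.adapt(needs_refinement, max_level)
            if all(not c.children and not needs_refinement(c)
                   for c in self.children):
                self.children = []       # coarsen: merge the four leaves

# Usage: refine around a particle at (0.3, 0.7); one adapt() call per level.
def near_particle(cell, p=(0.3, 0.7)):
    return (cell.x <= p[0] < cell.x + cell.size and
            cell.y <= p[1] < cell.y + cell.size)

root = Cell(0.0, 0.0, 1.0)
for _ in range(4):
    root.adapt(near_particle)
```

Moving the particle and calling `adapt` again would both refine the new location and coarsen the old one, which is the "dynamically changing grids" behaviour the abstract proposes.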
Stress adapted embroidered meshes with a graded pattern design for abdominal wall hernia repair
Hahn, J.; Bittrich, L.; Breier, A.; Spickenheuer, A.
2017-10-01
Abdominal wall hernias are one of the most relevant injuries of the digestive system, with 25 million patients in 2013. Surgery is recommended, primarily using allogenic non-absorbable warp-knitted meshes. These meshes have in common that their stress-strain behaviour is not adapted to the anisotropic behaviour of native abdominal wall tissue. The ideal mesh should possess an adequate mechanical behaviour and a suitable porosity at the same time. An alternative fabrication method to warp-knitting is embroidery technology, with its high flexibility in pattern design and adaptation of mechanical properties. In this study, a pattern generator was created for pattern designs consisting of a base and a reinforcement pattern. The embroidered mesh structures demonstrated different structural and mechanical characteristics. Additionally, the investigation of the mechanical properties exhibited an anisotropic mechanical behaviour for the embroidered meshes. As a result, the investigated pattern generator and the embroidery technology allow the production of stress-adapted mesh structures that are a promising approach for hernia reconstruction.
Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo
2017-12-01
A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of the elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
DEFF Research Database (Denmark)
Jensen, Kristian
2018-01-01
Topology optimization was recently combined with anisotropic mesh adaptation to solve 3D minimum compliance problems in a fast and robust way. This paper demonstrates that the methodology is also applicable to 2D/3D heat conduction problems. Nodal design variables are used and the objective function is chosen such that the problem is self-adjoint. There is no way around the book-keeping associated with mesh adaptation, so the whole 5527-line MATLAB code is published (https://github.com/kristianE86/trullekrul). The design variables as well as the sensitivities have to be interpolated between....
The impact of mesh adaptivity on the gravity current front speed in a two-dimensional lock-exchange
Hiester, H. R.; Piggott, M. D.; Allison, P. A.
Numerical simulations of the two-dimensional lock-exchange flow are used to evaluate the performance of adaptive meshes as implemented in the non-hydrostatic, finite-element model Fluidity-ICOM. The lock-exchange is a widely studied laboratory-scale set-up that produces two horizontally propagating gravity currents and incorporates key physical processes associated with gravity currents over many scales, including ocean overflows. The Froude number (non-dimensional front speed) is used to assess simulations performed on structured-fixed, unstructured-fixed and unstructured-adaptive meshes, and different adaptive mesh configurations are compared. Fluidity-ICOM successfully captures the flow dynamics, including the development of Kelvin-Helmholtz billows. Mesh adaptation is guided by a metric, which is key to the ability of an adaptive mesh to represent the flow. The metric employed in Fluidity-ICOM is simple, based on the curvature of the solution fields and user-defined solution field weights. Good representation of the gravity current front region is essential to the quality of the solution, and for the adaptive meshes this is achieved by reducing the horizontal velocity field weight near the boundaries. Adaptive meshes configured in this way are seen to perform as well as high-resolution fixed meshes whilst using at least one order of magnitude fewer nodes. The Froude numbers also compare well with previously published values determined from experimental, numerical and theoretical approaches. The substantial reduction in the number of nodes used by the adaptive meshes is particularly encouraging, as it suggests that even greater gains may be achieved in three-dimensional simulations and larger-scale problems. Results show that successful use of the adaptive mesh approach employed requires a clear understanding of the physics of the system and the metric. These considerations will be vital to the effective application of adaptive mesh approaches in numerical
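A curvature-based metric with user-defined field weights reduces in one dimension to a simple rule: request smaller edges where |u''| is large, scaled by the weight. The sketch below uses the standard interpolation-error heuristic h ~ sqrt(weight / |u''|); the clamp values and the weight are illustrative, and Fluidity-ICOM's actual metric is a multi-dimensional tensor.

```python
# Sketch: target edge length from field curvature and a user weight.
# Smaller weight -> smaller requested edges -> finer mesh everywhere.

def target_edge_lengths(x, u, weight, h_min=1e-3, h_max=0.5):
    n = len(x)
    h = []
    for i in range(n):
        j = min(max(i, 1), n - 2)        # one-sided at the boundaries
        dxl, dxr = x[j] - x[j - 1], x[j + 1] - x[j]
        # Second-difference approximation of u'' at node j.
        curv = abs(2 * ((u[j + 1] - u[j]) / dxr - (u[j] - u[j - 1]) / dxl)
                   / (dxl + dxr))
        size = (weight / curv) ** 0.5 if curv > 0 else h_max
        h.append(min(h_max, max(h_min, size)))
    return h

# Usage: u = x^4 has curvature growing toward x = 1, so the requested edge
# length decreases there.
x = [i / 10 for i in range(11)]
u = [xi ** 4 for xi in x]
h = target_edge_lengths(x, u, weight=0.01)
```

Lowering the weight for a chosen field near a boundary, as the abstract describes for the horizontal velocity, simply tightens the requested edge lengths in that region.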
Optimal Channel Width Adaptation, Logical Topology Design, and Routing in Wireless Mesh Networks
Directory of Open Access Journals (Sweden)
Li Li
2009-01-01
Radio frequency spectrum is a finite and scarce resource. How to use the spectrum efficiently is one of the fundamental issues for multi-radio multi-channel wireless mesh networks. However, past research efforts that attempt to exploit multiple channels always assume channels of fixed, predetermined width, which prohibits further effective use of the spectrum resource. In this paper, we address how to optimally adapt channel width to utilize the spectrum more efficiently in IEEE 802.11-based multi-radio multi-channel mesh networks. We mathematically formulate the channel width adaptation, logical topology design, and routing as a joint mixed 0-1 integer linear optimization problem, and we also propose a heuristic assignment algorithm. Simulation results show that our method can significantly improve spectrum use efficiency and network performance.
Error estimation for goal-oriented spatial adaptivity for the SN equations on triangular meshes
International Nuclear Information System (INIS)
Lathouwers, D.
2011-01-01
In this paper we investigate different error estimation procedures for use within a goal-oriented adaptive algorithm for the SN equations on unstructured meshes. The method is based on a dual-weighted residual approach in which an appropriate adjoint problem is formulated and solved in order to obtain the importance of residual errors in the forward problem with respect to the specific goal of interest. The forward residuals and the adjoint function are combined to obtain both economical finite element meshes tailored to the solution of the target functional and error estimates. Various approximations made to render the calculation of the adjoint angular flux more economical are evaluated by comparing the performance of the resulting adaptive algorithm and the quality of the error estimators when applied to two shielding-type test problems. (author)
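The dual-weighted residual construction is easiest to see in its algebraic form: for a linear system, the adjoint solution weights the forward residual and reproduces the goal error exactly. The 2x2 example below is purely illustrative and not taken from the paper.

```python
# Sketch of the dual-weighted residual (DWR) idea: for A u = b and a goal
# J(u) = g . u, solve the adjoint A^T z = g; then z . (b - A u_h) equals
# the goal error J(u) - J(u_h) exactly in the linear case.

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve2(A, b):
    """Direct solve of a 2x2 system by Cramer's rule (demo-sized only)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
g = [1.0, 0.0]                          # goal: first component ("detector")
u = solve2(A, b)                        # exact solution
u_h = [u[0] + 0.05, u[1] - 0.02]        # a perturbed "numerical" solution
At = [[A[j][i] for j in range(2)] for i in range(2)]
z = solve2(At, g)                       # adjoint (importance) solution
residual = [bi - r for bi, r in zip(b, mat_vec(A, u_h))]
eta = sum(zi * ri for zi, ri in zip(z, residual))
goal_error = g[0] * (u[0] - u_h[0])
# eta equals goal_error up to round-off for this linear problem.
```

In the adaptive setting the product z . residual is localized element by element, and elements with large contributions are the ones refined.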
International Nuclear Information System (INIS)
Young, T. D.; Armiento, R.
2010-01-01
A Schrödinger eigenvalue problem is solved for the 2D quantum simple harmonic oscillator using a finite element discretization of real space within which elements are adaptively spatially refined. We compare two competing methods of adaptively discretizing the real-space grid on which computations are performed, without modifying the standard polynomial basis set traditionally used in finite element interpolations; namely, (i) an application of the Kelly error estimator, and (ii) a refinement based on the local potential level. When the performance of these methods is compared to standard uniform global refinement, we find that they significantly improve the total time spent in the eigensolver. (general)
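In one dimension the Kelly estimator reduces to face jumps of the numerical derivative, which makes for a compact sketch. The mesh, sample function and scaling below are illustrative, not taken from the paper.

```python
# Sketch of the Kelly error indicator in 1D, where it reduces to
# eta_K^2 = (h_K / 2) * sum over the cell's faces of [du/dx]^2.
import math

def kelly_indicator(x, u):
    n_cells = len(x) - 1
    slopes = [(u[i + 1] - u[i]) / (x[i + 1] - x[i]) for i in range(n_cells)]
    eta = []
    for k in range(n_cells):
        jump2 = 0.0
        if k > 0:
            jump2 += (slopes[k] - slopes[k - 1]) ** 2
        if k < n_cells - 1:
            jump2 += (slopes[k + 1] - slopes[k]) ** 2
        h = x[k + 1] - x[k]
        eta.append((h / 2 * jump2) ** 0.5)
    return eta

# Usage: for a Gaussian ground-state-like profile on a coarse mesh, the
# indicator peaks where the curvature is largest, so a refine-the-top-
# fraction loop concentrates elements there.
x = [i / 2 - 3 for i in range(13)]       # uniform mesh on [-3, 3]
u = [math.exp(-xi * xi) for xi in x]
eta = kelly_indicator(x, u)
```

The potential-based alternative the paper studies needs no solution at all: it flags cells purely by the local value of V(x), trading estimator cost for estimator sharpness.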
Refinement trajectory and determination of eigenstates by a wavelet based adaptive method
International Nuclear Information System (INIS)
Pipek, Janos; Nagy, Szilvia
2006-01-01
The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary
Directory of Open Access Journals (Sweden)
Luis Gavete
2018-01-01
We apply a 3D adaptive refinement procedure using a meshless generalized finite difference method for solving elliptic partial differential equations. This adaptive refinement, based on an octree structure, allows nodes to be added in a regular way in order to obtain smooth transitions between different nodal densities in the model. For this purpose, we define an error indicator as the stop condition of the refinement, a criterion for choosing the nodes with the highest errors, and a limit on the number of nodes to be added in each adaptive stage. This kind of equation often appears in engineering problems such as the simulation of heat conduction, electrical potential, seepage through porous media, or irrotational flow of fluids. The numerical results show the high accuracy obtained.
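The control loop described (an error indicator as stop condition, selection of the worst nodes, and a cap on nodes added per stage) can be sketched in one dimension. The second-difference indicator and midpoint insertion below are placeholders for the paper's GFDM machinery.

```python
# Sketch of staged adaptive refinement: rank nodes by an indicator, refine
# around at most `max_new` of the worst, stop when the largest indicator
# value falls below `tol` (or after `max_stages` stages).
import math

def second_diff(xs, us, i):
    """Curvature-times-h^2 proxy for the local interpolation error."""
    if i == 0 or i == len(xs) - 1:
        return 0.0
    hl, hr = xs[i] - xs[i - 1], xs[i + 1] - xs[i]
    d2 = 2 * ((us[i + 1] - us[i]) / hr - (us[i] - us[i - 1]) / hl) / (hl + hr)
    return abs(d2) * max(hl, hr) ** 2

def adapt(f, xs, tol=1e-3, max_new=2, max_stages=50):
    stages = 0
    while stages < max_stages:
        us = [f(x) for x in xs]
        errs = sorted(((second_diff(xs, us, i), i) for i in range(len(xs))),
                      reverse=True)
        if errs[0][0] < tol:
            break                        # stop condition met
        new = set()
        for e, i in errs[:max_new]:      # cap on refined nodes per stage
            if e < tol:
                break
            # Regular insertion: midpoints of the two adjacent edges.
            new.add((xs[i - 1] + xs[i]) / 2)
            new.add((xs[i] + xs[i + 1]) / 2)
        xs = sorted(set(xs) | new)
        stages += 1
    return xs, stages

# Usage: refining for a peaked function concentrates nodes near x = 0.
peak = lambda x: math.exp(-20 * x * x)
xs, stages = adapt(peak, [i / 5 - 1 for i in range(11)])
```

The per-stage cap keeps each stage cheap and the density transition gradual, mirroring the "limit for the number of nodes to be added in each adaptive stage" in the abstract.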
Guo, Zhikui; Chen, Chao; Tao, Chunhui
2016-04-01
Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides on the southwest Indian ridge (SWIR), acquiring both gravity data and bathymetry data along the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and their gradients can be used to extract unknown density structure information (e.g. crust thickness) beneath the surface of the earth, but they contain the effect of all mass below the observation point. Therefore, accurately computing the gravity effect and its gradient for known density structures (e.g. terrain) has been a key issue. Using the bathymetry data or the ETOPO1 model (http://www.ngdc.noaa.gov/mgg/global/global.html) at full resolution to calculate the terrain effect would require too much computation time. We expect to develop an effective method that takes less time but can still yield the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, based on the work of Tsoulis (2012). According to the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge both global topography data and multi-beam bathymetry data. The local coarsening or mesh size depends on user-defined accuracy and terrain variation (Davis et al., 2011). To depict the terrain better, triangular surface elements and rectangular surface elements are used in the fine and coarse meshes, respectively. This strategy can also be applied in spherical coordinates for large-region and global-scale studies. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), mantle Bouguer anomaly (MBA) and their vertical gradients in the SWIR. Further, we compared the result with previous results in the literature. Both synthetic model
Adaptive Fault-Tolerant Routing in 2D Mesh with Cracky Rectangular Model
Directory of Open Access Journals (Sweden)
Yi Yang
2014-01-01
This paper focuses on routing in two-dimensional mesh networks. We propose a novel faulty-block model, the cracky rectangular block, for fault-tolerant adaptive routing. All faulty nodes and faulty links are surrounded by this type of block, which is a convex structure, in order to avoid routing livelock. Additionally, the model constructs an interior spanning forest for each block in order to keep in touch with the nodes inside each block. The block-construction procedure is dynamic and totally distributed, and the construction algorithm is simple and easy to implement. Each block is fully adaptive, dynamically adjusting its scale in accordance with the state of the network, whether faults emerge or recover, without shutdown of the system. Based on this model, we also develop a distributed fault-tolerant routing algorithm. We then give a formal proof that messages will always reach their destinations if and only if the destination nodes remain connected to the mesh network. The new model and routing algorithm thus maximize the availability of the nodes in the network, a noticeable overall improvement in the fault tolerance of the system.
Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement
International Nuclear Information System (INIS)
Layton, A.T.
2004-01-01
In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely differing time scales may differ in size. (author)
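A minimal scalar sketch of integral deferred correction (a generic explicit textbook form, not the paper's conservative multi-implicit scheme) uses a forward-Euler predictor over quadrature nodes, then correction sweeps driven by a polynomial quadrature of the residual; each sweep raises the formal order by one, up to the order of the quadrature.

```python
import numpy as np

def integration_matrix(t):
    # S[m, j] = integral of the j-th Lagrange basis over [t_m, t_{m+1}]
    n = len(t)
    S = np.zeros((n - 1, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        P = np.poly1d(np.polyfit(t, e, n - 1)).integ()
        for m in range(n - 1):
            S[m, j] = P(t[m + 1]) - P(t[m])
    return S

def idc_step(f, t0, y0, dt, nodes=4, sweeps=3):
    t = t0 + np.linspace(0.0, dt, nodes)
    h = t[1] - t[0]
    S = integration_matrix(t)
    # prediction: forward Euler across the nodes
    y = np.empty(nodes)
    y[0] = y0
    for m in range(nodes - 1):
        y[m + 1] = y[m] + h * f(t[m], y[m])
    # correction sweeps: low-order update driven by the quadrature of the
    # residual of the Picard (integral) equation
    for _ in range(sweeps):
        F = np.array([f(tm, ym) for tm, ym in zip(t, y)])
        ynew = np.empty(nodes)
        ynew[0] = y0
        for m in range(nodes - 1):
            ynew[m + 1] = ynew[m] + h * (f(t[m], ynew[m]) - F[m]) + S[m] @ F
        y = ynew
    return y[-1]
```

In the multi-implicit setting of the paper, the Euler updates inside the sweep are replaced by implicit solves for the stiff reaction and diffusion terms, with the deferred correction reducing the splitting error between them.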
Three-Dimensional Adaptive Mesh Refinement Simulations of Point-Symmetric Nebulae
Rijkhorst, E.-J.; Icke, V.; Mellema, G.; Meixner, M.; Kastner, J.H.; Balick, B.; Soker, N.
2004-01-01
Previous analytical and numerical work shows that the generalized interacting stellar winds model can explain the observed bipolar shapes of planetary nebulae very well. However, many circumstellar nebulae have a multipolar or point-symmetric shape. With two-dimensional calculations, Icke showed
Radiative cooling in numerical astrophysics: The need for adaptive mesh refinement
van Marle, A. J.; Keppens, R.
2011-01-01
Energy loss through optically thin radiative cooling plays an important part in the evolution of astrophysical gas dynamics and should therefore be considered a necessary element in any numerical simulation. Although the addition of this physical process to the equations of hydrodynamics is
woptic: Optical conductivity with Wannier functions and adaptive k-mesh refinement
Czech Academy of Sciences Publication Activity Database
Assmann, E.; Wissgott, P.; Kuneš, Jan; Toschi, A.; Blaha, P.; Held, K.
2016-01-01
Vol. 202, May (2016), pp. 1-11. ISSN 0010-4655. Institutional support: RVO:68378271. Keywords: optical spectra; Wannier orbital. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 3.936, year: 2016
6th International Meshing Roundtable '97
Energy Technology Data Exchange (ETDEWEB)
White, D.
1997-09-01
The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include, but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.
An adaptive grid refinement strategy for the simulation of negative streamers
International Nuclear Information System (INIS)
Montijn, C.; Hundsdorfer, W.; Ebert, U.
2006-01-01
The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles, including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tip of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the decoupling of the grids for the continuity equations from those for the Poisson equation. Standard refinement methods, in which the refinement criterion is based on local error monitors, fail due to the pulled character of the streamer front, which propagates into a linearly unstable state. We present a refinement method that deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented; they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers.
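A generic 1D illustration of why a pulled front defeats purely local error monitors (hypothetical flagging rule, not the authors' criterion): cells with steep gradients are flagged, and the flags are then dilated by a buffer so that refinement also extends ahead of the front into the region it is about to invade.

```python
import numpy as np

def flag_refinement(u, thresh, buffer_cells=4):
    # Flag cells adjacent to steep gradients, then dilate the flags a few
    # cells in both directions so that the refined region extends ahead of
    # the propagating (pulled) front, not just over the current gradient.
    grad = np.abs(np.diff(u))
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= grad > thresh
    flags[1:] |= grad > thresh
    for _ in range(buffer_cells):
        flags[1:] |= flags[:-1]   # dilate right
        flags[:-1] |= flags[1:]   # dilate left
    return flags
```

Without the buffer, the linear leading edge of the front would be evolved on the coarse grid and the computed front speed would be wrong.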
Adaptive local surface refinement based on LR NURBS and its application to contact
Zimmermann, Christopher; Sauer, Roger A.
2017-12-01
A novel adaptive local surface refinement technique based on Locally Refined Non-Uniform Rational B-Splines (LR NURBS) is presented. LR NURBS can model complex geometries exactly and are the rational extension of LR B-splines. The local representation of the parameter space overcomes the drawback of non-existent local refinement in standard NURBS-based isogeometric analysis. For a convenient embedding into general finite element codes, the Bézier extraction operator for LR NURBS is formulated. An automatic remeshing technique is presented that allows adaptive local refinement and coarsening of LR NURBS. In this work, LR NURBS are applied to contact computations of 3D solids and membranes. For solids, LR NURBS-enriched finite elements are used: the contact surfaces are discretized with LR NURBS finite elements, while the rest of the body is discretized by linear Lagrange finite elements. For membranes, the entire surface is discretized by LR NURBS. Various numerical examples are shown, and they demonstrate the benefit of using LR NURBS: compared to uniform refinement, LR NURBS can achieve high accuracy at lower computational cost.
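For background, the LR NURBS machinery rests on the same Cox-de Boor recursion as ordinary NURBS. A minimal sketch of evaluating a standard (non-LR) NURBS curve, using only the textbook formulas and not the LR extension or Bézier extraction:

```python
import numpy as np

def bspline_basis(i, p, u, U):
    # Cox-de Boor recursion for the i-th B-spline basis of degree p on knots U
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] > U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1])
                 * bspline_basis(i + 1, p - 1, u, U))
    return left + right

def nurbs_point(u, ctrl, w, p, U):
    # rational (weighted) combination of B-spline basis functions
    N = np.array([bspline_basis(i, p, u, U) for i in range(len(ctrl))])
    return (N * w) @ ctrl / (N @ w)
```

The classic check that NURBS represent conics exactly: a quadratic curve with control points (1,0), (1,1), (0,1) and middle weight sqrt(2)/2 traces an exact quarter of the unit circle.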
Energy Technology Data Exchange (ETDEWEB)
Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.
2012-05-08
This project had the goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). The effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and the ability to simulate a wide range of scales very well, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited. 3a. EULAG Advances EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates. In particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
Energy Technology Data Exchange (ETDEWEB)
Turinsky, Paul [North Carolina State Univ., Raleigh, NC (United States)
2015-02-09
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena that differ in prediction fidelity. If the highest-fidelity model is judged to always provide or exceed the desired fidelity, and if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and lower-fidelity models, then one can select the lowest-fidelity model that still delivers the desired accuracy in the QoI. Since lower-fidelity models require less computational resources, computational efficiency can be realized in this manner, provided the QoI difference can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI difference, by convoluting the GPT solution with the residual of the highest-fidelity model evaluated using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations, along with a prolongation operator to reconstruct the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel-pin energy conservation equations. The lower-fidelity thermal-hydraulic model was based upon the same equations as used for the highest-fidelity model but now with coarse spatial
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. To date, however, it has only been applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and the important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently available in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To demonstrate its superior performance, the results obtained are compared with those of hp-adaptive PCFE. The proposed approach is observed to yield highly accurate results; furthermore, compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required to obtain results of similar accuracy.
SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method
International Nuclear Information System (INIS)
Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X
2015-01-01
Purpose: Due to the limited number of projections at each phase, the image quality of four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising remedies is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate a tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After mesh generation, the updated motion model and the other phases of the 4D-CBCT are obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire reconstruction process is implemented on GPU, significantly increasing computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The image results show that both bone structures and the inside of the lung are well preserved and the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses a feature-based mesh for estimating the motion model and demonstrates image results equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.
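The "diffusion" of the deformation from mesh vertices to image voxels is, in the simplest linear case, barycentric interpolation inside each tetrahedron. A sketch for one tetrahedron (hypothetical helper, not the authors' GPU kernel):

```python
import numpy as np

def barycentric_interp(tet, disp, pts):
    # tet:  (4,3) tetrahedron vertex coordinates
    # disp: (4,3) displacement vectors at the vertices
    # pts:  (n,3) voxel centres inside the tetrahedron
    # Solve for barycentric coordinates, then blend the vertex displacements.
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    lam = np.linalg.solve(T, (pts - tet[0]).T).T          # lambda_1..lambda_3
    lam0 = 1.0 - lam.sum(axis=1, keepdims=True)           # lambda_0
    bary = np.hstack([lam0, lam])                         # (n,4)
    return bary @ disp
```

Because the interpolation is linear, any affine deformation field is reproduced exactly, which is why the mesh only needs to be fine where the motion has non-affine detail.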
Mesh Generation and Adaption for High Reynolds Number RANS Computations, Phase I
National Aeronautics and Space Administration — This proposal offers to provide NASA with an automatic mesh generator for the simulation of aerodynamic flows using Reynolds-Averaged Navier-Stokes (RANS) models....
Mesh Generation and Adaption for High Reynolds Number RANS Computations, Phase II
National Aeronautics and Space Administration — This proposal offers to provide NASA with an automatic mesh generator for the simulation of aerodynamic flows using Reynolds-Averaged Navier-Stokes (RANS) models....
Mesh Generation and Adaption for High Reynolds Number RANS Computations Project
National Aeronautics and Space Administration — This proposal offers to provide NASA with an automatic mesh generator for the simulation of aerodynamic flows using Reynolds-Averaged Navier-Stokes (RANS) models....
A flexible content-adaptive mesh-generation strategy for image representation.
Adams, Michael D
2011-09-01
Based on the greedy-point removal (GPR) scheme of Demaret and Iske, a simple yet highly effective framework for constructing triangle-mesh representations of images, called GPRFS, is proposed. By using this framework and ideas from the error diffusion (ED) scheme (for mesh-generation) of Yang et al., a highly effective mesh-generation method, called GPRFS-ED, is derived and presented. Since the ED scheme plays a crucial role in our work, factors affecting the performance of this scheme are also studied in detail. Through experimental results, our GPRFS-ED method is shown to be capable of generating meshes of quality comparable to, and in many cases better than, the state-of-the-art GPR scheme, while requiring substantially less computation and memory. Furthermore, with our GPRFS-ED method, one can easily trade off between mesh quality and computational/memory complexity. A reduced-complexity version of the GPRFS-ED method (called GPRFS-MED) is also introduced to further demonstrate the computational/memory-complexity scalability of our GPRFS-ED method.
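A one-dimensional analogue of greedy point removal (illustrative only; GPR itself operates on triangle meshes over 2D images) removes, at each step, the interior sample whose removal least degrades the piecewise-linear reconstruction:

```python
import numpy as np

def greedy_point_removal(x, y, keep):
    # Start from all samples; repeatedly remove the interior point whose
    # removal increases the piecewise-linear reconstruction error least.
    idx = list(range(len(x)))
    while len(idx) > keep:
        best_j, best_err = None, np.inf
        for j in range(1, len(idx) - 1):
            a, c = idx[j - 1], idx[j + 1]
            # error introduced on [x[a], x[c]] if the middle point is removed
            xs = x[a:c + 1]
            interp = y[a] + (y[c] - y[a]) * (xs - x[a]) / (x[c] - x[a])
            err = np.abs(interp - y[a:c + 1]).max()
            if err < best_err:
                best_err, best_j = err, j
        idx.pop(best_j)
    return idx
```

On a piecewise-linear signal the scheme keeps exactly the breakpoints, mirroring how GPR concentrates mesh vertices on image edges.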
Directory of Open Access Journals (Sweden)
D. V. Lukyanenko
2016-01-01
The main objective of the paper is to present a new analytic-numerical approach to singularly perturbed reaction-diffusion-advection models with solutions containing moving interior layers (fronts). We describe some methods to generate dynamically adapted meshes for an efficient numerical solution of such problems. The approach is based on a priori information about the properties of the moving front provided by asymptotic analysis. In particular, for the mesh construction we take into account a priori asymptotic estimates of the location and speed of the moving front, and of its width and structure. Our algorithms significantly reduce the CPU time and enhance the stability of the numerical process compared with classical approaches. The article is published in the authors' wording.
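The idea of concentrating mesh nodes where the asymptotics predict the front can be sketched in 1D (hypothetical parameters on the unit interval, not the authors' construction): a fixed fraction of the nodes is placed inside an a-priori window around the predicted front location, whose centre and width come from the asymptotic analysis.

```python
import numpy as np

def front_adapted_mesh(x_front, width, n=41, frac=0.6):
    # Non-uniform mesh on [0, 1]: put `frac` of the nodes inside the
    # a-priori front window [x_front - 3*width, x_front + 3*width].
    n_in = int(frac * n)
    n_out = n - n_in
    left = np.linspace(0.0, x_front - 3 * width, n_out // 2, endpoint=False)
    inner = np.linspace(x_front - 3 * width, x_front + 3 * width,
                        n_in, endpoint=False)
    right = np.linspace(x_front + 3 * width, 1.0, n_out - n_out // 2)
    return np.concatenate([left, inner, right])
```

As the front moves, the window (and hence the fine part of the mesh) is re-centred at each time step using the asymptotic estimate of the front speed.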
DeBenedictis, Andrew; Atherton, Timothy J.; Rodarte, Andrea L.; Hirst, Linda S.
2018-03-01
A micrometer-scale elastic shell immersed in a nematic liquid crystal may be deformed by the host if the cost of deforming the shell is comparable to the cost of elastically deforming the nematic. Moreover, such inclusions interact and form chains due to quadrupolar distortions induced in the host. A continuum-theory model using finite elements is developed for this system, using mesh regularization and dynamic refinement to ensure the quality of the numerical representation even for large deformations. From this model, we determine the influence of the shell elasticity, nematic elasticity, and anchoring condition on the shape of the shell, and hence extract parameter values from an experimental realization. Extending the model to multibody interactions, we predict the alignment angle of the chain with respect to the host nematic as a function of aspect ratio, which is found to be in excellent agreement with experiments.
TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling
Energy Technology Data Exchange (ETDEWEB)
Zhong, Z; Zhuang, L [Wayne State University, Detroit, MI (United States); Gu, X; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Chen, H; Zhen, X [Southern Medical University, Guangzhou, Guangdong (China)
2016-06-15
Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time, high-quality image segmentation, especially for large 4D-CT datasets. The objective of this work is to propose a new DIR algorithm, with fast computational speed and high accuracy, using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate an adaptive tetrahedral mesh based on the image features of a reference phase of the 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. Subsequently, the deformation vector fields (DVF) and the other phases of the 4D-CT are obtained by matching each phase of the target 4D-CT images with the correspondingly deformed reference phase. The proposed 4D DIR method is implemented on GPU, significantly increasing computational efficiency due to its parallel computing ability. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved and the tumor position is well captured, with a 3D distance error of 1.14 mm. Compared to a previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses a feature-based mesh and GPU-based parallelism, and demonstrates the capability to compute both high-quality image and motion results with significantly improved computational speed.
Directory of Open Access Journals (Sweden)
Faosan Mapa
2014-01-01
A Wireless Mesh Network (WMN) is a self-organized, self-configured, multi-hop network. The goal of a WMN is to offer users a wireless network that can easily communicate with conventional networks at high speed, with wider coverage and minimal up-front cost. An efficient routing protocol is needed for WMNs that can adaptively support both mesh routers and mesh clients. In this paper, we propose to optimize the OLSR protocol, a proactive routing protocol. We use heuristics that improve OLSR through an adaptive refresh time interval and an improved MPR (multipoint relay) selection algorithm. An analysis of these improvements, the adaptive refresh interval and the modified MPR selection algorithm, shows a significant gain in throughput compared with the original OLSR protocol, at the cost of an increase in delay. From the simulations performed, we conclude that OLSR can be optimized by modifying the selection of MPR nodes based on cost effectiveness and by adapting the refresh interval of hello messages to the conditions of the network.
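The standard OLSR MPR selection that the paper modifies is, at its core, a greedy set cover over the strict two-hop neighbourhood; a minimal sketch (a simplified version of the RFC 3626 heuristic, ignoring willingness and link quality):

```python
def select_mprs(node, neighbors, two_hop):
    # neighbors: set of 1-hop neighbours of `node`
    # two_hop:   dict mapping each 1-hop neighbour to its own neighbour set
    # Greedily pick 1-hop neighbours until every strict 2-hop neighbour
    # is covered by at least one selected multipoint relay (MPR).
    uncovered = set()
    for n in neighbors:
        uncovered |= two_hop[n]
    uncovered -= neighbors | {node}
    mprs = set()
    while uncovered:
        best = max(neighbors - mprs,
                   key=lambda n: len(two_hop[n] & uncovered))
        gained = two_hop[best] & uncovered
        if not gained:
            break  # remaining 2-hop nodes are unreachable via 1-hop neighbours
        mprs.add(best)
        uncovered -= gained
    return mprs
```

A cost-aware variant, as proposed in the paper, would replace the `max` key with a metric that trades coverage against link cost.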
A MATLAB Script for Solving 2D/3D Minimum Compliance Problems using Anisotropic Mesh Adaptation
DEFF Research Database (Denmark)
Jensen, Kristian Ejlebjerg
2017-01-01
We present a pure MATLAB implementation for solving 2D/3D compliance minimization problems using the density method. A filtered design variable with a minimum length scale is computed using a Helmholtz-type differential equation. The optimality criterion is used as the optimizer, and to avoid local minima we ... the implementation totals some 5,000 lines of MATLAB code, but the functions associated with the forward analysis, geometry/mesh setup and optimization are concise and well documented, so the implementation can be used as a starting point for research on related topics.
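The Helmholtz-type filter mentioned above can be sketched in 1D (an illustrative finite-difference version with Neumann boundaries, written in Python rather than the paper's MATLAB): solving (I - r²∇²)ρ̃ = ρ smooths the design field over a length scale r while preserving its volume.

```python
import numpy as np

def helmholtz_filter_1d(rho, dx, radius):
    # Solve (I - r^2 d^2/dx^2) rho_f = rho with homogeneous Neumann BCs.
    n = len(rho)
    r2 = radius ** 2 / dx ** 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * r2
        A[i, max(i - 1, 0)] += -r2      # reflection implements Neumann BC
        A[i, min(i + 1, n - 1)] += -r2
    return np.linalg.solve(A, rho)
```

Every row of the matrix sums to one, so a uniform field passes through unchanged and the total material volume is conserved, which is the property that makes this filter popular in density-based topology optimization.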
Adaptive Ridge Point Refinement for Seeds Detection in X-Ray Coronary Angiogram
Directory of Open Access Journals (Sweden)
Ruoxiu Xiao
2015-01-01
A seed point is a prerequisite for tracking-based methods that extract centerlines or vascular structures from an angiogram. In this paper, a novel seed-point detection method for coronary artery segmentation is proposed. Vessels in the image are first enhanced according to the distribution of the Hessian eigenvalues in multiscale space; consequently, the centerlines of tubular vessels are also enhanced. Ridge points are extracted as candidate seed points and then refined according to their mathematical definition; the theoretical feasibility of this method is also proven. Finally, all detected ridge points are checked against a self-adaptive threshold to improve the robustness of the results. Clinical angiograms are used to evaluate the performance of the proposed algorithm, and the results show that it detects a large set of true seed points located on most branches of the vessels. Compared with traditional seed-point detection algorithms, the proposed method detects a larger number of seed points with higher precision. Since the proposed method achieves accurate seed detection without any human interaction, it can be utilized in several clinical applications, such as vessel segmentation, centerline extraction, and topological identification.
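The Hessian-eigenvalue enhancement step can be sketched at a single scale (a simplified illustration; the paper works in multiscale space and adds ridge refinement): a bright tubular structure has one strongly negative Hessian eigenvalue across the vessel and a near-zero eigenvalue along it.

```python
import numpy as np

def hessian_ridge_candidates(img, thresh):
    # Per-pixel 2x2 Hessian via finite differences, then its eigenvalues
    # in closed form; bright ridges have a strongly negative eigenvalue.
    dy, dx = np.gradient(img)
    dyy, dyx = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    tr = dxx + dyy
    det = dxx * dyy - dxy * dyx
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    lam1 = tr / 2.0 - disc          # smaller (most negative) eigenvalue
    return lam1 < -thresh           # candidate seed mask
```

On a synthetic image with a single bright horizontal ridge, the candidates fall exactly on the ridge row.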
Biomolecular structure refinement based on adaptive restraints using local-elevation simulation
International Nuclear Information System (INIS)
Christen, Markus; Keller, Bettina; Gunsteren, Wilfred F. van
2007-01-01
Introducing experimental values as restraints into molecular dynamics (MD) simulation, to bias the values of particular molecular properties, such as nuclear Overhauser effect intensities or distances, dipolar couplings, 3J-coupling constants, chemical shifts or crystallographic structure factors, towards experimental values, is a widely used structure refinement method. Because multiple torsion-angle values φ correspond to the same 3J-coupling constant and are separated by high energy barriers, restraining 3J-coupling constants remains difficult. A method to adaptively enforce restraints using a local-elevation (LE) potential energy function is presented and applied to 3J-coupling constant restraining in an MD simulation of hen egg-white lysozyme (HEWL). The method successfully enhances sampling of the restrained torsion angles until the 37 experimental 3J-coupling constant values are reached, thereby also improving the agreement with the 1,630 experimental NOE atom-atom distance upper bounds. Afterwards, the torsion angles φ are kept restrained by the built-up local-elevation potential energies.
Kimura, Satoshi; Candy, Adam S.; Holland, Paul R.; Piggott, Matthew D.; Jenkins, Adrian
2013-07-01
Several different classes of ocean model are capable of representing floating glacial ice shelves. We describe the incorporation of ice shelves into Fluidity-ICOM, a nonhydrostatic finite-element ocean model with the capacity to utilize meshes that are unstructured and adaptive in three dimensions. This geometric flexibility offers several advantages over previous approaches. The model represents melting and freezing on all ice-shelf surfaces including vertical faces, treats the ice shelf topography as continuous rather than stepped, and does not require any smoothing of the ice topography or any of the additional parameterisations of the ocean mixed layer used in isopycnal or z-coordinate models. The model can also represent a water column that decreases to zero thickness at the 'grounding line', where the floating ice shelf is joined to its tributary ice streams. The model is applied to idealised ice-shelf geometries in order to demonstrate these capabilities. In these simple experiments, arbitrarily coarsening the mesh outside the ice-shelf cavity has little effect on the ice-shelf melt rate, while the mesh resolution within the cavity is found to be highly influential. Smoothing the vertical ice front results in faster flow along the smoothed ice front, allowing greater exchange with the ocean than in simulations with a realistic ice front. A vanishing water-column thickness at the grounding line has little effect in the simulations studied. We also investigate the response of ice shelf basal melting to variations in deep water temperature in the presence of salt stratification.
Refined adaptive optics simulation with wide field of view for the E-ELT
International Nuclear Information System (INIS)
Chebbo, Manal
2012-01-01
Refined simulation tools for wide-field AO systems (such as MOAO, MCAO or LTAO) on ELTs present new challenges. The increasing number of degrees of freedom (which scales as the square of the telescope diameter) makes standard simulation codes unusable, due to the huge number of operations to be performed at each step of the Adaptive Optics (AO) loop. This computational burden requires new approaches to the computation of the DM voltages from WFS data. Classical matrix inversion and matrix-vector multiplication have to be replaced by a cleverer iterative solution of the Least Square or Minimum Mean Square Error criterion (based on sparse-matrix approaches). Moreover, for this new generation of AO systems, the concepts themselves become more complex: data fusion from multiple Laser and Natural Guide Stars (LGS/NGS) has to be optimized; mirrors covering the whole field of view have to be coupled with dedicated mirrors inside the scientific instrument itself, using split or integrated tomography schemes; differential pupil and/or field rotations have to be considered; etc. All these new features should be carefully simulated, analysed and quantified in terms of performance before any implementation in AO systems. For these reasons I developed, in collaboration with ONERA, a full simulation code based on the iterative solution of linear systems with many parameters (using sparse matrices). On this basis, I introduced new concepts of filtering and data fusion (LGS/NGS) to effectively manage modes such as tip, tilt and defocus in the entire tomographic reconstruction process. The code will also eventually help to develop and test complex control laws (multi-DM and multi-field) that have to manage a combination of an adaptive telescope and a post-focal instrument including dedicated deformable mirrors. The first application of this simulation tool has been studied in the framework of the EAGLE multi-object spectrograph
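The iterative sparse solves that replace explicit matrix inversion are typically Krylov methods; a matrix-free conjugate-gradient sketch (generic textbook CG, not the thesis code) needs only the ability to apply the operator, which is where the sparsity pays off:

```python
import numpy as np

def conjugate_gradient(A_mul, b, tol=1e-10, maxit=200):
    # Matrix-free CG for SPD systems: A_mul(v) applies the (sparse)
    # reconstruction operator without ever forming or inverting the matrix.
    x = np.zeros_like(b)
    r = b - A_mul(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A_mul(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

For an ELT-scale system the operator application itself is a sparse matrix-vector product, so each iteration costs far less than the dense algebra it replaces.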
Viré, Axelle; Xiang, Jiansheng; Milthaler, Frank; Farrell, Patrick Emmet; Piggott, Matthew David; Latham, John-Paul; Pavlidis, Dimitrios; Pain, Christopher Charles
2012-12-01
Fluid-structure interactions are modelled by coupling the finite element fluid/ocean model `Fluidity-ICOM' with a combined finite-discrete element solid model `Y3D'. Because separate meshes are used for the fluids and solids, the present method is flexible in terms of discretisation schemes used for each material. Also, it can tackle multiple solids impacting on one another, without having ill-posed problems in the resolution of the fluid's equations. Importantly, the proposed approach ensures that Newton's third law is satisfied at the discrete level. This is done by first computing the action-reaction force on a supermesh, i.e. a function superspace of the fluid and solid meshes, and then projecting it to both meshes to use it as a source term in the fluid and solid equations. This paper demonstrates the properties of spatial conservation and accuracy of the method for a sphere immersed in a fluid, with prescribed fluid and solid velocities. While spatial conservation is shown to be independent of the mesh resolutions, accuracy requires fine resolutions in both fluid and solid meshes. It is further highlighted that unstructured meshes adapted to the solid concentration field reduce the numerical errors, in comparison with uniformly structured meshes with the same number of elements. The method is verified on flow past a falling sphere. Its potential for ocean applications is further shown through the simulation of vortex-induced vibrations of two cylinders and the flow past two flexible fibres.
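The supermesh idea is easiest to see in 1D (an illustrative analogue; Fluidity-ICOM supermeshes unstructured 3D meshes): the supermesh breakpoints are the union of both meshes' breakpoints, and projecting cell-wise data through it conserves integrals exactly, which is what guarantees the discrete action-reaction balance.

```python
import numpy as np

def supermesh_1d(mesh_a, mesh_b):
    # breakpoints of the 1D supermesh: union of both meshes' breakpoints
    return np.unique(np.concatenate([mesh_a, mesh_b]))

def project_conservative(mesh_src, f_src, mesh_dst):
    # Conservatively project a cell-wise constant field from mesh_src to
    # mesh_dst via the supermesh (both meshes must share their end points).
    s = supermesh_1d(mesh_src, mesh_dst)
    widths = np.diff(s)
    mids = 0.5 * (s[:-1] + s[1:])
    f_super = f_src[np.searchsorted(mesh_src, mids) - 1]
    dst_idx = np.searchsorted(mesh_dst, mids) - 1
    integrals = np.bincount(dst_idx, weights=f_super * widths,
                            minlength=len(mesh_dst) - 1)
    return integrals / np.diff(mesh_dst)
```

Because every supermesh cell lies entirely inside one cell of each parent mesh, the projected field carries exactly the same integral as the source field, independent of how the two meshes are refined.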
Bode, Paul; Ostriker, Jeremiah P.
2003-03-01
An improved implementation of an N-body code for simulating collisionless cosmological dynamics is presented. TPM (tree particle-mesh) combines the PM method on large scales with a tree code to handle particle-particle interactions at small separations. After the global PM forces are calculated, spatially distinct regions above a given density contrast are located; the tree code calculates the gravitational interactions inside these denser objects at higher spatial and temporal resolution. The new implementation includes individual particle time steps within trees, an improved treatment of tidal forces on trees, new criteria for higher force resolution and choice of time step, and parallel treatment of large trees. TPM is compared to P3M and a tree code (GADGET) and is found to give equivalent results in significantly less time. The implementation is highly portable (requiring a FORTRAN compiler and MPI) and efficient on parallel machines. The source code can be found on the World Wide Web.
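The PM half of such a tree particle-mesh scheme rests on depositing particle mass onto a mesh before the FFT-based force solve. As an illustration of the idea only (a 1D sketch of my own, not the TPM code; the function name and periodic wrap are assumptions), here is the standard cloud-in-cell assignment:

```python
import numpy as np

def cic_deposit(positions, n_cells, box_size=1.0):
    """Deposit unit-mass particles onto a periodic 1D mesh with
    cloud-in-cell (linear) weights; the total deposited mass, i.e.
    sum(rho) * dx, equals the particle count."""
    rho = np.zeros(n_cells)
    dx = box_size / n_cells
    for x in positions:
        s = x / dx - 0.5              # position in cell-centre units
        i = int(np.floor(s))          # index of the left cell centre
        w = s - i                     # fraction assigned to the right cell
        rho[i % n_cells] += (1.0 - w) / dx
        rho[(i + 1) % n_cells] += w / dx
    return rho
```

Mass conservation is exact by construction, which is what makes the PM step a valid starting point for the higher-resolution tree correction inside dense regions.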
Surface meshing with curvature convergence
Li, Huibin
2014-06-01
Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by mesh quality. In practice, Delaunay refinement algorithms offer satisfactory solutions to high-quality mesh generation. Theoretical proofs for volume-based and surface-based Delaunay refinement algorithms have been established, but those for conformal-parameterization-based ones remain wide open. This work focuses on curvature measure convergence for conformal-parameterization-based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by planar Delaunay refinement algorithms, and produces a high-quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on the Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verify the theoretical results and demonstrate the efficiency of the meshing algorithm. © 2014 IEEE.
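On a triangle mesh, the Gaussian curvature measure at a vertex is the classical angle defect: 2π minus the sum of incident triangle angles. A minimal sketch of my own (not the paper's algorithm), checked against Gauss-Bonnet on a regular tetrahedron:

```python
import numpy as np

def angle_defects(vertices, faces):
    """Discrete Gaussian curvature measure per vertex: 2*pi minus the
    sum of incident triangle angles (the angle defect)."""
    V = np.asarray(vertices, float)
    defects = np.full(len(V), 2.0 * np.pi)
    for f in faces:
        for k in range(3):
            p, q, r = V[f[k]], V[f[(k + 1) % 3]], V[f[(k + 2) % 3]]
            u, v = q - p, r - p
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            defects[f[k]] -= np.arccos(np.clip(cosang, -1.0, 1.0))
    return defects

# Regular tetrahedron: each vertex meets three 60-degree angles, so every
# defect is pi, and the total is 4*pi = 2*pi*chi with Euler number chi = 2.
tet_v = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
tet_f = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
```

The Gauss-Bonnet identity (total defect = 2πχ) holds exactly for any closed triangle mesh, which is the combinatorial counterpart of the intrinsic curvature-measure convergence discussed in the abstract.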
N-body simulations for f(R) gravity using a self-adaptive particle-mesh code
International Nuclear Information System (INIS)
Zhao Gongbo; Koyama, Kazuya; Li Baojiu
2011-01-01
We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.
2013-01-01
parent side of the face
$$\int_{-1}^{1}\left(q^{R}(\xi)-\tilde{q}^{R}(\xi)\right)\psi_i(\xi)\,d\xi = 0, \qquad (31)$$
where $q^{R}$ is the continuous projection of the variables $q^{R_1}$ and $q^{R_2}$ from children sides ... to a parent side, and
$$\tilde{q}^{R}(\xi) = \begin{cases} q^{R_1}(z^{(1)}) = q^{R_1}\!\left(\dfrac{\xi-o^{(1)}}{s}\right) & \text{for } -1 \le \xi \le 0^{-},\\[4pt] q^{R_2}(z^{(2)}) = q^{R_2}\!\left(\dfrac{\xi-o^{(2)}}{s}\right) & \text{for } 0^{+} \le \xi \le 1. \end{cases} \qquad (32)$$
Note that $\tilde{q}^{R}(\xi)$ ... allows for a discontinuity at $\xi = 0$. Substituting (32) into (31) yields
$$\int_{-1}^{0}\left(q^{R}(\xi)-q^{R_1}\!\left(\frac{\xi-o^{(1)}}{s}\right)\right)\psi_i(\xi)\,d\xi + \int_{0}^{1}\left(q^{R}(\xi)-q^{R_2}\!\left(\frac{\xi-o^{(2)}}{s}\right)\right)\psi_i(\xi)\,d\xi$$
Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution
Energy Technology Data Exchange (ETDEWEB)
Delzanno, G L [Los Alamos National Laboratory; Finn, J M [Los Alamos National Laboratory
2009-01-01
Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to widespread deployment of the technique has been the formulation of robust, reliable mesh-motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1, 2], the advantages of this approach with regard to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh-motion approaches, without resorting to ad-hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3, 4]). We explore two distinct r-refinement implementations of MK: direct, where the current mesh relates to an initial, unchanging mesh, and sequential, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a paradigmatic hyperbolic PDE, the advection of a passive scalar. Imposed velocity flow fields of varying vorticity levels and flow shears are considered.
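In one dimension, volume equidistribution reduces to the classical equidistribution principle: place mesh points so every cell carries an equal share of a monitor function's integral, which can be done directly through the inverse of the cumulative integral. A hedged 1D illustration of that principle (my own sketch, not the authors' multidimensional Monge-Kantorovich solver):

```python
import numpy as np

def equidistribute(monitor, a, b, n_cells, n_quad=2000):
    """Place n_cells+1 mesh points in [a, b] so each cell holds an equal
    share of the monitor function's integral (1D equidistribution).
    The monitor must be positive; trapezoidal quadrature is used."""
    x = np.linspace(a, b, n_quad)
    w = monitor(x)
    # cumulative integral of the monitor (the 1D "volume" map)
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cdf[-1], n_cells + 1)
    return np.interp(targets, cdf, x)   # invert the cumulative map
```

A uniform monitor returns a uniform mesh, while a monitor peaked at a feature clusters cells around it without tangling, since the cumulative map is strictly monotone.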
Breier, A.; Bittrich, L.; Hahn, J.; Spickenheuer, A.
2017-10-01
For the sustainable repair of abdominal wall hernias, the application of hernia meshes is required. One reason for the relapse of a hernia after surgery is seen in an inadequate adaptation of the mechanical properties of the mesh to the movements of the abdominal wall. Differences in stiffness between the mesh and the abdominal tissue cause tension, friction and stress, resulting in a deficient tissue response and subsequently in a recurrence of the hernia, preferentially in the marginal area of the mesh. Embroidery technology enables a targeted influence on the mechanical properties of the generated textile structure through directed thread deposition. Textile parameters like stitch density, alignment and angle can be changed easily and locally in the embroidery pattern to generate a spatially resolved mesh with mechanical properties adapted to the requirements of the surrounding tissue. To determine those requirements, the movements of the abdominal wall and the resulting distortions need to be known. This study was conducted to gain optical data on abdominal wall movements by non-invasive ARAMIS measurement on 39 test persons, in order to estimate the direction and magnitude of the major strains.
Hybrid direct and iterative solvers for h refined grids with singularities
Paszyński, Maciej R.
2015-04-27
This paper describes a hybrid direct and iterative solver for two- and three-dimensional h-adaptive grids with point singularities. The point singularities are eliminated by using a sequential solver with linear computational cost O(N) on the CPU [1]. The remaining Schur complements are submitted to an incomplete LU preconditioned conjugate gradient (ILUPCG) iterative solver. The approach is compared to the standard algorithm, which performs static condensation over the entire mesh and executes the ILUPCG algorithm on top of it. The hybrid solver is applied to two- or three-dimensional grids automatically h-refined towards point or edge singularities. The automatic refinement is based on relative error estimates between the coarse and fine mesh solutions [2], and the optimal refinements are selected using projection-based interpolation. The computational mesh is partitioned into sub-meshes with local point and edge singularities separated. This is done by using a greedy algorithm.
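The Schur complement handed to the ILUPCG stage is the standard static-condensation object: eliminate one block of unknowns and solve the reduced system for the rest. A small dense sketch of that algebra (the paper of course works with sparse matrices and a real ILU preconditioner, both omitted here):

```python
import numpy as np

def schur_complement(A, interior):
    """Statically condense the 'interior' unknowns out of A, returning
    the Schur complement acting on the remaining unknowns together with
    their indices. Dense illustration only."""
    n = A.shape[0]
    keep = [i for i in range(n) if i not in set(interior)]
    Aii = A[np.ix_(interior, interior)]
    Aib = A[np.ix_(interior, keep)]
    Abi = A[np.ix_(keep, interior)]
    Abb = A[np.ix_(keep, keep)]
    # S = A_bb - A_bi A_ii^{-1} A_ib
    return Abb - Abi @ np.linalg.solve(Aii, Aib), keep
```

Solving the condensed system with the correspondingly condensed right-hand side reproduces exactly the retained entries of the full solution, which is why the elimination step loses nothing.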
Obtainment of nuclear power plant dynamic parameters by adaptive mesh technique
International Nuclear Information System (INIS)
Carvalho Miranda, W. de.
1979-01-01
This thesis addresses the problem of determining the parameters of the mathematical model of a nuclear reactor, including a non-linearity treated as a bi-linear system. Because the model is non-linear, its parameters cannot be determined with classical techniques such as obtaining its experimental frequency response. In the present work, we examine the possibility of using a model whose parameters adapt according to a Newton-type minimization algorithm, showing that in the case of single-parameter determination the method is successful. This work was done using the CSMP (Continuous System Modelling Program) on the IBM 1130 of IME. (author)
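For the single-parameter case, a Newton-type minimization of a least-squares misfit is only a few lines. The sketch below is my own (modern Python rather than CSMP, with finite-difference derivatives as an added assumption); it recovers a decay constant from noiseless data:

```python
import numpy as np

def newton_fit(model, observed, times, theta0, steps=25, h=1e-4):
    """Minimise J(theta) = sum of squared residuals over a single
    parameter using Newton steps; first and second derivatives of J
    are approximated by central finite differences."""
    def J(th):
        r = model(th, times) - observed
        return float(r @ r)
    theta = theta0
    for _ in range(steps):
        g = (J(theta + h) - J(theta - h)) / (2.0 * h)           # dJ/dtheta
        H = (J(theta + h) - 2.0 * J(theta) + J(theta - h)) / h ** 2
        if H <= 0:                 # not locally convex: damped gradient step
            theta -= 0.1 * g
            continue
        theta -= g / H             # Newton step
    return theta
```

With a bi-linear or otherwise non-linear model the misfit is generally non-quadratic, which is precisely why an adaptive, iterative scheme is needed instead of a frequency-response fit.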
Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids
Directory of Open Access Journals (Sweden)
Sudi Mungkasi
2016-01-01
Full Text Available This paper presents a numerical entropy production (NEP) scheme for two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening in solving the shallow water equations using a finite volume method. Numerical simulations show that NEP serves successfully as a refinement/coarsening indicator in the adaptive mesh finite volume method: the method refines the mesh around non-smooth regions and coarsens it around smooth regions.
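The entropy-production idea is easiest to see in one dimension. The sketch below is my own and uses 1D Burgers' equation instead of the paper's 2D shallow water system (entropy η = u²/2, entropy flux ψ = u³/3, upwind interface fluxes): the indicator is near zero in smooth cells and large in magnitude at a shock, which is exactly what makes it usable as a refinement flag.

```python
import numpy as np

def nep_indicator(u_old, u_new, dt, dx):
    """Cell-wise numerical entropy production for 1D Burgers' equation,
    with entropy eta = u^2/2 and entropy flux psi = u^3/3; for entropy
    solutions this quantity is non-positive, and its magnitude spikes
    at shocks."""
    eta = lambda u: 0.5 * u ** 2
    psi = lambda u: u ** 3 / 3.0
    # upwind entropy flux at the interior interfaces
    flux = np.where(u_new[:-1] > 0.0, psi(u_new[:-1]), psi(u_new[1:]))
    nep = np.zeros_like(u_new)
    nep[1:-1] = (eta(u_new[1:-1]) - eta(u_old[1:-1])) / dt \
        + (flux[1:] - flux[:-1]) / dx
    return nep
```

Cells whose |NEP| exceeds a tolerance would be flagged for refinement; cells with |NEP| near zero are candidates for coarsening.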
Selvan, S Easter; Borckmans, Pierre B; Chattopadhyay, A; Absil, P-A
2013-09-01
In reality, seemingly paradoxically given the classical definition of independent component analysis (ICA), the true sources are often not strictly uncorrelated. With this in mind, this letter concerns a framework to extract quasi-uncorrelated sources with finite supports by optimizing a range-based contrast function under unit-norm constraints (to handle the inherent scaling indeterminacy of ICA) but without orthogonality constraints. Despite the appealing properties of the range-based contrast function (e.g., the absence of mixing local optima), the function is not differentiable everywhere. Unfortunately, there is a dearth of literature on derivative-free optimizers that effectively handle such a nonsmooth yet promising contrast function. This is the compelling reason for the design of a nonsmooth optimization algorithm on a manifold of matrices having unit-norm columns, with the following objectives: to ascertain convergence to a Clarke stationary point of the contrast function and to adhere to the necessary unit-norm constraints more naturally. The proposed nonsmooth optimization algorithm crucially relies on the design and analysis of an extension of the mesh adaptive direct search (MADS) method to handle locally Lipschitz objective functions defined on the sphere. The applicability of the algorithm in the ICA domain is demonstrated with simulations involving natural, face, aerial, and texture images.
Samaké, Abdoulaye; Rampal, Pierre; Bouillon, Sylvain; Ólason, Einar
2017-12-01
We present a parallel implementation framework for a new dynamic/thermodynamic sea-ice model, called neXtSIM, based on the Elasto-Brittle rheology and using an adaptive mesh. The spatial discretisation of the model is done using the finite-element method. The temporal discretisation is semi-implicit, and the advection is achieved using either a pure Lagrangian scheme or an Arbitrary Lagrangian Eulerian (ALE) scheme. The parallel implementation presented here focuses on the distributed-memory approach using the message-passing library MPI. The efficiency and scalability of the parallel algorithms are illustrated by numerical experiments performed using up to 500 processor cores of a cluster computing system. The performance obtained by the proposed parallel implementation of the neXtSIM code is shown to be sufficient to perform simulations for state-of-the-art sea-ice forecasting and geophysical process studies over a geographical domain of several million square kilometres, such as the Arctic region.
Maltby, John; Day, Liz; Hall, Sophie
2015-01-01
The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and the contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being. PMID:26132197
Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team
2017-11-01
We present a consistent cell-averaged discretization for incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).
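The least-squares construction of operator stencils can be illustrated at first order: reconstruct a cell gradient from neighbouring values by solving a small least-squares system over displacement vectors. This is a generic sketch of the idea (my own names and setup, not the group's embedded-boundary code):

```python
import numpy as np

def ls_gradient(center, neighbors, v_center, v_neighbors):
    """Least-squares gradient at a cell: find g minimising
    || D g - dv ||_2, where each row of D is the displacement to a
    neighbour and dv holds the corresponding value differences."""
    D = np.asarray(neighbors, float) - np.asarray(center, float)
    dv = np.asarray(v_neighbors, float) - v_center
    g, *_ = np.linalg.lstsq(D, dv, rcond=None)
    return g
```

The reconstruction is exact for linear fields by construction; higher-order variants add rows for higher geometric moments, which is the role the implicit-function moment computation plays near cut cells.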
The quasidiffusion method for transport problems on unstructured meshes
Wieselquist, William A.
2009-06-01
In this work, we develop a quasidiffusion (QD) method for solving radiation transport problems on unstructured quadrilateral meshes in 2D Cartesian geometry, for example hanging-node meshes from adaptive mesh refinement (AMR) applications or skewed quadrilateral meshes from radiation hydrodynamics with Lagrangian meshing. The main result of the work is a new low-order quasidiffusion (LOQD) discretization on arbitrary quadrilaterals and a strategy for the efficient iterative solution which uses Krylov methods and incomplete LU factorization (ILU) preconditioning. The LOQD equations are a non-symmetric set of first-order PDEs that in second-order form resembles convection-diffusion with a diffusion tensor, with the difference that the LOQD equations contain extra cross-derivative terms. Our finite volume (FV) discretization of the LOQD equations is compared with three LOQD discretizations from literature. We then present a conservative, short characteristics discretization based on subcell balances (SCSB) that uses polynomial exponential moments to achieve robust behavior in various limits (e.g. small cells and voids) and is second-order accurate in space. A linear representation of the isotropic component of the scattering source based on face-average and cell-average scalar fluxes is also proposed and shown to be effective in some problems. In numerical tests, our QD method with linear scattering source representation shows some advantages compared to other transport methods. We conclude with avenues for future research and note that this QD method may easily be extended to arbitrary meshes in 3D Cartesian geometry.
Multimesh anisotropic adaptivity for the Boltzmann transport equation
International Nuclear Information System (INIS)
Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Farrell, P.E.; Eaton, M.D.; Warner, P.
2013-01-01
Highlights: ► We solve the Boltzmann transport equation using anisotropically adaptive finite element meshes. ► The finite element mesh is resolved with minimal user input. ► Anisotropic adaptivity uses fewer elements than adaptive mesh refinement for the same finite element error. ► This paper also demonstrates the use of separate meshes for each energy group within the multigroup discretisation. ► The methods are applied to a range of fixed source and eigenvalue problems. - Abstract: This article presents a new adaptive finite element based method for the solution of the spatial dimensions of the Boltzmann transport equation. The method applies a curvature based error metric to locate the under and over resolved regions of a solution and this, in turn, is used to guide the refinement and coarsening of the spatial mesh. The error metrics and re-meshing procedures are designed such that they enable anisotropic resolution to form in the mesh should it be appropriate to do so. The adaptive mesh enables the appropriate resolution to be applied throughout the whole domain of a problem and so increases the efficiency of the solution procedure. Another new approach is also described that allows independent adaptive meshes to form for each of the energy group fluxes. The use of independent meshes can significantly improve computational efficiency when solving problems where the different group fluxes require high resolution over different regions. The mesh to mesh interpolation is made possible through the use of a ‘supermeshing’ procedure that ensures the conservation of particles when calculating the group to group scattering sources. Finally, it is shown how these methods can be incorporated within a solver to resolve both fixed source and eigenvalue problems. A selection of both fixed source and eigenvalue problems are solved in order to demonstrate the capabilities of these methods.
Domain Decomposition with Local Mesh Refinement.
1989-08-01
Directory of Open Access Journals (Sweden)
Georg Layher
2014-12-01
Full Text Available The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory
Czech Academy of Sciences Publication Activity Database
Korous, L.; Šolín, Pavel
2013-01-01
Roč. 95, č. 1 (2013), S425-S444 ISSN 0010-485X Institutional support: RVO:61388998 Keywords: numerical simulation * finite element method * hp-adaptivity Subject RIV: BA - General Mathematics Impact factor: 1.055, year: 2013
Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems
Cotter, Simon L.
2013-01-01
Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
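The coupling described above can be caricatured in a few lines: run a short Gillespie simulation of a birth-death process, then flag for refinement only the coarse cells of state space the trajectory actually visits. The 1D toy problem and all names are my own; the paper itself works with finite elements on the Fokker-Planck equation.

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Exact stochastic simulation (Gillespie) of a birth-death process
    with birth rate k_birth and per-capita death rate k_death; returns
    the list of visited copy numbers."""
    rng = random.Random(seed)
    x, t, visited = x0, 0.0, [x0]
    while t < t_end:
        total = k_birth + k_death * x        # total propensity
        t += rng.expovariate(total)          # time to next reaction
        x += 1 if rng.random() * total < k_birth else -1
        visited.append(x)
    return visited

def refine_where_visited(visited, x_max, coarse=8):
    """Mark the coarse cells of [0, x_max] that contain visited states;
    these are the cells an adaptive FEM would refine, the rest would
    stay coarse (or be coarsened)."""
    width = x_max / coarse
    flagged = set(min(int(x / width), coarse - 1) for x in visited)
    return sorted(flagged)
```

For birth rate 10 and death rate 1, the stationary law is approximately Poisson with mean 10, so only the low-copy-number cells of the state space carry non-negligible probability and get flagged.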
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
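The recursive subdivision driving such Cartesian grids is compact to express. A hedged sketch of my own (a plain quadtree refined toward a point of interest; the actual code additionally handles cut cells, connectivity and the flow solution):

```python
class QuadCell:
    """Node of a quadtree over an axis-aligned square, refined by
    recursive subdivision as in cut-cell Cartesian grid generation."""
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine_where(self, needs_refinement, max_depth):
        """Split this cell into four and recurse wherever the
        user-supplied predicate asks for refinement."""
        if max_depth == 0 or not needs_refinement(self):
            return
        h = self.size / 2.0
        self.children = [QuadCell(self.x + i * h, self.y + j * h, h)
                         for i in (0, 1) for j in (0, 1)]
        for c in self.children:
            c.refine_where(needs_refinement, max_depth - 1)

    def leaves(self):
        """Leaf cells, i.e. the actual computational grid."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Refining three levels toward a single point splits exactly one cell per level, so the leaf count grows by three per level while the smallest cell halves each time, which is the locality that makes tree-based adaptive grids economical.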
Cache-mesh, a Dynamics Data Structure for Performance Optimization
DEFF Research Database (Denmark)
Nguyen, Tuan T.; Dahl, Vedrana Andersen; Bærentzen, J. Andreas
2017-01-01
This paper proposes the cache-mesh, a dynamic mesh data structure in 3D that allows modifications of stored topological relations effortlessly. The cache-mesh can adapt to arbitrary problems and provide fast retrieval to the most-referred-to topological relations. This adaptation requires trivial...
International Nuclear Information System (INIS)
Gheribi, Aimen E.; Robelin, Christian; Digabel, Sebastien Le; Audet, Charles; Pelton, Arthur D.
2011-01-01
Highlights: → Systematic search of low melting temperatures in multicomponent systems. → Calculation of eutectic in multicomponent systems. → The FactSage software and the direct search algorithm are used simultaneously. - Abstract: It is often of interest, for a multicomponent system, to identify the low melting compositions at which local minima of the liquidus surface occur. The experimental determination of these minima can be very time-consuming. An alternative is to employ the CALPHAD approach using evaluated thermodynamic databases containing optimized model parameters giving the thermodynamic properties of all phases as functions of composition and temperature. Liquidus temperatures are then calculated by Gibbs free energy minimization algorithms which access the databases. Several such large databases for many multicomponent systems have been developed over the last 40 years, and calculated liquidus temperatures are generally quite accurate. In principle, one could then search for local liquidus minima by simply calculating liquidus temperatures over a compositional grid. In practice, such an approach is prohibitively time-consuming for all but the simplest systems since the required number of grid points is extremely large. In the present article, the FactSage database computing system is coupled with the powerful Mesh Adaptive Direct Search (MADS) algorithm in order to search for and calculate automatically all liquidus minima in a multicomponent system. Sample calculations for a 4-component oxide system, a 7-component chloride system, and a 9-component ferrous alloy system are presented. It is shown that the algorithm is robust and rapid.
Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media
Chueh, C.C.
2010-10-01
An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.
Bennett, Beth Anne V.; Fielding, Joseph; Mauro, Richard J.; Long, Marshall B.; Smooke, Mitchell D.
1999-12-01
Axisymmetric laminar methane-air Bunsen flames are computed for two equivalence ratios: lean (Φ = 0.776), in which the traditional Bunsen cone forms above the burner; and rich (Φ = 1.243), in which the premixed Bunsen cone is accompanied by a diffusion flame halo located further downstream. Because the extremely large gradients at premixed flame fronts greatly exceed those in diffusion flames, their resolution requires a more sophisticated adaptive numerical method than those ordinarily applied to diffusion flames. The local rectangular refinement (LRR) solution-adaptive gridding method produces robust unstructured rectangular grids, utilizes multiple-scale finite-difference discretizations, and incorporates Newton's method to solve elliptic partial differential equation systems simultaneously. The LRR method is applied to the vorticity-velocity formulation of the fully elliptic governing equations, in conjunction with detailed chemistry, multicomponent transport and an optically-thin radiation model. The computed lean flame is lifted above the burner, and this liftoff is verified experimentally. For both lean and rich flames, grid spacing greatly influences the Bunsen cone's position, which only stabilizes with adequate refinement. In the rich configuration, the oxygen-free region above the Bunsen cone inhibits the complete decay of CH4, thus indirectly initiating the diffusion flame halo where CO oxidizes to CO2. In general, the results computed by the LRR method agree quite well with those obtained on equivalently refined conventional grids, yet the former require less than half the computational resources.
Quadrilateral/hexahedral finite element mesh coarsening
Staten, Matthew L; Dewey, Mark W; Scott, Michael A; Benzley, Steven E
2012-10-16
A technique for coarsening a finite element mesh ("FEM") is described. This technique includes identifying a coarsening region within the FEM to be coarsened. Perimeter chords running along perimeter boundaries of the coarsening region are identified. The perimeter chords are redirected to create an adaptive chord separating the coarsening region from a remainder of the FEM. The adaptive chord runs through mesh elements residing along the perimeter boundaries of the coarsening region. The adaptive chord is then extracted to coarsen the FEM.
Spherical geodesic mesh generation
Energy Technology Data Exchange (ETDEWEB)
Fung, Jimmy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kenamond, Mark Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burton, Donald E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-27
In ALE simulations with moving meshes, mesh topology has a direct influence on feature representation and code robustness. In three-dimensional simulations, modeling spherical volumes and features is particularly challenging for a hydrodynamics code. Calculations on traditional spherical meshes (such as spin meshes) often lead to errors and symmetry breaking. Although the underlying differencing scheme may be modified to rectify this, the differencing scheme may not be accessible. This work documents the use of spherical geodesic meshes to mitigate solution-mesh coupling. These meshes are generated notionally by connecting geodesic surface meshes to produce triangular-prismatic volume meshes. This mesh topology is fundamentally different from traditional mesh topologies and displays superior qualities such as topological symmetry. This work describes the geodesic mesh topology as well as motivating demonstrations with the FLAG hydrocode.
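Geodesic surface meshes of this kind are typically generated by recursive subdivision of a platonic solid, with each edge midpoint projected back onto the sphere. A minimal sketch of one subdivision level (my own, seeded from an octahedron; the seed solid and subdivision depth used at LANL are not stated here):

```python
import math

def subdivide(vertices, faces):
    """One level of geodesic refinement: split each spherical triangle
    into four by projecting edge midpoints onto the unit sphere."""
    verts = list(vertices)
    cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:                 # share midpoints across faces
            m = [(a + b) / 2.0 for a, b in zip(verts[i], verts[j])]
            n = math.sqrt(sum(c * c for c in m))
            verts.append(tuple(c / n for c in m))
            cache[key] = len(verts) - 1
        return cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces
```

One level applied to the octahedron turns 8 faces into 32 and 6 vertices into 18 (one new vertex per edge), preserving the Euler characteristic of the sphere; the resulting triangles are far closer to congruent than the cells of a spin mesh, which is the topological symmetry the abstract refers to.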
Unstructured mesh based elastic wave modelling on GPU: a double-mesh grid method
Yang, Kai; Zhang, Jianfeng; Gao, Hongwei
2017-11-01
We present an unstructured-mesh-based numerical technique for modelling elastic wave propagation in heterogeneous media with complex geometrical settings. The scheme is developed by adapting the so-called grid method with a double-mesh implementation. The double-mesh is generated by subdividing each triangular cell of the first-level mesh into a group of congruent smaller cells, equally dividing each edge of the triangle. The resulting double-mesh grid method incorporates the advantages of structured- and unstructured-mesh schemes. The irregular, unstructured first-level mesh, which is generated by centroidal Voronoi tessellation based on Delaunay triangulation with a velocity-dependent density function, can accurately describe the surface topography and interfaces, and the size of the grid cells can vary according to local velocities. Congruent smaller cells within each cell of the first-level mesh greatly reduce the memory required for geometrical coefficients compared to a wholly irregular, unstructured mesh. The double-mesh approach also alleviates the discontinuous memory access caused by adoption of a fully unstructured mesh. As a result, the GPU implementation of the proposed scheme can obtain a high speedup rate. Numerical examples demonstrate the good behaviour of the double-mesh elastic grid method.
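The second-level subdivision described above — dividing each edge of a triangle into n equal parts, which tiles it with n² congruent sub-triangles — can be sketched as follows. This is an illustrative reconstruction of the geometric idea only, not the authors' code; the function name is ours.

```python
import numpy as np

def subdivide_triangle(v0, v1, v2, n):
    """Split a triangle into n**2 congruent sub-triangles by dividing
    each edge into n equal segments (second-level mesh of a
    double-mesh scheme).  Returns a list of (a, b, c) vertex triples."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0          # edge vectors spanning the triangle
    tris = []
    for i in range(n):
        for j in range(n - i):
            # "upright" sub-triangle at lattice position (i, j)
            a = v0 + e1 * i / n + e2 * j / n
            b = v0 + e1 * (i + 1) / n + e2 * j / n
            c = v0 + e1 * i / n + e2 * (j + 1) / n
            tris.append((a, b, c))
            if i + j < n - 1:
                # "inverted" sub-triangle filling the gap
                d = v0 + e1 * (i + 1) / n + e2 * (j + 1) / n
                tris.append((b, d, c))
    return tris
```

Because all sub-triangles are congruent, the geometrical coefficients need only be stored once per first-level cell, which is the memory saving the abstract refers to.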
Sierra toolkit computational mesh conceptual model
International Nuclear Information System (INIS)
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-01-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Directory of Open Access Journals (Sweden)
Dębski Roman
2016-06-01
A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation-based trajectory optimization is studied in the context of "high-performance sailing". The algorithm uses a new discrete space of continuously differentiable functions, called the multi-splines, as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation-based trajectory optimization problems. These computers can be considered micro high-performance computing (HPC) platforms: they offer high performance while remaining energy- and cost-efficient. The simulation-based approach can potentially give highly accurate results, since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Simulating streamer discharges in 3D with the parallel adaptive Afivo framework
H.J. Teunissen (Jannis); U. M. Ebert (Ute)
2017-01-01
We present an open-source plasma fluid code for 2D, cylindrical and 3D simulations of streamer discharges, based on the Afivo framework that features adaptive mesh refinement, geometric multigrid methods for Poisson's equation, and OpenMP parallelism. We describe the numerical
Pei, Ping; Petrenko, Y. N.
2015-01-01
A mesh network simulation framework which provides a powerful and concise modeling chain for a network structure is introduced in this report. Mesh networks have a special topologic structure. The paper investigates message transfer in a wireless mesh network simulation and how it works in a cellular network simulation. Finally, the experimental results gave us the information that mesh networks follow a different transmission principle than cellular networks, and multi...
Balanced monitoring of flow phenomena in moving mesh methods
van Dam, A.; Zegeling, P.A.
2010-01-01
Adaptive moving mesh research usually focuses either on analytical derivations for prescribed solutions or on pragmatic solvers with challenging physical applications. In the latter case, the monitor functions that steer mesh adaptation are often defined in an ad-hoc way. In this paper we generalize
International Nuclear Information System (INIS)
Lores, F.R.
2001-01-01
An overview of petroleum refining in Spain is presented (by Repsol YPF) and some views on future trends are discussed. Spain depends heavily on imports. Sub-headings in the article cover: sources of crude imports, investments, and logistics and marketing; detailed data for each are shown diagrammatically. Tables show: (1) economic indicators (e.g. total GDP, vehicle numbers and inflation) for 1998-200; (2) crude oil imports for 1995-2000; (3) oil products balance for 1995-2000; (4) commodities demand, by product; (5) refining in Spain in terms of capacity per region; (6) outlets in Spain and other European countries in 2002; and (7) sales distribution channels by product
Progressive compression of generic surface meshes
Caillaud , Florian; Vidal , Vincent; Dupont , Florent; Lavoué , Guillaume
2015-01-01
This paper presents a progressive compression method for generic surface meshes (non-manifold and/or polygonal). Two major contributions are proposed: (1) generic edge collapse and vertex split operators allowing surface simplification and refinement of a mesh, whatever its connectivity; (2) a distortion-aware collapse clustering strategy that adapts the decimation granularity in order to optimize the rate-distortion tradeoff.
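A connectivity-agnostic edge collapse — the first operator mentioned above — can be sketched on an indexed face list as follows. This is a minimal illustration of the operation itself, not the paper's generic operator: it merges one vertex into another at the edge midpoint and drops faces that degenerate; the function name and data layout are ours.

```python
def collapse_edge(vertices, faces, u, v):
    """Collapse edge (u, v): merge vertex v into u, placing u at the
    edge midpoint.  Works on any face list (triangles or general
    polygons); faces reduced below 3 vertices are discarded."""
    vertices = dict(vertices)                 # copy: {id: (x, y, ...)}
    pu, pv = vertices[u], vertices.pop(v)
    vertices[u] = tuple((a + b) / 2 for a, b in zip(pu, pv))
    new_faces = []
    for f in faces:
        g = [u if w == v else w for w in f]   # rewire v -> u
        # drop consecutive duplicates created by the merge
        g = [w for i, w in enumerate(g) if w != g[(i + 1) % len(g)]]
        if len(g) >= 3:
            new_faces.append(g)
    return vertices, new_faces
```

A matching vertex split would invert this record, which is what makes the compression progressive: transmitting the splits in reverse order refines the coarse mesh back toward the original.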
International Nuclear Information System (INIS)
Laucoin, E.
2008-10-01
Numerical resolution of partial differential equations can be made reliable and efficient through the use of adaptive numerical methods. We present here the work we have done for the design, the implementation and the validation of such a method within an industrial software platform with applications in thermohydraulics. From the geometric point of view, this method can deal both with mesh refinement and mesh coarsening, while ensuring the quality of the mesh cells. Numerically, we use the mortar elements formalism in order to extend the Finite Volumes-Elements method implemented in the Trio-U platform and to deal with the non-conforming meshes arising from the adaptation procedure. Finally, we present an implementation of this method using concepts from domain decomposition methods for ensuring its efficiency while running in a parallel execution context. (author)
Energy Technology Data Exchange (ETDEWEB)
Fournier, D.; Le Tellier, R.; Suteau, C., E-mail: damien.fournier@cea.fr, E-mail: romain.le-tellier@cea.fr, E-mail: christophe.suteau@cea.fr [CEA, DEN, DER/SPRC/LEPh, Cadarache, Saint Paul-lez-Durance (France); Herbin, R., E-mail: raphaele.herbin@cmi.univ-mrs.fr [Laboratoire d'Analyse et de Topologie de Marseille, Centre de Mathématiques et Informatique (CMI), Université de Provence, Marseille Cedex (France)
2011-07-01
The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinates method. A set of spatially coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time- and memory-consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is large in a given region, the mesh has to be refined (h-refinement) or the polynomial basis order increased (p-refinement). This paper addresses the choice between the two types of refinement. Two ways to estimate the error are compared on different benchmarks. Analyzing the differences, an hp-refinement method is proposed and tested. (author)
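The per-cell decision between the two refinement types can be sketched as a simple rule: cells whose error estimate exceeds a tolerance are refined, and a smoothness indicator selects h- versus p-refinement. This is a generic illustration of the hp-decision pattern, assuming precomputed per-cell error and smoothness values; it is not the SNATCH estimator, and all names are ours.

```python
def choose_refinement(cells, err_tol, smooth_tol):
    """hp-refinement plan: each cell is a dict with 'id', 'error'
    (error estimate) and 'smoothness' (e.g. decay rate of the
    highest-order hierarchical-basis coefficients, normalized to
    [0, 1])."""
    plan = {}
    for cell in cells:
        if cell["error"] <= err_tol:
            plan[cell["id"]] = "keep"
        elif cell["smoothness"] >= smooth_tol:
            # locally smooth solution: raising the polynomial order
            # gives the fastest (spectral) convergence
            plan[cell["id"]] = "p-refine"
        else:
            # non-smooth region (e.g. a material interface): split the cell
            plan[cell["id"]] = "h-refine"
    return plan
```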
View-Dependent Adaptive Cloth Simulation with Buckling Compensation.
Koh, Woojong; Narain, Rahul; O'Brien, James F
2015-10-01
This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening, while locking in extremely coarsened regions is inhibited by modifying the material model to compensate for unresolved sub-element buckling. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows. The approach produces a 2× speed-up for a single character and more than 4× for a small group as compared to view-independent adaptive simulations, and respectively 5× and 9× speed-ups as compared to non-adaptive simulations.
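The core of a view-dependent criterion is to refine only where detail would be visible: an element's edge is projected to screen space, and refinement is skipped when the edge is off-screen or subtends too few pixels. The sketch below illustrates that idea under a pinhole-camera assumption; the helper names and thresholds are ours, not the paper's.

```python
import math

def apparent_edge_size(edge_len, dist, fov_deg, screen_px):
    """Approximate on-screen size (in pixels) of a mesh edge of world
    length edge_len at distance dist from a pinhole camera."""
    px_per_rad = screen_px / math.radians(fov_deg)
    return px_per_rad * edge_len / max(dist, 1e-9)

def should_refine(edge_len, dist, visible, fov_deg=60.0,
                  screen_px=1080, min_px=4.0):
    """Refine only edges that are visible and project to more than
    min_px pixels; off-screen or sub-pixel detail stays coarse."""
    return visible and apparent_edge_size(
        edge_len, dist, fov_deg, screen_px) > min_px
```

With a prescribed camera path, the same test applied slightly ahead of the current camera position yields the anticipative refinement the abstract mentions.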
Algorithm refinement for stochastic partial differential equations I. linear diffusion
Alexander, F J; Tartakovsky, D M
2002-01-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. Results from a variety of numerical experiments are presented for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a nonstochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except in particle regions away from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
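The coupling idea — random walkers on one side of an interface, a finite-difference density on the other, with interface fluxes matched so mass is conserved exactly — can be illustrated in a minimal 1D sketch. This is our simplified reconstruction of the pattern, not the paper's algorithm: walkers carry unit mass, the far boundaries are reflecting, and the emission rule uses stochastic rounding.

```python
import random

def hybrid_diffusion_step(walkers, rho, dx, D, dt, x_int, rng):
    """One step of a 1D hybrid scheme: walkers on x < x_int, density
    rho on cells at x >= x_int, flux-matched at the interface so
    (number of walkers) + sum(rho)*dx is conserved exactly."""
    step = (2.0 * D * dt) ** 0.5
    moved = []
    for x in walkers:
        x += rng.choice((-step, step))
        if x >= x_int:
            rho[0] += 1.0 / dx          # crossing walker deposits its mass
        else:
            moved.append(x)
    c = D * dt / dx**2                  # explicit update, reflecting far ends
    new = rho[:]
    for i in range(len(rho)):
        left = rho[i - 1] if i > 0 else rho[0]
        right = rho[i + 1] if i < len(rho) - 1 else rho[-1]
        new[i] = rho[i] + c * (left - 2.0 * rho[i] + right)
    # continuum mass flowing out of the first cell toward the particle
    # region is re-emitted as walkers (stochastic rounding, unbiased)
    flux_mass = c * rho[0] * dx
    n_new = int(flux_mass) + (1 if rng.random() < flux_mass % 1.0 else 0)
    if n_new / dx > new[0]:
        n_new = 0                       # not enough mass in the cell this step
    new[0] -= n_new / dx                # remove exactly the emitted mass
    moved.extend(x_int - rng.random() * step for _ in range(n_new))
    return moved, new
```

Because every mass transfer across the interface is mirrored on both sides, conservation holds to round-off regardless of the random draws.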
Pectus excavatum repair using Prolene polypropylene mesh.
Rasihashemi, Seyed Ziaeddin; Ramouz, Ali
2016-02-01
We aimed to assess the clinical outcomes of our surgical technique for repair of pectus excavatum using Prolene polypropylene mesh. Among 29 patients with pectus excavatum, the major complaint was cosmetic dissatisfaction, and the main symptom was exercise dyspnea in 15 patients. The Haller index was used to assess pectus excavatum severity; it was significant in 22 patients. In all patients, a 2-layer sheet of Prolene polypropylene mesh was placed behind the sternum. No serious complication was observed postoperatively, and all patients were satisfied with the cosmetic result. Mitral valve prolapse improved in all cases after 3 months. Spirometry revealed improved pulmonary function after surgery. With due attention to the advantages of Prolene polypropylene mesh, such as remaining permanently in place, adapting to various stresses encountered in the body, resisting degradation by tissue enzymes, and trimming without unraveling, we concluded that this mesh is suitable for use as posterior sternal support in pectus excavatum patients. © The Author(s) 2016.
Parameterization adaption for 3D shape optimization in aerodynamics
Directory of Open Access Journals (Sweden)
Badr Abou El Majd
2013-10-01
When solving a PDE problem numerically, a certain mesh-refinement process is always implicit, and, very classically, mesh adaptivity is a very effective means to accelerate grid convergence. Similarly, when optimizing a shape by means of an explicit geometrical representation, it is natural to seek an analogous concept of parameterization adaptivity. We propose here an adaptive parameterization for three-dimensional optimum design in aerodynamics by using the so-called "Free-Form Deformation" approach based on 3D tensorial Bézier parameterization. The proposed procedure leads to efficient numerical simulations with highly reduced computational costs. [How to cite this article: Majd, B.A. 2014. Parameterization adaption for 3D shape optimization in aerodynamics. International Journal of Science and Engineering, 6(1):61-69. Doi: 10.12777/ijse.6.1.61-69]
Urogynecologic Surgical Mesh Implants
... knitted mesh or non-knitted sheet forms. The synthetic materials used can be either absorbable, non-absorbable, or a combination of absorbable and non-absorbable materials. Animal-derived mesh are made of animal tissue, such as intestine or skin, that have been processed and disinfected to be ...
Reaction rates for reaction-diffusion kinetics on unstructured meshes.
Hellander, Stefan; Petzold, Linda
2017-02-14
The reaction-diffusion master equation is a stochastic model often utilized in the study of biochemical reaction networks in living cells. It is applied when the spatial distribution of molecules is important to the dynamics of the system. A viable approach to resolve the complex geometry of cells accurately is to discretize space with an unstructured mesh. Diffusion is modeled as discrete jumps between nodes on the mesh, and the diffusion jump rates can be obtained through a discretization of the diffusion equation on the mesh. Reactions can occur when molecules occupy the same voxel. In this paper, we develop a method for computing accurate reaction rates between molecules occupying the same voxel in an unstructured mesh. For large voxels, these rates are known to be well approximated by the reaction rates derived by Collins and Kimball, but as the mesh is refined, no analytical expression for the rates exists. We reduce the problem of computing accurate reaction rates to a pure preprocessing step, depending only on the mesh and not on the model parameters, and we devise an efficient numerical scheme to estimate them to high accuracy. We show in several numerical examples that as we refine the mesh, the results obtained with the reaction-diffusion master equation approach those of a more fine-grained Smoluchowski particle-tracking model.
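The large-voxel limit the abstract refers to is the classical Collins-Kimball rate: the harmonic combination of the diffusion-limited Smoluchowski rate and the intrinsic reaction rate at contact. The sketch below computes it; this is the standard textbook formula (in 3D, for a contact radius sigma), not the authors' mesh-dependent preprocessing step.

```python
import math

def collins_kimball_rate(D, sigma, k_r):
    """Effective bimolecular rate constant from the Collins-Kimball
    model: 1/k = 1/k_r + 1/k_D, where k_D = 4*pi*sigma*D is the
    diffusion-limited (Smoluchowski) rate, sigma the reaction radius,
    D the relative diffusion constant, and k_r the intrinsic rate."""
    k_D = 4.0 * math.pi * sigma * D
    return k_r * k_D / (k_r + k_D)
```

As k_r grows the effective rate saturates at k_D (diffusion-limited regime); the paper's contribution is to compute the analogous voxel-pair rates when the mesh is too fine for this expression to apply.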
Held, Gilbert
2005-01-01
Wireless mesh networking is a new technology that has the potential to revolutionize how we access the Internet and communicate with co-workers and friends. Wireless Mesh Networks examines the concept and explores its advantages over existing technologies. This book explores existing and future applications, and examines how some of the networking protocols operate.The text offers a detailed analysis of the significant problems affecting wireless mesh networking, including network scale issues, security, and radio frequency interference, and suggests actual and potential solutions for each pro
Kimura, Satoshi; Candy, Adam; Holland, Paul; Piggott, Matthew; Jenkins, Adrian
2013-04-01
There have been many efforts to explicitly represent ice shelf cavities in ocean models. These ocean models employ isopycnic, terrain-following, or z coordinates. We explore an alternative method by using the finite-element ocean model, Fluidity-ICOM, to represent an ice shelf. The Fluidity-ICOM model simulates non-hydrostatic dynamics on meshes that can be unstructured in all three dimensions. This geometric flexibility offers several advantages over previous approaches. The model represents melting or freezing on ice-ocean interfaces oriented in any direction, treats the ice shelf topography as continuous rather than stepped, and does not require any smoothing of the ice topography or any additional parameterisations of the ocean mixed layer used in isopycnal or z-coordinate models. We demonstrate these capabilities by investigating the response of ice shelf basal melting to 1) variations in ocean temperature on an idealized ice shelf and 2) variations in sub-glacial discharge in an idealized fjord. Melting near the grounding line of the ice shelf produces meltwater that is lighter than the surrounding water, so the meltwater ascends along the base. A band of melting is concentrated in the western region due to the Coriolis force in the Southern Hemisphere. As found in previous studies, the melt rate increases non-linearly as the temperature of the water forcing the cavity increases. However, the model is able to represent the dynamics of a meltwater plume that separates from the ice shelf when it reaches neutral buoyancy, unlike previous models with mixed-layer parameterisation. In the warmest case, the meltwater is lighter than the surrounding water, thereby warming the surface of the ocean. As the deep water temperature decreases, the meltwater is not light enough to penetrate to the surface, so it intrudes into the open ocean, cooling the deep water. In the case of the idealized fjord, the discharged water ascends along the vertical ice base
Iqbal, Amer
2012-01-01
We establish a relation between the refined Hopf link invariant and the S-matrix of the refined Chern-Simons theory. We show that the refined open string partition function corresponding to the Hopf link, calculated using the refined topological vertex, when expressed in the basis of Macdonald polynomials gives the S-matrix of the refined Chern-Simons theory.
Directory of Open Access Journals (Sweden)
Domingues M. O.
2013-12-01
We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell-average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.
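In Harten's cell-average multiresolution analysis, coarse cell averages are exact means of their children, and "detail" coefficients record what the coarse level fails to predict; thresholding the details decides where the locally refined mesh is needed. A minimal 1D sketch with the simplest prediction operator (copy the coarse average to both children) is shown below; real implementations use higher-order polynomial prediction, and the function names are ours.

```python
import numpy as np

def mr_details(fine):
    """One level of 1D cell-average multiresolution: coarse averages
    are means of child pairs, and the detail (one degree of freedom
    per pair) is the prediction error of the coarse level."""
    fine = np.asarray(fine, dtype=float)
    coarse = 0.5 * (fine[0::2] + fine[1::2])
    details = fine[0::2] - coarse   # with copy-prediction, the pair's error
    return coarse, details

def refine_mask(details, eps):
    """Cells whose detail exceeds the threshold must stay refined;
    the rest can be represented on the coarse mesh within the error bound."""
    return np.abs(details) > eps
```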
Botsch, Mario; Pauly, Mark; Alliez, Pierre; Levy, Bruno
2010-01-01
Geometry processing, or mesh processing, is a fast-growing area of research that uses concepts from applied mathematics, computer science, and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment, and classical computer-aided design, to biomedical computing, reverse engineering, and scientific computing. Over the last several years, triangle meshes have become increasingly popular,
Three-dimensional h-adaptivity for the multigroup neutron diffusion equations
Wang, Yaqi
2009-04-01
Adaptive mesh refinement (AMR) has been shown to allow solving partial differential equations to significantly higher accuracy at reduced numerical cost. This paper presents a state-of-the-art AMR algorithm applied to the multigroup neutron diffusion equation for reactor applications. In order to follow the physics closely, energy group-dependent meshes are employed. We present a novel algorithm for assembling the terms coupling shape functions from different meshes and show how it can be made efficient by deriving all meshes from a common coarse mesh by hierarchic refinement. Our methods are formulated using conforming finite elements of any order, for any number of energy groups. The spatial error distribution is assessed with a generalization of an error estimator originally derived for the Poisson equation. Our implementation of this algorithm is based on the widely used Open Source adaptive finite element library deal.II and is made available as part of this library's extensively documented tutorial. We illustrate our methods with results for 2-D and 3-D reactor simulations using 2 and 7 energy groups, and using conforming finite elements of polynomial degree up to 6. © 2008 Elsevier Ltd. All rights reserved.
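Because every group-dependent mesh is derived from one common coarse mesh by hierarchic refinement, any two meshes admit a common refinement on which coupling terms can be integrated: each region is simply covered by the finer of the two leaves. The quadtree sketch below illustrates that idea; it is our illustration of the principle, not deal.II's implementation, and it assumes both meshes are valid leaf covers of the same root cell.

```python
def children(cell):
    """Four children of quadtree cell (level, i, j), all meshes being
    grown from the single common coarse cell (0, 0, 0)."""
    l, i, j = cell
    return [(l + 1, 2 * i + di, 2 * j + dj) for di in (0, 1) for dj in (0, 1)]

def is_descendant(c, anc):
    (l, i, j), (la, ia, ja) = c, anc
    d = l - la
    return d > 0 and (i >> d, j >> d) == (ia, ja)

def common_refinement(mesh_a, mesh_b):
    """Union mesh covering each region with the finer of the two
    meshes' leaves.  mesh_a and mesh_b are sets of leaf cells."""
    def descend(cell):
        in_a, in_b = cell in mesh_a, cell in mesh_b
        if in_a and in_b:
            return [cell]
        if in_a or in_b:
            other = mesh_b if in_a else mesh_a
            # keep this leaf unless the other mesh refines the region
            if not any(is_descendant(c, cell) for c in other):
                return [cell]
        return [leaf for ch in children(cell) for leaf in descend(ch)]
    return set(descend((0, 0, 0)))
```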
Optimal Throughput and Self-adaptability of Robust Real-Time IEEE 802.15.4 MAC for AMI Mesh Network
International Nuclear Information System (INIS)
Shabani, Hikma; Ahmed, Musse Mohamud; Khan, Sheroz; Hameed, Shahab Ahmed; Habaebi, Mohamed Hadi
2013-01-01
A smart grid refers to a modernization of the electricity system that brings intelligence, reliability, efficiency and optimality to the power grid. To provide automated and widely distributed energy delivery, the smart grid will be characterized by a two-way flow of electricity and information between energy suppliers and their customers. Thus, the smart grid is a power grid that integrates data communication networks which provide the collected and analysed data at all levels in real time. The performance of communication systems is therefore vital for the success of the smart grid. Thanks to its low cost, low power, low data rate, short range, simplicity, and licence-free spectrum, the ZigBee/IEEE 802.15.4 standard makes wireless sensor networks (WSNs) the most suitable wireless technology for smart grid applications. Unfortunately, almost all ZigBee channels overlap with wireless local area network (WLAN) channels, resulting in severe performance degradation due to interference. In order to improve the performance of communication systems, this paper proposes an optimal throughput and self-adaptability scheme for ZigBee/IEEE 802.15.4 in the smart grid
A LAGUERRE VORONOI BASED SCHEME FOR MESHING PARTICLE SYSTEMS.
Bajaj, Chandrajit
2005-06-01
We present Laguerre Voronoi based subdivision algorithms for the quadrilateral and hexahedral meshing of particle systems within a bounded region in two and three dimensions, respectively. Particles are smooth functions over circular or spherical domains. The algorithm first breaks the bounded region containing the particles into Voronoi cells that are then subsequently decomposed into an initial quadrilateral or an initial hexahedral scaffold conforming to individual particles. The scaffolds are subsequently refined via applications of recursive subdivision (splitting and averaging rules). Our choice of averaging rules yields a particle-conforming quadrilateral/hexahedral mesh of good quality that is smooth and differentiable in the limit. Extensions of the basic scheme to dynamic re-meshing in the case of particle addition, deletion, and motion are also discussed. Motivating applications of the use of these static and dynamic meshes for particle systems include the mechanics of epoxy/glass composite materials, bio-molecular force field calculations, and gas hydrodynamics simulations in cosmology.
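The split-then-average pattern behind such recursive subdivision can be shown in its simplest setting: refining a closed polygon. A splitting step inserts edge midpoints, then an averaging step replaces each vertex by the midpoint of its neighbours; repeated passes converge to a smooth limit curve. This Lane-Riesenfeld-style curve version is only a sketch of the pattern, not the paper's quad/hex scaffold rules.

```python
def subdivide(poly, rounds=1):
    """Split-and-average subdivision of a closed polygon, given as a
    list of (x, y) vertices.  Each round doubles the vertex count and
    smooths the shape; the averaging rule preserves the centroid."""
    for _ in range(rounds):
        n = len(poly)
        split = []
        for i in range(n):                       # splitting step
            x0, y0 = poly[i]
            x1, y1 = poly[(i + 1) % n]
            split.append((x0, y0))               # keep old vertex
            split.append(((x0 + x1) / 2, (y0 + y1) / 2))  # edge midpoint
        m = len(split)
        poly = [((split[i - 1][0] + split[(i + 1) % m][0]) / 2,
                 (split[i - 1][1] + split[(i + 1) % m][1]) / 2)
                for i in range(m)]               # averaging step
    return poly
```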
Documentation for MeshKit - Reactor Geometry (&mesh) Generator
Energy Technology Data Exchange (ETDEWEB)
Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-09-30
This report gives documentation for using MeshKit's Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne's SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of MeshKit source code, tools, and other available algorithms is also presented for developers to extend and add new algorithms to MeshKit. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousand axially varying material properties of instrumentation pins, and other interstitial meshes.
Enhancing physiologic simulations using supervised learning on coarse mesh solutions.
Kolandaivelu, Kumaran; O'Brien, Caroline C; Shazly, Tarek; Edelman, Elazer R; Kolachalama, Vijaya B
2015-03-06
Computational modelling of physical and biochemical processes has emerged as a means of evaluating medical devices, offering new insights that explain current performance, inform future designs and even enable personalized use. Yet resource limitations force one to compromise with reduced order computational models and idealized assumptions that yield either qualitative descriptions or approximate, quantitative solutions to problems of interest. Considering endovascular drug delivery as an exemplary scenario, we used a supervised machine learning framework to process data generated from low fidelity coarse meshes and predict high fidelity solutions on refined mesh configurations. We considered two models simulating drug delivery to the arterial wall: (i) two-dimensional drug-coated balloons and (ii) three-dimensional drug-eluting stents. Simulations were performed on computational mesh configurations of increasing density. Supervised learners based on Gaussian process modelling were constructed from combinations of coarse mesh setting solutions of drug concentrations and nearest neighbourhood distance information as inputs, and higher fidelity mesh solutions as outputs. These learners were then used as computationally inexpensive surrogates to extend predictions using low fidelity information to higher levels of mesh refinement. The cross-validated, supervised learner-based predictions improved fidelity as compared with computational simulations performed at coarse-level meshes, a result consistent across all outputs and computational models considered. Supervised learning on coarse mesh solutions can augment traditional physics-based modelling of complex physiologic phenomena. By obtaining efficient solutions at a fraction of the computational cost, this framework has the potential to transform how modelling approaches can be applied in the evaluation of medical technologies and their real-time administration in an increasingly personalized fashion.
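The Gaussian-process surrogate at the heart of this approach maps features of a coarse-mesh solution to the corresponding fine-mesh values. A minimal GP regressor with an RBF kernel is sketched below; it stands in for the authors' learner only in shape (here trained on a 1D toy function rather than drug-concentration fields), and the function and parameter names are ours.

```python
import numpy as np

def gp_fit_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Gaussian-process regression posterior mean with an RBF kernel:
    X_train are coarse-mesh input features, y_train the matching
    fine-mesh target values, X_test the points to predict."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)   # K^-1 y
    return k(X_test, X_train) @ alpha
```

Once fitted, evaluating the surrogate costs a kernel product rather than a full fine-mesh simulation, which is where the reported computational savings come from.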
A parallel adaptive finite difference algorithm for petroleum reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Hoang, Hai Minh
2005-07-01
Adaptive finite difference methods for problems arising in the simulation of flow in porous media are considered. Such methods have proven useful for overcoming limitations of computational resources and improving the resolution of the numerical solutions to a wide range of problems. Local refinement of the computational mesh, applied where it is needed to improve the accuracy of solutions, yields better solution resolution and more efficient use of computational resources than is possible with traditional fixed-grid approaches. In this thesis, we propose a parallel adaptive cell-centered finite difference (PAFD) method for black-oil reservoir simulation models. This is an extension of the adaptive mesh refinement (AMR) methodology first developed by Berger and Oliger (1984) for hyperbolic problems. Our algorithm is fully adaptive in time and space through the use of subcycling, in which finer grids are advanced at smaller time steps than the coarser ones. When coarse and fine grids reach the same advanced time level, they are synchronized to ensure that the global solution is conservative and satisfies the divergence constraint across all levels of refinement. The material in this thesis is subdivided into three parts. First we explain the methodology and intricacies of the AFD scheme. Then we extend a cell-centered finite difference discretization to a multilevel hierarchy of refined grids, and finally we employ the algorithm on parallel computers. The results in this work show that the approach presented is robust and stable, demonstrating increased solution accuracy due to local refinement and reduced consumption of computing resources. (Author)
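The Berger-Oliger-style subcycling described above (finer grids advancing at proportionally smaller time steps, then synchronizing with the coarse level) can be sketched as a simple recursion. The refinement ratio and step counts below are illustrative; the real scheme would perform a physics update and flux correction at each step.

```python
def advance(levels, lvl, t, dt, log, ratio=2):
    """Advance grid level `lvl` from t to t+dt, subcycling the finer levels."""
    log.append((lvl, t, t + dt))          # one (placeholder) step on this level
    if lvl + 1 < levels:                  # finer level takes `ratio` smaller steps
        sub = dt / ratio
        for k in range(ratio):
            advance(levels, lvl + 1, t + k * sub, sub, log)
    # coarse and fine grids now meet at t+dt: synchronize (flux correction, etc.)

log = []
advance(levels=3, lvl=0, t=0.0, dt=1.0, log=log)
# with ratio 2: level 0 takes 1 step, level 1 takes 2, level 2 takes 4
```

All levels reach the same advanced time (t = 1.0) before synchronization, which is exactly the point at which conservation is enforced across the hierarchy.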
Toward An Unstructured Mesh Database
Rezaei Mahdiraji, Alireza; Baumann, Peter
2014-05-01
Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which implement mesh algorithms over particular mesh representations. The libraries do not scale with dataset size, do not offer a declarative query language, and require deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We propose the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
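The cell-iteration and incidence queries listed above can be sketched with a minimal in-memory incidence structure. This is a toy stand-in for what a mesh library or mesh database would provide, using vertex-to-cell incidence as the glue between cells; all names are illustrative.

```python
from collections import defaultdict

class Mesh:
    """Minimal incidence structure: cells glued together through shared vertices."""
    def __init__(self):
        self.coords = {}                 # vertex id -> (x, y)
        self.cells = {}                  # cell id -> tuple of vertex ids
        self.v2c = defaultdict(set)      # vertex id -> incident cell ids

    def add_vertex(self, vid, xy):
        self.coords[vid] = xy

    def add_cell(self, cid, verts):
        self.cells[cid] = tuple(verts)
        for v in verts:
            self.v2c[v].add(cid)

    def incident_cells(self, vid):
        """Cells incident to a given vertex."""
        return sorted(self.v2c[vid])

    def neighbors(self, cid):
        """Cells sharing at least one vertex with `cid`."""
        out = set()
        for v in self.cells[cid]:
            out |= self.v2c[v]
        out.discard(cid)
        return sorted(out)

m = Mesh()
for vid, xy in enumerate([(0, 0), (1, 0), (1, 1), (0, 1), (2, 0), (2, 1)]):
    m.add_vertex(vid, xy)
m.add_cell("q0", [0, 1, 2, 3])   # two quads sharing the edge between vertices 1 and 2
m.add_cell("q1", [1, 4, 5, 2])
```

A mesh database would express `incident_cells` and `neighbors` as declarative queries and optimize them transparently, rather than hard-coding the traversal as here.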
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
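The paper's quadrature is performed adaptively over simplex interiors; as a minimal illustration of the general idea of adaptive quadrature, the sketch below is the classic 1-D adaptive Simpson rule, which refines subintervals only where the local error estimate is large (not the authors' actual procedure).

```python
def simpson(f, a, b):
    # basic three-point Simpson rule on [a, b]
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

def adaptive(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson: subdivide where the error estimate is large."""
    c = (a + b) / 2
    whole = simpson(f, a, b)
    left, right = simpson(f, a, c), simpson(f, c, b)
    # standard error estimate: |S_left + S_right - S_whole| / 15
    if abs(left + right - whole) < 15 * tol:
        return left + right + (left + right - whole) / 15
    return adaptive(f, a, c, tol / 2) + adaptive(f, c, b, tol / 2)

# integral of 1/(1+x^2) on [0, 1] is pi/4
val = adaptive(lambda x: 1 / (1 + x * x), 0.0, 1.0)
```

The same refine-where-the-estimate-is-large logic carries over to quadrature on element interiors in higher dimensions.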
From intraperitoneal onlay mesh repair to preperitoneal onlay mesh repair.
Yang, George Pei Cheung
2017-05-01
Laparoscopic repair for ventral and incisional hernias was first reported in the early 1990s. It uses intraperitoneal only mesh placement to achieve a tension-free repair of the hernia. However, in recent years, there has been greater concern about long-term complications involving intraperitoneal mesh placement. Many case reports and case series have found evidence of mesh adhesion, mesh fistulation, and mesh migration into hollow organs including the esophagus, small bowel, and large bowel, resulting in various major acute abdominal events. Subsequent management of these complications may require major surgery that is technically demanding and difficult; in such cases, laparotomy and bowel resection have often been performed. Because of these significant, but not common, adverse events, many surgeons favor open sublay repair for ventral and incisional hernias. Investigators are therefore searching for a laparoscopic approach for ventral and incisional hernias that might overcome the mesh-induced visceral complications seen after intraperitoneal only mesh placement repair. Laparoscopic preperitoneal onlay mesh is one such approach. This article will explore the fundamentals of intraperitoneal only mesh placement and its problems, the currently available peritoneal visceral-compatible meshes, and upcoming developments in laparoscopic ventral and incisional hernia repair. The technical details of preperitoneal onlay mesh, as well as its potential advantages and disadvantages, will also be discussed. © 2017 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and John Wiley & Sons Australia, Ltd.
Dam, A. van; Zegeling, P.A.
2006-01-01
In this paper we describe a one-dimensional adaptive moving mesh method and its application to hyperbolic conservation laws from magnetohydrodynamics (MHD). The method is robust, because it employs automatic control of mesh adaptation when a new model is considered, without manually-set
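One-dimensional adaptive moving mesh methods of this kind typically redistribute a fixed number of mesh points by equidistributing a monitor function, such as the arc-length monitor M = sqrt(1 + u'^2), so that points cluster at steep fronts. The sketch below is a generic equidistribution step under that assumption, not the authors' specific scheme.

```python
import math

def equidistribute(u, a, b, n, samples=2000):
    """Place n+1 mesh points so each cell carries equal monitor mass."""
    h = (b - a) / samples
    xs = [a + i * h for i in range(samples + 1)]
    # monitor M = sqrt(1 + u'^2), with u' from finite differences
    def du(i):
        lo, hi = max(i - 1, 0), min(i + 1, samples)
        return (u(xs[hi]) - u(xs[lo])) / (xs[hi] - xs[lo])
    M = [math.sqrt(1 + du(i) ** 2) for i in range(samples + 1)]
    # cumulative integral of M (trapezoid rule)
    cum = [0.0]
    for i in range(samples):
        cum.append(cum[-1] + 0.5 * (M[i] + M[i + 1]) * h)
    total = cum[-1]
    mesh, j = [a], 0
    for k in range(1, n):
        target = total * k / n
        while cum[j + 1] < target:
            j += 1
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])  # interpolate in sample cell
        mesh.append(xs[j] + frac * h)
    mesh.append(b)
    return mesh

# steep tanh front at x = 0: mesh points should cluster near the front
mesh = equidistribute(lambda x: math.tanh(20 * x), -1.0, 1.0, 10)
```

For the steep front at x = 0, the cells near the origin come out much smaller than those near the boundaries, which is precisely the automatic adaptation control the abstract refers to.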
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Energy mesh optimization for multi-level calculation schemes
International Nuclear Information System (INIS)
Mosca, P.; Taofiki, A.; Bellier, P.; Prevost, A.
2011-01-01
The industrial calculations of third generation nuclear reactors are based on sophisticated strategies of homogenization and collapsing at different spatial and energetic levels. An important issue to ensure the quality of these calculation models is the choice of the collapsing energy mesh. In this work, we show a new approach to generate optimized energy meshes starting from the SHEM 281-group library. The optimization model is applied on 1D cylindrical cells and consists of finding an energy mesh which minimizes the errors between two successive collision probability calculations. The former is realized over the fine SHEM mesh with Livolant-Jeanpierre self-shielded cross sections and the latter is performed with collapsed cross sections over the energy mesh being optimized. The optimization is done by the particle swarm algorithm implemented in the code AEMC, and multigroup flux solutions are obtained from standard APOLLO2 solvers. By this new approach, a set of new optimized meshes ranging from 10 to 50 groups has been defined for PWR and BWR calculations. This set will allow users to adapt the energy detail of the solution to the complexity of the calculation (assembly, multi-assembly, two-dimensional whole core). Some preliminary verifications, in which the accuracy of the new meshes is measured compared to a direct 281-group calculation, show that the 30-group optimized mesh offers a good compromise between simulation time and accuracy for a standard 17 × 17 UO2 assembly with and without control rods. (author)
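The particle swarm optimization at the heart of AEMC can be sketched generically. The real objective evaluates the error between fine- and collapsed-mesh collision probability calculations; the sketch below minimizes a simple stand-in function, and all parameter values (inertia, acceleration coefficients, swarm size) are illustrative assumptions.

```python
import random

def pso(f, dim, n=20, iters=60, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm: minimize f over [lo, hi]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                 # per-particle best positions
    pval = [f(x) for x in X]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]        # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

# stand-in objective (sphere function) in place of the mesh-error functional
best, val = pso(lambda x: sum(xi ** 2 for xi in x), dim=2)
```

In the mesh-optimization setting, each particle position would encode candidate group boundaries and `f` would run the collapsed-mesh calculation and return its error against the 281-group reference.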
SUPERIMPOSED MESH PLOTTING IN MCNP
Energy Technology Data Exchange (ETDEWEB)
J. HENDRICKS
2001-02-01
The capability to plot superimposed meshes has been added to MCNP™. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.
Analysis and development of spatial hp-refinement methods for solving the neutron transport equation
International Nuclear Information System (INIS)
Fournier, D.
2011-01-01
The different neutronic parameters have to be calculated with higher accuracy in order to design fourth-generation reactor cores. As memory storage and computation time are limited, adaptive methods are a solution for solving the neutron transport equation. The neutron flux, the solution of this equation, depends on energy, angle, and space. These variables are discretized in turn: the energy with a multigroup approach, which considers the various quantities to be constant within each group, and the angle by a collocation method known as the SN approximation. Once the energy and angle variables are discretized, a system of spatially dependent hyperbolic equations has to be solved. Discontinuous finite elements are used to make the development of hp-refinement methods possible. Thus, the accuracy of the solution can be improved by spatial refinement (h-refinement), which subdivides a cell into sub-cells, or by order refinement (p-refinement), which increases the order of the polynomial basis. In this thesis, the properties of these methods are analyzed, showing the importance of the regularity of the solution in choosing the type of refinement. Two error estimators are used to guide the refinement process. Whereas the first requires a strong regularity hypothesis (an analytical solution), the second assumes only the minimal hypothesis required for the solution to exist. The two estimators are compared on benchmarks where the analytic solution is known by the method of manufactured solutions, so that the behaviour of the solution with respect to its regularity can be studied. This leads to an hp-refinement method using the two estimators. A comparison is then made with other existing methods on simplified as well as realistic benchmarks coming from nuclear cores. These adaptive methods considerably reduce the computational cost and memory footprint. To further improve these two points, an approach with energy-dependent meshes is proposed. Actually, as the
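The h-versus-p decision driven by per-cell error and solution regularity can be sketched as one adaptation pass. The thresholds, error-decay factors, and boolean smoothness flags below are illustrative assumptions standing in for the thesis's two estimators.

```python
def hp_adapt(cells, tol):
    """One hp pass: p-refine where the solution is smooth, h-refine elsewhere."""
    out = []
    for c in cells:
        if c["err"] <= tol:
            out.append(c)                        # accurate enough: keep as-is
        elif c["smooth"]:
            out.append({**c, "p": c["p"] + 1,    # smooth: raise polynomial order
                        "err": c["err"] / 10})   # assumed error decay
        else:
            h = c["h"] / 2                       # non-smooth: split the cell
            for _ in range(2):
                out.append({**c, "h": h, "err": c["err"] / 4})
    return out

cells = [
    {"h": 1.0, "p": 1, "err": 1e-6, "smooth": True},    # already converged
    {"h": 1.0, "p": 1, "err": 1e-2, "smooth": True},    # candidate for p-refinement
    {"h": 1.0, "p": 1, "err": 1e-1, "smooth": False},   # candidate for h-refinement
]
new = hp_adapt(cells, tol=1e-4)
```

The rationale mirrors the text: where the solution is regular, raising p gives exponential convergence; near singularities or discontinuities, only h-refinement pays off.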
On Optimal Bilinear Quadrilateral Meshes
Energy Technology Data Exchange (ETDEWEB)
D'Azevedo, E.
2000-03-17
The novelty of this work is in presenting interesting error properties of two types of asymptotically "optimal" quadrilateral meshes for bilinear approximation. The first type of mesh has an error equidistributing property where the maximum interpolation error is asymptotically the same over all elements. The second type has a faster than expected "super-convergence" property for certain saddle-shaped data functions. The "superconvergent" mesh may be an order of magnitude more accurate than the error equidistributing mesh. Both types of mesh are generated by a coordinate transformation of a regular mesh of squares. The coordinate transformation is derived by interpreting the Hessian matrix of a data function as a metric tensor. The insights in this work may have application in mesh design near corner or point singularities.
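Interpreting the Hessian as a metric tensor amounts to reading off its eigenvalues (curvature magnitudes) and eigenvectors (principal stretch directions). The sketch below does this for a symmetric 2×2 Hessian; for the classic saddle f(x, y) = xy, the Hessian [[0, 1], [1, 0]] gives eigenvalues ±1 and principal directions at 45°, which is the diagonal alignment that benefits superconvergent meshes for saddle data. The closed-form 2×2 eigen-decomposition is standard linear algebra, not a formula from the paper.

```python
import math

def hessian_eigs(fxx, fxy, fyy):
    """Eigenvalues and principal angle of the symmetric Hessian [[fxx, fxy], [fxy, fyy]]."""
    tr, det = fxx + fyy, fxx * fyy - fxy * fxy
    disc = math.sqrt(tr * tr / 4 - det)           # half-distance between eigenvalues
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    theta = 0.5 * math.atan2(2 * fxy, fxx - fyy)  # angle of the first eigenvector
    return (l1, l2), theta

# saddle-shaped data f(x, y) = x*y has Hessian [[0, 1], [1, 0]]
(l1, l2), theta = hessian_eigs(0.0, 1.0, 0.0)
```

The metric-tensor view then stretches the regular square mesh by 1/sqrt(|eigenvalue|) along each eigenvector direction.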
Simulating streamer discharges in 3D with the parallel adaptive Afivo framework
Teunissen, Jannis; Ebert, Ute
2017-11-01
We present an open-source plasma fluid code for 2D, cylindrical and 3D simulations of streamer discharges. The code is based on the Afivo framework, which features adaptive mesh refinement on quadtree/octree grids, geometric multigrid methods for Poisson’s equation, and OpenMP parallelism. We describe the numerical implementation of a fluid model of the drift-diffusion-reaction type, combined with the local field approximation. Then we demonstrate its functionality with 3D simulations of long positive streamers in nitrogen in undervolted gaps. Three examples are presented. The first one shows how a stochastic background density affects streamer propagation and branching. The second one focuses on the interaction of a streamer with preionized regions, and the third one investigates the interaction between two streamers. The simulations use up to 10⁸ grid cells and run in less than a day; without mesh refinement they would require more than 10¹² grid cells.
Adaptive soft tissue deformation for a virtual reality surgical trainer.
Jerabkova, Lenka; Wolter, Timm P; Pallua, Norbert; Kuhlen, Torsten
2005-01-01
Real time tissue deformation is an important aspect of interactive virtual reality (VR) environments such as medical trainers. Most approaches in deformable modelling use a fixed space discretization. A surgical trainer requires high plausibility of the deformations especially in the area close to the instrument. As the area of intervention is not known a priori, adaptive techniques have to be applied. We present an approach for real time deformation of soft tissue based on a regular FEM mesh of cube elements, as opposed to the mesh of tetrahedral elements used by the majority of soft tissue simulators. A regular mesh structure simplifies the local refinement operation, as the element topology and stiffness are known implicitly. We propose an octree-based adaptive multiresolution extension of our basic approach. The volumetric representation of the deformed object is created automatically from medical images or by voxelization of a surface model. The resolution of the volumetric representation is independent of the surface geometry resolution. The surface is deformed according to the simulation performed on the underlying volumetric mesh.
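The octree-based local refinement around the instrument can be sketched as a recursive subdivision of cube elements near a point of interest. The geometry and stopping size below are illustrative; a real trainer would also enforce 2:1 balance between neighbouring cubes and update stiffness accordingly.

```python
class Node:
    """Cube element in an octree; leaves are the active FEM elements."""
    def __init__(self, center, size):
        self.center, self.size, self.children = center, size, []

    def refine_near(self, point, min_size):
        """Subdivide cubes containing `point` until min_size is reached."""
        if self.size <= min_size or not self._contains(point):
            return
        h = self.size / 2
        for dx in (-h / 2, h / 2):
            for dy in (-h / 2, h / 2):
                for dz in (-h / 2, h / 2):
                    c = (self.center[0] + dx, self.center[1] + dy, self.center[2] + dz)
                    self.children.append(Node(c, h))
        for ch in self.children:
            ch.refine_near(point, min_size)

    def _contains(self, p):
        return all(abs(p[i] - self.center[i]) <= self.size / 2 for i in range(3))

    def leaves(self):
        if not self.children:
            return [self]
        return [l for ch in self.children for l in ch.leaves()]

root = Node((0.5, 0.5, 0.5), 1.0)
root.refine_near((0.1, 0.1, 0.1), min_size=0.25)  # "instrument" near one corner
```

Only the branch of the tree containing the instrument position is subdivided, so resolution concentrates where deformation plausibility matters most.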
International Development Research Centre (IDRC) Digital Library (Canada)
Dar es Salaam. Durban. Bloemfontein. Antananarivo. Cape Town. Ifrane ... program strategy. A number of CCAA-supported projects have relevance to other important adaptation-related themes such as disaster preparedness and climate.
Notes on the Mesh Handler and Mesh Data Conversion
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Yong; Park, Chan Eok [Korea Power Engineering Company, Daejeon (Korea, Republic of)
2009-10-15
At the outset of the development of the thermal-hydraulic code (THC), efforts have been made to utilize recent technology from computational fluid dynamics. Among these, the unstructured mesh approach was adopted to alleviate the restrictions of the grid handling system. As a natural consequence, a mesh handler (MH) has been developed to manipulate the complex mesh data from the mesh generator. The mesh generator Gambit was chosen at the beginning of the development of the code, but a new mesh generator, Pointwise, was later introduced to obtain more flexible mesh generation capability. The open source code Paraview was chosen as a post processor, which can handle unstructured as well as structured mesh data. The overall data processing system for THC is shown in Figure-1. There are various file formats for saving mesh data on permanent storage media; a couple of dozen file formats are found even in the above-mentioned programs. A competent mesh handler should be able to import and export mesh data in as many formats as possible. In reality, however, two aspects make this competence difficult to achieve. The first is the time and effort required to program the interface code. The second, and the more difficult one, is the fact that many mesh data file formats are proprietary information. In this paper, some experience from the development of the format conversion programs will be presented. The file formats involved are the Gambit neutral format, the Ansys-CFX grid file format, the VTK legacy file format, the Nastran format, and CGNS.
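Of the formats listed, the ASCII VTK legacy format is simple enough to sketch a minimal exporter for. The writer below emits only the unstructured-grid skeleton (points, connectivity, cell types); a production converter would also handle point/cell data arrays, binary encoding, and validation.

```python
def to_vtk_legacy(points, cells):
    """Serialize an unstructured mesh to the ASCII VTK legacy format.
    `cells` is a list of (vtk_type, vertex_ids); type 5 = triangle, 9 = quad."""
    lines = ["# vtk DataFile Version 3.0", "converted mesh", "ASCII",
             "DATASET UNSTRUCTURED_GRID",
             f"POINTS {len(points)} float"]
    lines += [" ".join(f"{c:g}" for c in p) for p in points]
    # CELLS header needs the total integer count: one length prefix per cell
    total = sum(len(ids) + 1 for _, ids in cells)
    lines.append(f"CELLS {len(cells)} {total}")
    lines += [" ".join(str(x) for x in (len(ids), *ids)) for _, ids in cells]
    lines.append(f"CELL_TYPES {len(cells)}")
    lines += [str(t) for t, _ in cells]
    return "\n".join(lines) + "\n"

vtk = to_vtk_legacy(
    points=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    cells=[(9, [0, 1, 2, 3])],          # a single VTK_QUAD
)
```

The resulting string can be written to a `.vtk` file and opened directly in Paraview.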
Adaptive mixed methods for axisymmetric shells
International Nuclear Information System (INIS)
Malta, S.M.C.; Loula, A.F.D.; Garcia, E.L.M.
1989-09-01
The mixed Petrov-Galerkin method is applied to axisymmetric shells with uniform and non uniform meshes. Numerical experiments with a cylindrical shell showed a significant improvement in convergence and accuracy with adaptive meshes. (A.C.A.S.) [pt
A parallel direct solver for the self-adaptive hp Finite Element Method
Paszyński, Maciej R.
2010-03-01
In this paper we present a new parallel multi-frontal direct solver, dedicated to the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates, in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh, and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes memory usage by de-allocating partial LU factorizations computed during the elimination stage and recomputing them for the backward substitution stage, utilizing only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D Direct Current (DC) borehole resistivity measurement simulation problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on a highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p = 1 to p = 9. From the presented experiments it follows that the parallel solver scales well up to the maximum number of utilized processors. The limit for the solver scalability is the maximum sequential part of the algorithm: the computation of the partial LU factorizations over the longest path, from the root of the elimination tree down to the deepest leaf. © 2009 Elsevier Inc. All rights reserved.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented
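The solve-estimate-mark-refine cycle that drives this kind of adaptivity can be written as a generic skeleton. In the sketch below, `estimate` stands in for the goal-oriented (dual-weighted residual) indicator; the 1-D toy indicator weights element size by proximity to a "receiver" at x = 0, and all thresholds and marking fractions are illustrative assumptions.

```python
def adaptive_solve(solve, estimate, mesh, tol, frac=0.3, max_iter=20):
    """Generic solve -> estimate -> mark -> refine loop. Goal-oriented when
    `estimate` weights residuals by a dual solution tied to the receivers."""
    for _ in range(max_iter):
        u = solve(mesh)
        errs = estimate(mesh, u)                 # one indicator per element
        if sum(errs) < tol:
            return mesh, u
        # mark the worst `frac` of the elements and bisect them
        ranked = sorted(range(len(mesh)), key=lambda i: -errs[i])
        marked = set(ranked[:max(1, int(frac * len(mesh)))])
        new = []
        for i, (a, b) in enumerate(mesh):
            if i in marked:
                m = (a + b) / 2
                new += [(a, m), (m, b)]
            else:
                new.append((a, b))
        mesh = new
    return mesh, solve(mesh)

# toy indicator: h^2 weighted by closeness to the receiver at x = 0
est = lambda mesh, u: [(b - a) ** 2 / (1 + 10 * abs((a + b) / 2)) for a, b in mesh]
mesh, _ = adaptive_solve(lambda m: None, est, [(-1.0, 0.0), (0.0, 1.0)], tol=1e-3)
```

Because the indicator is receiver-weighted, refinement concentrates where inaccuracy would corrupt the response at the receiver, exactly the efficiency argument made in the abstract.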
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
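Generating such a Granule description programmatically can be sketched with the standard library's ElementTree. The tag names below are simplified stand-ins inspired by the SPASE vocabulary; the real SPASE schema is XML-namespaced and much richer, and the identifiers and URL are hypothetical.

```python
import xml.etree.ElementTree as ET

def granule(resource_id, parent_id, url, fmt="CDF"):
    """Build a minimal SPASE-style Granule record (simplified tag names)."""
    spase = ET.Element("Spase")
    g = ET.SubElement(spase, "Granule")
    ET.SubElement(g, "ResourceID").text = resource_id   # identifier for this file
    ET.SubElement(g, "ParentID").text = parent_id       # parent data-product record
    src = ET.SubElement(g, "Source")
    ET.SubElement(src, "SourceType").text = "Data"
    ET.SubElement(src, "URL").text = url                # access URL for the file
    ET.SubElement(src, "Format").text = fmt
    return ET.tostring(spase, encoding="unicode")

xml = granule(
    "spase://Example/Granule/AC_H0_MFI/19970902",       # hypothetical IDs
    "spase://Example/NumericalData/AC_H0_MFI",
    "https://cdaweb.gsfc.nasa.gov/data/example.cdf",
)
```

A nightly job like the one described would regenerate or delete such records as the repository's CDF file lists change.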
The refined topological vertex
International Nuclear Information System (INIS)
Iqbal, Amer; Kozcaz, Can; Vafa, Cumrun
2009-01-01
We define a refined topological vertex which depends in addition on a parameter, which physically corresponds to extending the self-dual graviphoton field strength to a more general configuration. Using this refined topological vertex we compute, using geometric engineering, a two-parameter (equivariant) instanton expansion of gauge theories which reproduce the results of Nekrasov. The refined vertex is also expected to be related to Khovanov knot invariants.
International Nuclear Information System (INIS)
Cobb, C.B.
2001-01-01
This article focuses on recent developments in the US refining industry and presents a model for improving the performance of refineries based on the analysis of the refining industry by Cap Gemini Ernst and Young. The identification of refineries at risk of failing, the construction of pipelines for refinery products from Gulf State refineries, mergers and acquisitions, and poor financial performance are discussed. Current challenges concerning the stagnant demand for refinery products, environmental regulations, and shareholder value are highlighted. The structure of the industry, the creation of value in refining, and the search for business models are examined. The top 25 US companies and US refining business groups are listed.
International Development Research Centre (IDRC) Digital Library (Canada)
Nairobi, Kenya. 28 Adapting Fishing Policy to Climate Change with the Aid of Scientific and Endogenous Knowledge. Cap Verde, Gambia,. Guinea, Guinea Bissau,. Mauritania and Senegal. Environment and Development in the Third World. (ENDA-TM). Dakar, Senegal. 29 Integrating Indigenous Knowledge in Climate Risk ...
Tsangaris, S.; Drikakis, D.
The solution of the compressible Euler and Navier-Stokes equations via an upwind finite volume scheme is obtained. For the inviscid fluxes the monotone, upstream centered scheme for conservation laws (MUSCL) has been incorporated into a Riemann solver. The flux vector splitting method of Steger and Warming is used with some modifications. The MUSCL scheme is used for the unfactored implicit equations which are solved by a Newton form and relaxation is performed with a Gauss-Seidel technique. The solution on the fine grid is obtained by iterating first on a sequence of coarse grids and then interpolating the solution up to the next refined grid. Because the distribution of the numerical error is not uniform, the local solution of the equations in regions where the numerical error is large can be obtained. The choice of the partial meshes, in which the iterations will be continued, is determined by the use of an adaptive procedure taking into account some convergence criteria. Reduction of the iterations for the two-dimensional problem is obtained via the local adaptive mesh solution which is expected to be more effective in three-dimensional complex flow computations.
Relational Demonic Fuzzy Refinement
Directory of Open Access Journals (Sweden)
Fairouz Tchier
2014-01-01
We use relational algebra to define a fuzzy refinement order called demonic fuzzy refinement, together with the associated fuzzy operators: fuzzy demonic join (⊔fuz), fuzzy demonic meet (⊓fuz), and fuzzy demonic composition (□fuz). Our definitions and properties are illustrated by examples using the Mathematica software (fuzzy logic).
Quadrilateral finite element mesh coarsening
Staten, Matthew L; Dewey, Mark W; Benzley, Steven E
2012-10-16
Techniques for coarsening a quadrilateral mesh are described. These techniques include identifying a coarsening region within the quadrilateral mesh to be coarsened. Quadrilateral elements along a path through the coarsening region are removed. Node pairs along opposite sides of the path are identified. The node pairs along the path are then merged to collapse the path.
Paszyński, Maciej R.
2013-04-01
This paper describes a direct solver algorithm for a sequence of finite element meshes that are h-refined towards one or several point singularities. For such a sequence of grids, the solver delivers linear computational cost O(N) in terms of CPU time and memory with respect to the number of unknowns N. The linear computational cost is achieved by utilizing the recursive structure provided by the sequence of h-adaptive grids with a special construction of the elimination tree that allows for reutilization of previously computed partial LU (or Cholesky) factorizations over the entire unrefined part of the computational mesh. The reutilization technique reduces the computational cost of the entire sequence of h-refined grids from O(N^2) down to O(N). Theoretical estimates are illustrated with numerical results on two- and three-dimensional model problems exhibiting one or several point singularities. © 2013 Elsevier Ltd. All rights reserved.
Park, Kyoo-Chul; Chhatre, Shreerang S.; Srinivasan, Siddarth; Cohen, Robert E.; McKinley, Gareth H.
2012-11-01
Fog represents a large, untapped source of potable water, especially in arid climates. Various plants and animals use morphological as well as chemical features on their surfaces to harvest this precious resource. In this work, we investigate the influence of surface wettability, structural length scale, and relative openness of the weave on the fog harvesting ability of mesh surfaces. We choose simple woven meshes as a canonical family of model permeable surfaces due to the ability to systematically vary periodicity, porosity, mechanical robustness and ease of fabrication. We measure the fog collecting capacity of a set of meshes with a directed aqueous aerosol stream to simulate a natural foggy environment. Further, we strive to develop and test appropriate scalings and correlations that quantify the collection of water on the mesh surfaces. These design rules can be deployed as an a priori design chart for designing optimal performance meshes for given environmental/operating conditions.
The mesh-LBP: a framework for extracting local binary patterns from discrete manifolds.
Werghi, Naoufel; Berretti, Stefano; del Bimbo, Alberto
2015-01-01
In this paper, we present a novel and original framework, which we dub mesh-local binary pattern (mesh-LBP), for computing local binary-like patterns on a triangular-mesh manifold. This framework can be adapted to all the LBP variants employed in 2D image analysis and thus allows extending the related techniques to mesh surfaces. After describing the foundations, the construction, and the main features of the mesh-LBP, we derive its possible variants and show how they can extend most of the 2D-LBP variants to the mesh manifold. In the experiments, we give evidence of the presence of the uniformity aspect in the mesh-LBP, similar to the one noticed in the 2D-LBP. We also report repeatability experiments that confirm, in particular, the rotation invariance of mesh-LBP descriptors. Furthermore, we analyze the potential of the mesh-LBP for the task of 3D texture classification of triangular-mesh surfaces collected from public data sets. Comparison with state-of-the-art surface descriptors, as well as with 2D-LBP counterparts applied to depth images, also evidences the effectiveness of the proposed framework. Finally, we illustrate the robustness of the mesh-LBP with respect to the class of mesh irregularity typical of 3D surface-digitizer scans.
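The thresholding idea at the core of any LBP-like descriptor can be illustrated as follows; the choice of scalar function (e.g. a curvature estimate) and the ordering of the ring of neighbors are assumptions of this sketch, not the paper's exact construction.

```python
def mesh_lbp_code(center_value, ring_values):
    """Pack into an integer the signs of (neighbor - center) for an
    ordered ring of scalar values around a mesh vertex or facet."""
    code = 0
    for k, v in enumerate(ring_values):
        if v >= center_value:
            code |= 1 << k  # bit k set when neighbor k is >= center
    return code
```

A "uniformity" test (few 0/1 transitions around the ring) could then be layered on top of such codes, mirroring common 2D-LBP practice.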
A multilevel correction adaptive finite element method for Kohn-Sham equation
Hu, Guanghui; Xie, Hehu; Xu, Fei
2018-02-01
In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In this method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is successively improved by solving derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
Replication, refinement & reachability
DEFF Research Database (Denmark)
Debois, Søren; Hildebrandt, Thomas T.; Slaats, Tijs
2018-01-01
We explore the complexity of reachability and run-time refinement under safety and liveness constraints in event-based process models. Our study is framed in the DCR? process language, which supports modular specification through a compositional operational semantics. DCR? encompasses the “Dynamic Condition Response (DCR) graphs” declarative process model for analysis, execution and safe run-time refinement of process-aware information systems, including replication of sub-processes. We prove that event-reachability and refinement are NP-hard for DCR? processes without replication…
Ocean modeling on unstructured meshes
Danilov, S.
2013-09-01
Unstructured meshes are common in coastal modeling but are still rarely used for modeling the large-scale ocean circulation. Existing and new projects aim at changing this situation by proposing models enabling a regional focus (multiresolution) in global setups, without nesting and open boundaries. Among them, finite-volume models using the C-grid discretization on Voronoi-centroidal meshes or a cell-vertex quasi-B-grid discretization on triangular meshes work well and offer the multiresolution functionality at the price of being 2 to 4 times slower per degree of freedom than structured-mesh models. This is already sufficient for many practical tasks and will be further improved as the number of vertical layers is increased. Approaches based on the finite-element method, whether in use or proposed, are as a rule slower at present. Most staggered discretizations on triangular or Voronoi meshes allow spurious modes which are difficult to filter on unstructured meshes. Ongoing research seeks ways to handle them and explores new approaches in which such modes are absent. Issues of numerical efficiency and accurate transport schemes remain important, and the question of parameterizations for multiresolution meshes is hardly explored at all. The review summarizes recent developments, the main practical result of which is the emergence of multiresolution models for simulating the large-scale ocean circulation.
A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations
Omelchenko, Yuri
2007-04-01
A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrichs-Lewy (CFL) restriction on the time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
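The asynchronous, event-queue flavor of this scheme can be conveyed with a toy integrator in which each cell advances with its own local step and cells are processed in global time order. This is only an illustrative sketch (scalar state, constant nonzero per-cell rates), not the DES scheduler described in the abstract.

```python
import heapq

def async_integrate(state, rates, t_end, cfl=0.5):
    """Advance each cell i with its own local step dt_i ~ cfl/|rate_i|,
    processing cells in order of their local clocks via an event queue.
    Rates are assumed nonzero."""
    clocks = [0.0] * len(state)
    queue = [(0.0, i) for i in range(len(state))]
    heapq.heapify(queue)
    while queue:
        t, i = heapq.heappop(queue)
        if t >= t_end:
            continue  # this cell has reached the end time
        dt = min(cfl / abs(rates[i]), t_end - t)  # local "CFL-like" step
        state[i] += rates[i] * dt                 # explicit local update
        clocks[i] = t + dt
        heapq.heappush(queue, (clocks[i], i))
    return state
```

Two cells with rates 1 and 2 take 2 and 4 local steps respectively, with no global synchronization step imposed on either.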
Optimization-based Fluid Simulation on Unstructured Meshes
DEFF Research Database (Denmark)
Misztal, Marek Krzysztof; Bridson, Robert; Erleben, Kenny
2010-01-01
…can also be represented implicitly as a set of faces separating tetrahedra marked as inside from the ones marked as outside. This representation introduces insignificant and controllable numerical diffusion, allows robust topological adaptivity, and provides both a volumetric finite element mesh…
Aranha: a 2D mesh generator for triangular finite elements
International Nuclear Information System (INIS)
Fancello, E.A.; Salgado, A.C.; Feijoo, R.A.
1990-01-01
A method for generating unstructured meshes for linear and quadratic triangular finite elements is described in this paper. Some topics on the C language data structure used in the development of the program Aranha are also presented. The applicability for adaptive remeshing is shown and finally several examples are included to illustrate the performance of the method in irregular connected planar domains. (author)
Refining population health comparisons
DEFF Research Database (Denmark)
Hussain, M. Azhar; Jørgensen, Mette Møller; Østerdal, Lars Peter Raahave
2016-01-01
How to determine if a population group has better overall (multidimensional) health status than another is a central question in the health and social sciences. We apply a multidimensional first order dominance concept that does not rely on assumptions about the relative importance of each dimension. In particular, we show how one can explore the “depth” of dominance relations by gradually refining the health dimensions to see which dominance relations persist. We analyze a Danish health survey with many health indicators. These are initially collapsed into a single composite health dimension and then refined to four, seven, and ten health dimensions, each representing an (increasingly refined) area of health. Overall we find that younger age groups dominate older age groups in up to four dimensions, but no dominance relations are present with a more refined view of health. Comparing education groups…
Evolution and Refinement with Endogenous Mistake Probabilities
van Damme, E.E.C.; Weibull, J.
1999-01-01
Bergin and Lipman (1996) show that the refinement effect from the random mutations in the adaptive population dynamics of Kandori, Mailath and Rob (1993) and Young (1993) is due to restrictions on how these mutation rates vary across population states. We here model mutation rates as endogenously…
Benjes, Friederike
1996-01-01
Two problems on action refinement are considered. First, we treat the problem that there may exist configurations in the product $\mathcal{E}_1 \|_A \mathcal{E}_2$ of event structures that do not map to configurations of the individual event structures under projection. [Sch91] and [CZ89] showed that languages using only the operators $+$, $;$ and $\|_A$ do not create event structures of that type. We show the same for languages extended with a refinement operator. In the second part, the connection betwee…
3D CSEM inversion based on goal-oriented adaptive finite element method
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach, where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Moreover, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with…
A novel surface mesh deformation method for handling wing-fuselage intersections
Directory of Open Access Journals (Sweden)
Mario Jaime Martin-Burgos
2017-02-01
Full Text Available This paper describes a method for mesh adaptation in the presence of intersections, such as wing-fuselage. Automatic optimization tools using Computational Fluid Dynamics (CFD) simulations face the problem of adapting the computational grid upon deformations of the boundary surface. When mesh regeneration is not feasible, due to the high cost of building up the computational grid, mesh deformation techniques are considered a cheap approach for adapting the mesh to changes in the geometry. Mesh adaptation is a well-known subject in the literature; however, very little work deals with moving intersections. Without a proper treatment of intersections, the use of automatic optimization methods for aircraft design is limited to individual components. The proposed method takes advantage of the CAD description, which usually comes in the form of Non-Uniform Rational B-Spline (NURBS) patches. This paper describes an algorithm to recalculate the intersection line between two parametric surfaces. Then, the surface mesh is adapted to the moving intersection in parametric coordinates. Finally, the deformation is propagated through the volumetric mesh. The proposed method is tested with the DLR F6 wing-body configuration.
Mersilene mesh in premaxillary augmentation.
Foda, Hossam M T
2005-01-01
Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in which Mersilene mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersilene mesh were used: the Mersilene roll technique for cases with central symmetric deficiencies, and the Mersilene packing technique for cases with asymmetric deficiencies. Premaxillary augmentation with Mersilene mesh proved to be technically simple, easy to perform, and not associated with any complications. Periodic follow-up evaluation over a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained, with no clinically detectable resorption of the mesh implant.
A Parallel Geometry and Mesh Infrastructure for Explicit Phase Tracking in Multiphase Problems
Yang, Fan; Chandra, Anirban; Zhang, Yu; Shams, Ehsan; Tendulkar, Saurabh; Nastasia, Rocco; Oberai, Assad; Shephard, Mark; Sahni, Onkar
2017-11-01
Numerical simulations with explicit phase/interface tracking in a multiphase medium impact many applications. One such example is a combusting solid involving phase change. In these problems explicit tracking is crucial to accurately model and capture the interface physics, for example, discontinuous fields at the interface such as density or normal velocity. A necessary capability in an explicit approach is the evolution of the geometry and mesh during the simulation. In this talk, we will present an explicit approach that employs a combination of mesh motion and mesh modification on distributed/partitioned meshes. At the interface, a Lagrangian frame is employed on a discrete geometric description, while an arbitrary Lagrangian-Eulerian (ALE) frame is used elsewhere with arbitrary mesh motion. Mesh motion is based on the linear elasticity analogy that is applied until mesh deformation leads to undesirable cells, at which point local mesh modification is used to adapt the mesh. In addition, at the interface the structure and normal resolution of the highly anisotropic layered elements is adaptively maintained. We will demonstrate our approach for problems with large interface motions. Topological changes in the geometry (of any phase) will be considered in the future. This work is supported by the U.S. Army Grants W911NF1410301 and W911NF16C0117.
GENERATION OF IRREGULAR HEXAGONAL MESHES
Directory of Open Access Journals (Sweden)
Vlasov Aleksandr Nikolaevich
2012-07-01
Decomposition is performed in a constructive way and, as an option, it involves a meshless representation. Further, this mapping method is used to generate the calculation mesh. In this paper, the authors analyze different cases of mapping onto simply connected and bi-connected canonical domains. They present forward and backward mapping techniques. The potential application of these techniques to the generation of nonuniform meshes within the framework of the asymptotic homogenization theory is also demonstrated, to assess and predict effective characteristics of heterogeneous materials (composites).
Method and system for mesh network embedded devices
Wang, Ray (Inventor)
2009-01-01
A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.
Goal-Oriented Self-Adaptive hp Finite Element Simulation of 3D DC Borehole Resistivity Simulations
Calo, Victor M.
2011-05-14
In this paper we present a goal-oriented, self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including the automatic hp goal-oriented adaptivity and the solver, work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with a shared computational domain, as well as its efficiency measurements. We apply the described methodology to the three-dimensional simulation of the borehole resistivity measurement of direct current through casing in the presence of invasion.
International Nuclear Information System (INIS)
Constancio, Silva
2006-01-01
In 2004, refining margins showed a clear improvement that persisted throughout the first three quarters of 2005. This enabled oil companies to post significantly higher earnings for their refining activity in 2004 compared to 2003, with the results of the first half of 2005 confirming this trend. As for petrochemicals, despite a steady rise in the naphtha price, higher cash margins enabled a turnaround in 2004 as well as a clear improvement in oil company financial performance that should continue in 2005, judging by the net income figures reported for the first half-year. Despite this favorable business environment, capital expenditure in refining and petrochemicals remained at a low level, especially investment in new capacity, but a number of projects are being planned for the next five years. (author)
International Nuclear Information System (INIS)
Singh, I.J.
2002-01-01
The author discusses the history of the Indian refining industry and ongoing developments under the headings: the present state; refinery configuration; Indian capabilities for refinery projects; and reforms in the refining industry. Tables list India's petroleum refineries, giving location and capacity; new refinery projects together with location and capacity; and expansion projects of Indian petroleum refineries. The Indian refining industry has undergone substantial expansion as well as technological change over the past years. There has been progressive technology upgrading, improved energy efficiency, better environmental control and improved capacity utilisation. Major reform processes have been set in motion by the government of India: converting the refining industry from a centrally controlled, public-sector-dominated industry to a delicensed regime in a competitive market economy with the introduction of a liberal exploration policy; dismantling the administered price mechanism; and a 25-year hydrocarbon vision. (UK)
Refining margins: recent trends
International Nuclear Information System (INIS)
Baudoin, C.; Favennec, J.P.
1999-01-01
Despite a business environment that was globally mediocre, due primarily to the Asian crisis and a mild winter in the northern hemisphere, the signs of improvement noted in the refining activity in 1996 were borne out in 1997. But the situation is not yet satisfactory in this sector: the low return on invested capital and the financing of environmental protection expenditure are giving cause for concern. In 1998, the drop in crude oil prices and the concomitant fall in petroleum product prices were ultimately rather favorable to margins. Two elements tended to put a damper on this relative optimism. First of all, margins continue to be extremely volatile; secondly, the worsening of the economic and financial crisis observed during the summer made for a sharp decline in margins in all geographic regions, especially Asia. Since the beginning of 1999, refining margins have been weak and utilization rates of refining capacities have decreased. (authors)
Unterweger, K.
2015-01-01
© Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D (Full Shallow Water Overland Flows) is our software of choice here, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before providing results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.
International Nuclear Information System (INIS)
2008-01-01
Investment rallied in 2007, and many distillation and conversion projects likely to reach the industrial stage were announced. With economic growth sustained in 2006 and still pronounced in 2007, oil demand remained strong, especially in emerging countries, and refining margins stayed high. Despite these favorable business conditions, tensions persisted in the refining sector, which has fallen far behind in terms of investment in refinery capacity. It will take renewed efforts over a long period to catch up. Looking at recent events that have affected the economy in many countries (e.g. the sub-prime crisis), prudence remains advisable.
Verborgh, Ruben
2013-01-01
The book is styled as a cookbook, containing recipes combined with free datasets that will turn readers into proficient OpenRefine users in the fastest possible way. This book is targeted at anyone who works with or handles a large amount of data. No prior knowledge of OpenRefine is required, as we start from the very beginning and gradually reveal more advanced features. You don't even need your own dataset, as we provide example data to try out the book's recipes.
Tetrahedral meshing via maximal Poisson-disk sampling
Guo, Jianwei
2016-02-15
In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
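The Poisson-disk property itself is easy to state in code. The brute-force dart thrower below enforces the minimum-distance constraint in the unit square; a true *maximal* sampling, as used by the authors, additionally requires tracking the still-uncovered region, which this sketch omits.

```python
import random

def poisson_disk_sample(n_darts, r, seed=0):
    """Accept uniformly random candidate points in [0,1]^2 that keep
    distance >= r to every previously accepted point (dart throwing)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_darts):
        p = (rng.random(), rng.random())
        # brute-force check against all accepted points
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r
               for q in pts):
            pts.append(p)
    return pts
```

In the 3D pipeline of the abstract, such samples on the boundary and interior would then feed a Delaunay (or regular) triangulation to extract the tetrahedra.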
Connectivity editing for quadrilateral meshes
Peng, Chihan
2011-12-01
We propose new connectivity editing operations for quadrilateral meshes with the unique ability to explicitly control the location, orientation, type, and number of the irregular vertices (valence not equal to four) in the mesh while preserving sharp edges. We provide theoretical analysis on what editing operations are possible and impossible and introduce three fundamental operations to move and re-orient a pair of irregular vertices. We argue that our editing operations are fundamental, because they only change the quad mesh in the smallest possible region and involve the fewest irregular vertices (i.e., two). The irregular vertex movement operations are supplemented by operations for the splitting, merging, canceling, and aligning of irregular vertices. We explain how the proposed high-level operations are realized through graph-level editing operations such as quad collapses, edge flips, and edge splits. The utility of these mesh editing operations are demonstrated by improving the connectivity of quad meshes generated from state-of-art quadrangulation techniques.
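A small helper makes the notion of irregular vertices concrete: for an interior vertex of a quad mesh, the number of incident quads equals its valence. The flat tuple-of-ids mesh representation is an assumption of this sketch, and boundary vertices are not distinguished from interior ones.

```python
from collections import Counter

def irregular_vertices(quads):
    """Return {vertex id: incidence count} for vertices whose
    quad-incidence count differs from four (irregular when interior).
    Each quad is a tuple of 4 vertex ids in cyclic order."""
    valence = Counter(v for q in quads for v in q)
    return {v: k for v, k in valence.items() if k != 4}
```

On a regular 2x2 block of quads, only the central vertex has four incident quads; the editing operations in the abstract move and cancel exactly the vertices this helper would flag.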
International Nuclear Information System (INIS)
Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D
2005-01-01
We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications.
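A much simpler relative of such layouts is ordering mesh vertices along a space-filling curve; the Morton (Z-order) key below illustrates the locality idea, though the paper's cache-oblivious metric and multilevel minimization go well beyond this.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of the quantized coordinates x, y in [0,1]
    to form a Z-order key; sorting vertices by this key groups
    spatially close vertices together in memory."""
    xi = int(x * ((1 << bits) - 1))
    yi = int(y * ((1 << bits) - 1))
    key = 0
    for b in range(bits):
        key |= ((xi >> b) & 1) << (2 * b)      # x bits on even positions
        key |= ((yi >> b) & 1) << (2 * b + 1)  # y bits on odd positions
    return key
```

Reordering a vertex array (and renumbering element connectivity accordingly) by such keys already reduces cache misses for coherent traversals, which is the effect the paper's layouts optimize directly.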
International Nuclear Information System (INIS)
Marion, Pierre; Saint-Antonin, Valerie
2011-11-01
The major uncertainty characterizing the global energy landscape has a particular impact on transport, which remains the virtually exclusive bastion of the oil industry. The industry must therefore respond to increasing demand for mobility against a background marked by the emergence of alternatives to oil-based fuels and the need to reduce emissions of pollutants and greenhouse gases (GHG). It is in this context that the 'Refining 2030' study conducted by IFP Energies Nouvelles (IFPEN) forecasts what the global supply and demand balance for oil products could be, and highlights the type and geographical location of the refinery investment required. Our study shows that the bulk of the refining investment will be concentrated in the emerging countries (mainly those in Asia), whilst the areas historically strong in refining (Europe and North America) face reductions in capacity. In this context, the drastic reduction in the sulphur specification of bunker oil emerges as a structural issue for European refining, in the same way as increasingly restrictive regulation of refinery CO2 emissions (quotas/taxation) and the persistent imbalance between gasoline and diesel fuels. (authors)
Acquiring Plausible Predications from MEDLINE by Clustering MeSH Annotations.
Miñarro-Giménez, Jose Antonio; Kreuzthaler, Markus; Bernhardt-Melischnig, Johannes; Martínez-Costa, Catalina; Schulz, Stefan
2015-01-01
The massive accumulation of biomedical knowledge is reflected by the growth of the literature database MEDLINE, with over 23 million bibliographic records. All records are manually indexed by MeSH descriptors, many of them refined by MeSH subheadings. We use subheading information to cluster types of MeSH descriptor co-occurrences in MEDLINE by processing co-occurrence information provided by the UMLS. The goal is to infer plausible predicates for each resulting cluster. In an initial experiment this was done by grouping disease-pharmacologic substance co-occurrences into six clusters. Then, a domain expert manually performed the assignment of meaningful predicates to the clusters. The mean accuracy of the best ten generated biomedical facts of each cluster was 85%. This result supports the evidence of the potential of MeSH subheadings for extracting plausible medical predications from MEDLINE.
Refinement by interface instantiation
DEFF Research Database (Denmark)
Hallerstede, Stefan; Hoang, Thai Son
2012-01-01
Decomposition is a technique to separate the design of a complex system into smaller sub-models, which improves scalability and team development. In the shared-variable decomposition approach for Event-B, sub-models share external variables and communicate through external events, which cannot be easily refined. Our first contribution hence is a proposal for a new construct called interface that encapsulates the external variables, along with a mechanism for interface instantiation. Using the new construct and mechanism, external variables can be refined consistently. Our second contribution is an approach for verifying the correctness of Event-B extensions using the supporting Rodin tool. We illustrate our approach by proving the correctness of interface instantiation.
International Nuclear Information System (INIS)
Yamaguchi, N.D.
1998-01-01
The paper reviews the history, present position and future prospects of the petroleum industry in the USA. The main focus is on supply and demand, the high quality of the products, refinery capacity and product trade balances. Diagrams show historical trends in output, product demand, demand for transport fuels and oil, refinery capacity, refinery closures, and imports and exports. Some particularly salient points brought out were (i) production of US crude shows a marked downward trend but imports of crude will continue to increase, (ii) product demand will continue to grow even though the levels are already high, (iii) the demand is dominated by those products that typically yield the highest income for the refiner, (i.e. high quality transport fuels for environmental compliance), (iv) refinery capacity has decreased since 1980 and (v) refining will continue to have financial problems but will still be profitable. (UK)
REFINING FLUORINATED COMPOUNDS
Linch, A.L.
1963-01-01
This invention relates to a method of refining a liquid perfluorinated hydrocarbon oil containing fluorocarbons of 12 to 28 carbon atoms per molecule by distilling between 150 deg C and 300 deg C at 10 mm Hg absolute pressure. The perfluorinated oil is washed with a chlorinated lower aliphatic hydrocarbon, which maintains a separate liquid phase when mixed with the oil. Impurities detrimental to the stability of the oil are extracted by the chlorinated lower aliphatic hydrocarbon. (AEC)
International Nuclear Information System (INIS)
Calvet, B.
1993-01-01
Over recent years, the refining industry has had to grapple with a growing burden of environmental and safety regulations concerning not only its plants and other facilities, but also its end products. At the same time, it has had to bear the effects of the reduction of the special status that used to apply to petroleum, and the consequences of economic freedom, to which we should add, as specifically concerns the French market, the impact of energy policy and the pro-nuclear option. The result is a drop in heavy fuel oil from 36 million tonnes per year in 1973 to 6.3 million in 1992, and in home-heating fuel from 37 to 18 million per year. This fast-moving market is highly competitive. The French market in particular is wide open to imports, but the refining companies are still heavy exporters for those products with high added-value, like lubricants, jet fuel, and lead-free gasolines. The competition has led the refining companies to commit themselves to quality, and to publicize their efforts in this direction. This is why the long-term perspectives for petroleum fuels are still wide open. This is supported by the probable expectation that the goal of economic efficiency is likely to soften the effects of the energy policy, which penalizes petroleum products, in that they have now become competitive again. In the European context, with the challenge of environmental protection and the decline in heavy fuel outlets, French refining has to keep on improving the quality of its products and plants, which means major investments. The industry absolutely must return to a more normal level of profitability, in order to sustain this financial effort, and generate the prosperity of its high-performance plants and equipment. 1 fig., 5 tabs
International Nuclear Information System (INIS)
2008-01-01
For oil companies to invest in new refining and conversion capacity, favorable conditions over time are required. In other words, refining margins must remain high and demand sustained over a long period. That was the situation prevailing before the onset of the financial crisis in the second half of 2008. The economic conjuncture has taken a substantial turn for the worse since then and the forecasts for 2009 do not look bright. Oil demand is expected to decrease in the OECD countries and to grow much more slowly in the emerging countries. It is anticipated that refining margins will fall in 2009 - in 2008, they slipped significantly in the United States - as a result of increasingly sluggish demand, especially for light products. The next few months will probably be unfavorable to investment. In addition to a gloomy business outlook, there may also be a problem of access to sources of financing. As for investment projects, a mainstream trend has emerged in the last few years: a shift away from the regions that have historically been most active (the OECD countries) towards certain emerging countries, mostly in Asia or the Middle East. The new conjuncture will probably not change this trend
Proceedings of the workshop on adaptive grid methods for fusion plasmas
Energy Technology Data Exchange (ETDEWEB)
Koniges, A.E.; Craddock, G.G.; Schnack, D.D.; Strauss, H.R.
1995-07-01
The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion of the issues of dynamically adaptive gridding. There were three invited talks on adaptive-gridding experience in related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that disperses uniformly the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted tokamak SOL modeling and MHD simulation problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately on the energy databases.
Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems
Omelchenko, Yuri; Karimabadi, Homayoun
2005-10-01
Multi-scale plasmas pose a formidable computational challenge. Explicit time-stepping models suffer from the global CFL restriction. Efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration, etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology under two assumptions: (1) a local time increment dt for a discrete quantity f can be expressed in terms of a physically meaningful quantum value df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous flux-conservative updates of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions and results in robust, fast, parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.
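The two stated assumptions (a physically meaningful quantum df per quantity, and an update only when the change reaches df) can be illustrated with a minimal sketch. This is not the authors' code; the relaxation problem, the function name and the event-queue layout are assumptions chosen purely for illustration. Each cell schedules its own next update from its local rate, so no global time step is ever imposed:

```python
import heapq

def des_relax(f, rate, df, t_end):
    """Event-driven relaxation toward zero, df/dt = -rate*f, as a sketch of
    quantum-based discrete-event time stepping: cell i is touched only when
    its predicted change reaches the quantum df, at its own local time."""
    events = []  # priority queue of (next event time, cell index)
    for i in range(len(f)):
        slope = -rate[i] * f[i]
        if slope != 0.0:
            heapq.heappush(events, (df / abs(slope), i))
    while events:
        t, i = heapq.heappop(events)
        if t > t_end:            # earliest pending event is past the horizon
            break
        slope = -rate[i] * f[i]
        f[i] += df if slope > 0 else -df   # apply exactly one quantum
        slope = -rate[i] * f[i]
        if slope != 0.0:         # schedule this cell's next event locally
            heapq.heappush(events, (t + df / abs(slope), i))
    return f

# fast cell (rate 10) fires many events; slow cell (rate 1) fires few
f = des_relax([1.0, 1.0], [1.0, 10.0], df=0.1, t_end=1.0)
```

The fast cell relaxes essentially to zero within the time horizon while the slow cell advances in only a handful of events, which is the asynchronous, locally paced update the abstract describes.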
Towards automated crystallographic structure refinement with phenix.refine
Energy Technology Data Exchange (ETDEWEB)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Echols, Nathaniel; Headd, Jeffrey J.; Moriarty, Nigel W. [Lawrence Berkeley National Laboratory, One Cyclotron Road, MS64R0121, Berkeley, CA 94720 (United States); Mustyakimov, Marat; Terwilliger, Thomas C. [Los Alamos National Laboratory, M888, Los Alamos, NM 87545 (United States); Urzhumtsev, Alexandre [CNRS–INSERM–UdS, 1 Rue Laurent Fries, BP 10142, 67404 Illkirch (France); Université Henri Poincaré, Nancy 1, BP 239, 54506 Vandoeuvre-lès-Nancy (France); Zwart, Peter H. [Lawrence Berkeley National Laboratory, One Cyclotron Road, MS64R0121, Berkeley, CA 94720 (United States); Adams, Paul D. [Lawrence Berkeley National Laboratory, One Cyclotron Road, MS64R0121, Berkeley, CA 94720 (United States); University of California Berkeley, Berkeley, CA 94720 (United States)
2012-04-01
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods.
Fitting polynomial surfaces to triangular meshes with Voronoi Squared Distance Minimization
Nivoliers, Vincent
2011-12-01
This paper introduces Voronoi Squared Distance Minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function between the surface and the input mesh (SDM). This objective function is a generalization of Centroidal Voronoi Tesselation (CVT), and can be minimized by a quasi-Newton solver. VSDM naturally adapts the orientation of the mesh to best approximate the input, without estimating any differential quantities. Therefore it can be applied to triangle soups or surfaces with degenerate triangles, topological noise and sharp features. Applications of fitting quad meshes and polynomial surfaces to input triangular meshes are demonstrated.
Fitting polynomial surfaces to triangular meshes with Voronoi squared distance minimization
Nivoliers, Vincent
2012-11-06
This paper introduces Voronoi squared distance minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function between the surface and the input mesh (SDM). This objective function is a generalization of the one minimized by centroidal Voronoi tessellation, and can be minimized by a quasi-Newton solver. VSDM naturally adapts the orientation of the mesh elements to best approximate the input, without estimating any differential quantities. Therefore, it can be applied to triangle soups or surfaces with degenerate triangles, topological noise and sharp features. Applications of fitting quad meshes and polynomial surfaces to input triangular meshes are demonstrated. © 2012 Springer-Verlag London.
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Energy Technology Data Exchange (ETDEWEB)
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
OAK-B135. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small-scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three-dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely, it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
On the Mesh Array for Matrix Multiplication
Kak, Subhash
2010-01-01
This article presents new properties of the mesh array for matrix multiplication. In contrast to the standard array, which requires 3n-2 steps to complete its computation, the mesh array requires only 2n-1 steps. Symmetries of the values computed by the mesh array are presented, which enhance the efficiency of the array for specific applications. When multiplying symmetric matrices, the results are obtained in 3n/2+1 steps. The mesh array is also examined for its application as a scrambling system.
Lobos, Claudio; González, Eugenio
2015-12-01
This article introduces a meshing technique focused on fast and real-time simulation in a biomedical context. We describe our algorithm in detail: it starts from a basic octree that respects the constraints imposed by the simulation, and then mixed-element patterns are applied over transitions between coarse and fine regions. The use of surface patterns, also composed of mixed elements, allows us to better represent curved domains, decreasing the odds of creating invalid elements while adding as few nodes as possible. In contrast with other meshing techniques, we let the user define regions of greater refinement, and as a consequence of that refinement, we add as few nodes as possible to produce a mesh that is topologically correct. Therefore, our meshing technique gives more control over the number of nodes in the final mesh. We show several examples where the quality of the final mesh is acceptable, even without using quality filters. We believe that this new meshing technique is a step in the correct direction toward real-time simulation in the biomedical field. Copyright © 2015 John Wiley & Sons, Ltd.
Distribution feeder reconfiguration with refined genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Lin, W.-M.; Cheng, F.-S. [National Sun Yat-Sen University, Kaohsiung (China). Dept. of Electrical Engineering; Tsay, M.-T. [Cheng-Shiu Institute of Technology, Kaohsiung (China). Dept. of Electrical Engineering
2000-11-01
A refined genetic algorithm (RGA) for distribution feeder reconfiguration to reduce losses is presented. The problem is optimized in a stochastic searching manner similar to that of the conventional GA. The initial population is determined by opening the switches with the lowest current in every mesh derived from the optimal power flow (OPF) with all switches closed. Solutions provided by the OPF are generally optimal or near-optimal for most cases, so premature convergence could occur. To avoid this, the conventional crossover and mutation scheme was refined with a competition mechanism, so the dilemma of choosing proper probabilities for crossover and mutation can be avoided. The two processes were also combined into one to save computation time. Tabu lists with heuristic rules were also employed in the searching process to enhance performance. The new approach provides an overall switching decision instead of a successive pattern, which tends to converge to a local optimum. Many tests were conducted and the results have shown that the RGA has advantages over many other previously developed algorithms. (author)
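One plausible reading of such a competition mechanism, sketched below on a toy one-max problem, is that crossover and mutation offspring compete directly with their parents instead of being applied with fixed probabilities. The problem, function names and parameters here are illustrative assumptions only; the actual RGA also relies on the OPF-seeded initial population and tabu lists described in the abstract.

```python
import random

def competitive_ga(fitness, n_bits=16, pop_size=20, generations=60, seed=1):
    """Toy GA where crossover and mutation offspring compete directly with
    their parents for a population slot, so no crossover/mutation
    probabilities need to be tuned (one interpretation of the paper's
    competition mechanism, demonstrated on a one-max bitstring problem)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = rng.sample(pop, 2)
            cut = rng.randrange(1, n_bits)
            child_x = p1[:cut] + p2[cut:]      # crossover offspring
            child_m = p1[:]                    # mutation offspring
            child_m[rng.randrange(n_bits)] ^= 1
            # competition: the fittest of parents and offspring survives
            new_pop.append(max((p1, p2, child_x, child_m), key=fitness))
        pop = new_pop
    return max(pop, key=fitness)

best = competitive_ga(sum)  # one-max: fitness = number of 1 bits
```

Because every surviving individual is at least as fit as its sampled parents, the search is elitist per slot and no separate choice of crossover or mutation probability arises.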
The mesh network protocol evaluation and development
Pei, Ping; Petrenko, Y. N.
2015-01-01
In this paper, we introduce the evaluation and development of mesh network protocols. Mesh networks use specialized protocols, and we can readily see how different protocols are used in a mesh network. In addition, multi-hop routing protocols can provide robustness and load balancing to communication in wireless mesh networks.
Mesh network achieves its function on Linux
Pei, Ping; Petrenko, Y. N.
2015-01-01
In this paper, we introduce the evaluation and development of a mesh network protocol. We can readily understand the Linux operating principles in use in a mesh network. In addition, we provide a graph showing the package routing path. Finally, testing shows that the AODV mesh protocol satisfies the performance requirements of the Linux platform.
User Manual for the PROTEUS Mesh Tools
Energy Technology Data Exchange (ETDEWEB)
Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-09-19
PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given an input mesh format acceptable to PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e. mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input is specific to a given mesh tool (such as .axial
Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan
2014-09-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints, even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å² as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements, an outcome obtained with a routinely achievable X-ray data resolution of 0.65 Å.
International Nuclear Information System (INIS)
Benazzi, E.
2003-01-01
Down sharply in 2002, refining margins showed a clear improvement in the first half-year of 2003. As a result, the earnings reported by oil companies for financial year 2002 were significantly lower than in 2001, but the prospects are brighter for 2003. In the petrochemicals sector, slow demand and higher feedstock prices eroded margins in 2002, especially in Europe and the United States. The financial results for the first part of 2003 seem to indicate that sector profitability will not improve before 2004. (author)
International Nuclear Information System (INIS)
Benazzi, E.; Alario, F.
2004-01-01
In 2003, refining margins showed a clear improvement that continued throughout the first three quarters of 2004. Oil companies posted significantly higher earnings in 2003 compared to 2002, with the results of first quarter 2004 confirming this trend. Due to higher feedstock prices, the implementation of new capacity and more intense competition, the petrochemicals industry was not able to boost margins in 2003. In such difficult business conditions, aggravated by soaring crude prices, the petrochemicals industry is not likely to see any improvement in profitability before the second half of 2004. (author)
The Village Telco project: a reliable and practical wireless mesh telephony infrastructure
Directory of Open Access Journals (Sweden)
Gardner-Stephen Paul
2011-01-01
VoIP (Voice over IP) over mesh networks could be a potential solution to the high cost of making phone calls in most parts of Africa. The Village Telco (VT) is an easy-to-use and scalable VoIP-over-meshed-WLAN (Wireless Local Area Network) telephone infrastructure. It uses a mesh network of mesh potatoes to form a peer-to-peer network that relays telephone calls without landlines or cell phone towers. This paper discusses the Village Telco infrastructure, how it addresses the numerous difficulties associated with wireless mesh networks, and its efficient deployment for VoIP services in some communities around the globe. The paper also presents the architecture and functions of a mesh potato, a novel combined analog telephone adapter (ATA) and WiFi access point that routes calls. Lastly, the paper presents the results of preliminary tests conducted on a mesh potato. The preliminary results indicate very good performance and user acceptance of the mesh potatoes, and show that the infrastructure is deployable in severe and under-resourced environments as a means of making cheap phone calls and providing Internet and IP-based services. As a result, the VT project contributes to bridging the digital divide in developing areas.
22nd International Meshing Roundtable
Staten, Matthew
2014-01-01
This volume contains the articles presented at the 22nd International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on Oct 13-16, 2013 in Orlando, Florida, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics and visualization.
21st International Meshing Roundtable
Weill, Jean-Christophe
2013-01-01
This volume contains the articles presented at the 21st International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on October 7–10, 2012 in San Jose, CA, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics, and visualization.
Voltammetry at micro-mesh electrodes
Directory of Open Access Journals (Sweden)
Wadhawan Jay D.
2003-01-01
The voltammetry at three micro-mesh electrodes is explored. It is found that at sufficiently short experimental durations, the micro-mesh working electrode first behaves as an ensemble of microband electrodes, then follows the behaviour anticipated for an array of diffusion-independent micro-ring electrodes of the same perimeter as individual grid squares within the mesh. During prolonged electrolysis, the micro-mesh electrode follows the behaviour anticipated theoretically for a cubically packed, partially blocked electrode. Application of the micro-mesh electrode to the electrochemical determination of carbon dioxide in DMSO electrolyte solutions is further illustrated.
On the flexibility of Kokotsakis meshes
Karpenkov, Oleg
2008-01-01
In this paper we study geometric, algebraic, and computational aspects of flexibility and infinitesimal flexibility of Kokotsakis meshes. A Kokotsakis mesh is a mesh that consists of a face in the middle and a certain band of faces attached to the middle face along its perimeter. In particular, any 3x3 mesh made of quadrangles is a Kokotsakis mesh. We express the infinitesimal flexibility condition in terms of the Ceva and Menelaus theorems. Further, we study semi-algebraic properties of the set of fl...
Macromolecular crystallographic structure refinement
Directory of Open Access Journals (Sweden)
Afonine, Pavel V.
2015-04-01
Model refinement is a key step in crystallographic structure determination that ensures that the final atomic structure of the macromolecule represents the measured diffraction data as well as possible. Several decades of effort have gone into developing methods and computational tools to streamline this step. In this manuscript we provide a brief overview of the major milestones of crystallographic computing and of methods development pertinent to structure refinement.
Petroleum refining industry in China
International Nuclear Information System (INIS)
Walls, W.D.
2010-01-01
The oil refining industry in China has faced rapid growth in oil imports of increasingly sour grades of crude with which to satisfy growing domestic demand for a slate of lighter and cleaner finished products sold at subsidized prices. At the same time, the world petroleum refining industry has been moving from one that serves primarily local and regional markets to one that serves global markets for finished products, as world refining capacity utilization has increased. Globally, refined product markets are likely to experience continued globalization until refining investments significantly expand capacity in key demand regions. We survey the oil refining industry in China in the context of the world market for heterogeneous crude oils and growing world trade in refined petroleum products.
SHARP/PRONGHORN Interoperability: Mesh Generation
Energy Technology Data Exchange (ETDEWEB)
Avery Bingham; Javier Ortensi
2012-09-01
Progress toward collaboration between the SHARP and MOOSE computational frameworks has been demonstrated through sharing of mesh generation and ensuring mesh compatibility of both tools with MeshKit. MeshKit was used to build a three-dimensional, full-core very high temperature reactor (VHTR) reactor geometry with 120-degree symmetry, which was used to solve a neutron diffusion critical eigenvalue problem in PRONGHORN. PRONGHORN is an application of MOOSE that is capable of solving coupled neutron diffusion, heat conduction, and homogenized flow problems. The results were compared to a solution found on a 120-degree, reflected, three-dimensional VHTR mesh geometry generated by PRONGHORN. The ability to exchange compatible mesh geometries between the two codes is instrumental for future collaboration and interoperability. The results were found to be in good agreement between the two meshes, thus demonstrating the compatibility of the SHARP and MOOSE frameworks. This outcome makes future collaboration possible.
TESS: A RELATIVISTIC HYDRODYNAMICS CODE ON A MOVING VORONOI MESH
International Nuclear Information System (INIS)
Duffell, Paul C.; MacFadyen, Andrew I.
2011-01-01
We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multidimensional systems of conservation laws in finite-volume form. The mesh-generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed-mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical, and cylindrical coordinates and is modular, so that auxiliary physics solvers are readily integrated into the TESS framework and the code can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.
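The Eulerian/Lagrangian switch described above amounts to a choice of mesh-generator velocity. A minimal sketch of the idea (pure Python, with a nearest-generator cell lookup standing in for a true Voronoi tessellation; this is an illustration, not the TESS implementation):

```python
import math

def nearest_generator(x, y, generators):
    """Voronoi cell membership: index of the closest mesh-generating point."""
    return min(range(len(generators)),
               key=lambda i: math.hypot(x - generators[i][0],
                                        y - generators[i][1]))

def advect(generators, velocity, dt):
    """Move each mesh-generating point with a prescribed velocity field.
    velocity(x, y) -> (vx, vy). Zero velocity gives an Eulerian (fixed) mesh;
    the local fluid velocity gives an effectively Lagrangian mesh."""
    return [(x + dt * velocity(x, y)[0], y + dt * velocity(x, y)[1])
            for (x, y) in generators]

# Uniform flow to the right: in Lagrangian mode the cells ride with the fluid.
fluid = lambda x, y: (1.0, 0.0)
gens = [(0.2, 0.5), (0.8, 0.5)]
lagrangian = advect(gens, fluid, dt=0.3)                  # points move
eulerian = advect(gens, lambda x, y: (0.0, 0.0), dt=0.3)  # points fixed
```

A fluid parcel starting at (0.3, 0.5) ends at (0.6, 0.5): on the Lagrangian mesh it remains in cell 0, while on the fixed mesh it has crossed into cell 1; such crossings are what produce numerical diffusion at contact discontinuities on fixed meshes.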
Quinoa - Adaptive Computational Fluid Dynamics, 0.2
Energy Technology Data Exchange (ETDEWEB)
2017-09-22
Quinoa is a set of computational tools that enables research and numerical analysis in fluid dynamics. At this time it remains a test-bed to experiment with various algorithms using fully asynchronous runtime systems. Currently, Quinoa consists of the following tools: (1) Walker, a numerical integrator for systems of stochastic differential equations in time. It is a mathematical tool to analyze and design the behavior of stochastic differential equations. It allows the estimation of arbitrary coupled statistics and probability density functions and is currently used for the design of statistical moment approximations for multiple mixing materials in variable-density turbulence. (2) Inciter, an overdecomposition-aware finite element field solver for partial differential equations using 3D unstructured grids. Inciter is used to research asynchronous mesh-based algorithms and to experiment with coupling asynchronous to bulk-synchronous parallel code. Two planned new features of Inciter, compared to the previous release (LA-CC-16-015), to be implemented in 2017, are (a) a simple Navier-Stokes solver for ideal single-material compressible gases, and (b) solution-adaptive mesh refinement (AMR), which enables dynamically concentrating compute resources to regions with interesting physics. Using the NS-AMR problem we plan to explore how to scale such high-load-imbalance simulations, representative of large production multiphysics codes, to very large problems on very large computers using an asynchronous runtime system. (3) RNGTest, a test harness to subject random number generators to stringent statistical tests enabling quantitative ranking with respect to their quality and computational cost. (4) UnitTest, a unit test harness, running hundreds of tests per second, capable of testing serial, synchronous, and asynchronous functions. (5) MeshConv, a mesh file converter that can be used to convert 3D tetrahedron meshes from and to either of the following formats: Gmsh
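Walker's integration of systems of stochastic differential equations can be illustrated with the simplest such scheme, Euler-Maruyama; the Ornstein-Uhlenbeck drift and all names and parameters below are illustrative assumptions, not Walker's API:

```python
import random

def euler_maruyama(x0, drift, diffusion, dt, nsteps, rng):
    """Integrate dX = drift(X) dt + diffusion(X) dW by the Euler-Maruyama
    scheme, the simplest time integrator for stochastic differential equations."""
    x = x0
    for _ in range(nsteps):
        dw = rng.gauss(0.0, dt ** 0.5)        # Wiener increment ~ N(0, dt)
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Ornstein-Uhlenbeck process, mean-reverting toward mu (illustrative choice).
mu, kappa, sigma = 1.0, 2.0, 0.1
drift = lambda x: kappa * (mu - x)
x_final = euler_maruyama(x0=5.0, drift=drift, diffusion=lambda x: sigma,
                         dt=0.01, nsteps=2000, rng=random.Random(42))
```

Estimating coupled statistics and probability density functions, as Walker does, would amount to integrating an ensemble of such paths and accumulating moments over the ensemble.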
From medical images to flow computations without user-generated meshes.
Dillard, Seth I; Mousel, John A; Shrestha, Liza; Raghavan, Madhavan L; Vigmostad, Sarah C
2014-10-01
Biomedical flow computations in patient-specific geometries require integrating image acquisition and processing with fluid flow solvers. Typically, image-based modeling processes involve several steps, such as image segmentation, surface mesh generation, volumetric flow mesh generation, and finally, computational simulation. These steps are performed separately, often using separate pieces of software, and each step requires considerable expertise and investment of time on the part of the user. In this paper, an alternative framework is presented in which the entire image-based modeling process is performed on a Cartesian domain where the image is embedded within the domain as an implicit surface. Thus, the framework circumvents the need for generating surface meshes to fit complex geometries and subsequent creation of body-fitted flow meshes. Cartesian mesh pruning, local mesh refinement, and massive parallelization provide computational efficiency; the image-to-computation techniques adopted are chosen to be suitable for distributed memory architectures. The complete framework is demonstrated with flow calculations computed in two 3D image reconstructions of geometrically dissimilar intracranial aneurysms. The flow calculations are performed on multiprocessor computer architectures and are compared against calculations performed with a standard multistep route. Copyright © 2014 John Wiley & Sons, Ltd.
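The embedding of the image as an implicit surface on a Cartesian domain, with pruning of cells outside the geometry, can be sketched as follows (a toy level-set cell classifier; the function names and the corner-sampling rule are illustrative assumptions, not the authors' code):

```python
def classify_cells(nx, ny, h, phi):
    """Classify Cartesian cells against an implicit surface phi (phi < 0 inside).
    Cells entirely outside the geometry are pruned; kept cells feed the flow
    solver (cells cut by the interface would also receive local refinement)."""
    kept = []
    for i in range(nx):
        for j in range(ny):
            # sample the level set at the four cell corners (a coarse test:
            # a cell grazed by the surface between corners can be missed)
            corners = [phi(i * h, j * h), phi((i + 1) * h, j * h),
                       phi(i * h, (j + 1) * h), phi((i + 1) * h, (j + 1) * h)]
            if min(corners) < 0:        # at least partly inside: keep
                kept.append((i, j))
    return kept

# A circular "lumen" of radius 0.3 embedded implicitly in the unit square.
circle = lambda x, y: ((x - 0.5) ** 2 + (y - 0.5) ** 2) ** 0.5 - 0.3
cells = classify_cells(nx=10, ny=10, h=0.1, phi=circle)
```

Because the geometry enters only through phi, no surface or body-fitted volume mesh is ever constructed, which is the point of the framework.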
Refinery Efficiency Improvement
Energy Technology Data Exchange (ETDEWEB)
WRI
2002-05-15
Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 C, 650 F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than necessary, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which refiners can use for process control to heat residua at pyrolysis temperatures up to the threshold at which coke formation begins, but not beyond it (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke-formation induction time line. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials usually referred to as asphaltenes dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as
Dynamic grid refinement for partial differential equations on parallel computers
International Nuclear Information System (INIS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
Refining Radchem Detectors: Iridium
Arnold, C. W.; Bredeweg, T. A.; Vieira, D. J.; Bond, E. M.; Jandel, M.; Rusev, G.; Moody, W. A.; Ullmann, J. L.; Couture, A. J.; Mosby, S.; O'Donnell, J. M.; Haight, R. C.
2013-10-01
Accurate determination of neutron fluence is an important diagnostic of nuclear device performance, whether the device is a commercial reactor, a critical assembly or an explosive device. One important method for neutron fluence determination, generally referred to as dosimetry, is based on exploiting various threshold reactions of elements such as iridium. It is possible to infer details about the integrated neutron energy spectrum to which the dosimetry sample or ``radiochemical detector'' was exposed by measuring specific activation products post-irradiation. The ability of radchem detectors like iridium to give accurate neutron fluence measurements is limited by the precision of the cross sections in the production/destruction network (189Ir-193Ir). The Detector for Advanced Neutron Capture Experiments (DANCE) located at LANSCE is ideal for refining neutron capture cross sections of iridium isotopes. Recent results from a measurement of neutron capture on 193Ir are promising. Plans to measure other iridium isotopes are underway.
Balla, Andrea; Quaresima, Silvia; Smolarek, Sebastian; Shalaby, Mostafa; Missori, Giulia; Sileri, Pierpaolo
2017-04-01
This review reports the incidence of mesh-related erosion after ventral mesh rectopexy to determine whether any difference exists in the erosion rate between synthetic and biological mesh. A systematic search of the MEDLINE and the Ovid databases was conducted to identify suitable articles published between 2004 and 2015. The search strategy capture terms were laparoscopic ventral mesh rectopexy, laparoscopic anterior rectopexy, robotic ventral rectopexy, and robotic anterior rectopexy. Eight studies (3,956 patients) were included in this review. Of those patients, 3,517 patients underwent laparoscopic ventral rectopexy (LVR) using synthetic mesh and 439 using biological mesh. Sixty-six erosions were observed with synthetic mesh (26 rectal, 32 vaginal, 8 recto-vaginal fistulae) and one (perineal erosion) with biological mesh. The synthetic and the biological mesh-related erosion rates were 1.87% and 0.22%, respectively. The time between rectopexy and diagnosis of mesh erosion ranged from 1.7 to 124 months. No mesh-related mortalities were reported. The incidence of mesh-related erosion after LVR is low and is more common after the placement of synthetic mesh. The use of biological mesh for LVR seems to be a safer option; however, large, multicenter, randomized, control trials with long follow-ups are required if a definitive answer is to be obtained.
Towards automated crystallographic structure refinement with phenix.refine
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Echols, Nathaniel; Headd, Jeffrey J.; Moriarty, Nigel W.; Mustyakimov, Marat; Terwilliger, Thomas C.; Urzhumtsev, Alexandre; Zwart, Peter H.; Adams, Paul D.
2012-01-01
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods. PMID:22505256
Cell adhesion on NiTi thin film sputter-deposited meshes
Energy Technology Data Exchange (ETDEWEB)
Loger, K. [Inorganic Functional Materials, Institute for Materials Science, Faculty of Engineering, University of Kiel (Germany); Engel, A.; Haupt, J. [Department of Cardiovascular Surgery, University Hospital of Schleswig-Holstein, Kiel (Germany); Li, Q. [Biocompatible Nanomaterials, Institute for Materials Science, Faculty of Engineering, University of Kiel (Germany); Lima de Miranda, R. [Inorganic Functional Materials, Institute for Materials Science, Faculty of Engineering, University of Kiel (Germany); ACQUANDAS GmbH, Kiel (Germany); Quandt, E. [Inorganic Functional Materials, Institute for Materials Science, Faculty of Engineering, University of Kiel (Germany); Lutter, G. [Department of Cardiovascular Surgery, University Hospital of Schleswig-Holstein, Kiel (Germany); Selhuber-Unkel, C. [Biocompatible Nanomaterials, Institute for Materials Science, Faculty of Engineering, University of Kiel (Germany)
2016-02-01
Scaffolds for tissue engineering make it possible to fabricate and form biomedical implants in vitro which fulfill special functionality in vivo. In this study, free-standing Nickel–Titanium (NiTi) thin film meshes were produced by means of magnetron sputter deposition. Meshes contained precisely defined rhombic holes with sizes of 440 to 1309 μm² and a strut width ranging from 5.3 to 9.2 μm. The effective mechanical properties of the microstructured superelastic NiTi thin film were examined by tensile testing. These results will be used to adapt the design of the holes in the film. The influence of hole and strut dimensions on the adhesion of sheep autologous cells (CD133+) was studied after 24 h and after seven days of incubation. Optical analysis using fluorescence microscopy and scanning electron microscopy showed that cell adhesion depends on the structural parameters of the mesh. After seven days in cell culture a large part of the mesh was covered with aligned fibrous material. Cell adhesion is particularly facilitated on meshes with small rhombic holes of 440 μm² and a strut width of 5.3 μm. Our results demonstrate that free-standing NiTi thin film meshes have promising potential for applications in cardiovascular tissue engineering, particularly for the fabrication of heart valves. - Highlights: • Free-standing NiTi thin film scaffolds were fabricated with a magnetron sputtering process. • The effective mechanical properties of NiTi scaffolds can be adapted via the mesh structure parameters. • Cell adhesion on the NiTi thin film scaffold is controlled by the structure parameters of the mesh. • Cells strongly adhere after seven days and form a confluent layer on the mesh.
IFCPT S-Duct Grid-Adapted FUN3D Computations for the Third Propulsion Aerodynamics Workshop
Davis, Zach S.; Park, M. A.
2017-01-01
Contributions of the unstructured Reynolds-averaged Navier-Stokes code, FUN3D, to the 3rd AIAA Propulsion Aerodynamics Workshop are described for the diffusing IFCPT S-Duct. Using workshop-supplied grids, results for the baseline S-Duct, the baseline S-Duct with Aerodynamic Interface Plane (AIP) rake hardware, and the baseline S-Duct with flow control devices are compared with experimental data and with results computed with output-based, off-body grid adaptation in FUN3D. Due to the absence of influential geometry components, total pressure recovery is overpredicted on the baseline S-Duct and the S-Duct with flow control vanes when compared to experimental values. An estimate for the exact value of total pressure recovery on an infinitely refined mesh is derived for these cases. When results from output-based mesh adaptation are compared with those computed on workshop-supplied grids, a considerable improvement in predicting total pressure recovery is observed. By including more representative geometry, output-based mesh adaptation compares very favorably with experimental data in predicting the total pressure recovery cost function, whereas results computed on the workshop-supplied grids underpredict it.
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Energy Technology Data Exchange (ETDEWEB)
Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron
1998-12-08
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Schmalzl, JöRg; Loddoch, Alexander
2003-09-01
We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the length of the triangle edges and the sum of the angles the triangle forms with neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady-state flow, in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small-scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.
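A refinement weight of the kind described (edge length combined with the angles formed with neighboring triangles) might look like this in outline; the paper's exact combination is not reproduced, so the product form below is an assumption:

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
def _norm(a): return math.sqrt(sum(x * x for x in a))

def normal(tri, verts):
    """Unit normal of a triangle given as a tuple of vertex indices."""
    a, b, c = (verts[i] for i in tri)
    n = _cross(_sub(b, a), _sub(c, a))
    l = _norm(n)
    return tuple(x / l for x in n)

def refinement_weight(tri, neighbors, verts):
    """Refinement indicator ~ (longest edge) * (sum of bending angles with
    neighboring triangles): ~0 in flat regions, large for long triangles
    spanning strongly curved parts of the tracked front."""
    a, b, c = (verts[i] for i in tri)
    longest = max(_norm(_sub(a, b)), _norm(_sub(b, c)), _norm(_sub(c, a)))
    n0 = normal(tri, verts)
    bend = sum(math.acos(max(-1.0, min(1.0,
               sum(x * y for x, y in zip(n0, normal(t, verts))))))
               for t in neighbors)
    return longest * bend
```

A triangle would be flagged for subdivision when its weight exceeds a threshold, with the quadric-error simplification pass removing triangles whose weight stays well below it.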
Comparing Refinements for Failure and Bisimulation Semantics
Eshuis, H.; Fokkinga, M.M.
2002-01-01
Refinement in bisimulation semantics is defined differently from refinement in failure semantics: in bisimulation semantics refinement is based on simulations between labelled transition systems, whereas in failure semantics refinement is based on inclusions between failure systems. There exist
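The simulation-based notion of refinement can be made concrete with a greatest-fixpoint simulation check on finite labelled transition systems (a generic textbook construction, not the paper's formal development):

```python
def simulates(lts, q, p):
    """True if state q simulates state p in the labelled transition system lts
    (state -> list of (action, successor)): every move of p is matched by q."""
    states = list(lts)
    # greatest fixpoint: start from the full relation, strip violating pairs
    rel = {(a, b) for a in states for b in states}
    changed = True
    while changed:
        changed = False
        for (x, y) in list(rel):           # does y still simulate x?
            for (act, x2) in lts[x]:
                if not any(act == b_act and (x2, y2) in rel
                           for (b_act, y2) in lts[y]):
                    rel.discard((x, y))
                    changed = True
                    break
    return (p, q) in rel

# a.(b + c) versus a.b + a.c: trace-equivalent, but only the first
# simulates the second -- the distinction simulation-based refinement sees.
lts = {'p0': [('a', 'p1')], 'p1': [('b', 'pb'), ('c', 'pc')], 'pb': [], 'pc': [],
       'q0': [('a', 'q1'), ('a', 'q2')], 'q1': [('b', 'qb')],
       'q2': [('c', 'qc')], 'qb': [], 'qc': []}
```

Failure-semantics refinement would instead compare the sets of (trace, refusal-set) pairs of the two systems for inclusion.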
Multivariate refined composite multiscale entropy analysis
Energy Technology Data Exchange (ETDEWEB)
Humeau-Heurtier, Anne, E-mail: anne.humeau@univ-angers.fr
2016-04-01
Multiscale entropy (MSE) has become a prevailing method to quantify the complexity of signals. MSE relies on sample entropy. However, MSE may yield imprecise complexity estimates at large scales, because sample entropy does not give precise estimates of entropy for short signals. A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. Nevertheless, RCMSE applies to univariate signals only. The simultaneous analysis of multi-channel (multivariate) data often outperforms analyses based on univariate signals. We therefore introduce an extension of RCMSE to multivariate data. Applications of multivariate RCMSE to simulated processes reveal its better performance over the standard multivariate MSE. - Highlights: • Multiscale entropy quantifies data complexity but may be inaccurate at large scales. • A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. • Nevertheless, RCMSE is adapted to univariate time series only. • We herein introduce an extension of RCMSE to multivariate data. • It shows better performance than the standard multivariate multiscale entropy.
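The coarse-graining and composite averaging that RCMSE builds on can be sketched for the univariate case; the multivariate extension replaces the scalar template match with a multichannel one (not shown). The parameter defaults below (m = 2, r = 0.15) are common choices, not taken from the paper:

```python
import math

def coarse_grain(x, scale, offset=0):
    """Non-overlapping window means starting at `offset`
    (RCMSE uses all offsets 0..scale-1, plain MSE only offset 0)."""
    return [sum(x[i:i + scale]) / scale
            for i in range(offset, len(x) - scale + 1, scale)]

def _match_counts(y, m, r):
    """Counts of template matches of length m+1 and m (Chebyshev distance <= r)."""
    n = len(y)
    a = b = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(y[i + k] - y[j + k]) for k in range(m)) <= r:
                b += 1
                if abs(y[i + m] - y[j + m]) <= r:
                    a += 1
    return a, b

def rcmse(x, scale, m=2, r=0.15):
    """Refined composite multiscale entropy at one scale: average the match
    counts over all coarse-graining offsets *before* taking the logarithm,
    which is what stabilizes the estimate at large scales."""
    a_tot = b_tot = 0
    for off in range(scale):
        a, b = _match_counts(coarse_grain(x, scale, off), m, r)
        a_tot += a
        b_tot += b
    return -math.log(a_tot / b_tot) if a_tot and b_tot else float('inf')
```

A perfectly regular signal yields zero entropy; averaging the counts across offsets keeps the estimate defined even when a single coarse-grained series is too short for reliable matching.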
Directory of Open Access Journals (Sweden)
Jennings Jason
2010-01-01
Laparoscopic inguinal herniorrhaphy via a transabdominal preperitoneal (TAPP) approach using polypropylene mesh (Mesh) and staples is an accepted technique. Mesh induces a localised inflammatory response that may extend to, and involve, adjacent abdominal and pelvic viscera such as the appendix. We present an interesting case of suspected Mesh-induced appendicitis treated successfully with laparoscopic appendicectomy, without Mesh removal, in an elderly gentleman who presented with symptoms and signs of acute appendicitis 18 months after laparoscopic inguinal hernia repair. Possible mechanisms for Mesh-induced appendicitis are briefly discussed.
Mesh Plug Repair of Inguinal Hernia; Single Surgeon Experience
Directory of Open Access Journals (Sweden)
Ahmet Serdar Karaca
2013-10-01
Aim: Mesh repair of inguinal hernia has been shown to be an effective and reliable method. In this study, a single surgeon's experience with the plug-mesh method of inguinal hernia repair is reported. Material and Method: For 587 patients with plug-mesh repair of inguinal hernia, preoperative age, body mass index, and comorbid disease were recorded on a standard form. For all patients, preoperative and postoperative hernia classification, duration of operation, antibiotic use, perioperative complications, early and late postoperative complications, infection, recurrence rates, time to return to normal daily activity, and postoperative pain on a verbal pain scale were evaluated; long-term pain was also recorded on this form. The presence of wound infection was assessed by the presence of purulent discharge from the incision. Pain status of the patients was measured on a visual analog scale. Results: 587 patients underwent plug-mesh repair of primary inguinal hernia. Of these, 439 (74%) attended follow-up. Patients' ages ranged from 18 to 86, with a mean of 47±18.07. The follow-up period was a minimum of 3 months and a maximum of 55 months, with a mean of 28.2±13.4 months. Mean duration of surgery was 35.07±4.00 min (min: 22 min, max: 52 min). Complications comprised recurrence in 2 patients (0.5%), hematoma in 6 patients (1.4%), infection in 11 patients (2.5%), and long-term groin pain in 4 patients (0.9%). Discussion: In our experience, plug-mesh repair of primary inguinal hernia is safe and effective and can be used with low recurrence and complication rates.
AbouEisha, Hassan M.
2016-06-02
In this paper we present a multi-criteria optimization of element partition trees and resulting orderings for multi-frontal solver algorithms executed for the two-dimensional h-adaptive finite element method. In particular, the problem of optimal ordering of elimination of rows in the sparse matrices resulting from adaptive finite element method computations is reduced to the problem of finding optimal element partition trees. Given a two-dimensional h-refined mesh, we find all optimal element partition trees by using the dynamic programming approach. An element partition tree defines a prescribed order of elimination of degrees of freedom over the mesh. We utilize three different metrics to estimate the quality of an element partition tree. As the first criterion we consider the number of floating point operations (FLOPs) performed by the multi-frontal solver. As the second criterion we consider the number of memory transfers (MEMOPS) performed by the multi-frontal solver algorithm. As the third criterion we consider the memory usage (NONZEROS) of the multi-frontal direct solver. We show the optimization results for FLOPs vs MEMOPS as well as for the execution time, estimated as FLOPs+100MEMOPS, vs NONZEROS. We obtain Pareto fronts with multiple optimal trees for each mesh and for each refinement level. We generate a library of optimal elimination trees for small grids with local singularities. We also propose an algorithm for a given large mesh with identified local sub-grids, each with a local singularity: we compute Schur complements over the sub-grids using the optimal trees from the library, and we submit the sequence of Schur complements to the iterative solver ILUPCG.
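The dynamic-programming search over element partition trees can be illustrated on a one-dimensional caricature, with a hypothetical cubic merge cost standing in for the multifrontal FLOP estimate:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_tree(i, j):
    """Minimum-cost element partition tree over contiguous elements i..j-1
    (a 1D caricature of the 2D problem). Hypothetical cost model: eliminating
    the front formed by merging k elements costs k**3 operations.
    Returns (cost, tree), with the tree as nested (i, j) ranges."""
    if j - i == 1:
        return 0, (i, j)           # a leaf: a single element, nothing to merge
    best = None
    for k in range(i + 1, j):      # try every split into two subtrees
        cl, tl = best_tree(i, k)
        cr, tr = best_tree(k, j)
        cost = cl + cr + (j - i) ** 3   # merge-and-eliminate the combined front
        if best is None or cost < best[0]:
            best = (cost, (tl, tr))
    return best
```

On four elements the DP picks the balanced tree (cost 80) over the skewed ones (cost 99), the same preference for balanced partition trees that drives real multifrontal FLOP counts down.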
Commercial refining in the Mediterranean
International Nuclear Information System (INIS)
Packer, P.
1999-01-01
About 9% of the world's oil refining capacity is on the Mediterranean: some of the world's biggest and most advanced refineries are on Sicily and Sardinia. The Mediterranean refineries are important suppliers to southern Europe and N. Africa. The article discusses commercial refining in the Mediterranean under the headings of (i) historic development, (ii) product demand, (iii) refinery configurations, (iv) refined product trade, (v) financial performance and (vi) future outlook. Although some difficulties are foreseen, refining in the Mediterranean is likely to continue to be important well into the 21st century. (UK)
INGEN: a general-purpose mesh generator for finite element codes
International Nuclear Information System (INIS)
Cook, W.A.
1979-05-01
INGEN is a general-purpose mesh generator for two- and three-dimensional finite element codes. The basic parts of the code are surface and three-dimensional region generators that use linear-blending interpolation formulas. These generators are based on an i, j, k index scheme that is used to number nodal points, construct elements, and develop displacement and traction boundary conditions. The code can generate truss elements (2 nodal points); plane stress, plane strain, and axisymmetric two-dimensional continuum elements (4 to 8 nodal points); plate elements (4 to 8 nodal points); and three-dimensional continuum elements (8 to 21 nodal points). The traction loads generated are consistent with the elements generated. The expansion-contraction option is of special interest: it makes it possible to change an existing mesh such that some regions are refined and others are made coarser than the original mesh.
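Linear-blending interpolation of the kind referred to above is transfinite (Coons-patch) interpolation; below is a sketch of generating an i, j grid of nodes from four boundary curves using the standard bilinearly blended Coons patch, not INGEN's exact formulas:

```python
def coons_patch(bottom, top, left, right, ni, nj):
    """Generate an (ni+1) x (nj+1) grid of nodes, indexed (i, j), filling the
    region bounded by four parametric curves t in [0, 1] -> (x, y), via the
    bilinearly blended Coons patch (a linear-blending interpolation formula)."""
    c00, c10, c01, c11 = bottom(0.0), bottom(1.0), top(0.0), top(1.0)
    nodes = {}
    for i in range(ni + 1):
        u = i / ni
        for j in range(nj + 1):
            v = j / nj
            pt = []
            for d in range(2):   # x and y components
                ruled_v = (1 - v) * bottom(u)[d] + v * top(u)[d]
                ruled_u = (1 - u) * left(v)[d] + u * right(v)[d]
                bilin = ((1 - u) * (1 - v) * c00[d] + u * (1 - v) * c10[d]
                         + (1 - u) * v * c01[d] + u * v * c11[d])
                pt.append(ruled_v + ruled_u - bilin)  # blend minus corner term
            nodes[(i, j)] = tuple(pt)
    return nodes

# Unit square with straight edges: the patch reduces to a uniform i, j grid.
sq = coons_patch(bottom=lambda t: (t, 0.0), top=lambda t: (t, 1.0),
                 left=lambda t: (0.0, t), right=lambda t: (1.0, t), ni=4, nj=4)
```

Curved boundary curves produce a body-fitted interior grid automatically, which is what lets an i, j, k index scheme number nodes and build elements directly.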
Bluetooth Low Energy Mesh Networks: A Survey.
Darroudi, Seyed Mahdi; Gomez, Carles
2017-06-22
Bluetooth Low Energy (BLE) has gained significant momentum. However, the original design of BLE focused on star topology networking, which limits network coverage range and precludes end-to-end path diversity. In contrast, other competing technologies overcome such constraints by supporting the mesh network topology. For these reasons, academia, industry, and standards development organizations have been designing solutions to enable BLE mesh networks. Nevertheless, the literature lacks a consolidated view of this emerging area. This paper comprehensively surveys the state of the art in BLE mesh networking. We first provide a taxonomy of BLE mesh network solutions. We then review the solutions, describing the variety of approaches that leverage existing BLE functionality to enable BLE mesh networks. We identify crucial aspects of BLE mesh network solutions and discuss their advantages and drawbacks. Finally, we highlight currently open issues.
Unstructured Polyhedral Mesh Thermal Radiation Diffusion
International Nuclear Information System (INIS)
Palmer, T.S.; Zika, M.R.; Madsen, N.K.
2000-01-01
Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization, and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.
Dynamically adaptive data-driven simulation of extreme hydrological flows
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
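The ensemble-based assimilation step coupled to AMR above can be illustrated with a toy sketch. This is not the authors' code: it assumes a scalar state observed directly (H = 1) and uses a stochastic ensemble Kalman filter update with recentred observation perturbations, so the ensemble-mean update is exactly the Kalman update:

```python
import random

def enkf_update(ensemble, obs, obs_var):
    """Stochastic EnKF update for a scalar state observed directly (H = 1).
    Observation perturbations are recentred so the ensemble-mean update is
    exactly the Kalman update x + K (obs - x)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)              # K = P / (P + R)
    perts = [random.gauss(0.0, obs_var ** 0.5) for _ in range(n)]
    pbar = sum(perts) / n
    return [x + gain * (obs + p - pbar - x) for x, p in zip(ensemble, perts)]

random.seed(0)
posterior = enkf_update([1.0, 2.0, 3.0, 4.0, 5.0], 10.0, 2.5)
# Prior mean 3.0, prior var 2.5, R = 2.5 -> gain 0.5, posterior mean 6.5.
```

In the paper's setting each ensemble member is a full AMR forward simulation rather than a scalar, but the gain computation and member-wise update follow the same pattern.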
Meshes optimized for discrete exterior calculus (DEC).
Energy Technology Data Exchange (ETDEWEB)
Mousley, Sarah C. [Univ. of Illinois, Urbana-Champaign, IL (United States); Deakin, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knupp, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-12-01
We study the optimization of an energy function used by the meshing community to measure and improve mesh quality. This energy is non-traditional because it is dependent on both the primal triangulation and its dual Voronoi (power) diagram. The energy is a measure of the mesh's quality for usage in Discrete Exterior Calculus (DEC), a method for numerically solving PDEs. In DEC, the PDE domain is triangulated and this mesh is used to obtain discrete approximations of the continuous operators in the PDE. The energy of a mesh gives an upper bound on the error of the discrete diagonal approximation of the Hodge star operator. In practice, one begins with an initial mesh and then makes adjustments to produce a mesh of lower energy. However, we have discovered several shortcomings in directly optimizing this energy, e.g. its non-convexity, and we show that the search for an optimized mesh may lead to mesh inversion (malformed triangles). We propose a new energy function to address some of these issues.
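The primal/dual length ratio at the heart of the diagonal Hodge star can be sketched for a single triangle. This is an illustrative toy (not the authors' energy function): for a boundary edge of one triangle, the dual edge runs from the circumcenter to the edge midpoint, and the Hodge-star entry is the ratio of dual to primal length:

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle (a, b, c) in 2-D."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

def hodge1_weight(tri, i, j):
    """Diagonal Hodge-star entry |*e| / |e| for edge (i, j) of one triangle:
    dual length (circumcenter to edge midpoint) over primal edge length."""
    cc = circumcenter(*tri)
    mid = ((tri[i][0] + tri[j][0]) / 2.0, (tri[i][1] + tri[j][1]) / 2.0)
    dual = math.hypot(cc[0] - mid[0], cc[1] - mid[1])
    primal = math.hypot(tri[i][0] - tri[j][0], tri[i][1] - tri[j][1])
    return dual / primal

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))  # right triangle, circumcenter (0.5, 0.5)
w = hodge1_weight(tri, 0, 1)                # dual 0.5 over primal 1.0
```

Note the degenerate case the abstract's energy penalizes: for this right triangle the circumcenter lies on the hypotenuse, so that edge's dual length (and Hodge entry) collapses to zero.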
Directory of Open Access Journals (Sweden)
E. Pavarino
2013-01-01
The Finite Element Method (FEM) is a well-known technique that is extensively applied in different areas. Studies using FEM are targeted at improving cardiac ablation procedures. For such simulations, the finite element meshes should take into account the size and histological features of the target structures. However, some methods and tools used to generate meshes of human body structures are still limited, owing to non-detailed models, nontrivial preprocessing, or restrictive conditions of use. In this paper, alternatives are demonstrated for solid modeling and automatic generation of highly refined tetrahedral meshes, with quality compatible with other studies focused on mesh generation. The innovations presented here are strategies to integrate Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, considering cardiac structures as a first application context.
Biologic mesh versus synthetic mesh in open inguinal hernia repair: systematic review and meta-analysis.
Fang, Zhixue; Ren, Feng; Zhou, Jianping; Tian, Jiao
2015-12-01
Biologic meshes are mostly used for abdominal wall reinforcement in infected fields, but no consensus has been reached on their use in inguinal hernia repair. The purpose of this study was to compare biologic mesh with synthetic mesh in open inguinal herniorrhaphy. A systematic literature review and meta-analysis was undertaken to identify studies comparing the outcomes of biologic mesh and synthetic mesh in open inguinal hernia repair. Published studies were identified in the PubMed, EMBASE and Cochrane Library databases. A total of 382 patients in five randomized controlled trials were reviewed (179 patients in the biologic mesh group; 203 patients in the synthetic mesh group). The two groups did not significantly differ in chronic groin pain (P = 0.06) or recurrence (P = 0.38). The incidence of seroma was higher in the biologic mesh group (P = 0.03), and operating time was significantly longer with biologic mesh (P = 0.03). There was no significant difference in hematomas (P = 0.23) between the two groups. From these data, biologic mesh showed no superiority over synthetic mesh in open inguinal hernia repair, with similar recurrence rates and incidence of chronic groin pain, but a higher rate of seroma and longer operating time. However, biologic mesh still needs to be assessed in a large, multicentre, well-designed randomized controlled trial. © 2015 Royal Australasian College of Surgeons.
Tensile Behaviour of Welded Wire Mesh and Hexagonal Metal Mesh for Ferrocement Application
Tanawade, A. G.; Modhera, C. D.
2017-08-01
Tension tests were conducted on welded wire mesh and hexagonal metal mesh. Welded mesh is available in the market in different sizes; two types were analysed, Ø 2.3 mm and Ø 2.7 mm, with opening sizes of 31.75 mm × 31.75 mm and 25.4 mm × 25.4 mm respectively. The tensile strength test was performed on samples of welded mesh in three orientations, 0°, 30° and 45° to the loading axis, and on hexagonal metal mesh of Ø 0.7 mm with a 19.05 mm × 19.05 mm opening. The objective of this study was to investigate the behaviour of the welded mesh and the hexagonal metal mesh. The results show that the tension load-carrying capacity of the Ø 2.7 mm welded mesh in the 0° orientation is better than that of the Ø 2.3 mm mesh, and that the hexagonal metal mesh exhibits good ductility.
International Nuclear Information System (INIS)
Miniati, Francesco
2014-01-01
We study the statistical properties of turbulence driven by structure formation in a massive merging galaxy cluster at redshift z = 0. The development of turbulence is ensured because the largest eddy turnover time is much shorter than the Hubble time, independent of mass and redshift. We achieve a large dynamic range of spatial scales through a novel Eulerian refinement strategy in which the cluster volume is refined with progressively finer uniform nested grids during gravitational collapse. This provides an unprecedented resolution of 7.3 h^-1 kpc across the virial volume. The probability density functions of various velocity-derived quantities exhibit the features characteristic of fully developed compressible turbulence observed in dedicated periodic-box simulations. Shocks generate only 60% of the total vorticity within the R_vir/3 region and 40% beyond it. We compute second- and third-order longitudinal and transverse structure functions for both solenoidal and compressional components in the cluster core, the virial region, and beyond. The structure functions exhibit a well-defined inertial range of turbulent cascade. The injection scale is comparable to the virial radius but increases toward the outskirts. Within R_vir/3, the spectral slope of the solenoidal component is close to Kolmogorov's, while that of the compressional component is substantially steeper and close to Burgers's; the flow is mostly solenoidal and statistically consistent with fully developed homogeneous and isotropic turbulence. Small-scale anisotropy appears to be a numerical artifact. Toward the virial region, the flow becomes increasingly compressional, the structure functions become flatter, and modest genuine anisotropy appears, particularly close to the injection scale. In comparison, mesh adaptivity based on Lagrangian refinement with the same finest resolution leads to a lack of turbulent power on small scales, an excess thereof on large scales, and unreliable
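The structure functions computed above can be sketched in their simplest form. This toy (not the paper's analysis pipeline) evaluates the second-order longitudinal structure function for a 1-D velocity sample on a uniform grid:

```python
def longitudinal_s2(u, lag):
    """Second-order longitudinal structure function S2(r) = <(du_r)^2> for a
    1-D velocity sample u on a uniform grid; the lag r is in grid cells."""
    diffs = [(u[i + lag] - u[i]) ** 2 for i in range(len(u) - lag)]
    return sum(diffs) / len(diffs)

# For a uniform shear u(x) = 0.1 x the increments are exactly 0.1 * lag, so
# S2 grows as lag^2; a turbulent field would instead show inertial-range
# scaling (S2 ~ r^{2/3} for Kolmogorov, steeper for Burgers-like flows).
u = [0.1 * i for i in range(100)]
```

In the paper the same increment statistics are taken over 3-D separations, projected onto longitudinal and transverse directions, and split into solenoidal and compressional velocity components.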
Energy Technology Data Exchange (ETDEWEB)
Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)
2015-07-14
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, such as tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project
Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.
2007-12-01
Although adaptive finite-element (AFE) analysis is attracting increasing attention in scientific and engineering fields, its efficient implementation remains an open problem because of its complex procedures. In this paper, we propose a clear C++ framework implementation to show the power of object-oriented programming (OOP) in designing such complex adaptive procedures. Using the modular facilities of an OOP language, the whole adaptive system is divided into separate parts such as mesh generation and refinement, the a posteriori error estimator, the adaptive strategy, and the final post-processing. After these modules are designed individually, they are connected into a complete adaptive framework. Based on the general elliptic differential equation, little additional effort is needed to apply the framework to practical simulations. To demonstrate the benefits of OOP design, two numerical examples are tested. The first is a 3D direct-current resistivity problem, in which the power of the framework is shown by how few additions are required. In the second, an induced polarization (IP) exploration case, a new adaptive procedure is easily added, demonstrating the strong extensibility and reuse afforded by OOP. We believe that, based on this modular OOP framework, more advanced adaptive analysis systems will become available in the future.
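The modular decomposition described above (estimator, refinement strategy, driver as separately designed, swappable parts) can be sketched in a few classes. This is a hedged illustration in Python rather than the authors' C++ framework, with a trivial 1-D interval "mesh" and a gradient-based indicator standing in for a real a posteriori estimator:

```python
class GradientEstimator:
    """A-posteriori error indicator: variation of f over a cell (illustrative)."""
    def __init__(self, f):
        self.f = f
    def error(self, a, b):
        return abs(self.f(b) - self.f(a))

class Bisector:
    """Refinement strategy: bisect every marked cell."""
    def refine(self, cells, marked):
        out = []
        for (a, b), m in zip(cells, marked):
            out += [(a, (a + b) / 2), ((a + b) / 2, b)] if m else [(a, b)]
        return out

class AdaptiveDriver:
    """Connects the separately designed modules into one adaptive loop."""
    def __init__(self, estimator, refiner, tol):
        self.estimator, self.refiner, self.tol = estimator, refiner, tol
    def run(self, cells):
        while True:
            marked = [self.estimator.error(a, b) > self.tol for a, b in cells]
            if not any(marked):
                return cells
            cells = self.refiner.refine(cells, marked)

driver = AdaptiveDriver(GradientEstimator(lambda x: x * x), Bisector(), 0.26)
cells = driver.run([(0.0, 1.0)])   # refinement concentrates near x = 1
```

Swapping in a different estimator or refinement strategy requires touching only that one class, which is the extensibility argument the abstract makes for OOP design.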
SIMULATION OF PULSED BREAKDOWN IN HELIUM BY ADAPTIVE METHODS
Directory of Open Access Journals (Sweden)
S. I. Eliseev
2014-09-01
The paper deals with the processes occurring during electrical breakdown in gases and with numerical simulation of these processes using adaptive mesh refinement methods. A discharge between needle electrodes in helium at atmospheric pressure is selected as the test simulation. The physical model of the breakdown processes is based on a self-consistent system of continuity equations for the charged-particle fluxes (electrons and positive ions) and the Poisson equation for the electric potential. The sharp plasma inhomogeneity in the streamer region requires adaptive algorithms for constructing the computational grids. The adaptive grid-construction method is described, together with a justification of its effectiveness for simulating strongly unsteady gas breakdown at atmospheric pressure. An upgraded version of the Gerris package is used for the numerical simulation of electrical gas breakdown. The package, originally aimed at nonlinear problems in fluid dynamics, proves suitable for modeling processes in non-stationary plasma described by continuity equations. The use of adaptive grids makes it possible to obtain an adequate numerical model of breakdown development in the needle-electrode system. The breakdown dynamics is illustrated by contour plots of electron density and electric field intensity obtained in the course of the solution. The formation mechanism of positive and negative (anode-directed) streamers is demonstrated and analyzed. The correspondence between adaptive construction of the computational grid and the generated plasma gradients is shown. The results can serve as a basis for full-scale numerical experiments on electric breakdown in gases.
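The Poisson equation for the electric potential that closes the model above can be sketched in one dimension. This is an illustrative toy in normalized units (not the Gerris discretization): a finite-difference Poisson solve with zero-potential boundaries via the Thomas tridiagonal algorithm:

```python
def solve_poisson_1d(rho, h):
    """Solve phi'' = -rho on a uniform grid (normalized units, phi = 0 at both
    ends) with the Thomas tridiagonal algorithm; rho holds interior nodes."""
    n = len(rho)
    b = [-2.0] * n                     # main diagonal of the 1-D Laplacian
    d = [-r * h * h for r in rho]      # right-hand side
    for i in range(1, n):              # forward elimination (off-diagonals = 1)
        w = 1.0 / b[i - 1]
        b[i] -= w
        d[i] -= w * d[i - 1]
    phi = [0.0] * n
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        phi[i] = (d[i] - phi[i + 1]) / b[i]
    return phi

# Constant charge density rho = 2 on (0, 1): exact solution phi(x) = x (1 - x).
n, h = 99, 0.01
phi = solve_poisson_1d([2.0] * n, h)
```

In the streamer simulation the same elliptic solve is performed in 2-D/3-D on the adaptively refined grid at every time step, coupled to the charged-particle continuity equations.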
Converting skeletal structures to quad dominant meshes
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Misztal, Marek Krzysztof; Welnicka, Katarzyna
2012-01-01
We propose the Skeleton to Quad-dominant polygonal Mesh algorithm (SQM), which converts skeletal structures to meshes composed entirely of polar and annular regions. Both types of regions have a regular structure where all faces are quads except for a single ring of triangles at the center of eac...
Parallel mesh management using interoperable tools.
Energy Technology Data Exchange (ETDEWEB)
Tautges, Timothy James (Argonne National Laboratory); Devine, Karen Dragon
2010-10-01
This presentation included a discussion of challenges arising in parallel mesh management, as well as demonstrated solutions. It also described the broad range of software for mesh management and modification developed by the Interoperable Technologies for Advanced Petascale Simulations (ITAPS) team, and highlighted applications successfully using the ITAPS tool suite.
7th International Meshing Roundtable '98
Energy Technology Data Exchange (ETDEWEB)
Eldred, T.J.
1998-10-01
The goal of the 7th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries.
Laparoscopic Pelvic Floor Repair Using Polypropylene Mesh
Directory of Open Access Journals (Sweden)
Shih-Shien Weng
2008-09-01
Conclusion: Laparoscopic pelvic floor repair using a single piece of polypropylene mesh combined with uterosacral ligament suspension appears to be a feasible procedure for the treatment of advanced vaginal vault prolapse and enterocele. Fewer mesh erosions and postoperative pain syndromes were seen in patients who had no previous pelvic floor reconstructive surgery.
A Comparative Study of Navigation Meshes
van Toll, W.G.; Triesscheijn, Roy; Kallmann, Marcelo; Oliva, Ramon; Pelechano, Nuria; Pettré, Julien; Geraerts, R.J.
2016-01-01
A navigation mesh is a representation of a 2D or 3D virtual environment that enables path planning and crowd simulation for walking characters. Various state-of-the-art navigation meshes exist, but there is no standardized way of evaluating or comparing them. Each implementation is in a different
Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.
Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa
2013-01-01
Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good quality geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.
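The morphing step above (shifting exterior source nodes onto the target surface) can be caricatured with a nearest-vertex projection. This is a deliberately simplified stand-in, not the paper's normal-constrained, landmark-based method:

```python
def morph_to_target(source_nodes, target_vertices):
    """Shift each source surface node to its nearest target-surface vertex; a
    simplified stand-in for the paper's normal-constrained landmark projection."""
    moved = []
    for p in source_nodes:
        best = min(target_vertices,
                   key=lambda q: sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
        moved.append(best)
    return moved

target = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = morph_to_target([(0.1, -0.05, 0.0), (0.9, 0.2, 0.1)], target)
```

The paper's approach additionally constrains each node's motion along its surface normal, which preserves interior element quality far better than free nearest-point snapping.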
Accurate, finite-volume methods for 3D MHD on unstructured Lagrangian meshes
International Nuclear Information System (INIS)
Barnes, D.C.; Rousculp, C.L.
1998-10-01
Previous 2D methods for magnetohydrodynamics (MHD) have contributed both to development of core code capability and to physics applications relevant to AGEX pulsed-power experiments. This strategy is being extended to 3D by development of a modular extension of an ASCI code. Extension to 3D not only increases complexity by problem size, but also introduces new physics, such as magnetic helicity transport. The authors have developed a method which incorporates all known conservation properties into the difference scheme on a Lagrangian unstructured mesh. Because the method does not depend on the mesh structure, mesh refinement is possible during a calculation to prevent the well-known problem of mesh tangling. Arbitrary polyhedral cells are decomposed into tetrahedrons. The action of the magnetic vector potential, A · δl, is centered on the edges of this extended mesh. For ideal flow, this maintains ∇ · B = 0 to round-off error. Vertex forces are derived by the variation of magnetic energy with respect to vertex positions, F = -∂W_B/∂r. This assures symmetry as well as magnetic flux, momentum, and energy conservation. The method is local, so that parallelization by domain decomposition is natural for large meshes. In addition, a simple, ideal-gas, finite pressure term has been included. The resistive diffusion part is calculated using the support operator method to obtain an energy-conservative, symmetric method on an arbitrary mesh. Implicit time difference equations are solved by preconditioned conjugate gradient methods. Results of convergence tests are presented. Initial results of an annular Z-pinch implosion problem illustrate the application of these methods to multi-material problems.
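The reason edge-centered A·δl maintains ∇ · B = 0 exactly can be demonstrated on a single tetrahedron. In this hedged sketch (illustrative, not the ASCI code), each face flux is the signed sum of the edge circulations around its loop (discrete Stokes theorem); because every edge is shared by two faces with opposite orientation, the net outward flux cancels identically:

```python
import random

# Faces of tetrahedron {0,1,2,3} as vertex loops with consistent outward
# orientation; every edge is traversed once in each direction overall.
FACES = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (0, 3, 2)]

def face_fluxes(edge_circ):
    """Magnetic flux through each face as the signed sum of edge circulations
    A . dl around the face loop (a discrete Stokes theorem)."""
    fluxes = []
    for loop in FACES:
        phi = 0.0
        for k in range(3):
            i, j = loop[k], loop[(k + 1) % 3]
            sign = 1.0 if i < j else -1.0    # canonical edge direction: low -> high
            phi += sign * edge_circ[tuple(sorted((i, j)))]
        fluxes.append(phi)
    return fluxes

random.seed(1)
circ = {e: random.uniform(-1.0, 1.0)
        for e in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]}
# Each edge appears in two faces with opposite sign, so the net outward flux
# (the discrete div B over the cell) cancels to round-off, whatever A is.
net = sum(face_fluxes(circ))
```

This cancellation is purely topological, which is why the property survives arbitrary mesh motion and refinement in the Lagrangian scheme.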
Physical refining of sunflower oil
Directory of Open Access Journals (Sweden)
Kovari Katalin
2000-07-01
Physical refining has several advantages compared with classical chemical refining. The process is more economical (improved yield, lower investment cost, fewer chemicals used) and environmentally friendly (no soapstock to be treated or split), but it is more sensitive to crude oil quality. Physical refining of sunflower oil is discussed in detail. Recent developments in processes, equipment and control have made it possible to physically refine seed oils with high phosphatide content as well. Special degumming processes, improved performance of bleaching materials and better design of deodorizers are applied in new installations; huge-capacity single-line physical refineries are operated successfully in different countries.
An immersed interface vortex particle-mesh solver
Marichal, Yves; Chatelain, Philippe; Winckelmans, Gregoire
2014-11-01
An immersed interface-enabled vortex particle-mesh (VPM) solver is presented for the simulation of 2-D incompressible viscous flows, in the framework of external aerodynamics. Considering the simulation of free vortical flows, such as wakes and jets, vortex particle-mesh methods already provide a valuable alternative to standard CFD methods, thanks to the interesting numerical properties arising from its Lagrangian nature. Yet, accounting for solid bodies remains challenging, despite the extensive research efforts that have been made for several decades. The present immersed interface approach aims at improving the consistency and the accuracy of one very common technique (based on Lighthill's model) for the enforcement of the no-slip condition at the wall in vortex methods. Targeting a sharp treatment of the wall calls for substantial modifications at all computational levels of the VPM solver. More specifically, the solution of the underlying Poisson equation, the computation of the diffusion term and the particle-mesh interpolation are adapted accordingly and the spatial accuracy is assessed. The immersed interface VPM solver is subsequently validated on the simulation of some challenging impulsively started flows, such as the flow past a cylinder and that past an airfoil. Research Fellow (PhD student) of the F.R.S.-FNRS of Belgium.
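The particle-mesh interpolation mentioned above can be illustrated with its simplest kernel. This is a toy 1-D cloud-in-cell (linear) deposition, not the solver's actual (typically higher-order) interpolation scheme:

```python
def deposit_cic(positions, weights, n_nodes, dx):
    """1-D cloud-in-cell (linear) particle-to-mesh deposition: each particle
    splits its weight between the two bracketing grid nodes."""
    mesh = [0.0] * n_nodes
    for x, w in zip(positions, weights):
        i = int(x / dx)                # left node index
        f = x / dx - i                 # fractional offset within the cell
        mesh[i] += w * (1.0 - f)
        mesh[i + 1] += w * f
    return mesh

mesh = deposit_cic([0.25, 1.0], [1.0, 2.0], 4, 1.0)
# Particle at 0.25 -> 0.75 to node 0, 0.25 to node 1; particle at 1.0 -> node 1.
```

In a VPM solver the same operation transfers particle vorticity to the grid, where the Poisson equation is solved, and an analogous interpolation carries grid quantities back to the particles; the immersed interface work modifies these stencils near the wall.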
Mesh Refinement in Finite Element Analysis by Minimization of the Stiffness Matrix Trace
1989-11-01
… precede the last card "CEND" in the Executive Control Deck. The following set of DMAP instructions was used in the trace calculations: Nastran Executive … 3 edges. NASTRAN MSGMESH, GIFTS, SUPERSAP, and SUPERTAB (in I-DEAS) have this capability. For a more complicated geometry, Schwarz-Christoffel … distortion factors of the elements. (MSGMESH is developed by the MacNeal-Schwendler Corporation; GIFTS by the Sperry Univac Computer System.)
On global and local mesh refinements by a generalized conforming bisection algorithm
Czech Academy of Sciences Publication Activity Database
Hannukainen, A.; Korotov, S.; Křížek, Michal
2010-01-01
Roč. 235, č. 2 (2010), s. 419-436 ISSN 0377-0427 R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : Zlámal's minimum angle condition * finite element method * nested triangulations * conforming longest-edge bisection algorithm * high aspect ratio elements Subject RIV: BA - General Mathematics Impact factor: 1.029, year: 2010 http://www.sciencedirect.com/science/article/pii/S0377042710003365
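The basic step of the longest-edge bisection algorithm studied in this paper can be sketched directly. This is an illustrative single-triangle step (the full conforming algorithm additionally propagates bisections to neighbours so no hanging nodes remain):

```python
import math

def shoelace_area(t):
    """Area of a 2-D triangle via the cross-product (shoelace) formula."""
    (x0, y0), (x1, y1), (x2, y2) = t
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0

def longest_edge_bisect(tri):
    """One step of longest-edge bisection: split the triangle at the midpoint
    of its longest edge, yielding two children of equal area."""
    d = [math.dist(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    k = d.index(max(d))                       # longest edge tri[k] -- tri[k+1]
    a, b, c = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
    m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return (a, m, c), (m, b, c)

t1, t2 = longest_edge_bisect(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
```

Always splitting the longest edge is what keeps the minimum angle bounded under repeated refinement (Zlámal's condition mentioned in the keywords), in contrast to naive bisection, which can degenerate.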
Steel refining possibilities in LF
Dumitru, M. G.; Ioana, A.; Constantin, N.; Ciobanu, F.; Pollifroni, M.
2018-01-01
This article presents the main possibilities for steel refining in the ladle furnace (LF). The following are presented: steelmaking stages, steel refining through argon bottom stirring, online control of bottom stirring, the bottom-stirring diagram during LF treatment of a heat, the influence of the porous plug on argon stirring, the bottom-stirring porous plug, analysis of porous plug placement on the ladle bottom surface, bottom-stirring simulation with ANSYS, and bottom-stirring simulation with Autodesk CFD.
South Korea - oil refining overview
International Nuclear Information System (INIS)
Hayes, D.
1999-01-01
Following the economic problems of the 1990s, the petroleum refining industry of South Korea underwent considerable involuntary restructuring in 1999 in the form of takeovers and mergers, which are discussed. Demand for petroleum has now largely recovered. The reasons for fluctuating prices in the 1990s, how the new structure should be cushioned against future changes, and the potential for South Korea to export refined petroleum are all discussed.
Directory of Open Access Journals (Sweden)
S.T. Jayanth
2015-12-01
Conclusion: Chitosan-coated polypropylene mesh was found to have efficacy similar to that of Proceed™ mesh. Chitosan-coated polypropylene mesh can act as an anti-adhesive barrier when used in the repair of incisional hernias and abdominal wall defects.
Boundary denoising for open surface meshes
Lee, Wei Zhe; Lim, Wee Keong; Soo, Wooi King
2013-04-01
Recently, applications of open surfaces in 3D have emerged as an interesting research topic due to the popularity of range cameras such as the Microsoft Kinect. However, surface meshes representing such open surfaces are often corrupted with noise, especially at the boundary. Such defects need to be treated to facilitate further applications such as texture mapping and zippering of multiple open surface meshes. Conventional methods perform denoising by removing components with high frequencies, thus smoothing the boundaries. However, this may result in loss of information, as not all high-frequency transitions at the boundaries correspond to noise. To overcome this shortcoming, we propose a combination of local information and geometric features to single out the noisy or unusual vertices at the mesh boundaries. The local shape of the selected boundary regions, characterized by the mean curvature value, is compared with that of the neighbouring interior region. The neighbouring interior region is chosen to be the closest to the corresponding boundary region, while curvature evaluation is independent of the boundary. Smoothing is performed via Laplacian smoothing with modified weights to reduce boundary shrinkage. The algorithm is evaluated on noisy meshes generated from clean control meshes, with the Hausdorff distance used as the measure between meshes. We show that our method produces better results than conventional smoothing of the whole boundary loop.
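The selective, damped Laplacian smoothing described above can be sketched on a 2-D boundary loop. This is a hedged toy (the paper works on 3-D mesh boundaries with curvature-based vertex selection and its own weight scheme): only vertices flagged as noisy are relaxed toward the midpoint of their neighbours, and a damping factor limits shrinkage:

```python
def smooth_boundary(loop, noisy_idx, lam=0.5, iters=10):
    """Damped Laplacian smoothing of a closed boundary loop that moves only the
    vertices flagged as noisy; lam < 1 limits shrinkage of the boundary."""
    pts = list(loop)
    for _ in range(iters):
        new = list(pts)
        for i in noisy_idx:
            p, q = pts[i - 1], pts[(i + 1) % len(pts)]
            cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
            x, y = pts[i]
            new[i] = (x + lam * (cx - x), y + lam * (cy - y))
        pts = new
    return pts

# Square boundary with one outlier vertex; only that vertex is relaxed toward
# the midpoint of its neighbours, so the clean corners stay untouched.
loop = [(0.0, 0.0), (1.5, -1.0), (1.0, 1.0), (0.0, 1.0)]
out = smooth_boundary(loop, [1])
```

Restricting the update to detected noisy vertices is precisely what avoids the information loss the abstract attributes to smoothing the whole boundary loop.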
[CLINICAL EVALUATION OF THE NEW ANTISEPTIC MESHES].
Gogoladze, M; Kiladze, M; Chkhikvadze, T; Jiqia, D
2016-12-01
Improving the results of hernia treatment and preventing complications became the goal of our research, which included two parts, experimental and clinical. Histomorphological and bacteriological studies showed that the best result among the three control groups was obtained by covering the implant with "Coladerm" plus chlorhexidine. Based on the experimental results, work continued in the clinic to test and introduce the new "Coladerm"+chlorhexidine-covered polypropylene meshes into practice. For clinical illustration, 60 patients underwent hernioplasty with different meshes: group I, standard mesh + "Coladerm" + chlorhexidine, 35 patients; group II, standard mesh + "Coladerm", 15 patients; group III, standard mesh, 10 patients. Assessment of the wound and echo-control was done on the 8th, 30th and 90th days post-surgery. This clinical research, based on the experimental results, again showed the superior antimicrobial features of the new antiseptic polymeric biocomposite meshes (standard mesh + "Coladerm" + chlorhexidine), with timely completion of regeneration and reparation processes without any post-surgery suppurative complications. We hope that the new antiseptic polymeric biocomposite meshes presented here will be successfully used in the surgical practice of hernia treatment, as supported by our experimental and clinical research.
Fog water collection effectiveness: Mesh intercomparisons
Fernandez, Daniel; Torregrosa, Alicia; Weiss-Penzias, Peter; Zhang, Bong June; Sorensen, Deckard; Cohen, Robert; McKinley, Gareth; Kleingartner, Justin; Oliphant, Andrew; Bowman, Matthew
2018-01-01
To explore fog water harvesting potential in California, we conducted long-term measurements involving three types of mesh using standard fog collectors (SFC). Volumetric fog water measurements from SFCs and wind data were collected and recorded in 15-minute intervals over three summertime fog seasons (2014–2016) at four California sites. SFCs were deployed with: standard 1.00 m2 double-layer 35% shade coefficient Raschel mesh; stainless steel mesh coated with the MIT-14 hydrophobic formulation; and FogHa-Tin, a German-manufactured, 3-dimensional spacer fabric deployed in two orientations. Analysis of 3419 volumetric samples from all sites showed strong relationships between mesh efficiency and wind speed. Raschel mesh collected 160% more fog water than FogHa-Tin at wind speeds less than 1 m s–1 and 45% less at wind speeds greater than 5 m s–1. MIT-14 coated stainless-steel mesh collected more fog water than Raschel mesh at all wind speeds; at wind speeds of 4–5 m s–1 it collected 41% more. FogHa-Tin collected 5% more fog water when the warp of the weave was oriented vertically, per manufacturer specification, than when it was oriented horizontally. Time series measurements of the three distinct meshes across similar wind regimes revealed inconsistent lags in fog water collection and inconsistent performance. Since such differences occurred under similar wind-speed regimes, we conclude that other factors play important roles in mesh performance, including in-situ fog event and aerosol dynamics that affect droplet-size spectra and droplet-to-mesh surface interactions.
CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD
Energy Technology Data Exchange (ETDEWEB)
Anninos, Peter; Lau, Cheuk [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94550 (United States); Bryant, Colton [Department of Engineering Sciences and Applied Mathematics, Northwestern University, 2145 Sheridan Road, Evanston, Illinois, 60208 (United States); Fragile, P. Chris [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Holgado, A. Miguel [Department of Astronomy and National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801 (United States); Nemergut, Daniel [Operations and Engineering Division, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)
2017-08-01
We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.
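The strong-stability-preserving Runge-Kutta options mentioned above follow a standard convex-combination (Shu-Osher) form. As an illustration unrelated to CosmosDG's actual source, the classic third-order SSP scheme for a semi-discrete system du/dt = L(u) can be sketched as:

```python
import math

def ssp_rk3_step(L, u, dt):
    """One step of the third-order strong-stability-preserving
    Runge-Kutta scheme (Shu-Osher form): each stage is a convex
    combination of forward-Euler steps, so the stability properties
    of forward Euler carry over under the same CFL restriction."""
    u1 = u + dt * L(u)                               # first Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))         # second stage
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))   # final combination

# usage: exponential decay du/dt = -u, exact solution exp(-t)
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(lambda v: -v, u, dt)
```

For this linear test problem the scheme reproduces exp(-t) to third order, so after integrating to t = 1 the error is far below the step size.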
Zhang, Fang
2011-02-01
Mesh current collectors made of stainless steel (SS) can be integrated into microbial fuel cell (MFC) cathodes constructed of a reactive carbon black and Pt catalyst mixture and a poly(dimethylsiloxane) (PDMS) diffusion layer. It is shown here that the mesh properties of these cathodes can significantly affect performance. Cathodes made from the coarsest mesh (30-mesh) achieved the highest maximum power of 1616 ± 25 mW m-2 (normalized to cathode projected surface area; 47.1 ± 0.7 W m-3 based on liquid volume), while the finest mesh (120-mesh) had the lowest power density (599 ± 57 mW m-2). Electrochemical impedance spectroscopy showed that charge transfer and diffusion resistances decreased with increasing mesh opening size. In MFC tests, the cathode performance was primarily limited by reaction kinetics, and not mass transfer. Oxygen permeability increased with mesh opening size, accounting for the decreased diffusion resistance. At higher current densities, diffusion became a limiting factor, especially for fine mesh with low oxygen transfer coefficients. These results demonstrate the critical nature of the mesh size used for constructing MFC cathodes. © 2010 Elsevier B.V. All rights reserved.
Markov random fields on triangle meshes
DEFF Research Database (Denmark)
Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas
2010-01-01
In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to an MRF smoothness prior, while an independent edge process labels...... mesh edges according to a feature detecting prior. Since we should not smooth across a sharp feature, we use edge labels to control the vertex process. In a Bayesian framework, MRF priors are combined with the likelihood function related to the mesh formation method. The output of our algorithm...
Engagement of Metal Debris into Gear Mesh
Handschuh, Robert F.; Krantz, Timothy L.
2010-01-01
A series of bench-top experiments was conducted to determine the effects of metallic debris being dragged through meshing gear teeth. A test rig that is typically used to conduct contact fatigue experiments was used for these tests. Several sizes of drill material, shim stock and pieces of gear teeth were introduced and then driven through the meshing region. The level of torque required to drive the "chip" through the gear mesh was measured. From the data gathered, chip size sufficient to jam the mechanism can be determined.
Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling
Pencheva, Gergina V.
2013-01-01
We consider discretizations of a model elliptic problem by means of different numerical methods applied separately in different subdomains, termed multinumerics, coupled using the mortar technique. The grids need not match along the interfaces. We are also interested in the multiscale setting, where the subdomains are partitioned by a mesh of size h, whereas the interfaces are partitioned by a mesh of much coarser size H, and where lower-order polynomials are used in the subdomains and higher-order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient and one of them is robust with respect to the ratio H/h under an assumption of sufficient regularity of the weak solution. The present approach allows bounding separately and comparing mutually the subdomain and interface errors. A subdomain/interface adaptive refinement strategy is proposed and numerically tested. © 2013 Society for Industrial and Applied Mathematics.
Sentís, Manuel Lorenzo; Gable, Carl W.
2017-11-01
There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refinement, de-refinement, smoothing), and assignment of material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control-volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, and TOUGH2 (Pruess et al., 1999) to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed; it is presented here and will be included in a future release of LaGriT. In this paper an alternative method to generate a Voronoi mesh for TOUGH2 with LaGriT is presented, and thanks to the modular, command-based structure of LaGriT this method is well suited to generating meshes for complex models.
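The remark about two-point flux accuracy on Voronoi grids can be made concrete: across a face where the line joining the two cell centers is orthogonal to the face (as a Voronoi tessellation guarantees), the two-point flux transmissibility is a harmonic combination of half-cell terms. A minimal sketch with illustrative names, not LaGriT or TOUGH2 code:

```python
def tpfa_transmissibility(area, d1, d2, k1, k2):
    """Two-point flux transmissibility across a face of area `area`
    between two cells whose centers lie at distances d1 and d2 from
    the face, with isotropic permeabilities k1 and k2.  The half-cell
    transmissibilities are combined harmonically, which is exact when
    the center-to-center line is orthogonal to the face -- the
    property a Voronoi tessellation guarantees."""
    t1 = k1 * area / d1   # half-transmissibility, cell 1 side
    t2 = k2 * area / d2   # half-transmissibility, cell 2 side
    return t1 * t2 / (t1 + t2)

# usage: flux q = T * (p1 - p2) between two equal unit cells
T = tpfa_transmissibility(area=1.0, d1=0.5, d2=0.5, k1=2.0, k2=2.0)
q = T * (10.0 - 6.0)   # pressure drop of 4 drives the flux
```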
Generation and Adaptive Modification of Anisotropic Meshes, Phase II
National Aeronautics and Space Administration — The ability to quickly and reliably simulate high-speed flows over a wide range of geometrically complex configurations is critical to many of NASA's missions....
Generation and Adaptive Modification of Anisotropic Meshes, Phase I
National Aeronautics and Space Administration — The ability to quickly and reliably simulate high-speed flows over a wide range of geometrically complex configurations is critical to many of NASA's missions....
LR: Compact connectivity representation for triangle meshes
Energy Technology Data Exchange (ETDEWEB)
Gurung, T; Luffel, M; Lindstrom, P; Rossignac, J
2011-01-28
We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.
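LR's exact layout is more compact than is easily reproduced here, but the constant-time traversal operators it supports are the ones a classic corner table provides. A minimal corner-table sketch (our illustration, not the authors' code):

```python
class CornerTable:
    """Minimal corner-table connectivity for a manifold triangle mesh.
    Stores one vertex reference per corner (3 per triangle) plus an
    opposite-corner table, giving the standard constant-time traversal
    operators.  LR (the paper's structure) answers the same queries in
    roughly one reference per triangle; this sketch trades that
    compactness for clarity."""

    def __init__(self, triangles):
        self.v = [v for tri in triangles for v in tri]  # corner -> vertex
        self.o = [-1] * len(self.v)                     # corner -> opposite corner
        edge = {}
        for c in range(len(self.v)):
            key = frozenset((self.v[self.next(c)], self.v[self.prev(c)]))
            if key in edge:                 # second corner facing this edge
                d = edge.pop(key)
                self.o[c], self.o[d] = d, c
            else:
                edge[key] = c               # boundary corners stay at -1

    def next(self, c):  # next corner within the same triangle
        return c - 2 if c % 3 == 2 else c + 1

    def prev(self, c):  # previous corner within the same triangle
        return c + 2 if c % 3 == 0 else c - 1

    def opposite(self, c):  # corner facing the same edge in the adjacent triangle
        return self.o[c]

# usage: two triangles sharing the edge between vertices 1 and 2
ct = CornerTable([(0, 1, 2), (2, 1, 3)])
```

As in LR, once the structure is built the connectivity is fixed: any change to the triangulation requires rebuilding the tables.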
Obtuse triangle suppression in anisotropic meshes
Sun, Feng
2011-12-01
Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with far fewer obtuse triangles than existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.
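The CVT machinery underlying this method is easiest to see in its plain Euclidean form: Lloyd's algorithm alternates nearest-site assignment and centroid updates. The sketch below uses the Euclidean metric where the paper substitutes its hexagonal Minkowski metric:

```python
def lloyd_step(sites, samples):
    """One Lloyd iteration for a centroidal Voronoi tessellation,
    approximated on a discrete sample set: assign each sample to its
    nearest site (Euclidean metric here; the paper's method swaps in a
    hexagonal Minkowski metric), then move each site to the centroid
    of its assigned samples."""
    bins = {i: [] for i in range(len(sites))}
    for p in samples:
        i = min(range(len(sites)),
                key=lambda j: (p[0] - sites[j][0]) ** 2 + (p[1] - sites[j][1]) ** 2)
        bins[i].append(p)
    new_sites = []
    for i in range(len(sites)):
        pts = bins[i]
        if pts:
            new_sites.append((sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts)))
        else:
            new_sites.append(sites[i])   # keep sites of empty cells fixed
    return new_sites

# usage: two sites relaxing over a sample grid on the unit square;
# they drift toward the centroidal configuration x ~ 0.25 and x ~ 0.75
samples = [(x / 19.0, y / 19.0) for x in range(20) for y in range(20)]
sites = [(0.1, 0.5), (0.2, 0.5)]
for _ in range(50):
    sites = lloyd_step(sites, samples)
```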
Mesh Processing in Medical Image Analysis
DEFF Research Database (Denmark)
The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....
Shape space exploration of constrained meshes
Yang, Yongliang
2011-12-12
We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.
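First-order exploration of such a constraint manifold amounts to moving within the null space of the constraint Jacobian. As a single-constraint illustration (not the paper's framework), the sketch below projects a displacement of one quad onto the tangent space of its planarity constraint:

```python
def quad_planarity(q):
    """Planarity constraint for a quad q = [p0, p1, p2, p3]: the signed
    volume of the tetrahedron spanned by the three edge vectors from p0.
    It is zero exactly when the four vertices are coplanar."""
    a = [q[1][i] - q[0][i] for i in range(3)]
    b = [q[2][i] - q[0][i] for i in range(3)]
    c = [q[3][i] - q[0][i] for i in range(3)]
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def tangent_direction(q, d, h=1e-6):
    """Project a displacement d (one 3-vector per vertex) onto the
    tangent space of the constraint manifold at q by removing its
    component along the finite-difference gradient of the planarity
    constraint.  A first-order, single-constraint analogue of the
    paper's tangent-space approximants."""
    flat_q = [x for p in q for x in p]
    flat_d = [x for p in d for x in p]
    base = quad_planarity(q)
    grad = []
    for i in range(12):                       # finite-difference gradient
        bumped = list(flat_q)
        bumped[i] += h
        qb = [bumped[3 * k:3 * k + 3] for k in range(4)]
        grad.append((quad_planarity(qb) - base) / h)
    g2 = sum(g * g for g in grad)             # assumes a non-degenerate quad
    coef = sum(g * x for g, x in zip(grad, flat_d)) / g2
    t = [x - coef * g for x, g in zip(flat_d, grad)]
    return [t[3 * k:3 * k + 3] for k in range(4)]

# usage: lifting one corner of a planar square breaks planarity, but the
# projected direction stays on the constraint manifold to first order
q = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
d = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
t = tangent_direction(q, d)
q2 = [[q[k][i] + 0.01 * t[k][i] for i in range(3)] for k in range(4)]
```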
Zone refining of plutonium metal
International Nuclear Information System (INIS)
Blau, M.S.
1994-08-01
The zone refining process was applied to Pu metal containing known amounts of impurities. Rod specimens of plutonium metal were melted into and contained in tantalum boats, each of which was passed horizontally through a three-turn, high-frequency coil in such a manner as to cause a narrow molten zone to pass through the Pu metal rod 10 times. The impurity elements Co, Cr, Fe, Ni, Np, U were found to move in the same direction as the molten zone as predicted by binary phase diagrams. The elements Al, Am, and Ga moved in the opposite direction of the molten zone as predicted by binary phase diagrams. As the impurity alloy was zone refined, δ-phase plutonium metal crystals were produced. The first few zone refining passes were more effective than each later pass because an oxide layer formed on the rod surface. There was no clear evidence of better impurity movement at the slower zone refining speed. Also, constant or variable coil power appeared to have no effect on impurity movement during a single run (10 passes). This experiment was the first step to developing a zone refining process for plutonium metal
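The direction of impurity travel reported here is what classical zone-refining theory predicts from the effective distribution coefficient k: impurities with k < 1 travel with the molten zone, those with k > 1 move against it. A sketch of Pfann's single-pass concentration profile, with hypothetical numbers rather than the paper's data:

```python
import math

def single_pass_profile(c0, k, zone_len, x):
    """Pfann's solution for the impurity concentration after one
    molten-zone pass over a uniformly doped rod (valid up to the final
    zone length): C(x) = C0 * (1 - (1 - k) * exp(-k*x/l)), where k is
    the effective distribution coefficient and l the zone length.
    k < 1: the impurity is rejected into the zone and travels with it;
    k > 1: the impurity moves opposite to the zone travel."""
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_len))

# usage: hypothetical k = 0.3, 1 cm zone, concentration in ppm
start = single_pass_profile(100.0, 0.3, 1.0, 0.0)   # start of the rod
far   = single_pass_profile(100.0, 0.3, 1.0, 50.0)  # far from the start
```

At x = 0 the profile reduces to k*C0 (here 30 ppm), and far from the start it recovers the original concentration, which is why repeated passes are needed to sweep impurities along the rod.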
Towards Blockchain-enabled Wireless Mesh Networks
Selimi, Mennan; Kabbinale, Aniruddh Rao; Ali, Anwaar; Navarro, Leandro; Sathiaseelan, Arjuna
2018-01-01
Recently, mesh networking and blockchain are two of the hottest technologies in the telecommunications industry. Combining both can reformulate internet access and make connecting to the Internet not only easy, but affordable too. Hyperledger Fabric (HLF) is a blockchain framework implementation and one of the Hyperledger projects hosted by The Linux Foundation. We evaluate HLF in a real production mesh network and in the laboratory, quantify its performance, bottlenecks and limitations of th...
Unstructured Mesh Movement and Viscous Mesh Generation for CFD-Based Design Optimization Project
National Aeronautics and Space Administration — The innovations proposed are twofold: 1) a robust unstructured mesh movement method able to handle isotropic (Euler), anisotropic (viscous), mixed element (hybrid)...
The finite element method in making up meshes in ANSYS Meshing for CFD models
Directory of Open Access Journals (Sweden)
Віктор Іванович Троханяк
2015-11-01
Full Text Available The finite element method (FEM) is used in hydrodynamics and heat-transfer calculations. The essence of the method is the approximate solution of a variational problem; to formulate this problem a functional is used, whose form differs from task to task and is specially chosen. FEM is currently widely used for strength calculations and for heat-transfer problems in solids, but it can also be applied to flows of liquids and gases. There are also methods that combine elements of the finite volume and finite element approaches; their combination makes it possible to use a wide range of computational meshes (tetragonal, pyramidal, prismatic, polyhedral), which is necessary for solving problems with complex geometry. This approach is used by the CFD packages Ansys CFX, Ansys Fluent, Star-CD, Star-CCM+, Comsol, and others. A 2D mesh was constructed and analyzed, using the finite element method in ANSYS Meshing, for heat exchangers with an in-line arrangement of tubes in banks and with a curvilinear arrangement in compact tube banks of a new design. Particular features were considered, and an algorithm of mesh generation was developed for problems of hydraulic and gas dynamics and thermal mass transfer. The most suitable, high-quality meshes for the CFD models were chosen
Unstructured Mesh Movement and Viscous Mesh Generation for CFD-Based Design Optimization, Phase II
National Aeronautics and Space Administration — The innovations proposed are twofold: 1) a robust unstructured mesh movement method able to handle isotropic (Euler), anisotropic (viscous), mixed element (hybrid)...
Automatic Scheme Selection for Toolkit Hex Meshing
Energy Technology Data Exchange (ETDEWEB)
TAUTGES,TIMOTHY J.; WHITE,DAVID R.
1999-09-27
Current hexahedral mesh generation techniques rely on a set of meshing tools, which when combined with geometry decomposition leads to an adequate mesh generation process. Of these tools, sweeping tends to be the workhorse algorithm, accounting for at least 50% of most meshing applications. Constraints which must be met for a volume to be sweepable are derived, and it is proven that these constraints are necessary but not sufficient conditions for sweepability. This paper also describes a new algorithm for detecting extruded or sweepable geometries. This algorithm, based on these constraints, uses topological and local geometric information, and is more robust than feature recognition-based algorithms. A method for computing sweep dependencies in volume assemblies is also given. The auto sweep detect and sweep grouping algorithms have been used to reduce interactive user time required to generate all-hexahedral meshes by filtering out non-sweepable volumes needing further decomposition and by allowing concurrent meshing of independent sweep groups. Parts of the auto sweep detect algorithm have also been used to identify independent sweep paths, for use in volume-based interval assignment.
How to model wireless mesh networks topology
International Nuclear Information System (INIS)
Sanni, M L; Hashim, A A; Anwar, F; Ali, S; Ahmed, G S M
2013-01-01
The specification of a network connectivity model, or topology, is the starting point of design and analysis in computer network research. A Wireless Mesh Network is an autonomic network that is dynamically self-organised and self-configured: the mesh nodes establish connectivity automatically with adjacent nodes in the relay network of wireless backbone routers. Research in Wireless Mesh Networks ranges from node deployment to internetworking with sensor, Internet, and cellular networks. This research requires modelling the relationships and interactions among nodes, including the technical characteristics of the links, while satisfying the architectural requirements of the physical network. However, existing topology generators model geographic topologies that constitute different architectures and thus may not be suitable for Wireless Mesh Network scenarios. The existing methods of topology generation are explored and analysed, and parameters for their characterisation are identified. Furthermore, an algorithm for the design of Wireless Mesh Network topologies based on a square grid model is proposed in this paper. The performance of the generated topology is also evaluated. This research is particularly important for generating close-to-real topologies, ensuring that designs are relevant to the intended network and that results obtained in Wireless Mesh Network research are valid
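A square-grid topology of the kind proposed can be generated in a few lines: place nodes on a lattice and link every pair within radio range. This is our sketch of the general idea, not the paper's algorithm:

```python
def square_grid_topology(n, tx_range=1.0, spacing=1.0):
    """Generate a Wireless Mesh Network topology on an n x n square
    grid: nodes sit at grid points, and a link exists between any two
    nodes whose Euclidean distance is within the radio range.  With
    spacing == tx_range this yields the 4-neighbour lattice used in
    grid-based WMN models."""
    nodes = [(i * spacing, j * spacing) for i in range(n) for j in range(n)]
    links = []
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            dx = nodes[a][0] - nodes[b][0]
            dy = nodes[a][1] - nodes[b][1]
            if dx * dx + dy * dy <= tx_range * tx_range:
                links.append((a, b))
    return nodes, links

# usage: a 3 x 3 grid has 9 nodes and 12 lattice links
nodes, links = square_grid_topology(3)
```

Widening `tx_range` past the diagonal distance adds the 8-neighbour links, which is one simple way to parameterise node degree in such models.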
Romanian refining industry assesses restructuring
International Nuclear Information System (INIS)
Tanasescu, D.G.
1991-01-01
The Romanian crude oil refining industry, as all the other economic sectors, faces the problems accompanying the transition from a centrally planned economy to a market economy. At present, all refineries have registered as joint-stock companies and all are coordinated and assisted by Rafirom S.A., from both a legal and a production point of view. Rafirom S.A. is a joint-stock company that holds shares in refineries and other stock companies with activities related to oil refining. Such activities include technological research, development, design, transportation, storage, and domestic and foreign marketing. This article outlines the market forces that are expected to: drive rationalization and restructuring of refining operations and define the targets toward which the reconfigured refineries should strive
Data refinement for true concurrency
Directory of Open Access Journals (Sweden)
Brijesh Dongol
2013-05-01
Full Text Available The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components modify and observe the system state with fine-grained atomicity. Many systems (e.g., multi-core processors, real-time controllers) also exhibit truly concurrent behaviour, where multiple events can occur simultaneously. This paper presents data refinement defined in terms of an interval-based framework, which includes high-level operators that capture non-deterministic expression evaluation. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and continuous systems. We present an interval-based encoding of forward simulation, then prove that our forward simulation rule is sound with respect to our data refinement definition. A number of rules for decomposing forward simulation proofs over both sequential and parallel composition are developed.
Bauxite Mining and Alumina Refining
Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other notable risks relate to fatigue, heat, and solar ultraviolet and, for some operations, tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720
Design and Implementation of a Wireless ZigBee Mesh Network
Ouyang, Chenxi
2014-01-01
ZigBee is officially a wireless network protocol that is designed to be used with the low-data-rate sensor and control networks. The objective of this thesis was to implement a ZigBee mesh network with XBee 802.15.4 RF module and Raspberry Pi. The target was to make a mesh network with three nodes. The project consists of two parts: using the X-CTU application to implement the ZigBee network with XBee RF modules, XBee 802.15.4 Starter Development Kit and XBee Adapter Board, and then the imple...
Far infrared metal mesh filters and Fabry-Perot interferometry
International Nuclear Information System (INIS)
Kiyomi, S.; Genzel, L.
1983-01-01
The use of metal meshes is becoming increasingly important for applications in the far infrared. This paper reviews the important aspects of metal meshes, metal mesh filters, Fabry-Perot interferometers and their applications. The article includes the following: an introductory description and historical review of far-infrared metal meshes, the optical properties of metal meshes, beam splitters and filters based on inherent optical properties of meshes, Fabry-Perot interferometers, multimesh filters, applications of metal mesh filters and Fabry-Perot interferometers for far-infrared lasers, for plasma diagnostics and for astronomy
Parallel octree-based hexahedral mesh generation for eulerian to lagrangian conversion.
Energy Technology Data Exchange (ETDEWEB)
Staten, Matthew L.; Owen, Steven James
2010-09-01
Computational simulation must often be performed on domains where materials are represented as scalar quantities or volume fractions at cell centers of an octree-based grid. Common examples include bio-medical, geotechnical or shock physics calculations where interface boundaries are represented only as discrete statistical approximations. In this work, we introduce new methods for generating Lagrangian computational meshes from Eulerian-based data. We focus specifically on shock physics problems that are relevant to ASC codes such as CTH and Alegra. New procedures for generating all-hexahedral finite element meshes from volume fraction data are introduced. A new primal-contouring approach is introduced for defining a geometric domain. New methods for refinement, node smoothing, resolving non-manifold conditions and defining geometry are also introduced as well as an extension of the algorithm to handle tetrahedral meshes. We also describe new scalable MPI-based implementations of these procedures. We describe a new software module, Sculptor, which has been developed for use as an embedded component of CTH. We also describe its interface and its use within the mesh generation code, CUBIT. Several examples are shown to illustrate the capabilities of Sculptor.
Refining Nodes and Edges of State Machines
DEFF Research Database (Denmark)
Hallerstede, Stefan; Snook, Colin
2011-01-01
State machines are hierarchical automata that are widely used to structure complex behavioural specifications. We develop two notions of refinement of state machines, node refinement and edge refinement. We compare the two notions by means of examples and argue that, by adopting simple conventions...... refinement theory and UML-B state machine refinement influences the style of node refinement. Hence we propose a method with direct proof of state machine refinement avoiding the detour via Event-B that is needed by UML-B....
The Benefits of Adaptive Partitioning for Parallel AMR Applications
Energy Technology Data Exchange (ETDEWEB)
Steensland, Johan [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Advanced Software Research and Development
2008-07-01
Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, the dynamics inherent in these methods present significant challenges in data partitioning and load balancing. Significant human resources, including time, effort, experience, and knowledge, are required for determining the optimal partitioning technique for each new simulation. In reality, scientists resort to using the on-board partitioner of the computational framework, or to using the partitioning industry standard, ParMetis. Adaptive partitioning refers to repeatedly selecting, configuring and invoking the optimal partitioning technique at run-time, based on the current state of the computer and application. In theory, adaptive partitioning automatically delivers superior performance and eliminates the need for repeatedly spending valuable human resources for determining the optimal static partitioning technique. In practice, however, enabling frameworks are non-existent due to the inherent significant inter-disciplinary research challenges. This paper presents a study of a simple implementation of adaptive partitioning and discusses implied potential benefits from the perspective of common groups of users within computational science. The study is based on a large set of data derived from experiments including six real-life, multi-time-step adaptive applications from various scientific domains, five complementing and fundamentally different partitioning techniques, a large set of parameters corresponding to a wide spectrum of computing environments, and a flexible cost function that considers the relative impact of multiple partitioning metrics and diverse partitioning objectives. The results show that even a simple implementation of adaptive partitioning can automatically generate results statistically equivalent to the best static partitioning. Thus, it is possible to effectively eliminate the problem of determining the
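The core run-time decision described here, picking the partitioning technique that minimizes a weighted cost over several partitioning metrics, can be sketched as follows; the partitioner names and measurements are invented for illustration, not taken from the study:

```python
def select_partitioner(candidates, metrics, weights):
    """Pick the partitioning technique minimizing a weighted cost over
    several partitioning metrics (load imbalance, edge cut, migration
    volume, ...), mirroring the idea of a flexible cost function
    evaluated at run-time from the current state of the computer and
    application."""
    def cost(name):
        return sum(weights[m] * metrics[name][m] for m in weights)
    return min(candidates, key=cost)

# usage: hypothetical run-time measurements for three techniques
metrics = {
    "parmetis":  {"imbalance": 0.05, "edge_cut": 900.0, "migration": 400.0},
    "sfc":       {"imbalance": 0.15, "edge_cut": 700.0, "migration": 100.0},
    "recursive": {"imbalance": 0.02, "edge_cut": 1200.0, "migration": 600.0},
}
# relative impact of each metric for this phase of the simulation
weights = {"imbalance": 1000.0, "edge_cut": 1.0, "migration": 1.0}
best = select_partitioner(list(metrics), metrics, weights)
```

Re-evaluating the weights per time step is what distinguishes adaptive partitioning from committing to one static technique up front.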
Panorama 2007: Refining and Petrochemicals
International Nuclear Information System (INIS)
Silva, C.
2007-01-01
The year 2005 saw a new improvement in refining margins that continued during the first three quarters of 2006. The restoration of margins in the last three years has allowed the refining sector to regain its profitability. In this context, the oil companies reported earnings for fiscal year 2005 that were up significantly compared to 2004, and the figures for the first half-year 2006 confirm this trend. Despite this favorable business environment, investments only saw a minimal increase in 2005 and the improvement expected for 2006 should remain fairly limited. Looking to 2010-2015, it would appear that the planned investment projects with the highest probability of reaching completion will be barely adequate to cover the increase in demand. The refining sector should continue to find itself under pressure. As for petrochemicals, despite a steady up-trend in the naphtha price, the restoration of margins consolidated a comeback that started in 2005. All in all, capital expenditure remained fairly low in both the refining and petrochemicals sectors, but many projects are planned for the next ten years. (author)
Refining analgesia strategies using lasers.
Hampshire, Victoria
2015-08-01
Sound programs for the humane care and use of animals within research facilities incorporate experimental refinements such as multimodal approaches for pain management. These approaches can include non-traditional strategies along with more established ones. The use of lasers for pain relief is growing in popularity among companion animal veterinary practitioners and technologists. Therefore, its application in the research sector warrants closer consideration.
On syntactic action refinement and logic
Majster-Cederbaum, Mila; Salger, Frank
1999-01-01
Action refinement is a useful methodology for the stepwise development of concurrent processes. We are interested here in establishing a connection between syntactic action refinement and logic. In the syntactic approach to action refinement, reduction functions are used to remove the refinement operators from process-algebraic expressions, thereby providing semantics for them. We incorporate a syntactic action refinement operator into Hennessy-Milner Logic and define a logical r...
Ultrasound appearances after mesh implantation-evidence of mesh contraction or folding?
Czech Academy of Sciences Publication Activity Database
Švabík, K.; Martan, A.; Mašata, J.; Haddad El, R.; Hubka, P.; Pavlíková, Markéta
2011-01-01
Roč. 22, č. 5 (2011), s. 529-533 ISSN 0937-3462 Grant - others:GA MZd(CZ) NR9216 Institutional research plan: CEZ:AV0Z10300504 Keywords : prolift anterior * mesh shrinking * mesh retraction * vaginal ultrasound * vaginal surgery Subject RIV: FK - Gynaecology, Childbirth Impact factor: 1.832, year: 2011
Fictitious boundary and moving mesh methods for the numerical simulation of rigid particulate flows
Wan, Decheng; Turek, Stefan
2007-03-01
In this paper, we investigate the numerical simulation of particulate flows using a new moving mesh method combined with the multigrid fictitious boundary method (FBM) [S. Turek, D.C. Wan, L.S. Rivkind, The fictitious boundary method for the implicit treatment of Dirichlet boundary conditions with applications to incompressible flow simulations. Challenges in Scientific Computing, Lecture Notes in Computational Science and Engineering, vol. 35, Springer, Berlin, 2003, pp. 37-68; D.C. Wan, S. Turek, L.S. Rivkind, An efficient multigrid FEM solution technique for incompressible flow with moving rigid bodies. Numerical Mathematics and Advanced Applications, ENUMATH 2003, Springer, Berlin, 2004, pp. 844-853; D.C. Wan, S. Turek, Direct numerical simulation of particulate flow via multigrid FEM techniques and the fictitious boundary method, Int. J. Numer. Method Fluids 51 (2006) 531-566]. With this approach, the mesh is dynamically relocated through a (linear) partial differential equation to capture the surface of the moving particles with a relatively small number of grid points. The complete system is realized by solving the mesh movement and the partial differential equations of the flow problem alternately via an operator-splitting approach. The flow is computed by a special ALE formulation with a multigrid finite element solver, and the solid particles are allowed to move freely through the computational mesh which is adaptively aligned by the moving mesh method in every time step. One important aspect is that the data structure of the undeformed initial mesh, in many cases a tensor-product mesh or a semi-structured grid consisting of many tensor-product meshes, is preserved, while only the spacing between the grid points is adapted in each time step so that the high efficiency of structured meshes can be exploited. Numerical results demonstrate that the interaction between the fluid and the particles can be accurately and efficiently handled by the presented
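In one dimension, the idea of relocating mesh points through a PDE reduces to equidistributing a monitor function: grid points concentrate where the monitor is large, e.g. near a moving particle surface, while the data structure of the original grid is preserved. A discrete relaxation sketch of this idea (ours, not the authors' 2D/3D scheme):

```python
import math

def equidistribute(x, monitor, n_iter=200, tau=0.1):
    """Relax the interior points of a 1-D grid toward equidistribution
    of a monitor function w(x) -- a discrete analogue of a moving-mesh
    PDE: at the fixed point, w*dx is equal over all cells, so points
    cluster where w is large while the endpoints stay put."""
    x = list(x)
    for _ in range(n_iter):
        for i in range(1, len(x) - 1):
            wl = 0.5 * (monitor(x[i - 1]) + monitor(x[i]))   # left cell weight
            wr = 0.5 * (monitor(x[i]) + monitor(x[i + 1]))   # right cell weight
            # damped move toward the weighted average of the neighbours
            x[i] += tau * ((wl * x[i - 1] + wr * x[i + 1]) / (wl + wr) - x[i])
    return x

# usage: cluster points of a uniform grid near a feature at x = 0.5
x0 = [i / 10.0 for i in range(11)]
x = equidistribute(x0, lambda s: 1.0 + 20.0 * math.exp(-50.0 * (s - 0.5) ** 2))
```

Since each update keeps a point strictly between its neighbours, the grid stays monotone (untangled) throughout the relaxation, the same property the moving-mesh PDE is designed to maintain.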
International Nuclear Information System (INIS)
Michael J. Bockelie
2002-01-01
This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two-phase turbulent reacting flow. The target applications for the new tool are full-scale, fossil-fuel fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The types of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flow field. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid cannot contain too many grid points - the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture fine flow field detail within a very large domain with a ''reasonable'' number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block-structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL). Specifically, we combined physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International
Cell adhesion on NiTi thin film sputter-deposited meshes.
Loger, K; Engel, A; Haupt, J; Li, Q; Lima de Miranda, R; Quandt, E; Lutter, G; Selhuber-Unkel, C
2016-02-01
Scaffolds for tissue engineering make it possible to fabricate and form biomedical implants in vitro which fulfill special functionality in vivo. In this study, free-standing nickel-titanium (NiTi) thin film meshes were produced by means of magnetron sputter deposition. Meshes contained precisely defined rhombic holes ranging in size from 440 to 1309 μm² and a strut width ranging from 5.3 to 9.2 μm. The effective mechanical properties of the microstructured superelastic NiTi thin film were examined by tensile testing. These results will be adapted for the design of the holes in the film. The influence of hole and strut dimensions on the adhesion of sheep autologous cells (CD133+) was studied after 24 h and after seven days of incubation. Optical analysis using fluorescence microscopy and scanning electron microscopy showed that cell adhesion depends on the structural parameters of the mesh. After 7 days in cell culture a large part of the mesh was covered with aligned fibrous material. Cell adhesion is particularly facilitated on meshes with small rhombic holes of 440 μm² and a strut width of 5.3 μm. Our results demonstrate that free-standing NiTi thin film meshes have promising potential for applications in cardiovascular tissue engineering, particularly for the fabrication of heart valves.
Open preperitoneal groin hernia repair with mesh
DEFF Research Database (Denmark)
Andresen, Kristoffer; Rosenberg, Jacob
2017-01-01
BACKGROUND: For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. DATA SOURCES......: A systematic review was conducted and reported according to the PRISMA statement. PubMed, Cochrane library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days follow up following repair of an inguinal hernia with an open preperitoneal mesh......-analysis. Open preperitoneal techniques with placement of a mesh through an open approach seem promising compared with the standard anterior techniques. This systematic review provides an overview of these techniques together with a description of surgical methods and clinical outcomes....
NASA Lewis Meshed VSAT Workshop meeting summary
Ivancic, William
1993-11-01
NASA Lewis Research Center's Space Electronics Division (SED) hosted a workshop to address specific topics related to future meshed very small-aperture terminal (VSAT) satellite communications networks. The ideas generated by this workshop will help to identify potential markets and focus technology development within the commercial satellite communications industry and NASA. The workshop resulted in recommendations concerning these principal points of interest: the window of opportunity for a meshed VSAT system; system availability; ground terminal antenna sizes; recommended multifrequency for time division multiple access (TDMA) uplink; a packet switch design concept for narrowband; and fault tolerance design concepts. This report presents a summary of group presentations and discussion associated with the technological, economic, and operational issues of meshed VSAT architectures that utilize processing satellites.
Connectivity editing for quad-dominant meshes
Peng, Chihan
2013-08-01
We propose a connectivity editing framework for quad-dominant meshes. In our framework, the user can edit the mesh connectivity to control the location, type, and number of irregular vertices (with more or fewer than four neighbors) and irregular faces (non-quads). We provide a theoretical analysis of the problem, discuss what edits are possible and impossible, and describe how to implement an editing framework that realizes all possible editing operations. In the results, we show example edits and illustrate the advantages and disadvantages of different strategies for quad-dominant mesh design. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
Calculation of coherent synchrotron radiation using mesh
Directory of Open Access Journals (Sweden)
T. Agoh
2004-05-01
We develop a new method to simulate coherent synchrotron radiation numerically. It is based on the mesh calculation of the electromagnetic field in the frequency domain. We make an approximation in the Maxwell equation which allows a mesh size much larger than the relevant wavelength, so that the computing time is tolerable. Using the equation, we can perform a mesh calculation of coherent synchrotron radiation in transient states with shielding effects by the vacuum chamber. The simulation results obtained by this method are compared with analytic solutions. Although we adopt simplifications for the comparison with theories, such as a longitudinal Gaussian distribution, a zero-width transverse distribution, a uniform horizontal bend, and a vacuum chamber with rectangular cross section, the method is applicable to general cases.
The generation of hexahedral meshes for assembly geometries: A survey
Energy Technology Data Exchange (ETDEWEB)
TAUTGES,TIMOTHY J.
2000-02-14
The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem.
Mesh control information of windmill designed by Solidwork program
Mulyana, T.; Sebayang, D.; Rafsanjani, A. M. D.; Adani, J. H. D.; Muhyiddin, Y. S.
2017-12-01
This paper presents the mesh control information imposed on the windmill already designed. The accuracy of simulation results is influenced by the quality of the created mesh; however, as mesh quality improves, the simulation run time also increases. The smaller the elements created when making the mesh, the better the mesh quality that will be generated. When adjusting the mesh size, a slider acts as the density regulator for the elements. SolidWorks Simulation also has a Mesh Control facility, a feature that can adjust mesh density only in the desired part. The best mesh control results obtained for both static and thermal simulation have a ratio of 1.5.
MUSIC: a mesh-unrestricted simulation code
International Nuclear Information System (INIS)
Bonalumi, R.A.; Rouben, B.; Dastur, A.R.; Dondale, C.S.; Li, H.Y.H.
1978-01-01
A general formalism to solve the G-group neutron diffusion equation is described. The G-group flux is represented by complementing an ''asymptotic'' mode with (G-1) ''transient'' modes. A particular reduction-to-one-group technique gives high computational efficiency. MUSIC, a 2-group code using the above formalism, is presented. MUSIC is demonstrated on a fine-mesh calculation and on 2 coarse-mesh core calculations: a heavy-water reactor (HWR) problem and the 2-D light-water reactor (LWR) IAEA benchmark. Comparison is made to finite-difference results.
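The kind of mesh-based diffusion calculation MUSIC builds on can be sketched with the simplest possible case: a one-group, one-dimensional finite-difference eigenvalue solve by power (source) iteration. This is illustrative only; it is not MUSIC's asymptotic/transient-mode reduction, and the cross sections below are made-up values, not data from the paper.

```python
import numpy as np

def k_eff_1d(D=1.0, sigma_a=0.07, nu_sigma_f=0.08, L=100.0, n=200, iters=500):
    """Power (source) iteration for the one-group diffusion eigenproblem
    -D phi'' + Sigma_a phi = (1/k) nuSigma_f phi on (0, L), phi(0) = phi(L) = 0.
    """
    h = L / (n + 1)
    # tridiagonal finite-difference operator A = -D d2/dx2 + Sigma_a
    A = (np.diag(np.full(n, 2.0 * D / h**2 + sigma_a))
         + np.diag(np.full(n - 1, -D / h**2), 1)
         + np.diag(np.full(n - 1, -D / h**2), -1))
    phi, k = np.ones(n), 1.0
    for _ in range(iters):
        src = nu_sigma_f * phi                 # fission source
        phi_new = np.linalg.solve(A, src)      # diffusion solve
        k = np.sum(nu_sigma_f * phi_new) / np.sum(src)  # eigenvalue estimate
        phi = phi_new / phi_new.max()          # normalise the flux shape
    return k, phi

k, phi = k_eff_1d()
```

For a bare slab the result can be checked against the one-group formula k = nuSigma_f / (Sigma_a + D B^2) with buckling B = pi/L.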
Bauxite Mining and Alumina Refining
Donoghue, A. Michael; Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other notable risks relate to fatigue, heat, and solar ultraviolet, and, for some operations, tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust,...
The Charfuel coal refining process
International Nuclear Information System (INIS)
Meyer, L.G.
1991-01-01
The patented Charfuel coal refining process employs fluidized hydrocracking to produce char and liquid products from virtually all types of volatile-containing coals, including low rank coal and lignite. It is not gasification or liquefaction which require the addition of expensive oxygen or hydrogen or the use of extreme heat or pressure. It is not the German pyrolysis process that merely 'cooks' the coal, producing coke and tar-like liquids. Rather, the Charfuel coal refining process involves thermal hydrocracking which results in the rearrangement of hydrogen within the coal molecule to produce a slate of co-products. In the Charfuel process, pulverized coal is rapidly heated in a reducing atmosphere in the presence of internally generated process hydrogen. This hydrogen rearrangement allows refinement of various ranks of coals to produce a pipeline transportable, slurry-type, environmentally clean boiler fuel and a slate of value-added traditional fuel and chemical feedstock co-products. Using coal and oxygen as the only feedstocks, the Charfuel hydrocracking technology economically removes much of the fuel nitrogen, sulfur, and potential air toxics (such as chlorine, mercury, beryllium, etc.) from the coal, resulting in a high heating value, clean burning fuel which can increase power plant efficiency while reducing operating costs. The paper describes the process, its thermal efficiency, its use in power plants, its pipeline transport, co-products, environmental and energy benefits, and economics
International Nuclear Information System (INIS)
Anon.
1992-01-01
This paper reports that at a time when profit margins are slim and gasoline demand is down, the U.S. petroleum-refining industry is facing one of its greatest challenges: how to meet new federal and state laws for reformulated gasoline, oxygenated fuels, low-sulfur diesel and other measures to improve the environment. The American Petroleum Institute (API) estimates that industry will spend between $15 and $23 billion by the end of the decade to meet the U.S. Clean Air Act Amendments (CAAA) of 1990 and other legislation. ENSR Consulting and Engineering's capital-spending figure runs to between $70 and $100 billion this decade, including $24 billion to produce reformulated fuels and $10-12 billion to reduce refinery emissions. M.W. Kellogg Co. estimates that refiners may have to spend up to $30 billion this decade to meet the demand for reformulated gasoline. The estimates are wide-ranging because refiners are still studying their options and delaying final decisions as long as they can, to try to ensure they are the best and least-costly decisions. Oxygenated fuels will be required next winter, but federal regulations for reformulated gasoline won't go into effect until 1995, while California's tougher reformulated-fuels law will kick in the following year.
Abdominal reoperation and mesh explantation following open ventral hernia repair with mesh.
Liang, Mike K; Li, Linda T; Nguyen, Mylan T; Berger, Rachel L; Hicks, Stephanie C; Kao, Lillian S
2014-10-01
This study sought to identify the incidence, indications, and predictors of abdominal reoperation and mesh explantation following open ventral hernia repair with mesh (OVHR). A retrospective cohort study of all patients at a single institution who underwent an OVHR from 2000 to 2010 was performed. Patients who required subsequent abdominal reoperation or mesh explantation were compared with those who did not. Reasons for reoperation were recorded. The 2 groups were compared using univariate and multivariate analysis (MVA). A total of 407 patients were followed for a median (range) of 57 (1 to 143) months. Subsequent abdominal reoperation was required in 69 (17%) patients. The most common reasons for reoperation were recurrence and surgical site infection. Only the number of prior abdominal surgeries was associated with abdominal reoperation on MVA. Twenty-eight patients (6.9%) underwent subsequent mesh explantation. Only the Ventral Hernia Working Group grade was associated with mesh explantation on MVA. Abdominal reoperation and mesh explantation following OVHR are common. Overwhelmingly, surgical complications are the most common causes for reoperation and mesh explantation. Copyright © 2014 Elsevier Inc. All rights reserved.
Properties of meshes used in hernia repair: a comprehensive review of synthetic and biologic meshes.
Ibrahim, Ahmed M S; Vargas, Christina R; Colakoglu, Salih; Nguyen, John T; Lin, Samuel J; Lee, Bernard T
2015-02-01
Data on the mechanical properties of the adult human abdominal wall have been difficult to obtain, rendering manufacture of the ideal mesh for ventral hernia repair a challenge. An ideal mesh would need to exhibit greater biomechanical strength and elasticity than the abdominal wall itself. The aim of this study is to quantitatively compare the biomechanical properties of the most commonly used synthetic and biologic meshes in ventral hernia repair and to present a comprehensive literature review. A narrative review of the literature was performed using the PubMed database, spanning articles from 1982 to 2012 and including a review of company Web sites, to identify all available information relating to the biomechanical properties of various synthetic and biologic meshes used in ventral hernia repair. There exist differences in the mechanical properties and the chemical nature of different meshes. In general, most synthetic materials have greater stiffness and elasticity than what is required for abdominal wall reconstruction; however, each exhibits unique properties that may be beneficial for clinical use. On the contrary, biologic meshes are more elastic but less stiff, with a lower tensile strength than their synthetic counterparts. The current standard of practice for the treatment of ventral hernias is the use of permanent synthetic mesh material. Recently, biologic meshes have become more frequently used. Most meshes exhibit biomechanical properties over the known abdominal wall thresholds. Augmenting strength requires increasing amounts of material, contributing to more stiffness and foreign body reaction, which is not necessarily an advantage.
Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J
2018-04-01
Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art of model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. The conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework of patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy of anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' need. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE mesh showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical
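The thin-plate spline interpolation mentioned in the pipeline above can be sketched in 2D: given landmark pairs, the TPS is the smoothest map that interpolates them exactly, built from the radial basis U(r) = r² log r plus an affine part. This is a generic textbook TPS sketch, not the eFace implementation; the landmark counts and values are arbitrary illustrations.

```python
import numpy as np

def tps_warp(src_pts, dst_pts, query):
    """2D thin-plate spline interpolation fitted to landmark pairs src -> dst.

    Solves the standard TPS linear system [K P; P^T 0] [w; a] = [dst; 0],
    where K holds U(r) = r^2 log r between landmarks and P = [1, x, y],
    then evaluates the fitted map at the query points.
    """
    def U(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.nan_to_num(r**2 * np.log(r))  # U(0) = 0

    n = len(src_pts)
    K = U(np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src_pts])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst_pts
    coef = np.linalg.solve(L, rhs)                 # spline weights + affine map
    Kq = U(np.linalg.norm(query[:, None] - src_pts[None, :], axis=-1))
    Pq = np.hstack([np.ones((len(query), 1)), query])
    return Kq @ coef[:n] + Pq @ coef[n:]
```

By construction the warp reproduces the landmarks exactly, and fitting identical source and destination landmarks yields the identity map.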
Highly Symmetric and Congruently Tiled Meshes for Shells and Domes
Rasheed, Muhibur; Bajaj, Chandrajit
2016-01-01
We describe the generation of all possible shell and dome shapes that can be uniquely meshed (tiled) using a single type of mesh face (tile), and following a single meshing (tiling) rule that governs the mesh (tile) arrangement with maximal vertex, edge and face symmetries. Such tiling arrangements or congruently tiled meshed shapes, are frequently found in chemical forms (fullerenes or Bucky balls, crystals, quasi-crystals, virus nano shells or capsids), and synthetic shapes (cages, sports domes, modern architectural facades). Congruently tiled meshes are both aesthetic and complete, as they support maximal mesh symmetries with minimal complexity and possess simple generation rules. Here, we generate congruent tilings and meshed shape layouts that satisfy these optimality conditions. Further, the congruent meshes are uniquely mappable to an almost regular 3D polyhedron (or its dual polyhedron) and which exhibits face-transitive (and edge-transitive) congruency with at most two types of vertices (each type transitive to the other). The family of all such congruently meshed polyhedra create a new class of meshed shapes, beyond the well-studied regular, semi-regular and quasi-regular classes, and their duals (platonic, Catalan and Johnson). While our new mesh class is infinite, we prove that there exists a unique mesh parametrization, where each member of the class can be represented by two integer lattice variables, and moreover efficiently constructable. PMID:27563368
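A classical instance of the kind of two-integer parametrization the abstract describes, for the capsid-like shapes it mentions, is the Caspar-Klug (h, k) lattice construction for icosahedral triangulations. The sketch below shows that well-known case as a concrete example; it is not the paper's own (and more general) parametrization.

```python
def capsid_counts(h, k):
    """Counts for the (h, k) Caspar-Klug triangulation of an icosahedron.

    T = h^2 + h*k + k^2 is the triangulation number; the triangulated
    sphere then has 20T faces, 30T edges, and 10T + 2 vertices, exactly
    12 of which are the irregular 5-valent ones.
    """
    T = h * h + h * k + k * k
    faces, edges, vertices = 20 * T, 30 * T, 10 * T + 2
    assert vertices - edges + faces == 2  # Euler's formula for the sphere
    return T, faces, edges, vertices

# (1, 0) gives the plain icosahedron; (1, 1) a T=3 capsid layout
print(capsid_counts(1, 0), capsid_counts(1, 1))
```

Every member of the family is generated by the two lattice integers alone, which is the flavour of "unique mesh parametrization" claimed in the abstract.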
Saye, Robert
2017-09-01
In this two-part paper, a high-order accurate implicit mesh discontinuous Galerkin (dG) framework is developed for fluid interface dynamics, facilitating precise computation of interfacial fluid flow in evolving geometries. The framework uses implicitly defined meshes-wherein a reference quadtree or octree grid is combined with an implicit representation of evolving interfaces and moving domain boundaries-and allows physically prescribed interfacial jump conditions to be imposed or captured with high-order accuracy. Part one discusses the design of the framework, including: (i) high-order quadrature for implicitly defined elements and faces; (ii) high-order accurate discretisation of scalar and vector-valued elliptic partial differential equations with interfacial jumps in ellipticity coefficient, leading to optimal-order accuracy in the maximum norm and discrete linear systems that are symmetric positive (semi)definite; (iii) the design of incompressible fluid flow projection operators, which except for the influence of small penalty parameters, are discretely idempotent; and (iv) the design of geometric multigrid methods for elliptic interface problems on implicitly defined meshes and their use as preconditioners for the conjugate gradient method. Also discussed is a variety of aspects relating to moving interfaces, including: (v) dG discretisations of the level set method on implicitly defined meshes; (vi) transferring state between evolving implicit meshes; (vii) preserving mesh topology to accurately compute temporal derivatives; (viii) high-order accurate reinitialisation of level set functions; and (ix) the integration of adaptive mesh refinement. In part two, several applications of the implicit mesh dG framework in two and three dimensions are presented, including examples of single phase flow in nontrivial geometry, surface tension-driven two phase flow with phase-dependent fluid density and viscosity, rigid body fluid-structure interaction, and free
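The "implicitly defined mesh" idea above can be illustrated at its most basic: a fixed background grid is classified against a level set function, with each cell either inside, outside, or cut by the interface. The sketch uses a uniform grid and corner signs only, as an assumption for simplicity; the paper's framework uses quadtree/octree grids and high-order quadrature on the cut cells.

```python
import numpy as np

def classify_cells(phi, x, y):
    """Classify cells of a background grid against a level set phi(x, y).

    A cell is 'inside' if phi < 0 at all four corners, 'outside' if
    phi > 0 at all four corners, and 'cut' otherwise (the interface
    passes through it, or touches a corner).
    """
    X, Y = np.meshgrid(x, y, indexing="ij")
    s = np.sign(phi(X, Y))
    corners = np.stack([s[:-1, :-1], s[1:, :-1], s[:-1, 1:], s[1:, 1:]])
    inside = np.all(corners < 0, axis=0)
    outside = np.all(corners > 0, axis=0)
    cut = ~(inside | outside)
    return inside, outside, cut

# circle of radius 0.5 on a 32x32 cell grid over [-1, 1]^2
phi = lambda x, y: np.sqrt(x**2 + y**2) - 0.5
grid = np.linspace(-1.0, 1.0, 33)
inside, outside, cut = classify_cells(phi, grid, grid)
```

The fully-inside cells underestimate the disk area and inside-plus-cut cells overestimate it, which brackets the geometry the high-order quadrature then resolves.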
Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.
2017-12-01
It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near wellbore flow in reservoir scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretized using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and the fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretize space, and the spatial location of the well is specified via a line vector, ensuring its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields (preserving the geometry of the geologic domains), such as pressure, velocity or temperature; this also increases the quality of the solutions by placing higher resolution where required.
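The grading behaviour described above, fine resolution at the well and coarser cells with distance, can be sketched as a target edge-length function of distance to the well's line vector. This is a hypothetical illustration of the idea only: the function form, constants, and names (`target_edge_length`, `h_min`, `growth`) are assumptions, not part of the paper's method.

```python
import numpy as np

def target_edge_length(p, well_a, well_b, h_min=0.1, h_max=10.0, growth=0.3):
    """Desired mesh edge length at point p, graded by distance to the well.

    The well trajectory is the segment well_a -> well_b (a line vector
    independent of the mesh); resolution is finest (h_min) on the well
    and grows linearly with distance, capped at h_max.
    """
    ab = well_b - well_a
    t = np.clip(np.dot(p - well_a, ab) / np.dot(ab, ab), 0.0, 1.0)
    dist = np.linalg.norm(p - (well_a + t * ab))   # distance to the segment
    return min(h_max, h_min + growth * dist)
```

A mesh optimiser driven by such a field refines near the wellbore to resolve the drawdown while keeping the far field coarse.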
Resterilized Polypropylene Mesh for Inguinal Hernia Repair
African Journals Online (AJOL)
2018-04-19
Apr 19, 2018 ... Cost, availability of mesh, and perhaps reluctance to adopt a new technique are factors which prevent widespread ..... critical to preventing postoperative surgical site infection which potentially increases the cost of ... not recorded in 10 years of practice an incident of wound infection. In vitro bacteriological ...
MESH Release 2 implementation at CTIT
Diakov, N.K.; van Sinderen, Marten J.; Koprinkov, G.T.
This document contains a description of the development done at CTIT on the MESH services platform, a TINA-based platform for the deployment and exploitation of services to support teamwork. It provides an overview of the results and the usability of the architectural solutions and technologies used.
Markov Random Fields on Triangle Meshes
DEFF Research Database (Denmark)
Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas
2010-01-01
In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to a MRF smoothness prior, while an independent edge process labels...
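The vertex process described above can be illustrated with the simplest smoothness prior: each vertex is pulled toward the average of its edge neighbours, which is an ICM-style update for a Gaussian MRF. This sketch shows only that isotropic core idea; the paper's actual scheme is anisotropic and couples the vertex process with a separate edge-labelling process.

```python
import numpy as np

def smooth_vertices(verts, edges, n_iter=20, lam=0.5):
    """Iterative vertex smoothing under a quadratic (Gaussian MRF) prior.

    Each iteration simultaneously moves every vertex a fraction lam toward
    the mean of its edge neighbours, reducing the quadratic smoothness
    energy sum over edges of |v_i - v_j|^2.
    """
    verts = verts.astype(float).copy()
    n = len(verts)
    neigh = [[] for _ in range(n)]
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    for _ in range(n_iter):
        avg = np.array([verts[nb].mean(axis=0) if nb else verts[i]
                        for i, nb in enumerate(neigh)])
        verts += lam * (avg - verts)   # Jacobi-style simultaneous update
    return verts
```

On a zigzag polyline the high-frequency oscillation is damped quickly, which is exactly what a pure smoothness prior does (and why the paper adds an edge process to preserve sharp features).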
Polypropylene mesh: evidence for lack of carcinogenicity
Moalli, Pamela; Brown, Bryan; Reitman, Maureen T. F.
2016-01-01
Tumors related to the implantation of surgical grade polypropylene in humans have never been reported. In this commentary we present a balanced review of the information on what is known regarding the host response to polypropylene and provide data as to why the potential for carcinogenicity of polypropylene mesh is exceedingly small. PMID:24614956
Refined topological strings on local ℙ2
International Nuclear Information System (INIS)
Iqbal, Amer; Kozçaz, Can
2017-01-01
We calculate the refined topological string partition function of the Calabi-Yau threefold which is the total space of the canonical bundle on ℙ2 (the local ℙ2). The refined topological vertex formalism cannot be directly applied to local ℙ2; therefore, we use the properties of the refined Hopf link to define a new two-legged vertex which, together with the refined vertex, gives the partition function of the local ℙ2.
Latin American oil markets and refining
International Nuclear Information System (INIS)
Yamaguchi, N.D.; Obadia, C.
1999-01-01
This paper provides an overview of the oil markets and refining in Argentina, Brazil, Chile, Colombia, Ecuador, Mexico, Peru and Venezuela, and examines the production of crude oil in these countries. Details are given of Latin American refiners highlighting trends in crude distillation unit capacity, cracking to distillation ratios, and refining in the different countries. Latin American oil trade is discussed, and charts are presented illustrating crude production, oil consumption, crude refining capacity, cracking to distillation ratios, and oil imports and exports
[Implants for genital prolapse : Pro mesh surgery].
Neymeyer, J; Moldovan, D-E; Kornienko, K; Miller, K; Weichert, A
2017-12-01
There has been an overall increase in pelvic organ prolapse due to demographic changes (increased life expectancy). Increasing sociocultural demands of women require more effective treatments and more successful methods. In the treatment of pelvic floor insufficiency and uterovaginal prolapse, pelvic floor reconstructions with mesh implants have proven to be superior to conventional methods such as classic colporrhaphy, reconstructions with biomaterial, and native tissue repair in appropriately selected patients and when applying exact operation techniques, especially because of good long-term results and low recurrence rates. When making a systematic therapy plan, one should adhere to certain steps; for example, a pelvic floor reconstruction should be undertaken before performing the corrective procedure for incontinence. The approach, whether vaginal, laparoscopic, or abdominal, should be chosen wisely, taking into consideration the required space of action, in such a way that no or only minimal operation-related collateral damage occurs. The use of instrumental suturing techniques and operation robots is advantageous in the case of difficult approaches and limited anatomical spaces. In principle, the surgeon who implants meshes should be able to explant them! The surgical concept of mesh-related interventions in the pelvis must meet established rules: "Implant as little mesh as possible and only as much suitable (!) mesh as absolutely necessary!" In the case of apical direct fixations, a therapeutically relevant target variable is the elevation angle of the vagina (EAV). Established anatomical fixation points are preferable. A safe distance between implants and vulnerable tissue is to be maintained. Mesh-based prolapse repairs are indicated in recurrences, in primary situations, in combined defects of the anterior compartment, in central defects of multimorbid and elderly patients, and above all, when organ preservation is wanted.
Neutron Powder Diffraction and Constrained Refinement
DEFF Research Database (Denmark)
Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.
1977-01-01
The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement...
The Refined Function-Behaviour-Structure Framework
Diertens, B.
2013-01-01
We refine the function-behaviour-structure framework for design introduced by John Gero in order to deal with complexity. We do this by connecting the frameworks for the design of two models, one being the refinement of the other. The result is a refined framework for the design of an object on two levels.
Grain refinement of aluminum and its alloys
International Nuclear Information System (INIS)
Zaid, A.I.O.
2001-01-01
Grain refinement of aluminum and its alloys by the binary Al-Ti and ternary Al-Ti-B master alloys is reviewed and discussed. The importance of grain refining to the casting industry and the parameters affecting it are presented and discussed. These include parameters related to the cast, parameters related to the grain refining alloy, and parameters related to the process. The different mechanisms suggested in the literature for the process of grain refining are presented and discussed, from which it is found that although the mechanism of refining by the binary Al-Ti is well established, the mechanism of grain refining by the ternary Al-Ti-B is still a controversial matter and some research work is still needed in this area. The effect of the addition of other alloying elements in the presence of the grain refiner on the grain refining efficiency is also reviewed and discussed. It is found that some elements, e.g. V, Mo, and C, improve the grain refining efficiency, whereas other elements, e.g. Cr, Zr, and Ta, poison the grain refinement. Based on the parameters affecting grain refinement and its mechanism, a criterion for selecting the optimum grain refiner is put forward and discussed. (author)
Oxidation and degradation of polypropylene transvaginal mesh.
Talley, Anne D; Rogers, Bridget R; Iakovlev, Vladimir; Dunn, Russell F; Guelcher, Scott A
2017-04-01
Polypropylene (PP) transvaginal mesh (TVM) repair for stress urinary incontinence (SUI) has shown promising short-term objective cure rates. However, life-altering complications have been associated with the placement of PP mesh for SUI repair. PP degradation as a result of the foreign body reaction (FBR) has been proposed as a contributing factor to mesh complications. We hypothesized that PP oxidizes under in vitro conditions simulating the FBR, resulting in degradation of the PP. Three PP mid-urethral slings from two commercial manufacturers were evaluated. Test specimens (n = 6) were incubated in oxidative medium for up to 5 weeks. Oxidation was assessed by Fourier transform infrared spectroscopy (FTIR), and degradation was evaluated by scanning electron microscopy (SEM). FTIR spectra of the slings revealed carbonyl and hydroxyl peaks after 5 weeks of incubation, providing evidence of oxidation of PP. SEM images at 5 weeks showed evidence of surface degradation, including pitting and flaking. Thus, oxidation and degradation of PP pelvic mesh were evidenced by chemical and physical changes under simulated in vivo conditions. To assess changes in PP surface chemistry in vivo, fibers were recovered from PP mesh explanted from a single patient without formalin fixation, untreated (n = 5) or scraped (n = 5) to remove tissue, and analyzed by X-ray photoelectron spectroscopy. Mechanical scraping removed adherent tissue, revealing an underlying layer of oxidized PP. These findings underscore the need for further research into the relative contribution of oxidative degradation to complications associated with PP-based TVM devices in larger cohorts of patients.
Surgical Management of Pelvic floor Prolapse in women using Mesh
African Journals Online (AJOL)
RAH
...polytetrafluoroethylene). This article reviews our experience with polypropylene mesh in pelvic floor repair at the Southern General Hospital, Glasgow. The objective was to determine the safety and effectiveness of the prolene mesh in the repair ...
A Fully Automated Mesh Generation Tool, Phase I
National Aeronautics and Space Administration — This SBIR Phase I project proposes to develop a fully automated mesh generation tool which contains two parts: surface mesh generation from the imported Computer...
Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications
Energy Technology Data Exchange (ETDEWEB)
2017-04-13
Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes along with fields and sets on the mesh; support for other formats is partially implemented or planned. Jali is built on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. Support for on-node parallelism is through direct use of the mesh in multi-threaded constructs or through the use of "tiles", which are submeshes or sub-partitions of a partition destined for a compute node.
CUBIT mesh generation environment. Volume 1: Users manual
Energy Technology Data Exchange (ETDEWEB)
Blacker, T.D.; Bohnhoff, W.J.; Edwards, T.L. [and others
1994-05-01
The CUBIT mesh generation environment is a two- and three-dimensional finite element mesh generation tool which is being developed to pursue the goal of robust and unattended mesh generation--effectively automating the generation of quadrilateral and hexahedral elements. It is a solid-modeler based preprocessor that meshes volume and surface solid models for finite element analysis. A combination of techniques including paving, mapping, sweeping, and various other algorithms being developed are available for discretizing the geometry into a finite element mesh. CUBIT also features boundary layer meshing specifically designed for fluid flow problems. Boundary conditions can be applied to the mesh through the geometry and appropriate files for analysis generated. CUBIT is specifically designed to reduce the time required to create all-quadrilateral and all-hexahedral meshes. This manual is designed to serve as a reference and guide to creating finite element models in the CUBIT environment.
Mikhaylov, Rebecca; Dawson, Douglas; Kwack, Eug
2014-01-01
NASA's Earth-observing Soil Moisture Active & Passive (SMAP) mission is scheduled to launch in November 2014 into a 685 km near-polar, sun-synchronous orbit. SMAP will provide comprehensive global mapping measurements of soil moisture and freeze/thaw state in order to enhance understanding of the processes that link the water, energy, and carbon cycles. The primary objectives of SMAP are to improve worldwide weather and flood forecasting, enhance climate prediction, and refine drought and agriculture monitoring during its 3-year mission. The SMAP instrument architecture incorporates an L-band radar and an L-band radiometer which share a common feed horn and parabolic mesh reflector. The instrument rotates about the nadir axis at approximately 15 rpm, thereby providing a conically scanning wide-swath antenna beam that is capable of achieving global coverage within 3 days. In order to make the necessary precise surface emission measurements from space, a temperature knowledge of 60 deg C for the mesh reflector is required. To show compliance, a thermal vacuum test was conducted using a portable solar simulator to illuminate a non-flight, but flight-like, test article through the quartz window of the vacuum chamber. The molybdenum wire of the antenna mesh is too fine to accommodate thermal sensors for direct temperature measurements. Instead, the mesh temperature was inferred from resistance measurements made during the test. The test article was rotated to five separate angles between 10 deg and 90 deg via chamber breaks to simulate the maximum expected on-orbit solar loading during the mission. The resistance measurements were converted to temperature via a resistance-versus-temperature calibration plot that was constructed from data collected in a separate calibration test. A simple thermal model of two different representations of the mesh (plate and torus) was created to correlate the mesh temperature predictions to within 60 deg C. The on-orbit mesh
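The resistance-to-temperature inversion described above can be sketched as a simple linear calibration: fit temperature as a function of resistance from calibration data, then evaluate the fit at a measured resistance. The calibration numbers below are illustrative assumptions, not the actual SMAP calibration data.

```python
# Sketch of inferring mesh temperature from wire resistance, as in the
# SMAP reflector test. Calibration values here are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: resistance (ohm) at known temperatures (deg C),
# roughly linear as for a metal wire.
cal_temp_c = [-100.0, -50.0, 0.0, 50.0, 100.0]
cal_res_ohm = [80.0, 90.0, 100.0, 110.0, 120.0]

a, b = fit_line(cal_res_ohm, cal_temp_c)  # invert: T as a function of R

def mesh_temperature(resistance_ohm):
    return a * resistance_ohm + b

print(mesh_temperature(105.0))  # -> 25.0 for this synthetic calibration
```

With a real calibration curve the fit would typically be done piecewise or with a higher-order polynomial, but the inversion step is the same.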
Research on backbone node deployment for Wireless Mesh Networks in dynamic environments
Li, Meiyi; Cao, Shengling
2017-08-01
A Wireless Mesh Network is a type of wireless network in which users' bandwidth demands are mobile. The backbone node placement of wireless mesh networks in a dynamic scenario is investigated, and the TSDPSO algorithm is used to adapt to the dynamic environment: it updates node deployment locations to match changed demand if it detects environmental changes at the beginning of a cycle. In order to meet users' bandwidth demands and maintain network connectivity, particle swarm optimization is employed to select the gateway location, and then nodes are added to the backbone network until all demand is covered. The experimental results show that the algorithm can obtain effective solutions in dynamic environments.
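The gateway-placement step can be illustrated with a minimal particle swarm optimization sketch that places a single site so the farthest demand point is as close as possible. The demand points, search bounds, and PSO constants are illustrative assumptions; the paper's TSDPSO algorithm handles many backbone nodes and a changing environment.

```python
# Minimal PSO sketch for placing one backbone/gateway node.
# All problem data below are hypothetical.
import random

random.seed(1)
demands = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def cost(p):
    # Worst-case (max) squared distance from the candidate site to any demand.
    return max((p[0] - x) ** 2 + (p[1] - y) ** 2 for x, y in demands)

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # per-particle best position
    gbest = min(pbest, key=cost)[:]        # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

site = pso()
print(site)  # converges near (5, 5), the center of this symmetric layout
```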
On Interaction Refinement in Middleware
DEFF Research Database (Denmark)
Truyen, Eddy; Jørgensen, Bo Nørregaard; Joosen, Wouter
2000-01-01
Component framework technology has become the cornerstone of building a family of systems and applications. A component framework defines a generic architecture into which specialized components can be plugged. As such, the component framework leverages the glue that connects the different inserted...... components together. We have examined a reflective technique that improves the dynamics of this gluing process such that interaction between components can be refined at run-time. In this paper, we show how we have used this reflective technique to dynamically integrate into the architecture of middleware...
Refining method for bismuth nitrate
International Nuclear Information System (INIS)
Shibata, Shigeyuki.
1997-01-01
The present invention concerns a method of separating and removing α ray emitting nuclides present in an aqueous solution of bismuth nitrate by an industrially convenient method. A nitric acid concentration in the aqueous solution of bismuth nitrate in which α ray emitting nuclides are dissolved is lowered to coprecipitate the bismuth oxynitrate and the α ray emitting nuclides. The coprecipitation materials are separated from the aqueous solution of bismuth nitrate to separate the α ray emitting nuclides dissolved in the aqueous solution of bismuth nitrate thereby refining the aqueous solution of bismuth nitrate. (T.M.)
Sewing machine technique for laparoscopic mesh fixation in intra-peritoneal on-lay mesh.
Dastoor, Khojasteh Sam; Balsara, Kaiomarz P; Gazi, Asif Y
2018-01-01
: Mesh fixation in laparoscopic ventral hernia repair is accomplished using tacks or tacks with transfascial sutures. This is a painful operation, and the pain is believed to be due largely to the transfascial sutures. We describe a method of transfascial suturing which fixes the mesh securely and probably causes less pain. : Up to six ports may be necessary, three on each side. A suitable-sized mesh is used and fixed with tacks all around. A 20G spinal needle is passed from the skin through one corner of the mesh. A 0 prolene suture is passed through it into the peritoneum. With the prolene within, the needle is withdrawn above the anterior rectus sheath and passed again at an angle into the abdomen just outside the mesh. A loop of prolene is thus created, which is tied under vision using intra-corporeal knotting. : This method gives secure mesh fixation and causes less pain than conventional methods. The technique is easy to learn but needs expertise in intra-corporeal knotting.
On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach
Liu, Zheng; Xue, Kaiping; Hong, Peilin
The peer-assisted streaming paradigm has been widely employed to distribute live video data on the internet recently. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, pull protocol brings about longer streaming delay, which is caused by the handshaking process of advertising buffer map message, sending request message and scheduling of the data block. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements block scheduling algorithm at sender side, where the block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation, the results of which show mesh-push outperforms the pull scheduling in streaming delay, and achieves comparable delivery ratio at the same time.
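The sender-side scheduling idea can be sketched with a toy greedy assigner in which each data block is pushed to exactly one neighbor, so no block is sent redundantly (the role the token protocol plays in the paper). This greedy stand-in is an assumption for illustration; the paper derives the optimal schedule from a min-cost flow model.

```python
# Toy sender-side ("push") block scheduler in the spirit of mesh-push.
# Neighbor names and capacities are hypothetical.

def push_schedule(blocks, capacities):
    """Assign each block to the neighbor with the most remaining upload slots."""
    remaining = dict(capacities)   # neighbor -> upload slots left
    schedule = {}                  # block -> neighbor holding its "token"
    for blk in blocks:
        peer = max(remaining, key=remaining.get)
        if remaining[peer] == 0:
            raise RuntimeError("upload capacity exhausted")
        schedule[blk] = peer       # exactly one push per block: no redundancy
        remaining[peer] -= 1
    return schedule

sched = push_schedule(blocks=[0, 1, 2, 3], capacities={"A": 2, "B": 1, "C": 1})
print(sched)
```

Because the sender decides, no buffer-map advertisement or request round-trip is needed before a block departs, which is the delay saving the paper targets.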
W. European refiners face tough environmental rules
International Nuclear Information System (INIS)
Anon.
1993-01-01
This paper reports that western Europe's refiners are beginning to grapple with the kind of fundamental changes and huge outlays, related to stringent new environmental standards, that their U.S. counterparts are encountering. Just as the 1990 amendments to the Clean Air Act (CAAA) are transforming the shape of U.S. refining and imposing staggering costs on refiners there, new directives proposed by the European Community Commission threaten the same for western European refiners. Given the current state of the industry in Europe, industry officials warn, refiners there might not be able to fund those massive environmental outlays.
Niobium-base grain refiner for aluminium
International Nuclear Information System (INIS)
Silva Pontes, P. da; Robert, M.H.; Cupini, N.L.
1980-01-01
A new chemical grain refiner for aluminium has been developed, using inoculation of a niobium-base compound. When a bath of molten aluminium is inoculated with this refiner, an intermetallic aluminium-niobium compound is formed which acts as a powerful nucleant, producing extremely fine structures comparable to those obtained by means of the traditional grain refiner based on titanium and boron. It was found that the refinement of the structure depends upon the weight percentage of the new refiner inoculated as well as the time of holding the bath after inoculation and before pouring, but mainly on the inoculating temperature. (Author) [pt
Shah, Ketul; Nikolavsky, Dmitriy; Gilsdorf, Daniel; Flynn, Brian J
2013-12-01
We present our management of lower urinary tract (LUT) mesh perforation after mid-urethral polypropylene mesh sling using a novel combination of surgical techniques including total or near total mesh excision, urinary tract reconstruction, and concomitant pubovaginal sling with autologous rectus fascia in a single operation. We retrospectively reviewed the medical records of 189 patients undergoing transvaginal removal of polypropylene mesh from the lower urinary tract or vagina. The focus of this study is 21 patients with LUT mesh perforation after mid-urethral polypropylene mesh sling. We excluded patients with LUT mesh perforation from prolapse kits (n = 4) or sutures (n = 11), or mesh that was removed because of isolated vaginal wall exposure without concomitant LUT perforation (n = 164). Twenty-one patients underwent surgical removal of mesh through a transvaginal approach or combined transvaginal/abdominal approaches. The location of the perforation was the urethra in 14 and the bladder in 7. The mean follow-up was 22 months. There were no major intraoperative complications. All patients had complete resolution of the mesh complication and the primary symptom. Of the patients with urethral perforation, continence was achieved in 10 out of 14 (71.5 %). Of the patients with bladder perforation, continence was achieved in all 7. Total or near total removal of lower urinary tract (LUT) mesh perforation after mid-urethral polypropylene mesh sling can completely resolve LUT mesh perforation in a single operation. A concomitant pubovaginal sling can be safely performed in efforts to treat existing SUI or avoid future surgery for SUI.
21 CFR 870.3650 - Pacemaker polymeric mesh bag.
2010-04-01
21 Food and Drugs 8 (2010-04-01). MEDICAL DEVICES, CARDIOVASCULAR DEVICES, Cardiovascular Prosthetic Devices, § 870.3650 Pacemaker polymeric mesh bag. (a) Identification. A pacemaker polymeric mesh bag is an implanted device used to hold a...
Cryo-mesh: a simple alternative cryopreservation protocol.
Funnekotter, B; Bunn, E; Mancera, R L
The continued development of new cryopreservation protocols has improved post-cryogenic success rates for a wide variety of plant species. Methods like the cryo-plate have proven beneficial in simplifying the cryopreservation procedure. This study assessed the practicality of a stainless steel mesh strip (cryo-mesh) for cryopreserving shoot tips from Anigozanthos viridis. Shoot tips of A. viridis (Kangaroo Paw) were precultured on 0.4 M sucrose medium for 48 h. Precultured shoot tips were coated in a 2% alginate solution and placed onto the cryo-mesh (a 25 x 7 mm, 0.4 mm aperture, 0.224 mm diameter wire stainless steel mesh strip). The alginate was set for 20 min in a loading solution containing 100 mM CaCl2, anchoring the shoot tips to the cryo-mesh. The cryo-mesh was then transferred to PVS2 on ice for 20, 30 or 40 min prior to plunging the cryo-mesh into liquid nitrogen. The cryo-mesh protocol was compared to the droplet-vitrification protocol. A maximum of 83% post-cryogenic regeneration was achieved with the cryo-mesh when exposed to PVS2 for 30 min. No significant difference in post-cryogenic regeneration was observed between the cryo-mesh and droplet-vitrification protocols. Anigozanthos viridis shoot tips were successfully cryopreserved utilising the new cryo-mesh. The cryo-mesh thus provides a simple and successful alternative for cryopreservation.
Multiphase Flow of Immiscible Fluids on Unstructured Moving Meshes
DEFF Research Database (Denmark)
Misztal, Marek Krzysztof; Erleben, Kenny; Bargteil, Adam
2013-01-01
In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization op...
Refined gasoline in the subsurface
International Nuclear Information System (INIS)
Bruce, L.G.
1993-01-01
Geologists today are being called upon not only to find naturally occurring petroleum, but also to help assess and remediate the problem of refined hydrocarbons and other man-made contaminants in the subsurface that may endanger freshwater resources or human health. Petroleum geologists already have many of the skills required and are at ease working with fluid flow in the subsurface. If called upon for environmental projects, however, they will need to know the language and additional concepts necessary to deal with the hydrogeologic problems. Most releases of refined hydrocarbons and other man-made contaminants occur in the shallow unconfined groundwater environment. This is divided into three zones: the saturated zone, the unsaturated zone, and the capillary fringe. All three have unique characteristics, and contamination behaves differently in each. Gasoline contamination partitions into four phases in this environment: vapor phase, residual phase, free phase, and dissolved phase. Each has a different degree of mobility in the three subsurface zones. Their direction and rate of movement can be estimated using basic concepts, but geological complexities frequently complicate this issue. 24 refs., 19 figs., 4 tabs
Prosthetic Mesh Repair for Incarcerated Inguinal Hernia
Directory of Open Access Journals (Sweden)
Cihad Tatar
2016-08-01
Full Text Available Background: Incarcerated inguinal hernia is a commonly encountered urgent surgical condition, and tension-free repair is a well-established method for the treatment of noncomplicated cases. However, due to the risk of prosthetic material-related infections, the use of mesh in the repair of strangulated or incarcerated hernia has often been subject to debate. Recent studies have demonstrated that biomaterials represent suitable materials for performing urgent hernia repair. Certain studies recommend mesh repair only for cases where no bowel resection is required; other studies, however, recommend mesh repair for patients requiring bowel resection as well. Aim: The aim of this study was to compare the outcomes of different surgical techniques performed for strangulated hernia, and to evaluate the effect of mesh use on postoperative complications. Study Design: Retrospective cross-sectional study. Methods: This retrospective study was performed with 151 patients who had been admitted to our hospital's emergency department to undergo surgery for a diagnosis of incarcerated inguinal hernia. The patients were divided into two groups based on the applied surgical technique. Group 1 consisted of 112 patients treated with mesh-based repair techniques, while Group 2 consisted of 39 patients treated with tissue repair techniques. Patients in Group 1 were further divided into two sub-groups: one consisting of patients undergoing bowel resection (Group 3), and the other consisting of patients not undergoing bowel resection (Group 4). Results: In Group 1, it was observed that eight (7.14%) of the patients had wound infections, while two (1.78%) had hematomas, four (3.57%) had seromas, and one (0.89%) had a relapse. In Group 2, one (2.56%) of the patients had a wound infection, while three (7.69%) had hematomas, one (2.56%) had a seroma, and none had relapses. There were no statistically significant differences between the two groups with respect to wound infection
To mesh or not to mesh: a review of pelvic organ reconstructive surgery
Directory of Open Access Journals (Sweden)
Dällenbach P
2015-04-01
Full Text Available Patrick Dällenbach Department of Gynecology and Obstetrics, Division of Gynecology, Urogynecology Unit, Geneva University Hospitals, Geneva, Switzerland Abstract: Pelvic organ prolapse (POP) is a major health issue with a lifetime risk of undergoing at least one surgical intervention estimated at close to 10%. In the 1990s, the risk of reoperation after a primary standard vaginal procedure was estimated to be as high as 30% to 50%. In order to reduce the risk of relapse, gynecological surgeons started to use mesh implants in pelvic organ reconstructive surgery, with the emergence of new complications. Recent studies have nevertheless shown that the risk of POP recurrence requiring reoperation is lower than previously estimated, being closer to 10% rather than 30%. The development of mesh surgery, actively promoted by the marketing industry, was tremendous during the past decade, and preceded any studies supporting its benefit for our patients. Randomized trials comparing the use of mesh to native tissue repair in POP surgery have now shown better anatomical but similar functional outcomes, and meshes are associated with more complications, in particular for transvaginal mesh implants. POP is not a life-threatening condition, but a functional problem that impairs quality of life for women. The old adage "primum non nocere" is particularly appropriate when dealing with this condition, which requires no treatment when asymptomatic. It is currently admitted that a certain degree of POP is physiological with aging when situated above the landmark of the hymen. Treatment should be individualized and the use of mesh needs to be selective and appropriate. Mesh implants are probably an important tool in pelvic reconstructive surgery, but the ideal implant has yet to be found. The indications for its use still require caution and discernment. This review explores the reasons behind the introduction of mesh augmentation in POP surgery, and aims to
Zhang, Hong; Zegeling, Paul Andries
2017-01-01
An adaptive moving mesh finite difference method is presented to solve two types of equations with dynamic capillary pressure effect in porous media. One is the non-equilibrium Richards Equation and the other is the modified Buckley-Leverett equation. The governing equations are discretized with an
A Progressive Refinement Approach for the Visualisation of Implicit Surfaces
Gamito, Manuel N.; Maddock, Steve C.
Visualising implicit surfaces with the ray casting method is a slow procedure. The design cycle of a new implicit surface is, therefore, fraught with long latency times as a user must wait for the surface to be rendered before being able to decide what changes should be introduced in the next iteration. In this paper, we present an attempt at reducing the design cycle of an implicit surface modeler by introducing a progressive refinement rendering approach to the visualisation of implicit surfaces. This progressive refinement renderer provides a quick previewing facility. It first displays a low quality estimate of what the final rendering is going to be and, as the computation progresses, increases the quality of this estimate at a steady rate. The progressive refinement algorithm is based on the adaptive subdivision of the viewing frustum into smaller cells. An estimate for the variation of the implicit function inside each cell is obtained with an affine arithmetic range estimation technique. Overall, we show that our progressive refinement approach not only provides the user with visual feedback as the rendering advances but is also capable of completing the image faster than a conventional implicit surface rendering algorithm based on ray casting.
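The cell-classification step can be sketched with plain interval arithmetic, a simpler cousin of the affine arithmetic range estimation used in the paper: a cell can be skipped when the range estimate of the implicit function excludes zero, and only cells that may contain the surface need further subdivision. The unit-circle implicit function below is an illustrative assumption.

```python
# Interval-arithmetic cell classification for an implicit curve,
# shown in 2D for brevity. The circle function is a hypothetical example.

def interval_circle(xlo, xhi, ylo, yhi, r=1.0):
    """Interval bound of f(x, y) = x^2 + y^2 - r^2 over a 2D cell."""
    def sq_bounds(lo, hi):
        cands = [lo * lo, hi * hi]
        lo2 = 0.0 if lo <= 0.0 <= hi else min(cands)  # x^2 hits 0 if 0 in [lo, hi]
        return lo2, max(cands)
    xl, xh = sq_bounds(xlo, xhi)
    yl, yh = sq_bounds(ylo, yhi)
    return xl + yl - r * r, xh + yh - r * r

def may_contain_surface(cell):
    lo, hi = interval_circle(*cell)
    return lo <= 0.0 <= hi   # zero inside the range: subdivide further

print(may_contain_surface((0.5, 1.0, 0.5, 1.0)))   # straddles the unit circle
print(may_contain_surface((2.0, 3.0, 2.0, 3.0)))   # entirely outside: skip
```

Affine arithmetic tightens these bounds by tracking correlations between the x and y terms, which is why the paper's estimates subdivide fewer cells.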
Meshed split skin graft for extensive vitiligo
Directory of Open Access Journals (Sweden)
Srinivas C
2004-05-01
Full Text Available A 30-year-old female presented with generalized stable vitiligo involving large areas of the body. Since large areas were to be treated, it was decided to do a meshed split skin graft. A phototoxic blister over the recipient site was induced by applying 8-MOP solution followed by exposure to UVA. The split skin graft was harvested from the donor area with a Padgett dermatome and was meshed with an ampligreffe to increase the size of the graft by 4 times. Significant pigmentation of the depigmented skin was seen after 5 months. This procedure helps to cover large recipient areas when pigmented donor skin is limited, with minimal risk of scarring. The phototoxic blister enables easy separation of the epidermis, thus saving the time required for dermabrasion of the recipient site.
Nondispersive optical activity of meshed helical metamaterials.
Park, Hyun Sung; Kim, Teun-Teun; Kim, Hyeon-Don; Kim, Kyungjin; Min, Bumki
2014-11-17
Extreme optical properties can be realized by the strong resonant response of metamaterials consisting of subwavelength-scale metallic resonators. However, highly dispersive optical properties resulting from strong resonances have impeded the broadband operation required for frequency-independent optical components or devices. Here we demonstrate that strong, flat broadband optical activity with high transparency can be obtained with meshed helical metamaterials in which metallic helical structures are networked and arranged to have fourfold rotational symmetry around the propagation axis. This nondispersive optical activity originates from the Drude-like response as well as the fourfold rotational symmetry of the meshed helical metamaterials. The theoretical concept is validated in a microwave experiment in which flat broadband optical activity with a designed magnitude of 45° per layer of metamaterial is measured. The broadband capabilities of chiral metamaterials may provide opportunities in the design of various broadband optical systems and applications.
Isomorphic routing on a toroidal mesh
Mao, Weizhen; Nicol, David M.
1993-01-01
We study a routing problem that arises on SIMD parallel architectures whose communication network forms a toroidal mesh. We assume there exists a set of k message descriptors (x_i, y_i), where (x_i, y_i) indicates that the i-th message's recipient is offset from its sender by x_i hops in one mesh dimension and y_i hops in the other. Every processor has k messages to send, and all processors use the same set of message routing descriptors. The SIMD constraint implies that at any routing step, every processor is actively routing messages with the same descriptors as any other processor. We call this isomorphic routing. Our objective is to find the isomorphic routing schedule with least makespan. We consider a number of variations on the problem, yielding complexity results from O(k) to NP-complete. Most of our results follow after we transform the problem into a scheduling problem, where it is related to other well-known scheduling problems.
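A naive baseline for this problem routes the shared descriptors one after another, which gives a simple upper bound on the makespan; good isomorphic schedules overlap descriptors to do better. The mesh size and descriptor set below are illustrative assumptions.

```python
# Sequential-makespan baseline for isomorphic routing on a toroidal mesh.
# Each descriptor (dx, dy) costs its minimal toroidal hop distance.

def torus_hops(d, size):
    """Minimal hops to cover offset d on a ring of the given size."""
    d %= size
    return min(d, size - d)   # go the short way around the torus

def sequential_makespan(descriptors, nx, ny):
    # Route descriptors one at a time: total time is the sum of hop counts.
    return sum(torus_hops(dx, nx) + torus_hops(dy, ny) for dx, dy in descriptors)

# Hypothetical 8x8 toroidal mesh with three message descriptors.
print(sequential_makespan([(1, 0), (3, 5), (7, 7)], 8, 8))  # -> 9
```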
Partitioning of unstructured meshes for load balancing
International Nuclear Information System (INIS)
Martin, O.C.; Otto, S.W.
1994-01-01
Many large-scale engineering and scientific calculations involve repeated updating of variables on an unstructured mesh. To do these types of computations on distributed memory parallel computers, it is necessary to partition the mesh among the processors so that the load balance is maximized and inter-processor communication time is minimized. This can be approximated by the problem of partitioning a graph so as to obtain a minimum cut, a well-studied combinatorial optimization problem. Graph partitioning algorithms are discussed that give good but not necessarily optimum solutions. These algorithms include local search methods, recursive spectral bisection, and more general purpose methods such as simulated annealing. It is shown that a general procedure enables simulated annealing to be combined with Kernighan-Lin. The resulting algorithm is both very fast and extremely effective. (authors) 23 refs., 3 figs., 1 tab
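The local-search idea can be sketched with a greedy swap-based bisection in the spirit of Kernighan-Lin: start from an arbitrary equal split and accept vertex-pair swaps that reduce the cut. Real KL uses gain structures and tentative swap sequences, and the paper combines such moves with simulated annealing; this simplified greedy version and its tiny graph are assumptions for illustration.

```python
# Greedy swap-based graph bisection (simplified Kernighan-Lin flavor).
# The example graph is two triangles joined by one edge.

def cut_size(edges, part):
    return sum(1 for u, v in edges if part[u] != part[v])

def improve(edges, part):
    """Accept balanced pair swaps while they reduce the cut; return final cut."""
    best = cut_size(edges, part)
    improved = True
    while improved:
        improved = False
        side0 = [n for n in part if part[n] == 0]
        side1 = [n for n in part if part[n] == 1]
        for a in side0:
            for b in side1:
                part[a], part[b] = 1, 0          # tentative balanced swap
                c = cut_size(edges, part)
                if c < best:
                    best, improved = c, True
                    break                        # keep the swap, rescan
                part[a], part[b] = 0, 1          # undo
            if improved:
                break
    return best

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}      # bad initial split, cut = 5
print(improve(edges, part))                      # -> 1, the optimal cut here
```

Simulated annealing would occasionally accept cut-increasing swaps to escape local minima, which is exactly what the hybrid algorithm in the abstract exploits.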
Variational mesh segmentation via quadric surface fitting
Yan, Dongming
2012-11-01
We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
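The Lloyd-style alternation can be sketched in stripped-down form where each segment's proxy is a fitted line standing in for the paper's quadric surfaces: assign each point to the proxy with the smallest residual, refit the proxies, and repeat until the labeling stabilizes. The point data and initial labeling below are illustrative assumptions.

```python
# Lloyd-style alternation between assignment and proxy fitting,
# with 2D lines as stand-ins for quadric surface proxies.

def fit_proxy_line(pts):
    """Least-squares line y = a*x + b through a non-empty point set."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def residual(pt, line):
    a, b = line
    return (pt[1] - (a * pt[0] + b)) ** 2

def lloyd(points, labels, iters=10):
    # Note: no guard for a cluster emptying out; fine for this toy data.
    for _ in range(iters):
        lines = [fit_proxy_line([p for p, l in zip(points, labels) if l == k])
                 for k in (0, 1)]
        new = [min((0, 1), key=lambda k: residual(p, lines[k])) for p in points]
        if new == labels:
            break                      # labeling stable: converged
        labels = new
    return labels

# Points on two horizontal lines, with one point initially mislabeled.
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 5), (1, 5), (2, 5), (3, 5)]
print(lloyd(pts, [0, 0, 0, 1, 1, 1, 1, 1]))  # -> [0, 0, 0, 0, 1, 1, 1, 1]
```

The paper's version does the same dance on triangles with quadric proxies and an energy mixing L2 and L2,1 terms, but the interleaving structure is identical.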
Diffusive mesh relaxation in ALE finite element numerical simulations
Energy Technology Data Exchange (ETDEWEB)
Dube, E.I.
1996-06-01
The theory for a diffusive mesh relaxation algorithm is developed for use in three-dimensional Arbitrary Lagrangian/Eulerian (ALE) finite element simulation techniques. This mesh relaxer is derived by a variational principle for an unstructured 3D grid using finite elements, and incorporates hourglass controls in the numerical implementation. The diffusive coefficients are based on the geometric properties of the existing mesh, and are chosen so as to allow for a smooth grid that retains the general shape of the original mesh. The diffusive mesh relaxation algorithm is then applied to an ALE code system, and results from several test cases are discussed.
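The diffusive relaxation idea can be sketched as Laplacian-style smoothing: interior nodes drift toward the average of their neighbors while boundary nodes stay fixed. The toy grid and uniform relaxation coefficient below are illustrative assumptions; the paper derives spatially varying coefficients from the existing mesh geometry and adds hourglass controls.

```python
# Laplacian-style diffusive relaxation of a tiny 2D grid.
# Only the center node of a 3x3 grid is free; everything else is fixed.

def relax(coords, neighbors, fixed, alpha=0.5, iters=50):
    for _ in range(iters):
        new = dict(coords)
        for node, nbrs in neighbors.items():
            if node in fixed:
                continue
            ax = sum(coords[n][0] for n in nbrs) / len(nbrs)
            ay = sum(coords[n][1] for n in nbrs) / len(nbrs)
            x, y = coords[node]
            # Diffuse toward the neighbor average; alpha plays the role
            # of a (here uniform) diffusion coefficient.
            new[node] = (x + alpha * (ax - x), y + alpha * (ay - y))
        coords = new
    return coords

coords = {(i, j): (float(i), float(j)) for i in range(3) for j in range(3)}
coords[(1, 1)] = (2.7, 0.1)                       # badly displaced interior node
neighbors = {(1, 1): [(0, 1), (2, 1), (1, 0), (1, 2)]}
fixed = {n for n in coords if n != (1, 1)}
print(relax(coords, neighbors, fixed)[(1, 1)])    # relaxes back toward (1, 1)
```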
Bode, Paul
2013-05-01
TPM carries out collisionless (dark matter) cosmological N-body simulations, evolving a system of N particles as they move under their mutual gravitational interaction. It combines aspects of both Tree and Particle-Mesh algorithms. After the global PM forces are calculated, spatially distinct regions above a given density contrast are located; the tree code calculates the gravitational interactions inside these denser objects at higher spatial and temporal resolution. The code is parallel and uses MPI for message passing.
Wireless experiments on a Motorola mesh testbed.
Energy Technology Data Exchange (ETDEWEB)
Riblett, Loren E., Jr.; Wiseman, James M.; Witzke, Edward L.
2010-06-01
Motomesh is a Motorola product that performs mesh networking at both the client and access-point levels and allows broadband mobile data connections with or between clients moving at vehicular speeds. Sandia National Laboratories has extensive experience with this product and its predecessors in infrastructure-less mobile environments. This report documents experiments that characterize certain aspects of how the Motomesh network performs when mobile units are added to a fixed network infrastructure.
Symmetries and the coarse-mesh method
International Nuclear Information System (INIS)
Makai, M.
1980-10-01
This report approaches the basic problem of the coarse-mesh method from a new angle. Group theory is used to determine the space dependency of the flux. The result is a method called ANANAS, after its analytic-analytic solution. This method was tested on two benchmark problems: one given by Melice, and the IAEA benchmark. The ANANAS program is an experimental one. The method was intended for use in hexagonal geometry. (Auth.)
Hoard, C.J.
2010-01-01
The U.S. Geological Survey is evaluating water availability and use within the Great Lakes Basin. This is a pilot effort to develop new techniques and methods to aid in the assessment of water availability. As part of the pilot program, a regional groundwater-flow model for the Lake Michigan Basin was developed using SEAWAT-2000. The regional model was used as a framework for assessing local-scale water availability through grid-refinement techniques. Two grid-refinement techniques, telescopic mesh refinement and local grid refinement, were used to illustrate the capability of the regional model to evaluate local-scale problems. An intermediate model was developed in central Michigan spanning an area of 454 square miles (mi2) using telescopic mesh refinement. Within the intermediate model, a smaller local model covering an area of 21.7 mi2 was developed and simulated using local grid refinement. Recharge was distributed in space and time using daily output from a modified Thornthwaite-Mather soil-water-balance method. The soil-water-balance method derived recharge estimates from temperature and precipitation data output from an atmosphere-ocean coupled general-circulation model. The particular atmosphere-ocean coupled general-circulation model used simulated climate change caused by high global greenhouse-gas emissions to the atmosphere. The surface-water network simulated in the regional model was refined and simulated using a streamflow-routing package for MODFLOW. The refined models were used to demonstrate streamflow depletion and potential climate change using five scenarios. The streamflow-depletion scenarios include (1) natural conditions (no pumping), (2) a pumping well near a stream, screened in surficial glacial deposits, (3) a pumping well near a stream, screened in deeper glacial deposits, and (4) a pumping well near a stream, open to a deep bedrock aquifer. Results indicated that a range of 59 to 50 percent of the
Numerical Investigation of Corrugated Wire Mesh Laminate
Directory of Open Access Journals (Sweden)
Jeongho Choi
2013-01-01
Full Text Available The aim of this work is to develop a numerical model of Corrugated Wire Mesh Laminate (CWML) capturing all its complexities, such as nonlinear material properties, nonlinear geometry and large-deformation behaviour, and frictional behaviour. Such a model will facilitate numerical simulation of the mechanical behaviour of the wire mesh structure under various types of loading, as well as variation of the CWML configuration parameters to tailor its mechanical properties to the intended application. Starting with a single-strand truss model consisting of four waves, with a bilinear stress-strain model to represent the plastic behaviour of stainless steel, the finite element model is gradually built up to study single-layer structures consisting of 18 strands of corrugated wire mesh, and double- and quadruple-layered laminates with alternating cross-ply orientations. The compressive behaviour of the CWML model is simulated using contact elements to model friction and is compared to the load-deflection behaviour determined experimentally in uniaxial compression tests. The numerical model of the CWML is then employed to establish the upper and lower bounds of stiffness and load capacity achievable by such structures.
FPGA Congestion-Driven Placement Refinement
Energy Technology Data Exchange (ETDEWEB)
Vicente de, J.
2005-07-01
Routing congestion usually limits the full exploitation of FPGA logic resources, so a key question concerns the benefit of estimating congestion at the placement stage. In recent years, the idea of congestion-aware detailed placement has been gaining acceptance. In this paper, we resort to the Thermodynamic Simulated Annealing (TSA) algorithm to perform congestion-driven placement refinement on top of a solution pre-optimized with the common Bounding-Box cost function. The adaptive properties of TSA allow the search to preserve the quality of the pre-optimized solution while improving other fine-grain objectives. Regarding the cost function, two approaches have been considered. In the first, Expected Occupation (EO), a detailed probabilistic model of channel congestion is evaluated. We show that, in spite of the minute detail of EO, the inherent uncertainty of this probabilistic model prevents it from relieving congestion beyond what the Bounding-Box cost function alone achieves. In the second approach, we resort to the fast Rectilinear Steiner Regions algorithm to perform not an estimation but a measurement of global routing congestion. This second strategy allows us to reduce the required channel width for a set of benchmark circuits with respect to the widespread Versatile Place and Route (VPR) tool. (Author) 31 refs.
Collagen/Polypropylene composite mesh biocompatibility in abdominal wall reconstruction.
Lukasiewicz, Aleksander; Skopinska-Wisniewska, Joanna; Marszalek, Andrzej; Molski, Stanislaw; Drewa, Tomasz
2013-05-01
Intraperitoneal placement of polypropylene mesh leads to extensive visceral adhesions and is contraindicated. Different coatings are used to improve polypropylene mesh properties. Collagen is a protein with unique biocompatibility and the potential to enhance cell ingrowth. A novel acetic acid-extracted collagen coating was developed to allow placement of polypropylene mesh in direct contact with viscera. The authors' aim was to evaluate the long-term influence of the acetic acid-extracted collagen coating on surgical aspects and biomechanical properties of polypropylene mesh implanted in direct contact with viscera, including complications, adhesions with viscera, strength of incorporation, and microscopic inflammatory reaction. Forty adult Wistar rats were divided into two groups: experimental (polypropylene mesh with acetic acid-extracted collagen coating) and control (polypropylene mesh only). A standardized procedure of mesh implantation was performed. Animals were killed 3 months after surgery and analyzed for complications, mesh area covered by adhesions, type of adhesions, strength of incorporation, and intensity of inflammatory response. The mean adhesion area was lower for the coated mesh (14.5 percent versus 69.9 percent). Adhesions to polypropylene mesh are significantly reduced by the acetic acid-extracted collagen coating. The collagen coating does not increase complications or induce alterations of polypropylene mesh incorporation.
MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.
Mao, Yuqing; Lu, Zhiyong
2017-04-17
MeSH indexing is the task of assigning relevant MeSH terms to scholarly publications, based on a manual reading by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are not indexed until 2 or 3 months after publication) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved an F1-score above 0.60, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be parallelized in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing, capable of processing PubMed-scale document collections within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/.
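The candidate-generation-then-ranking pipeline can be sketched as a toy pointwise ranker. The feature names, weights, and MeSH terms below are hypothetical and far simpler than MeSH Now's actual learned model; they only illustrate the "score candidates, sort, keep top-k" structure.

```python
def rank_candidates(candidates, weights, top_k=3):
    """Toy pointwise learning-to-rank step: score each candidate MeSH
    term by a linear combination of its features, then keep the
    top-scoring terms. (MeSH Now's real model and features differ.)"""
    scored = [(sum(w * f for w, f in zip(weights, feats)), term)
              for term, feats in candidates]
    scored.sort(reverse=True)
    return [term for _, term in scored[:top_k]]

# Hypothetical features: [retrieval score, classifier prob., neighbour votes]
cands = [("Humans",     [0.9, 0.8, 0.7]),
         ("Neoplasms",  [0.6, 0.9, 0.5]),
         ("Mice",       [0.2, 0.1, 0.3]),
         ("Algorithms", [0.7, 0.6, 0.6])]
print(rank_candidates(cands, weights=[1.0, 1.0, 1.0]))
```

In a learning-to-rank setting the weights would be trained on articles with known gold-standard MeSH annotations rather than fixed by hand.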
Prosthetic mesh repair of abdominal wall hernias in horses.
Tóth, Ferenc; Schumacher, Jim
2018-02-05
Repair of hernias of the abdominal wall of horses is often augmented by inserting a prosthetic mesh. In this review, we describe the various characteristics of prosthetic meshes used for hernia repair and present 2 systems that are used by surgeons in the human medical field to classify techniques of prosthetic mesh herniorrhaphy. Both of these classification systems distinguish between onlay, inlay, sublay, and underlay placements of mesh, based on the location within the abdominal wall in which the prosthetic mesh is inserted. We separate the published techniques of prosthetic mesh herniorrhaphy of horses using this classification system, ascribing names to the techniques of herniorrhaphy where none existed, and report the success rates and complications associated with each technique. By introducing a classification system widely used in the human medical field and illustrating each technique in a figure, we hope to clarify inconsistent nomenclature associated with prosthetic mesh herniorrhaphy performed by veterinary surgeons. © 2018 The American College of Veterinary Surgeons.
Data-Parallel Mesh Connected Components Labeling and Analysis
Energy Technology Data Exchange (ETDEWEB)
Harrison, Cyrus; Childs, Hank; Gaither, Kelly
2011-04-10
We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
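The union-find primitive at the heart of the labeling algorithm can be sketched serially; the paper applies it in multiple stages across processors, whereas this minimal version labels the connected sub-meshes of a small invented cell-adjacency list on one core.

```python
class UnionFind:
    """Union-find with path halving: the core primitive of the
    sub-mesh labeling algorithm described above (serial version)."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Follow parent pointers to the root, halving the path as we go.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

# Cells 0-5; shared faces connect (0,1), (1,2) and (4,5): two sub-meshes
# {0,1,2} and {4,5}, plus the isolated cell 3.
uf = UnionFind(6)
for a, b in [(0, 1), (1, 2), (4, 5)]:
    uf.union(a, b)
labels = [uf.find(i) for i in range(6)]
print(len(set(labels)))  # → 3 connected sub-meshes
```

In the distributed setting, each processor first labels its local cells this way, and the per-processor results are then merged by further union operations on sub-meshes that span processor boundaries.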
A Hexapod Robot to Demonstrate Mesh Walking in a Microgravity Environment
Foor, David C.
2005-01-01
The JPL Micro-Robot Explorer (MRE) Spiderbot is a robot that takes advantage of its small size to perform precision tasks suitable for space applications. The Spiderbot is a legged robot that can traverse harsh terrain otherwise inaccessible to wheeled robots. A team of Spiderbots can network and collaborate to successfully complete a set of tasks. The Spiderbot is designed and developed to demonstrate hexapods that can walk on flat surfaces, crawl on meshes, and assemble simple structures. The robot has six legs, each consisting of two spring-compliant joints and a gripping actuator. A hard-coded set of gaits allows the robot to move smoothly along the mesh in a zero-gravity environment. The primary objective of this project is to create a Spiderbot that traverses a flexible, deployable mesh, for use in space repair. Verification of this task will take place aboard a zero-gravity test flight. The secondary objective is to use feedback from the joints to allow the robot to test each arm for a successful grip of the mesh. The end result of this research lends itself to a fault-tolerant robot suitable for a wide variety of space applications.
Directory of Open Access Journals (Sweden)
Shuai Mo
2017-01-01
Full Text Available This paper studies the multiple-split load-sharing mechanism of gears in the two-stage external meshing planetary transmission system of an aeroengine. Accounting for eccentricity error, gear tooth thickness error, pitch error, installation error, and bearing manufacturing error, we performed a meshing-error analysis in terms of equivalent angles, and we also considered the floating meshing error caused by variation of the meshing backlash, which arises from the simultaneous floating of all gears. Finally, we obtained the comprehensive angular meshing error of the two-stage meshing line, established a refined mathematical computational model of the 2-stage external 3-split load-sharing coefficient in consideration of displacement compatibility, obtained the curves of the load-sharing coefficient and the load-sharing characteristic curve of the fully floating multiple-split, multiple-stage system, and determined the variation law of the floating track and floating quantity of the center wheel. These results provide a scientific basis for determining the load-sharing coefficient, reasonable load distribution, and control tolerances in aviation design and manufacturing.
Refinement and standardization of synthetic biological parts and devices.
Canton, Barry; Labno, Anna; Endy, Drew
2008-07-01
The ability to quickly and reliably engineer many-component systems from libraries of standard interchangeable parts is one hallmark of modern technologies. Whether the apparent complexity of living systems will permit biological engineers to develop similar capabilities is a pressing research question. We propose to adapt existing frameworks for describing engineered devices to biological objects in order to (i) direct the refinement and use of biological 'parts' and 'devices', (ii) support research on enabling reliable composition of standard biological parts and (iii) facilitate the development of abstraction hierarchies that simplify biological engineering. We use the resulting framework to describe one engineered biological device, a genetically encoded cell-cell communication receiver named BBa_F2620. The description of the receiver is summarized via a 'datasheet' similar to those widely used in engineering. The process of refinement and characterization leading to the BBa_F2620 datasheet may serve as a starting template for producing many standardized genetically encoded objects.
Refined isogeometric analysis for a preconditioned conjugate gradient solver
Garcia, Daniel
2018-02-12
Starting from a highly continuous Isogeometric Analysis (IGA) discretization, refined Isogeometric Analysis (rIGA) introduces C0 hyperplanes that act as separators for the direct LU factorization solver. As a result, the total computational cost required to solve the corresponding system of equations using a direct LU factorization solver is dramatically reduced (up to a factor of 55; Garcia et al., 2017). At the same time, rIGA enriches the IGA spaces, thus improving the best approximation error. In this work, we extend the complexity analysis of rIGA to the case of iterative solvers. We build an iterative solver as follows: we first construct the Schur complements using a direct solver over small subdomains (macro-elements). We then assemble those Schur complements into a global skeleton system. Subsequently, we solve this system iteratively using Conjugate Gradients (CG) with an incomplete LU (ILU) preconditioner. For a 2D Poisson model problem with a structured mesh and a uniform polynomial degree of approximation, rIGA achieves moderate savings with respect to IGA in terms of the number of Floating Point Operations (FLOPs) and computational time (in seconds) required to solve the resulting system of linear equations. For instance, for a mesh with four million elements and polynomial degree p=3, the iterative solver is approximately 2.6 times faster when applied to the rIGA system than to the IGA one. These savings occur because the skeleton rIGA system contains fewer non-zero entries than the IGA one. The opposite situation occurs for 3D problems, and as a result, 3D rIGA discretizations provide no gains with respect to their IGA counterparts when considering iterative solvers.
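The iterative stage of the solver can be illustrated with a generic preconditioned Conjugate Gradients loop. As stand-ins (not the paper's setup), a small 1-D Poisson matrix replaces the skeleton system and a simple Jacobi (diagonal) preconditioner replaces the incomplete-LU preconditioner.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned Conjugate Gradients. The paper preconditions the
    skeleton system with an incomplete LU factorization; here M_inv
    applies a simple Jacobi (diagonal) preconditioner instead."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# 1-D Poisson (tridiagonal SPD) matrix as a stand-in for the skeleton system.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)    # Jacobi preconditioner: divide by diagonal
```

The quality of the preconditioner determines the iteration count; an ILU preconditioner of the sparse skeleton system, as in the paper, typically converges in far fewer iterations than Jacobi.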
Janssen, Bärbel
2011-01-01
A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level, and thus has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high-order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Marijan Lužnik
2018-02-01
Full Text Available Background. The use of alloplastic mesh implants allows new urogynecological surgical techniques to achieve a marked improvement in pelvic organ statics and pelvic floor function through minimally invasive transvaginal needle interventions such as the anterior transobturator mesh (ATOM) and posterior ischiorectal mesh (PIRM) procedures. Methods. Over three years, between April 2006 and May 2009, we performed one hundred and eighty-four operative corrections of female pelvic organ prolapse (POP) and pelvic floor dysfunction (PFD) with mesh implants. The eighty-three patients who underwent TVT-O or Monarc as a solo intervention, indicated by stress urinary incontinence without POP, are not included in this number. In 97 % of mesh operations, Gynemesh 10 × 15 cm was used. For correction of anterior vaginal prolapse with the ATOM procedure, Gynemesh was individually trimmed into a mesh with 6 free arms for tension-free transobturator application and a tension-free apical collar. The IVS (intravaginal sling) 04 Tunneller (Tyco) needle system was used for transobturator application of the 6 arms through 4 dermal incisions (2 on the right and 2 on the left). A minimal anterior median colpotomy was made in two separate parts. For correction of posterior vaginal prolapse with the PIRM procedure, Gynemesh was trimmed into a mesh with 4 free arms and a tension-free collar: two long ischiorectal arms for tension-free application through the fossa ischiorectalis (right and left), and two short arms for the perineal body, also on both sides. The IVS 02 Tunneller (Tyco) needle system was used for tension-free application of the 4 arms through 4 dermal incisions (2 on the right and 2 on the left) in PIRM. Results. All 184 procedures were performed relatively safely. In 9 cases of ATOM we had perforation of the bladder: in 5 by application of the anterior needle, in 3 by application of the posterior needle, and in one case with a pincette when the collar was inserted into the lateral vesico-vaginal space. In 2 cases of PIRM we had perforation of the rectum
Diffusion of compact macromolecules through polymer meshes: mesh dynamics and probe dynamics
International Nuclear Information System (INIS)
Biehl, R.; Guo, X.; Prud'homme, R.K.; Monkenbusch, M.; Allgeier, J.; Richter, D.
2004-01-01
The diffusion of compact macromolecules in polymer networks is examined to understand how polymer networks or gels could be used to filter different types of macromolecules. We present new measurements of the network and probe dynamics by neutron spin-echo spectroscopy (NSE). The investigated system consists of the protein α-lactalbumin (Rg = 15.2 Å) as probe and a network of high-molecular-weight poly(ethylene oxide). The high molecular weight ensures a long disentanglement time for the polymer, in order to create a stable network. We compare the network dynamics and the dynamics of the probe protein in the network at different mesh sizes. We study the dynamics at different q values between 0.03 and 0.22 Å⁻¹. The corresponding length scales range from distances smaller than the mesh size to larger than the mesh size.
Refinement: promoting the three Rs in practice.
Lloyd, M H; Foden, B W; Wolfensohn, S E
2008-07-01
Refinement of scientific procedures carried out on protected animals is an iterative process, which begins with a critical evaluation of practice. The process continues with objective assessment of the impact of the procedures, identification of areas for improvement, selection and implementation of an improvement strategy and evaluation of the results to determine whether there has been the desired effect, completing the refinement loop and resulting in the perpetuation of good practice. Refinements may be science-driven (those which facilitate getting high-quality results) or welfare-driven or may encompass both groups, but whatever the driver, refinements almost always result in benefits to both welfare and science. Refinements can be implemented in all aspects of animal use: improved methodology in invasive techniques, housing and husbandry, and even statistical analyses can all benefit animal welfare and scientific quality. If refinement is not actively sought, outdated and unnecessarily invasive techniques may not be replaced by better methods as they become available, and thus outdated information is passed down to the next generation, causing perpetuation of old-fashioned methods. This leads to a spiral of ignorance, leading ultimately to poor practice, poor animal welfare and poor-quality scientific data. Refinement is a legal and ethical requirement, yet refinements may not always be implemented. There are numerous obstacles to the implementation of refinement, which may be real or perceived. Either way, in order to take refinement forward, it is important to coordinate the approach to refinement, validate the science behind refinement, ensure there is adequate education and training in new techniques, improve liaison between users and make sure there is feedback on suitability of refinements for use. Overall, refinement requires a coordinated ongoing process of critical appraisal of practice and active scrutiny of resources for likely improvements. In
MESH NETWORK DEVELOPMENT PROJECT IN GREAT STONE INDUSTRY PARK
Directory of Open Access Journals (Sweden)
Pei Ping
2016-01-01
Full Text Available Wireless mesh networks (WMNs) are increasingly popular as low-cost alternatives to wired networks for providing broadband access to users. A wireless mesh network is a communications network made up of radio nodes organized in a mesh topology. Wireless mesh networks often consist of mesh clients, mesh routers and gateways. The mesh clients are often laptops, cell phones and other wireless devices, while the mesh routers forward traffic to and from the gateways, which may, but need not, be connected to the Internet. In this paper, we discuss the different radio-frequency ranges of wireless connections to access points (APs) and the Belarus-China Great Stone industry park mesh network project. The China-Belarus industrial park is a territorial entity with an area of approximately 80 sq. km and a special legal status providing comfortable conditions for conducting business. The park is located in a unique natural complex 25 km from Minsk, the capital of the Republic of Belarus, in close proximity to the international airport, railway lines, and the transnational Berlin-Moscow highway. The results of the analysis show the distribution of APs and service coverage in the Great Stone industry park. The mesh network provides robustness and load balancing in wireless network communication.
Automated knowledge-base refinement
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
Refining Visually Detected Object poses
DEFF Research Database (Denmark)
Holm, Preben; Petersen, Henrik Gordon
2010-01-01
Automated industrial assembly today requires that the 3D position and orientation (hereafter "pose") of the objects to be assembled are known precisely. Today this precision is mostly established by a dedicated mechanical object alignment system. However, such systems are often dedicated to the particular object, and in order to handle the demand for flexibility, there is an increasing demand for avoiding such dedicated mechanical alignment systems. Rather, it would be desirable to automatically locate and grasp randomly placed objects from tables, conveyor belts or even bins with an accuracy high enough to enable direct assembly. Conventional vision systems and laser triangulation systems can locate randomly placed known objects (with 3D CAD models available) with some accuracy, but not necessarily a good enough accuracy. In this paper, we present a novel method for refining the pose accuracy of an object.
Maddison, J. R.; Marshall, D. P.; Pain, C. C.; Piggott, M. D.
Accurate representation of geostrophic and hydrostatic balance is an essential requirement for numerical modelling of geophysical flows. Potentially, unstructured mesh numerical methods offer significant benefits over conventional structured meshes, including the ability to conform to arbitrary bounding topography in a natural manner and the ability to apply dynamic mesh adaptivity. However, there is a need to develop robust schemes with accurate representation of physical balance on arbitrary unstructured meshes. We discuss the origin of physical balance errors in a finite element discretisation of the Navier-Stokes equations using the fractional timestep pressure projection method. By considering the Helmholtz decomposition of forcing terms in the momentum equation, it is shown that the components of the buoyancy and Coriolis accelerations that project onto the non-divergent velocity tendency are the small residuals between two terms of comparable magnitude. Hence there is a potential for significant injection of imbalance by a numerical method that does not compute these residuals accurately. This observation is used to motivate a balanced pressure decomposition method whereby an additional "balanced pressure" field, associated with buoyancy and Coriolis accelerations, is solved for at increased accuracy and used to precondition the solution for the dynamical pressure. The utility of this approach is quantified in a fully non-linear system in exact geostrophic balance. The approach is further tested via quantitative comparison of unstructured mesh simulations of the thermally driven rotating annulus against laboratory data. Using a piecewise linear discretisation for velocity and pressure (a stabilised P1P1 discretisation), it is demonstrated that the balanced pressure decomposition method is required for a physically realistic representation of the system.
How MESSENGER Meshes Simulations and Games with Citizen Science
Hirshon, B.; Chapman, C. R.; Edmonds, J.; Goldstein, J.; Hallau, K. G.; Solomon, S. C.; Vanhala, H.; Weir, H. M.; Messenger Education; Public Outreach (Epo) Team
2010-12-01
In the film The Last Starfighter, an alien civilization grooms their future champion—a kid on Earth—using a video game. As he gains proficiency in the game, he masters the skills he needs to pilot a starship and save their civilization. The NASA MESSENGER Education and Public Outreach (EPO) Team is using the same tactic to train citizen scientists to help the Science Team explore the planet Mercury. We are building a new series of games that appear to be designed primarily for fun, but that guide players through a knowledge and skill set that they will need for future science missions in support of MESSENGER mission scientists. As players score points, they gain expertise. Once they achieve a sufficiently high score, they will be invited to become participants in Mercury Zoo, a new program being designed by Zooniverse. Zooniverse created Galaxy Zoo and Moon Zoo, programs that allow interested citizens to participate in the exploration and interpretation of galaxy and lunar data. Scientists use the citizen interpretations to further refine their exploration of the same data, thereby narrowing their focus and saving precious time. Mercury Zoo will be designed with input from the MESSENGER Science Team. This project will not only support the MESSENGER mission, but it will also add to the growing cadre of informed members of the public available to help with other citizen science projects—building on the concept that engaged, informed citizens can help scientists make new discoveries. The MESSENGER EPO Team comprises individuals from the American Association for the Advancement of Science (AAAS); Carnegie Academy for Science Education (CASE); Center for Educational Resources (CERES) at Montana State University (MSU) - Bozeman; National Center for Earth and Space Science Education (NCESSE); Johns Hopkins University Applied Physics Laboratory (JHU/APL); National Air and Space Museum (NASM); Science
Performance of FACTS equipment in Meshed systems
Energy Technology Data Exchange (ETDEWEB)
Lerch, E.; Povh, D. [Siemens AG, Berlin (Germany)
1994-12-31
Modern power electronic devices such as thyristors and GTOs have made it possible to design controllable network elements, which will play a considerable role in ensuring reliable economic operation of transmission systems as a result of their capability to rapidly change active and reactive power. A number of FACTS elements for high-speed active and reactive power control will be described. Control of power system fluctuations in meshed systems by modulation of active and reactive power will be demonstrated using a number of examples. (author) 7 refs., 11 figs.
MEDIT : An interactive Mesh visualization Software
Frey, Pascal
2001-01-01
This technical report describes the main features of MEDIT (this software was registered with the APP under no. IDDN.FR.001.410023.00.R.P.2001.000.10800 on January 25, 2001), an interactive mesh visualization tool developed in the Gamma project at INRIA-Rocquencourt. Based on the OpenGL graphics standard, this software has been specifically designed to fulfill most of the common requirements of engineers and numerical analysts in the context of numerical simulations. This program is rather intuitiv...
Corset neophallic musculoplasty with a mesh endoprosthesis
Directory of Open Access Journals (Sweden)
V. V. Mikhailichenko
2014-01-01
Full Text Available During thoracodorsal flap phalloplasty, recovered contractility of the muscular base of the neophallus may lead to its shortening, which impedes introjection. To eliminate deformity and shortening of the neophallus, the authors propose a corset plasty of its muscle, which differs from earlier techniques in that an alloplastic material, an Esfil mesh endoprosthesis, is used as the corset instead of the fascia lata of the thigh. The proposed procedure reduces surgical trauma, improves the functional characteristics of the neophallus, and accelerates sexual rehabilitation.
Performance Evaluation of Coded Meshed Networks
DEFF Research Database (Denmark)
Krigslund, Jeppe; Hansen, Jonas; Pedersen, Morten Videbæk
2013-01-01
We characterize the performance of intra- and inter-session network coding (NC) in wireless networks using real-life implementations. We compare this performance to a recently developed hybrid approach, called CORE, which combines intra- and inter-session NC, exploiting the code structure of the former to enhance the gains of the latter. We first motivate our work through measurements in WiFi mesh networks. Later, we compare state-of-the-art approaches, e.g., COPE, RLNC, to CORE. Our measurements show the higher reliability and throughput of CORE over other schemes, especially for asymmetric...
Unbiased Sampling and Meshing of Isosurfaces
Yan, Dongming
2014-05-07
In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x, y, z) = c, of a function F is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is to treat the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and to integrate/sample from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
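The core sampling step above, drawing area-uniform points from a surface viewed as a height function z = g(x, y) with bounded slope, can be sketched with rejection sampling: the surface-area element is sqrt(1 + g_x^2 + g_y^2), and the slope bound gives the rejection envelope. This is a generic sketch under those stated assumptions, not the paper's per-cell trilinear algorithm; the height field used is hypothetical.

```python
import math
import random

def sample_graph_surface(g, grad_g, n, slope_max=1.0):
    """Rejection-sample n points, uniform by surface area, on the graph
    z = g(x, y) over the unit square, assuming each partial derivative
    satisfies |g_x|, |g_y| <= slope_max.  The area element
    sqrt(1 + g_x^2 + g_y^2) is then bounded by sqrt(1 + 2*slope_max^2),
    which serves as the rejection envelope."""
    bound = math.sqrt(1.0 + 2.0 * slope_max ** 2)
    pts = []
    while len(pts) < n:
        x, y = random.random(), random.random()
        gx, gy = grad_g(x, y)
        w = math.sqrt(1.0 + gx * gx + gy * gy)  # local area element
        if random.random() * bound <= w:        # accept with prob. w/bound
            pts.append((x, y, g(x, y)))
    return pts

# Example: a gentle sinusoidal height field (slopes well below 1).
g = lambda x, y: 0.2 * math.sin(math.pi * x) * math.sin(math.pi * y)
grad = lambda x, y: (
    0.2 * math.pi * math.cos(math.pi * x) * math.sin(math.pi * y),
    0.2 * math.pi * math.sin(math.pi * x) * math.cos(math.pi * y))
samples = sample_graph_surface(g, grad, 100)
```

Restricting to regions of bounded slope is what keeps the acceptance rate high; the paper handles the steep directions by switching to one of the other two axis directions.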
Generating quality tetrahedral meshes from binary volumes
DEFF Research Database (Denmark)
Hansen, Mads Fogtmann; Bærentzen, Jakob Andreas; Larsen, Rasmus
2010-01-01
... use these measures to generate high-quality meshes from signed distance maps. This paper also describes an approach for computing (smooth) signed distance maps from binary volumes, as volumetric data in many cases originate from segmentation of objects by imaging techniques such as CT, MRI, etc. ... generation algorithm on four examples (torus, Stanford dragon, brain mask, and pig back) and report the dihedral angle, aspect ratio and radius-edge ratio. Even though the algorithm incorporates none of the mentioned quality measures in the compression stage, it receives a good score for all these measures ...
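One of the quality measures the abstract reports, the radius-edge ratio (circumradius divided by shortest edge), is easy to compute directly from a tetrahedron's vertices; the regular tetrahedron attains the optimum sqrt(6)/4 ≈ 0.612. The sketch below is a generic implementation of that standard measure, not code from the paper.

```python
import math

def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def radius_edge_ratio(a, b, c, d):
    """Circumradius / shortest-edge ratio of a tetrahedron (lower is
    better; the regular tetrahedron attains sqrt(6)/4 ~ 0.612)."""
    verts = [a, b, c, d]
    # The circumcenter x satisfies 2*(v - a) . x = |v|^2 - |a|^2
    # for each vertex v != a; solve the 3x3 system by Cramer's rule.
    rows, rhs = [], []
    for v in verts[1:]:
        rows.append([2.0 * (v[k] - a[k]) for k in range(3)])
        rhs.append(sum(v[k] ** 2 - a[k] ** 2 for k in range(3)))
    det = _det3(rows)
    center = []
    for col in range(3):
        m = [row[:] for row in rows]
        for i in range(3):
            m[i][col] = rhs[i]
        center.append(_det3(m) / det)
    circumradius = math.dist(center, a)
    edges = [math.dist(p, q) for i, p in enumerate(verts)
             for q in verts[i + 1:]]
    return circumradius / min(edges)

# Regular tetrahedron inscribed in a cube: ratio is sqrt(6)/4 ~ 0.6124.
reg = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(round(radius_edge_ratio(*reg), 4))  # -> 0.6124
```

A large radius-edge ratio flags slivers and other badly shaped tetrahedra, which is why it is a standard report alongside dihedral angles and aspect ratio.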
Adaptive wall treatment for a second moment closure in the industrial context
International Nuclear Information System (INIS)
Wald, Jean-Francois
2016-01-01
CFD computations of turbulent flows always begin with a complex meshing process (for example, the upper plenum or fuel assemblies in the nuclear industry). Geometrical constraints are the first to be satisfied (level of detail, important zones to refine according to user experience). One must, however, also satisfy constraints inherent to the RANS (Reynolds-Averaged Navier-Stokes) model used for the computation. For example, if a 'high-Reynolds' model (standard κ-ε, SSG, ...) is used, all wall cells should have a dimensionless distance to the wall greater than or equal to 20 to justify the use of the universal 'law of the wall'. On the other hand, if a 'low-Reynolds' model (BL-v²/k, EB-RSM, ...) is used, all wall cells should have a dimensionless distance to the wall below 1. If these models are used inappropriately, the results can be dramatic (computations can either diverge or give unphysical results). This thesis proposes the development of a new turbulence model with adaptive wall treatment that gives satisfactory results on all types of meshes. In particular, the model is able to cope with meshes containing both 'high-Reynolds' and 'low-Reynolds' wall cells. Given the complex flows encountered in the nuclear industry, this thesis builds on a model known for its good behavior: the EB-RSM model. This model is able to reproduce the anisotropy of turbulence and gives more satisfactory results than eddy-viscosity models in various configurations. It is available in Code Saturne, an open-source code developed at EDF, in which all the developments of this work are made. (author)
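The mesh-consistency constraint described above, y⁺ ≥ 20 for wall-function ('high-Reynolds') models and y⁺ ≤ 1 for near-wall ('low-Reynolds') models, with y⁺ = u_τ·y/ν, can be sketched as a simple per-cell check. The thresholds follow the text; the function names, data layout, and the example values are illustrative assumptions, not Code Saturne's API.

```python
# Sketch of the wall-cell consistency check described above: classify
# each wall cell by its dimensionless wall distance y+ and flag cells
# that violate the requirements of the chosen RANS wall treatment.
# Thresholds follow the text (y+ <= 1 low-Reynolds, y+ >= 20 wall law);
# names and structure are illustrative, not Code Saturne's.

def y_plus(u_tau, y, nu):
    """Dimensionless wall distance: y+ = u_tau * y / nu."""
    return u_tau * y / nu

def check_wall_cells(cells, model):
    """cells: list of (friction_velocity, wall_distance, kinematic_viscosity).
    model: 'high_re' (wall-function models, e.g. standard k-eps, SSG)
    or 'low_re' (near-wall models, e.g. BL-v2/k, EB-RSM).
    Returns (index, y+) for each cell violating the model's requirement."""
    bad = []
    for i, (u_tau, y, nu) in enumerate(cells):
        yp = y_plus(u_tau, y, nu)
        if model == 'high_re' and yp < 20.0:
            bad.append((i, yp))   # too close to the wall: law of the wall invalid
        elif model == 'low_re' and yp > 1.0:
            bad.append((i, yp))   # too far: viscous sublayer unresolved
    return bad

# Air-like viscosity; one near-wall cell and one coarse cell (hypothetical):
cells = [(0.05, 3e-4, 1.5e-5), (0.05, 1e-2, 1.5e-5)]
print(check_wall_cells(cells, 'high_re'))  # first cell flagged (y+ ~ 1)
print(check_wall_cells(cells, 'low_re'))   # second cell flagged (y+ ~ 33)
```

An adaptive wall treatment removes the need for such a binary choice by blending the two behaviors per cell, which is what the thesis develops for the EB-RSM model.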
Comparing Syntactic and Semantic Action Refinement
Goltz, Ursula; Gorrieri, Roberto; Rensink, Arend
The semantic definition of action refinement on labelled configuration structures is compared with the notion of syntactic substitution, which can be used as another notion of action refinement in a process algebraic setting. The comparison is done by studying a process algebra equipped with
On Syntactic and Semantic Action Refinement
Hagiya, M.; Goltz, U.; Mitchell, J.C.; Gorrieri, R.; Rensink, Arend
1994-01-01
The semantic definition of action refinement on labelled event structures is compared with the notion of syntactic substitution, which can be used as another notion of action refinement in a process algebraic setting. This is done by studying a process algebra equipped with the ACP sequential
Anomalies in the refinement of isoleucine
Energy Technology Data Exchange (ETDEWEB)
Berntsen, Karen R. M.; Vriend, Gert, E-mail: gerrit.vriend@radboudumc.nl [Radboud University Medical Center, Geert Grooteplein 26-28, 6525 GA Nijmegen (Netherlands)
2014-04-01
The side-chain torsion angles of isoleucines in X-ray protein structures are a function of resolution, secondary structure and refinement software. Detailing the standard torsion angles used in refinement software can improve protein structure refinement. A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ₁ and χ₂ dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ₁ and χ₂ values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved.
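The quantity the study analyzes, a side-chain torsion angle such as χ₁ (defined by the N-CA-CB-CG1 atoms in isoleucine), is the signed dihedral angle of four points. A minimal, generic implementation of that standard geometric definition is sketched below; it is not code from REFMAC, CNS, or PHENIX, and the test coordinates are synthetic.

```python
import math

def torsion_angle(p0, p1, p2, p3):
    """Signed torsion (dihedral) angle in degrees for four atom positions,
    e.g. N-CA-CB-CG1 defines chi1 of isoleucine.  Standard atan2
    formulation; generic sketch, not any refinement package's code."""
    def sub(a, b):   return [a[i] - b[i] for i in range(3)]
    def dot(a, b):   return sum(x * y for x, y in zip(a, b))
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]
    b0, b1, b2 = sub(p0, p1), sub(p2, p1), sub(p3, p2)
    n = math.sqrt(dot(b1, b1))
    b1 = [x / n for x in b1]                              # central bond axis
    v = [b0[i] - dot(b0, b1) * b1[i] for i in range(3)]   # b0 minus projection
    w = [b2[i] - dot(b2, b1) * b1[i] for i in range(3)]   # b2 minus projection
    return math.degrees(math.atan2(dot(cross(b1, v), w), dot(v, w)))

# Four synthetic points with a 90-degree twist about the central bond:
print(round(torsion_angle((0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 0, 1)), 1))
# -> 90.0
```

Rotamer classification then amounts to binning (χ₁, χ₂) pairs near the canonical staggered values, which is where the resolution- and software-dependent shifts reported above become visible.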
Refinement Checking on Parametric Modal Transition Systems
DEFF Research Database (Denmark)
Benes, Nikola; Kretínsky, Jan; Larsen, Kim Guldstrand
2015-01-01
Modal transition systems (MTS) is a well-studied specification formalism of reactive systems supporting a step-wise refinement methodology. Despite its many advantages, the formalism as well as its currently known extensions are incapable of expressing some practically needed aspects in the refin...
Refined large N duality for knots
DEFF Research Database (Denmark)
Kameyama, Masaya; Nawata, Satoshi
We formulate large N duality of U(N) refined Chern-Simons theory with a torus knot/link in S³. By studying refined BPS states in M-theory, we provide the explicit form of low-energy effective actions of Type IIA string theory with D4-branes on the Ω-background. This form enables us to relate...
Grain refinement of zinc-aluminium alloys
International Nuclear Information System (INIS)
Zaid, A.I.O.
2006-01-01
It is now well established that the structure of zinc-aluminium die-casting alloys can be modified by binary Al-Ti or ternary Al-Ti-B master alloys. In this paper, grain refinement of zinc-aluminium alloys by rare-earth materials is reviewed and discussed. The importance of grain refining of these alloys and the parameters affecting it are presented and discussed. These include parameters related to the Zn-Al alloy cast, parameters related to the grain-refining elements or alloys, and parameters related to the process. The effect of the addition of other alloying elements, e.g. Zr, either alone or in the presence of the main grain refiners Ti or Ti + B, on grain-refining efficiency is also reviewed and discussed. Furthermore, based on grain refinement and the parameters affecting it, a criterion for selecting the optimum grain refiner is suggested. Finally, recent research on the effect of grain refiners on the mechanical behaviour, impact strength, wear resistance, and fatigue life of these alloys is presented and discussed. (author)