Performance of a streaming mesh refinement algorithm.
Thompson, David C.; Pebay, Philippe Pierre
2004-08-01
In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. The traits of the algorithm themselves are discussed in detail in the full report.
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; WIssink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
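The flag-and-refine idea described above can be illustrated with a minimal one-dimensional sketch (illustrative only, not tied to any code mentioned in these abstracts): cells whose solution jump exceeds a tolerance are bisected, so resolution is spent only near the steep feature.

```python
import math

def refine_1d(x, u, tol):
    """One pass of local refinement on a 1-D grid: bisect every cell
    whose solution jump exceeds tol (a simple refinement indicator)."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if abs(u[i + 1] - u[i]) > tol:             # cell needs resolution
            new_x.append(0.5 * (x[i] + x[i + 1]))  # insert midpoint node
        new_x.append(x[i + 1])
    return new_x

# A steep front near x = 0.5 triggers refinement only in the two cells
# that straddle it; the smooth regions keep the coarse spacing.
x = [i / 10 for i in range(11)]
u = [math.tanh(50 * (xi - 0.5)) for xi in x]
fine = refine_1d(x, u, tol=0.5)
```

A real AMR driver would recurse over refinement levels and manage patch hierarchies; this one-pass indicator only shows why the coarse grid survives wherever the solution is smooth.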
Conformal refinement of unstructured quadrilateral meshes
Garimella, Rao [Los Alamos National Laboratory]
2009-01-01
We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie on an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package that applies the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. We also provide counts of lines of code for both implementations.
Streaming Compression of Hexahedral Meshes
Isenburg, M; Courbet, C
2010-02-03
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
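The finalization-tag mechanism described above can be sketched in a few lines (a toy model with invented event tuples, not the authors' coder): a vertex's data is released the moment its finalization tag arrives, so the set of live vertices, and hence memory use, stays small regardless of mesh size.

```python
def stream_cells(events):
    """Process an interleaved (vertex, cell, finalize) event stream,
    releasing each vertex's data as soon as it is finalized."""
    live, cells, peak = {}, [], 0
    for kind, payload in events:
        if kind == "vertex":
            vid, pos = payload
            live[vid] = pos                           # vertex becomes resident
        elif kind == "cell":
            cells.append([live[v] for v in payload])  # all refs still live
        elif kind == "finalize":
            del live[payload]                         # never referenced again
        peak = max(peak, len(live))
    return cells, peak

# Two 1-D "cells" sharing vertex 1; only two vertices are ever resident,
# even though the full stream mentions three.
events = [("vertex", (0, 0.0)), ("vertex", (1, 1.0)), ("cell", (0, 1)),
          ("finalize", 0), ("vertex", (2, 2.0)), ("cell", (1, 2)),
          ("finalize", 1), ("finalize", 2)]
cells, peak = stream_cells(events)
```

The same bookkeeping scales to hexahedra with eight vertex references per cell; the point is only that peak residency tracks the stream's coherence, not the mesh size.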
Mesh refinement strategy for optimal control problems
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most widely used technique for solving nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement. In this case, the mesh nodes have non-equidistant spacing, which allows non-uniform node collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve a car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using a time mesh having equidistant spacing.
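The refine-until-threshold loop can be sketched as follows (a hedged illustration that uses midpoint interpolation error as a stand-in for the local error; the paper's actual estimator comes from the optimal control solution itself):

```python
import math

def refine_time_mesh(f, t, tol, max_passes=10):
    """Bisect every subinterval whose local error estimate exceeds tol,
    and repeat until the whole time mesh satisfies the threshold."""
    for _ in range(max_passes):
        new_t, refined = [t[0]], False
        for a, b in zip(t, t[1:]):
            m = 0.5 * (a + b)
            # Local error proxy: midpoint interpolation error of f.
            if abs(f(m) - 0.5 * (f(a) + f(b))) > tol:
                new_t.append(m)
                refined = True
            new_t.append(b)
        t = new_t
        if not refined:          # every subinterval is below tol: done
            break
    return t

# exp grows fastest near t = 3, so refinement clusters nodes there.
mesh = refine_time_mesh(math.exp, [0.0, 1.0, 2.0, 3.0], tol=1e-2)
```

The returned mesh keeps the coarse spacing near t = 0 and repeatedly halves the step near t = 3, which is exactly the non-equidistant collocation the abstract describes.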
Adaptive mesh refinement for storm surge
Mandli, Kyle T.
2014-03-01
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run. © 2014 Elsevier Ltd.
Relativistic MHD with Adaptive Mesh Refinement
Anderson, M; Liebling, S L; Neilsen, D; Anderson, Matthew; Hirschmann, Eric; Liebling, Steven L.; Neilsen, David
2006-01-01
We solve the relativistic magnetohydrodynamics (MHD) equations using a finite difference Convex ENO method (CENO) in 3+1 dimensions within a distributed parallel adaptive mesh refinement (AMR) infrastructure. In flat space we examine a Balsara blast wave problem along with a spherical blast wave and a relativistic rotor test both with unigrid and AMR simulations. The AMR simulations substantially improve performance while reproducing the resolution equivalent unigrid simulation results. We also investigate the impact of hyperbolic divergence cleaning for the spherical blast wave and relativistic rotor. We include unigrid and mesh refinement parallel performance measurements for the spherical blast wave.
GRChombo: Numerical relativity with adaptive mesh refinement
Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran
2015-12-01
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
Parallel object-oriented adaptive mesh refinement
Balsara, D.; Quinlan, D.J.
1997-04-01
In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACx), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses two difficulties: expressing the basic single-grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly, and making these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single-grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
Relativistic MHD with adaptive mesh refinement
Anderson, Matthew [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States)]; Hirschmann, Eric W [Department of Physics and Astronomy, Brigham Young University, Provo, UT 84602 (United States)]; Liebling, Steven L [Department of Physics, Long Island University-C W Post Campus, Brookville, NY 11548 (United States)]; Neilsen, David [Department of Physics and Astronomy, Brigham Young University, Provo, UT 84602 (United States)]
2006-11-22
This paper presents a new computer code to solve the general relativistic magnetohydrodynamics (GRMHD) equations using distributed parallel adaptive mesh refinement (AMR). The fluid equations are solved using a finite difference convex ENO method (CENO) in 3 + 1 dimensions, and the AMR is Berger-Oliger. Hyperbolic divergence cleaning is used to control the ∇·B = 0 constraint. We present results from three flat space tests, and examine the accretion of a fluid onto a Schwarzschild black hole, reproducing the Michel solution. The AMR simulations substantially improve performance while reproducing the resolution equivalent unigrid simulation results. Finally, we discuss strong scaling results for parallel unigrid and AMR runs.
Adaptive Mesh Refinement for Characteristic Grids
Thornburg, Jonathan
2009-01-01
I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius & Lehner (J. Comp. Phys. 198 (2004), 10), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in 2-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and ti...
GRChombo : Numerical Relativity with Adaptive Mesh Refinement
Clough, Katy; Finkel, Hal; Kunesch, Markus; Lim, Eugene A; Tunyasuvunakool, Saran
2015-01-01
Numerical relativity has undergone a revolution in the past decade. With a well-understood mathematical formalism, and full control over the gauge modes, it is now entering an era in which the science can be properly explored. In this work, we introduce GRChombo, a new numerical relativity code written to take full advantage of modern parallel computing techniques. GRChombo's features include full adaptive mesh refinement with block structured Berger-Rigoutsos grid generation which supports non-trivial "many-boxes-in-many-boxes" meshing hierarchies, and massive parallelism through the Message Passing Interface (MPI). GRChombo evolves the Einstein equation with the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. We show that GRChombo passes all the standard "Apples-to-Apples" code comparison tests. We also show that it can stably and accurately evolve vacuum black hole spacetimes such as binary black hole mergers, and non-vacuum spacetimes such as scalar collapses into b...
Carpet: Adaptive Mesh Refinement for the Cactus Framework
Schnetter, Erik; Hawley, Scott; Hawke, Ian
2016-11-01
Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as a driver layer providing adaptive mesh refinement and multi-patch capability, as well as parallelization and efficient I/O.
Elliptic Solvers for Adaptive Mesh Refinement Grids
Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.
1999-06-03
We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.
Adaptive mesh refinement for stochastic reaction-diffusion processes
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2011-01-01
We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
Adaptive mesh refinement for shocks and material interfaces
Dai, William Wenlong [Los Alamos National Laboratory]
2010-01-01
There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats the mesh cell by cell and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve symmetries of physics problems. In this paper, we present an approach for patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.
Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling
Davis, B. N.; LeVeque, R. J.
2016-12-01
One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
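The adjoint-guided flagging idea above can be sketched with a toy example (arrays, names, and the threshold are illustrative, not GeoClaw's API): cells are flagged for refinement only where the pointwise product of the forward solution and the adjoint solution is significant, i.e. where the local solution actually influences the target area.

```python
import math

def flag_cells(q, q_adj, threshold):
    """Flag cell i for refinement only when the inner-product weight of
    forward solution q and adjoint solution q_adj is significant there."""
    return [abs(a * b) > threshold for a, b in zip(q, q_adj)]

n = 100
x = [i / (n - 1) for i in range(n)]
q = [math.exp(-200 * (xi - 0.3) ** 2) for xi in x]      # forward wave
q_adj = [math.exp(-200 * (xi - 0.7) ** 2) for xi in x]  # adjoint influence region
flags = flag_cells(q, q_adj, 1e-6)
# The wave near x = 0.3 does not overlap the influence region near x = 0.7,
# so no cell is flagged and no refinement is wasted on that wave.
```

A plain amplitude-based flag would refine around the wave wherever it is; weighting by the adjoint suppresses refinement of waves that never reach the target area, which is the cost saving the abstract reports.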
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Beckingsale, D. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)]; Gaudin, W. P. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)]; Hornung, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Gunney, B. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Gamblin, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Herdman, J. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)]; Jarvis, S. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)]
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
Toward parallel, adaptive mesh refinement for chemically reacting flow simulations
Devine, K.D.; Shadid, J.N.; Salinger, A.G.; Hutchinson, S.A. [Sandia National Labs., Albuquerque, NM (United States)]; Hennigan, G.L. [New Mexico State Univ., Las Cruces, NM (United States)]
1997-12-01
Adaptive numerical methods offer greater efficiency than traditional numerical methods by concentrating computational effort in regions of the problem domain where the solution is difficult to obtain. In this paper, the authors describe progress toward adding mesh refinement to MPSalsa, a computer program developed at Sandia National Laboratories to solve coupled three-dimensional fluid flow and detailed reaction chemistry systems for modeling chemically reacting flow on large-scale parallel computers. Data structures that support refinement and dynamic load-balancing are discussed. Results using uniform refinement with mesh sequencing to improve convergence to steady-state solutions are also presented. Three examples are presented: a lid-driven cavity, a thermal convection flow, and a tilted chemical vapor deposition reactor.
Evolutions in 3D numerical relativity using fixed mesh refinement
Schnetter, E; Hawke, I; Schnetter, Erik; Hawley, Scott H.; Hawke, Ian
2004-01-01
We present results of 3D numerical simulations using a finite difference code featuring fixed mesh refinement (FMR), in which a subset of the computational domain is refined in space and time. We apply this code to a series of test cases including a robust stability test, a nonlinear gauge wave and an excised Schwarzschild black hole in an evolving gauge. We find that the mesh refinement results are comparable in accuracy, stability and convergence to unigrid simulations with the same effective resolution. At the same time, the use of FMR reduces the computational resources needed to obtain a given accuracy. Particular care must be taken at the interfaces between coarse and fine grids to avoid a loss of convergence at high resolutions. This FMR system, "Carpet", is a driver module in the freely available Cactus computational infrastructure, and is able to endow existing Cactus simulation modules ("thorns") with FMR with little or no extra effort.
Streaming Compression of Tetrahedral Volume Meshes
Isenburg, M; Lindstrom, P; Gumhold, S; Shewchuk, J
2005-11-21
Geometry processing algorithms have traditionally assumed that the input data is entirely in main memory and available for random access. This assumption does not scale to large data sets, as exhausting the physical memory typically leads to IO-inefficient thrashing. Recent works advocate processing geometry in a 'streaming' manner, where computation and output begin as soon as possible. Streaming is suitable for tasks that require only local neighbor information and batch process an entire data set. We describe a streaming compression scheme for tetrahedral volume meshes that encodes vertices and tetrahedra in the order they are written. To keep the memory footprint low, the compressor is informed when vertices are referenced for the last time (i.e. are finalized). The compression achieved depends on how coherent the input order is and how many tetrahedra are buffered for local reordering. For reasonably coherent orderings and a buffer of 10,000 tetrahedra, we achieve compression rates that are only 25 to 40 percent above the state-of-the-art, while requiring drastically less memory resources and less than half the processing time.
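The bounded reordering buffer mentioned above can be sketched like so (an illustrative heap-based stand-in, not the authors' compressor): up to `cap` tetrahedra are withheld, and the one whose largest vertex index is smallest is emitted first, which tends to make the emitted order more coherent.

```python
import heapq

def reorder(tets, cap):
    """Delayed-write reorder buffer: hold up to `cap` tetrahedra and always
    emit the one with the smallest maximum vertex index first."""
    heap, out = [], []
    for tet in tets:
        heapq.heappush(heap, (max(tet), tet))
        if len(heap) > cap:                    # buffer full: emit one cell
            out.append(heapq.heappop(heap)[1])
    while heap:                                # flush the remaining buffer
        out.append(heapq.heappop(heap)[1])
    return out

# The tetrahedron referencing the late vertex 9 is deferred to the end,
# so the other cells stream out in a vertex-coherent order.
out = reorder([(0, 1, 2, 9), (0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)], cap=3)
```

A larger buffer improves coherence, and hence compression, at the cost of more memory, which is the trade-off the abstract quantifies with its 10,000-tetrahedron buffer.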
Scalable Video Streaming in Wireless Mesh Networks for Education
Liu, Yan; Wang, Xinheng; Zhao, Liqiang
2011-01-01
In this paper, a video streaming system for education based on a wireless mesh network is proposed. A wireless mesh network is a self-organizing, self-managing and reliable intelligent network, which allows educators to deploy a network quickly. Video streaming plays an important role in this system for multimedia data transmission. This new…
Enzo: An Adaptive Mesh Refinement Code for Astrophysics
The Enzo Collaboration; Bryan, Greg L.; Norman, Michael L.; O'Shea, Brian W.; Abel, Tom; Wise, John H.; Turk, Matthew J.; Reynolds, Daniel R.; Collins, David C.; Wang, Peng; Skillman, Samuel W.; Smith, Britton; Harkness, Robert P.; Bordner, James; Kim, Ji-Hoon
2013-01-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in 1, 2, and 3 dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically-thin radiative cooling of prim...
The Nonlinear Sigma Model With Distributed Adaptive Mesh Refinement
Liebling, S L
2004-01-01
An adaptive mesh refinement (AMR) scheme is implemented in a distributed environment using Message Passing Interface (MPI) to find solutions to the nonlinear sigma model. Previous work studied behavior similar to black hole critical phenomena at the threshold for singularity formation in this flat space model. This work is a follow-up describing extensions to distribute the grid hierarchy and presenting tests showing the correctness of the model.
Boxlib with tiling: an adaptive mesh refinement software framework
Unat, Didem; Zhang, W.; Almgren, A.; Day, M.; Nguyen, T.; Shalf, J.
2016-01-01
In this paper we introduce a block-structured adaptive mesh refinement software framework that incorporates tiling, a well-known loop transformation. Because the multiscale, multiphysics codes built in BoxLib are designed to solve complex systems at high resolution, performance on current and next generation architectures is essential. With the expectation of many more cores per node on next generation architectures, the ability to effectively utilize threads within a node is essential, and t...
Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries
Phillip, B.
2000-07-24
Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC on the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.
Block-Structured Adaptive Mesh Refinement Algorithms for Vlasov Simulation
Hittinger, J A F
2012-01-01
Direct discretization of continuum kinetic equations, like the Vlasov equation, are under-utilized because the distribution function generally exists in a high-dimensional (>3D) space and computational cost increases geometrically with dimension. We propose to use high-order finite-volume techniques with block-structured adaptive mesh refinement (AMR) to reduce the computational cost. The primary complication comes from a solution state comprised of variables of different dimensions. We develop the algorithms required to extend standard single-dimension block structured AMR to the multi-dimension case. Specifically, algorithms for reduction and injection operations that transfer data between mesh hierarchies of different dimensions are explained in detail. In addition, modifications to the basic AMR algorithm that enable the use of high-order spatial and temporal discretizations are discussed. Preliminary results for a standard 1D+1V Vlasov-Poisson test problem are presented. Results indicate that there is po...
Toward a Consistent Framework for High Order Mesh Refinement Schemes in Numerical Relativity
Mongwane, Bishop
2015-01-01
It has now become customary in the field of numerical relativity to couple high order finite difference schemes to mesh refinement algorithms. To this end, different modifications to the standard Berger-Oliger adaptive mesh refinement algorithm have been proposed. In this work we present a fourth order stable mesh refinement scheme with sub-cycling in time for numerical relativity. We do not use buffer zones to deal with refinement boundaries but explicitly specify boundary data for refined grids. We argue that the incompatibility of the standard mesh refinement algorithm with higher order Runge Kutta methods is a manifestation of order reduction phenomena, caused by inconsistent application of boundary data in the refined grids. Our scheme also addresses the problem of spurious reflections that are generated when propagating waves cross mesh refinement boundaries. We introduce a transition zone on refined levels within which the phase velocity of propagating modes is allowed to decelerate in order to smoothl...
Fully implicit adaptive mesh refinement algorithm for reduced MHD
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid --FAC-- algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)
3D level set methods for evolving fronts on tetrahedral meshes with adaptive mesh refinement
Morgan, Nathaniel R.; Waltz, Jacob I.
2017-05-01
The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton-Jacobi equations combined with a Runge-Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. The details of these level set and reinitialization methods are discussed. Results from a range of numerical test problems are presented.
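The front-evolution step described above can be illustrated with a minimal 1D analogue (this is not the authors' tetrahedral scheme; the names and the first-order upwind/forward-Euler discretization are illustrative simplifications):

```python
import numpy as np

# Minimal 1D analogue of level set advection: evolve phi with constant
# speed u using first-order upwind differencing and forward-Euler steps.
def advect_level_set(phi, u, dx, dt, steps):
    phi = phi.copy()
    for _ in range(steps):
        if u >= 0.0:
            dphi = (phi - np.roll(phi, 1)) / dx   # backward (upwind) difference
        else:
            dphi = (np.roll(phi, -1) - phi) / dx  # forward (upwind) difference
        phi -= dt * u * dphi
    return phi

# Signed-distance initial condition with the interface at x = 0.25.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
phi0 = x - 0.25
u = 1.0
dx = x[1] - x[0]
dt = 0.5 * dx / abs(u)                            # CFL number 0.5
phi = advect_level_set(phi0, u, dx, dt, steps=80)

# After t = 80 * dt = 0.2 the interface has moved from x = 0.25 to x = 0.45;
# away from the inflow boundary, upwinding is exact for this linear profile.
assert abs(phi[90]) < 1e-8                        # x[90] = 0.45
```

The paper's scheme generalizes this idea to Hamilton-Jacobi equations on unstructured tetrahedra with Runge-Kutta time integration and periodic reinitialization to a signed distance function.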
ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS
Bryan, Greg L.; Turk, Matthew J. [Columbia University, Department of Astronomy, New York, NY 10025 (United States); Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G. [CASS, University of California, San Diego, 9500 Gilman Drive La Jolla, CA 92093-0424 (United States); O' Shea, Brian W.; Smith, Britton [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Abel, Tom; Wang, Peng; Skillman, Samuel W. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Menlo Park, CA 94025 (United States); Wise, John H. [Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, 837 State Street, Atlanta, GA (United States); Reynolds, Daniel R. [Department of Mathematics, Southern Methodist University, Box 750156, Dallas, TX 75205-0156 (United States); Collins, David C. [Department of Physics, Florida State University, Tallahassee, FL 32306 (United States); Harkness, Robert P. [NICS, Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak Ridge, TN 37831 (United States); Kim, Ji-hoon [Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Kuhlen, Michael [Theoretical Astrophysics Center, University of California Berkeley, Hearst Field Annex, Berkeley, CA 94720 (United States); Goldbaum, Nathan [Institute for Astronomy, University of Edinburgh, Edinburgh EH9 3HJ (United Kingdom); Hummels, Cameron [Department of Astronomy/Steward Observatory, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States); Tasker, Elizabeth [Physics Department, Faculty of Science, Hokkaido University, Kita-10 Nishi 8, Kita-ku, Sapporo 060-0810 (Japan); Collaboration: Enzo Collaboration; and others
2014-04-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
Local mesh refinement for incompressible fluid flow with free surfaces
Terasaka, H.; Kajiwara, H.; Ogura, K. [Tokyo Electric Power Company (Japan)] [and others]
1995-09-01
A new local mesh refinement (LMR) technique has been developed and applied to incompressible fluid flows with free-surface boundaries. The LMR method embeds patches of fine grid in arbitrary regions of interest. Hence, more accurate solutions can be obtained with a smaller number of computational cells. This method is very suitable for the simulation of free-surface movements because free-surface flow problems generally require a finer computational grid to obtain adequate results. By using this technique, one can place finer grids only near the surfaces, and therefore greatly reduce the total number of cells and the computational cost. This paper introduces LMR3D, a three-dimensional incompressible flow analysis code. Numerical examples calculated with the code clearly demonstrate the advantages of the LMR method.
Block-structured Adaptive Mesh Refinement - Theory, Implementation and Application
Deiterding, Ralf
2011-12-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed-memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included, and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Enzo: An Adaptive Mesh Refinement Code for Astrophysics
Bryan, Greg L; O'Shea, Brian W; Abel, Tom; Wise, John H; Turk, Matthew J; Reynolds, Daniel R; Collins, David C; Wang, Peng; Skillman, Samuel W; Smith, Britton; Harkness, Robert P; Bordner, James; Kim, Ji-hoon; Kuhlen, Michael; Xu, Hao; Goldbaum, Nathan; Hummels, Cameron; Kritsuk, Alexei G; Tasker, Elizabeth; Skory, Stephen; Simpson, Christine M; Hahn, Oliver; Oishi, Jeffrey S; So, Geoffrey C; Zhao, Fen; Cen, Renyue; Li, Yuan
2013-01-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in 1, 2, and 3 dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically-thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
Hydrodynamical Adaptive Mesh Refinement Simulations of Disk Galaxies
Gibson, Brad K; Sanchez-Blazquez, Patricia; Teyssier, Romain; House, Elisa L; Brook, Chris B; Kawata, Daisuke
2008-01-01
To date, fully cosmological hydrodynamic disk simulations to redshift zero have only been undertaken with particle-based codes, such as GADGET, Gasoline, or GCD+. In light of the (supposed) limitations of traditional implementations of smoothed particle hydrodynamics (SPH), or at the very least their respective idiosyncrasies, it is important to explore approaches to galaxy formation that are complementary to the SPH paradigm. We present the first high-resolution cosmological disk simulations to redshift zero using an adaptive mesh refinement (AMR)-based hydrodynamical code, in this case RAMSES. We analyse the temporal and spatial evolution of the simulated stellar disks' vertical heating, velocity ellipsoids, stellar populations, vertical and radial abundance gradients (gas and stars), assembly/infall histories, warps/lopsidedness, disk edges/truncations (gas and stars), and ISM physics implementations, and compare and contrast these properties with our sample of cosmological SPH disks, generated with GCD+. These prelim...
Production-quality Tools for Adaptive Mesh Refinement Visualization
Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith, Jeremy; Miller, Mark; Whitlock, Brad; Bethel, E. Wes
2007-10-25
Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open-source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and who develop their own AMR visual data analysis software to realize cost and labor savings.
On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach
Liu, Zheng; Xue, Kaiping; Hong, Peilin
The peer-assisted streaming paradigm has been widely employed to distribute live video data over the Internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, caused by the handshaking process of advertising buffer-map messages, sending request messages, and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy, a min-cost flow model is employed to derive the optimal scheduling for the push peer, and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation; the results show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.
Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments
Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.
2015-12-01
Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner, allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to) fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, in particular reactive transport modeling, AMR may be particularly useful in accurately capturing concentration gradients (hence, reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring AMR capabilities to bear on reactive transport problems from the pore scale to the flood plain scale.
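The flag-and-refine idea this abstract describes can be sketched in one dimension (all names below are my own, not from the cited codes): cells whose concentration gradient exceeds a threshold are split, concentrating resolution where a sharp reaction front develops.

```python
import numpy as np

# Illustrative sketch: one level of 1D refinement that flags cells with a
# steep concentration gradient and halves their width, mimicking how AMR
# concentrates resolution around reaction fronts.
def refine_where_steep(x, c, threshold):
    """Return refined cell centers: each flagged cell is split in two."""
    grad = np.abs(np.gradient(c, x))
    h = x[1] - x[0]                       # uniform parent spacing
    refined = []
    for xi, gi in zip(x, grad):
        if gi > threshold:
            refined.extend([xi - h / 4.0, xi + h / 4.0])  # two children
        else:
            refined.append(xi)
    return np.array(refined)

# A sharp sigmoid front at x = 0.5 stands in for a reaction front.
x = np.linspace(0.0, 1.0, 21)
c = 1.0 / (1.0 + np.exp(-80.0 * (x - 0.5)))
xr = refine_where_steep(x, c, threshold=4.0)

assert len(xr) > len(x)                   # cells were added...
extra = np.setdiff1d(xr, x)               # ...and only near the front
assert np.all(np.abs(extra - 0.5) < 0.1)
```

Production AMR codes do this hierarchically over multiple levels, in multiple dimensions, and re-flag as the front moves; the sketch shows only a single static refinement pass.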
Parallel adaptive mesh refinement techniques for plasticity problems
Barry, W.J. [Carnegie Mellon Univ., Pittsburgh, PA (United States). Dept. of Civil and Environmental Engineering]; Jones, M.T. [Virginia Polytechnic Institute and State Univ., Blacksburg, VA (United States). Dept. of Electrical and Computer Engineering]; Plassmann, P.E. [Argonne National Lab., IL (United States)]
1997-12-31
The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way of solving such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. We explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.
A Parallel Algorithm for Adaptive Local Refinement of Tetrahedral Meshes Using Bisection
LinBo Zhang
2009-01-01
Local mesh refinement is one of the key steps in the implementation of adaptive finite element methods. This paper presents a parallel algorithm for distributed-memory parallel computers for adaptive local refinement of tetrahedral meshes using bisection. This algorithm is used in PHG, Parallel Hierarchical Grid (http://lsec.cc.ac.cn/phg/), a toolbox under active development for parallel adaptive finite element solutions of partial differential equations. The algorithm proposed is characterized by allowing simultaneous refinement of submeshes to arbitrary levels before synchronization between submeshes, and without the need of a central coordinator process for managing new vertices. Using the concept of canonical refinement, a simple proof of the independence of the resulting mesh from the mesh partitioning is given, which is useful for better understanding the behaviour of the bisection refinement procedure. AMS subject classifications: 65Y05, 65N50
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Zhang, Wei-Qun (KIPAC, Menlo Park); MacFadyen, Andrew I. (Princeton, Inst. Advanced Study)
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO, they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two, and three dimensions and in Cartesian, cylindrical, and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
Mesh Generation via Local Bisection Refinement of Triangulated Grids
2015-06-01
[Maubach 1995, Theorem 5.1]. This is exploited in the recursive algorithm RefineTriangle (Algorithm 2) to compatibly refine a given triangle; the recursion depth of RefineTriangle is bounded by the maximum level of refinement in T [Maubach 1995]. RefineTriangle calls itself repeatedly on a sequence of triangles until a compatibly divisible triangle is found. This sequence of triangles is then bisected in reverse order to preserve
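The recursion in this excerpt (follow neighbors until a compatibly divisible triangle is found, then bisect back down the chain) can be sketched as follows. For brevity this uses Rivara-style longest-edge bisection rather than Maubach's newest-vertex scheme; all class and method names are illustrative, not from the cited work.

```python
import numpy as np

# Hedged sketch of recursive compatible bisection on a 2D triangle mesh.
class Mesh:
    def __init__(self, verts, tris):
        self.verts = [tuple(v) for v in verts]   # vertex coordinates
        self.tris = [tuple(t) for t in tris]     # vertex-index triples

    def longest_edge(self, t):
        a, b, c = t
        return max([(a, b), (b, c), (c, a)],
                   key=lambda e: np.linalg.norm(
                       np.subtract(self.verts[e[0]], self.verts[e[1]])))

    def neighbor_across(self, t, edge):
        for u in self.tris:
            if u != t and set(edge) <= set(u):
                return u
        return None                              # boundary edge

    def bisect_pair(self, t, edge):
        """Insert the midpoint of edge and split the triangles sharing it."""
        m = len(self.verts)
        mid = tuple((np.asarray(self.verts[edge[0]]) +
                     np.asarray(self.verts[edge[1]])) / 2.0)
        self.verts.append(mid)
        for u in (t, self.neighbor_across(t, edge)):
            if u is None:
                continue
            self.tris.remove(u)
            apex = next(v for v in u if v not in edge)
            self.tris += [(edge[0], m, apex), (m, edge[1], apex)]

    def refine(self, t):
        """Recursively refine incompatible neighbors, then bisect t."""
        edge = self.longest_edge(t)
        nb = self.neighbor_across(t, edge)
        while nb is not None and set(self.longest_edge(nb)) != set(edge):
            self.refine(nb)                      # make the neighbor compatible
            nb = self.neighbor_across(t, edge)
        self.bisect_pair(t, edge)

# Unit square split along its diagonal: bisecting one triangle also splits
# the compatible neighbor, yielding a conforming 4-triangle mesh.
mesh = Mesh([(0, 0), (1, 0), (1, 1), (0, 1)], [(0, 1, 2), (0, 2, 3)])
mesh.refine((0, 1, 2))
assert len(mesh.tris) == 4 and len(mesh.verts) == 5
```

The key structural point matches the excerpt: a triangle is only bisected once its neighbor across the refinement edge is compatibly divisible, and the recursion enforces that invariant before any split happens.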
Fakhari, Abbas; Lee, Taehun
2014-03-01
An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.
A REGIONAL REFINEMENT FOR FINITE ELEMENT MESH DESIGN USING COLLAPSIBLE ELEMENT
Priyo Suprobo
2000-01-01
A practical algorithm for automated mesh design in finite element analysis is developed. A regional mixed mesh improvement procedure is introduced. The error control, algorithm implementation, code development, and solution accuracy are discussed. Numerical examples include automated mesh designs for plane elastic media with singularities. The efficiency of the procedure is demonstrated. Keywords: regional refinement, mesh generation, isoparametric element, collapsible element.
A unified framework for mesh refinement in random and physical space
Li, Jing; Stinis, Panos
2016-10-01
In recent work we have shown how an accurate reduced model can be utilized to perform mesh refinement in random space. That work relied on the explicit knowledge of an accurate reduced model which is used to monitor the transfer of activity from the large to the small scales of the solution. Since this is not always available, we present in the current work a framework which shares the merits and basic idea of the previous approach but does not require an explicit knowledge of a reduced model. Moreover, the current framework can be applied for refinement in both random and physical space. In this manuscript we focus on the application to random space mesh refinement. We study examples of increasing difficulty (from ordinary to partial differential equations) which demonstrate the efficiency and versatility of our approach. We also provide some results from the application of the new framework to physical space mesh refinement.
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or the location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservation properties of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable
Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries
Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)
2002-01-01
This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three-dimensional examples, using meshes with 10^6 to 10^7 cells.
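The tau-style estimate this abstract describes (evaluate the coarse-grid residual of the restricted fine-grid solution) can be sketched for the 1D Poisson equation. The setting and all names below are illustrative, not the paper's Cartesian solver, and the exact solution stands in for a converged fine-grid solution.

```python
import numpy as np

# Sketch of a tau-style truncation-error estimate: the residual of the
# coarse operator, applied to the restriction of a fine (here: exact)
# solution, approximates the coarse grid's local truncation error.
def coarse_tau(u_coarse, f_coarse, h):
    """Residual of the standard 3-point Laplacian: L_2h(I u_h) - f."""
    lap = (u_coarse[:-2] - 2.0 * u_coarse[1:-1] + u_coarse[2:]) / h**2
    return lap - f_coarse[1:-1]

n = 64                                   # coarse cells
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
u = np.sin(np.pi * x)                    # stand-in for restricted fine solution
f = -np.pi**2 * np.sin(np.pi * x)        # so that u'' = f exactly

tau = coarse_tau(u, f, h)
# Taylor expansion: tau ~ (h^2 / 12) * u'''' = (h^2 / 12) * pi^4 * sin(pi x).
expected = (h**2 / 12.0) * np.pi**4 * np.sin(np.pi * x[1:-1])
assert np.allclose(tau, expected, rtol=0.05)
```

Cells where |tau| is large would then be flagged for h-refinement; the paper's equidistribution strategy decides how many such cells to refine at each adaptation pass.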
Error resilient concurrent video streaming over wireless mesh networks
LI Dan-jue; ZHANG Qian; CHUAH Chen-nee; YOO Ben S.J.
2006-01-01
In this paper, we propose a multi-source multi-path video streaming system for supporting high quality concurrent video-on-demand (VoD) services over wireless mesh networks (WMNs), and leverage forward error correction to enhance the error resilience of the system. By taking wireless interference into consideration, we present a more realistic networking model to capture the characteristics of WMNs and then design a route selection scheme using a joint rate/interference-distortion optimization framework to help the system optimally select concurrent streaming paths. We mathematically formulate such a route selection problem, and solve it heuristically using genetic algorithm. Simulation results demonstrate the effectiveness of our proposed scheme.
Improvement of neutronic calculations on a Masurca core using adaptive mesh refinement capabilities
Fournier, D.; Archier, P.; Le Tellier, R.; Suteau, C. [CEA, DEN, DER/SPRC/LEPh, Cadarache, Saint Paul-lez-Durance (France)]
2011-07-01
The simulation of 3D cores with homogenized assemblies in transport theory remains time and memory consuming for production calculations. With a multigroup discretization for the energy variable and a discrete ordinates method for the angle, a system of about 10^4 coupled hyperbolic transport equations has to be solved. For these equations, we intend to optimize the spatial discretization. In the framework of the SNATCH solver used in this study, the spatial problem is dealt with by using a structured hexahedral mesh and applying a Discontinuous Galerkin Finite Element Method (DGFEM). This paper shows the improvements due to the development of Adaptive Mesh Refinement (AMR) methods. As the SNATCH solver uses a hierarchical polynomial basis, p-refinement is possible, but also h-refinement thanks to non-conforming capabilities. Besides, as the spatial behavior of the flux is highly dependent on the energy, we propose to adapt the spatial discretization differently according to the energy group. To avoid dealing with too many meshes, some energy groups are joined and share the same mesh. The different energy-dependent AMR strategies are compared to each other and also with the classical approach of a conforming and highly refined spatial mesh. This comparison is carried out on different quantities such as the multiplication factor, the flux, and the current. The gain in time and memory is shown for 2D and 3D benchmarks coming from the ZONA2B experimental core configuration of the MASURCA mock-up at CEA Cadarache. (author)
Dynamic mesh refinement for discrete models of jet electro-hydrodynamics
Lauricella, Marco; Pisignano, Dario; Succi, Sauro
2015-01-01
Nowadays, several models of unidimensional fluid jets exploit discrete element methods. In some cases, as for models aiming at describing the electrospinning nanofabrication process of polymer fibers, discrete element methods suffer from a non-constant resolution of the jet representation. We develop a dynamic mesh-refinement method for the numerical study of the electro-hydrodynamic behavior of charged jets using discrete element methods. To this purpose, we import ideas and techniques from the string method originally developed in the framework of free-energy landscape simulations. The mesh-refined discrete element method is demonstrated for the case of electrospinning applications.
h-Refinement for simple corner balance scheme of SN transport equation on distorted meshes
Yang, Rong; Yuan, Guangwei
2016-11-01
The transport sweep algorithm is a common method for solving the discrete ordinates transport equation, but it breaks down once a concave cell appears in the spatial mesh. To deal with this issue, a local h-refinement for the simple corner balance (SCB) scheme of the SN transport equation on arbitrary quadrilateral meshes is presented in this paper by using a new subcell partition. It follows that a hybrid mesh with both triangle and quadrilateral cells is generated, and the geometric quality of these cells improves; in particular, it is ensured that all cells become convex. Combining with the original SCB scheme, an adaptive transfer algorithm based on the hybrid mesh is constructed. Numerical experiments are presented to verify the utility and accuracy of the new algorithm, especially for some application problems such as radiation transport coupled with Lagrangian hydrodynamic flow. The results show that it performs well on extremely distorted meshes with concave cells, on which the original SCB scheme does not work.
Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.
2006-01-01
Block-structured adaptive mesh refinement (SAMR) strives for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters, and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations, and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one- and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand
2016-12-01
A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.
Hornung, R.D. [Duke Univ., Durham, NC (United States)]
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
A Hybrid Advection Scheme for Conserving Angular Momentum on a Refined Cartesian Mesh
Byerly, Zachary D; Tohline, Joel E; Marcello, Dominic C
2014-01-01
We test a new "hybrid" scheme for simulating dynamical fluid flows in which cylindrical components of the momentum are advected across a rotating Cartesian coordinate mesh. This hybrid scheme allows us to conserve angular momentum to machine precision while capitalizing on the advantages offered by a Cartesian mesh, such as a straightforward implementation of mesh refinement. Our test focuses on measuring the real and imaginary parts of the eigenfrequency of unstable axisymmetric modes that naturally arise in massless polytropic tori having a range of different aspect ratios, and quantifying the uncertainty in these measurements. Our measured eigenfrequencies show good agreement with the results obtained from the linear stability analysis of Kojima (1986) and from nonlinear hydrodynamic simulations performed on a cylindrical coordinate mesh by Woodward et al. (1994). When compared with results obtained using a traditional Cartesian advection scheme, the hybrid scheme achieves qualitative convergence at the...
Adaptive mesh refinement with spectral accuracy for magnetohydrodynamics in two space dimensions
Rosenberg, D; Pouquet, A
2007-01-01
We examine the effect of accuracy of high-order spectral element methods, with or without adaptive mesh refinement (AMR), in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang vortex made up of a magnetic X-point centered on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit, and uses the Elsasser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere in the fluid context [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)]; the code allows both statically refined and dynamically refined grids. Tests of the algorithm using analytic solutions are described, and comparisons of the Orszag-Tang solutions with pseudo-spectral computations are performed. We demonstrate for moderate Reynolds numbers th...
A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides
de la Asunción, Marc; Castro, Manuel J.
2016-04-01
In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.
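The variation-based refinement criterion described above can be illustrated with a minimal sketch (not the authors' CUDA code; the 1D setting, function names, and tolerance are assumptions for illustration): flag cells whose value differs too much from a neighbour, then cover the flagged region with a buffered bounding patch, as patch-based AMR schemes typically do.

```python
# Illustrative sketch of a patch-based refinement criterion: flag cells where
# the variation of the cell value relative to a neighbour exceeds a tolerance,
# then cover the flagged cells with a bounding patch plus a buffer zone.
# Hypothetical names and 1D setting; real patch-based AMR works in 2D/3D.

def flag_cells(q, tol):
    """Flag 1D cell i when |q[i+1] - q[i]| > tol."""
    return [i for i in range(len(q) - 1) if abs(q[i + 1] - q[i]) > tol]

def bounding_patch(flags, buffer, n):
    """Smallest index range covering the flagged cells, plus a buffer."""
    lo = max(0, min(flags) - buffer)
    hi = min(n - 1, max(flags) + buffer)
    return lo, hi

q = [1.0] * 6 + [5.0] * 6           # a jump between cells 5 and 6
flags = flag_cells(q, tol=0.5)
print(flags)                         # [5]
print(bounding_patch(flags, buffer=2, n=len(q)))  # (3, 7)
```

Only the patch (here cells 3 to 7) would be refined, leaving the smooth regions at the coarse resolution.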
Application of adaptive mesh refinement to particle-in-cell simulations of plasmas and beams
Vay, J.-L.; Colella, P.; Kwan, J.W.; McCorquodale, P.; Serafini, D.B.; Friedman, A.; Grote, D.P.; Westenskow, G.; Adam, J.-C.; Heron, A.; Haber, I.
2003-11-04
Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation domain, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations, and present examples of application in Heavy Ion Fusion and related fields which illustrate the effectiveness of the approach. We also report on the status of a collaboration under way at Lawrence Berkeley National Laboratory between the Applied Numerical Algorithms Group (ANAG) and the Heavy Ion Fusion group to upgrade ANAG's mesh refinement library Chombo to include the tools needed by Particle-In-Cell simulation codes.
A Field-length based refinement criterion for adaptive mesh simulations of the interstellar medium
Gressel, Oliver
2009-01-01
Adequate modelling of the multiphase interstellar medium requires optically thin radiative cooling, comprising an inherent thermal instability. The size of the occurring condensation and evaporation interfaces is determined by the so-called Field-length, which gives the dimension at which the instability is significantly damped by thermal conduction. Our aim is to study the relevance of conduction scale effects in the numerical modelling of a bistable medium and check the applicability of conventional and alternative adaptive mesh techniques. The low physical value of the thermal conduction within the ISM defines a multiscale problem, hence promoting the use of adaptive meshes. Here we introduce a new refinement strategy that applies the Field condition by Koyama & Inutsuka as a refinement criterion. The described method is very similar to the Jeans criterion for gravitational instability by Truelove and efficiently allows tracing of the unstable gas situated at the thermal interfaces. We present test compu...
Relativistic Vlasov-Maxwell modelling using finite volumes and adaptive mesh refinement
Wettervik, Benjamin Svedung; Siminos, Evangelos; Fülöp, Tünde
2016-01-01
The dynamics of collisionless plasmas can be modelled by the Vlasov-Maxwell system of equations. An Eulerian approach is needed to accurately describe processes that are governed by high-energy tails in the distribution function, but is of limited efficiency for high-dimensional problems. The use of an adaptive mesh can reduce the scaling of the computational cost with the dimension of the problem. Here, we present a relativistic Eulerian Vlasov-Maxwell solver with block-structured adaptive mesh refinement in one spatial and one momentum dimension. The discretization of the Vlasov equation is based on a high-order finite volume method. A flux-corrected transport algorithm is applied to limit spurious oscillations and ensure the physical character of the distribution function. We demonstrate a speed-up by a factor of five, owing to the use of an adaptive mesh, in a typical scenario involving laser-plasma interaction in the self-induced transparency regime.
Implementations of mesh refinement schemes for particle-in-cell plasma simulations
Vay, J.-L.; Colella, P.; Friedman, A.; Grote, D.P.; McCorquodale, P.; Serafini, D.B.
2003-10-20
Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations and present two implementations in more detail, with examples.
Leng, Wei; Zhong, Shijie
2011-01-01
Numerical modeling of mantle convection is challenging. Owing to the multiscale nature of mantle dynamics, high resolution is often required in localized regions, with coarser resolution being sufficient elsewhere. When investigating thermochemical mantle convection, high resolution is required to resolve sharp and often discontinuous boundaries between distinct chemical components. In this paper, we present a 2-D finite element code with adaptive mesh refinement techniques for si...
1995-01-01
The generation of suitable fine mesh divisions is essential to obtain two-dimensional electric field analysis solutions with the desired accuracy. This process, however, requires considerable technical knowledge and experience. To solve this kind of problem, adaptive methods prove effective. In electric field problems, for example, researchers are usually interested in the values of the electric field intensity and its distribution. In this paper, we have developed an h-adaptive refinement procedure...
Error estimation and adaptive mesh refinement for parallel analysis of shell structures
Keating, Scott C.; Felippa, Carlos A.; Park, K. C.
1994-01-01
The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.
Greg L. Bryan
2002-01-01
As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10^12 in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.
A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction
Herrnstein, Aaron R. [Univ. of California, Davis, CA (United States)
2005-12-01
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No
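The "time integration of coupled systems with unequal time steps" mentioned above is the standard subcycling idea in AMR: a level refined in space by a ratio r also takes r smaller time steps per coarse step, so all levels arrive at the same time. The sketch below is schematic only (the recursion, names, and the ratio r = 2 are assumptions, not the LLNL scheme):

```python
# Schematic of time refinement with unequal time steps ("subcycling"): each
# finer level takes r steps of size dt/r for every coarse step of size dt,
# recursively, so all refinement levels reach the same physical time.

def advance(level, t, dt, max_level, r=2, log=None):
    """Advance one level by dt, recursively subcycling finer levels."""
    if log is None:
        log = []
    log.append((level, t, dt))        # record (level, start time, step size)
    if level < max_level:
        for k in range(r):
            advance(level + 1, t + k * dt / r, dt / r, max_level, r, log)
    return log

steps = advance(level=0, t=0.0, dt=1.0, max_level=2)
print(len(steps))  # 1 coarse + 2 medium + 4 fine = 7 steps
```

With two levels of refinement, one coarse step triggers two steps on level 1 and four on level 2, after which coarse and fine solutions are synchronized.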
Nicholas, Paul; Stasiuk, David; Nørgaard, Esben
2015-01-01
and material. Adaptive mesh refinement is used to support localised variance in resolution and information flow across these scales. The adaptation of mesh resolution is linked to structural analysis, panelisation, local geometric formation, connectivity, and the calculation of forming strains and material...
ADER-WENO finite volume schemes with space-time adaptive mesh refinement
Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.
2013-09-01
We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order of accuracy in space and time has been confirmed via a numerical convergence study, and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme, which combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions, is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.
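The 'cell-by-cell' tree-type refinement mentioned above can be sketched in miniature (a hypothetical 1D illustration, not the ADER-WENO code): each leaf cell decides independently whether to split, based on an indicator, up to a maximum level.

```python
# Sketch of cell-by-cell tree-type AMR, simplified to 1D: a cell refines
# when a gradient-like indicator exceeds a threshold, recursively, so the
# tree deepens only around sharp features. Names and indicator are assumed.

class Cell:
    def __init__(self, x0, x1, level):
        self.x0, self.x1, self.level = x0, x1, level
        self.children = None

    def refine(self, indicator, threshold, max_level):
        if self.level < max_level and indicator(self.x0, self.x1) > threshold:
            xm = 0.5 * (self.x0 + self.x1)
            self.children = [Cell(self.x0, xm, self.level + 1),
                             Cell(xm, self.x1, self.level + 1)]
            for c in self.children:
                c.refine(indicator, threshold, max_level)

    def leaves(self):
        if self.children is None:
            return [self]
        return [l for c in self.children for l in c.leaves()]

def step_indicator(x0, x1):
    # jump in a step profile u(x) = 0 for x < 0.3, 1 for x >= 0.3
    u = lambda x: 0.0 if x < 0.3 else 1.0
    return abs(u(x1) - u(x0))

root = Cell(0.0, 1.0, 0)
root.refine(step_indicator, 0.5, max_level=6)
leaves = sorted(root.leaves(), key=lambda c: c.x0)
print(len(leaves))                       # refinement clusters near x = 0.3
print(min(c.x1 - c.x0 for c in leaves))  # finest cell width is 2**-6
```

The leaves form a non-uniform mesh whose smallest cells sit at the discontinuity, which is the essential behaviour the tree-type AMR strategy exploits.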
A high order special relativistic hydrodynamic code with space-time adaptive mesh refinement
Zanotti, Olindo
2013-01-01
We present a high order one-step ADER-WENO finite volume scheme with space-time adaptive mesh refinement (AMR) for the solution of the special relativistic hydrodynamics equations. By adopting a local discontinuous Galerkin predictor method, a high order one-step time discretization is obtained, with no need for Runge-Kutta sub-steps. This turns out to be particularly advantageous in combination with space-time adaptive mesh refinement, which has been implemented following a "cell-by-cell" approach. As in existing second order AMR methods, also the present higher order AMR algorithm features time-accurate local time stepping (LTS), where grids on different spatial refinement levels are allowed to use different time steps. We also compare two different Riemann solvers for the computation of the numerical fluxes at the cell interfaces. The new scheme has been validated over a sample of numerical test problems in one, two and three spatial dimensions, exploring its ability in resolving the propagation of relativ...
Adaptive mesh refinement with spectral accuracy for magnetohydrodynamics in two space dimensions
Rosenberg, D.; Pouquet, A.; Mininni, P. D.
2007-08-01
We examine the effect of accuracy of high-order spectral element methods, with or without adaptive mesh refinement (AMR), in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang (OT) vortex made up of a magnetic X-point centred on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit, and uses the Elsässer formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere in the fluid context (Rosenberg et al 2006 J. Comput. Phys. 215 59-80); the code allows both statically refined and dynamically refined grids. Tests of the algorithm using analytic solutions are described, and comparisons of the OT solutions with pseudo-spectral computations are performed. We demonstrate for moderate Reynolds numbers that the algorithms using both static and refined grids reproduce the pseudo-spectral solutions quite well. We show that low-order truncation—even with a comparable number of global degrees of freedom—fails to correctly model some strong (sup-norm) quantities in this problem, even though it satisfies adequately the weak (integrated) balance diagnostics.
A New MHD Code with Adaptive Mesh Refinement and Parallelization for Astrophysics
Jiang, R L; Chen, P F
2012-01-01
A new code, named MAP, is written in Fortran for magnetohydrodynamics (MHD) calculations with adaptive mesh refinement (AMR) and Message Passing Interface (MPI) parallelization. There are several optional numerical schemes for computing the MHD part, namely the modified MacCormack scheme (MMC), the Lax-Friedrichs scheme (LF), and the weighted essentially non-oscillatory (WENO) scheme. All of them are second-order, two-step, component-wise schemes for hyperbolic conservative equations. The total variation diminishing (TVD) limiters and approximate Riemann solvers are also equipped. A high resolution can be achieved by the hierarchical block-structured AMR mesh. We use the extended generalized Lagrange multiplier (EGLM) MHD equations to reduce the non-divergence-free error produced by the scheme in the magnetic induction equation. The numerical algorithms for the non-ideal terms, e.g., the resistivity and the thermal conduction, are also equipped in the MAP code. The details of the AMR and MPI algorithms are d...
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniati, Francesco
2011-01-01
We present the implementation of a three-dimensional, second order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic-Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a ...
Single-Pass GPU-Raycasting for Structured Adaptive Mesh Refinement Data
Kaehler, Ralf
2012-01-01
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology, such simulations can now capture spatial scales ten orders of magnitude apart and more. The irregular locations and extensions of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven to be advantageous for subdividing the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting sche...
Fromang, S; Teyssier, R
2006-01-01
In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. The algorithm is based on a previous work in which the MUSCL-Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Through a series of test problems, we illustrate the performance of this new code using two different MHD Riemann solvers (Lax-Friedrichs and Roe) and the need for Adaptive Mesh Refinement capabilities in some cases. Finally, we show its versatility by applying it to two completely different astrophysical situations well studied in the past years: the growth of the magnetorotational instability in the shearing box and the collapse of magnetized cloud cores. We have implemented this new Godunov scheme to solve the ideal MHD equations in the AMR code RAMSES. It results in a powerful tool that can be applied to a grea...
Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing
Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David
2011-10-01
We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring the locality of algorithm architecture to fully exploit the massive number of parallel threads on the GPU are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian grid hash table scheme to allow fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
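Space-filling-curve load balancing of the kind mentioned above rests on a simple property: cells adjacent along the curve are close in space, so cutting the curve into equal pieces yields compact per-processor workloads. The sketch below uses a Z-order (Morton) curve as an illustrative stand-in; the stencil-based construction in the paper is different, and all names here are assumptions.

```python
# Order 2D cell indices along a Z-order (Morton) space-filling curve by
# interleaving the bits of (i, j), then split the ordered list into equal
# contiguous chunks, one per processor.

def morton_key(i, j, bits=16):
    """Interleave the bits of integer cell indices (i, j) into one key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b)
        key |= ((j >> b) & 1) << (2 * b + 1)
    return key

def partition(cells, nproc):
    """Sort cells by Morton key and split into nproc contiguous chunks."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    size = -(-len(ordered) // nproc)  # ceiling division
    return [ordered[k:k + size] for k in range(0, len(ordered), size)]

cells = [(i, j) for i in range(8) for j in range(8)]
parts = partition(cells, 4)
print([len(p) for p in parts])  # [16, 16, 16, 16]
```

Because the key is purely local to each cell, adding or removing cells during refinement only requires re-sorting the affected keys rather than rebuilding a global index, which is the motivation for curve-based balancing in cell-based AMR.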
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.
Maric, Tomislav; Bothe, Dieter
2013-01-01
A new parallelized unsplit geometrical Volume of Fluid (VoF) algorithm with support for arbitrary unstructured meshes and dynamic local Adaptive Mesh Refinement (AMR), as well as for two- and three-dimensional computations, is developed. The geometrical VoF algorithm supports arbitrary unstructured meshes in order to enable computations involving flow domains of arbitrary geometrical complexity. The implementation of the method is done within the framework of the OpenFOAM library for Computational Continuum Mechanics (CCM) using the C++ programming language with modern policy-based design for high program code modularity. The development of the geometrical VoF algorithm significantly extends the method base of the OpenFOAM library by geometrical volumetric flux computation for two-phase flow simulations. For the volume fraction advection, a novel unsplit geometrical algorithm is developed, which inherently sustains volume conservation utilizing unique Lagrangian discrete trajectories located in the mesh points. ...
Juan J. Garcia-Cantero
2017-06-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been
Henshaw, W; Schwendeman, D
2007-11-15
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
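The grid-partitioning step described above can be illustrated with a greedy "largest job to least-loaded bin" heuristic. This is a stand-in assumption: the paper's modified bin-packing algorithm also handles grid splitting and per-grid processor sets, which the sketch below omits, and all names and work values are hypothetical.

```python
# Minimal sketch of distributing grids over processors so computational work
# is evenly balanced: sort grids by estimated cost (largest first) and assign
# each to the currently least-loaded processor, tracked with a min-heap.
import heapq

def lpt_partition(work, nproc):
    """Assign jobs (name -> cost) to nproc bins, largest cost first."""
    bins = [(0.0, p, []) for p in range(nproc)]  # (load, proc id, job names)
    heapq.heapify(bins)
    for job, cost in sorted(work.items(), key=lambda kv: -kv[1]):
        load, p, jobs = heapq.heappop(bins)      # least-loaded processor
        jobs.append(job)
        heapq.heappush(bins, (load + cost, p, jobs))
    return sorted(bins, key=lambda b: b[1])      # order by processor id

# hypothetical work estimate per grid, e.g. proportional to its cell count
work = {"base": 100, "ref1": 60, "ref2": 60, "ref3": 40, "ref4": 20}
for load, proc, jobs in lpt_partition(work, 2):
    print(proc, load, sorted(jobs))
```

With these costs the two processors end up with equal loads, which is the goal the bin-packing partitioner pursues for every grid in the hierarchy independently.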
Sascha M. Schnepp
2012-01-01
An extension of the framework of the Finite Integration Technique (FIT) including dynamic and adaptive mesh refinement is presented. After recalling the standard formulation of the FIT, the proposed mesh adaptation procedure is described. Besides the linear interpolation approach, a novel interpolation technique based on specialized spline functions for approximating the discrete electromagnetic field solution during mesh adaptation is introduced. The standard FIT on a fixed mesh and the new adaptive approach are applied to a simulation test case with a known analytical solution. The numerical accuracy of the two methods is shown to be comparable. The dynamic mesh approach is, however, much more efficient. This is demonstrated with the full scale modeling of the complete rf gun at the Photo Injector Test Facility DESY Zeuthen (PITZ) on a single computer. Results of a detailed design study addressing the effects of individual components of the gun on the beam emittance using a fully self-consistent approach are presented.
Schnepp, Sascha M; Weiland, Thomas
2011-01-01
An extension of the framework of the Finite Integration Technique (FIT) including dynamic and adaptive mesh refinement is presented. After recalling the standard formulation of the FIT, the proposed mesh adaptation procedure is described. Besides the linear interpolation approach, a novel interpolation technique based on specialized spline functions for approximating the discrete electromagnetic field solution during mesh adaptation is introduced. The standard FIT on a fixed mesh and the new adaptive approach are applied to a simulation test case with a known analytical solution. The numerical accuracy of the two methods is shown to be comparable. The dynamic mesh approach is, however, much more efficient. This is also demonstrated for the full scale modeling of the complete RF gun at the Photo Injector Test Facility DESY Zeuthen (PITZ) on a single computer. Results of a detailed design study addressing the effects of individual components of the gun on the beam emittance using a fully self-consistent approa...
Simulation of tsunamis generated by landslides using adaptive mesh refinement on GPU
de la Asunción, M.; Castro, M. J.
2017-09-01
Adaptive mesh refinement (AMR) is a widely used technique to accelerate computationally intensive simulations, which consists of dynamically increasing the spatial resolution of the areas of interest of the domain as the simulation advances. In recent years, many publications have appeared that tackle the implementation of AMR-based applications on GPUs in order to take advantage of their massively parallel architecture. In this paper we present the first AMR-based application implemented on GPU for the simulation of tsunamis generated by landslides using a two-layer shallow water system. We also propose a new strategy for the interpolation and projection of the values of the fine cells in the AMR algorithm based on the fluctuations of the state values instead of the usual approach of considering the current state values. Numerical experiments on artificial and realistic problems show the validity and efficiency of the solver.
Relativistic Flows Using Spatial and Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics
Wang, Peng; Zhang, Weiqun
2007-01-01
Astrophysical relativistic flow problems require high resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We used the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third order convex essentially non-oscillatory (CENO) and third and fifth order weighted essentially non-oscillatory (WENO) schemes. Flux is computed using either direct flux reconstruction or approximate Riemann solvers including HLL, modified Marquina flux, local Lax-Friedrichs flux formulas and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code Enzo, which uses the Berger-Colella AMR algorithm and is parall...
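The TVD Runge-Kutta time integration this abstract refers to can be illustrated with the generic two-stage SSP-RK2 scheme of Shu and Osher applied to linear advection with a first-order upwind flux on a periodic grid. This is a standard textbook sketch, not the paper's SRHD solver.

```python
# SSP-RK2 (a TVD Runge-Kutta scheme) for u_t + u_x = 0 with a
# first-order upwind flux on a periodic grid. Textbook sketch only.

def upwind_rhs(u, dx):
    # du/dt = -(u_i - u_{i-1})/dx for advection speed +1.
    # Python's u[-1] wraps around, giving periodic boundaries.
    return [-(u[i] - u[i - 1]) / dx for i in range(len(u))]

def ssp_rk2_step(u, dt, dx):
    # Stage 1: forward Euler predictor.
    r1 = upwind_rhs(u, dx)
    u1 = [u[i] + dt * r1[i] for i in range(len(u))]
    # Stage 2: convex combination, which preserves the TVD property.
    r2 = upwind_rhs(u1, dx)
    return [0.5 * u[i] + 0.5 * (u1[i] + dt * r2[i]) for i in range(len(u))]

def total_variation(u):
    return sum(abs(u[i] - u[i - 1]) for i in range(len(u)))

n = 100
dx, dt = 1.0 / n, 0.5 / n  # CFL number 0.5
u = [1.0 if 25 <= i < 50 else 0.0 for i in range(n)]  # square pulse
tv0 = total_variation(u)
for _ in range(200):
    u = ssp_rk2_step(u, dt, dx)
print(total_variation(u) <= tv0)  # no new extrema are created
```

Because each stage is a convex combination of forward Euler steps, the total variation of the square pulse never grows, which is exactly the property TVD limiters are designed to retain for higher-order reconstructions.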
A Survey of High Level Frameworks in Block-Structured Adaptive Mesh Refinement Packages
Dubey, Anshu; Bell, John; Berzins, Martin; Brandt, Steve; Bryan, Greg; Colella, Phillip; Graves, Daniel; Lijewski, Michael; Löffler, Frank; O'Shea, Brian; Schnetter, Erik; Van Straalen, Brian; Weide, Klaus
2016-01-01
Over the last decade, block-structured adaptive mesh refinement (SAMR) has found increasing use in large, publicly available codes and frameworks. SAMR frameworks have evolved along different paths. Some have stayed focused on specific domain areas, while others have pursued more general functionality, providing the building blocks for a larger variety of applications. In this survey paper we examine a representative set of SAMR packages and SAMR-based codes that have been in existence for half a decade or more, have a reasonably sized and active user base outside of their home institutions, and are publicly available. The set consists of a mix of SAMR packages and application codes that cover a broad range of scientific domains. We look at their high-level frameworks, and their approach to dealing with the advent of radical changes in hardware architecture. The codes included in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah.
On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields
Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.
2011-06-27
Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
Lichtenberg, Tim
2015-01-01
The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. Numerical approaches provide a method to tackle the weaknesses of current planet formation models and are an important tool to close gaps in poorly constrained areas. We present a global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code ENZO. With this setup, we explore the impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison, and study the effects of varying the disk radius. Adopting a marginally stable disk profile (Q_init=1), we validate the...
Enzo+Moray: Radiation Hydrodynamics Adaptive Mesh Refinement Simulations with Adaptive Ray Tracing
Wise, John H
2010-01-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray tracing scheme, and its parallel implementation into the adaptive mesh refinement (AMR) cosmological hydrodynamics code, Enzo. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilised to study a broad range of astrophysical problems, such as stellar and black hole (BH) feedback. Inaccuracies can arise from large timesteps and poor sampling, therefore we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. (2006, 2009). We further test our method with more dynamical situations, for example, the propagation of an ionisation front through a Rayleigh-Taylor instability, time-varying luminosities, and collimated radiation. The test suite also includes an...
GLAMER Part I: A Code for Gravitational Lensing Simulations with Adaptive Mesh Refinement
Metcalf, R Benton
2013-01-01
A computer code is described for the simulation of gravitational lensing data. The code incorporates adaptive mesh refinement in choosing which rays to shoot based on the requirements of the source size, location and surface brightness distribution or to find critical curves/caustics. A variety of source surface brightness models are implemented to represent galaxies and quasar emission regions. The lensing mass can be represented by point masses (stars), smoothed simulation particles, analytic halo models, pixelized mass maps or any combination of these. The deflection and beam distortions (convergence and shear) are calculated by a modified tree algorithm when halos, point masses or particles are used and by FFT when mass maps are used. The combination of these methods allows for a very large dynamical range to be represented in a single simulation. Individual images of galaxies can be represented in a simulation that covers many square degrees. For an individual strongly lensed quasar, source sizes from the s...
An Application of the Mesh Generation and Refinement Tool to Mobile Bay, Alabama, USA
Aziz, Wali; Alarcon, Vladimir J.; McAnally, William; Martin, James; Cartwright, John
2009-08-01
A grid generation tool, called the Mesh Generation and Refinement Tool (MGRT), has been developed using Qt4. Qt4 is a comprehensive C++ application framework which includes GUI and container class libraries and tools for cross-platform development. MGRT is capable of using several types of algorithms for grid generation. This paper presents an application of the MGRT grid generation tool for creating an unstructured grid of Mobile Bay (Alabama, USA) that will be used for hydrodynamics modeling. The algorithm used in this particular application is the Advancing-Front/Local-Reconnection (AFLR) [1] [2]. This research shows results of grids created with MGRT and compares them to grids (for the same geographical container) created using other grid generation tools. The superior quality of the grids generated by MGRT is shown.
The Singularity Threshold of the Nonlinear Sigma Model Using 3D Adaptive Mesh Refinement
Liebling, S L
2002-01-01
Numerical solutions to the nonlinear sigma model (NLSM), a wave map from 3+1 Minkowski space to S^3, are computed in three spatial dimensions (3D) using adaptive mesh refinement (AMR). For initial data with compact support the model is known to have two regimes, one in which regular initial data forms a singularity and another in which the energy is dispersed to infinity. The transition between these regimes has been shown in spherical symmetry to demonstrate threshold behavior similar to that between black hole formation and dispersal in gravitating theories. Here, I generalize the result by removing the assumption of spherical symmetry. The evolutions suggest that the spherically symmetric critical solution remains an intermediate attractor separating the two end states.
A new MHD code with adaptive mesh refinement and parallelization for astrophysics
Jiang, R.-L.; Fang, C.; Chen, P.-F.
2012-08-01
A new code, named MAP, is written in FORTRAN for magnetohydrodynamics (MHD) simulations with adaptive mesh refinement (AMR) and Message Passing Interface (MPI) parallelization. There are several optional numerical schemes for computing the MHD part, namely, the modified MacCormack scheme (MMC), the Lax-Friedrichs scheme (LF), and the weighted essentially non-oscillatory (WENO) scheme. All of them are second-order, two-step, component-wise schemes for hyperbolic conservative equations. Total variation diminishing (TVD) limiters and approximate Riemann solvers are also provided. A high resolution can be achieved by the hierarchical block-structured AMR mesh. We use the extended generalized Lagrange multiplier (EGLM) MHD equations to reduce the non-divergence-free error produced by the scheme in the magnetic induction equation. Numerical algorithms for the non-ideal terms, e.g., the resistivity and the thermal conduction, are also included in the code. The details of the AMR and MPI algorithms are described in the paper.
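Of the schemes listed in this abstract, the Lax-Friedrichs flux is the simplest to sketch. The following is a generic textbook LF update for the inviscid Burgers equation on a periodic grid, not code from MAP; all names are illustrative.

```python
# Classic Lax-Friedrichs update for u_t + (u^2/2)_x = 0 on a periodic
# grid. Textbook sketch; MAP's two-step schemes are more elaborate.

def flux(u):
    return 0.5 * u * u  # Burgers flux f(u) = u^2/2

def lax_friedrichs_step(u, dt, dx):
    n = len(u)
    unew = []
    for i in range(n):
        ul, ur = u[i - 1], u[(i + 1) % n]  # periodic neighbours
        # LF update: average of neighbours plus a centered flux difference.
        unew.append(0.5 * (ul + ur) - 0.5 * dt / dx * (flux(ur) - flux(ul)))
    return unew

n = 200
dx, dt = 1.0 / n, 0.5 / n  # CFL number 0.5 since |u| <= 1 here
u = [1.0 if 50 <= i < 100 else 0.0 for i in range(n)]  # square pulse
for _ in range(100):
    u = lax_friedrichs_step(u, dt, dx)
```

Because LF is a monotone scheme at this CFL number, the solution stays within the initial bounds [0, 1], and on a periodic grid the update conserves the total of u exactly, which is the basic property a conservative MHD solver must retain across AMR levels.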
ADER-WENO Finite Volume Schemes with Space-Time Adaptive Mesh Refinement
Dumbser, Michael; Hidalgo, Arturo; Balsara, Dinshaw S
2012-01-01
We present the first high order one-step ADER-WENO finite volume scheme with Adaptive Mesh Refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the Message Passing Interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time has been confirmed via a numerical convergenc...
The Numerical Simulation of Ship Waves Using Cartesian Grid Methods with Adaptive Mesh Refinement
Dommermuth, Douglas G; Beck, Robert F; O'Shea, Thomas T; Wyatt, Donald C; Olson, Kevin; MacNeice, Peter
2014-01-01
Cartesian-grid methods with Adaptive Mesh Refinement (AMR) are ideally suited for simulating the breaking of waves, the formation of spray, and the entrainment of air around ships. As a result of the Cartesian-grid formulation, minimal input is required to describe the ship's geometry. A surface panelization of the ship hull is used as input to automatically generate a three-dimensional model. No three-dimensional gridding is required. The AMR portion of the numerical algorithm automatically clusters grid points near the ship in regions where wave breaking, spray formation, and air entrainment occur. Away from the ship, where the flow is less turbulent, the mesh is coarser. The numerical computations are implemented using parallel algorithms. Together, the ease of input and usage, the ability to resolve complex free-surface phenomena, and the speed of the numerical algorithms provide a robust capability for simulating the free-surface disturbances near a ship. Here, numerical predictions, with and without AMR,...
Development of a scalable gas-dynamics solver with adaptive mesh refinement
Korkut, Burak
There are various computational physics areas in which Direct Simulation Monte Carlo (DSMC) and Particle in Cell (PIC) methods are being employed. The accuracy of results from such simulations depends on the fidelity of the physical models being used. The computationally demanding nature of these problems makes them ideal candidates to make use of modern supercomputers. The software developed to run such simulations also needs special attention so that maintainability and extendability are considered alongside recent numerical methods and programming paradigms. Suited for gas-dynamics problems, a software package called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement) has recently been developed and written in C++ and MPI. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases for which there is no axial or radial symmetry present that could only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters, which the AMR capability captured in an automated way. The back flow for ions was found to occur when CEX and momentum-exchange (MEX) collisions are present and strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained and it was found that the inclusion of the electric field modeling is the most important factor in determining its shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case and it was found that the ion velocity in the back flow region appears to be
Numerical relativity simulations of neutron star merger remnants using conservative mesh refinement
Dietrich, Tim; Ujevic, Maximiliano; Bruegmann, Bernd
2015-01-01
We study equal and unequal-mass neutron star mergers by means of new numerical relativity simulations in which the general relativistic hydrodynamics solver employs an algorithm that guarantees mass conservation across the refinement levels of the computational mesh. We consider eight binary configurations with total mass $M=2.7\,M_\odot$, mass-ratios $q=1$ and $q=1.16$, and four different equations of state (EOSs), and one configuration with a stiff EOS, $M=2.5M_\odot$ and $q=1.5$. We focus on the post-merger dynamics and study the merger remnant, dynamical ejecta and the postmerger gravitational wave spectrum. Although most of the merger remnants form a hypermassive neutron star collapsing to a black hole+disk system on dynamical timescales, stiff EOSs can eventually produce a stable massive neutron star. Ejecta are mostly emitted around the orbital plane, favored by large mass ratios and softer EOSs. The postmerger wave spectrum is mainly characterized by non-axisymmetric oscillations of the remnant. The st...
Effenberger, Frederic; Arnold, Lukas; Grauer, Rainer; Dreher, Jürgen
2011-01-01
The formation of a thin current sheet in a magnetic quasi-separatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-$\\beta$, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al., A&A 444, 961 (2005). In extension to that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the calculated time in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limite...
ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing
Wise, John H.; Abel, Tom
2011-07-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.
Multigroup radiation hydrodynamics with flux-limited diffusion and adaptive mesh refinement
González, Matthias; Commerçon, Benoît; Masson, Jacques
2015-01-01
Radiative transfer plays a key role in the star formation process. Due to a high computational cost, radiation-hydrodynamics simulations performed up to now have mainly been carried out in the grey approximation. In recent years, multi-frequency radiation-hydrodynamics models have started to emerge, in an attempt to better account for the large variations of opacities as a function of frequency. We wish to develop an efficient multigroup algorithm for the adaptive mesh refinement code RAMSES which is suited to heavy proto-stellar collapse calculations. Due to prohibitive timestep constraints of an explicit radiative transfer method, we constructed a time-implicit solver based on a stabilised bi-conjugate gradient algorithm, and implemented it in RAMSES under the flux-limited diffusion approximation. We present a series of tests which demonstrate the high performance of our scheme in dealing with frequency-dependent radiation-hydrodynamic flows. We also present a preliminary simulation of a three-dimensional p...
Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.
2016-08-01
This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
Hummels, Cameron
2011-01-01
We carry out adaptive mesh refinement (AMR) cosmological simulations of Milky-Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated Cold Dark Matter model. We evolve a suite of five halos to z = 0 and find gaseous-disk formation in all; however, in agreement with previous SPH simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (i) spatial resolution, ranging from 1700 to 212 pc; (ii) an additional pressure component to ensure that the Jeans length is always resolved; (iii) low star formation efficiency, going down to 0.1%; (iv) fixed physical resolution as opposed to comoving resolution; (v) a supernova feedback model which injects thermal energy to the local cel...
GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid
Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua
2016-10-01
A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.
De Colle, Fabio; Lopez-Camara, Diego; Ramirez-Ruiz, Enrico
2011-01-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in Gamma-Ray Burst sources. The SRHD equations are solved using finite volume conservative solvers. The correct implementation of the algorithms is verified by one-dimensional (1D) shock tube and multidimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with $\\rho \\propto r^{-k}$, bridging between the relativistic and Newtonian phases, as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to non-relativistic speeds in one-dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, toge...
Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics
Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park
2007-04-02
Astrophysical relativistic flow problems require high resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We used the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third order convex essentially non-oscillatory (CENO) and third and fifth order weighted essentially non-oscillatory (WENO) schemes. Flux is computed using either direct flux reconstruction or approximate Riemann solvers including HLL, modified Marquina flux, local Lax-Friedrichs flux formulas and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code Enzo, which uses the Berger-Colella AMR algorithm and is parallelized with dynamic load balancing using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show its performance on eleven test problems.
Dimensional reduction as a tool for mesh refinement and tracking singularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach efficiently the possible singularity. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow faithfully the solution beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
Commerçon, Benoît; Teyssier, Romain
2014-01-01
Implicit solvers present strong limitations when used on supercomputing facilities and in particular for adaptive mesh refinement codes. We present a new method for implicit adaptive time-stepping on adaptive mesh refinement grids and implement it in the radiation hydrodynamics solver we designed for the RAMSES code for astrophysical purposes and, more particularly, for protostellar collapse. We briefly recall the radiation hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation hydrodynamics tests, after which we present an application for protostellar collapse. We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it ca...
Areias, P.; Rabczuk, T.; de Sá, J. César
2016-12-01
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which is exempt from crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
Ravindran, Prashaanth
The unstable nature of detonation waves is a result of the critical relationship between the hydrodynamic shock and the chemical reactions sustaining the shock. A perturbative analysis of the critical point is quite challenging due to the multiple spatio-temporal scales involved along with the non-linear nature of the shock-reaction mechanism. The author's research attempts to provide detailed resolution of the instabilities at the shock front. Another key aspect of the present research is to develop an understanding of the causality between the non-linear dynamics of the front and the eventual breakdown of the sub-structures. An accurate numerical simulation of detonation waves requires a very efficient solution of the Euler equations in conservation form with detailed, non-equilibrium chemistry. The difference in the flow and reaction length scales results in very stiff source terms, requiring the problem to be solved with adaptive mesh refinement. For this purpose, Berger-Colella's block-structured adaptive mesh refinement (AMR) strategy has been developed and applied to time-explicit finite volume methods. The block-structured technique uses a hierarchy of parent-child sub-grids, integrated recursively over time. One novel approach to partition the problem within a large supercomputer was the use of modified Peano-Hilbert space filling curves. The AMR framework was merged with CLAWPACK, a package providing finite volume numerical methods tailored for wave-propagation problems. The stiffness problem is bypassed by using a 1st order Godunov or a 2nd order Strang splitting technique, where the flow variables and source terms are integrated independently. A linearly explicit fourth-order Runge-Kutta integrator is used for the flow, and an ODE solver was used to overcome the numerical stiffness. Second-order spatial resolution is obtained by using a second-order Roe-HLL scheme with the inclusion of numerical viscosity to stabilize the solution near the discontinuity
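The modified Peano-Hilbert space-filling-curve partitioning mentioned above rests on the standard Hilbert index-to-coordinate map; a minimal 2-D sketch (illustrative only, not the author's implementation) is:

```python
def hilbert_d2xy(order, d):
    """Map a 1-D Hilbert index d to (x, y) on a 2**order x 2**order grid.

    Consecutive indices land on neighboring cells, so assigning contiguous
    index ranges to processors yields spatially compact partitions.
    """
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:           # rotate/reflect the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Splitting the sorted index range `0 .. 4**order - 1` into equal contiguous chunks, one per MPI rank, is then a simple and effective load-partitioning heuristic.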
De Colle, Fabio; Ramirez-Ruiz, Enrico [Astronomy and Astrophysics Department, University of California, Santa Cruz, CA 95064 (United States); Granot, Jonathan [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Lopez-Camara, Diego, E-mail: fabio@ucolick.org [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Ap. 70-543, 04510 D.F. (Mexico)
2012-02-20
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the
De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico
2012-02-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
Donmez, Orhan
We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive Mesh Refinement (AMR) and model an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The numerical solutions of the general relativistic hydrodynamic equations are obtained with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second order convergence of the code in 1D, 2D and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. To this end, we couple the flux part of the general relativistic hydrodynamic equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply this code to accretion disk problems around the black hole using the Schwarzschild metric at the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, they create shock waves which destroy the accretion disk.
Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer
Meliani, Zakaria; Mizuno, Yosuke; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri
2017-02-01
Context. In many astrophysical phenomena, and especially in those that involve the high-energy regimes that always accompany the astronomical phenomenology of black holes and neutron stars, physical conditions that are achieved are extreme in terms of speeds, temperatures, and gravitational fields. In such relativistic regimes, numerical calculations are the only tool to accurately model the dynamics of the flows and the transport of radiation in the accreting matter. Aims: We here continue our effort of modelling the behaviour of matter when it orbits or is accreted onto a generic black hole by developing a new numerical code that employs advanced techniques geared towards solving the equations of general-relativistic hydrodynamics. Methods: More specifically, the new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of adaptive mesh-refinement (AMR) techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to accurately compute the electromagnetic emissions from such accretion flows. Results: We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry that are performed either in two or three spatial dimensions. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black-hole binary interacting with the surrounding circumbinary disc. In this way, we can present for the first time ray-traced images of the shocked fluid and the light curve resulting from consistent general-relativistic radiation-transport calculations from this process. Conclusions: The work presented here lays the ground for the development of a generic computational infrastructure employing AMR techniques to accurately and self
Essadki Mohamed
2016-09-01
Predictive simulation of liquid fuel injection in automotive engines has become a major challenge for science and applications. The key issue in order to properly predict various combustion regimes and pollutant formation is to accurately describe the interaction between the carrier gaseous phase and the polydisperse evaporating spray produced through atomization. For this purpose, we rely on the EMSM (Eulerian Multi-Size Moment) Eulerian polydisperse model. It is based on a high order moment method in size, with a maximization of entropy technique in order to provide a smooth reconstruction of the distribution, derived from a Williams-Boltzmann mesoscopic model under the monokinetic assumption [O. Emre (2014) PhD Thesis, École Centrale Paris; O. Emre, R.O. Fox, M. Massot, S. Chaisemartin, S. Jay, F. Laurent (2014) Flow, Turbulence and Combustion 93, 689-722; O. Emre, D. Kah, S. Jay, Q.-H. Tran, A. Velghe, S. de Chaisemartin, F. Laurent, M. Massot (2015) Atomization Sprays 25, 189-254; D. Kah, F. Laurent, M. Massot, S. Jay (2012) J. Comput. Phys. 231, 394-422; D. Kah, O. Emre, Q.-H. Tran, S. de Chaisemartin, S. Jay, F. Laurent, M. Massot (2015) Int. J. Multiphase Flows 71, 38-65; A. Vié, F. Laurent, M. Massot (2013) J. Comp. Phys. 237, 277-310]. The present contribution relies on a major extension of this model [M. Essadki, S. de Chaisemartin, F. Laurent, A. Larat, M. Massot (2016) Submitted to SIAM J. Appl. Math.], with the aim of building a unified approach and coupling with a separated phases model describing the dynamics and atomization of the interface near the injector. The novelty is to be found in terms of modeling, numerical schemes and implementation. A new high order moment approach is introduced using fractional moments in surface, which can be related to geometrical quantities of the gas-liquid interface. We also provide a novel algorithm for an accurate resolution of the evaporation. Adaptive mesh refinement properly scaling on massively
Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang
2015-10-01
For numerical simulation of detonation, the computational cost of uniform meshes is large due to the vast separation of both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.
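A cell-based hierarchy of the kind described can be sketched as a quadtree whose leaves are the active cells; the names and the refinement criterion below are illustrative, not the paper's actual data structure:

```python
class Cell:
    """A cell in a quadtree AMR hierarchy; an empty child list marks a leaf."""

    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, criterion, max_level):
        """Recursively split this cell into 4 children wherever the
        user-supplied criterion flags it, up to max_level."""
        if self.level >= max_level or not criterion(self):
            return
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h, self.level + 1)
                         for i in (0, 1) for j in (0, 1)]
        for c in self.children:
            c.refine(criterion, max_level)

    def leaves(self):
        """Yield the active (leaf) cells of the hierarchy."""
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()
```

A detonation code would refine where a front indicator fires, e.g. `criterion = lambda c: c.x <= x_front <= c.x + c.size`, concentrating leaves around the front while coarse cells cover the rest of the domain.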
Side-stream products of edible oil refining as feedstocks in biodiesel production
Cvetković Bojan S.
2016-01-01
Biodiesel, a diesel fuel alternative, is produced from vegetable oils and animal fats by the transesterification reaction of triacylglycerols and lower aliphatic alcohols. Despite a number of advantages relative to fossil fuels, the main barrier to wider commercial use of biodiesel is the high price of edible oils. Recently, special attention has been given to side-stream products of edible oil refining as low-cost triacylglycerol sources for biodiesel production because of their positive economic and ecological effects. In this paper, the different procedures for biodiesel production from side-stream refining products such as soapstock, spent bleaching earth and deodorizer distillate are analyzed. The main goal of this paper is to analyze the possibilities for reusing the by-products of edible oil refining in biodiesel production.
AbouEisha, Hassan M.
2017-07-13
We consider a class of two-and three-dimensional h-refined meshes generated by an adaptive finite element method. We introduce an element partition tree, which controls the execution of the multi-frontal solver algorithm over these refined grids. We propose and study algorithms with polynomial computational cost for the optimization of these element partition trees. The trees provide an ordering for the elimination of unknowns. The algorithms automatically optimize the element partition trees using extensions of dynamic programming. The construction of the trees by the dynamic programming approach is expensive. These generated trees cannot be used in practice, but rather utilized as a learning tool to propose fast heuristic algorithms. In this first part of our paper we focus on the dynamic programming approach, and draw a sketch of the heuristic algorithm. The second part will be devoted to a more detailed analysis of the heuristic algorithm extended for the case of hp-adaptive
Composite-Grid Techniques and Adaptive Mesh Refinement in Computational Fluid Dynamics
1990-01-01
[Only fragmented snippet text survives from this abstract; the recoverable fragments discuss a criterion for choosing global versus adaptive refinement based on relative grid sizes, a nonconservative interpolation that does not preserve conservation properties integrally, and convergence conditions for a matrix splitting A = B - C.]
Li, Pak Shing; Klein, Richard I. [Astronomy Department, University of California, Berkeley, CA 94720 (United States); Martin, Daniel F. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); McKee, Christopher F., E-mail: psli@astron.berkeley.edu, E-mail: klein@astron.berkeley.edu, E-mail: DFMartin@lbl.gov, E-mail: cmckee@astro.berkeley.edu [Physics Department and Astronomy Department, University of California, Berkeley, CA 94720 (United States)
2012-02-01
Performing a stable, long-duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and constrained transport electromotive force averaging schemes that can meet this challenge, and using this strategy, we have developed a new adaptive mesh refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β₀ of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β₀ = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulations show that the turbulent dissipation coefficient for supersonic MHD turbulence is about 0.5, in agreement with unigrid simulations.
Li, Pak Shing; Klein, Richard I; McKee, Christopher F
2011-01-01
Performing a stable, long duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and Constrained Transport EMF averaging schemes that can meet this challenge, and using this strategy, we have developed a new Adaptive Mesh Refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β₀ of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β₀ = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulati...
Patch-based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied on an STL file of complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
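General triangular midpoint subdivision splits each triangle into four by inserting edge midpoints; the sketch below (hypothetical helper names, not MeshLab's API) caches midpoints so that shared edges stay watertight:

```python
def midpoint_subdivide(vertices, triangles):
    """One 1-to-4 pass of triangular midpoint subdivision.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i, j, k) vertex-index triples
    Returns the enlarged vertex list and the refined triangle list.
    """
    verts = list(vertices)
    midpoint_cache = {}          # sorted edge (i, j) -> new vertex index

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            a, b = verts[i], verts[j]
            verts.append(tuple((u + v) / 2 for u, v in zip(a, b)))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    new_tris = []
    for i, j, k in triangles:
        m_ij, m_jk, m_ki = midpoint(i, j), midpoint(j, k), midpoint(k, i)
        new_tris += [(i, m_ij, m_ki), (m_ij, j, m_jk),
                     (m_ki, m_jk, k), (m_ij, m_jk, m_ki)]
    return verts, new_tris
```

Each pass quadruples the triangle count, which is why file size grows quickly when the chordal error is reduced globally rather than locally.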
Performance Evaluation of Multipath Discovery Algorithms for VoD Streaming in Wireless Mesh Network
Praful C. Ramteke
2014-07-01
Transmission and routing of video data over a wireless network is a challenging task because of wireless interference. To improve the performance of video-on-demand transmission over wireless networks, multipath algorithms are used. IPD/S (Iterative Path Discovery/Selection) and PPD/S (Parallel Path Discovery/Selection) are two algorithms used for discovering the maximum number of edge-disjoint paths from source to destination for each VoD request, by considering the effects of wireless interference. In this paper, a performance evaluation of these multipath discovery algorithms for VoD (video on demand) streaming in a wireless mesh network is presented. The algorithms are evaluated on the basis of the number of paths discovered and the packet drop ratio. Simulation results show that PPD/S performs better than IPD/S because it is able to discover more paths under the same circumstances.
Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core
Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.
2009-12-01
One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order in a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparent to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.
3-D grid refinement using the University of Michigan adaptive mesh library for a pure advective test
Oehmke, R.; Vandenberg, D.; Andronova, N.; Penner, J.; Stout, Q.; Zubov, V.; Jablonowski, C.
2008-05-01
The numerical representation of the partial differential equations (PDE) for high resolution atmospheric dynamical and physical features requires division of the atmospheric volume into a set of 3D grids, each of which has a not quite rectangular form. Each location on the grid contains multiple data that together represent the state of Earth's atmosphere. For successful numerical integration of the PDEs, the size of each grid box is used to define the Courant-Friedrichs-Lewy criterion in setting the time step. 3D adaptive representations of a sphere are needed to represent the evolution of clouds. In this paper we present the University of Michigan adaptive mesh library - a library that supports the production of parallel codes with use of adaptation on a sphere. The library manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits blocks as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells — the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. Users provide data manipulation functions for performing interpolation of user data when refining blocks. We rigorously test the library using refinement of the modeled vertical transport of a tracer with prescribed atmospheric sources and sinks. It is both a 2D and a 3D test, and bridges the performance of the model's dynamics and physics needed for inclusion of cloud formation.
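The Courant-Friedrichs-Lewy constraint mentioned above amounts to taking the global time step from the most restrictive grid box, regardless of which refinement block or processor owns it; a minimal sketch (illustrative, not the library's API):

```python
def cfl_time_step(cell_sizes, speeds, cfl=0.5):
    """Global CFL-limited time step: dt <= cfl * min(dx / |u|) over all cells.

    cell_sizes and speeds are flat sequences of per-cell grid spacings and
    characteristic speeds gathered from every block; the small floor avoids
    division by zero in stagnant cells.
    """
    return cfl * min(dx / max(abs(u), 1e-30)
                     for dx, u in zip(cell_sizes, speeds))
```

In a distributed setting each processor would compute this minimum locally and then combine the results with a global reduction (e.g. `MPI_Allreduce` with `MPI_MIN`).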
Lopez-Camara, D.; Lazzati, Davide [Department of Physics, NC State University, 2401 Stinson Drive, Raleigh, NC 27695-8202 (United States); Morsony, Brian J. [Department of Astronomy, University of Wisconsin-Madison, 2535 Sterling Hall, 475 N. Charter Street, Madison, WI 53706-1582 (United States); Begelman, Mitchell C., E-mail: dlopezc@ncsu.edu [JILA, University of Colorado, 440 UCB, Boulder, CO 80309-0440 (United States)
2013-04-10
We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.
Cunningham, Andrew J.; Frank, Adam; Varnière, Peggy; Mitran, Sorin; Jones, Thomas W.
2009-06-01
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniatii, Francesco; Martin, Daniel
2011-05-24
We present the implementation of a three-dimensional, second order accurate Godunov-type algorithm for magneto-hydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic-Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
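The constrained-transport step, which updates face-centered fields from edge-centered EMFs so that the discrete divergence is preserved to round-off, can be sketched in 2D as follows (an illustrative sketch, not the CHARM implementation):

```python
import numpy as np

def ct_update(bx, by, ez, dt, dx, dy):
    """2-D constrained transport on a staggered grid.

    bx has shape (nx+1, ny) on x-faces, by has shape (nx, ny+1) on y-faces,
    and the EMF ez has shape (nx+1, ny+1) on cell corners (z-edges).
    The update is a discrete curl, so the cell-centered div B is unchanged.
    """
    bx_new = bx - dt / dy * (ez[:, 1:] - ez[:, :-1])   # dBx/dt = -dEz/dy
    by_new = by + dt / dx * (ez[1:, :] - ez[:-1, :])   # dBy/dt = +dEz/dx
    return bx_new, by_new

def divergence(bx, by, dx, dy):
    """Cell-centered discrete div B built from the face-centered components."""
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy
```

The telescoping of the EMF differences around each cell is what makes the divergence change cancel exactly; this is the same mechanism the reflux-curl operation extends across refinement boundaries.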
Angelidis, Dionysios; Sotiropoulos, Fotis
2015-11-01
The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.
Amaziane Brahim
2014-07-01
In this paper, we consider adaptive numerical simulation of miscible displacement problems in porous media, which are modeled by single phase flow equations. A vertex-centred finite volume method is employed to discretize the coupled system: the Darcy flow equation and the diffusion-convection concentration equation. The convection term is approximated with a Godunov scheme over the dual finite volume mesh, whereas the diffusion-dispersion term is discretized by piecewise linear conforming finite elements. We introduce two kinds of indicators, both of residual type. The first is related to time discretization and is local with respect to time: thus, at each time, it provides appropriate information for the choice of the next time step. The second is related to space discretization and is local with respect to both the time and space variables; the idea is that at each time it is an efficient tool for mesh adaptivity. An error estimation procedure evaluates where additional refinement is needed, and grid generation procedures dynamically create or remove fine-grid patches as resolution requirements change. The method was implemented in the software MELODIE, developed by the French Institute for Radiological Protection and Nuclear Safety (IRSN, Institut de Radioprotection et de Sûreté Nucléaire). The algorithm is then used to simulate the evolution of radionuclide migration from the waste packages through a heterogeneous disposal site, demonstrating its capability to capture the complex behavior of the resulting flow.
D'Avillez, M A; Breitschwerdt, Dieter
2005-01-01
State of the art models of the ISM use adaptive mesh refinement to capture small scale structures, by refining on the fly those regions of the grid where density and pressure gradients occur, while keeping the existing resolution in the other regions. With this technique it became possible to study the ISM in star-forming galaxies in a global way by following matter circulation between stars and the interstellar gas, and, in particular, the energy input by random and clustered supernova explosions, which determines the dynamical and chemical evolution of the ISM, and hence of the galaxy as a whole. In this paper we review the conditions for a self-consistent modelling of the ISM and present the results from the latest developments in the 3D HD/MHD global models of the ISM. Special emphasis is put on the effects of the magnetic field with respect to the volume and mass fractions of the different ISM "phases", the relative importance of ram, thermal and magnetic pressures, and whether the field can p...
Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow
Henshaw, W D; Schwendeman, D W
2005-08-30
We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.
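The flag-and-refine step described in this abstract can be sketched in heavily simplified one-dimensional form. The function names and the undivided-gradient error indicator below are illustrative assumptions, not the paper's actual estimator:

```python
# Hedged sketch: flag cells whose undivided gradient exceeds a tolerance,
# then group flagged cells into one refinement patch (a padded bounding box).
# Names (flag_cells, patch_bounds) and the indicator are illustrative only.

def flag_cells(u, tol):
    """Return indices i where |u[i+1] - u[i-1]| exceeds tol (a crude error estimate)."""
    return [i for i in range(1, len(u) - 1) if abs(u[i + 1] - u[i - 1]) > tol]

def patch_bounds(flags, pad, n):
    """Bounding interval of flagged cells, padded by `pad` cells, clipped to the grid."""
    if not flags:
        return None
    return max(0, min(flags) - pad), min(n - 1, max(flags) + pad)

# A step profile: only cells near the jump should be flagged.
u = [0.0] * 8 + [1.0] * 8
flags = flag_cells(u, tol=0.5)
lo, hi = patch_bounds(flags, pad=2, n=len(u))
print(flags, (lo, hi))
```

Real block-structured AMR clusters flags into several rectangular patches (e.g. by the Berger-Rigoutsos algorithm); a single bounding box is the simplest possible stand-in.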
Shih, Chihhsiong; Yang, Yuanfan
2012-02-01
A novel three-dimensional (3-D) photorealistic texturing process is presented that applies a view-planning and view-sequencing algorithm to the 3-D coarse model to determine a set of best viewing angles for capturing the individual real-world objects/building's images. The best sequence of views will generate sets of visible edges in each view to serve as a guide for camera field shots by either manual adjustment or equipment alignment. The best view tries to cover as many objects/building surfaces as possible in one shot. This will lead to a smaller total number of shots taken for a complete model reconstruction requiring texturing with photo-realistic effects. The direct linear transformation method (DLT) is used for reprojection of 3-D model vertices onto a two-dimensional (2-D) image plane for actual texture mapping. Given this method, the actual camera orientations do not have to be unique and can be set arbitrarily without heavy and expensive positioning equipment. We also present results of a study on the texture-mapping precision as a function of the level of visible mesh subdivision. In addition, the control points selection for the DLT method used for reprojection of 3-D model vertices onto 2-D textured images is also investigated for its effects on mapping precision. By using DLT and perspective projection theories on coarse model feature points, this technique will allow accurate 3-D texture mapping of refined model meshes of real-world buildings. The novel integration flow of this research not only greatly reduces the human labor and intensive equipment requirements of traditional methods, but also generates a more appealing photo-realistic appearance of reconstructed models, which is useful in many multimedia applications. The roles of view planning (VP) are multifold. VP can (1) reduce the repetitive texture-mapping computation load, (2) present a set of visible model wireframe edges that can serve as a guide for images with sharp edges and
Huang, Rongzong; Wu, Huiying
2016-06-01
A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problem where variables vary significantly near the phase interface and thus finer grid is required. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With a kinetic assumption that the density distribution function for solid phase is at equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. As compared with the previous schemes, this scheme can avoid nonphysical flow in the solid phase. As for the AMR approach, it is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which can guarantee the existence of overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is directly carried out in the moment space. Numerical tests are firstly performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrate that the present method can handle solid-liquid phase change problem with high efficiency and accuracy.
Zanotti, Olindo; Dumbser, Michael
2015-01-01
We present a new numerical tool for solving the special relativistic ideal MHD equations that is based on the combination of the following three key features: (i) a one-step ADER discontinuous Galerkin (DG) scheme that allows for an arbitrary order of accuracy in both space and time, (ii) an a posteriori subcell finite volume limiter that is activated to avoid spurious oscillations at discontinuities without destroying the natural subcell resolution capabilities of the DG finite element framework and finally (iii) a space-time adaptive mesh refinement (AMR) framework with time-accurate local time-stepping. The divergence-free character of the magnetic field is instead taken into account through the so-called 'divergence-cleaning' approach. The convergence of the new scheme is verified up to 5th order in space and time and the results for a sample of significant numerical tests including shock tube problems, the RMHD rotor problem and the Orszag-Tang vortex system are shown. We also consider a simple case of t...
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the Tohoku 2011 event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima Nuclear Power Plants, over which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.
Cui, Laizhong; Jiang, Yong; Wu, Jianping; Xia, Shutao
Most large-scale Peer-to-Peer (P2P) live streaming systems are constructed as a mesh structure, which can provide robustness in the dynamic P2P environment. The pull scheduling algorithm is widely used in this mesh structure, which degrades the performance of the entire system. Recently, network coding was introduced in mesh P2P streaming systems to improve the performance, which makes the push strategy feasible. One of the most famous scheduling algorithms based on network coding is R2, with a random push strategy. Although R2 has achieved some success, the push scheduling strategy still lacks a theoretical model and optimal solution. In this paper, we propose a novel optimal pull-push scheduling algorithm based on network coding, which consists of two stages: the initial pull stage and the push stage. The main contributions of this paper are: 1) we put forward a theoretical analysis model that considers the scarcity and timeliness of segments; 2) we formulate the push scheduling problem as a global optimization problem and decompose it into local optimization problems on individual peers; 3) we introduce some rules to transform the local optimization problem into a classical min-cost optimization problem for solving it; 4) we combine the pull strategy with the push strategy and systematically realize our scheduling algorithm. Simulation results demonstrate that decode delay, decode ratio and redundant fraction of the P2P streaming system with our algorithm can be significantly improved, without losing throughput and increasing overhead.
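As a rough illustration of scheduling driven by the two segment properties the analysis model above considers, scarcity and timeliness, a toy scoring rule might look as follows. The weighting is an assumption for illustration only, not R2's or the paper's actual optimization:

```python
# Hedged sketch: pick the next segment to request by preferring scarce
# segments (few copies among neighbors) that are near their playback deadline.
# The linear score is an illustrative stand-in for the min-cost formulation.

def pick_segment(segments):
    """segments: list of (seg_id, copies_in_neighborhood, slots_until_deadline).
    Lower score = more urgent; return the id of the most urgent segment."""
    def score(seg):
        _, copies, slack = seg
        return copies + 0.1 * slack
    return min(segments, key=score)[0]

# Segment 2 is rarest even though its deadline is furthest away.
segs = [(1, 5, 2), (2, 1, 8), (3, 2, 1)]
print(pick_segment(segs))
```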
Electrical impedance tomography using adaptive mesh refinement
严佩敏; 王朔中
2006-01-01
In electrical impedance tomography (EIT), the distribution of the internal resistivity or conductivity of an unknown object is estimated using measured boundary voltage data induced by different current patterns, with various reconstruction algorithms. The reconstruction algorithms usually employ the Newton-Raphson iteration scheme to visualize the resistivity distribution inside the object. Accuracy of the imaging process depends not only on the algorithm used, but also on the scheme of finite element discretization. In this paper an adaptive mesh refinement is used in a modified reconstruction algorithm for regularized EIT. The method has a major impact on efficient solution of the forward problem as well as on achieving improved image resolution. Computer simulations indicate that the Newton-Raphson reconstruction algorithm for EIT using adaptive mesh refinement performs better than the classical Newton-Raphson algorithm in terms of reconstructed image resolution.
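The regularized Newton-Raphson update at the heart of such reconstructions can be illustrated on a scalar toy problem. The forward model V(s) = 1/s and the regularization weight below are illustrative stand-ins for the finite element forward solve and regularization actually used in EIT:

```python
# Hedged toy of a regularized Newton/Gauss-Newton update:
#   s <- s + (J^T J + lam)^(-1) J^T (v_meas - v(s))
# with a synthetic scalar forward model v(s) = 1/s. All values illustrative.

def reconstruct(v_meas, s0, lam=1e-3, iters=20):
    s = s0
    for _ in range(iters):
        v = 1.0 / s                 # forward model: predicted boundary voltage
        J = -1.0 / (s * s)          # Jacobian dv/ds of the forward model
        # regularized normal-equation step
        s += J * (v_meas - v) / (J * J + lam)
    return s

s = reconstruct(v_meas=0.5, s0=1.0)   # true conductivity is 2.0
print(round(s, 6))
```

The damping term `lam` plays the role of the regularization that keeps the ill-posed inverse problem stable; in real EIT the Jacobian is a large matrix assembled from the finite element mesh, which is why adaptive refinement of that mesh affects both accuracy and cost.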
Laidlaw, T. L.; Jessup, B.; Stagliano, D.; Stribling, J.; Feldman, D. L.; Bollman, W.
2005-05-01
Montana's Department of Environmental Quality (DEQ) has collected macroinvertebrate data for twenty years. During this time, sampling methods and mesh sizes have been modified, though the effects of the modifications on the samples collected have not been studied. DEQ has used and continues to use both 500 and 1200 μm mesh sizes. The purpose of this study is to evaluate the effects of the different mesh sizes on taxonomic diversity and metric values. Field crews followed DEQ's traveling kick sampling methods and collected samples at each site using both mesh sizes. Sixteen sampling locations were distributed throughout two ecoregions (the Mountains and the Mountain and Valley Foothills) with replicate samples collected at seven locations. We developed a suite of both quantitative and qualitative performance characteristics (precision, accuracy, bias) and directly compared them for each mesh size. Preliminary ordination results showed no significant differences between the community level performance measures. Preliminary metric analysis showed that the 1200 μm mesh captured a greater abundance and diversity of caddisflies (Trichoptera) than the 500 μm mesh. This study will determine if data collected using different mesh sizes can be aggregated for development of bioassessment tools and will help DEQ implement consistent statewide sampling protocols.
Mosler, J.; Ortiz, M.
2009-01-01
A variational h-adaptive finite element formulation is proposed. The distinguishing feature of this method is that mesh refinement and coarsening are governed by the same minimization principle characterizing the underlying physical problem. Hence, no error estimates are invoked at any stage of the adaption procedure. As a consequence, linearity of the problem and a corresponding Hilbert-space functional framework are not required and the proposed formulation can be applied to hig...
Delaunay Refinement Mesh Generation
1997-05-18
determinant evaluation that considers floating-point operands, except for one limited example: Ottmann, Thiemt, and Ullrich [74] advocate the use of an... Sandia National Laboratories, October 1996. [74] Thomas Ottmann, Gerald Thiemt, and Christian Ullrich. Numerical Stability of Geometric Algorithms
Abdolrasoul Khosravi
2015-05-01
Findings: The results show that there is no significant difference in document relevance when PubMed suggestions or MeSH terms are used during each search session. According to the findings of this study, both PubMed suggestions and MeSH terms help the user better understand the concepts, and both broaden the terminology of users. At the same time, both methods are effective with respect to the relevance of the retrieved documents.
Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani
2014-01-01
This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...
Miyamoto, Hitoshi; Maeba, Hiroshi; Nakayama, Kazuya; Michioku, Kohji
A basin-wide stream network model was developed for stream temperature prediction in a river basin. The model used Horton's geomorphologic laws for channel networks and river basins with stream ordering systems in order to connect channel segments from sources to the river mouth. Within each segment, a theoretical solution derived from a thermal energy equation was used to predict longitudinal variation of stream temperatures. The model also took into account the effects of solar radiation reduction due to both riparian vegetation and topography, thermal advection from the sources, and lateral land-use. Comparison of the model prediction with observations in the Ibo River Basin of Japan showed very good agreement for the thermal structure throughout the river basin for almost all seasons, excluding the autumnal month in which the thermal budget of the stream water body changed from positive to negative.
Valdivia, Valeska
2014-01-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims. Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods. We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results. We find that the accuracy for the extinction of the tree-based method is better than 10%, while the ...
Donmez, O
2004-01-01
In this paper, the general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the general relativistic hydrodynamic equations are obtained by High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second order convergence of the code in 1D, 2D and 3D. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job when the resolution is increased. Second, the general relativistic hydrodynamical equa...
Scalable and Adaptive Streaming of 3D Mesh to Heterogeneous Devices
Abderrahim, Zeineb; Bouhlel, Mohamed Salim
2016-12-01
This article presents a web platform for the diffusion and visualization of 3D compressed data on the web. The major goal of this work is to adapt the transfer of three-dimensional data to the available resources (network bandwidth, the type of visualization terminal, display resolution, user's preferences...). It also attempts to provide effective consultation adapted to the user's request (preferences, levels of requested detail, etc.). Such a platform can adapt the levels of detail to changes in the bandwidth and the rendering time when loading the mesh at the client level. In addition, the levels of detail are adapted to the distance between the object and the camera. These features minimize the latency time and make real time interaction possible. The experiments as well as the comparison with existing solutions show auspicious results in terms of latency, scalability and the quality of the experience offered to the users.
Guzik, S; McCorquodale, P; Colella, P
2011-12-16
A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.
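The fourth-order Runge-Kutta time advancement named in this abstract, stripped of the mapped-grid and AMR machinery, is the standard four-stage update. Here it is applied to the scalar ODE u' = -u as a minimal self-contained sketch:

```python
import math

def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step for u' = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt * k1 / 2)
    k3 = f(t + dt / 2, u + dt * k2 / 2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate u' = -u from u(0) = 1 to t = 1 in 10 steps; exact answer is e^-1.
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(lambda t, u: -u, u, t, dt)
    t += dt
print(abs(u - math.exp(-1.0)))
```

For a fourth-order method the global error scales as dt^4, so with dt = 0.1 the error here is far below 1e-6; halving dt should shrink it by roughly a factor of 16.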
Prasad Kella, Vara; Ghosh, J.; Chattopadhyay, P. K.; Sharma, D.; Saxena, Y. C.
2017-03-01
Instabilities in the sheath-presheath region formed in plasma-boundary layers are known to modify the particle flow velocities and their distribution functions, hence influencing the particle transport in this region significantly. In this paper, experimental observations of the ion-ion counter streaming instability excited in the sheath-presheath region of Argon (Ar), Helium (He), and Ar + He plasmas have been reported. These instabilities are excited in the vicinity of a stainless steel mesh grid placed inside the plasma. Floating potential (FP) fluctuations from the grid and from a movable Langmuir probe placed in the sheath-presheath region are measured. The frequency spectra of FP fluctuations in an argon plasma show a dominant broad peak in the range of 10-20 kHz centering around 15 kHz, which is identified as due to the ion-ion counter streaming instability. This frequency peak exists only in the sheath-presheath region and ceases to exist when the mesh grid is covered with a thin metal foil from one side, which restricts the counter streaming of the ions. The measured wave number, k, of the wave matches quite well with the calculated one from the dispersion relation of ion-ion counter streaming instability. The experiments are repeated to study the instability in He and Ar + He (two ion species) plasmas in similar experimental conditions. The neutral pressure threshold for sustenance of this instability has also been observed.
Schartmann, M; Burkert, A; Gillessen, S; Genzel, R; Pfuhl, O; Eisenhauer, F; Plewa, P M; Ott, T; George, E M; Habibi, M
2015-01-01
The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-$\\gamma$ data, (3) a detailed comparison to the observed high-quality position-velocity diagrams and the evolution of the total Brackett-$\\gamma$ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scen...
Dönmez, Orhan
2004-09-01
In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high resolution shock capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job when the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. In order to do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second order accurate solutions in space and time.
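Strang splitting, used above to couple the flux and source parts, takes a half step of one operator, a full step of the other, then another half step of the first, giving second-order accuracy in time. A scalar sketch with illustrative stand-in operators (not the GRH fluxes or sources):

```python
import math

# Hedged sketch of Strang splitting: half flux step, full source step,
# half flux step. The two exact scalar solves below are stand-ins only.

def flux_half(u, dt):
    """Exact half-step of u' = -u (proxy for the flux operator)."""
    return u * math.exp(-dt / 2)

def source_full(u, dt):
    """Exact full step of u' = -2u (proxy for the stiff source operator)."""
    return u * math.exp(-2 * dt)

def strang_step(u, dt):
    return flux_half(source_full(flux_half(u, dt), dt), dt)

# Combined problem is u' = -3u; these linear operators commute, so the
# splitting is exact here (up to floating-point rounding).
u, dt = 1.0, 0.01
for _ in range(100):
    u = strang_step(u, dt)
print(abs(u - math.exp(-3.0)))
```

For non-commuting operators, such as the actual flux and source terms, the splitting error is O(dt^2) per unit time rather than zero, which matches the second-order accuracy claimed in the abstract.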
1981-01-01
This report summarizes the progress of the Solvent Refined Coal (SRC) project at the SRC Pilot Plant in Fort Lewis, Washington, and the Process Development Unit (P-99) in Harmarville, Pennsylvania. After the remaining runs of the slurry preheater survey test program were completed January 14, the Fort Lewis Pilot Plant was shut down to inspect Slurry Preheater B and to insulate the coil for future testing at higher rates of heat flux. Radiographic inspection of the coil showed that the welds at the pressure taps and the immersion thermowells did not meet design specifications. Slurry Preheater A was used during the first 12 days of February while weld repairs and modifications to Slurry Preheater B were completed. Two attempts to complete a material balance run on Powhatan No. 6 Mine coal were made, but neither was successful. Slurry Preheater B was in service the remainder of the quarter. The start of a series of runs at higher heat flux was delayed because of plugging in both the slurry and the hydrogen flow metering systems. Three baseline runs and three slurry runs of the high heat flux program were completed before the plant was shut down March 12 for repair of the Inert Gas Unit. Attempts to complete a fourth slurry run at high heat flux were unsuccessful because of problems with the coal feed handling and the vortex mix systems. Process Development Unit (P-99) completed three of the four runs designed to study the effect of dissolver L/D ratio. The fourth was under way at the end of the period. SRC yield correlations have been developed that include coal properties as independent variables. A preliminary ranking of coals according to their reactivity in PDU P-99 has been made. Techniques for studying coking phenomena are now in place.
Valdivia, Valeska; Hennebelle, Patrick
2014-11-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much more. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations since no communication is needed between CPUs when using a fully threaded tree. It is then suitable to parallel computing. We show that the screening for far UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We
Core, X.
2002-02-01
The isobar approximation for the system of the balance equations of mass, momentum, energy and chemical species is a suitable approximation to represent low Mach number reactive flows. In this approximation, which neglects acoustic phenomena, the mixture is hydrodynamically incompressible and the thermodynamic effects lead to a uniform compression of the system. We present a novel numerical scheme for this approximation. An incremental projection method, which uses the original form of the mass balance equation, discretizes the Navier-Stokes equations in time. Spatial discretization is achieved through a finite volume approach on a MAC-type staggered mesh. A higher order de-centered scheme is used to compute the convective fluxes. We associate with this discretization a local mesh refinement method, based on the Flux Interface Correction technique. A first application concerns a forced flow with variable density which mimics a combustion problem. The second application is natural convection, first with small temperature variations and then beyond the limit of validity of the Boussinesq approximation. Finally, we treat a third application which is a laminar diffusion flame. For each of these test problems, we demonstrate the robustness of the proposed numerical scheme, notably with respect to spatial variations of the density. We analyze the gain in accuracy obtained with the local mesh refinement method. (author)
Surface meshing with curvature convergence
Li, Huibin
2014-06-01
Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Streaming Surface Reconstruction from Real Time 3D Measurements
Bodenmüller, Tim
2009-01-01
In this thesis, a robust method for fast surface reconstruction from real time 3D point streams is presented. It is designed for the integration in a fast visual feedback system that supports a user while manually 3D scanning objects. The method iteratively generates a dense and homogeneous triangular mesh by inserting sample points from the real time data stream and refining the surface model locally. A spatial data structure ensures a fast access to growing point sets and continuously updat...
Mesh Adaptation and Shape Optimization on Unstructured Meshes Project
National Aeronautics and Space Administration — In this SBIR CRM proposes to implement the entropy adjoint method for solution adaptive mesh refinement into the Loci/CHEM unstructured flow solver. The scheme will...
An Adaptive Mesh Algorithm: Mesh Structure and Generation
Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch, and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally
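The locally refined AMR idea above, where only leaf zones exist and cells split wherever an error indicator is large, can be conveyed with a minimal one-dimensional sketch. The binary-tree cells, the tanh profile, and the variation-based indicator below are illustrative assumptions, not the algorithm of the report itself:

```python
import math

class Cell:
    """Leaf cell of a 1D locally refined (binary-tree) mesh."""
    def __init__(self, x0, x1, level):
        self.x0, self.x1, self.level = x0, x1, level

def refine(cells, indicator, tol, max_level):
    """One AMR cycle: split every leaf whose error indicator exceeds tol."""
    out = []
    for c in cells:
        if c.level < max_level and indicator(c.x0, c.x1) > tol:
            xm = 0.5 * (c.x0 + c.x1)
            out += [Cell(c.x0, xm, c.level + 1), Cell(xm, c.x1, c.level + 1)]
        else:
            out.append(c)
    return out

# Indicator: variation of the steep profile f(x) = tanh(50*(x - 0.5)) across a cell.
ind = lambda a, b: abs(math.tanh(50 * (b - 0.5)) - math.tanh(50 * (a - 0.5)))

cells = [Cell(i / 8, (i + 1) / 8, 0) for i in range(8)]
for _ in range(4):
    cells = refine(cells, ind, tol=0.2, max_level=4)
```

After four cycles the finest zones cluster around the steep front at x = 0.5, while the smooth regions near the boundaries keep their original coarse zones, which is exactly the resource concentration the abstract describes.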
2015-01-01
Mesh generation and visualization software based on the CGAL library. Folder content: drawmesh Visualize slices of the mesh (surface/volumetric) as wireframe on top of an image (3D). drawsurf Visualize surfaces of the mesh (surface/volumetric). img2mesh Convert isosurface in image to volumetric...
Neale, Richard B. [University Corporation For Atmospheric Research, Boulder, CO (United States)
2015-12-01
In this project we analyze climate simulations using the Community Earth System Model (CESM) in order to determine the modeled response and sensitivity to horizontal resolution. Simple aqua-planet configurations were used to provide a clean comparison of the response to resolution in CESM. This enables us to easily examine all aspects of the model sensitivity to resolution including mean quantities, variability and physical parameterization tendencies: the chief reflection of resolution sensitivity. An extension to the global resolution sensitivity study is the examination of regional grid refinement where resolution changes are prescribed in a single global simulation. We examine the relevance of the global resolution sensitivity results as applied to these regional refinement simulations. In particular we examine how variations in the grid resolution, centered on different parts of the globe, lead to differences in the parameterized response and the potential to generate residual circulations as a result. Given the potential to generate this resolution sensitivity we examine simple modifications to the parameterized physics that are able to moderate any residual circulations. Finally, we transfer the framework to the standard AMIP configuration to examine the resolution sensitivity in the presence of compounding effects such as land-sea distributions, orography and seasonal variation.
Godsk, Mikkel
This paper presents a flexible model, ‘STREAM’, for transforming higher science education into blended and online learning. The model is inspired by ideas of active and collaborative learning and builds on feedback strategies well-known from Just-in-Time Teaching, Flipped Classroom, and Peer Instruction. The aim of the model is to provide both a concrete and comprehensible design toolkit for adopting and implementing educational technologies in higher science teaching practice and at the same time comply with diverse ambitions. As opposed to the above-mentioned feedback strategies, the STREAM...
1994-01-01
For finite element approximations of singular solutions with quasi-uniform meshes, the pollution error may be significant (depending on the strength of the singularity), and η%-superconvergence regions may not exist. Fig. 24 illustrates the pollution effect and η%-superconvergence for singular solutions on a meshed L-shaped domain.
6th International Meshing Roundtable '97
White, D.
1997-09-01
The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special-purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.
MPDATA error estimator for mesh adaptivity
Szmelter, Joanna; Smolarkiewicz, Piotr K.
2006-04-01
In multidimensional positive definite advection transport algorithm (MPDATA) the leading error as well as the first- and second-order solutions are known explicitly by design. This property is employed to construct refinement indicators for mesh adaptivity. Recent progress with the edge-based formulation of MPDATA facilitates the use of the method in an unstructured-mesh environment. In particular, the edge-based data structure allows for flow solvers to operate on arbitrary hybrid meshes, thereby lending itself to implementations of various mesh adaptivity techniques. A novel unstructured-mesh nonoscillatory forward-in-time (NFT) solver for compressible Euler equations is used to illustrate the benefits of adaptive remeshing as well as mesh movement and enrichment for the efficacy of MPDATA-based flow solvers. Validation against benchmark test cases demonstrates robustness and accuracy of the approach.
A comparison of tetrahedral mesh improvement techniques
Freitag, L.A.; Ollivier-Gooch, C. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.
1996-12-01
Automatic mesh generation and adaptive refinement methods for complex three-dimensional domains have proven to be very successful tools for the efficient solution of complex application problems. These methods can, however, produce poorly shaped elements that cause the numerical solution to be less accurate and more difficult to compute. Fortunately, the shape of the elements can be improved through several mechanisms, including face-swapping techniques that change local connectivity and optimization-based mesh smoothing methods that adjust grid point location. The authors consider several criteria for each of these two methods and compare the quality of several meshes obtained by using different combinations of swapping and smoothing. Computational experiments show that swapping is critical to the improvement of general mesh quality and that optimization-based smoothing is highly effective in eliminating very small and very large angles. The highest quality meshes are obtained by using a combination of swapping and smoothing techniques.
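The smoothing half of the comparison above repositions grid points to improve element shape. As a simpler stand-in for a full optimization-based smoother, the "smart" Laplacian variant below moves a vertex to its neighbors' centroid only when the worst incident angle improves; the 2D patch and function names are illustrative, not from the paper:

```python
import numpy as np

def min_angle(pts, tri):
    """Smallest interior angle (radians) of triangle tri over vertex array pts."""
    a, b, c = pts[tri[0]], pts[tri[1]], pts[tri[2]]
    angs = []
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angs.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return min(angs)

def smart_laplacian_step(pts, tris, vid, neighbors):
    """Move vertex vid to its neighbors' centroid, but keep the move
    only if the worst incident triangle angle improves."""
    incident = [t for t in tris if vid in t]
    before = min(min_angle(pts, t) for t in incident)
    old = pts[vid].copy()
    pts[vid] = pts[list(neighbors)].mean(axis=0)
    after = min(min_angle(pts, t) for t in incident)
    if after < before:          # reject moves that degrade quality
        pts[vid] = old
    return pts

# Four triangles around a badly placed center vertex.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.9, 0.9]])
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
pts = smart_laplacian_step(pts, tris, 4, [0, 1, 2, 3])
```

Here the center vertex moves to (0.5, 0.5), raising the worst incident angle from a sliver to 45 degrees; a true optimization-based smoother would instead solve a local optimization problem for the new position.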
Hernia Surgical Mesh Implants
The majority of tissue used to produce these mesh implants is from a pig (porcine) or cow (bovine).
Urogynecologic Surgical Mesh Implants
The majority of tissue used to produce these mesh implants is from a pig (porcine) or cow (bovine).
Spherical geodesic mesh generation
Fung, Jimmy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kenamond, Mark Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burton, Donald E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-27
In ALE simulations with moving meshes, mesh topology has a direct influence on feature representation and code robustness. In three-dimensional simulations, modeling spherical volumes and features is particularly challenging for a hydrodynamics code. Calculations on traditional spherical meshes (such as spin meshes) often lead to errors and symmetry breaking. Although the underlying differencing scheme may be modified to rectify this, the differencing scheme may not be accessible. This work documents the use of spherical geodesic meshes to mitigate solution-mesh coupling. These meshes are generated notionally by connecting geodesic surface meshes to produce triangular-prismatic volume meshes. This mesh topology is fundamentally different from traditional mesh topologies and displays superior qualities such as topological symmetry. This work describes the geodesic mesh topology as well as motivating demonstrations with the FLAG hydrocode.
Moving mesh generation with a sequential approach for solving PDEs
In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are solved simultaneously, and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution of physical and mesh equations typically suffers from long computation times due to the highly nonlinear coupling between the two equations. Moreover, the extended system (physical and mesh equations) may be sensitive to tuning parameters such as a temporal relaxation factor. It is therefore useful to design ... adaptive grid method (local refinement by adding/deleting meshes at a discrete time level) as well as of efficiency for the dynamic adaptive grid method (or moving mesh method), where the number of meshes is not changed. For illustration, a phase change problem is solved with the decomposition algorithm.
HE JiFeng
2008-01-01
This paper presents a refinement calculus for service components. We model the behaviour of an individual service by a guarded design, which enables one to separate the responsibility of clients from the commitment made by the system, and to identify a component by a set of failures and divergences. Protocols are introduced to coordinate the interactions between a component and its external environment. We adopt the notion of process refinement to formalize the substitutivity of components, and provide a complete proof method based on the notion of simulations.
Multiple Staggered Mesh Ewald: Boosting the Accuracy of the Smooth Particle Mesh Ewald Method
Wang, Han; Fang, Jun
2016-01-01
The smooth particle mesh Ewald (SPME) method is the standard method for computing electrostatic interactions in molecular simulations. In this work, the multiple staggered mesh Ewald (MSME) method is proposed to boost the accuracy of the SPME method. Unlike SPME, which achieves higher accuracy by refining the mesh, MSME improves the accuracy by averaging the standard SPME forces computed on, e.g., $M$ staggered meshes. We prove, from a theoretical perspective, that MSME is as accurate as SPME but uses $M^2$ times fewer mesh points in a certain parameter range. In the complementary parameter range, MSME is as accurate as SPME with twice the interpolation order. The theoretical conclusions are numerically validated both on a uniform, uncorrelated charge system and on a three-point-charge water system that is widely used as a solvent for biomacromolecules.
Adaptive sampling for mesh spectrum editing
ZHAO Xiang-jun; ZHANG Hong-xin; BAO Hu-jun
2006-01-01
A mesh editing framework is presented in this paper, which integrates Free-Form Deformation (FFD) and geometry signal processing. By using a simplified model of the original mesh, the editing task can be accomplished with a few operations. We take the deformation of the proxy and the position coordinates of the mesh model as geometry signals. Wavelet analysis is employed to separate local detail information gracefully. The crucial innovation of this paper is a new adaptive regular sampling approach for our signal-analysis-based editing framework. In our approach, an original mesh is resampled and then refined iteratively, which reflects optimization of our proposed spectrum-preserving energy. As an extension of our spectrum editing scheme, the editing principle is applied to transferring geometry details, which brings satisfying results.
An effective quadrilateral mesh adaptation
KHATTRI Sanjay Kumar
2006-01-01
Accuracy of a simulation strongly depends on the grid quality. Here, quality means orthogonality at the boundaries and quasi-orthogonality within the critical regions, smoothness, bounded aspect ratios and solution adaptive behaviour. It is not recommended to refine the parts of the domain where the solution shows little variation. It is desired to concentrate grid points and cells in the part of the domain where the solution shows strong gradients or variations. We present a simple, effective and computationally efficient approach for quadrilateral mesh adaptation. Several numerical examples are presented for supporting our claim.
Coupling of non-conforming meshes in a component mode synthesis method
Akcay-Perdahcioglu, D.; Doreille, M.; Boer, de A.; Ludwig, T.
2013-01-01
A common mesh refinement-based coupling technique is embedded into a component mode synthesis method, Craig–Bampton. More specifically, a common mesh is generated between the non-conforming interfaces of the coupled structures, and the compatibility constraints are enforced on that mesh via L2-minim
Ralf Deiterding
2011-01-01
Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.
Effects of mesh style and grid convergence on numerical simulation accuracy of centrifugal pump
刘厚林; 刘明明; 白羽; 董亮
2015-01-01
In order to evaluate the effects of mesh generation technique and grid convergence on predicted pump performance, three widely used mesh styles, structured hexahedral, unstructured tetrahedral, and hybrid prismatic/tetrahedral, were generated for a centrifugal pump model, and grid convergence was assessed quantitatively with a grid convergence index (GCI), which accounts for the degree of grid refinement. The structured, unstructured, and hybrid meshes are found to give somewhat different velocity distributions in the impeller as the grid cell count changes, and the simulation results deviate from experimental data to different degrees. The computed GCI value for the structured meshes is lower than that for the unstructured and hybrid meshes. Meanwhile, the structured meshes are observed to capture more vortices in the impeller passage. Nevertheless, the hybrid meshes are found to have a larger low-velocity area at the outlet and more secondary vortices at a specified location than the structured and unstructured meshes.
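The GCI mentioned above follows Roache's formulation: from solutions on three systematically refined grids one estimates the observed order of convergence and an error band on the fine-grid result. A minimal sketch, using a manufactured second-order data set rather than the pump results:

```python
import math

def gci(f1, f2, f3, r, Fs=1.25):
    """Grid convergence index (Roache) from solutions on three grids:
    f1 finest, f3 coarsest, constant refinement ratio r, safety factor Fs.
    Returns (GCI of the fine grid, observed order of convergence p)."""
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
    e21 = abs((f2 - f1) / f1)                               # relative fine-grid error
    return Fs * e21 / (r**p - 1.0), p

# Manufactured data converging at second order: f(h) = 1 + 0.5*h**2,
# sampled at h = 0.25, 0.5, 1.0 (refinement ratio r = 2).
g, p = gci(1.0 + 0.5 * 0.25**2, 1.0 + 0.5 * 0.5**2, 1.0 + 0.5 * 1.0**2, r=2.0)
```

For this data the observed order comes out as p = 2, as designed; in practice a computed p far from the scheme's formal order signals that the grids are not yet in the asymptotic range.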
Pei Ping; YURY N. PETRENKO
2015-01-01
A mesh network simulation framework that provides a powerful and concise modeling chain for a network structure is introduced in this report. Mesh networks have a special topological structure. The paper investigates message transfer in wireless mesh network simulation and how it compares with cellular network simulation. Finally, the experimental results show that mesh networks follow different transmission principles than cellular networks, and multi...
Mesh generation in archipelagos
Terwisscha van Scheltinga, A.; Myers, P.G.; Pietrzak, J.D.
2012-01-01
A new mesh size field is presented that is specifically designed for efficient meshing of highly irregular oceanic domains: archipelagos. The new approach is based on the standard mesh size field that uses the proximity to the nearest coastline. Here, the proximities to the two nearest coastlines
Finite element mesh generation
Lo, Daniel SH
2014-01-01
Highlights the Progression of Meshing Technologies and Their Applications. Finite Element Mesh Generation provides a concise and comprehensive guide to the application of finite element mesh generation over 2D domains, curved surfaces, and 3D space. Organised according to the geometry and dimension of the problem domains, it develops from the basic meshing algorithms to the most advanced schemes to deal with problems with specific requirements such as boundary conformity, adaptive and anisotropic elements, shape qualities, and mesh optimization. It sets out the fundamentals of popular techniques
Hybrid Surface Mesh Adaptation for Climate Modeling
Ahmed Khamayseh; Valmor de Almeida; Glen Hansen
2008-01-01
Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.
Hydrodynamic simulations on a moving Voronoi mesh
Springel, Volker
2011-01-01
At the heart of any method for computational fluid dynamics lies the question of how the simulated fluid should be discretized. Traditionally, a fixed Eulerian mesh is often employed for this purpose, which in modern schemes may also be adaptively refined during a calculation. Particle-based methods on the other hand discretize the mass instead of the volume, yielding an approximately Lagrangian approach. It is also possible to achieve Lagrangian behavior in mesh-based methods if the mesh is allowed to move with the flow. However, such approaches have often been fraught with substantial problems related to the development of irregularity in the mesh topology. Here we describe a novel scheme that eliminates these weaknesses. It is based on a moving unstructured mesh defined by the Voronoi tessellation of a set of discrete points. The mesh is used to solve the hyperbolic conservation laws of ideal hydrodynamics with a finite volume approach, based on a second-order Godunov scheme with an exact Riemann solver. A...
Evolution of Cold Streams and Emergence of the Hubble Sequence
Cen, Renyue
2014-01-01
A new physical framework for the emergence of the Hubble sequence is outlined, based on novel analyses performed to quantify the evolution of cold streams of a large sample of galaxies from a state-of-the-art ultra-high resolution, large-scale adaptive mesh-refinement hydrodynamic simulation in a fully cosmological setting. It is found that the following three key physical variables of galactic cold inflows crossing the virial sphere substantially decrease with decreasing redshift: the number of streams N_{90} that make up 90% of concurrent inflow mass flux, average inflow rate per stream dot M_{90} and mean (mass flux weighted) gas density in the streams n_{gas}. Another key variable, the stream dimensionless angular momentum parameter lambda, instead is found to increase with decreasing redshift. Assimilating these trends and others leads naturally to a physically coherent scenario for the emergence of the Hubble sequence, including the following expectations: (1) the predominance of a mixture of disproport...
Electrostatic PIC with adaptive Cartesian mesh
Kolobov, Vladimir I
2016-01-01
We describe an initial implementation of an electrostatic Particle-in-Cell (ES-PIC) module with adaptive Cartesian mesh in our Unified Flow Solver framework. Challenges of PIC method with cell-based adaptive mesh refinement (AMR) are related to a decrease of the particle-per-cell number in the refined cells with a corresponding increase of the numerical noise. The developed ES-PIC solver is validated for capacitively coupled plasma, its AMR capabilities are demonstrated for simulations of streamer development during high-pressure gas breakdown. It is shown that cell-based AMR provides a convenient particle management algorithm for exponential multiplications of electrons and ions in the ionization events.
Details of tetrahedral anisotropic mesh adaptation
Jensen, Kristian Ejlebjerg; Gorman, Gerard
2016-04-01
We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without the use of any for loops, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality, 2. face-to-edge swapping is unnecessary, and 3. optimising for the Vassilevski functional tends to give a slightly higher value for the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.
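Observation 1 above, refining by a single edge split per element, can be illustrated in two dimensions with longest-edge bisection: one edge is split at its midpoint and the element is replaced by two children sharing the new vertex. This is an illustrative 2D analogue, not the authors' vectorised MATLAB implementation:

```python
import numpy as np

def split_longest_edge(pts, tri):
    """Bisect a triangle across its longest edge: a single edge split
    per element, yielding two children that share the new midpoint."""
    i, j, k = tri
    edges = [((i, j), k), ((j, k), i), ((k, i), j)]
    (a, b), opp = max(edges,
                      key=lambda e: np.linalg.norm(pts[e[0][0]] - pts[e[0][1]]))
    pts = np.vstack([pts, 0.5 * (pts[a] + pts[b])])  # append midpoint vertex
    m = len(pts) - 1
    return pts, [(a, m, opp), (m, b, opp)]

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
pts, children = split_longest_edge(pts, (0, 1, 2))
```

In a real mesh the neighbour sharing the split edge must be bisected too so the midpoint vertex stays conforming; repeated longest-edge bisection is a classical way to keep element angles bounded under refinement.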
2015-01-01
With the advances in mobile computing technologies and the growth of the Net, mobile mesh networks are going through a set of important evolutionary steps. In this paper, we survey architectural aspects of mobile mesh networks and their use cases and deployment models. Also, we survey challenging areas of mobile mesh networks and describe our vision of promising mobile services. This paper presents a basic introductory material for Masters of Open Information Technologies Lab, interested in m...
Cheng, Siu-Wing; Shewchuk, Jonathan
2013-01-01
Written by authors at the forefront of modern algorithms research, Delaunay Mesh Generation demonstrates the power and versatility of Delaunay meshers in tackling complex geometric domains ranging from polyhedra with internal boundaries to piecewise smooth surfaces. Covering both volume and surface meshes, the authors fully explain how and why these meshing algorithms work.The book is one of the first to integrate a vast amount of cutting-edge material on Delaunay triangulations. It begins with introducing the problem of mesh generation and describing algorithms for constructing Delaunay trian
2011-11-01
triangles in two dimensions and tetrahedra (tets) in three dimensions. There are many other ways to discretize a region using unstructured meshes, but this... The boundary points associated with the airfoil surface were moved, but all of the interior points remained stationary, which resulted in a mesh
An Adaptive Mesh Algorithm: Mapping the Mesh Variables
Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-25
Both thermodynamic and kinematic variables must be mapped. The kinematic variables are defined on a separate kinematic mesh; it is the dual mesh to the thermodynamic mesh. The kinematic variables are mapped by calculating their contributions on the old thermodynamic mesh, mapping those contributions onto the new thermodynamic mesh, and then synthesizing the mapped kinematic variables on the new kinematic mesh. In this document the map of the thermodynamic variables will be described.
Parallel Mesh Adaptive Techniques for Complex Flow Simulation: Geometry Conservation
Angelo Casagrande
2012-01-01
Dynamic mesh adaptation on unstructured grids, by localised refinement and derefinement, is a very efficient tool for enhancing solution accuracy and optimising computational time. One of the major drawbacks, however, resides in the projection of the new nodes created during the refinement process onto the boundary surfaces. This can be addressed by the introduction of a library capable of handling geometric properties given by a CAD (computer-aided design) description. This is of particular interest also to enhance the adaptation module when the mesh is being smoothed, and hence moved, to then reproject it onto the surface of the exact geometry.
Reaction rates for reaction-diffusion kinetics on unstructured meshes
Hellander, Stefan; Petzold, Linda
2017-02-01
The reaction-diffusion master equation is a stochastic model often utilized in the study of biochemical reaction networks in living cells. It is applied when the spatial distribution of molecules is important to the dynamics of the system. A viable approach to resolve the complex geometry of cells accurately is to discretize space with an unstructured mesh. Diffusion is modeled as discrete jumps between nodes on the mesh, and the diffusion jump rates can be obtained through a discretization of the diffusion equation on the mesh. Reactions can occur when molecules occupy the same voxel. In this paper, we develop a method for computing accurate reaction rates between molecules occupying the same voxel in an unstructured mesh. For large voxels, these rates are known to be well approximated by the reaction rates derived by Collins and Kimball, but as the mesh is refined, no analytical expression for the rates exists. We reduce the problem of computing accurate reaction rates to a pure preprocessing step, depending only on the mesh and not on the model parameters, and we devise an efficient numerical scheme to estimate them to high accuracy. We show in several numerical examples that as we refine the mesh, the results obtained with the reaction-diffusion master equation approach those of a more fine-grained Smoluchowski particle-tracking model.
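In the large-voxel limit mentioned above, the mesoscopic rates reduce to the classical Collins-Kimball expression: the intrinsic reaction rate acts in series with the Smoluchowski diffusion-limited rate. A minimal sketch of that limiting formula (units assumed consistent; this is not the authors' mesh-dependent preprocessing scheme):

```python
import math

def collins_kimball_rate(D, sigma, k_r):
    """Collins-Kimball association rate: intrinsic rate k_r in series
    with the Smoluchowski diffusion-limited rate 4*pi*D*sigma, where D
    is the relative diffusion constant and sigma the reaction radius."""
    k_smol = 4.0 * math.pi * D * sigma
    return k_smol * k_r / (k_smol + k_r)
```

The series form captures both regimes: for a very fast intrinsic reaction the rate saturates at the diffusion limit, while for a slow one it approaches k_r itself.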
Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution
Delzanno, G. L. [Los Alamos National Laboratory]; Finn, J. M. [Los Alamos National Laboratory]
2009-01-01
Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of the technique has been the formulation of robust, reliable mesh motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1, 2], the advantages of this approach in regards to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh motion approaches, without resorting to ad-hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3, 4]). We explore two distinct r-refinement implementations of MK: direct, where the current mesh relates to an initial, unchanging mesh, and sequential, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior in regards to mesh distortion and robustness. The properties of the approach are illustrated with a paradigmatic hyperbolic PDE, the advection of a passive scalar. Imposed velocity flow fields of varying vorticity levels and flow shears are considered.
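The equidistribution idea underlying the abstract above is easiest to see in one dimension, where r-refinement reduces to placing nodes so that each cell holds an equal share of the integral of a monitor function (de Boor's construction, not the multidimensional Monge-Kantorovich formulation itself). The monitor function and point counts below are illustrative:

```python
import numpy as np

def equidistribute(x, monitor, n_new):
    """Place n_new+1 mesh points so each cell carries an equal share of
    the integral of the monitor function sampled at nodes x (1D r-refinement)."""
    # Cumulative integral of the monitor via the trapezoidal rule.
    M = np.concatenate([[0.0],
                        np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, M[-1], n_new + 1)
    # Invert the (monotone) cumulative map by linear interpolation.
    return np.interp(targets, M, x)

x = np.linspace(0.0, 1.0, 201)
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5)**2)   # monitor peaked at x = 0.5
xi = equidistribute(x, w, 40)
```

The resulting nodes cluster tightly around x = 0.5 where the monitor is large and remain sparse near the boundaries; because the cumulative map is strictly increasing, the new mesh cannot tangle, which is the 1D shadow of the tangling-free property the abstract establishes in higher dimensions.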
Bucki, Marek; Payan, Yohan; 10.1016/j.media.2010.02.003
2010-01-01
Finite Element mesh generation remains an important issue for patient specific biomechanical modeling. While some techniques make automatic mesh generation possible, in most cases, manual mesh generation is preferred for better control over the sub-domain representation, element type, layout and refinement that it provides. Yet, this option is time consuming and not suited for intraoperative situations where model generation and computation time is critical. To overcome this problem we propose a fast and automatic mesh generation technique based on the elastic registration of a generic mesh to the specific target organ in conjunction with element regularity and quality correction. This Mesh-Match-and-Repair (MMRep) approach combines control over the mesh structure along with fast and robust meshing capabilities, even in situations where only partial organ geometry is available. The technique was successfully tested on a database of 5 pre-operatively acquired complete femora CT scans, 5 femoral heads partially...
Yan, Su; Arslanbekov, Robert R; Kolobov, Vladimir I; Jin, Jian-Ming
2016-01-01
A discontinuous Galerkin time-domain (DGTD) method based on dynamically adaptive Cartesian meshes (ACM) is developed for a full-wave analysis of electromagnetic fields in dispersive media. Hierarchical Cartesian grids offer simplicity close to that of structured grids and the flexibility of unstructured grids while being highly suited for adaptive mesh refinement (AMR). The developed DGTD-ACM achieves a desired accuracy by refining non-conformal meshes near material interfaces to reduce stair-casing errors without sacrificing the high efficiency afforded with uniform Cartesian meshes. Moreover, DGTD-ACM can dynamically refine the mesh to resolve the local variation of the fields during propagation of electromagnetic pulses. A local time-stepping scheme is adopted to alleviate the constraint on the time-step size due to the stability condition of the explicit time integration. Simulations of electromagnetic wave diffraction over conducting and dielectric cylinders and spheres demonstrate that the proposed meth...
Held, Gilbert
2005-01-01
Wireless mesh networking is a new technology that has the potential to revolutionize how we access the Internet and communicate with co-workers and friends. Wireless Mesh Networks examines the concept and explores its advantages over existing technologies. This book explores existing and future applications, and examines how some of the networking protocols operate.The text offers a detailed analysis of the significant problems affecting wireless mesh networking, including network scale issues, security, and radio frequency interference, and suggests actual and potential solutions for each pro
Mesh implants: An overview of crucial mesh parameters
Zhu, Lei-Ming; Schuster, Philipp; Klinge, Uwe
2015-01-01
Hernia repair is one of the most frequently performed surgical interventions that use mesh implants. This article evaluates crucial mesh parameters to facilitate selection of the most appropriate mesh implant, considering raw materials, mesh composition, structure parameters and mechanical parameters. A literature review was performed using the PubMed database. The most important mesh parameters in the selection of a mesh implant are the raw material, structural parameters and mechanical parameters, which should match the physiological conditions. The structural parameters, especially the porosity, are the most important predictors of the biocompatibility performance of synthetic meshes. Meshes with large pores exhibit less inflammatory infiltrate, connective tissue and scar bridging, which allows increased soft tissue ingrowth. The raw material and combination of raw materials of the used mesh, including potential coatings and textile design, strongly impact the inflammatory reaction to the mesh. Synthetic meshes made from innovative polymers combined with surface coating have been demonstrated to exhibit advantageous behavior in specialized fields. Monofilament, large-pore synthetic meshes exhibit advantages. The value of mesh classification based on mesh weight seems to be overestimated. Mechanical properties of meshes, such as anisotropy/isotropy, elasticity and tensile strength, are crucial parameters for predicting mesh performance after implantation.
Pascucci, V
2004-02-18
This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is only devoted to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond the performance considerations that will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.
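The per-tetrahedron work that the vertex program performs — turning one quad into the piece of isosurface inside a tetrahedron — amounts to linear interpolation along the edges whose endpoint values straddle the isovalue. A CPU-side sketch with made-up vertex data, assuming the field varies linearly inside each tetrahedron (not the paper's GPU code):

```python
def tet_isosurface(verts, vals, iso):
    """Intersection polygon of the isosurface {f = iso} with one
    tetrahedron, assuming f varies linearly: interpolate along each of
    the 6 edges whose endpoint values straddle the isovalue."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    points = []
    for a, b in edges:
        fa, fb = vals[a] - iso, vals[b] - iso
        if (fa < 0) != (fb < 0):          # edge crosses the isosurface
            t = fa / (fa - fb)            # linear interpolation parameter
            p = tuple(verts[a][k] + t * (verts[b][k] - verts[a][k])
                      for k in range(3))
            points.append(p)
    return points  # 3 points (triangle) or 4 points (quad)

# Unit tetrahedron with f(x, y, z) = x; the isosurface x = 0.25 cuts a triangle.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tri = tet_isosurface(verts, [0.0, 1.0, 0.0, 0.0], 0.25)
```

Since a plane cuts a tetrahedron in either a triangle or a quad, one quad primitive per tetrahedron (with degenerate vertices for the triangle case) suffices — which is why the streaming application never needs to store isosurface geometry.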
Botsch, Mario; Pauly, Mark; Alliez, Pierre; Levy, Bruno
2010-01-01
Geometry processing, or mesh processing, is a fast-growing area of research that uses concepts from applied mathematics, computer science, and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment, and classical computer-aided design, to biomedical computing, reverse engineering, and scientific computing. Over the last several years, triangle meshes have become increasingly popular,
Atkey, Robert; Ghani, Neil
2012-01-01
Dependently typed programming languages allow sophisticated properties of data to be expressed within the type system. Of particular use in dependently typed programming are indexed types that refine data by computationally useful information. For example, the N-indexed type of vectors refines lists by their lengths. Other data types may be refined in similar ways, but programmers must produce purpose-specific refinements on an ad hoc basis, developers must anticipate which refinements to include in libraries, and implementations must often store redundant information about data and their refinements. In this paper we show how to generically derive inductive characterisations of refinements of inductive types, and argue that these characterisations can alleviate some of the aforementioned difficulties associated with ad hoc refinements. Our characterisations also ensure that standard techniques for programming with and reasoning about inductive types are applicable to refinements, and that refinements can the...
A LAGUERRE VORONOI BASED SCHEME FOR MESHING PARTICLE SYSTEMS.
Bajaj, Chandrajit
2005-06-01
We present Laguerre Voronoi based subdivision algorithms for the quadrilateral and hexahedral meshing of particle systems within a bounded region in two and three dimensions, respectively. Particles are smooth functions over circular or spherical domains. The algorithm first breaks the bounded region containing the particles into Voronoi cells that are then subsequently decomposed into an initial quadrilateral or an initial hexahedral scaffold conforming to individual particles. The scaffolds are subsequently refined via applications of recursive subdivision (splitting and averaging rules). Our choice of averaging rules yields a particle-conforming quadrilateral/hexahedral mesh of good quality that is smooth and differentiable in the limit. Extensions of the basic scheme to dynamic re-meshing in the case of particle addition, deletion, and motion are also discussed. Motivating applications of the use of these static and dynamic meshes for particle systems include the mechanics of epoxy/glass composite materials, bio-molecular force field calculations, and gas hydrodynamics simulations in cosmology.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
EVOLUTION OF COLD STREAMS AND THE EMERGENCE OF THE HUBBLE SEQUENCE
Cen, Renyue, E-mail: cen@astro.princeton.edu [Princeton University Observatory, Princeton, NJ 08544 (United States)
2014-07-01
A new physical framework for the emergence of the Hubble sequence is outlined, based on novel analyses performed to quantify the evolution of cold streams for a large sample of galaxies from a state-of-the-art ultra-high resolution, large-scale adaptive mesh-refinement hydrodynamic simulation in a fully cosmological setting. It is found that the following three key physical variables of galactic cold inflows crossing the virial sphere substantially decrease with decreasing redshift: the number of streams N_90 that make up 90% of the concurrent inflow mass flux, the average inflow rate per stream Ṁ_90, and the mean (mass flux weighted) gas density in the streams n_gas. Another key variable, the stream dimensionless angular momentum parameter λ, is found to instead increase with decreasing redshift. Assimilating these trends and others naturally leads to a physically coherent scenario for the emergence of the Hubble sequence, including the following expectations: (1) the predominance of a mixture of disproportionately small irregular and complex disk galaxies at z ≥ 2, when most galaxies have multiple concurrent streams; (2) the beginning of the appearance of flocculent spirals at z ∼ 1-2, when the number of concurrent streams is about 2-3; (3) the appearance of grand-design spiral galaxies at z ≤ 1, when galaxies with only one major cold stream emerge in significant numbers. These expected general trends are in good accord with observations. Early-type galaxies are those that have entered a perennial state of zero cold gas streams, with their abundance increasing with decreasing redshift.
Design of Finite Element Tools for Coupled Surface and Volume Meshes
Daniel Köster; Oliver Kriessl; Kunibert G. Siebert
2008-01-01
Many problems with underlying variational structure involve a coupling of volume with surface effects. A straightforward approach in a finite element discretization is to make use of the surface triangulation that is naturally induced by the volume triangulation. In an adaptive method one wants to facilitate "matching" local mesh modifications, i.e., local refinement and/or coarsening, of volume and surface mesh with standard tools such that the surface grid is always induced by the volume grid. We describe the concepts behind this approach for bisectional refinement and describe new tools incorporated in the finite element toolbox ALBERTA. We also present several important applications of the mesh coupling.
Documentation for MeshKit - Reactor Geometry (&mesh) Generator
Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-09-30
This report gives documentation for using MeshKit’s Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in the modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne’s SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of the MeshKit source code, tools and other available algorithms is also presented for developers who wish to extend MeshKit with new algorithms. RGG tools work in serial and parallel and have been used to model complex reactor cores consisting of conical pins, load pads, several thousand axially varying material properties of instrumentation pins, and other interstitial meshes.
Gill, S P D; Gibson, B K; Flynn, C; Ibata, R A; Lewis, G F; Gill, Stuart P.D.; Knebe, Alexander; Gibson, Brad K.; Flynn, Chris; Ibata, Rodrigo A.; Lewis, Geraint F.
2002-01-01
An adaptive multigrid approach to simulating the formation of structure from collisionless dark matter is described. MLAPM (Multi-Level Adaptive Particle Mesh) is one of the most efficient serial codes available on the cosmological 'market' today. As part of Swinburne University's role in the development of the Square Kilometer Array, we are implementing hydrodynamics, feedback, and radiative transfer within the MLAPM adaptive mesh, in order to simulate baryonic processes relevant to the interstellar and intergalactic media at high redshift. We will outline our progress to date in applying the existing MLAPM to a study of the decay of satellite galaxies within massive host potentials.
DeCristofaro, Michael A.; Lansdowne, Chatwin A.; Schlesinger, Adam M.
2014-01-01
NASA has identified standardized wireless mesh networking as a key technology for future human and robotic space exploration. Wireless mesh networks enable rapid deployment and provide coverage in undeveloped regions. Mesh networks are also self-healing, resilient, and extensible, qualities not found in traditional infrastructure-based networks. Mesh networks can offer lower size, weight, and power (SWaP) than overlapped infrastructure per application. To better understand the maturity, characteristics and capability of the technology, we developed an 802.11 mesh network consisting of a combination of heterogeneous commercial off-the-shelf devices and open-source firmware and software packages. Various streaming applications were operated over the mesh network, including voice and video, and performance measurements were made under different operating scenarios. During the testing, several issues with the currently implemented mesh network technology were identified and outlined for future work.
An accuracy assessment of Cartesian-mesh approaches for the Euler equations
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
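Inferring the order of the truncation error from uniformly refined solutions, as done in this assessment, follows the standard Richardson-style estimate: if the error behaves as C·h^p, two meshes give the observed order directly. The error values below are invented for illustration, not the paper's data:

```python
import math

def observed_order(err_coarse, err_fine, ratio=2.0):
    """Observed order of accuracy from errors on two meshes whose
    spacings differ by `ratio` (Richardson-style estimate):
    err ~ C * h**p  =>  p = log(e_c / e_f) / log(ratio)."""
    return math.log(err_coarse / err_fine) / math.log(ratio)

# Discrete errors measured against an exact solution (e.g. Ringleb's flow)
# on uniformly refined meshes; halving h cuts a second-order error by ~4x.
p = observed_order(1.2e-3, 3.1e-4)
```

An exact solution such as Ringleb's flow is what makes this estimate clean: the discrete error is known exactly rather than inferred from a fine-grid surrogate.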
On the Support of Multimedia Applications over Wireless Mesh Networks
Chemseddine BEMMOUSSAT
2013-05-01
For next-generation wireless networks, supporting quality of service (QoS) in multimedia applications like video, streaming and voice over IP is a necessary and critical requirement. Wireless mesh networking is envisioned as a solution for the next generation of networks and a promising technology for supporting multimedia applications. With a decreasing number of mesh clients, QoS increases automatically. Several research efforts have focused on improving QoS in wireless mesh networks (WMNs); they try to improve basic algorithms such as routing protocols or channel access schemes, but at times this is not sufficient to ensure a robust solution for transporting multimedia applications over WMNs. In this paper we propose an efficient routing algorithm for multimedia transmission in the mesh network, and an approach to QoS at the MAC layer that facilitates video transport over the studied network.
Isotopic Implicit Surface Meshing
Boissonnat, Jean-Daniel; Cohen-Steiner, David; Vegter, Gert
2004-01-01
This paper addresses the problem of piecewise linear approximation of implicit surfaces. We first give a criterion ensuring that the zero-set of a smooth function and that of a piecewise linear approximation of it are isotopic. Then, we deduce from this criterion an implicit surface meshing algorithm.
Vickers, Trevor
1992-01-01
On the Refinement Calculus gives one view of the development of the refinement calculus and its attempt to bring together - among other things - Z specifications and Dijkstra's programming language. It is an excellent source of reference material for all those seeking the background and mathematical underpinnings of the refinement calculus.
Mesh Resolution Effect on 3D RANS Turbomachinery Flow Simulations
Yershov, Sergiy
2016-01-01
The paper presents a study of the effect of mesh refinement on numerical results of 3D RANS computations of turbomachinery flows. The CFD solver F, which is based on the second-order accurate ENO scheme, is used in this study. A simplified multigrid algorithm and local time stepping permit decreased computational time. The flow computations are performed for a number of turbine and compressor cascades and stages. In all flow cases, successively refined meshes of H-type with an approximate orthogonalization near the solid walls were generated. The results obtained are compared in order to estimate both their mesh convergence and their ability to resolve the transonic flow pattern. It is concluded that for a thorough study of the fine phenomena of 3D turbomachinery flows, it makes sense to use computational meshes with from several million up to several hundred million cells per turbomachinery blade channel, while for industrial computations, a mesh of about or less than one mil...
Advanced Automatic Hexahedral Mesh Generation from Surface Quad Meshes
Kremer, Michael; Bommes, David; Lim, Isaak; Kobbelt, Leif
2013-01-01
A purely topological approach for the generation of hexahedral meshes from quadrilateral surface meshes of genus zero has been proposed by M. Müller-Hannemann: in a first stage, the input surface mesh is reduced to a single hexahedron by successively eliminating loops from the dual graph of the quad mesh; in the second stage, the hexahedral mesh is constructed by extruding a layer of hexahedra for each dual loop from the first stage in reverse elimination order. In th...
Feature detection of triangular meshes via neighbor supporting
Xiao-chao WANG; Jun-jie CAO; Xiu-ping LIU; Bao-jun LI; Xi-quan SHI; Yi-zhen SUN
2012-01-01
We propose a robust method for detecting features on triangular meshes by combining normal tensor voting with neighbor supporting. Our method contains two stages: feature detection and feature refinement. First, the normal tensor voting method is modified to detect the initial features, which may include some pseudo features. Then, at the feature refinement stage, a novel salient measure deriving from the idea of neighbor supporting is developed. Benefiting from the integrated reliable salient measure, pseudo features can be effectively discriminated from the initially detected features and removed. Compared to previous methods based on differential geometric properties, the main advantage of our method is that it can detect both sharp and weak features. Numerical experiments show that our algorithm is robust, effective, and can produce more accurate results. We also discuss how detected features are incorporated into applications, such as feature-preserving mesh denoising and hole-filling, and present visually appealing results by integrating feature information.
OPTIMIZING EUCALYPTUS PULP REFINING
Vail Manfredi
2004-01-01
This paper discusses the refining of bleached eucalyptus kraft pulp (BEKP). Pilot plant tests were carried out to optimize the refining process and to identify the effects of refining variables on final paper quality and process costs. The following parameters are discussed: pulp consistency, disk pattern design, refiner speed, energy input, refiner configuration (parallel or serial) and refining intensity. The effects of refining on pulp fibers were evaluated against pulp quality properties, such as physical strengths, bulk, opacity and porosity, as well as interactions with the papermaking process, such as paper machine runnability, paper breaks and refining control. The results showed that process optimization, considering pulp quality and refining costs, was obtained when eucalyptus pulp is refined at the lowest intensity and the highest pulp consistency possible. Changes in operational refining conditions will have the highest impact on total energy requirements (costs) without any significant effect on final paper properties. It was also observed that classical ways to control the industrial operation, such as those based on drainage measurements, do not represent the best alternative to maximize final paper properties or paper machine runnability.
Efficient Packet Forwarding in Mesh Network
Soumen Kanrar
2012-01-01
Wireless Mesh Network (WMN) is a multi-hop, low-cost, easily maintained and robust network providing reliable service coverage. WMNs consist of mesh routers and mesh clients. In this architecture, while static mesh routers form the wireless backbone, mesh clients access the network through mesh routers as well as by directly meshing with each other. Unlike traditional wireless networks, a WMN is dynamically self-organized and self-configured. In other words, the nodes in the mesh network au...
Streams with Strahler Stream Order
Minnesota Department of Natural Resources — Stream segments with Strahler stream order values assigned. As of 01/08/08 the linework is from the DNR24K stream coverages and will not match the updated...
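Strahler ordering itself is a simple bottom-up rule on the stream network tree: headwater segments get order 1, and where two tributaries of equal order meet, the downstream order increases by one. A sketch with a made-up network (not the DNR24K data):

```python
def strahler(children, node):
    """Strahler order of a stream network given as a tree:
    a headwater (leaf) segment has order 1; where two tributaries of
    equal order i meet, the downstream order is i + 1, otherwise it is
    the maximum tributary order."""
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = [strahler(children, k) for k in kids]
    top = max(orders)
    return top + 1 if orders.count(top) >= 2 else top

# Two pairs of first-order headwaters join (order 2 each), and the two
# order-2 reaches meet at the outlet (order 3). Names are hypothetical.
net = {
    "outlet": ["reach_a", "reach_b"],
    "reach_a": ["head_1", "head_2"],
    "reach_b": ["head_3", "head_4"],
}
order = strahler(net, "outlet")
```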
Tang, Zhao; Wei, Qingshan; Wei, Alexander
2011-12-01
Metal-mesh lithography (MML) is a practical hybrid of microcontact printing and capillary force lithography that can be applied over millimeter-sized areas with a high level of uniformity. MML can be achieved by blotting various inks onto substrates through thin copper grids, relying on preferential wetting and capillary interactions between template and substrate for pattern replication. The resulting mesh patterns, which are inverted relative to those produced by stenciling or serigraphy, can be reproduced with low micrometer resolution. MML can be combined with other surface chemistry and lift-off methods to create functional microarrays for diverse applications, such as periodic islands of gold nanorods and patterned corrals for fibroblast cell cultures.
E pur si muove: Galilean-invariant cosmological hydrodynamical simulations on a moving mesh
Springel, Volker
2009-01-01
Hydrodynamic cosmological simulations at present usually employ either the Lagrangian SPH technique, or Eulerian hydrodynamics on a Cartesian mesh with adaptive mesh refinement. Both of these methods have disadvantages that negatively impact their accuracy in certain situations. We here propose a novel scheme which largely eliminates these weaknesses. It is based on a moving unstructured mesh defined by the Voronoi tessellation of a set of discrete points. The mesh is used to solve the hyperbolic conservation laws of ideal hydrodynamics with a finite volume approach, based on a second-order unsplit Godunov scheme with an exact Riemann solver. The mesh-generating points can in principle be moved arbitrarily. If they are chosen to be stationary, the scheme is equivalent to an ordinary Eulerian method with second order accuracy. If they instead move with the velocity of the local flow, one obtains a Lagrangian formulation of hydrodynamics that does not suffer from the mesh distortion limitations inherent in othe...
Semi-structured meshes for axial turbomachinery blades
Sbardella, L.; Sayma, A. I.; Imregun, M.
2000-03-01
This paper describes the development and application of a novel mesh generator for the flow analysis of turbomachinery blades. The proposed method uses a combination of structured and unstructured meshes, the former in the radial direction and the latter in the axial and tangential directions, in order to exploit the fact that blade-like structures are not strongly three-dimensional since the radial variation is usually small. The proposed semi-structured mesh formulation was found to have a number of advantages over its structured counterparts. There is a significant improvement in the smoothness of the grid spacing and also in capturing particular aspects of the blade passage geometry. It was also found that the leading- and trailing-edge regions could be discretized without generating superfluous points in the far field, and that further refinements of the mesh to capture wake and shock effects were relatively easy to implement. The capability of the method is demonstrated in the case of a transonic fan blade for which the steady state flow is predicted using both structured and semi-structured meshes. A totally unstructured mesh is also generated for the same geometry to illustrate the disadvantages of using such an approach for turbomachinery blades.
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
Matthew G. Knepley
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
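The arrow-centric view can be sketched as a pair of adjacency maps, with cone and support as mutually dual queries over the same arrows. This toy container only mimics the flavor of the Sieve interface; the class, method names, and mesh below are invented for illustration and are not the C++ API:

```python
from collections import defaultdict

class ToySieve:
    """Toy incidence container in the spirit of Sieve: each arrow
    (face -> cell) records that `face` lies on the boundary of `cell`.
    cone(p) returns the boundary points of p; support(q) returns the
    points whose boundary contains q (the dual query)."""
    def __init__(self):
        self._cone = defaultdict(list)
        self._support = defaultdict(list)

    def add_arrow(self, face, cell):
        # One arrow updates both directions of the incidence relation.
        self._cone[cell].append(face)
        self._support[face].append(cell)

    def cone(self, p):
        return list(self._cone[p])

    def support(self, q):
        return list(self._support[q])

# Two triangles t0, t1 sharing edge e1; shared-boundary queries fall
# out of the arrows with no extra bookkeeping.
s = ToySieve()
for face, cell in [("e0", "t0"), ("e1", "t0"), ("e2", "t0"),
                   ("e1", "t1"), ("e3", "t1"), ("e4", "t1")]:
    s.add_arrow(face, cell)
```

Because the same arrow structure can relate cells to partitions instead of faces to cells, the identical queries distribute a mesh partition — the point the abstract makes about reusing one mechanism for meshes and partitions alike.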
Hybrid direct and iterative solvers for h refined grids with singularities
Paszyński, Maciej R.
2015-04-27
This paper describes a hybrid direct and iterative solver for two- and three-dimensional h adaptive grids with point singularities. The point singularities are eliminated by using a sequential, linear computational cost O(N) solver on the CPU [1]. The remaining Schur complements are submitted to an incomplete LU preconditioned conjugate gradient (ILUPCG) iterative solver. The approach is compared to the standard algorithm, which performs static condensation over the entire mesh and executes the ILUPCG algorithm on top of it. The hybrid solver is applied to two- or three-dimensional grids automatically h refined towards point or edge singularities. The automatic refinement is based on relative error estimates between the coarse and fine mesh solutions [2], and the optimal refinements are selected using projection-based interpolation. The computational mesh is partitioned into sub-meshes with local point and edge singularities separated. This is done by using a greedy algorithm.
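The core algebraic step — eliminating one set of unknowns and handing the Schur complement to a second solver — can be sketched on a tiny block system. Here both stages are solved directly with exact rationals purely for illustration; in the paper the Schur complement goes to ILUPCG, and the matrices below are invented:

```python
from fractions import Fraction as F

def gauss_solve(M, R):
    """Solve M X = R by Gauss-Jordan elimination in exact rational
    arithmetic (clarity over performance); R may have several columns."""
    n, k = len(M), len(R[0])
    A = [[F(M[i][j]) for j in range(n)] + [F(R[i][j]) for j in range(k)]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                fac = A[r][col]
                A[r] = [a - fac * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def matmul(X, Y):
    return [[sum(X[i][j] * Y[j][c] for j in range(len(Y)))
             for c in range(len(Y[0]))] for i in range(len(X))]

def schur_solve(A, B, C, D, f, g):
    """Static condensation: eliminate x from [A B; C D][x; y] = [f; g]
    via the Schur complement S = D - C A^-1 B, then back-substitute."""
    AinvB = gauss_solve(A, B)
    Ainvf = gauss_solve(A, [[v] for v in f])
    CAinvB = matmul(C, AinvB)
    S = [[F(D[i][j]) - CAinvB[i][j] for j in range(len(D[0]))]
         for i in range(len(D))]
    CAinvf = matmul(C, Ainvf)
    y = gauss_solve(S, [[F(g[i]) - CAinvf[i][0]] for i in range(len(g))])
    By = matmul(B, y)
    x = gauss_solve(A, [[F(f[i]) - By[i][0]] for i in range(len(f))])
    return [r[0] for r in x], [r[0] for r in y]

# Interior unknowns x = (x1, x2) eliminated; interface unknown y remains.
x, y = schur_solve(A=[[2, 0], [0, 2]], B=[[1], [0]],
                   C=[[1, 0]], D=[[3]], f=[1, 2], g=[4])
```

The hybrid scheme's payoff is that the eliminated block (here A) is cheap to factor around the singularities, so only the much smaller Schur system needs the iterative solver.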
Development and Verification of Unstructured Adaptive Mesh Technique with Edge Compatibility
Ito, Kei; Kunugi, Tomoaki; Ohshima, Hiroyuki
In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is the suppression of gas entrainment (GE) phenomena at the gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish an accurate model of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of GE phenomena in the JSFR: an unstructured adaptive mesh technique which can dynamically apply fine cells to the region where GE occurs. In this paper, as a part of this development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional adaptive mesh technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The two-dimensional unstructured adaptive mesh technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the technique succeeds in providing a high-precision solution even when a poor-quality, distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on a structured mesh with a larger number of cells.
An application of MeSH enrichment analysis in livestock.
Morota, G; Peñagaricano, F; Petersen, J L; Ciobanu, D C; Tsuyuzaki, K; Nikaido, I
2015-08-01
An integral part of functional genomics studies is to assess the enrichment of specific biological terms in lists of genes found to be playing an important role in biological phenomena. Contrasting the observed frequency of annotated terms with those of the background is at the core of overrepresentation analysis (ORA). Gene Ontology (GO) is a means to consistently classify and annotate gene products and has become a mainstay in ORA. Alternatively, Medical Subject Headings (MeSH) offers a comprehensive life science vocabulary including additional categories that are not covered by GO. Although MeSH is applied predominantly in human and model organism research, its full potential in livestock genetics is yet to be explored. In this study, MeSH ORA was evaluated to discern biological properties of identified genes and contrast them with the results obtained from GO enrichment analysis. Three published datasets were employed for this purpose, representing a gene expression study in dairy cattle, the use of SNPs for genome-wide prediction in swine and the identification of genomic regions targeted by selection in horses. We found that several overrepresented MeSH annotations linked to these gene sets share similar concepts with those of GO terms. Moreover, MeSH yielded unique annotations, which are not directly provided by GO terms, suggesting that MeSH has the potential to refine and enrich the representation of biological knowledge. We demonstrated that MeSH can be regarded as another choice of annotation to draw biological inferences from genes identified via experimental analyses. When used in combination with GO terms, our results indicate that MeSH can enhance our functional interpretations for specific biological conditions or the genetic basis of complex traits in livestock species.
Dahlgaard, Katja; Raposo, Alexandre A S F; Niccoli, Teresa; St Johnston, Daniel
2007-10-01
Mutants in the actin nucleators Cappuccino and Spire disrupt the polarized microtubule network in the Drosophila oocyte that defines the anterior-posterior axis, suggesting that microtubule organization depends on actin. Here, we show that Cappuccino and Spire organize an isotropic mesh of actin filaments in the oocyte cytoplasm. capu and spire mutants lack this mesh, whereas overexpressed truncated Cappuccino stabilizes the mesh in the presence of Latrunculin A and partially rescues spire mutants. Spire overexpression cannot rescue capu mutants, but prevents actin mesh disassembly at stage 10B and blocks late cytoplasmic streaming. We also show that the actin mesh regulates microtubules indirectly, by inhibiting kinesin-dependent cytoplasmic flows. Thus, the Capu pathway controls alternative states of the oocyte cytoplasm: when active, it assembles an actin mesh that suppresses kinesin motility to maintain a polarized microtubule cytoskeleton. When inactive, unrestrained kinesin movement generates flows that wash microtubules to the cortex.
A multilevel adaptive mesh generation scheme using Kd-trees
Alfonso Limon
2009-04-01
We introduce a mesh refinement strategy for PDE-based simulations that benefits from a multilevel decomposition. Using Harten's MRA in terms of the Schroder-Pander linear multiresolution analysis [20], we are able to bound discontinuities in $\mathbb{R}$. This MRA is extended to $\mathbb{R}^n$ in terms of n orthogonal linear transforms and utilized to identify cells that contain a codimension-one discontinuity. These refinement cells become leaf nodes in a balanced Kd-tree, such that a local dyadic MRA is produced in $\mathbb{R}^n$ while maintaining a minimal computational footprint. The nodes in the tree form an adaptive mesh whose density increases in the vicinity of a discontinuity.
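The cell-flagging step based on linear multiresolution details can be illustrated in 1-D: for smooth data, the detail (deviation from the linear interpolant of the two neighbours) is O(h^2), while a jump leaves an O(1) detail, so a threshold localizes the discontinuity. A minimal sketch with a hypothetical threshold; the paper's n-dimensional, Kd-tree-based version is more involved.

```python
import numpy as np

def flag_discontinuities(f, threshold):
    """Flag samples whose linear-MRA detail coefficient is large.

    For point values f[j] on a uniform grid, the detail at each interior
    point is d[j] = f[j] - (f[j-1] + f[j+1]) / 2: O(h^2) for smooth data,
    O(1) across a jump.
    """
    d = f[1:-1] - 0.5 * (f[:-2] + f[2:])
    return np.flatnonzero(np.abs(d) > threshold) + 1  # indices into f

x = np.linspace(0.0, 1.0, 65)
f = np.where(x < 0.5, np.sin(x), 1.0 + np.sin(x))  # unit jump at x = 0.5
flags = flag_discontinuities(f, threshold=0.1)     # hypothetical threshold
print(x[flags])  # the two samples straddling the jump
```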
An edge-based unstructured mesh discretisation in geospherical framework
Szmelter, Joanna; Smolarkiewicz, Piotr K.
2010-07-01
An arbitrary finite-volume approach is developed for discretising partial differential equations governing fluid flows on the sphere. Unconventionally for unstructured-mesh global models, the governing equations are cast in the anholonomic geospherical framework established in computational meteorology. The resulting discretisation retains proven properties of the geospherical formulation, while it offers the flexibility of unstructured meshes in enabling irregular spatial resolution. The latter allows for a global enhancement of the spatial resolution away from the polar regions as well as for a local mesh refinement. A class of non-oscillatory forward-in-time edge-based solvers is developed and applied to numerical examples of three-dimensional hydrostatic flows, including shallow-water benchmarks, on a rotating sphere.
The finite cell method for polygonal meshes: poly-FCM
Duczek, Sascha; Gabbert, Ulrich
2016-10-01
In the current article, we extend the two-dimensional version of the finite cell method (FCM), which has so far only been used for structured quadrilateral meshes, to unstructured polygonal discretizations. Therefore, the adaptive quadtree-based numerical integration technique is reformulated and the notion of generalized barycentric coordinates is introduced. We show that the resulting polygonal (poly-)FCM approach retains the optimal rates of convergence if and only if the geometry of the structure is adequately resolved. The main advantage of the proposed method is that it inherits the ability of polygonal finite elements for local mesh refinement and for the construction of transition elements (e.g. conforming quadtree meshes without hanging nodes). These properties along with the performance of the poly-FCM are illustrated by means of several benchmark problems for both static and dynamic cases.
Sierra Toolkit computational mesh conceptual model
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-03-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Sibeyn, J.; Rao, P; Juurlink, B.
1996-01-01
Algorithms for performing gossiping on one- and higher dimensional meshes are presented. As a routing model, we assume the practically important worm-hole routing. For one-dimensional arrays and rings, we give a novel lower bound and an asymptotically optimal gossiping algorithm for all choices of the parameters involved. For two-dimensional meshes and tori, several simple algorithms composed of one-dimensional phases are presented. For an important range of packet and mesh sizes it gives cle...
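The one-dimensional phases mentioned here can be illustrated with a toy store-and-forward simulation of gossiping on a ring (note that the paper itself analyses worm-hole routing, a different cost model):

```python
def ring_gossip(n):
    """All-to-all gossip on an n-node ring by nearest-neighbour exchange.

    A toy store-and-forward model: each round, every node merges the sets
    held by both ring neighbours, so every item spreads one hop per round
    in each direction and full dissemination takes floor(n/2) rounds.
    """
    known = [{i} for i in range(n)]
    rounds = 0
    while any(len(k) < n for k in known):
        known = [known[i] | known[(i - 1) % n] | known[(i + 1) % n]
                 for i in range(n)]
        rounds += 1
    return rounds

print(ring_gossip(8))  # 4
```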
Synthesized Optimization of Triangular Mesh
HU Wenqiang; YANG Wenyu
2006-01-01
Triangular meshes are often used to describe geometric objects in computational models for digital manufacture; thus a mesh model with both uniform triangular shape and excellent geometric shape is desired. In fact, however, optimizing the triangular shape often conflicts with optimizing the geometric shape. In this paper, a synthesized optimizing algorithm is presented that subdivides triangles to achieve a trade-off between the geometric and triangular shape optimization of the mesh model. A resulting mesh with uniform triangular shape and excellent topology is obtained.
Automatic off-body overset adaptive Cartesian mesh method based on an octree approach
Peron, Stephanie, E-mail: stephanie.peron@onera.fr [ONERA - The French Aerospace Lab, F-92322 Chatillon (France); Benoit, Christophe, E-mail: christophe.benoit@onera.fr [ONERA - The French Aerospace Lab, F-92322 Chatillon (France)
2013-01-01
This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large-scale discrepancies in resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to accurately capture the flow features.
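The octree-leaf-to-Cartesian-block conversion can be sketched in 2-D with a quadtree; the refinement indicator below is hypothetical, standing in for the near-body resolution criterion.

```python
import math

def build_quadtree(x0, y0, size, level, max_level, needs_refinement):
    """Recursively refine a quadtree; each leaf stands for one Cartesian block.

    A 2-D stand-in for the paper's octree: a leaf (x0, y0, size, level)
    would carry a structured Cartesian block with a fixed point count, so
    deeper leaves mean finer local spacing near the body.
    """
    if level == max_level or not needs_refinement(x0, y0, size):
        return [(x0, y0, size, level)]
    h = size / 2
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += build_quadtree(x0 + dx, y0 + dy, h, level + 1,
                                 max_level, needs_refinement)
    return leaves

# Hypothetical indicator: refine any cell whose centre is near a
# unit-radius "body" at the origin.
near_body = lambda x, y, s: math.hypot(x + s / 2, y + s / 2) < 1.0 + s
blocks = build_quadtree(-4.0, -4.0, 8.0, 0, 4, near_body)
print(len(blocks))  # small blocks cluster around the body
```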
Streaming nested data parallelism on multicores
Madsen, Frederik Meisner; Filinski, Andrzej
2016-01-01
the available computation resources. To allow for an accurate space-cost model in such cases, we have previously proposed the Streaming NESL language, a refinement of NESL with a high-level notion of streamable sequences. In this paper, we report on experience with a prototype implementation of Streaming NESL...
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
Knepley, Matthew G
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical PDE algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or 'arrows', the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete des...
Prevention of Adhesion to Prosthetic Mesh
van ’t Riet, Martijne; de Vos van Steenwijk, Peggy J.; Bonthuis, Fred; Marquet, Richard L.; Steyerberg, Ewout W.; Jeekel, Johannes; Bonjer, H. Jaap
2003-01-01
Objective To assess whether use of antiadhesive liquids or coatings could prevent adhesion formation to prosthetic mesh. Summary Background Data Incisional hernia repair frequently involves the use of prosthetic mesh. However, concern exists about development of adhesions between viscera and the mesh, predisposing to intestinal obstruction or enterocutaneous fistulas. Methods In 91 rats, a defect in the muscular abdominal wall was created, and mesh was fixed intraperitoneally to cover the defect. Rats were divided in five groups: polypropylene mesh only (control group), addition of Sepracoat or Icodextrin solution to polypropylene mesh, Sepramesh (polypropylene mesh with Seprafilm coating), and Parietex composite mesh (polyester mesh with collagen coating). Seven and 30 days postoperatively, adhesions were assessed and wound healing was studied by microscopy. Results Intraperitoneal placement of polypropylene mesh was followed by bowel adhesions to the mesh in 50% of the cases. A mean of 74% of the mesh surface was covered by adhesions after 7 days, and 48% after 30 days. Administration of Sepracoat or Icodextrin solution had no influence on adhesion formation. Coated meshes (Sepramesh and Parietex composite mesh) had no bowel adhesions. Sepramesh was associated with a significant reduction of the mesh surface covered by adhesions after 7 and 30 days. Infection was more prevalent with Parietex composite mesh, with concurrent increased mesh surface covered by adhesions after 30 days (78%). Conclusions Sepramesh significantly reduced mesh surface covered by adhesions and prevented bowel adhesion to the mesh. Parietex composite mesh prevented bowel adhesions as well but increased infection rates in the current model. PMID:12496539
Risk Factors for Mesh Exposure after Transvaginal Mesh Surgery
Ke Niu; Yong-Xian Lu; Wen-Jie Shen; Ying-Hui Zhang; Wen-Ying Wang
2016-01-01
Background: Mesh exposure after surgery continues to be a clinical challenge for urogynecological surgeons. The purpose of this study was to explore the risk factors for polypropylene (PP) mesh exposure after transvaginal mesh (TVM) surgery. Methods: This study included 195 patients with advanced pelvic organ prolapse (POP) who underwent TVM from January 2004 to December 2012 at the First Affiliated Hospital of Chinese PLA General Hospital. Clinical data were evaluated, including patient demography, TVM type, concomitant procedures, operation time, blood loss, postoperative morbidity, and mesh exposure. Mesh exposure was identified through postoperative vaginal examination. Statistical analysis was performed to identify risk factors for mesh exposure. Results: Two hundred and nine transvaginal PP meshes were placed, including 194 in the anterior wall and 15 in the posterior wall. Concomitant tension-free vaginal tape was performed in 61 cases. The mean follow-up time was 35.1 ± 23.6 months. PP mesh exposure was identified in 32 cases (16.4%), with 31 in the anterior wall and 1 in the posterior wall. A significant difference was found in operating time and concomitant procedures between the exposed and non-exposed groups (F = 7.443, P = 0.007; F = 4.307, P = 0.039, respectively). Binary logistic regression revealed that the number of concomitant procedures and operation time were risk factors for mesh exposure (P = 0.001, P = 0.043). Conclusion: Concomitant procedures and increased operating time increase the risk for postoperative mesh exposure in patients undergoing TVM surgery for POP.
OPTIMIZING EUCALYPTUS PULP REFINING
Vail Manfredi
2004-01-01
This paper discusses the refining of bleached eucalyptus kraft pulp (BEKP). Pilot plant tests were carried out to optimize the refining process and to identify the effects of refining variables on final paper quality and process costs. The following parameters are discussed: pulp consistency, disk pattern design, refiner speed, energy input, refiner configuration (parallel or serial), and refining intensity. The effects of refining on pulp fibers were evaluated against the pulp quality properties, such as physical strengths, bulk, opacity, and porosity, as well as the interactions with the papermaking process, such as paper machine runnability, paper breaks, and refining control. The results showed that process optimization, considering pulp quality and refining costs, was obtained when eucalyptus pulp is refined at the lowest intensity and the highest pulp consistency possible. Changes in the operational refining conditions have the highest impact on total energy requirements (costs) without any significant effect on final paper properties. It was also observed that classical ways to control the industrial operation, such as those based on drainage measurements, do not represent the best alternative for maximizing either the final paper properties or the paper machine runnability.
Grouper: A Compact, Streamable Triangle Mesh Data Structure
Luffel, Mark [Georgia Inst. of Technology, Atlanta, GA (United States). Visualization and Usability Center (GVU); Gurung, Topraj [Georgia Inst. of Technology, Atlanta, GA (United States). Visualization and Usability Center (GVU); Lindstrom, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rossignac, Jarek [Georgia Inst. of Technology, Atlanta, GA (United States). Visualization and Usability Center (GVU)
2014-01-01
Here, we present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We also present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. In this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle-i.e., less than the three vertex references stored with each triangle in a conventional indexed mesh format-Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access. We demonstrate the versatility and performance benefits of Grouper using a suite of example meshes and processing kernels.
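The grouping idea, fixed-size records that mostly hold two adjacent triangles and a shared vertex, can be sketched with a greedy pairing pass. This is a toy of the grouping step only; the real Grouper also interleaves geometry with connectivity and optimizes the (NP-hard) assignment the abstract mentions.

```python
from collections import defaultdict

def pair_triangles(triangles):
    """Greedily group triangles into two-triangle records.

    Each record ideally holds two edge-adjacent triangles (which share two
    vertices); leftovers get single-triangle records.
    """
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (a, c)):
            edge_to_tris[tuple(sorted(e))].append(t)
    used, records = set(), []
    for t, (a, b, c) in enumerate(triangles):
        if t in used:
            continue
        used.add(t)
        mate = None
        for e in ((a, b), (b, c), (a, c)):
            for u in edge_to_tris[tuple(sorted(e))]:
                if u not in used:
                    mate = u
                    break
            if mate is not None:
                break
        if mate is not None:
            used.add(mate)
            records.append((t, mate))
        else:
            records.append((t,))
    return records

quad = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing edge (0, 2)
print(pair_triangles(quad))    # one two-triangle record: [(0, 1)]
```

On a long triangle strip, every record holds two triangles, approaching the two-references-per-triangle storage the abstract describes.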
Parameterization for fitting triangular mesh
LIN Hongwei; WANG Guojin; LIU Ligang; BAO Hujun
2006-01-01
In recent years, with the development of 3D data acquisition equipment, the study of reverse engineering has become more and more important. However, existing methods for parameterization can hardly ensure that the parametric domain is rectangular and the parametric curve grid is regular. In order to overcome these limitations, we present a novel method for the parameterization of triangular meshes in this paper. The basic idea is twofold: first, because the isotherms of a steady temperature field do not intersect with each other and are distributed uniformly, no singularity (fold-over) exists in the parameterization; second, a 3D harmonic equation is solved by the finite element method to obtain the steady temperature field on a 2D triangular mesh surface with four boundaries. Our proposed method therefore avoids the difficulty that the 2D quasi-harmonic equation cannot be solved on a 2D triangular mesh without parametric values at the mesh vertices. Furthermore, the isotherms of the temperature field are taken as one set of iso-parametric curves on the triangular mesh surface. The other set of iso-parametric curves is obtained by sequentially connecting the points with the same chord length on the isotherms. The obtained parametric curve grid is regular and distributed uniformly, and it maps the triangular mesh surface to the unit square domain, with the boundaries of the mesh surface mapped to the boundaries of the parametric domain, which ensures that the triangular mesh surface or point cloud can be fitted with a NURBS surface.
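The temperature-field idea can be illustrated on a structured grid instead of a triangular mesh: fix two opposite boundaries at different temperatures, relax the discrete Laplace equation, and read one family of iso-parametric curves off the isotherms (the maximum principle rules out fold-overs). A minimal sketch, not the paper's finite-element solver:

```python
import numpy as np

def steady_temperature(n, t_left, t_right):
    """Relax the discrete Laplace equation on an n x n grid (Jacobi sweeps).

    Left/right boundaries are held at fixed temperatures; top/bottom are
    treated as insulated (zero flux), so isotherms run between the two
    fixed boundaries and vary monotonically across the domain.
    """
    T = np.zeros((n, n))
    T[:, 0], T[:, -1] = t_left, t_right
    for _ in range(5000):
        T[1:-1, 1:-1] = 0.25 * (T[1:-1, :-2] + T[1:-1, 2:]
                                + T[:-2, 1:-1] + T[2:, 1:-1])
        T[0, 1:-1] = T[1, 1:-1]    # insulated top
        T[-1, 1:-1] = T[-2, 1:-1]  # insulated bottom
    return T

T = steady_temperature(20, 0.0, 1.0)
print(T[10, 5], T[10, 15])  # increases monotonically from left to right
```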
Guaranteed-Quality Triangular Meshes
1989-04-01
Guaranteed-Quality Triangular Meshes. L. Paul Chew, TR 89-983. Sponsored by the Defense Advanced Research Projects Agency and the U.S. Government. Cites: S... Wittchen, M. S. Shephard, K. R. Grice, and M. A. Yerry, Robust, geometrically based, automatic two-dimensional mesh generation, International Journal for
An Improved Moving Mesh Algorithm
(no author listed)
2001-01-01
We consider an iterative algorithm of mesh optimization for finite element solutions and give an improved moving mesh strategy that rapidly reduces the complexity and cost of solving variational problems. A numerical result is presented for a two-dimensional problem solved by the improved algorithm.
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
Adaptive and Unstructured Mesh Cleaving
Bronson, Jonathan R.; Sastry, Shankar P.; Levine, Joshua A.; Whitaker, Ross T.
2015-01-01
We propose a new strategy for boundary conforming meshing that decouples the problem of building tetrahedra of proper size and shape from the problem of conforming to complex, non-manifold boundaries. This approach is motivated by the observation that while several methods exist for adaptive tetrahedral meshing, they typically have difficulty at geometric boundaries. The proposed strategy avoids this conflict by extracting the boundary conforming constraint into a secondary step. We first build a background mesh having a desired set of tetrahedral properties, and then use a generalized stenciling method to divide, or “cleave”, these elements to get a set of conforming tetrahedra, while limiting the impacts cleaving has on element quality. In developing this new framework, we make several technical contributions including a new method for building graded tetrahedral meshes as well as a generalization of the isosurface stuffing and lattice cleaving algorithms to unstructured background meshes. PMID:26137171
Determination of an Initial Mesh Density for Finite Element Computations via Data Mining
Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V
2001-07-23
Numerical analysis software packages which employ a coarse first mesh or an inadequate initial mesh must undergo cumbersome and time-consuming mesh refinement studies to obtain solutions of acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an approximate initial finite element mesh density that avoids significant trial and error at the start of finite element computations. As an illustration of proof of concept, a square plate which is simply supported at its edges and subjected to a concentrated load is employed as the test case. Although simplistic, the present study provides insight into addressing the above considerations.
Refined Semilattices of Semigroups
Liang Zhang; K.P. Shum; Ronghua Zhang
2001-01-01
In this paper, we introduce the concept of refined semilattices of semigroups. This is a modified concept of the generally strong semilattice of semigroups initiated by Zhang and Huang. By using the concept of a generally strong semilattice, Zhang and Huang showed that a regular band can be expressed by a generally strong semilattice of rectangular bands. However, their proof of the associativity of the multiplication is not complete, and there exist some gaps in their construction of regular bands. We now revise the generally strong semilattices and call them refined semilattices. In this way, we are able to remove the gaps, and the associative law of the multiplication can be verified. As an application, we prove that a band is regular if and only if it is a refined semilattice of rectangular bands. In fact, refined semilattices provide a new device for the construction of new semigroups from old ones.
Refinement by interface instantiation
Hallerstede, Stefan; Hoang, Thai Son
2012-01-01
be easily refined. Our first contribution hence is a proposal for a new construct called interface that encapsulates the external variables, along with a mechanism for interface instantiation. Using the new construct and mechanism, external variables can be refined consistently. Our second contribution...... is an approach for verifying the correctness of Event-B extensions using the supporting Rodin tool. We illustrate our approach by proving the correctness of interface instantiation....
NAFTA opportunities: Petroleum refining
1993-01-01
The North American Free Trade Agreement (NAFTA) creates a more transparent environment for the sale of refined petroleum products to Mexico, and locks in access to Canada's relatively open market for these products. Canada and Mexico are sizable United States export markets for refined petroleum products, with exports of $556 million and $864 million, respectively, in 1992. These markets represent approximately 24 percent of total U.S. exports of these goods.
Bozzelli, Laura; French, Tim; Hales, James; Pinchinat, Sophie
2012-01-01
In this paper we present refinement modal logic. A refinement is like a bisimulation, except that of the three relational requirements only 'atoms' and 'back' need to be satisfied. Our logic contains a new operator 'forall' in addition to the standard modalities 'Box' for each agent. The operator 'forall' acts as a quantifier over the set of all refinements of a given model. We call it the refinement operator. As a variation on a bisimulation quantifier, it can be seen as a refinement quantifier over a variable not occurring in the formula bound by the operator. The logic combines the simplicity of multi-agent modal logic with some of the power of monadic second-order quantification. We present a sound and complete axiomatization of multi-agent refinement modal logic. We also present an extension of the logic to the modal mu-calculus, and an axiomatization for the single-agent version of this logic. Examples and applications are also discussed: to software verification and design (the set of agents can also be s...
Stream monitoring for detection of Phytophthora ramorum in Oregon
W. Sutton; E.M. Hansen; P. Reeser; A. Kanaskie
2008-01-01
Stream monitoring using leaf baits for early detection of P. ramorum is an important part of the Oregon sudden oak death program. About 50 streams in and near the Oregon quarantine area in the southwest corner of the state are currently monitored. Rhododendron and tanoak leaf baits in mesh bags are exchanged every two weeks throughout the year....
Nanowire mesh solar fuels generator
Yang, Peidong; Chan, Candace; Sun, Jianwei; Liu, Bin
2016-05-24
This disclosure provides systems, methods, and apparatus related to a nanowire mesh solar fuels generator. In one aspect, a nanowire mesh solar fuels generator includes (1) a photoanode configured to perform water oxidation and (2) a photocathode configured to perform water reduction. The photocathode is in electrical contact with the photoanode. The photoanode may include a high surface area network of photoanode nanowires. The photocathode may include a high surface area network of photocathode nanowires. In some embodiments, the nanowire mesh solar fuels generator may include an ion conductive polymer infiltrating the photoanode and the photocathode in the region where the photocathode is in electrical contact with the photoanode.
Model refinements of transformers via a subproblem finite element method
Dular, Patrick; Kuo-Peng, Patrick; Ferreira Da Luz, Mauricio; Krähenbühl, Laurent
2015-01-01
A progressive modeling of transformers is performed via a subproblem finite element method. A complete problem is split into subproblems with different adapted overlapping meshes. Model refinements are performed from ideal to real flux tubes, 1-D to 2-D to 3-D models, linear to nonlinear materials, perfect to real materials, single wire to volume conductor windings, and homogenized to fine models of cores and coils, with any coupling of these changes. The proposed unif...
Two of Singapore's refiners expand despite lack of land
Rhodes, A.K.
1995-08-14
With 1.1 million b/d of refining capacity, Singapore is the world's third largest concentrated refining center, trailing only the US Gulf Coast and Rotterdam areas. But space on the island is limited and land is expensive. Despite these restrictions, two of Singapore's major refiners have found ways to expand their operations. Singapore Refining Co. Pte. Ltd. (SRC) is completing a major expansion project, set to come on stream this fall. And Mobil Oil Singapore Pte. Ltd. brought on stream a new UOP continuous catalytic reformer (CCR) and aromatics complex in January 1994. Both projects are described.
Constancio, Silva
2006-07-01
In 2004, refining margins showed a clear improvement that persisted throughout the first three quarters of 2005. This enabled oil companies to post significantly higher earnings for their refining activity in 2004 compared to 2003, with the results of the first half of 2005 confirming this trend. As for petrochemicals, despite a steady rise in the naphtha price, higher cash margins enabled a turnaround in 2004 as well as a clear improvement in oil company financial performance that should continue in 2005, judging by the net income figures reported for the first half-year. Despite this favorable business environment, capital expenditure in refining and petrochemicals remained at a low level, especially investment in new capacity, but a number of projects are being planned for the next five years. (author)
Mersiline mesh in premaxillary augmentation.
Foda, Hossam M T
2005-01-01
Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in whom Mersiline mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersiline mesh were used: the Mersiline roll technique for cases with central symmetric deficiencies, and the Mersiline packing technique for cases with asymmetric deficiencies. Premaxillary augmentation with Mersiline mesh proved to be technically simple, easy to perform, and not associated with any complications. Periodic follow-up evaluation over a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained, with no clinically detectable resorption of the mesh implant.
GENERATION OF IRREGULAR HEXAGONAL MESHES
Vlasov Aleksandr Nikolaevich
2012-07-01
Decomposition is performed in a constructive way and, optionally, involves a meshless representation. This mapping method is then used to generate the calculation mesh. In this paper, the authors analyze different cases of mapping onto simply connected and bi-connected canonical domains. They present forward and backward mapping techniques. The potential application of these techniques to the generation of nonuniform meshes within the framework of the asymptotic homogenization theory is also demonstrated, to assess and predict effective characteristics of heterogeneous materials (composites).
Verborgh, Ruben
2013-01-01
The book is styled as a cookbook, containing recipes combined with free datasets that will turn readers into proficient OpenRefine users in the fastest possible way. This book is targeted at anyone who works with or handles a large amount of data. No prior knowledge of OpenRefine is required, as we start from the very beginning and gradually reveal more advanced features. You don't even need your own dataset, as we provide example data for trying out the book's recipes.
An arbitrary boundary triangle mesh generation method for multi-modality imaging
Zhang, Xuanxuan; Deng, Yong; Gong, Hui; Meng, Yuanzheng; Yang, Xiaoquan; Luo, Qingming
2012-03-01
Low resolution and ill-posedness are the major challenges in diffuse optical tomography (DOT) and fluorescence molecular tomography (FMT). Recently, multi-modality imaging technology that combines micro-computed tomography (micro-CT) with DOT/FMT has been developed to improve resolution and mitigate ill-posedness. To take advantage of the fine a priori anatomical maps obtained from micro-CT, we present an arbitrary-boundary triangle mesh generation method for FMT/DOT/micro-CT multi-modality imaging. A planar straight line graph (PSLG) based on the micro-CT image is obtained by an adaptive boundary sampling algorithm. The subregions of the mesh are accurately matched with anatomical structures by a two-step solution: first, the triangles and nodes are labeled during mesh refinement; then, a revising algorithm is used to modify the mesh of each subregion. Triangle meshes based on a regular model and on a micro-CT image are generated respectively. The results show that the subregions of the triangle meshes match anatomical structures accurately and that the meshes are of good quality. This provides an arbitrary-boundary triangle mesh generation method with the ability to incorporate fine a priori anatomical information into DOT/FMT reconstructions.
Image-driven mesh optimization
Lindstrom, P; Turk, G
2001-01-05
We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method--the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.
Adaptive meshing technique applied to an orthopaedic finite element contact problem.
Roarty, Colleen M; Grosland, Nicole M
2004-01-01
Finite element methods have been applied extensively and with much success in the analysis of orthopaedic implants. Recently, a growing interest has developed in the orthopaedic biomechanics community in how numerical models can be constructed for the optimal solution of problems in contact mechanics. New developments in this area are of paramount importance in the design of improved implants for orthopaedic surgery. Finite element and other computational techniques are widely applied in the analysis and design of hip and knee implants, with additional joints (ankle, shoulder, wrist) attracting increased attention. The objective of this investigation was to develop a simplified adaptive meshing scheme to facilitate the finite element analysis of a dual-curvature total wrist implant. Using currently available software, the analyst has great flexibility in mesh generation, but must prescribe element sizes and refinement schemes throughout the domain of interest. Unfortunately, it is often difficult to predict in advance a mesh spacing that will give acceptable results. Adaptive finite-element mesh capabilities operate to continuously refine the mesh to improve accuracy where it is required, with minimal intervention by the analyst. Such mesh adaptation generally means that in certain areas of the analysis domain the size of the elements is decreased (or increased) and/or the order of the elements may be increased (or decreased). In concept, mesh adaptation is very appealing. Although there have been several previous applications of adaptive meshing for in-house FE codes, we have coupled an adaptive mesh formulation with the pre-existing commercial programs PATRAN (MacNeal-Schwendler Corp., USA) and ABAQUS (Hibbitt, Karlsson & Sorensen, Pawtucket, RI). In doing so, we have retained several attributes of the commercial software that are very attractive for orthopaedic implant applications.
Method and system for mesh network embedded devices
Wang, Ray (Inventor)
2009-01-01
A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of the boundary geometry is important. The complex geometry is represented using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
A tetrahedral mesh generation approach for 3D marine controlled-source electromagnetic modeling
Um, Evan Schankee; Kim, Seung-Sep; Fu, Haohuan
2017-03-01
3D finite-element (FE) mesh generation is a major hurdle for marine controlled-source electromagnetic (CSEM) modeling. In this paper, we present an FE discretization operator (FEDO) that automatically converts a 3D finite-difference (FD) model into reliable and efficient tetrahedral FE meshes for CSEM modeling. FEDO sets up wireframes of a background seabed model that precisely honor the seafloor topography. The wireframes are then partitioned into multiple regions. Outer regions of the wireframes are discretized with coarse tetrahedral elements whose maximum size is as large as a skin depth of the region. We demonstrate that such coarse meshes can produce accurate FE solutions because the numerical dispersion errors of tetrahedral meshes do not accumulate but oscillate. In contrast, central regions of the wireframes are discretized with fine tetrahedral elements to describe complex geology in detail. The conductivity distribution is mapped from FD to FE meshes in a volume-averaged sense. To avoid excessive mesh refinement around receivers, we introduce an effective receiver size. The major advantages of FEDO are summarized as follows. First, FEDO automatically generates reliable and economical tetrahedral FE meshes without adaptive meshing or interactive CAD workflows. Second, FEDO produces FE meshes that precisely honor the boundaries of the seafloor topography. Third, FEDO derives multiple sets of FE meshes from a given FD model. Each FE mesh is optimized for a different set of sources and receivers and is fed to a subgroup of processors on a parallel computer. This divide-and-conquer approach improves the parallel scalability of the FE solution. Both the accuracy and the effectiveness of FEDO are demonstrated with various CSEM examples.
Incremental Bisimulation Abstraction Refinement
Godskesen, Jens Christian; Song, Lei; Zhang, Lijun
2013-01-01
We present an abstraction refinement approach for the probabilistic computation tree logic (PCTL), which is based on incrementally computing a sequence of may- and must-quotient automata. These are induced by depth-bounded bisimulation equivalences of increasing depth. The approach is both sound and complete, since...
User Manual for the PROTEUS Mesh Tools
Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)]
2015-06-01
This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases, examples are provided along with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radially structured grid. Note that the mesh generation process is recursive in nature and that the inputs specific to a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.
Phase-field simulation of dendritic solidification using a full threaded tree with adaptive meshing
Yin Yajun; Zhou Jianxin; Liao Dunming; Pang Shengyong; Shen Xu
2014-01-01
Simulation of the microstructure evolution during solidification is greatly beneficial to the control of solidification microstructures. A phase-field method based on the full threaded tree (FTT) for the simulation of casting solidification microstructure is proposed in this paper, and the structure of the full threaded tree and the mesh refinement method are discussed. During dendritic growth in solidification, the simulation mesh is adaptively refined at the liquid-solid interface and coarsened in other areas. The numerical results of three-dimensional dendrite growth indicate that the phase-field method based on the FTT is suitable for microstructure simulation. Most importantly, compared with a conventional uniform mesh, the FTT method can increase the spatial and temporal resolutions beyond the limits imposed by the available hardware. At the simulation time of 0.03 s in this study, the computer memory used for computation is no more than 10 MB with the FTT method, while it is about 50 MB with the uniform mesh method. In addition, the proposed FTT method is more efficient in computation time: when the solidification time is 0.17 s in this study, the uniform mesh method would take about 20 h of computation, while the FTT method takes only 2 h.
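The refine-near-the-interface idea above can be sketched with a toy quadtree in Python. The circular "interface", the conservative intersection test, and the depth limit are illustrative assumptions, not the paper's FTT data structure:

```python
import math

def refine(cell, depth, max_depth, near_interface):
    """Recursively split a square cell (x, y, size) that the interface
    may pass through; leave cells away from the interface coarse."""
    x, y, size = cell
    if depth == max_depth or not near_interface(x, y, size):
        return [cell]                      # leaf: no further refinement
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            leaves += refine((x + dx, y + dy, half), depth + 1,
                             max_depth, near_interface)
    return leaves

# Toy "solid-liquid interface": a circle of radius 0.3 centred in the
# unit square.  The test is conservative: refine whenever the circle
# comes within half a cell diagonal of the cell centre.
def near_interface(x, y, size):
    cx, cy = x + size / 2.0, y + size / 2.0
    d = math.hypot(cx - 0.5, cy - 0.5)
    return abs(d - 0.3) <= size * math.sqrt(2) / 2.0

leaves = refine((0.0, 0.0, 1.0), 0, 6, near_interface)
uniform = (2 ** 6) ** 2
print(len(leaves), "adaptive cells vs", uniform, "uniform cells")
```

Counting leaves against the equivalent uniform grid shows the same kind of memory saving the abstract reports: only cells crossed by the interface reach the finest level.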
Dahlgaard, Katja; Raposo, Alexandre A.S.F.; Niccoli, Teresa; St Johnston, Daniel
2007-01-01
Mutants in the actin nucleators Cappuccino and Spire disrupt the polarized microtubule network in the Drosophila oocyte that defines the anterior-posterior axis, suggesting that microtubule organization depends on actin. Here, we show that Cappuccino and Spire organize an isotropic mesh of actin filaments in the oocyte cytoplasm. capu and spire mutants lack this mesh, whereas overexpressed truncated Cappuccino stabilizes the mesh in the presence of Latrunculin A and partially rescues spire mutants. Spire overexpression cannot rescue capu mutants, but prevents actin mesh disassembly at stage 10B and blocks late cytoplasmic streaming. We also show that the actin mesh regulates microtubules indirectly, by inhibiting kinesin-dependent cytoplasmic flows. Thus, the Capu pathway controls alternative states of the oocyte cytoplasm: when active, it assembles an actin mesh that suppresses kinesin motility to maintain a polarized microtubule cytoskeleton. When inactive, unrestrained kinesin movement generates flows that wash microtubules to the cortex. PMID:17925229
Algorithm refinement for stochastic partial differential equations.
Alexander, F. J. (Francis J.); Garcia, Alejandro L.,; Tartakovsky, D. M. (Daniel M.)
2001-01-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. A variety of numerical experiments were performed for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except within the particle region, far from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
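The particle side of such a hybrid can be sketched in a few lines: independent Gaussian random walkers whose sample variance grows as the Fickian value 2Dt. The parameters are illustrative, and the flux-matched coupling to a finite-difference continuum region is omitted:

```python
import random
import statistics

random.seed(1)
D, dt, steps, n = 1.0, 1e-3, 200, 5000
sigma_step = (2 * D * dt) ** 0.5       # per-step RMS displacement

# Independent random walkers: each step is a Gaussian kick with
# variance 2*D*dt, the discrete analogue of Fickian diffusion.
x = [0.0] * n
for _ in range(steps):
    x = [xi + random.gauss(0.0, sigma_step) for xi in x]

t = steps * dt
var = statistics.pvariance(x)
print(var, "should approach", 2 * D * t)
```

The point of the comparison is the one the abstract makes: the particle method captures not just the mean but the fluctuations, whereas a deterministic continuum region would reproduce the mean density only.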
Connectivity editing for quadrilateral meshes
Peng, Chihan
2011-12-12
We propose new connectivity editing operations for quadrilateral meshes with the unique ability to explicitly control the location, orientation, type, and number of the irregular vertices (valence not equal to four) in the mesh while preserving sharp edges. We provide theoretical analysis on what editing operations are possible and impossible and introduce three fundamental operations to move and re-orient a pair of irregular vertices. We argue that our editing operations are fundamental, because they only change the quad mesh in the smallest possible region and involve the fewest irregular vertices (i.e., two). The irregular vertex movement operations are supplemented by operations for the splitting, merging, canceling, and aligning of irregular vertices. We explain how the proposed high-level operations are realized through graph-level editing operations such as quad collapses, edge flips, and edge splits. The utility of these mesh editing operations is demonstrated by improving the connectivity of quad meshes generated from state-of-the-art quadrangulation techniques. © 2011 ACM.
Mesh Router Nodes placement in Rural Wireless Mesh Networks
Ebongue, Jean Louis Fendji Kedieng; Thron, Christopher; Nlong, Jean Michel
2015-01-01
The problem of placing mesh router nodes in Wireless Mesh Networks is known to be NP-hard. In this paper, the problem is addressed under a network model tied to rural regions, where density is usually low and the population sparse. We consider the area to cover as decomposed into a set of elementary areas, each of which can be required or optional in terms of coverage and can admit a node or not. We propose an effective algorithm to ensure the coverage. This a...
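A greedy set-cover heuristic is one simple way to illustrate this placement problem; the grid instance, square coverage radius, and greedy rule below are assumptions for illustration, not the authors' algorithm:

```python
# Greedy router placement sketch on a grid of elementary areas.
def covers(site, cell, r):
    # A router at `site` covers every cell within Chebyshev distance r.
    return abs(site[0] - cell[0]) <= r and abs(site[1] - cell[1]) <= r

def place_routers(required, candidates, r):
    uncovered, placed = set(required), []
    while uncovered:
        # Pick the candidate site covering the most uncovered cells.
        best = max(candidates,
                   key=lambda s: sum(covers(s, c, r) for c in uncovered))
        gained = {c for c in uncovered if covers(best, c, r)}
        if not gained:
            raise ValueError("some required cells cannot be covered")
        placed.append(best)
        uncovered -= gained
    return placed

# Sparse rural toy instance: only the two side strips need coverage,
# the middle strip is optional.
required = {(x, y) for x in range(10) for y in range(10) if x < 3 or x > 6}
candidates = [(x, y) for x in range(10) for y in range(10)]
routers = place_routers(required, candidates, r=2)
print(len(routers), "routers cover", len(required), "required cells")
```

Greedy set cover gives no optimality guarantee beyond the usual logarithmic bound, which is why dedicated algorithms such as the one proposed in the paper are of interest.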
Acquiring Plausible Predications from MEDLINE by Clustering MeSH Annotations.
Miñarro-Giménez, Jose Antonio; Kreuzthaler, Markus; Bernhardt-Melischnig, Johannes; Martínez-Costa, Catalina; Schulz, Stefan
2015-01-01
The massive accumulation of biomedical knowledge is reflected by the growth of the literature database MEDLINE with over 23 million bibliographic records. All records are manually indexed by MeSH descriptors, many of them refined by MeSH subheadings. We use subheading information to cluster types of MeSH descriptor co-occurrences in MEDLINE by processing co-occurrence information provided by the UMLS. The goal is to infer plausible predicates to each resulting cluster. In an initial experiment this was done by grouping disease-pharmacologic substance co-occurrences into six clusters. Then, a domain expert manually performed the assignment of meaningful predicates to the clusters. The mean accuracy of the best ten generated biomedical facts of each cluster was 85%. This result supports the evidence of the potential of MeSH subheadings for extracting plausible medical predications from MEDLINE.
Perspicuity and Granularity in Refinement
Boiten, Eerke
2011-01-01
This paper reconsiders refinements which introduce actions on the concrete level which were not present at the abstract level. It draws a distinction between concrete actions which are "perspicuous" at the abstract level, and changes of granularity of actions between different levels of abstraction. The main contribution of this paper is in exploring the relation between these different methods of "action refinement", and the basic refinement relation that is used. In particular, it shows how the "refining skip" method is incompatible with failures-based refinement relations, and consequently some decisions in designing Event-B refinement are entangled.
Kansas Data Access and Support Center — Digital representation of the map accompanying the "Kansas stream and river fishery resource evaluation" (R.E. Moss and K. Brunson, 1981.U.S. Fish and Wildlife...
Refinement for Administrative Policies
Dekker, MAC; Etalle, S Sandro
2007-01-01
Flexibility of management is an important requirement for access control systems, as it allows users to adapt the access control system in accordance with practical requirements. This paper builds on earlier work where we defined administrative policies for a general class of RBAC models. We present a formal definition of administrative refinement, and we show that there is an ordering for administrative privileges which yields administrative refinements of policies. We argue (by giving an examp...
Efficient Packet Forwarding in Mesh Network
Kanrar, Soumen
2012-01-01
A Wireless Mesh Network (WMN) is a multi-hop, low-cost, easily maintained, robust network providing reliable service coverage. WMNs consist of mesh routers and mesh clients. In this architecture, static mesh routers form the wireless backbone, while mesh clients access the network through mesh routers or by meshing directly with each other. Unlike traditional wireless networks, a WMN is dynamically self-organized and self-configured; in other words, the nodes in the mesh network automatically establish and maintain network connectivity. To provide reliable service coverage, a source node must broadcast or flood control packets, and over the years researchers have worked to reduce the resulting redundancy. Redundant control packets consume the bandwidth of the wireless medium, significantly reducing the average throughput and consequently the overall system performance. In this paper I study the optimization problem in...
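The duplicate suppression that limits redundant flooding can be sketched with the classic first-time-seen rebroadcast rule; the toy topology below is an assumption for illustration:

```python
from collections import deque

def flood(adjacency, source):
    """Flood one control packet through a mesh: each node rebroadcasts
    a packet only the first time it sees it, the standard duplicate
    suppression that bounds redundant broadcasts to one per node."""
    seen = {source}
    transmissions = 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        transmissions += 1              # this node broadcasts once
        for neigh in adjacency[node]:
            if neigh not in seen:       # duplicates are dropped silently
                seen.add(neigh)
                queue.append(neigh)
    return seen, transmissions

# A small mesh: 0-1-2 router backbone with clients 3 and 4 hanging off it.
adjacency = {0: [1, 3], 1: [0, 2, 3, 4], 2: [1, 4],
             3: [0, 1], 4: [1, 2]}
reached, tx = flood(adjacency, 0)
print(sorted(reached), tx)   # every node reached, one broadcast each
```

Without the `seen` check, each node would rebroadcast every copy it receives, which is exactly the redundancy the abstract says consumes wireless bandwidth.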
On Linear Spaces of Polyhedral Meshes.
Poranne, Roi; Chen, Renjie; Gotsman, Craig
2015-05-01
Polyhedral meshes (PMs), meshes having planar faces, have enjoyed a rise in popularity in recent years due to their importance in architectural and industrial design. However, they are also notoriously difficult to generate and manipulate. Previous methods start with a smooth surface and then apply elaborate meshing schemes to create polyhedral meshes approximating the surface. In this paper, we describe a reverse approach: given the topology of a mesh, we explore the space of possible planar meshes having that topology. Our approach is based on a complete characterization of the maximal linear spaces of polyhedral meshes contained in the curved manifold of polyhedral meshes with a given topology. We show that these linear spaces can be described as nullspaces of differential operators, much like harmonic functions are nullspaces of the Laplacian operator. An analysis of this operator provides tools for global and local design of a polyhedral mesh, which fully expose the geometric possibilities and limitations of the given topology.
Kummel, Miro; Bruder, Andrea; Powell, Jim; Kohler, Brynja; Lewis, Matt
2016-01-01
Dead leaves, ping-pong balls or plastic golf balls are floated down a small stream. The number of leaves/balls passing recording stations along the stream are tallied. Students are then challenged to develop a transport model for the resulting data. From this exercise students gain greater understanding of PDE modeling, conservation laws, parameter estimation as well as mass and momentum transport processes.
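The modeling exercise reduces to estimating a transport velocity from arrival-time data; a minimal sketch with synthetic tallies (all numbers below are invented for illustration, not the authors' data):

```python
# Estimate stream velocity from the lag between count peaks at two
# recording stations 30 m apart, tallied every 10 s.
def peak_time(times, counts):
    return times[counts.index(max(counts))]

times = list(range(0, 120, 10))
station_a = [0, 2, 9, 14, 8, 3, 1, 0, 0, 0, 0, 0]   # counts at 0 m
station_b = [0, 0, 0, 1, 4, 10, 13, 7, 2, 1, 0, 0]  # counts at 30 m
lag = peak_time(times, station_b) - peak_time(times, station_a)
velocity = 30.0 / lag                               # metres per second
print("estimated stream velocity:", velocity, "m/s")
```

With real data students would go on to fit a full advection (or advection-dispersion) conservation law, but the peak-lag estimate is a natural first pass at the parameter-estimation step.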
Anisotropic Diffusion in Mesh-Free Numerical Magnetohydrodynamics
Hopkins, Philip F
2016-01-01
We extend recently-developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect), and turbulent 'eddy diffusion.' We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV) as well as smoothed-particle hydrodynamics (SPH). We show the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behavior even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators ...
Hermann, Verena; Käser, Martin; Castro, Cristóbal E.
2011-02-01
We present a Discontinuous Galerkin finite element method using a high-order time integration technique for seismic wave propagation modelling on non-conforming hybrid meshes in two space dimensions. The scheme can be formulated to achieve the same approximation order in space and time and avoids numerical artefacts due to non-conforming mesh transitions or the change of the element type. A point-wise Gaussian integration along partially overlapping edges of adjacent elements is used to preserve the scheme's accuracy while providing higher flexibility in the problem-adapted mesh generation process. We describe the domain decomposition strategy of the parallel implementation and validate the performance of the new scheme by numerical convergence tests and experiments with comparisons to independent reference solutions. The advantage of non-conforming hybrid meshes is the possibility to choose the mesh spacing proportional to the seismic velocity structure, which allows for simple refinement or coarsening methods even for regular quadrilateral meshes. For particular problems of strong material contrasts and geometrically thin structures, the scheme reduces the computational cost in terms of memory and run-time requirements. The presented results promise a similar behaviour for an extension to three space dimensions, where the coupling of tetrahedral and hexahedral elements necessitates non-conforming mesh transitions to avoid linking elements in the form of pyramids.
Improved Butterfly Subdivision Scheme for Meshes with Arbitrary Topology
ZHANG Hui; MA Yong-you; ZHANG Cheng; JIANG Shou-wei
2005-01-01
Based on the butterfly subdivision scheme and the modified butterfly subdivision scheme, an improved butterfly subdivision scheme is proposed. The scheme uses a small stencil of six points to calculate each newly inserted vertex: 2n new vertices are inserted into the 2n triangle faces in each recursion while the n old vertices are kept, and special treatment is given to the boundary, achieving higher smoothness while still using small stencils. With the proposed scheme, the number of triangle faces increases only by a factor of 3 in each refinement step. Compared with the butterfly subdivision scheme and the modified butterfly subdivision scheme, the size of the triangle faces changes more gradually, which allows greater control over the resolution of a refined mesh.
Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan
2014-09-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å(2) as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.
Benazzi, E.; Alario, F
2004-07-01
In 2003, refining margins showed a clear improvement that continued throughout the first three quarters of 2004. Oil companies posted significantly higher earnings in 2003 compared to 2002, with the results of first quarter 2004 confirming this trend. Due to higher feedstock prices, the implementation of new capacity and more intense competition, the petrochemicals industry was not able to boost margins in 2003. In such difficult business conditions, aggravated by soaring crude prices, the petrochemicals industry is not likely to see any improvement in profitability before the second half of 2004. (author)
Benazzi, E
2003-07-01
Down sharply in 2002, refining margins showed a clear improvement in the first half-year of 2003. As a result, the earnings reported by oil companies for financial year 2002 were significantly lower than in 2001, but the prospects are brighter for 2003. In the petrochemicals sector, slow demand and higher feedstock prices eroded margins in 2002, especially in Europe and the United States. The financial results for the first part of 2003 seem to indicate that sector profitability will not improve before 2004. (author)
Adaptive Mesh Fluid Simulations on GPU
Wang, Peng; Kaehler, Ralf
2009-01-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA par...
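The method-of-lines ingredients named above (piecewise linear minmod reconstruction plus second-order TVD Runge-Kutta) can be sketched for 1D scalar advection in plain Python; this is a CPU illustration of the numerics only, not the paper's CUDA implementation:

```python
import math

def minmod(a, b):
    # Slope limiter: zero at extrema, smaller-magnitude slope otherwise.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def rhs(u, dx, a=1.0):
    """Semi-discrete advection: upwind flux (a > 0) with piecewise
    linear minmod reconstruction, periodic boundaries."""
    n = len(u)
    slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i])
              for i in range(n)]
    # flux[i] is the flux through the i+1/2 interface, reconstructed
    # from the upwind cell i.
    flux = [a * (u[i] + 0.5 * slopes[i]) for i in range(n)]
    return [-(flux[i] - flux[i - 1]) / dx for i in range(n)]

def step_tvd_rk2(u, dt, dx):
    """Second-order TVD (SSP/Heun) Runge-Kutta step."""
    k1 = rhs(u, dx)
    u1 = [ui + dt * ki for ui, ki in zip(u, k1)]
    k2 = rhs(u1, dx)
    return [0.5 * (ui + u1i + dt * k2i)
            for ui, u1i, k2i in zip(u, u1, k2)]

n = 100
dx = 1.0 / n
u0 = [math.sin(2 * math.pi * (i + 0.5) * dx) for i in range(n)]
u = list(u0)
dt = 0.4 * dx                      # CFL number 0.4
for _ in range(250):               # 250 * dt = one full period
    u = step_tvd_rk2(u, dt, dx)
```

After one full period the profile returns close to its initial state, and the limiter guarantees no new extrema are created, which is the total variation diminishing property the scheme is named for.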
Inching toward 'push-button' meshing
James Masters
2015-01-01
While "push-button" meshing remains an elusive goal, advances in 2015 have brought the technology to the point where meshes can be constructed with relative ease when appropriate surfaces are available...
stream-stream: Stellar and dark-matter streams interactions
Bovy, Jo
2017-02-01
Stream-stream analyzes the interaction between a stellar stream and a disrupting dark-matter halo. It requires galpy (ascl:1411.008), NEMO (ascl:1010.051), and the usual common scientific Python packages.
Macromolecular crystallographic structure refinement
Afonine, Pavel V.
2015-04-01
Model refinement is a key step in crystallographic structure determination that ensures the final atomic structure of the macromolecule represents the measured diffraction data as well as possible. Several decades of effort have gone into developing methods and computational tools to streamline this step. In this manuscript we provide a brief overview of the major milestones of crystallographic computing and of methods development pertinent to structure refinement.
Particle Collection Efficiency for Nylon Mesh Screens
Cena, Lorenzo G.; Ku, Bon-Ki; Peters, Thomas M.
2011-01-01
Mesh screens composed of nylon fibers leave minimal residual ash and produce no significant spectral interference when ashed for spectrometric examination. These characteristics make nylon mesh screens attractive as a collection substrate for nanoparticles. A theoretical single-fiber efficiency expression developed for wire-mesh screens was evaluated for estimating the collection efficiency of submicrometer particles for nylon mesh screens. Pressure drop across the screens, the effect of part...
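For context, wire-screen theory relates single-fiber efficiency to whole-screen penetration through a screen parameter; the exponential form and every numeric input below are assumptions for illustration, not values evaluated in the study:

```python
import math

def screen_penetration(alpha, eta, thickness, fiber_diam):
    """Fraction of particles penetrating a fibrous screen, using the
    single-fiber-efficiency form commonly applied to wire-mesh screens
    (all symbols here are illustrative assumptions):
      alpha      -- solid volume fraction of the screen
      eta        -- single-fiber collection efficiency
      thickness  -- screen thickness (m)
      fiber_diam -- fiber diameter (m)"""
    s = 4.0 * alpha * thickness / (math.pi * (1.0 - alpha) * fiber_diam)
    return math.exp(-s * eta)

# Collection efficiency of the whole screen is 1 - penetration.
p = screen_penetration(alpha=0.3, eta=0.05,
                       thickness=60e-6, fiber_diam=30e-6)
print("penetration:", round(p, 3), "collection:", round(1.0 - p, 3))
```

The exponential structure is the useful takeaway: stacking identical screens multiplies penetrations, so overall collection can be tuned by screen count.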
2010-10-01
50 CFR § 300.110 (Antarctic Marine Living Resources): Mesh size. (a) The use of pelagic and bottom trawls having a mesh size in any part of a trawl less than indicated is prohibited for any directed fishing for the...
Markov Random Fields on Triangle Meshes
Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas
2010-01-01
... mesh edges according to a feature detecting prior. Since we should not smooth across a sharp feature, we use edge labels to control the vertex process. In a Bayesian framework, MRF priors are combined with the likelihood function related to the mesh formation method. The output of our algorithm ... is a piecewise smooth mesh with explicit labelling of edges belonging to the sharp features.
Mesh network achieves its function on Linux
Pei Ping; PETRENKO Y.N.
2015-01-01
In this paper we introduce a mesh network protocol evaluation and development environment based on a dedicated protocol. With it, the Linux operating principles used in mesh networks can be readily understood. In addition, we present a graph showing the packet routing path. Finally, our tests show that the AODV mesh protocol satisfies the performance requirements of the Linux platform.
The mesh network protocol evaluation and development
Pei Ping; PETRENKO Y.N.
2015-01-01
In this paper we introduce a mesh network protocol evaluation and development environment based on a dedicated protocol. With it, one can readily understand how different protocols are used in mesh networks. In addition, a multi-hop routing protocol can provide robustness and load balancing to communication in wireless mesh networks.
Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems
Ibrahim, A. M.; Peplow, D. E.; Mosher, S. W.; Wagner, J. C.; Evans, T. M. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Wilson, P. P.; Sawan, M. E. [University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)
2013-07-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class super computer. (authors)
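The efficiency gain quoted above can be framed with the standard Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative statistical error of a tally and T the computing time. The numbers in this sketch are illustrative, not taken from the report.

```python
def figure_of_merit(rel_error: float, cpu_time: float) -> float:
    """Monte Carlo figure of merit: FOM = 1 / (R^2 * T).

    FOM is roughly constant for a given variance-reduction scheme, so the
    ratio of two FOMs measures the relative efficiency of two schemes.
    """
    return 1.0 / (rel_error**2 * cpu_time)

# Hypothetical numbers: same run time, but improved weight windows cut
# the tally relative error from 5% to 2.7%.
baseline = figure_of_merit(0.050, 240.0)
improved = figure_of_merit(0.027, 240.0)
print(round(improved / baseline, 1))  # efficiency gain (0.050/0.027)^2 → 3.4
```

At fixed run time the gain is just the squared ratio of the statistical errors, which is why halving the tally error quadruples the effective efficiency.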
Combining taxonomy and function in the study of stream macroinvertebrates
Kenneth W. Cummins
2016-03-01
Over the last fifty years, research on freshwater macroinvertebrates has been driven largely by the state of the taxonomy of these animals. In the great majority of studies conducted during the 2000s, macroinvertebrates have been operationally defined by investigators as invertebrates retained by a 250 μm mesh in field sampling devices. Significant advances have been and continue to be made in developing ever more refined keys to macroinvertebrate groups. Analysis by function is a viable alternative when advances in macroinvertebrate ecological research are restricted by the level of detail in identifications. Focus on function, namely adaptations of macroinvertebrates to habitats and the utilization of food resources, has facilitated ecological evaluation of freshwater ecosystems (functional feeding groups; FFG). As the great stream ecologist Noel Hynes observed, aquatic insects around the world exhibit similar morphologies and behaviors, even though they are in very different taxonomic groups. This is the basis for the FFG analysis that was initially developed in the early 1970s. FFG analysis applies taxonomy only to the level of detail that allows assignment to one of six FFG categories: scrapers, adapted to feed on periphyton; detrital shredders, adapted to feed on coarse riparian-derived plant litter (CPOM) that has been colonized by microbes; herbivore shredders, which feed on live, rooted aquatic vascular plants; filtering collectors, adapted to remove fine particle detritus (FPOM) from the water column; gathering collectors, adapted to feed on FPOM where it is deposited on surfaces or in crevices in the sediments; and predators, which capture live prey. The interacting roles of these FFGs in stream ecosystems were originally depicted in a conceptual model. Thus, there are a limited number of adaptations exhibited by stream macroinvertebrates that exploit these habitats and food resources. This accounts for the wide range of macroinvertebrate taxa
User Manual for the PROTEUS Mesh Tools
Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-09-19
PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has gone into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for users to create their own mesh or convert another mesh file format into a PROTEUS input format. Given an input mesh format acceptable to PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e., mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input is specific to a given mesh tool (such as .axial
Refinery Efficiency Improvement
WRI
2002-05-15
Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition, or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 C, 650 F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than need be, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which refiners can use for process control to heat residua to the threshold, but not beyond the point, at which coke formation begins when petroleum residua are heated at pyrolysis temperatures (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close a residuum is to undesired coke formation on the induction time line. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials, usually referred to as asphaltenes, dispersed in a lower-polarity solvent phase held together by intermediate-polarity materials usually referred to as
Silvia C. Capelli
2014-09-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly–l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree–Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints – even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å2, as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements – an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.
Mamy, Laurent; Letouzey, Vincent; Lavigne, Jean-Philippe; Garric, Xavier; Gondry, Jean; Mares, Pierre; De Tayrac, Renaud
2010-01-01
INTRODUCTION AND HYPOTHESIS: The aim of this study was to evaluate a link between mesh infection and shrinkage. METHODS: Twenty-eight Wistar rats were implanted with synthetic meshes that were either non-absorbable (polypropylene (PP), n = 14) or absorbable (poly(D,L-lactic acid) (PLA94), n = 14). A validated animal incisional abdominal hernia model of mesh infection was used. Fourteen meshes (n = 7 PLA94 and n = 7 PP meshes) were infected intraoperatively with 1...
Confined helium on Lagrange meshes
Baye, Daniel
2015-01-01
The Lagrange-mesh method has the simplicity of a calculation on a mesh and can have the accuracy of a variational method. It is applied to the study of a confined helium atom. Two types of confinement are considered. Soft confinements by potentials are studied in perimetric coordinates. Hard confinement in impenetrable spherical cavities is studied in a system of rescaled perimetric coordinates varying in [0,1] intervals. Energies and mean values of the distances between electrons and between an electron and the helium nucleus are calculated. A high accuracy of 11 to 15 significant figures is obtained with small computing times. Pressures acting on the confined atom are also computed. For sphere radii smaller than 1, their relative accuracies are better than $10^{-10}$. For larger radii up to 10, they progressively decrease to $10^{-3}$, still improving the best literature results.
21st International Meshing Roundtable
Weill, Jean-Christophe
2013-01-01
This volume contains the articles presented at the 21st International Meshing Roundtable (IMR), organized in part by Sandia National Laboratories and held on October 7–10, 2012 in San Jose, CA, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics, and visualization.
The moving mesh code Shadowfax
Vandenbroucke, Bert
2016-01-01
We introduce the moving mesh code Shadowfax, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public License. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare Shadowfax with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.
The moving mesh code SHADOWFAX
Vandenbroucke, B.; De Rijcke, S.
2016-07-01
We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.
On the flexibility of Kokotsakis meshes
Karpenkov, Oleg
2008-01-01
In this paper we study geometric, algebraic, and computational aspects of flexibility and infinitesimal flexibility of Kokotsakis meshes. A Kokotsakis mesh is a mesh that consists of a face in the middle and a certain band of faces attached to the middle face by its perimeter. In particular any 3x3-mesh made of quadrangles is a Kokotsakis mesh. We express the infinitesimal flexibility condition in terms of Ceva and Menelaus theorems. Further we study semi-algebraic properties of the set of fl...
Image meshing via hierarchical optimization
Hao XIE; Ruo-feng TONG
2016-01-01
Vector graphic, as a kind of geometric representation of raster images, has many advantages, e.g., definition independence and editing facility. A popular way to convert raster images into vector graphics is image meshing, the aim of which is to find a mesh to represent an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm solving the problem from coarser levels to finer ones, providing initialization for each level with its coarser ascent. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes the problem much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly. Then an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.
Soto, Dan; Le Helloco, Antoine; Clanet, Christophe; Quere, David; Varanasi, Kripa
2016-11-01
A drop thrown against a mesh can pass through its holes if it impacts with enough inertia. As a result, although part of the droplet may remain on one side of the sieve, the rest will end up grated through to the other side. This inexpensive method to break up millimetric droplets into micrometric ones may be of particular interest in a wide variety of applications: enhancing evaporation of droplets launched from the top of an evaporative cooling tower, or preventing drift of pesticides sprayed above crops by increasing their initial size and atomizing them at the very last moment with a mesh. In order to understand how much liquid will be grated, we propose in this presentation to start by studying a simpler situation: a drop impacting a plate pierced with a single off-centered hole. The study of the role of natural parameters such as the drop radius and speed, or the hole position, size and thickness, then allows us to discuss the more general situation of a plate pierced with multiple holes: the mesh.
Adaptive mesh generation for image registration and segmentation
Fogtmann, Mads; Larsen, Rasmus
2013-01-01
This paper deals with the problem of generating quality tetrahedral meshes for image registration. From an initial coarse mesh the approach matches the mesh to the image volume by combining red-green subdivision and mesh evolution through mesh-to-image matching regularized with a mesh quality...
Dynamic mesh for TCAD modeling with ECORCE
Michez, A.; Boch, J.; Touboul, A.; Saigné, F.
2016-08-01
Mesh generation for TCAD modeling is challenging. Because carrier densities can change by several orders of magnitude over thin regions, a significant change in the solution can be observed for two very similar meshes. The mesh must therefore be defined as well as possible to minimize this change. To address this issue, a criterion based on polynomial interpolation on adjacent nodes is proposed that accurately adjusts the mesh to the gradients of the degrees of freedom (DF). Furthermore, a dynamic mesh that follows changes in the DF in DC and transient mode is a powerful tool for TCAD users. However, in transient modeling, adding nodes to a mesh induces oscillations in the solution that appear as spikes in the current collected at the contacts. This paper proposes two schemes that solve this problem. Examples show that, using these techniques, the dynamic mesh generator of the TCAD tool ECORCE handles semiconductor devices in DC and transient mode.
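An interpolation-based refinement criterion of this kind can be illustrated in one dimension: flag any node whose value is poorly predicted by linear interpolation of its two neighbours. This is a minimal sketch under assumed conventions (log-space comparison because carrier densities span many decades, and a hypothetical junction-like profile); it does not reproduce the actual ECORCE criterion or thresholds.

```python
import numpy as np

def refine_flags(x, u, tol):
    """Flag interior nodes whose value deviates from the linear
    interpolation of their two neighbours by more than tol.

    Comparison is done on log(u), since degrees of freedom such as
    carrier densities vary over many orders of magnitude.
    """
    x = np.asarray(x, dtype=float)
    v = np.log(np.asarray(u, dtype=float))
    # Linear prediction of each interior node from its neighbours.
    w = (x[1:-1] - x[:-2]) / (x[2:] - x[:-2])
    pred = (1 - w) * v[:-2] + w * v[2:]
    return np.abs(v[1:-1] - pred) > tol

# Hypothetical junction-like carrier profile: densities jump from
# ~1e10 to ~1e16 cm^-3 over a transition much thinner than the grid.
x = np.linspace(0.0, 1.0, 11)
u = 1e10 + 1e16 / (1 + np.exp(-(x - 0.5) / 0.02))
flags = refine_flags(x, u, tol=0.2)
print(np.flatnonzero(flags))  # → [1 2 3 4 5]
```

Only the nodes straddling the steep transition are flagged; the flat regions on either side interpolate well and keep their coarse spacing.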
SHARP/PRONGHORN Interoperability: Mesh Generation
Avery Bingham; Javier Ortensi
2012-09-01
Progress toward collaboration between the SHARP and MOOSE computational frameworks has been demonstrated through sharing of mesh generation and ensuring mesh compatibility of both tools with MeshKit. MeshKit was used to build a three-dimensional, full-core very high temperature reactor (VHTR) reactor geometry with 120-degree symmetry, which was used to solve a neutron diffusion critical eigenvalue problem in PRONGHORN. PRONGHORN is an application of MOOSE that is capable of solving coupled neutron diffusion, heat conduction, and homogenized flow problems. The results were compared to a solution found on a 120-degree, reflected, three-dimensional VHTR mesh geometry generated by PRONGHORN. The ability to exchange compatible mesh geometries between the two codes is instrumental for future collaboration and interoperability. The results were found to be in good agreement between the two meshes, thus demonstrating the compatibility of the SHARP and MOOSE frameworks. This outcome makes future collaboration possible.
Cluster parallel rendering based on encoded mesh
QIN Ai-hong; XIONG Hua; PENG Hao-yu; LIU Zhen; SHI Jiao-ying
2006-01-01
The use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), supplying encoded meshes that can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then piecewise compress the submeshes and Runs respectively. With the help of several auxiliary index tables, compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Compared with an uncompressed representation, experimental results show that PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement.
Wavelet-Based Geometry Coding for Three Dimensional Mesh Using Space Frequency Quantization
Shymaa T. El-Leithy
2009-01-01
Problem statement: Recently, 3D objects have been used in several applications like internet games, virtual reality and scientific visualization. These applications require real-time rendering and fast transmission of large objects through the internet. However, due to bandwidth limitations, the compression and streaming of 3D objects is still an open research problem. Approach: A novel procedure for compression and coding of three-dimensional (3D) semi-regular meshes using the wavelet transform is introduced. This procedure is based on Space Frequency Quantization (SFQ), which is used to minimize the distortion error of the reconstructed mesh under a given bit-rate constraint. Results: Experiments were carried out over five datasets with different mesh density and irregularity. Results were evaluated using the peak signal-to-noise ratio as an error measurement. Experiments showed that the 3D SFQ coder outperforms the Progressive Geometry Coder (PGC) in terms of quality of the compressed meshes. Conclusion: A pure 3D geometry coding algorithm based on wavelets has been introduced. The proposed procedure showed its superiority over state-of-the-art coding techniques. Moreover, the bit-stream can be truncated at any point and still decode meshes of reasonable visual quality.
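Since the results above are judged by peak signal-to-noise ratio, a simplified geometry PSNR can be sketched as follows. The convention assumed here (peak taken as the bounding-box diagonal of the reference mesh, error as per-vertex RMS distance) is common in geometry coding but is only a stand-in for the paper's exact measure; the vertices and noise are synthetic.

```python
import numpy as np

def geometry_psnr(v_ref, v_test):
    """PSNR (dB) between two meshes with identical connectivity.

    rmse is the root-mean-square vertex displacement; the peak is the
    bounding-box diagonal of the reference mesh.
    """
    v_ref = np.asarray(v_ref, dtype=float)
    v_test = np.asarray(v_test, dtype=float)
    rmse = np.sqrt(np.mean(np.sum((v_ref - v_test) ** 2, axis=1)))
    peak = np.linalg.norm(v_ref.max(axis=0) - v_ref.min(axis=0))
    return 20.0 * np.log10(peak / rmse)

# Toy example: random vertices in the unit cube, perturbed by small
# "quantization" noise, as a decoded mesh would be.
rng = np.random.default_rng(0)
v = rng.random((100, 3))
v_hat = v + rng.normal(scale=1e-3, size=v.shape)
print(round(geometry_psnr(v, v_hat), 1))
```

Real mesh-coding evaluations usually measure surface-to-surface distance (e.g. Hausdorff-based tools) rather than per-vertex error, since decoded meshes rarely share connectivity with the original.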
Anisotropic diffusion in mesh-free numerical magnetohydrodynamics
Hopkins, Philip F.
2017-04-01
We extend recently developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect) and turbulent 'eddy diffusion'. We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV). We show that the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behaviour even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators and non-linear flux limiters, which is trivially generalized to AMR/moving-mesh codes. We also present applications of some of these improvements for SPH, in the form of a new integral-Godunov SPH formulation that adopts a moving-least squares gradient estimator and introduces a flux-limited Riemann problem between particles.
A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics
Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars
2016-11-01
We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine-precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such a difference has been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
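The machine-precision divergence-free property rests on the discrete identity div(curl A) = 0: when B is built from a vector potential with difference operators that commute, the divergence telescopes to zero exactly. The sketch below demonstrates this on a uniform periodic grid; the paper's scheme works on an unstructured moving mesh, so this only illustrates the underlying identity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 16, 1.0 / 16

# Random periodic vector potential A sampled on an n^3 grid.
A = rng.standard_normal((3, n, n, n))

def d(f, axis):
    """Centered periodic finite difference along one axis."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

# B = curl A, built from the same discrete derivative everywhere.
Bx = d(A[2], 1) - d(A[1], 2)
By = d(A[0], 2) - d(A[2], 0)
Bz = d(A[1], 0) - d(A[0], 1)

# The matching discrete divergence cancels term by term, leaving only
# floating-point round-off.
div = d(Bx, 0) + d(By, 1) + d(Bz, 2)
print(np.abs(div).max() < 1e-10)  # → True
```

Production CT schemes use staggered (face-centered) differences rather than centered ones, but the cancellation argument is the same: any B obtained as a discrete curl stays divergence-free under the matching discrete divergence.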
Venkatachari, Balaji Shankar; Chang, Chau-Lyan
2016-11-01
The focus of this study is scale-resolving simulations of the canonical normal shock-isotropic turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and their potential benefits of ease of mesh generation around complex geometries and mesh adaptation, direct numerical or large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken shock regimes) will be investigated, along with a study on how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by the Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).
Structure refinement of astrophyllite
Ma, Zhesheng
2001-01-01
Balla, Andrea; Quaresima, Silvia; Smolarek, Sebastian; Shalaby, Mostafa; Missori, Giulia; Sileri, Pierpaolo
2017-04-01
This review reports the incidence of mesh-related erosion after ventral mesh rectopexy to determine whether any difference exists in the erosion rate between synthetic and biological mesh. A systematic search of the MEDLINE and the Ovid databases was conducted to identify suitable articles published between 2004 and 2015. The search strategy capture terms were laparoscopic ventral mesh rectopexy, laparoscopic anterior rectopexy, robotic ventral rectopexy, and robotic anterior rectopexy. Eight studies (3,956 patients) were included in this review. Of those patients, 3,517 underwent laparoscopic ventral rectopexy (LVR) using synthetic mesh and 439 using biological mesh. Sixty-six erosions were observed with synthetic mesh (26 rectal, 32 vaginal, 8 recto-vaginal fistulae) and one (perineal erosion) with biological mesh. The synthetic and the biological mesh-related erosion rates were 1.87% and 0.22%, respectively. The time between rectopexy and diagnosis of mesh erosion ranged from 1.7 to 124 months. No mesh-related mortalities were reported. The incidence of mesh-related erosion after LVR is low and is more common after the placement of synthetic mesh. The use of biological mesh for LVR seems to be a safer option; however, large, multicenter, randomized controlled trials with long follow-ups are required if a definitive answer is to be obtained.
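The reported rates follow directly from the pooled counts in the review, and a quick check reproduces them (the quoted 1.87% and 0.22% appear to be truncated rather than rounded values):

```python
# Pooled counts from the review: 66 erosions among 3,517 synthetic-mesh
# patients, 1 erosion among 439 biological-mesh patients.
synthetic = 100 * 66 / 3517
biological = 100 * 1 / 439
print(f"{synthetic:.3f}% vs {biological:.3f}%")  # → 1.877% vs 0.228%
```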
Ultrasonic sensor to characterize wood pulp during refining.
Greenwood, M S; Panetta, P D; Bond, L J; McCaw, M W
2006-12-22
A novel sensor concept has been developed for measuring the degree of refining, the water retention value (WRV), and the weight percentage of wood pulp during the refining process. The measurement time is less than 5 min and the sensor can operate in a slip-stream of the process line or as an at-line instrument. The degree of refining and the WRV are determined from settling measurements. The settling of a pulp suspension (with a weight percentage less than 0.5 wt%) is observed after the mixer, which keeps the pulp uniformly distributed, is turned off. The attenuation of ultrasound as a function of time is recorded and these data show a peak at a time designated as the "peak time." The peak time T increases with the degree of refining, as demonstrated by measuring pulp samples with known degrees of refining. The WRV can be determined using the relative peak time, defined as the ratio T(2)/T(1), where T(1) is an initial peak time and T(2) is the value after additional refining. This method offers the industry an alternative to the current time-consuming WRV test.
Energetic deposition of carbon in a cathodic vacuum arc with a biased mesh
Moafi, A.; Lau, D. W. M.; Sadek, A. Z.; Partridge, J. G.; McKenzie, D. R.; McCulloch, D. G.
2011-04-01
Carbon films were deposited in a filtered cathodic vacuum arc with a bias potential applied to a conducting mesh mounted in the plasma stream between the source and the substrate. We determined the stress and microstructural properties of the resulting carbon films and compared the results with those obtained using direct substrate bias with no mesh. Since the relationships between deposition energy and the stress, sp2 fraction, and density of carbon are well known, measuring these film properties enabled us to investigate the effect of the mesh on the energy and composition of the depositing flux. When a mesh was used, the film stress showed a monotonic decrease for negative mesh bias voltages greater than 400 V, even though the floating potential of the substrate did not vary. We explain this result by the neutralization of some ions when they are near to or passing through the negatively biased mesh. The microstructure of the films showed a change from amorphous to glassy carbon-like with increasing bias. Potential applications for this method include the deposition of carbon films with controlled stress on low-conductivity substrates to form rectifying or ohmic contacts.
Jennings Jason
2010-01-01
Laparoscopic inguinal herniorrhaphy via a transabdominal preperitoneal (TAPP) approach using polypropylene mesh and staples is an accepted technique. Mesh induces a localised inflammatory response that may extend to, and involve, adjacent abdominal and pelvic viscera such as the appendix. We present an interesting case of suspected mesh-induced appendicitis treated successfully with laparoscopic appendicectomy, without mesh removal, in an elderly gentleman who presented with symptoms and signs of acute appendicitis 18 months after laparoscopic inguinal hernia repair. Possible mechanisms for mesh-induced appendicitis are briefly discussed.
Jansen, Gunnar; Sohrabi, Reza; Miller, Stephen A.
2017-02-01
HULK, short for Hexahedra from Unique Location in (K)convex Polyhedra, is a simple and efficient algorithm to generate hexahedral meshes from generic STL files describing a geological model, for use in simulation tools based on the finite element, finite volume or finite difference methods. Using binary space partitioning of the input geometry and octree refinement of the grid, a successive increase in mesh accuracy is achieved. We present the theoretical basis as well as the implementation procedure, together with three geological models of varying complexity that provide the basis on which the algorithm is evaluated. HULK generates high-accuracy discretizations with cell counts suitable for state-of-the-art subsurface simulators and provides a new method for hexahedral mesh generation in geological settings.
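The octree-refinement idea behind this kind of hexahedral mesher can be sketched as follows. This is a minimal illustration, not HULK's actual implementation: the point-sampled "surface" test and all names are assumptions for the sake of the example.

```python
# Sketch of octree refinement toward a surface: cells that intersect the
# input geometry are recursively split into 8 children until a maximum
# depth, yielding fine hexahedra near the geometry and coarse ones away
# from it. Illustrative only; not HULK's actual code.

def intersects_surface(lo, hi, surface_points):
    """True if any sampled surface point lies inside the cell [lo, hi)."""
    return any(all(lo[d] <= p[d] < hi[d] for d in range(3)) for p in surface_points)

def refine(lo, hi, surface_points, depth, max_depth, cells):
    if depth == max_depth or not intersects_surface(lo, hi, surface_points):
        cells.append((lo, hi))          # keep this cell as a leaf hexahedron
        return
    mid = tuple((lo[d] + hi[d]) / 2 for d in range(3))
    for octant in range(8):             # split into 8 children
        clo = tuple(lo[d] if not (octant >> d) & 1 else mid[d] for d in range(3))
        chi = tuple(mid[d] if not (octant >> d) & 1 else hi[d] for d in range(3))
        refine(clo, chi, surface_points, depth + 1, max_depth, cells)

cells = []
refine((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), [(0.3, 0.3, 0.3)], 0, 2, cells)
# Cells near the sampled surface point are small; the rest stay coarse.
```

With one sample point and two refinement levels this produces 7 coarse leaves (width 0.5) and 8 fine leaves (width 0.25) around the point.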
Crystal structure refinement with SHELXL
Sheldrick, George M., E-mail: gsheldr@shelx.uni-ac.gwdg.de [Department of Structural Chemistry, Georg-August Universität Göttingen, Tammannstraße 4, Göttingen 37077 (Germany)
2015-01-01
New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.
Bluetooth Low Energy Mesh Networks: A Survey.
Darroudi, Seyed Mahdi; Gomez, Carles
2017-06-22
Bluetooth Low Energy (BLE) has gained significant momentum. However, the original design of BLE focused on star topology networking, which limits network coverage range and precludes end-to-end path diversity. In contrast, other competing technologies overcome such constraints by supporting the mesh network topology. For these reasons, academia, industry, and standards development organizations have been designing solutions to enable BLE mesh networks. Nevertheless, the literature lacks a consolidated view on this emerging area. This paper comprehensively surveys the state of the art in BLE mesh networking. We first provide a taxonomy of BLE mesh network solutions. We then review the solutions, describing the variety of approaches that leverage existing BLE functionality to enable BLE mesh networks. We identify crucial aspects of BLE mesh network solutions and discuss their advantages and drawbacks. Finally, we highlight currently open issues.
Mesh networking optimized for robotic teleoperation
Hart, Abraham; Pezeshkian, Narek; Nguyen, Hoa
2012-06-01
Mesh networks for robot teleoperation pose different challenges than those associated with traditional mesh networks. Unmanned ground vehicles (UGVs) are mobile and operate in constantly changing and uncontrollable environments. Building a mesh network to work well under these harsh conditions presents a unique challenge. The Manually Deployed Communication Relay (MDCR) mesh networking system extends the range of and provides non-line-of-sight (NLOS) communications for tactical and explosive ordnance disposal (EOD) robots currently in theater. It supports multiple mesh nodes, robots acting as nodes, and works with all Internet Protocol (IP)-based robotic systems. Under MDCR, the performance of different routing protocols and route selection metrics were compared resulting in a modified version of the Babel mesh networking protocol. This paper discusses this and other topics encountered during development and testing of the MDCR system.
Unstructured Polyhedral Mesh Thermal Radiation Diffusion
Palmer, T.S.; Zika, M.R.; Madsen, N.K.
2000-07-27
Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.
Delaunay triangulation and computational fluid dynamics meshes
Posenau, Mary-Anne K.; Mount, David M.
1992-01-01
In aerospace computational fluid dynamics (CFD) calculations, the Delaunay triangulation of suitable quadrilateral meshes can lead to unsuitable triangulated meshes. Here, we present case studies which illustrate the limitations of using structured grid generation methods which produce points in a curvilinear coordinate system for subsequent triangulations for CFD applications. We discuss conditions under which meshes of quadrilateral elements may not produce a Delaunay triangulation suitable for CFD calculations, particularly with regard to high aspect ratio, skewed quadrilateral elements.
A study on the dependency between turbulent models and mesh configurations of CFD codes
Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook [CAU, Seoul (Korea, Republic of)
2015-10-15
This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric condition of the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in local regions. From this need arises the importance of research on stratification of gases in the containment. Two computational fluid dynamics codes, i.e. GOTHIC and STAR-CCM+, were adopted and the computational results were benchmarked against the experimental data from the PANDA facility. The main findings observed through the present work can be summarized as follows: 1) In the case of the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models were small for a smaller number of mesh cells; however, as the number of cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.
Parallel, Gradient-Based Anisotropic Mesh Adaptation for Re-entry Vehicle Configurations
Bibb, Karen L.; Gnoffo, Peter A.; Park, Michael A.; Jones, William T.
2006-01-01
Two gradient-based adaptation methodologies have been implemented into the Fun3d refine GridEx infrastructure. A spring-analogy adaptation, which provides for nodal movement to cluster mesh nodes in the vicinity of strong shocks, has been extended for general use within Fun3d, and is demonstrated for a 70° sphere cone at Mach 2. A more general feature-based adaptation metric has been developed for use with the adaptation mechanics available in Fun3d, and is applicable to any unstructured, tetrahedral flow solver. The basic functionality of general adaptation is explored through a case of flow over the forebody of a 70° sphere cone at Mach 6. A practical application of Mach 10 flow over an Apollo capsule, computed with the Felisa flow solver, is given to compare adaptive mesh refinement with uniform mesh refinement. The examples in the paper demonstrate that the gradient-based adaptation capability as implemented can give an improvement in solution quality.
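A generic gradient-based adaptation metric of the kind described above can be illustrated in one dimension: flag for refinement any cell adjacent to a face where the undivided solution difference exceeds a threshold. This is a hypothetical sketch, not the Fun3d/refine implementation; the threshold and profile are made up for the example.

```python
import numpy as np

# Flag cells whose undivided solution difference across a face exceeds a
# threshold -- a minimal 1D stand-in for a gradient-based adaptation metric.

def flag_cells(u, threshold):
    jump = np.abs(np.diff(u))            # undivided difference per interior face
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= jump > threshold       # flag both cells sharing a steep face
    flags[1:]  |= jump > threshold
    return flags

x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)            # a shock-like profile centered at x = 0.5
flags = flag_cells(u, threshold=0.2)
# Only the handful of cells straddling the steep gradient are flagged.
```

In a real adaptive solver the flagged cells would then be subdivided (or nodes clustered toward them), and the metric re-evaluated on the new mesh.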
Design of electrospinning mesh devices
Russo, Giuseppina; Peters, Gerrit W. M.; Solberg, Ramon H. M.; Vittoria, Vittoria
2012-07-01
This paper describes the features of new membranes that can act as local biomedical devices owing to their peculiar shape in the form of mesh structure. These materials are designed to provide significant effects to reduce local inflammations and improve the tissue regeneration. Lamellar Hydrotalcite loaded with Diclofenac Sodium (HTLc-DIK) was homogenously dispersed inside a polymeric matrix of Poly-caprolactone (PCL) to manufacture membranes by electrospinning technique. The experimental procedure and the criteria employed have shown to be extremely effective at increasing potentiality and related applications. The employed technique has proved to be very useful to manufacture polymeric fibers with diameters in the range of nano-micro scale. In this work a dedicated collector based on a proprietary technology of IME Technologies and Eindhoven University of Technology (TU/e) was used. It allowed to obtain devices with a macro shape of a 3D-mesh. Atomic Force Microscopy (AFM) highlights a very interesting texture of the electrospun fibers. They show a lamellar morphology that is only slightly modified by the inclusion of the interclay embedded in the devices to control the drug release phenomena.
MOAB : a mesh-oriented database.
Tautges, Timothy James; Ernst, Corey; Stimpson, Clint; Meyers, Ray J.; Merkley, Karl
2004-04-01
A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed-graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets. Tags are a mechanism for attaching data to individual entities and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application
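The handle/set/tag data model described above can be made concrete with a toy sketch. This is purely illustrative Python; MOAB's real interface is a C++ component, and every name below is an assumption for the example.

```python
# Toy sketch of a MOAB-like mesh database: entities are addressed through
# opaque integer handles (not pointers), sets are arbitrary groupings of
# entities, and named tags attach data to entities or sets.

class MeshDB:
    def __init__(self):
        self._next = 1
        self.entities = {}        # handle -> ('vertex' | 'element', definition)
        self.sets = {}            # set handle -> list of member handles
        self.tags = {}            # (tag name, handle) -> value

    def _new_handle(self):
        h = self._next
        self._next += 1
        return h

    def create_vertex(self, xyz):
        h = self._new_handle()
        self.entities[h] = ('vertex', xyz)
        return h                  # callers hold handles, never raw storage

    def create_element(self, connectivity):
        h = self._new_handle()
        self.entities[h] = ('element', tuple(connectivity))
        return h

    def create_set(self, members):
        h = self._new_handle()
        self.sets[h] = list(members)
        return h

    def tag_put(self, name, handle, value):
        self.tags[(name, handle)] = value

    def tag_get(self, name, handle):
        return self.tags[(name, handle)]

db = MeshDB()
verts = [db.create_vertex(p) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
tet = db.create_element(verts)             # one tetrahedron from 4 vertex handles
boundary = db.create_set(verts[:3])        # a set: arbitrary grouping of entities
db.tag_put("material", tet, 42)            # tag on an individual element
db.tag_put("boundary_id", boundary, 7)     # tag on a set
```

The handle indirection is the point: the database could reorganize its storage without invalidating any handle a caller holds.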
Ibrahim, Ahmad M., E-mail: ibrahimam@ornl.gov [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Wilson, Paul P. [University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Sawan, Mohamed E., E-mail: sawan@engr.wisc.edu [University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Mosher, Scott W.; Peplow, Douglas E.; Grove, Robert E. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831 (United States)
2014-10-15
Highlights: •Calculate the prompt dose rate everywhere throughout the entire fusion energy facility. •Utilize FW-CADIS to accurately perform difficult neutronics calculations for fusion energy systems. •Develop three mesh adaptivity algorithms to enhance FW-CADIS efficiency in fusion-neutronics calculations. -- Abstract: Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.
Licandro Francesco
2007-01-01
Current video-surveillance systems typically consist of many video sources distributed over a wide area, transmitting live video streams to a central location for processing and monitoring. The target of this paper is to present an experience of implementation of a large-scale video-surveillance system based on a wireless mesh network infrastructure, discussing architecture, protocol, and implementation issues. More specifically, the paper proposes an architecture for a video-surveillance system, and mainly centers its focus on the routing protocol to be used in the wireless mesh network, evaluating its impact on performance at the receiver side. A wireless mesh network was chosen to support a video-surveillance application in order to reduce the overall system costs and increase scalability and performance. The paper analyzes the performance of the network in order to choose design parameters that will achieve the best trade-off between video encoding quality and the network traffic generated.
Mesh Exposure and Associated Risk Factors in Women Undergoing Transvaginal Prolapse Repair with Mesh
Elizabeth A. Frankman
2013-01-01
Objective. To determine the frequency, rate, and risk factors associated with mesh exposure in women undergoing transvaginal prolapse repair with polypropylene mesh. Methods. A retrospective chart review was performed for all women who underwent the Prolift Pelvic Floor Repair System (Gynecare, Somerville, NJ) between September 2005 and September 2008. Multivariable logistic regression was performed to identify risk factors for mesh exposure. Results. 201 women underwent Prolift. Mesh exposure occurred in 12% (24/201). Median time to mesh exposure was 62 days (range: 10–372). When mesh was placed in the anterior compartment, the frequency of mesh exposure was higher than when mesh was placed in the posterior compartment (8.7% versus 2.9%, P=0.04). Independent risk factors for mesh exposure were diabetes (AOR = 7.7, 95% CI 1.6–37.6; P=0.01) and surgeon (AOR = 7.3, 95% CI 1.9–28.6; P=0.004). Conclusion. Women with diabetes have a 7-fold increased risk of mesh exposure after transvaginal prolapse repair using Prolift. The variable rate of mesh exposure amongst surgeons may be related to technique. The anterior vaginal wall may be at higher risk of mesh exposure compared to the posterior vaginal wall.
Tensile Behaviour of Welded Wire Mesh and Hexagonal Metal Mesh for Ferrocement Application
Tanawade, A. G.; Modhera, C. D.
2017-08-01
Tension tests were conducted on welded wire mesh and hexagonal metal mesh. Welded mesh is available in the market in different sizes. Two types were analysed, viz. Ø 2.3 mm and Ø 2.7 mm welded mesh, having opening sizes of 31.75 mm × 31.75 mm and 25.4 mm × 25.4 mm respectively. The tensile strength test was performed on samples of welded mesh in three different orientations, namely 0°, 30° and 45° with respect to the loading axis, and on hexagonal metal mesh of Ø 0.7 mm, having a 19.05 mm × 19.05 mm opening. The objective of this study was to investigate the behaviour of the welded mesh and hexagonal metal mesh. The results show that the tension load carrying capacity of the Ø 2.7 mm welded mesh in the 0° orientation is good compared to the Ø 2.3 mm mesh, and that the hexagonal metal mesh shows good ductility.
Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa
2012-08-01
Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.
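The normal-constrained node shift used in this kind of mesh morphing can be sketched as follows. This is a hedged illustration of the general idea, not the authors' code: each exterior source node moves along its surface normal to the point on that line closest to the best-matching target vertex, with all data made up for the example.

```python
import numpy as np

# For each source node with outward normal n, project every target vertex
# onto the line {p + t*n}, pick the vertex whose projection is closest to
# it, and move the node to that projection -- i.e. the node shifts toward
# the target surface but only along its own normal.

def morph_nodes(src_nodes, src_normals, target_verts):
    morphed = src_nodes.copy()
    for i, (p, n) in enumerate(zip(src_nodes, src_normals)):
        n = n / np.linalg.norm(n)
        t = (target_verts - p) @ n         # signed offset of each vertex along the normal
        proj = p + np.outer(t, n)          # closest point on the normal line to each vertex
        d = np.linalg.norm(target_verts - proj, axis=1)
        morphed[i] = proj[np.argmin(d)]    # shift along the normal toward the nearest vertex
    return morphed

src = np.array([[0.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]])
target = np.array([[0.1, 0.0, 0.8], [2.0, 2.0, 2.0]])
moved = morph_nodes(src, normals, target)  # node slides up its normal to z = 0.8
```

A production morpher would also need landmark matching and smoothing of interior nodes; this sketch covers only the surface-node projection step.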
On Modal Refinement and Consistency
Nyman, Ulrik; Larsen, Kim Guldstrand; Wasowski, Andrzej
2007-01-01
Almost 20 years after the original conception, we revisit several fundamental questions about modal transition systems. First, we demonstrate the incompleteness of the standard modal refinement using a counterexample due to Hüttel. Deciding any refinement complete with respect to the standard … notions of implementation is shown to be computationally hard (co-NP hard). Second, we consider four forms of consistency (existence of implementations) for modal specifications. We characterize each operationally, giving algorithms for deciding, and for synthesizing implementations, together …
7th International Meshing Roundtable '98
Eldred, T.J.
1998-10-01
The goal of the 7th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries.
Laparoscopic Pelvic Floor Repair Using Polypropylene Mesh
Shih-Shien Weng
2008-09-01
Conclusion: Laparoscopic pelvic floor repair using a single piece of polypropylene mesh combined with uterosacral ligament suspension appears to be a feasible procedure for the treatment of advanced vaginal vault prolapse and enterocele. Fewer mesh erosions and postoperative pain syndromes were seen in patients who had no previous pelvic floor reconstructive surgery.
Nonhydrostatic adaptive mesh dynamics for multiscale climate models (Invited)
Collins, W.; Johansen, H.; McCorquodale, P.; Colella, P.; Ullrich, P. A.
2013-12-01
Many of the atmospheric phenomena with the greatest potential impact in future warmer climates are inherently multiscale. Such meteorological systems include hurricanes and tropical cyclones, atmospheric rivers, and other types of hydrometeorological extremes. These phenomena are challenging to simulate in conventional climate models due to the relatively coarse uniform model resolutions relative to the native nonhydrostatic scales of the phenomenological dynamics. To enable studies of these systems with sufficient local resolution for the multiscale dynamics yet with sufficient speed for climate-change studies, we have adapted existing adaptive mesh dynamics for the DOE-NSF Community Atmosphere Model (CAM). In this talk, we present an adaptive, conservative finite volume approach for moist non-hydrostatic atmospheric dynamics. The approach is based on the compressible Euler equations on 3D thin spherical shells, where the radial direction is treated implicitly (using a fourth-order Runge-Kutta IMEX scheme) to eliminate time step constraints from vertical acoustic waves. Refinement is performed only in the horizontal directions. The spatial discretization is the equiangular cubed-sphere mapping, with a fourth-order accurate discretization to compute flux averages on faces. By using both space- and time-adaptive mesh refinement, the solver allocates computational effort only where greater accuracy is needed. The resulting method is demonstrated to be fourth-order accurate for model problems, robust at solution discontinuities, and stable for large aspect ratios. We present comparisons using a simplified physics package for dycore comparisons of moist physics. (Figure: Hadley cell lifting an advected tracer into the upper atmosphere, with horizontal adaptivity.)
Commercon, Benoit; Audit, Edouard; Hennebelle, Patrick; Chabrier, Gilles
2011-01-01
Radiative transfer has a strong impact on the collapse and fragmentation of prestellar dense cores. We present the radiation-hydrodynamics solver we designed for the RAMSES code. The method is designed for astrophysical purposes, and in particular for protostellar collapse. We present the solver, which uses the co-moving frame to evaluate the radiative quantities. We use the popular flux-limited diffusion approximation under the grey approximation (a single photon group). The solver is based on the second-order Godunov scheme of RAMSES for its hyperbolic part, and on an implicit scheme for the radiation diffusion and the coupling between radiation and matter. We report in detail our methodology for integrating the RHD solver into RAMSES. We successfully test the method against several conventional tests. For validation in 3D, we perform calculations of the collapse of an isolated 1 M_sun prestellar dense core, without rotation. We successfully compare the results with previous studies using different models for r...
Autotuning of Adaptive Mesh Refinement PDE Solvers on Shared Memory Architectures
Nogina, Svetlana
2012-01-01
Many multithreaded, grid-based, dynamically adaptive solvers for partial differential equations permanently have to traverse subgrids (patches) of different and changing sizes. The parallel efficiency of this traversal depends on the interplay of the patch size, the architecture used, the operations triggered throughout the traversal, and the grain size, i.e. the size of the subtasks the patch is broken into. We propose an oracle mechanism delivering grain sizes on-the-fly. It takes historical runtime measurements for different patch and grain sizes, as well as the traversal's operations, into account, and it yields reasonable speedups. Neither magic configuration settings nor an expensive pre-tuning phase are necessary. It is an autotuning approach. © 2012 Springer-Verlag.
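The oracle mechanism described above can be sketched as a small runtime-feedback loop: record the elapsed time for each (patch size, grain size) pair seen so far and suggest the historically fastest grain size, with occasional exploration. All class and parameter names below are hypothetical; this is the autotuning idea in miniature, not the authors' implementation.

```python
import random

# A grain-size oracle that learns from historical runtime measurements
# and proposes the best-known grain size on-the-fly, exploring untried
# candidates with a small probability.

class GrainSizeOracle:
    def __init__(self, candidates, explore=0.1):
        self.candidates = candidates
        self.explore = explore
        self.history = {}                 # (patch size, grain size) -> best time seen

    def suggest(self, patch_size):
        if random.random() < self.explore:
            return random.choice(self.candidates)      # keep exploring
        timed = [(self.history.get((patch_size, g), float("inf")), g)
                 for g in self.candidates]
        best_time, best = min(timed)
        return best if best_time < float("inf") else self.candidates[0]

    def record(self, patch_size, grain, elapsed):
        key = (patch_size, grain)
        self.history[key] = min(elapsed, self.history.get(key, float("inf")))

oracle = GrainSizeOracle(candidates=[8, 16, 32, 64], explore=0.0)
# Pretend traversal timings: grain 32 happens to be fastest for this patch size.
for grain, elapsed in [(8, 0.9), (16, 0.5), (32, 0.2), (64, 0.4)]:
    oracle.record(256, grain, elapsed)
best = oracle.suggest(256)                # -> 32
```

In a real solver, `record` would be fed by timers around each patch traversal, so no separate pre-tuning phase is needed.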
Wenjun Ying
2015-01-01
A both space and time adaptive algorithm is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reactions part in the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating efficiency and accuracy of the adaptive algorithm are presented.
Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer
Meliani, Zakaria; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri
2016-01-01
(Abridged) We here continue our effort to model the behaviour of matter orbiting or accreting onto a generic black hole by developing a new numerical code employing advanced techniques geared to solve the equations of general-relativistic hydrodynamics. The new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of AMR techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to compute accurately the electromagnetic emissions from such accretion flows. We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry and performed either in 2D or 3D. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black hole binary interacting with the surrounding ...
A dynamic mesh refinement technique for Lattice Boltzmann simulations on octree-like grids
Neumann, Philipp
2012-04-27
In this contribution, we present our new adaptive Lattice Boltzmann implementation within the Peano framework, with special focus on nanoscale particle transport problems. With the continuum hypothesis not holding anymore on these small scales, new physical effects - such as Brownian fluctuations - need to be incorporated. We explain the overall layout of the application, including memory layout and access, and shortly review the adaptive algorithm. The scheme is validated by different benchmark computations in two and three dimensions. An extension to dynamically changing grids and a spatially adaptive approach to fluctuating hydrodynamics, allowing for the thermalisation of the fluid in particular regions of interest, is proposed. Both dynamic adaptivity and adaptive fluctuating hydrodynamics are validated separately in simulations of particle transport problems. The application of this scheme to an oscillating particle in a nanopore illustrates the importance of Brownian fluctuations in such setups. © 2012 Springer-Verlag.
woptic: optical conductivity with Wannier functions and adaptive k-mesh refinement
Assmann, E; Kuneš, J; Toschi, A; Blaha, P; Held, K
2015-01-01
We present an algorithm for the adaptive tetrahedral integration over the Brillouin zone of crystalline materials, and apply it to compute the optical conductivity, dc conductivity, and thermopower. For these quantities, whose contributions are often localized in small portions of the Brillouin zone, adaptive integration is especially relevant. Our implementation, the woptic package, is tied into the wien2wannier framework and allows including a many-body self energy, e.g. from dynamical mean-field theory (DMFT). Wannier functions and dipole matrix elements are computed with the DFT package Wien2k and Wannier90. For illustration, we show DFT results for fcc-Al and DMFT results for the correlated metal SrVO$_3$.
Characteristics of Mesh Wave Impedance in FDTD Non-Uniform Mesh
REN Wu; LIU Bo; GAO Ben-qing
2005-01-01
In order to increase the precision with which the mesh reflection wave is evaluated, the mesh wave impedance (MWI) is extended to the non-uniform mesh in 1-D and 2-D cases for the first time, on the basis of Yee's positional relation for electromagnetic field components. Characteristics are obtained for a range of mesh sizes and frequencies. The reflection coefficient caused by the non-uniform mesh can then be calculated according to the theory of the equivalent transmission line. By comparing it with that calculated by MWI in the uniform mesh, it is found that the evaluation error can be largely reduced and is in good agreement with that computed directly by the FDTD method. This extension of MWI can be used in the error analysis of complex meshes.
Update on Development of Mesh Generation Algorithms in MeshKit
Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States)]; Vanderzee, Evan [Argonne National Lab. (ANL), Argonne, IL (United States)]; Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)]
2015-09-30
MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.
Automatic Mesh Generation of Hybrid Mesh on Valves in Multiple Positions in Feedline Systems
Ross, Douglass H.; Ito, Yasushi; Dorothy, Fredric W.; Shih, Alan M.; Peugeot, John
2010-01-01
Fluid flow simulations through a valve often require evaluation of the valve in multiple opening positions. A mesh has to be generated for the valve for each position, and compounding the problem is the fact that the valve is typically part of a larger feedline system. In this paper, we propose to develop a system to create meshes for feedline systems with parametrically controlled valve openings. Herein we outline two approaches to generate the meshes for a valve in a feedline system at multiple positions. There are two issues that must be addressed. The first is the creation of the mesh on the valve for multiple positions. The second is the generation of the mesh for the total feedline system including the valve. For generation of the mesh on the valve, we will describe the use of topology matching and mesh generation parameter transfer. For generation of the total feedline system, we will describe two solutions that we have implemented. In both cases the valve is treated as a component in the feedline system. In the first method the geometry of the valve in the feedline system is replaced with a valve at a different opening position. Geometry is created to connect the valve to the feedline system. Then topology for the valve is created, and the portion of the topology for the valve is topology-matched to the standard valve in a different position. The mesh generation parameters are transferred and then the volume mesh for the whole feedline system is generated. The second method enables the user to generate the volume mesh on the valve in multiple open positions external to the feedline system, to insert it into the volume mesh of the feedline system, and to reduce the amount of computer time required for mesh generation, because only two small volume meshes connecting the valve to the feedline mesh need to be updated.
Network Monitoring as a Streaming Analytics Problem
Gupta, Arpit
2016-11-02
Programmable switches make it easier to perform flexible network monitoring queries at line rate, and scalable stream processors make it possible to fuse data streams to answer more sophisticated queries about the network in real-time. Unfortunately, processing such network monitoring queries at high traffic rates requires both the switches and the stream processors to filter the traffic iteratively and adaptively so as to extract only that traffic that is of interest to the query at hand. Others have studied network monitoring in the context of streaming; yet, previous work has not closed the loop in a way that allows network operators to perform streaming analytics for network monitoring applications at scale. To achieve this objective, Sonata allows operators to express a network monitoring query by considering each packet as a tuple and efficiently partitioning each query between the switches and the stream processor through iterative refinement. Sonata extracts only the traffic that pertains to each query, ensuring that the stream processor can scale to traffic rates of several terabits per second. We show with a simple example query involving DNS reflection attacks and traffic traces from one of the world's largest IXPs that Sonata can capture 95% of all traffic pertaining to the query, while reducing the overall data rate by a factor of about 400 and the number of required counters by four orders of magnitude. Copyright 2016 ACM.
Welch, J. A.; Kópházi, J.; Owens, A. R.; Eaton, M. D.
2017-10-01
In this paper a method is presented for the application of energy-dependent spatial meshes applied to the multigroup, second-order, even-parity form of the neutron transport equation using Isogeometric Analysis (IGA). The computation of the inter-group regenerative source terms is based on conservative interpolation by Galerkin projection. The use of Non-Uniform Rational B-splines (NURBS) from the original computer-aided design (CAD) model allows for efficient implementation and calculation of the spatial projection operations while avoiding the complications of matching different geometric approximations faced by traditional finite element methods (FEM). The rate-of-convergence was verified using the method of manufactured solutions (MMS) and found to preserve the theoretical rates when interpolating between spatial meshes of different refinements. The scheme's numerical efficiency was then studied using a series of two-energy group pincell test cases where a significant saving in the number of degrees-of-freedom can be found if the energy group with a complex variation in the solution is refined more than an energy group with a simpler solution function. Finally, the method was applied to a heterogeneous, seven-group reactor pincell where the spatial meshes for each energy group were adaptively selected for refinement. It was observed that by refining selected energy groups a reduction in the total number of degrees-of-freedom for the same total L2 error can be obtained.
Suvorov, A. S.; Sokov, E. M.; V'yushkina, I. A.
2016-09-01
A new method is presented for the automatic refinement of finite element models of complex mechanical-acoustic systems using the results of experimental studies. The method is based on control of the spectral characteristics via selection of the optimal distribution of adjustments to the stiffness of a finite element mesh. The results of testing the method are given to show the possibility of its use to significantly increase the simulation accuracy of vibration characteristics of bodies with arbitrary spatial configuration.
Zone refining of plutonium metal
Blau, Michael S. [Univ. of Idaho, Moscow, ID (United States)
1994-08-01
The zone refining process was applied to Pu metal containing known amounts of impurities. Rod specimens of plutonium metal were melted into and contained in tantalum boats, each of which was passed horizontally through a three-turn, high-frequency coil in such a manner as to cause a narrow molten zone to pass through the Pu metal rod 10 times. The impurity elements Co, Cr, Fe, Ni, Np, U were found to move in the same direction as the molten zone as predicted by binary phase diagrams. The elements Al, Am, and Ga moved in the opposite direction of the molten zone as predicted by binary phase diagrams. As the impurity alloy was zone refined, {delta}-phase plutonium metal crystals were produced. The first few zone refining passes were more effective than each later pass because an oxide layer formed on the rod surface. There was no clear evidence of better impurity movement at the slower zone refining speed. Also, constant or variable coil power appeared to have no effect on impurity movement during a single run (10 passes). This experiment was the first step to developing a zone refining process for plutonium metal.
Quadratically consistent projection from particles to mesh
Duque, Daniel
2016-01-01
The advantage of particle Lagrangian methods in computational fluid dynamics is that advection is accurately modeled. However, this complicates the calculation of space derivatives. If a mesh is employed, it must be updated at each time step. On the other hand, fixed mesh, Eulerian, formulations benefit from the mesh being defined at the beginning of the simulation, but feature non-linear advection terms. It therefore seems natural to combine the two approaches, using a fixed mesh to perform calculations related to space derivatives, and using the particles to advect the information with time. The idea of combining Lagrangian particles and a fixed mesh goes back to Particle-in-Cell methods, and is here considered within the context of the finite element method (FEM) for the fixed mesh, and the particle FEM (pFEM) for the particles. Our results, in agreement with recent works, show that interpolation ("projection") errors, especially from particles to mesh, are the culprits of slow convergence of the method if...
Fatigue strength of MAR-M509 alloy with structure refined by rapid crystallization
M. Mróz
2010-07-01
This study presents test results of high-cycle (N > 2·10^7) fatigue bending strength of MAR-M509 cobalt alloy samples, in the as-cast state and after surface refining with a concentrated stream of heat. Tests were conducted on samples of MAR-M509 alloy casts obtained using the lost-wax method. Cast structure refining was performed with the GTAW method in an argon atmosphere, using a current I = 200 A and an electric arc scanning velocity vs = 250 mm/min. The effect of the rapid crystallization occurring after the fusion process is refinement of the MAR-M509 alloy cast microstructure and a significant improvement in bending fatigue strength.
Zhang, Fang
2011-02-01
Mesh current collectors made of stainless steel (SS) can be integrated into microbial fuel cell (MFC) cathodes constructed of a reactive carbon black and Pt catalyst mixture and a poly(dimethylsiloxane) (PDMS) diffusion layer. It is shown here that the mesh properties of these cathodes can significantly affect performance. Cathodes made from the coarsest mesh (30-mesh) achieved the highest maximum power of 1616 ± 25 mW m-2 (normalized to cathode projected surface area; 47.1 ± 0.7 W m-3 based on liquid volume), while the finest mesh (120-mesh) had the lowest power density (599 ± 57 mW m-2). Electrochemical impedance spectroscopy showed that charge transfer and diffusion resistances decreased with increasing mesh opening size. In MFC tests, the cathode performance was primarily limited by reaction kinetics, and not mass transfer. Oxygen permeability increased with mesh opening size, accounting for the decreased diffusion resistance. At higher current densities, diffusion became a limiting factor, especially for fine mesh with low oxygen transfer coefficients. These results demonstrate the critical nature of the mesh size used for constructing MFC cathodes. © 2010 Elsevier B.V. All rights reserved.
Bauxite Mining and Alumina Refining
Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet and for some operations tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720
Data refinement for true concurrency
Brijesh Dongol
2013-05-01
The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components modify and observe the system state with fine-grained atomicity. Many systems (e.g., multi-core processors, real-time controllers) also exhibit truly concurrent behaviour, where multiple events can occur simultaneously. This paper presents data refinement defined in terms of an interval-based framework, which includes high-level operators that capture non-deterministic expression evaluation. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and continuous systems. We present an interval-based encoding of forward simulation, then prove that our forward simulation rule is sound with respect to our data refinement definition. A number of rules for decomposing forward simulation proofs over both sequential and parallel composition are developed.
Pointing Refinement of SIRTF Images
Masci, F; Moshir, M; Shupe, D; Fowler, J W
2002-01-01
The soon-to-be-launched Space Infrared Telescope Facility (SIRTF) shall produce image data with an a-posteriori pointing knowledge of 1.4 arcsec (1 sigma radial) with a goal of 1.2 arcsec in the International Celestial Reference System (ICRS). To perform robust image coaddition, mosaic generation, extraction and position determination of faint sources, the pointing will need to be refined to better than a few-tenths of an arcsecond. We use a linear-sparse matrix solver to find a "global-minimization" of all relative image offsets in a mosaic from which refined pointings and orientations can be computed. This paper summarizes the pointing-refinement algorithm and presents the results of testing on simulated data.
H(curl) Auxiliary Mesh Preconditioning
Kolev, T V; Pasciak, J E; Vassilevski, P S
2006-08-31
This paper analyzes a two-level preconditioning scheme for H(curl) bilinear forms. The scheme utilizes an auxiliary problem on a related mesh that is more amenable for constructing optimal order multigrid methods. More specifically, we analyze the case when the auxiliary mesh only approximately covers the original domain. The latter assumption is important since it allows for easy construction of nested multilevel spaces on regular auxiliary meshes. Numerical experiments in both two and three space dimensions illustrate the optimal performance of the method.
Engagement of Metal Debris into Gear Mesh
handschuh, Robert F.; Krantz, Timothy L.
2010-01-01
A series of bench-top experiments was conducted to determine the effects of metallic debris being dragged through meshing gear teeth. A test rig that is typically used to conduct contact fatigue experiments was used for these tests. Several sizes of drill material, shim stock and pieces of gear teeth were introduced and then driven through the meshing region. The level of torque required to drive the "chip" through the gear mesh was measured. From the data gathered, chip size sufficient to jam the mechanism can be determined.
Application of mesh network radios to UGS
Calcutt, Wade; Jones, Barry; Roeder, Brent
2008-04-01
During the past five years McQ has been actively pursuing integrating and applying wireless mesh network radios as a communications solution for unattended ground sensor (UGS) systems. This effort has been rewarded with limited levels of success and has ultimately resulted in a corporate position regarding the use of mesh network radios for UGS systems. A discussion into the background of the effort, the challenges of implementing commercial off-the-shelf (COTS) mesh radios with UGSs, the tradeoffs involved, and an overview of the future direction is presented.
Mesh Optimization for Ground Vehicle Aerodynamics
Adrian Gaylard; Essam F Abo-Serie; Nor Elyana Ahmad
2010-01-01
Mesh optimization strategy for estimating accurate drag of a ground vehicle is proposed based on examining the effect of different mesh parameters. The optimized mesh parameters were selected using design of experiment (DOE) method to be able to work in a...
Rigidity Constraints for Large Mesh Deformation
Yong Zhao; Xin-Guo Liu; Qun-Sheng Peng; Hu-Jun Bao
2009-01-01
It is a challenging problem of surface-based deformation to avoid apparent volumetric distortions around largely deformed areas. In this paper, we propose a new rigidity constraint for gradient domain mesh deformation to address this problem. Intuitively the proposed constraint can be regarded as several small cubes defined by the mesh vertices through mean value coordinates. The user interactively specifies the cubes in the regions which are prone to volumetric distortions, and the rigidity constraints could make the mesh behave like a solid object during deformation. The experimental results demonstrate that our constraint is intuitive, easy to use and very effective.
SURFACE MESH PARAMETERIZATION WITH NATURAL BOUNDARY
Ye Ming; Zhu Xiaofeng; Wang Chengtao
2003-01-01
Using the projected curve of surface mesh boundary as parameter domain border, linear mapping parameterization with natural boundary is realized. A fast algorithm for least squares fitting plane of vertices in the mesh boundary is proposed. After the mesh boundary is projected onto the fitting plane, low-pass filtering is adopted to eliminate crossovers, sharp corners and cavities in the projected curve and convert it into an eligible convex parameter domain boundary. In order to facilitate quantitative evaluations of parameterization schemes, three distortion-measuring formulae are presented.
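The least-squares fitting plane of the boundary vertices mentioned in this abstract can be computed from an SVD of the centered vertex coordinates; the following is a minimal sketch of that standard PCA-style formulation (not necessarily the paper's fast algorithm), together with the projection of the boundary onto the plane:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3-D points: the plane passes
    through the centroid, with normal given by the right singular vector
    of the smallest singular value of the centered coordinates."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]                     # centroid, unit normal

def project_to_plane(points, c, normal):
    """Orthogonally project points onto the plane (c, normal)."""
    d = (points - c) @ normal            # signed distances to the plane
    return points - np.outer(d, normal)

# Boundary vertices lying roughly on the z = 0 plane
pts = np.array([[0.0, 0.0, 0.01], [1.0, 0.0, -0.02],
                [1.0, 1.0, 0.015], [0.0, 1.0, 0.0]])
c, n = fit_plane(pts)
flat = project_to_plane(pts, c, n)
```

After projection the flattened boundary lies exactly in the fitted plane, ready for the low-pass filtering step the abstract describes.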
Kennedy, A. T.; Cornford, S. L.; Lunt, D. J.; Payne, A. J.
2016-12-01
Paleo ice sheet models provide valuable tools to test our understanding of the cryosphere-ocean-climate system and how it might respond under warm conditions. However, the long time scales and uncertainty in boundary conditions required for paleo simulations usually necessitate the use of highly parameterised and simplified model techniques, and not the use of state-of-the-art models. One such state-of-the-art model which has been used for the present day is BISICLES which, due to its adaptive mesh refinement capabilities, can explicitly model highly localised and dynamic features such as grounding line migration and ice streams. We will show results testing the suitability of using such a model for a paleo application, including the model's sensitivity to uncertainty in the ice volume, bedrock properties and climatic/oceanic forcing. We will also show preliminary results of modelling the Antarctic Ice Sheet state at the Mid-Pliocene, a period when the ice sheet is expected to have contributed many metres' worth of global mean sea level increase. We will highlight the range of ice mass loss under different parameterisation and forcing schemes and the level of agreement with previous data and modelling studies, e.g. Miller et al (2012) and DeConto & Pollard (2016). It remains too computationally expensive to run BISICLES for a full glacial-interglacial cycle for example, but this model could still prove valuable for assessing the role of highly dynamic features in past ice sheets. This work will help bridge a gap in understanding of the strengths and weaknesses of the simpler models used in paleo ice sheet modelling compared to the state-of-the-art models used for present day and future prediction.
Three-dimensional simulation of slip-streaming in vehicle aerodynamics
Mitra, Saurav
2013-11-01
Simulation of slip-streaming in vehicle aerodynamics is computationally challenging. Resolving turbulent wakes and estimating drag between two co-linear vehicles with a smaller number of computational cells requires advanced techniques. In this study, the variation of drag reduction and increase arising from different inter-spacings between two Ahmed vehicle bodies (a canonical vehicle geometry with a 30° slant-back angle) is presented. The computational fluid dynamics solver CONVERGE was used for its automatic mesh refinement (AMR) capabilities. AMR is based on the second derivative of shear and normal components of velocity gradients and was used to resolve the flow around geometric features such as the frontal area, the slant back, etc. A steady-state density-based solver is used where each cell has its own pseudo time-step based on the local numerical stability criterion. The RNG k-ɛ turbulence model was used to model turbulence. The non-dimensional inter-spacing, based on vehicle length, was varied from 0.1 to 2.0. The largest grid size used here was 0.04 m and the smallest was 0.005 m to resolve the turbulent wake, which is characterized by a strong vortex system with longitudinal counter-rotating vortices arising from the slant back.
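The second-derivative refinement criterion described in this abstract reduces, in one dimension, to flagging cells where a finite-difference curvature estimate exceeds a threshold. The sketch below is a hypothetical caricature of that idea (the function names and criterion details are illustrative, not CONVERGE's actual implementation):

```python
import numpy as np

def refine_flags(u, dx, threshold):
    """Flag interior cells of a 1-D field u for refinement where the
    magnitude of a central-difference second derivative exceeds a
    threshold. Illustrative stand-in for a second-derivative AMR
    criterion; boundary cells are never flagged here."""
    d2 = np.zeros_like(u)
    d2[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return np.abs(d2) > threshold

# A kink in an otherwise linear profile is the only location with
# large curvature, so it is the only flagged cell
u = np.abs(np.linspace(-1.0, 1.0, 21))
flags = refine_flags(u, dx=0.1, threshold=10.0)
```

In a production AMR solver the same test is applied per cell to components of the velocity gradient, and flagged cells are subdivided until a minimum grid size (0.005 m in the study above) is reached.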
DSm Vector Spaces of Refined Labels
Kandasamy, W B Vasantha
2011-01-01
In this book the authors introduce the notion of DSm vector spaces of refined labels. They also realize the refined labels as a plane and an n-dimensional space. Further, using these refined labels, several algebraic structures are defined. Finally, the DSm semivector space of refined labels is described. The authors also propose some research problems.
Refining Nodes and Edges of State Machines
Hallerstede, Stefan; Snook, Colin
2011-01-01
State machines are hierarchical automata that are widely used to structure complex behavioural specifications. We develop two notions of refinement of state machines, node refinement and edge refinement. We compare the two notions by means of examples and argue that, by adopting simple convention...... refinement theory and UML-B state machine refinement influences the style of node refinement. Hence we propose a method with direct proof of state machine refinement avoiding the detour via Event-B that is needed by UML-B....
Assignment of fields from particles to mesh
Duque, Daniel
2016-01-01
In Computational Fluid Dynamics there have been many attempts to combine the power of a fixed mesh on which to carry out spatial calculations with that of a set of particles that moves following the velocity field. These ideas indeed go back to Particle-in-Cell methods, proposed about 60 years ago. Of course, some procedure is needed to transfer field information between particles and mesh. There are many possible choices for this "assignment", or "projection". Several requirements may guide this choice. Two well-known ones are conservativity and stability, which apply to volume integrals of the fields. An additional one is here considered: preservation of information. This means that mesh interpolation, followed by mesh assignment, should leave the field values invariant. The resulting methods are termed "mass" assignments due to their strong similarities with the Finite Element Method. We test several procedures, including the well-known FLIP, on three scenarios: simple 1D convection, 2D convection of Zales...
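The particle-to-mesh "assignment" discussed in this abstract is, in its simplest Particle-in-Cell form, a weighted scatter of particle values onto nearby mesh nodes. The following is a minimal 1-D Cloud-in-Cell sketch of that classical transfer; the paper's "mass" assignments additionally involve an FEM mass matrix, which is not reproduced here:

```python
import numpy as np

def assign_cic(x_p, f_p, n_nodes, L):
    """Cloud-in-Cell assignment of particle values f_p at positions x_p
    onto a periodic 1-D mesh with n_nodes equally spaced nodes: each
    particle contributes to its two bracketing nodes with linear weights,
    and each node returns the weighted average of its contributions."""
    dx = L / n_nodes
    f_m = np.zeros(n_nodes)          # weighted sums of particle values
    w_m = np.zeros(n_nodes)          # accumulated weights
    for x, f in zip(x_p, f_p):
        i = int(x // dx)
        frac = x / dx - i            # fractional distance to the left node
        for j, w in ((i % n_nodes, 1 - frac), ((i + 1) % n_nodes, frac)):
            f_m[j] += w * f
            w_m[j] += w
    return f_m / np.where(w_m > 0, w_m, 1)

# Particles sitting exactly on nodes are assigned exactly,
# a simple instance of the "preservation of information" requirement
vals = assign_cic([0.0, 0.25, 0.5], [1.0, 2.0, 3.0], n_nodes=4, L=1.0)
```

The weights of each particle sum to one, which is the conservativity requirement the abstract mentions; stability and information preservation constrain the choice further.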
Shape space exploration of constrained meshes
Yang, Yongliang
2011-01-01
We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc.
LR: Compact connectivity representation for triangle meshes
Gurung, T; Luffel, M; Lindstrom, P; Rossignac, J
2011-01-28
We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.
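LR's compression scheme is intricate, but the constant-time adjacency operators it supports can be illustrated with the common uncompressed baseline it improves on: a corner table, which stores roughly three opposite-corner references per triangle where LR needs about 1.08. The sketch below builds that baseline for a manifold triangle mesh (illustrative only, not the LR construction):

```python
def build_opposites(triangles):
    """Corner-table adjacency: corner c = 3*t + i denotes vertex i of
    triangle t. Returns (V, opp) where V[c] is the vertex at corner c and
    opp[c] is the corner opposite the same edge in the neighbouring
    triangle, or -1 on the boundary. Lookups via opp are constant-time."""
    V = [v for tri in triangles for v in tri]        # corner -> vertex index
    nxt = lambda c: c - c % 3 + (c + 1) % 3          # next corner in triangle
    prv = lambda c: c - c % 3 + (c + 2) % 3          # previous corner
    # Each corner faces the directed edge (V[nxt(c)], V[prv(c)])
    directed = {(V[nxt(c)], V[prv(c)]): c for c in range(len(V))}
    # Its opposite corner faces the same edge traversed in reverse
    opp = [directed.get((V[prv(c)], V[nxt(c)]), -1) for c in range(len(V))]
    return V, opp

# Two triangles sharing the edge (1, 2): only the corners opposite that
# shared edge (corner 0 and corner 5) have non-boundary opposites
V, opp = build_opposites([(0, 1, 2), (2, 1, 3)])
```

Like LR, this structure supports fixed-connectivity traversal only; any connectivity change requires a rebuild.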
Spacetime Meshing for Discontinuous Galerkin Methods
Thite, Shripad Vidyadhar
2008-01-01
Spacetime discontinuous Galerkin (SDG) finite element methods are used to solve PDEs involving space and time variables that arise from wave propagation phenomena in important applications in science and engineering. To support an accurate and efficient solution procedure using SDG methods and to exploit the flexibility of these methods, we give a meshing algorithm to construct an unstructured simplicial spacetime mesh over an arbitrary simplicial space domain. Our algorithm is the first spacetime meshing algorithm suitable for efficient solution of nonlinear phenomena in anisotropic media using novel discontinuous Galerkin finite element methods for implicit solutions directly in spacetime. Given a triangulated d-dimensional Euclidean space domain M (a simplicial complex) and initial conditions of the underlying hyperbolic spacetime PDE, we construct an unstructured simplicial mesh of the (d+1)-dimensional spacetime domain M x [0,infinity). Our algorithm uses a near-optimal number of spacetime elements, ea...
Metal Mesh Filters for Terahertz Receivers Project
National Aeronautics and Space Administration — The technical objective of this SBIR program is to develop and demonstrate metal mesh filters for use in NASA's low noise receivers for terahertz astronomy and...
Mesh Processing in Medical Image Analysis
The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....
Obtuse triangle suppression in anisotropic meshes
Sun, Feng
2011-12-01
Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with far fewer obtuse triangles than existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.
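For context, the CVT baseline that this abstract modifies is a Lloyd iteration. Below is a minimal discrete (sampled-domain) Lloyd step in Python using the ordinary Euclidean metric; the paper's contribution is to replace that metric with a hexagonal Minkowski one, which is not shown here. All names are ours, not the authors'.

```python
import numpy as np

def lloyd_step(sites, samples):
    """One Lloyd iteration of a standard (Euclidean) CVT on a sampled domain:
    assign each sample to its nearest site, then move each site to the
    centroid of its assigned samples."""
    d = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    new_sites = sites.copy()
    for i in range(len(sites)):
        mine = samples[owner == i]
        if len(mine):
            new_sites[i] = mine.mean(axis=0)
    return new_sites

def cvt_energy(sites, samples):
    """Mean squared distance from each sample to its nearest site."""
    d = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).mean())

rng = np.random.default_rng(0)
samples = rng.random((4000, 2))   # sampled unit-square domain
sites = rng.random((16, 2))
e0 = cvt_energy(sites, samples)
for _ in range(20):
    sites = lloyd_step(sites, samples)
e1 = cvt_energy(sites, samples)
```

Each Lloyd step is non-increasing in the quantization energy, which is why the iteration converges to a centroidal configuration.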
Removal of line artifacts on mesh boundary in computer generated hologram by mesh phase matching.
Park, Jae-Hyeung; Yeom, Han-Ju; Kim, Hee-Jae; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo
2015-03-23
Mesh-based computer generated hologram enables realistic and efficient representation of three-dimensional scene. However, the dark line artifacts on the boundary between neighboring meshes are frequently observed, degrading the quality of the reconstruction. In this paper, we propose a simple technique to remove the dark line artifacts by matching the phase on the boundary of neighboring meshes. The feasibility of the proposed method is confirmed by the numerical and optical reconstruction of the generated hologram.
MHD simulations on an unstructured mesh
Strauss, H.R. [New York Univ., NY (United States); Park, W.; Belova, E.; Fu, G.Y. [Princeton Univ., NJ (United States). Plasma Physics Lab.; Longcope, D.W. [Univ. of Montana, Missoula, MT (United States); Sugiyama, L.E. [Massachusetts Inst. of Tech., Cambridge, MA (United States)
1998-12-31
Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.
Vector field processing on triangle meshes
De Goes, Fernando; Desbrun, Mathieu; Tong, Yiying
2015-01-01
While scalar fields on surfaces have been staples of geometry processing, the use of tangent vector fields has steadily grown in geometry processing over the last two decades: they are crucial to encoding directions and sizing on surfaces as commonly required in tasks such as texture synthesis, non-photorealistic rendering, digital grooming, and meshing. There are, however, a variety of discrete representations of tangent vector fields on triangle meshes, and each approach offers different tr...
Refining analgesia strategies using lasers.
Hampshire, Victoria
2015-08-01
Sound programs for the humane care and use of animals within research facilities incorporate experimental refinements such as multimodal approaches for pain management. These approaches can include non-traditional strategies along with more established ones. The use of lasers for pain relief is growing in popularity among companion animal veterinary practitioners and technologists. Therefore, its application in the research sector warrants closer consideration.
On Interaction Refinement in Middleware
Truyen, Eddy; Jørgensen, Bo Nørregaard; Joosen, Wouter;
2000-01-01
components together. We have examined a reflective technique that improves the dynamics of this gluing process such that interaction between components can be refined at run-time. In this paper, we show how we have used this reflective technique to dynamically integrate into the architecture of middleware...
Unstructured Mesh Movement and Viscous Mesh Generation for CFD-Based Design Optimization Project
National Aeronautics and Space Administration — The innovations proposed are twofold: 1) a robust unstructured mesh movement method able to handle isotropic (Euler), anisotropic (viscous), mixed element (hybrid)...
Mesh geometry impact on Micromegas performance with an Exchangeable Mesh prototype
Kuger, F., E-mail: fabian.kuger@cern.ch [CERN, Geneva (Switzerland); Julius-Maximilians-Universität, Würzburg (Germany); Bianco, M.; Iengo, P. [CERN, Geneva (Switzerland); Sekhniaidze, G. [CERN, Geneva (Switzerland); Universita e INFN, Napoli (Italy); Veenhof, R. [Uludağ University, Bursa (Turkey); Wotschack, J. [CERN, Geneva (Switzerland)
2016-07-11
The reconstruction precision of gaseous detectors is limited by losses of primary electrons during signal formation. In addition to common gas-related losses, like attachment, Micromegas suffer from electron absorption during their transition through the micro mesh. This study aims for a deepened understanding of electron losses and their dependency on the mesh geometry. It combines experimental results obtained with a newly designed Exchangeable Mesh Micromegas (ExMe) and advanced microscopic-tracking simulations (ANSYS and Garfield++) of electron drift and mesh transition.
Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems
Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine
2016-12-01
A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt meshes in moving-boundary problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of initial meshes. In most situations, MMAs distribute mesh deformation while preserving good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has proven efficient for adapting meshes to various realistic aerodynamic motions: a symmetric wing that has suffered large tip bending and twisting, and the high-lift components of a swept wing that has moved to different flight stages. Finally, the fluid flow problem has been solved on the moved meshes, producing results close to experimental ones. However, for situations where moving boundaries are too close to each other, further improvements are needed, or other approaches should be taken, such as an overset grid method.
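The IDW step of the pipeline above can be sketched compactly. The following Python function is our own simplified stand-in (function and parameter names are invented): it propagates prescribed boundary displacements to interior mesh nodes by inverse-distance weighting.

```python
import numpy as np

def idw_displace(interior, boundary, boundary_disp, power=3.0):
    """Move interior nodes by an inverse-distance-weighted average of the
    prescribed boundary displacements (a toy stand-in for the paper's MMA)."""
    moved = np.empty_like(interior)
    for k, p in enumerate(interior):
        d = np.linalg.norm(boundary - p, axis=1)
        w = 1.0 / np.maximum(d, 1e-12) ** power   # guard against d == 0
        moved[k] = p + (w[:, None] * boundary_disp).sum(axis=0) / w.sum()
    return moved

# prescribed motion: the right edge of a unit square shifts by 0.2 in x
boundary = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.2, 0.0], [0.2, 0.0]])
interior = np.array([[0.5, 0.5], [0.9, 0.5]])
moved = idw_displace(interior, boundary, disp)
```

Nodes closer to the moving boundary inherit more of its displacement, which is what lets IDW distribute deformation smoothly before any smoothing or untangling pass.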
Denner, Fabian; van Wachem, Berend G. M.
2015-10-01
Total variation diminishing (TVD) schemes are a widely applied group of monotonicity-preserving advection differencing schemes for partial differential equations in numerical heat transfer and computational fluid dynamics. These schemes are typically designed for one-dimensional problems or multidimensional problems on structured equidistant quadrilateral meshes. Practical applications, however, often involve complex geometries that cannot be represented by Cartesian meshes and, therefore, necessitate the application of unstructured meshes, which require a more sophisticated discretisation to account for their additional topological complexity. In principle, TVD schemes are applicable to unstructured meshes, however, not all the data required for TVD differencing is readily available on unstructured meshes, and the solution suffers from considerable numerical diffusion as a result of mesh skewness. In this article we analyse TVD differencing on unstructured three-dimensional meshes, focusing on the non-linearity of TVD differencing and the extrapolation of the virtual upwind node. Furthermore, we propose a novel monotonicity-preserving correction method for TVD schemes that significantly reduces numerical diffusion caused by mesh skewness. The presented numerical experiments demonstrate the importance of accounting for the non-linearity introduced by TVD differencing and of imposing carefully chosen limits on the extrapolated virtual upwind node, as well as the efficacy of the proposed method to correct mesh skewness.
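As a concrete illustration of the limiter machinery this abstract discusses, here is a minimal one-dimensional MUSCL advection step with a minmod limiter on a periodic equidistant grid, in Python. This is a textbook sketch, not the authors' unstructured scheme, and all names are ours.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(u, c):
    """One step of 1D linear advection (positive speed) with limited linear
    (MUSCL/minmod) reconstruction on a periodic grid; c is the CFL number."""
    dul = u - np.roll(u, 1)            # backward differences
    dur = np.roll(u, -1) - u           # forward differences
    slope = minmod(dul, dur)
    face = u + 0.5 * (1.0 - c) * slope # upwind-biased face value at i + 1/2
    return u - c * (face - np.roll(face, 1))

u = np.zeros(100)
u[40:60] = 1.0                         # square wave: a worst case for limiters
for _ in range(50):
    u = advect_tvd(u, 0.5)
```

Because the scheme is TVD, the advected square wave stays within its initial bounds and no new extrema appear, at the cost of some clipping near the discontinuities.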
Discrete differential geometry: the nonplanar quadrilateral mesh.
Twining, Carole J; Marsland, Stephen
2012-06-01
We consider the problem of constructing a discrete differential geometry defined on nonplanar quadrilateral meshes. Physical models on discrete nonflat spaces are of inherent interest, as well as being used in applications such as computation for electromagnetism, fluid mechanics, and image analysis. However, the majority of analysis has focused on triangulated meshes. We consider two approaches: discretizing the tensor calculus, and a discrete mesh version of differential forms. While these two approaches are equivalent in the continuum, we show that this is not true in the discrete case. Nevertheless, we show that it is possible to construct mesh versions of the Levi-Civita connection (and hence the tensorial covariant derivative and the associated covariant exterior derivative), the torsion, and the curvature. We show how discrete analogs of the usual vector integral theorems are constructed in such a way that the appropriate conservation laws hold exactly on the mesh, rather than only as approximations to the continuum limit. We demonstrate the success of our method by constructing a mesh version of classical electromagnetism and discuss how our formalism could be used to deal with other physical models, such as fluids.
Automatic Scheme Selection for Toolkit Hex Meshing
TAUTGES,TIMOTHY J.; WHITE,DAVID R.
1999-09-27
Current hexahedral mesh generation techniques rely on a set of meshing tools, which when combined with geometry decomposition leads to an adequate mesh generation process. Of these tools, sweeping tends to be the workhorse algorithm, accounting for at least 50% of most meshing applications. Constraints which must be met for a volume to be sweepable are derived, and it is proven that these constraints are necessary but not sufficient conditions for sweepability. This paper also describes a new algorithm for detecting extruded or sweepable geometries. This algorithm, based on these constraints, uses topological and local geometric information, and is more robust than feature recognition-based algorithms. A method for computing sweep dependencies in volume assemblies is also given. The auto sweep detect and sweep grouping algorithms have been used to reduce interactive user time required to generate all-hexahedral meshes by filtering out non-sweepable volumes needing further decomposition and by allowing concurrent meshing of independent sweep groups. Parts of the auto sweep detect algorithm have also been used to identify independent sweep paths, for use in volume-based interval assignment.
Fire performance of basalt FRP mesh reinforced HPC thin plates
Hulin, Thomas; Hodicky, Kamil; Schmidt, Jacob Wittrup;
2013-01-01
An experimental program was carried out to investigate the influence of basalt FRP (BFRP) reinforcing mesh on the fire behaviour of thin high performance concrete (HPC) plates applied to sandwich elements. Samples with BFRP mesh were compared to samples with no mesh, samples with steel mesh...
Refining and blending of aviation turbine fuels.
White, R D
1999-02-01
Aviation turbine fuels (jet fuels) are similar to other petroleum products that have a boiling range of approximately 300°F to 550°F. Kerosene and No. 1 grades of fuel oil, diesel fuel, and gas turbine oil share many similar physical and chemical properties with jet fuel. The similarity among these products should allow toxicology data on one material to be extrapolated to the others. Refineries in the USA manufacture jet fuel to meet industry standard specifications. Civilian aircraft primarily use Jet A or Jet A-1 fuel as defined by ASTM D 1655. Military aircraft use JP-5 or JP-8 fuel as defined by MIL-T-5624R or MIL-T-83133D, respectively. The freezing point and flash point are the principal differences between the finished fuels. Common refinery processes that produce jet fuel include distillation, caustic treatment, hydrotreating, and hydrocracking. Each of these refining processes may be the final step to produce jet fuel. Sometimes blending of two or more of these refinery process streams is needed to produce jet fuel that meets the desired specifications. Chemical additives allowed for use in jet fuel are also defined in the product specifications. In many cases, the customer rather than the refinery will put additives into the fuel to meet their specific storage or flight condition requirements.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
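The recursive Cartesian subdivision described above can be sketched with a small quadtree in Python. This is a 2D toy with a crude geometric refinement criterion of our own; the paper's cut-cell construction and flow solver are omitted.

```python
import math

class Cell:
    """A square Cartesian cell in a quadtree; children is None for leaf cells."""
    def __init__(self, x, y, size, depth=0):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = None

    def refine(self, needs_refinement, max_depth=6):
        """Recursively subdivide every cell the supplied criterion flags."""
        if self.depth < max_depth and needs_refinement(self):
            h = self.size / 2
            self.children = [Cell(self.x + i * h, self.y + j * h, h, self.depth + 1)
                             for j in (0, 1) for i in (0, 1)]
            for child in self.children:
                child.refine(needs_refinement, max_depth)

    def leaves(self):
        if self.children is None:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

def near_body(cell):
    """Flag cells whose centre lies near a circular 'body' of radius 0.3."""
    cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
    return abs(math.hypot(cx, cy) - 0.3) < cell.size

root = Cell(-1.0, -1.0, 2.0)   # one cell encompassing the whole flow domain
root.refine(near_body)
n_leaves = len(list(root.leaves()))
```

The tree gives cell-to-cell connectivity for free, and solution-adaptive refinement amounts to calling `refine` again with a criterion built from flow gradients instead of geometry.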
A multi-mesh finite element method for Lagrange elements of arbitrary degree
Witkowski, Thomas
2010-01-01
We consider within a finite element approach the usage of different adaptively refined meshes for different variables in systems of nonlinear, time-dependent PDEs. To resolve different solution behaviours of these variables, the meshes can be independently adapted. The resulting linear systems are usually much smaller, when compared to the usage of a single mesh, and the overall computational runtime can be more than halved in such cases. Our multi-mesh method works for Lagrange finite elements of arbitrary degree and is independent of the spatial dimension. The approach is well defined, and can be implemented in existing adaptive finite element codes with minimal effort. We show computational examples in 2D and 3D ranging from dendritic growth to solid-solid phase-transitions. A further application comes from fluid dynamics where we demonstrate the applicability of the approach for solving the incompressible Navier-Stokes equations with Lagrange finite elements of the same order for velocity and pressure. The...
Earth As An Unstructured Mesh and Its Recovery from Seismic Waveform Data
De Hoop, M. V.
2015-12-01
We consider multi-scale representations of Earth's interior from the point of view of their possible recovery from multi- and high-frequency seismic waveform data. These representations are intrinsically connected to (geologic, tectonic) structures, that is, geometric parametrizations of Earth's interior. Indeed, we address the construction and recovery of such parametrizations using local iterative methods with appropriately designed data misfits and guaranteed convergence. The geometric parametrizations contain interior boundaries (defining, for example, faults, salt bodies, tectonic blocks, slabs) which can, in principle, be obtained from successive segmentation. We make use of unstructured meshes. For the adaptation and recovery of an unstructured mesh we introduce an energy functional which is derived from the Hausdorff distance. Via an augmented Lagrangian method, we incorporate the mentioned data misfit. The recovery is constrained by shape optimization of the interior boundaries, and is reminiscent of Hausdorff warping. We use elastic deformation via finite elements as a regularization while following a two-step procedure. The first step is an update determined by the energy functional; in the second step, we modify the outcome of the first step where necessary to ensure that the new mesh is regular. This modification entails an array of techniques including topology correction involving interior boundary contacting and breakup, edge warping and edge removal. We implement this as a feed-back mechanism from volume to interior boundary mesh optimization. We invoke and apply a criterion of mesh quality control for coarsening, and for dynamical local multi-scale refinement. We present a novel (fluid-solid) numerical framework based on the Discontinuous Galerkin method.
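The Hausdorff distance at the heart of the abstract's energy functional is easy to state concretely. Here is a minimal Python sketch of the symmetric Hausdorff distance between two finite point sets, a discrete stand-in for the surface-to-surface distance the paper builds on:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets A and B:
    the largest distance from any point of one set to the other set."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0]])
```

Minimizing an energy derived from this quantity drives one mesh boundary toward another, which is the sense in which the recovery is "reminiscent of Hausdorff warping".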
Block-structured adaptive meshes and reduced grids for atmospheric general circulation models.
Jablonowski, Christiane; Oehmke, Robert C; Stout, Quentin F
2009-11-28
Adaptive mesh refinement techniques offer a flexible framework for future variable-resolution climate and weather models since they can focus their computational mesh on certain geographical areas or atmospheric events. Adaptive meshes can also be used to coarsen a latitude-longitude grid in polar regions. This allows for the so-called reduced grid setups. A spherical, block-structured adaptive grid technique is applied to the Lin-Rood finite-volume dynamical core for weather and climate research. This hydrostatic dynamics package is based on a conservative and monotonic finite-volume discretization in flux form with vertically floating Lagrangian layers. The adaptive dynamical core is built upon a flexible latitude-longitude computational grid and tested in two- and three-dimensional model configurations. The discussion is focused on static mesh adaptations and reduced grids. The two-dimensional shallow water setup serves as an ideal testbed and allows the use of shallow water test cases like the advection of a cosine bell, moving vortices, a steady-state flow, the Rossby-Haurwitz wave or cross-polar flows. It is shown that reduced grid configurations are viable candidates for pure advection applications but should be used moderately in nonlinear simulations. In addition, static grid adaptations can be successfully used to resolve three-dimensional baroclinic waves in the storm-track region.
Parallel octree-based hexahedral mesh generation for eulerian to lagrangian conversion.
Staten, Matthew L.; Owen, Steven James
2010-09-01
Computational simulation must often be performed on domains where materials are represented as scalar quantities or volume fractions at cell centers of an octree-based grid. Common examples include bio-medical, geotechnical or shock physics calculations where interface boundaries are represented only as discrete statistical approximations. In this work, we introduce new methods for generating Lagrangian computational meshes from Eulerian-based data. We focus specifically on shock physics problems that are relevant to ASC codes such as CTH and Alegra. New procedures for generating all-hexahedral finite element meshes from volume fraction data are introduced. A new primal-contouring approach is introduced for defining a geometric domain. New methods for refinement, node smoothing, resolving non-manifold conditions and defining geometry are also introduced as well as an extension of the algorithm to handle tetrahedral meshes. We also describe new scalable MPI-based implementations of these procedures. We describe a new software module, Sculptor, which has been developed for use as an embedded component of CTH. We also describe its interface and its use within the mesh generation code, CUBIT. Several examples are shown to illustrate the capabilities of Sculptor.
Algorithm Refinement for Stochastic Partial Differential Equations. I. Linear Diffusion
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2002-10-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. Results from a variety of numerical experiments are presented for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a nonstochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except in particle regions away from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
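The particle half of the hybrid scheme, independent random walkers reproducing Fickian diffusion, can be sketched in a few lines of Python. This illustrates only the walker component, not the flux-matched coupling to the continuum solver; parameter names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, steps, n = 1.0, 0.01, 200, 20000

# independent Gaussian random walkers, all started at the origin
x = np.zeros(n)
for _ in range(steps):
    x += rng.normal(0.0, np.sqrt(2 * D * dt), n)

t = steps * dt   # elapsed time
```

The ensemble should reproduce the diffusive spreading law, variance equal to 2Dt, which is the consistency check that the particle region of the hybrid algorithm must satisfy before coupling to the finite-difference region.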
Niski, K; Purnomo, B; Cohen, J
2006-11-06
Previous algorithms for view-dependent level of detail provide local mesh refinements either at the finest granularity or at a fixed, coarse granularity. The former provides triangle-level adaptation, often at the expense of heavy CPU usage and low triangle rendering throughput; the latter improves CPU usage and rendering throughput by operating on groups of triangles. We present a new multiresolution hierarchy and associated algorithms that provide adaptive granularity. This multi-grained hierarchy allows independent control of the number of hierarchy nodes processed on the CPU and the number of triangles to be rendered on the GPU. We employ a seamless texture atlas style of geometry image as a GPU-friendly data organization, enabling efficient rendering and GPU-based stitching of patch borders. We demonstrate our approach on both large triangle meshes and terrains with up to billions of vertices.
Wang, Tianyang; Wüchner, Roland; Sicklinger, Stefan; Bletzinger, Kai-Uwe
2016-05-01
This paper investigates data mapping between non-matching meshes and geometries in fluid-structure interaction. Mapping algorithms for surface meshes, including nearest-element interpolation, the standard mortar method and the dual mortar method, are studied and comparatively assessed. The inconsistency problem of mortar methods at curved edges of fluid-structure interfaces is solved by a newly developed consistency-enforcing approach, which is robust enough to handle even the case in which fluid boundary facets are not in contact with structure boundary elements at all, due to high fluid refinement. Besides, tests with representative geometries show that the mortar methods are suitable for conservative mapping, but it is better to use nearest-element interpolation in a direct way; moreover, the dual mortar method can give slight oscillations. This work also develops a co-rotating mapping algorithm for 1D beam elements. Its novelty lies in its ability to handle large displacements and rotations.
White Dwarf Mergers on Adaptive Meshes I. Methodology and Code Verification
Katz, Max P; Calder, Alan C; Swesty, F Douglas; Almgren, Ann S; Zhang, Weiqun
2015-01-01
The Type Ia supernova progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf merger scenario, which has the potential to naturally explain many of the observed characteristics of Type Ia supernovae. To date there have been relatively few self-consistent simulations of merging white dwarf systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hy...
Refining Visually Detected Object poses
Holm, Preben; Petersen, Henrik Gordon
2010-01-01
Automated industrial assembly today requires that the 3D position and orientation (hereafter "pose") of the objects to be assembled are known precisely. Today this precision is mostly established by a dedicated mechanical object alignment system. However, such systems are often dedicated...... that enables direct assembly. Conventional vision systems and laser triangulation systems can locate randomly placed known objects (with 3D CAD models available) with some accuracy, but not necessarily a good enough accuracy. In this paper, we present a novel method for refining the pose accuracy of an object...... that has been located based on the appearance as detected by a monocular camera. We illustrate the quality of our refinement method experimentally....
A Streaming Language Implementation of the Discontinuous Galerkin Method
Barth, Timothy; Knight, Timothy
2005-01-01
We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.
Iterative Goal Refinement for Robotics
2014-06-01
Iterative Goal Refinement for Robotics. Mark Roberts, Swaroop Vattam, Ronald Alford, Bryan Auslander, Justin Karneeb, Matthew Molineaux... robotics researchers and practitioners. We present a goal lifecycle and define a formal model for GR that (1) relates distinct disciplines concerning...researchers to collaborate in exploring this exciting frontier. 1. Introduction: Robotic systems often act using incomplete models in environments
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
Conservative interpolation between general spherical meshes
E. Kritsikis
2015-06-01
An efficient, local, explicit, second-order, conservative interpolation algorithm between spherical meshes is presented. The cells composing the source and target meshes may be either spherical polygons or longitude-latitude quadrilaterals. Second-order accuracy is obtained by piecewise-linear finite volume reconstruction over the source mesh. Global conservation is achieved through the introduction of a supermesh, whose cells are all possible intersections of source and target cells. Areas and intersections are computed exactly to yield a geometrically exact method. The main efficiency bottleneck caused by the construction of the supermesh is overcome by adopting tree-based data structures and algorithms, from which the mesh connectivity can also be deduced efficiently. The theoretical second-order accuracy is verified using a smooth test function and pairs of meshes commonly used for atmospheric modelling. Experiments confirm that the most expensive operations, especially the supermesh construction, have O(N log N) computational cost. The method presented is meant to be incorporated in pre- or post-processing atmospheric modelling pipelines, or directly into models for flexible input/output. It could also serve as a basis for conservative coupling between model components, e.g. atmosphere and ocean.
Conservative interpolation between general spherical meshes
Kritsikis, Evaggelos; Aechtner, Matthias; Meurdesoif, Yann; Dubos, Thomas
2017-01-01
An efficient, local, explicit, second-order, conservative interpolation algorithm between spherical meshes is presented. The cells composing the source and target meshes may be either spherical polygons or latitude-longitude quadrilaterals. Second-order accuracy is obtained by piecewise-linear finite-volume reconstruction over the source mesh. Global conservation is achieved through the introduction of a supermesh, whose cells are all possible intersections of source and target cells. Areas and intersections are computed exactly to yield a geometrically exact method. The main efficiency bottleneck caused by the construction of the supermesh is overcome by adopting tree-based data structures and algorithms, from which the mesh connectivity can also be deduced efficiently. The theoretical second-order accuracy is verified using a smooth test function and pairs of meshes commonly used for atmospheric modelling. Experiments confirm that the most expensive operations, especially the supermesh construction, have O(N log N) computational cost. The method presented is meant to be incorporated in pre- or post-processing atmospheric modelling pipelines, or directly into models for flexible input/output. It could also serve as a basis for conservative coupling between model components, e.g., atmosphere and ocean.
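The 1D analogue of the supermesh construction makes the conservation argument concrete: every source/target cell intersection transfers its exact overlap integral. Below is a minimal first-order Python sketch with names of our own; the paper works on spherical meshes with tree-accelerated intersection, neither of which is shown.

```python
def conservative_remap(src_edges, src_vals, tgt_edges):
    """First-order conservative remap in 1D. Each source/target interval
    intersection (a 1D 'supermesh' cell) transfers its exact overlap mass,
    so the total integral is preserved by construction."""
    masses = [0.0] * (len(tgt_edges) - 1)
    for i in range(len(src_edges) - 1):
        for j in range(len(tgt_edges) - 1):
            lo = max(src_edges[i], tgt_edges[j])
            hi = min(src_edges[i + 1], tgt_edges[j + 1])
            if hi > lo:                       # non-empty intersection
                masses[j] += src_vals[i] * (hi - lo)
    widths = [tgt_edges[j + 1] - tgt_edges[j] for j in range(len(tgt_edges) - 1)]
    return [m / w for m, w in zip(masses, widths)]

src_edges, src_vals = [0.0, 0.5, 1.0], [1.0, 3.0]
tgt_edges = [0.0, 0.25, 1.0]
tgt_vals = conservative_remap(src_edges, src_vals, tgt_edges)
```

The quadratic loop over cell pairs is exactly the bottleneck the paper removes with tree-based intersection search; the conservation property is independent of that acceleration.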
Karipineni, Farah; Joshi, Priya; Parsikia, Afshin; Dhir, Teena; Joshi, Amit R T
2016-03-01
Laparoscopic-assisted ventral hernia repair (LAVHR) with mesh is well established as the preferred technique for hernia repair. We sought to determine whether primary fascial closure and/or overlap of the mesh reduced recurrence and/or complications. We conducted a retrospective review on 57 LAVHR patients using polyester composite mesh between August 2010 and July 2013. They were divided into mesh-only (nonclosure) and primary fascial closure with mesh (closure) groups. Patient demographics, prior surgical history, mesh overlap, complications, and recurrence rates were compared. Thirty-nine (68%) of 57 patients were in the closure group and 18 (32%) in the nonclosure group. Mean defect sizes were 15.5 and 22.5 cm², respectively. Participants were followed for a mean of 1.3 years [standard deviation (SD) = 0.7]. Recurrence rates were 2/39 (5.1%) in the closure group and 1/18 (5.6%) in the nonclosure group (P = 0.947). There were no major postoperative complications in the nonclosure group. The closure group experienced four (10.3%) complications. This was not a statistically significant difference (P = 0.159). The median mesh-to-hernia ratio for all repairs was 15.2 (surface area) and 3.9 (diameter). Median length of stay was 14.5 hours (1.7-99.3 hours) for patients with nonclosure and 11.9 hours (6.9-90.3 hours) for patients with closure (P = 0.625). In conclusion, this is one of the largest series of LAVHR exclusively using polyester dual-sided mesh. Our recurrence rate was about 5 per cent. Significant mesh overlap is needed to achieve such low recurrence rates. Primary closure of hernias seems less important than adequate mesh overlap in preventing recurrence after LAVHR.
Prioritized Contact Transport Stream
Hunt, Walter Lee, Jr. (Inventor)
2015-01-01
A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.
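The prioritize-then-trim idea in this abstract can be sketched in a few lines: assign each identified contact record a priority, then keep the highest-priority records that fit an available-bandwidth budget. The record fields (`priority`, `size`) and the greedy budget policy are assumptions for illustration, not details taken from the patent.

```python
import heapq

def build_contact_stream(records, budget):
    """Keep the highest-priority contact records whose total size fits
    within the bandwidth budget; lower-priority data is dropped."""
    # heapq is a min-heap, so negate priorities to pop highest-priority first;
    # the index keeps ordering stable for equal priorities.
    heap = [(-r["priority"], i, r) for i, r in enumerate(records)]
    heapq.heapify(heap)
    stream, used = [], 0
    while heap:
        _, _, r = heapq.heappop(heap)
        if used + r["size"] <= budget:
            stream.append(r)
            used += r["size"]
    return stream

contacts = [{"id": "a", "priority": 3, "size": 5},
            {"id": "b", "priority": 1, "size": 5},
            {"id": "c", "priority": 2, "size": 5}]
stream = build_contact_stream(contacts, budget=10)  # keeps "a" then "c"
```

Re-running this selection as the budget, priorities, or contact set change over time mirrors the patent's point that the transport stream may be varied dynamically.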
U.S. Environmental Protection Agency — The StreamCat Dataset provides summaries of natural and anthropogenic landscape features for ~2.65 million streams, and their associated catchments, within the...
Adaptive Mesh Computations with the PLUTO Code for Astrophysical Fluid Dynamics
Mignone, A.; Zanni, C.
2012-07-01
We present an overview of the current version of the PLUTO code for numerical simulations of astrophysical fluid flows over block-structured adaptive grids. The code preserves its modular framework for the solution of the classical or relativistic magnetohydrodynamics (MHD) equations while exploiting the distributed infrastructure of the Chombo library for multidimensional adaptive mesh refinement (AMR) parallel computations. Equations are evolved in time using an explicit second-order, dimensionally unsplit time stepping scheme based on a cell-centered discretization of the flow variables. Efficiency and robustness are shown through multidimensional benchmarks and applications to problems of astrophysical relevance.
Bo, Yang
2010-01-01
A data stream management system (DSMS) is similar to a database management system (DBMS) but can search data directly in on-line streams. Using its mediator-wrapper approach, the extensible database system Amos II allows different kinds of distributed data resources to be queried. It has been extended with a stream datatype to query possibly infinite streams, which provides DSMS functionality. Nowadays, more and more web applications start to offer their services in JSON format, which is a te...
Connectivity editing for quad-dominant meshes
Peng, Chihan
2013-08-01
We propose a connectivity editing framework for quad-dominant meshes. In our framework, the user can edit the mesh connectivity to control the location, type, and number of irregular vertices (with more or fewer than four neighbors) and irregular faces (non-quads). We provide a theoretical analysis of the problem, discuss what edits are possible and impossible, and describe how to implement an editing framework that realizes all possible editing operations. In the results, we show example edits and illustrate the advantages and disadvantages of different strategies for quad-dominant mesh design. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
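As a concrete illustration of the irregular vertices the abstract refers to: in a closed quad mesh, a vertex is regular when it has exactly four neighbours, which for interior vertices equals the number of incident quads. The sketch below (function and variable names are assumed, not from the paper) finds irregular vertices by counting face incidences.

```python
from collections import Counter

def irregular_vertices(quad_faces):
    """Map each irregular vertex (valence != 4) to its valence.
    For a closed all-quad mesh, valence = number of incident faces."""
    valence = Counter(v for face in quad_faces for v in face)
    return {v: k for v, k in valence.items() if k != 4}

# A cube as a quad mesh: every corner touches 3 quads, so all 8 are irregular.
cube = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
        (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
irregular = irregular_vertices(cube)
```

Connectivity editing in the paper's sense then amounts to operations that move, merge, or cancel entries of this irregular set while keeping the mesh quad-dominant.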
Retrofitting Masonry Walls with Carbon Mesh
Patrick Bischof
2014-01-01
Static-cyclic shear load tests and tensile tests on retrofitted masonry walls were conducted at UAS Fribourg for an evaluation of the newly developed retrofitting system, the S&P ARMO-System. This retrofitting system consists of a composite of carbon mesh embedded in a specially adapted high-quality spray mortar. It can be applied with established construction techniques using traditional construction materials. The experimental study has shown that masonry walls reinforced by this retrofitting system reach a similar strength and a higher ductility than walls retrofitted with bonded carbon fiber reinforced polymer sheets. Hence, the retrofitting system using carbon fiber meshes embedded in a high-quality mortar constitutes a good option for static or seismic retrofits or reinforcements of masonry walls. However, the experimental studies also revealed that the mechanical anchorage of the carbon mesh may be delicate depending on its design.
NASA Lewis Meshed VSAT Workshop meeting summary
Ivancic, William
1993-11-01
NASA Lewis Research Center's Space Electronics Division (SED) hosted a workshop to address specific topics related to future meshed very small-aperture terminal (VSAT) satellite communications networks. The ideas generated by this workshop will help to identify potential markets and focus technology development within the commercial satellite communications industry and NASA. The workshop resulted in recommendations concerning these principal points of interest: the window of opportunity for a meshed VSAT system; system availability; ground terminal antenna sizes; recommended multifrequency for time division multiple access (TDMA) uplink; a packet switch design concept for narrowband; and fault tolerance design concepts. This report presents a summary of group presentations and discussion associated with the technological, economic, and operational issues of meshed VSAT architectures that utilize processing satellites.
Mesh saliency with adaptive local patches
Nouri, Anass; Charrier, Christophe; Lézoray, Olivier
2015-03-01
3D object shapes (represented by meshes) include areas that attract the visual attention of human observers as well as areas that are less attractive or not attractive at all. This visual attention depends on the degree of saliency exposed by these areas. In this paper, we propose a technique for detecting salient regions in meshes. To do so, we define a local surface descriptor based on local patches of adaptive size and filled with a local height field. The saliency of a mesh vertex is then defined as its degree measure, with edge weights computed from adaptive patch similarities. Our approach is compared to the state-of-the-art and presents competitive results. A study evaluating the influence of the parameters of this approach is also carried out. The strength and the stability of our approach with respect to noise and simplification are also studied.
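The degree-based definition above can be illustrated with a toy graph: replace the adaptive patch descriptor with a single height value per vertex, weight each edge by descriptor dissimilarity, and score each vertex by its weighted degree. This is a simplifying sketch, not the paper's method: the real descriptor is a patch-based height field, and whether similarity or dissimilarity drives the weight is a modeling choice made here so that flat regions score low.

```python
def vertex_saliency(heights, edges):
    """Weighted-degree saliency: each edge contributes the absolute height
    difference between its endpoints, so vertices that stand out from their
    neighbourhood accumulate a large score."""
    sal = {v: 0.0 for v in heights}
    for u, v in edges:
        w = abs(heights[u] - heights[v])  # toy dissimilarity weight
        sal[u] += w
        sal[v] += w
    return sal

# A flat patch with one spike at vertex 2: the spike dominates the scores.
heights = {0: 0.0, 1: 0.0, 2: 1.0, 3: 0.0}
edges = [(0, 1), (1, 3), (3, 0), (0, 2), (1, 2), (3, 2)]
sal = vertex_saliency(heights, edges)
```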
The generation of hexahedral meshes for assembly geometries: A survey
TAUTGES,TIMOTHY J.
2000-02-14
The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem.
Productivity of Stream Definitions
Endrullis, Jörg; Grabmayer, Clemens; Hendriks, Dimitri; Isihara, Ariya; Klop, Jan
2007-01-01
We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continuously in such a way that a uniquely determined stream is obtained as the limit. Whereas productivity is undecidable
Productivity of stream definitions
Endrullis, J.; Grabmayer, C.A.; Hendriks, D.; Isihara, A.; Klop, J.W.
2008-01-01
We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas prod
Sharma, Mohita; Jain, Pratiksha; Varanasi, Jhansi L; Lal, Banwari; Rodríguez, Jorge; Lema, Juan M; Sarma, Priyangshu M
2013-12-01
An anoxic biocathode was developed using a sulfate-reducing bacteria (SRB) consortium on activated carbon fabric (ACF) and the effect of stainless steel (SS) mesh as an additional current collector was investigated. Improved performance of the biocathode was observed with SS mesh, leading to a nearly fivefold increase in power density (from 4.79 to 23.11 mW/m²) and a threefold increase in current density (from 75 to 250 mA/m²). Enhanced redox currents and lower Tafel slopes observed from cyclic voltammograms of ACF with SS mesh indicated the positive role of uniform electron collecting points. Differential pulse voltammetry was employed as an additional tool to assess the redox carriers involved in the bioelectrochemical reactions. The SRB biocathode was also tested for reduction of volatile fatty acids (VFA) present in the fermentation effluent stream, and the results indicated the possibility of integrating this system with anaerobic fermentation for efficient product recovery.
Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore
2014-01-01
In recent work we have developed a new FFT-based Poisson solver, which uses regularized Green's functions to obtain arbitrary high order convergence to the unbounded Poisson equation. The high order Poisson solver has been implemented in an unbounded particle-mesh based vortex method which uses a re-meshing of the vortex particles to ensure the convergence of the method. Furthermore, we use a re-projection of the vorticity field to include the constraint of a divergence-free stream function which is essential for the underlying Helmholtz decomposition and ensures a divergence free vorticity field. The high order … with the principal axis of the strain rate tensor. We find that the dynamics of the enstrophy density is dominated by the local flow deformation and axis of rotation, which is used to infer some concrete tendencies related to the topology of the vorticity field.
The Refined Function-Behaviour-Structure Framework
Diertens, B.
2013-01-01
We refine the function-behaviour-structure framework for design introduced by John Gero in order to deal with complexity. We do this by connecting the frameworks for the design of two models, one the refinement of the other. The result is a refined framework for the design of an object on two levels
Open preperitoneal groin hernia repair with mesh
Andresen, Kristoffer; Rosenberg, Jacob
2017-01-01
Background: For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. Data sources: A systematic review was conducted and reported according to the PRISMA statement. PubMed, Cochrane library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days follow up following repair of an inguinal hernia with an open preperitoneal mesh…
Laparoscopic rectocele repair using polyglactin mesh.
Lyons, T L; Winer, W K
1997-05-01
We assessed the efficacy of laparoscopic treatment of rectocele defect using a polyglactin mesh graft. From May 1, 1995, through September 30, 1995, we prospectively evaluated 20 women (age 38-74 yrs) undergoing pelvic floor reconstruction for symptomatic pelvic floor prolapse, with or without hysterectomy. Morbidity of the procedure was extremely low compared with standard transvaginal and transrectal approaches. Patients were followed at 3-month intervals for 1 year. Sixteen had resolution of symptoms. Laparoscopic application of polyglactin mesh for the repair of the rectocele defect is a viable option, although long-term follow-up is necessary.
A new class of accurate, mesh-free hydrodynamic simulation methods
Hopkins, Philip F.
2015-06-01
We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO: this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.
Grain Refinement of Deoxidized Copper
Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun
2016-10-01
This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor ( Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and therefore are no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper and in particular DHP-Cu is Ag.
Inflation in a refined racetrack
Wen, Wen-Yu
2007-01-01
In this note, we refine the racetrack inflation model constructed in arXiv:hep-th/0406230 by including the open string modulus. This modulus encodes the embedding of our braneworld inside some Calabi-Yau throat. We argue that in general this open string modulus dynamically runs with the inflaton field thanks to its nonlinear coupling. A full analysis becomes difficult because the scalar potential changes progressively during the inflation epoch. Nevertheless, by explicit construction we are still able to build a realistic model through appropriate choices of the initial conditions.
An Efficient Framework for Generating Storyline Visualizations from Streaming Data.
Tanahashi, Yuzuru; Hsueh, Chien-Hsin; Ma, Kwan-Liu
2015-06-01
This paper presents a novel framework for applying storyline visualizations to streaming data. The framework includes three components: a new data management scheme for processing and storing the incoming data, a layout construction algorithm specifically designed for incrementally generating storylines from streaming data, and a layout refinement algorithm for improving the legibility of the visualization. By dividing the layout computation into two separate components, one for constructing and another for refining, our framework effectively provides users with the ability to follow and reason about dynamic data. The evaluation studies of our storyline visualization framework demonstrate its efficacy in presenting streaming data as well as its superior performance over existing methods in terms of both computational efficiency and visual clarity.
Grigory I. Shishkin
2008-01-01
A boundary value problem is considered for a singularly perturbed parabolic convection-diffusion equation; we construct a finite difference scheme on a priori (sequentially) adapted meshes and study its convergence. The scheme on a priori adapted meshes is constructed using a majorant function for the singular component of the discrete solution, which allows us to find a priori a subdomain where the computed solution requires a further improvement. This subdomain is defined by the perturbation parameter ε, the step-size of a uniform mesh in x, and also by the required accuracy of the discrete solution and the prescribed number of refinement iterations K for improving the solution. To solve the discrete problems aimed at the improvement of the solution, we use uniform meshes on the subdomains. The error of the numerical solution depends weakly on the parameter ε. The scheme converges almost ε-uniformly; precisely, under the condition N^{-1} = o(ε^ν), where N denotes the number of nodes in the spatial mesh, and the value ν = ν(K) can be chosen arbitrarily small for suitable K.
MeshEZW: an image coder using mesh and finite elements
Landais, Thomas; Bonnaud, Laurent; Chassery, Jean-Marc
2003-08-01
In this paper, we present a new method to compress the information in an image, called MeshEZW. The proposed approach is based on the finite elements method, a mesh construction, and a zerotree method. The zerotree method is an adaptation of the EZW algorithm with two new symbols for increasing the performance. These steps allow a progressive representation of the image by the automatic construction of a bitstream. The mesh structure is adapted to the image compression domain and is defined to allow video compression. The coder is described and some preliminary results are discussed.
Hilley, David; Ramachandran, Umakishore
Distributed continuous live stream analysis applications are increasingly common. Video-based surveillance, emergency response, disaster recovery, and critical infrastructure protection are all examples of such applications. They are characterized by a variety of high- and low-bandwidth streams as well as a need for analyzing both live and archived streams. We present a system called Persistent Temporal Streams (PTS) that supports a higher-level, domain-targeted programming abstraction for such applications. PTS provides a simple but expressive stream abstraction encompassing transport, manipulation and storage of streaming data. In this paper, we present a system architecture for implementing PTS. We provide an experimental evaluation which shows the system-level primitives can be implemented in a lightweight and high-performance manner, and an application-based evaluation designed to show that a representative high-bandwidth stream analysis application can be implemented relatively simply and with good performance.
Fournier, D.; Le Tellier, R.; Suteau, C., E-mail: damien.fournier@cea.fr, E-mail: romain.le-tellier@cea.fr, E-mail: christophe.suteau@cea.fr [CEA, DEN, DER/SPRC/LEPh, Cadarache, Saint Paul-lez-Durance (France); Herbin, R., E-mail: raphaele.herbin@cmi.univ-mrs.fr [Laboratoire d' Analyse et de Topologie de Marseille, Centre de Math´ematiques et Informatique (CMI), Universit´e de Provence, Marseille Cedex (France)
2011-07-01
The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinate method. A set of spatial coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time and memory consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is important in a given region, the mesh has to be refined (h−refinement) or the polynomial basis order increased (p−refinement). This paper is related to the choice between the two types of refinement. Two ways to estimate the error are compared on different benchmarks. Analyzing the differences, a hp−refinement method is proposed and tested. (author)
Rotary impeller refinement of 7075Al alloy
WANG Liping; GUO Erjun; HUANG Yongchang; LU Bin
2009-01-01
The effects of four parameters, gas flow, rotational speed, refining time, and stewing time, on the rotary impeller refinement of 7075 Al were studied. The effects of C2Cl6 refining, rotary impeller refining, and composite refining of 7075 Al alloy were compared with each other. The results showed that the greatest impact parameter of rotary impeller refinement was rotational speed, followed by gas flow, refining time, and stewing time. The optimum purification parameters obtained by orthogonal analysis were as follows: rotor speed of 400 r/min, inert gas flow of 0.4 mL/h, refining time of 15 min, and stewing time of 6 min. The best degassing effect can be obtained by the composite refining of C2Cl6 and rotary impeller. The degassing rate of C2Cl6, rotary impeller, and composite refining was 34.5%, 69.2%, and 78%, respectively. The mechanical properties of the specimen refined by rotary impeller were higher than those by C2Cl6 refining, but lower than those by composite refining.
Zone refining of plutonium metal
NONE
1997-05-01
The purpose of this study was to investigate zone refining techniques for the purification of plutonium metal. The redistribution of 10 impurity elements from zone melting was examined. Four tantalum boats were loaded with plutonium impurity alloy, placed in a vacuum furnace, heated to 700 °C, and held at temperature for one hour. Ten passes were made with each boat. Metallographic and chemical analyses performed on the plutonium rods showed that, after 10 passes, moderate movement of certain elements was achieved. Molten zone speeds of 1 or 2 inches per hour had no effect on impurity element movement. Likewise, the application of constant or variable power had no effect on impurity movement. The study implies that development of a zone refining process to purify plutonium is feasible. Development of a process will be hampered by two factors: (1) the effect on impurity element redistribution of the oxide layer formed on the exposed surface of the material is not understood, and (2) the tantalum container material is not inert in the presence of plutonium. Cold boat studies are planned, with higher temperature and vacuum levels, to determine the effect of these factors. 5 refs., 1 tab., 5 figs.
Constrained and joint inversion on unstructured meshes
Doetsch, J.; Jordi, C.; Rieckh, V.; Guenther, T.; Schmelzbach, C.
2015-12-01
Unstructured meshes allow for inclusion of arbitrary surface topography, complex acquisition geometry and undulating geological interfaces in the inversion of geophysical data. This flexibility opens new opportunities for coupling different geophysical and hydrological data sets in constrained and joint inversions. For example, incorporating geological interfaces that have been derived from high-resolution geophysical data (e.g., ground penetrating radar) can add geological constraints to inversions of electrical resistivity data. These constraints can be critical for a hydrogeological interpretation of the inversion results. For time-lapse inversions of geophysical data, constraints can be derived from hydrological point measurements in boreholes, but it is difficult to include these hard constraints in the inversion of electrical resistivity monitoring data. Especially mesh density and the regularization footprint around the hydrological point measurements are important for an improved inversion compared to the unconstrained case. With the help of synthetic and field examples, we analyze how regularization and coupling operators should be chosen for time-lapse inversions constrained by point measurements and for joint inversions of geophysical data in order to take full advantage of the flexibility of unstructured meshes. For the case of constraining to point measurements, it is important to choose a regularization operator that extends beyond the neighboring cells and the uncertainty in the point measurements needs to be accounted for. For joint inversion, the choice of the regularization depends on the expected subsurface heterogeneity and the cell size of the parameter mesh.
Hash functions and triangular mesh reconstruction
Hrádek, Jan; Kuchař, Martin; Skala, Václav
2003-07-01
Some applications use data formats (e.g. the STL file format) where a set of triangles represents the surface of a 3D object and it is necessary to reconstruct the triangular mesh with adjacency information. It is a lengthy process for large data sets, as the time complexity of this process is O(N log N), where N is the number of triangles. Triangular mesh reconstruction is a general problem and relevant algorithms can be used in GIS and DTM systems as well as in CAD/CAM systems. Many algorithms rely on space subdivision techniques, while hash functions offer a more effective solution to the reconstruction problem. Hash data structures are widely used throughout the field of computer science. The hash table can be used to speed up the process of triangular mesh reconstruction, but the speed strongly depends on hash function properties. Nevertheless, the design or selection of the hash function for data sets with unknown properties is a serious problem. This paper describes a new hash function, presents the properties obtained for large data sets, and discusses the validity of the reconstructed surface. Experimental results proved theoretical considerations and the advantages of hash function use for mesh reconstruction.
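The hash-table approach the abstract describes can be sketched as follows: key a dictionary on the sorted endpoints of each edge, so the triangles sharing any edge collide in the same bucket and adjacency falls out in expected O(N) time rather than the O(N log N) of sorting-based schemes. The function and variable names below are illustrative, not from the paper.

```python
def triangle_adjacency(triangles):
    """Reconstruct edge-adjacency for a triangle soup using a dictionary
    (hash table) keyed on sorted edge endpoints: triangles sharing an edge
    land in the same bucket, giving expected O(N) reconstruction."""
    edge_to_tris = {}
    for ti, (a, b, c) in enumerate(triangles):
        for edge in ((a, b), (b, c), (c, a)):
            edge_to_tris.setdefault(tuple(sorted(edge)), []).append(ti)
    # neighbours[ti] = set of triangles sharing an edge with triangle ti
    neighbours = {ti: set() for ti in range(len(triangles))}
    for tris in edge_to_tris.values():
        for t in tris:
            neighbours[t].update(x for x in tris if x != t)
    return neighbours

# Two triangles sharing edge (1, 2) are mutual neighbours.
adj = triangle_adjacency([(0, 1, 2), (1, 2, 3)])
```

The paper's contribution concerns the quality of the hash function itself; Python's built-in tuple hashing stands in for it here.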
Particle Collection Efficiency for Nylon Mesh Screens.
Cena, Lorenzo G; Ku, Bon-Ki; Peters, Thomas M
Mesh screens composed of nylon fibers leave minimal residual ash and produce no significant spectral interference when ashed for spectrometric examination. These characteristics make nylon mesh screens attractive as a collection substrate for nanoparticles. A theoretical single-fiber efficiency expression developed for wire-mesh screens was evaluated for estimating the collection efficiency of submicrometer particles for nylon mesh screens. Pressure drop across the screens, the effect of particle morphology (spherical and highly fractal) on collection efficiency and single-fiber efficiency were evaluated experimentally for three pore sizes (60, 100 and 180 μm) at three flow rates (2.5, 4 and 6 Lpm). The pressure drop across the screens was found to increase linearly with superficial velocity. The collection efficiency of the screens was found to vary by less than 4% regardless of particle morphology. Single-fiber efficiency calculated from experimental data was in good agreement with that estimated from theory for particles between 40 and 150 nm but deviated from theory for particles outside this size range. New coefficients for the single-fiber efficiency model were identified that minimized the sum of square error (SSE) between the values estimated with the model and those determined experimentally. Compared to the original theory, the SSE calculated using the modified theory was at least one order of magnitude lower for all screens and flow rates with the exception of the 60-μm pore screens at 2.5 Lpm, where the decrease was threefold.
Functionalized Nanofiber Meshes Enhance Immunosorbent Assays.
Hersey, Joseph S; Meller, Amit; Grinstaff, Mark W
2015-12-01
Three-dimensional substrates with high surface-to-volume ratios and subsequently large protein binding capacities are of interest for advanced immunosorbent assays utilizing integrated microfluidics and nanosensing elements. A library of bioactive and antifouling electrospun nanofiber substrates, which are composed of high-molecular-weight poly(oxanorbornene) derivatives, is described. Specifically, a set of copolymers are synthesized from three 7-oxanorbornene monomers to create a set of water insoluble copolymers with both biotin (bioactive) and triethylene glycol (TEG) (antifouling) functionality. Porous three-dimensional nanofiber meshes are electrospun from these copolymers with the ability to specifically bind streptavidin while minimizing the nonspecific binding of other proteins. Fluorescently labeled streptavidin is used to quantify the streptavidin binding capacity of each mesh type through confocal microscopy. A simplified enzyme-linked immunosorbent assay (ELISA) is presented to assess the protein binding capabilities and detection limits of these nanofiber meshes under both static conditions (26 h) and flow conditions (1 h) for a model target protein (i.e., mouse IgG) using a horseradish peroxidase (HRP) colorimetric assay. Bioactive and antifouling nanofiber meshes outperform traditional streptavidin-coated polystyrene plates under flow, validating their use in future advanced immunosorbent assays and their compatibility with microfluidic-based biosensors.
Mesh Optimization for Ground Vehicle Aerodynamics
Adrian Gaylard
2010-04-01
Full Text Available
A mesh optimization strategy for accurately estimating the drag of a ground vehicle is proposed, based on examining the effect of different mesh parameters. The optimized mesh parameters were selected using the design of experiments (DOE) method so as to work within a limited memory environment and in a reasonable amount of time, without compromising the accuracy of the results. The study was further extended to take into account the effect of car model size. Three car model sizes were investigated and compared with MIRA scale wind tunnel results. Parameters that lead to a drag value closer to experiment with less memory and computational time have been identified. Scaling the optimized mesh size with the length of the car model successfully predicted the drag of the other car sizes with reasonable accuracy. This investigation was carried out using the STARCCM+ commercial software package; however, the findings can be applied to any other CFD package.
Drag reduction properties of superhydrophobic mesh pipes
Geraldi, Nicasio R.; Dodd, Linzi E.; Xu, Ben B.; Wells, Gary G.; Wood, David; Newton, Michael I.; McHale, Glen
2017-09-01
Even with the recent extensive study of superhydrophobic surfaces, the fabrication of such surfaces on the inside walls of a pipe remains challenging. In this work we report a convenient bi-layered pipe design using a thin superhydrophobic metallic mesh formed into a tube, supported inside another pipe. A flow system was constructed to test the fabricated bi-layer pipeline, which allowed different constant flow rates of water to be passed through the pipe whilst the differential pressure was measured, from which the drag coefficient (ƒ) and Reynolds number (Re) were calculated. Expected values of ƒ were found for smooth glass pipes over the range Re = 750-10 000, covering both the laminar and part of the turbulent regimes. Flow through plain meshes without the superhydrophobic coating was also measured over a similar Reynolds number range. With the superhydrophobic coating, ƒ was reduced for Re above 4000, demonstrating that a superhydrophobic mesh can support a plastron and provide a drag reduction compared with a plain mesh; however, the plastron is progressively destroyed with use, in particular at higher flow rates.
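The drag coefficient and Reynolds number reported here follow from standard pipe-flow relations. A minimal sketch, assuming the Darcy definition of the friction factor and nominal water properties (the paper's exact conventions may differ):

```python
def friction_factor(dp, diameter, length, velocity, rho=998.0):
    """Darcy friction factor from a measured pressure drop dp over a
    pipe section of the given length: f = dp * (D/L) / (rho * v^2 / 2)."""
    return dp * (diameter / length) / (0.5 * rho * velocity ** 2)

def reynolds(velocity, diameter, rho=998.0, mu=1.0e-3):
    """Reynolds number Re = rho * v * D / mu."""
    return rho * velocity * diameter / mu
```

As a sanity check, a laminar Hagen-Poiseuille pressure drop recovers the textbook result f = 64/Re.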
Performance Evaluation of Coded Meshed Networks
Krigslund, Jeppe; Hansen, Jonas; Pedersen, Morten Videbæk;
2013-01-01
of the former to enhance the gains of the latter. We first motivate our work through measurements in WiFi mesh networks. Later, we compare state-of-the-art approaches, e.g., COPE, RLNC, to CORE. Our measurements show the higher reliability and throughput of CORE over other schemes, especially, for asymmetric...
Mesh Currents and Josephson Junction Arrays
1995-01-01
A simple but accurate mesh current analysis is performed on an XY model and on a SIMF model to derive the equations for a Josephson junction array. The equations obtained here turn out to be different from other equations already existing in the literature. Moreover, it is shown that the two models arise from a unique hidden structure.
The many streams of the Magellanic Stream
Stanimirovic, Snezana; Heiles, Carl; Douglas, Kevin A; Putman, Mary; Peek, Joshua E G
2008-01-01
We present results from neutral hydrogen (HI) observations of the tip of the Magellanic Stream (MS), obtained with the Arecibo telescope as a part of the on-going survey by the Consortium for Galactic studies with the Arecibo L-band Feed Array. We find four large-scale, coherent HI streams, extending continuously over a length of 20 degrees, each stream possessing different morphology and velocity gradients. The newly discovered streams provide strong support for the tidal model of the MS formation by Connors et al. (2006), which suggested a spatial and kinematic bifurcation of the MS. The observed morphology and kinematics suggest that three of these streams could be interpreted as a 3-way splitting of the main MS filament, while the fourth stream appears much younger and may have originated from the Magellanic Bridge. We find an extensive population of HI clouds at the tip of the MS. Two-thirds of the clouds have an angular size in the range 3.5'--10'. We interpret this as being due to thermal instability, which...
Mikhaylov, Rebecca; Dawson, Douglas; Kwack, Eug
2014-01-01
NASA's Earth observing Soil Moisture Active & Passive (SMAP) Mission is scheduled to launch in November 2014 into a 685 km near-polar, sun synchronous orbit. SMAP will provide comprehensive global mapping measurements of soil moisture and freeze/thaw state in order to enhance understanding of the processes that link the water, energy, and carbon cycles. The primary objectives of SMAP are to improve worldwide weather and flood forecasting, enhance climate prediction, and refine drought and agriculture monitoring during its 3 year mission. The SMAP instrument architecture incorporates an L-band radar and an L-band radiometer which share a common feed horn and parabolic mesh reflector. The instrument rotates about the nadir axis at approximately 15 rpm, thereby providing a conically scanning wide swath antenna beam that is capable of achieving global coverage within 3 days. In order to make the necessary precise surface emission measurements from space, a temperature knowledge of 60 deg C for the mesh reflector is required. In order to show compliance, a thermal vacuum test was conducted using a portable solar simulator to illuminate a non flight, but flight-like test article through the quartz window of the vacuum chamber. The molybdenum wire of the antenna mesh is too fine to accommodate thermal sensors for direct temperature measurements. Instead, the mesh temperature was inferred from resistance measurements made during the test. The test article was rotated to five separate angles between 10 deg and 90 deg via chamber breaks to simulate the maximum expected on-orbit solar loading during the mission. The resistance measurements were converted to temperature via a resistance versus temperature calibration plot that was constructed from data collected in a separate calibration test. A simple thermal model of two different representations of the mesh (plate and torus) was created to correlate the mesh temperature predictions to within 60 deg C. The on-orbit mesh
Oxidation and degradation of polypropylene transvaginal mesh.
Talley, Anne D; Rogers, Bridget R; Iakovlev, Vladimir; Dunn, Russell F; Guelcher, Scott A
2017-04-01
Polypropylene (PP) transvaginal mesh (TVM) repair for stress urinary incontinence (SUI) has shown promising short-term objective cure rates. However, life-altering complications have been associated with the placement of PP mesh for SUI repair. PP degradation as a result of the foreign body reaction (FBR) has been proposed as a contributing factor to mesh complications. We hypothesized that PP oxidizes under in vitro conditions simulating the FBR, resulting in degradation of the PP. Three PP mid-urethral slings from two commercial manufacturers were evaluated. Test specimens (n = 6) were incubated in oxidative medium for up to 5 weeks. Oxidation was assessed by Fourier Transform Infrared Spectroscopy (FTIR), and degradation was evaluated by scanning electron microscopy (SEM). FTIR spectra of the slings revealed evidence of carbonyl and hydroxyl peaks after 5 weeks of incubation time, providing evidence of oxidation of PP. SEM images at 5 weeks showed evidence of surface degradation, including pitting and flaking. Thus, oxidation and degradation of PP pelvic mesh were evidenced by chemical and physical changes under simulated in vivo conditions. To assess changes in PP surface chemistry in vivo, fibers were recovered from PP mesh explanted from a single patient without formalin fixation, untreated (n = 5) or scraped (n = 5) to remove tissue, and analyzed by X-ray photoelectron spectroscopy. Mechanical scraping removed adherent tissue, revealing an underlying layer of oxidized PP. These findings underscore the need for further research into the relative contribution of oxidative degradation to complications associated with PP-based TVM devices in larger cohorts of patients.
Wireless Mesh Network Routing Under Uncertain Demands
Wellons, Jonathan; Dai, Liang; Chang, Bin; Xue, Yuan
Traffic routing plays a critical role in determining the performance of a wireless mesh network. Recent research results usually fall into two ends of a spectrum. On one end are the heuristic routing algorithms, which are highly adaptive to the dynamic environments of wireless networks yet lack analytical properties describing how well the network performs globally. On the other end are the optimal routing algorithms derived from optimization problem formulations of mesh network routing. These can usually claim analytical properties such as resource-use optimality and throughput fairness. However, traffic demand is usually implicitly assumed to be static and known a priori in these problem formulations. In contrast, recent studies of wireless network traces show that traffic demand, even when aggregated at access points, is highly dynamic and hard to estimate. Thus, to apply an optimization-based routing solution in practice, one must take into account the dynamic and uncertain nature of wireless traffic demand. There are two basic approaches to addressing traffic uncertainty in optimal mesh network routing: (1) predictive routing, which infers the most likely traffic demand from its history and optimizes the routing strategy for the predicted demand, and (2) oblivious routing, which considers all possible traffic demands and selects the routing strategy whose worst-case network performance is optimized. This chapter provides an overview of optimal routing strategies for wireless mesh networks with a focus on the above two strategies, which explicitly consider traffic uncertainty. It also identifies the key factors that affect the performance of each routing strategy and provides guidelines for strategy selection in mesh network routing under uncertain traffic demands.
The mesh-matching algorithm: an automatic 3D mesh generator for Finite element structures
Couteau, B; Lavallee, S; Payan, Yohan; Lavallee, St\\'{e}phane
2000-01-01
Several authors have employed Finite Element Analysis (FEA) for stress and strain analysis in orthopaedic biomechanics. Unfortunately, the use of three-dimensional models is time consuming and consequently the number of analyses that can be performed is limited. The authors have investigated a new method that allows automatic 3D mesh generation for structures as complex as bone. This method, called the Mesh-Matching (M-M) algorithm, automatically generates customized 3D meshes of bones from an already existing model. The M-M algorithm has been used to generate FE models of ten proximal human femora from an initial one which had been experimentally validated. The new meshes demonstrated satisfactory results.
Randomized clinical trial of self-gripping mesh versus sutured mesh for Lichtenstein hernia repair
Jorgensen, L N; Sommer, T; Assaadzadeh, S;
2012-01-01
BACKGROUND: Many patients develop discomfort after open repair of a groin hernia. It was hypothesized that suture fixation of the mesh is a cause of these symptoms. METHODS: This patient- and assessor-blinded randomized multicentre clinical trial compared a self-gripping mesh (Parietene Progrip(®)) and sutured mesh for open primary repair of uncomplicated inguinal hernia by the Lichtenstein technique. Patients were assessed before surgery, on the day of operation, and at 1 and 12 months after surgery. The primary endpoint was moderate or severe symptoms after 12 months, including a combination... RESULTS: ...between the groups in postoperative complications (33·7 versus 40·4 per cent; P = 0·215), rate of recurrent hernia within 1 year (1·2 per cent in both groups) or quality of life. CONCLUSION: The avoidance of suture fixation using a self-gripping mesh was not accompanied by a reduction in chronic symptoms...
CUBIT mesh generation environment. Volume 1: Users manual
Blacker, T.D.; Bohnhoff, W.J.; Edwards, T.L. [and others
1994-05-01
The CUBIT mesh generation environment is a two- and three-dimensional finite element mesh generation tool which is being developed to pursue the goal of robust and unattended mesh generation--effectively automating the generation of quadrilateral and hexahedral elements. It is a solid-modeler based preprocessor that meshes volume and surface solid models for finite element analysis. A combination of techniques including paving, mapping, sweeping, and various other algorithms being developed are available for discretizing the geometry into a finite element mesh. CUBIT also features boundary layer meshing specifically designed for fluid flow problems. Boundary conditions can be applied to the mesh through the geometry and appropriate files for analysis generated. CUBIT is specifically designed to reduce the time required to create all-quadrilateral and all-hexahedral meshes. This manual is designed to serve as a reference and guide to creating finite element models in the CUBIT environment.
On combining Laplacian and optimization-based mesh smoothing techniques
Freitag, L.A.
1997-07-01
Local mesh smoothing algorithms have been shown to be effective in repairing distorted elements in automatically generated meshes. The simplest such algorithm is Laplacian smoothing, which moves grid points to the geometric center of incident vertices. Unfortunately, this method operates heuristically and can create invalid meshes or elements of worse quality than those contained in the original mesh. In contrast, optimization-based methods are designed to maximize some measure of mesh quality and are very effective at eliminating extremal angles in the mesh. These improvements come at a higher computational cost, however. In this article the author proposes three smoothing techniques that combine a smart variant of Laplacian smoothing with an optimization-based approach. Several numerical experiments are performed that compare the mesh quality and computational cost for each of the methods in two and three dimensions. The author finds that the combined approaches are very cost effective and yield high-quality meshes.
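The "smart" Laplacian variant the abstract combines with optimization can be illustrated concretely: relocate a vertex to the centroid of its neighbours, but reject the move when the worst incident-element quality deteriorates. The 2D triangle sketch below uses an assumed quality measure and is not Freitag's implementation.

```python
import numpy as np

def tri_quality(p0, p1, p2):
    """Normalized triangle quality in (0, 1]: 1 for equilateral,
    approaching 0 for degenerate (q = 4*sqrt(3)*area / sum of squared edges)."""
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    edges2 = (np.dot(p1 - p0, p1 - p0) + np.dot(p2 - p1, p2 - p1)
              + np.dot(p0 - p2, p0 - p2))
    return 4.0 * np.sqrt(3.0) * area / edges2

def smart_laplacian(points, tris, v):
    """Move vertex v to the centroid of its adjacent vertices, keeping the
    move only if the worst quality of its incident triangles does not get
    worse; otherwise the move is rejected and the mesh is unchanged."""
    incident = [t for t in tris if v in t]
    ring = sorted({i for t in incident for i in t if i != v})
    worst = lambda: min(tri_quality(*(points[i] for i in t)) for t in incident)
    old, before = points[v].copy(), worst()
    points[v] = np.mean([points[i] for i in ring], axis=0)
    if worst() < before:              # smart variant: reject worsening moves
        points[v] = old
        return False
    return True
```

Plain Laplacian smoothing would apply the centroid move unconditionally, which is exactly how it can produce inverted or lower-quality elements.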
INCISIONAL HERNIA - ONLAY VS SUBLAY MESH HERNIOPLASTY
Ravi Kamal Kumar; Chandrakumar; Vijayalaxmi; Thokala; Venkat Ramana
2015-01-01
BACKGROUND: Incisional hernia is a common surgical problem. Anatomical repair of hernia is now out of vogue. Polypropylene mesh repair has now become accepted. In open mesh repair of incisional hernia cases the site of placement of the mesh is still debated. Some surgeons favour the onlay repair and others use the sublay or retro-rectus plane for deployment of the mesh. AIM: The aim of the study is to examine the pros and cons of both techniques and find the bett...
Explicit inverse distance weighting mesh motion for coupled problems
Witteveen, J.A.S.; Bijl, H.
2009-01-01
An explicit mesh motion algorithm based on inverse distance weighting interpolation is presented. The explicit formulation leads to a fast mesh motion algorithm and an easy implementation. In addition, the proposed point-by-point method is robust and flexible in case of large deformations, hanging nodes, and parallelization. Mesh quality results and CPU time comparisons are presented for triangular and hexahedral unstructured meshes in an airfoil flutter fluid-structure interaction problem.
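The explicit, point-by-point nature of the method means each interior node can be moved independently: its displacement is an inverse-distance-weighted average of the prescribed boundary displacements, with no system of equations to solve. A minimal sketch under assumed conventions (weight exponent 3, no blending function):

```python
import numpy as np

def idw_mesh_motion(interior, boundary, boundary_disp, power=3.0):
    """Explicit inverse-distance-weighted mesh motion: each interior node is
    displaced by a weighted average of the prescribed boundary displacements,
    with weights 1/r**power. Point-by-point, hence trivially parallelizable."""
    moved = np.empty_like(interior)
    for k, x in enumerate(interior):
        r = np.linalg.norm(boundary - x, axis=1)
        w = 1.0 / np.maximum(r, 1e-12) ** power   # guard against r == 0
        moved[k] = x + (w[:, None] * boundary_disp).sum(axis=0) / w.sum()
    return moved
```

A rigid translation of the boundary is reproduced exactly, since the weighted average of identical displacement vectors is that vector; this is one reason the scheme is robust for large deformations.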
Kohler, Susanna
2016-01-01
Dwarf galaxies or globular clusters orbiting the Milky Way can be pulled apart by tidal forces, leaving behind a trail of stars known as a stellar stream. One such trail, the Ophiuchus stream, has posed a serious dynamical puzzle since its discovery. But a recent study has identified four stars that might help resolve this stream's mystery. Conflicting Timescales: The stellar stream Ophiuchus was discovered around our galaxy in 2014. Based on its length, which appears to be 1.6 kpc, we can calculate the time that has passed since its progenitor was disrupted and the stream was created: ~250 Myr. But the stars within it are ~12 Gyr old, and the stream orbits the galaxy with a period of ~350 Myr. Given these numbers, we can assume that Ophiuchus's progenitor completed many orbits of the Milky Way in its lifetime. So why would it only have been disrupted 250 million years ago? Fanning Stream: Led by Branimir Sesar (Max Planck Institute for Astronomy), a team of scientists has proposed an idea that might help solve this puzzle. If the Ophiuchus stellar stream is on a chaotic orbit (common in triaxial potentials, which the Milky Way's may be), then the stream ends can fan out, with stars spreading in position and velocity. The fanned part of the stream, however, would be difficult to detect because of its low surface brightness. As a result, the Ophiuchus stellar stream could actually be longer than originally measured, implying that it was disrupted longer ago than was believed. Search for Fan Stars: To test this idea, Sesar and collaborators performed a search around the ends of the stream, looking for stars that are of the right type to match the stream, are at the predicted distance of the stream, are located near the stream ends, and have velocities that match the stream and don't match the background halo stars. Histogram of the heliocentric velocities of the 43 target stars. Six stars have velocities matching the stream velocity. Two of these are located in the main stream; the other
Modified Streaming Format for Direct Access Triangular Data Structures
Khaled Abid
2014-01-01
Full Text Available In this paper we define an extension of an out-of-core data structure, the streaming format, adding new information that reduces the file-access cost and brings the neighborhood-access delay down to constant time. The original streaming format was conceived to manipulate huge triangular meshes. It assumes that the whole mesh cannot be loaded entirely into main memory, which is why its authors did not include the neighborhood in the file structure. However, almost all applications need the neighborhood information in triangular structures, and the original streaming format does not allow that information to be extracted easily. By adding the neighbor indices to the file in the same way as the original format, we can benefit from the streaming format and, at the same time, guarantee constant-time access to the neighborhood. We have adapted our new structure so that our direct-access algorithm can be applied to different parts of the structure without having to go through the entire file.
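The key change, storing each triangle's three edge-neighbour indices alongside its vertex indices so that adjacency becomes a constant-time seek, can be sketched with a hypothetical fixed-size binary record (the actual streaming format's layout and header are more involved):

```python
import struct

# Hypothetical record: 3 vertex indices + 3 edge-adjacent triangle indices
# (-1 where there is no neighbour). A fixed record size turns "find the
# neighbours of triangle i" into a single O(1) seek at i * RECORD.size.
RECORD = struct.Struct("<3i3i")

def write_tri(f, verts, neighbors):
    f.write(RECORD.pack(*verts, *neighbors))

def read_tri(f, tri_index):
    f.seek(tri_index * RECORD.size)          # constant-time direct access
    data = RECORD.unpack(f.read(RECORD.size))
    return data[:3], data[3:]
```

Because every record has the same size, no index table is needed: the triangle number alone determines the file offset, which is what makes neighbourhood access constant-time without loading the whole mesh.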
SILICON REFINING BY VACUUM TREATMENT
André Alexandrino Lotto
2014-12-01
Full Text Available This work investigates phosphorus removal by vacuum treatment from metallurgical grade silicon (MGSi) (98.5% to 99% Si). Melting experiments were carried out in a vacuum induction furnace, varying parameters such as temperature, time, and the ratio of the area exposed to vacuum to the volume of molten silicon. The results of chemical analysis were obtained by inductively coupled plasma (ICP) and evaluated based on thermodynamic and kinetic aspects of the vaporization reaction of phosphorus in silicon. The phosphorus content was decreased from 33 to approximately 1.5 ppm after three hours of vacuum treatment. It is concluded that the evaporation step is the controlling step of the process for the temperature, pressure and agitation parameters used, and that refining by this process is technically feasible.
A testing procedure for the evaluation of directional mesh bias
Slobbe, A.T.; Hendriks, M.A.N.; Rots, J.G.
2013-01-01
This paper presents a dedicated numerical test that makes it possible to assess the directional mesh bias of constitutive models in a systematic way. The test makes use of periodic boundary conditions, by which strain localization can be analyzed for different mesh alignments with preservation of mesh uniformity.
Multiphase flow of immiscible fluids on unstructured moving meshes
Misztal, Marek Krzysztof; Erleben, Kenny; Bargteil, Adam
2012-01-01
In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization op...
21 CFR 870.3650 - Pacemaker polymeric mesh bag.
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Pacemaker polymeric mesh bag. 870.3650 Section 870...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Prosthetic Devices § 870.3650 Pacemaker polymeric mesh bag. (a) Identification. A pacemaker polymeric mesh bag is an implanted device used to hold a...
Mesh sensitivity study and optimization of fixed support for ITER torus and cryostat cryoline
Badgujar, S.; Vaghela, H.; Shah, N.; Bhattacharya, R.; Sarkar, B.
2010-02-01
The torus and cryostat cryoline of the ITER cryodistribution system has been designed as per the process specifications. The cryoline is an ensemble of six process pipes, thermal shield, fixed and sliding supports, and an outer jacket. The fixed support (FS), which also acts as the anchor for the bellows, is one of the most important parts of the cryoline. The FS has to withstand the static weight of the pipes as well as the spring and thrust forces arising from the bellows. The FS design has been optimized for thermal, structural and combined loads, with thermal optimization criteria of less than 8 Watt at 100 K and less than 1.5 Watt at 4.5 K. ANSYS 10.0 has been used for the analysis and CATIA V5 R16 has been used for the modelling as well as geometry optimization. In order to bring the Von-Mises stress within the acceptable limit of 115 MPa, a detailed mesh sensitivity study has been carried out along with design optimization. The iterative process of mesh refinement continued until stress convergence was achieved. The stress analysis has been carried out for the optimized mesh size. The paper will present the design methodology, construction details and the results of the analysis.
To mesh or not to mesh: a review of pelvic organ reconstructive surgery
Dällenbach P
2015-04-01
Full Text Available Patrick Dällenbach Department of Gynecology and Obstetrics, Division of Gynecology, Urogynecology Unit, Geneva University Hospitals, Geneva, Switzerland Abstract: Pelvic organ prolapse (POP) is a major health issue with a lifetime risk of undergoing at least one surgical intervention estimated at close to 10%. In the 1990s, the risk of reoperation after primary standard vaginal procedure was estimated to be as high as 30% to 50%. In order to reduce the risk of relapse, gynecological surgeons started to use mesh implants in pelvic organ reconstructive surgery, with the emergence of new complications. Recent studies have nevertheless shown that the risk of POP recurrence requiring reoperation is lower than previously estimated, being closer to 10% rather than 30%. The development of mesh surgery – actively promoted by the marketing industry – was tremendous during the past decade, and preceded any studies supporting its benefit for our patients. Randomized trials comparing the use of mesh to native tissue repair in POP surgery have now shown better anatomical but similar functional outcomes, and meshes are associated with more complications, in particular for transvaginal mesh implants. POP is not a life-threatening condition, but a functional problem that impairs quality of life for women. The old adage "primum non nocere" is particularly appropriate when dealing with this condition, which requires no treatment when asymptomatic. It is currently admitted that a certain degree of POP is physiological with aging when situated above the landmark of the hymen. Treatment should be individualized and the use of mesh needs to be selective and appropriate. Mesh implants are probably an important tool in pelvic reconstructive surgery, but the ideal implant has yet to be found. The indications for its use still require caution and discernment. This review explores the reasons behind the introduction of mesh augmentation in POP surgery, and aims to
Auxiliary units for refining of high nitrogen content oils: Premium II refinery case
Nicolato, Paolo Contim; Pinotti, Rafael [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)
2012-07-01
PETROBRAS is constantly investing in its refining park in order to increase the production of clean and stable fuels and to be capable of processing heavier oils with high contaminant content. Sulfur and nitrogen are the main heteroatoms present in petroleum. They are responsible for some undesirable fuel properties, like corrosivity and instability, and also emit pollutants when burnt. Hydrotreating and hydrocracking processes are designed to remove these contaminants and adjust other fuel properties, generating, as byproducts, sour gas and sour water streams rich in H{sub 2}S and NH{sub 3}, which are usually sent to Sour Water Treatment Units and Sulfur Recovery Units. The regeneration of the amine used for treating the light streams, such as fuel gas and LPG, also generates sour gas streams that must be sent to Sulfur Recovery Units. As the ammonia content in the sour streams increases, some design parameters must be adjusted to avoid increasing the refinery emissions. Sulfur Recovery Units must provide proper NH{sub 3} destruction. Sour Water Treatment Units must provide proper segregation between H{sub 2}S and ammonia streams, whenever desirable. Amine Regeneration Systems must have an efficient procedure to avoid ammonia concentration in the amine solution. This paper presents some solutions usually applied in the petroleum industry and analyses some aspects related to the Premium II Refinery Project and how its design will help the Brazilian refining park to meet future environmental regulations and market demands. (author)
Lewis, G F; Ferguson, A M N; Ibata, R A; Irwin, M J; McConnachie, A W; Tanvir, N
2004-01-01
The existence of a stream of tidally stripped stars from the Sagittarius Dwarf galaxy demonstrates that the Milky Way is still in the process of accreting mass. More recently, an extensive stream of stars has been uncovered in the halo of the Andromeda galaxy (M31), revealing that it too is cannibalizing a small companion. This paper reports the recent observations of this stream, determining its spatial and kinematic properties, and tracing its three-dimensional structure, as well as describing future observations and what we may learn about the Andromeda galaxy from this giant tidal stream.
Hydrography - Streams and Shorelines
California Department of Resources — The hydrography layer consists of flowing waters (rivers and streams), standing waters (lakes and ponds), and wetlands -- both natural and manmade. Two separate...
Natalia Mironova
2014-01-01
.... Cassidy, the safety coordinator at the Airline Pilots Association, says Levine and others advocating for live data streaming are oversimplifying the issue and overlooking the logistical concerns...
Inventory of miscellaneous streams
Lueck, K.J.
1995-09-01
On December 23, 1991, the US Department of Energy, Richland Operations Office (RL) and the Washington State Department of Ecology (Ecology) agreed to adhere to the provisions of the Department of Ecology Consent Order. The Consent Order lists the regulatory milestones for liquid effluent streams at the Hanford Site to comply with the permitting requirements of the Washington Administrative Code. The RL provided the US Congress a Plan and Schedule to discontinue disposal of contaminated liquid effluent into the soil column on the Hanford Site. The plan and schedule document contained a strategy for the implementation of alternative treatment and disposal systems. This strategy included prioritizing the streams into two phases. The Phase 1 streams were considered to be higher priority than the Phase 2 streams. The actions recommended for the Phase 1 and 2 streams in the two reports were incorporated in the Hanford Federal Facility Agreement and Consent Order. Miscellaneous Streams are those liquid effluent streams identified within the Consent Order that are discharged to the ground but are not categorized as Phase 1 or Phase 2 Streams. This document consists of an inventory of the liquid effluent streams being discharged into the Hanford soil column.
Zone refining of cadmium and related characterization
N R Munirathnam; D S Prasad; Ch Sudheer; J V Rao; T L Prakash
2005-06-01
We present the zone refining results of cadmium using a horizontal resistive zone refiner under a constant flow of moisture-free hydrogen gas. The boron impurity in cadmium can be avoided by using a quartz (GE 214 grade) boat in lieu of a high-purity graphite boat. The analytical results using inductively coupled plasma optical emission spectrometry (ICP-OES) show that the majority of the impurities are below the detection limits. Comparatively, zinc is the most difficult impurity element to remove from the cadmium matrix by zone refining.
Refined curve counting on complex surfaces
Göttsche, Lothar; Shende, Vivek
2012-01-01
We define refined invariants which "count" nodal curves in sufficiently ample linear systems on surfaces, conjecture that their generating function is multiplicative, and conjecture explicit formulas in the case of K3 and abelian surfaces. We also give a refinement of the Caporaso-Harris recursion, and conjecture that it produces the same invariants in the sufficiently ample setting. The refined recursion specializes at y = -1 to the Itenberg-Kharlamov-Shustin recursion for Welschinger invari...
Aliyazicioglu, Tolga; Yalti, Tunc; Kabaoglu, Burcak
2017-08-01
Approximately one fifth of patients suffer from inguinal pain after laparoscopic total extraperitoneal (TEP) inguinal hernia repair. There is existing literature suggesting that the staples used to fix the mesh can cause postoperative inguinal pain. In this study, we describe our experience with laparoscopic TEP inguinal hernia surgery using 3-dimensional mesh without mesh fixation in our institution. A total of 300 patients who had undergone laparoscopic TEP inguinal hernia repair with 3-dimensional mesh in VKV American Hospital, Istanbul from November 2006 to November 2015 were studied retrospectively. Using the hospital's electronic archive, we studied selected patient parameters: demographic features (age, sex), body mass index, hernia locations and types, duration of operations, preoperative and postoperative complications, duration of hospital stay, cost of surgery, need for analgesics, and time elapsed until returning to daily activities and work. A total of 300 patients underwent laparoscopic TEP repair of 437 inguinal hernias from November 2006 to November 2015. Of the 185 patients, 140 were symptomatic. Mean duration of follow-up was 48 months (range, 6 to 104 mo). The mean duration of surgery was 55 minutes for bilateral hernia repair and 38 minutes for unilateral hernia repair. The mean duration of hospital stay was 0.9 day. There was no conversion to open surgery. In none of the cases was the mesh fixated with either staples or fibrin glue. Six patients (2%) developed seromas that were treated conservatively. One patient had inguinal hernia recurrence. One patient had a preperitoneal hematoma. One patient, operated on for an indirect right-sided hernia, developed a right-sided hydrocele. One patient had wound dehiscence at the umbilical port entry site. Chronic pain developed postoperatively in 1 patient. Ileus developed in 1 patient. Laparoscopic TEP inguinal repair with 3-dimensional mesh without mesh fixation can be performed as safe as
Meshed split skin graft for extensive vitiligo
Srinivas C
2004-05-01
A 30-year-old female presented with generalized stable vitiligo involving large areas of the body. Since large areas were to be treated, it was decided to do a meshed split skin graft. A phototoxic blister over the recipient site was induced by applying 8-MOP solution followed by exposure to UVA. The split skin graft was harvested from the donor area with a Padgett dermatome and meshed with an ampligreffe to increase the size of the graft 4 times. Significant pigmentation of the depigmented skin was seen after 5 months. This procedure helps to cover large recipient areas when pigmented donor skin is limited, with minimal risk of scarring. The phototoxic blister enables easy separation of the epidermis, thus saving the time required for dermabrasion of the recipient site.
Variational mesh segmentation via quadric surface fitting
Yan, Dongming
2012-11-01
We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, combining both the L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, repeatedly alternating between mesh partitioning and quadric surface fitting. We also integrate feature-based and simplification-based techniques into the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparison with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
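The alternation described above, partitioning elements among quadric proxies and then refitting each proxy, can be sketched as a Lloyd-style loop. The sketch below is a simplified, point-based analogue, not the paper's method: it uses a plain squared algebraic residual rather than the combined L2/L2,1 energy, and works on sample points rather than triangles. `fit_quadric` recovers implicit quadric coefficients as the smallest right singular vector of the monomial design matrix; all names are illustrative.

```python
import numpy as np

def _monomials(pts):
    # degree-2 monomial basis for an implicit quadric f(x,y,z) = c . m(x,y,z)
    x, y, z = pts.T
    return np.column_stack([x*x, y*y, z*z, x*y, y*z, x*z,
                            x, y, z, np.ones(len(pts))])

def quadric_residuals(coef, pts):
    # squared algebraic distance of each point to the implicit quadric
    return (_monomials(pts) @ coef) ** 2

def fit_quadric(pts):
    # the smallest right singular vector minimizes ||M c|| subject to ||c|| = 1
    _, _, Vt = np.linalg.svd(_monomials(pts), full_matrices=False)
    return Vt[-1]

def lloyd_segment(pts, k, iters=10, seed=0):
    """Alternate between assigning points to the best-fitting quadric proxy
    and refitting each proxy to its assigned points."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, len(pts))
    for _ in range(iters):
        coefs = [fit_quadric(pts[labels == j]) if np.sum(labels == j) >= 10
                 else rng.standard_normal(10)          # re-seed tiny clusters
                 for j in range(k)]
        dists = np.column_stack([quadric_residuals(c, pts) for c in coefs])
        labels = dists.argmin(axis=1)
    return labels
```

Points sampled exactly on a sphere, for instance, are annihilated by the fitted quadric up to numerical noise, which is what drives the partition step.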
Capacity estimation of wireless mesh networks
2005-01-01
This work presents a capacity estimation of wireless mesh networks. Networks of this type have unique topologies and traffic patterns that distinguish them from conventional wireless networks. In wireless mesh networks, nodes act both as clients and as servers, and traffic is forwarded to one or several gateways in a multi-hop fashion. The capacity estimation is based on studies of the physical and MAC layers. Channel propagation effects are also evaluated.
Nondispersive optical activity of meshed helical metamaterials.
Park, Hyun Sung; Kim, Teun-Teun; Kim, Hyeon-Don; Kim, Kyungjin; Min, Bumki
2014-11-17
Extreme optical properties can be realized by the strong resonant response of metamaterials consisting of subwavelength-scale metallic resonators. However, highly dispersive optical properties resulting from strong resonances have impeded the broadband operation required for frequency-independent optical components or devices. Here we demonstrate that strong, flat broadband optical activity with high transparency can be obtained with meshed helical metamaterials in which metallic helical structures are networked and arranged to have fourfold rotational symmetry around the propagation axis. This nondispersive optical activity originates from the Drude-like response as well as the fourfold rotational symmetry of the meshed helical metamaterials. The theoretical concept is validated in a microwave experiment in which flat broadband optical activity with a designed magnitude of 45° per layer of metamaterial is measured. The broadband capabilities of chiral metamaterials may provide opportunities in the design of various broadband optical systems and applications.
Energy-efficient wireless mesh networks
Ntlatlapa, N
2007-06-01
deficient areas such as rural areas in Africa. Index Terms: energy-efficient design, wireless mesh networks, network protocols. I. INTRODUCTION. The objectives of this research group are the application and adaptation of existing wireless local area... networks, especially those based on the 802.11 standard, for energy-efficient wireless mesh network (EEWMN) architectures, protocols and controls. In addition to the regular WMN features of self...
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
Gamra: Simple Meshes for Complex Earthquakes
Landry, Walter
2016-01-01
The static offsets caused by earthquakes are well described by elastostatic models with a discontinuity in the displacement along the fault. A traditional approach to model this discontinuity is to align the numerical mesh with the fault and solve the equations using finite elements. However, this distorted mesh can be difficult to generate and update. We present a new numerical method, inspired by the Immersed Interface Method, for solving the elastostatic equations with embedded discontinuities. This method has been carefully designed so that it can be used on parallel machines on an adapted finite difference grid. We have implemented this method in Gamra, a new code for earth modelling. We demonstrate the correctness of the method with analytic tests, and we demonstrate its practical performance by solving a realistic earthquake model to extremely high precision.
Mesh convergence study for hydraulic turbine draft-tube
Devals, C.; Vu, T. C.; Zhang, Y.; Dompierre, J.; Guibault, F.
2016-11-01
Computational flow analysis is an essential tool for hydraulic turbine designers. Grid generation is the first step in the flow analysis process, and grid quality and solution accuracy are strongly linked. Even though many studies have addressed the issue of mesh independence, there is still no definitive consensus on mesh best practices, and research on the topic is still needed. This paper presents a mesh convergence study for turbulent flow in hydraulic turbine draft-tubes, which represent the most challenging turbine component for CFD predictions. The findings from this parametric study will be incorporated as mesh control rules in an in-house automatic mesh generator for turbine components.
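A standard way to quantify mesh convergence in a parametric study of this kind is Richardson extrapolation together with Roache's Grid Convergence Index (GCI). The abstract does not state the authors' exact procedure, so the following is a generic, hedged sketch for a scalar quantity computed on three uniformly refined grids:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of convergence from solutions on three grids with
    constant refinement ratio r (f1 = finest grid, f3 = coarsest)."""
    return math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

def gci_fine(f1, f2, r, p, Fs=1.25):
    """Grid Convergence Index on the fine grid, as a fraction of f1.
    Fs is the usual safety factor for three-grid studies."""
    return Fs * abs((f2 - f1) / f1) / (r**p - 1)
```

For an error that behaves like C*h^p, `observed_order` recovers p exactly, and the GCI then bounds the relative discretization error on the finest mesh.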
Overlay Share Mesh for Interactive Group Communication with High Dynamic
WU Yan-hua; CAI Yun-ze; XU Xiao-ming
2007-01-01
An overlay share mesh infrastructure is presented for highly dynamic group communication systems, such as distributed interactive simulation (DIS) and distributed virtual environments (DVE). By sharing links among different groups, the overlay share mesh infrastructure adapts better to highly dynamic groups than the traditional multi-tree multicast infrastructure. The mechanism of the overlay share mesh based on area of interest (AOI) is discussed in detail in this paper. A large number of simulation experiments were done and the performance of the mesh infrastructure was studied. Experimental results show that the overlay share mesh infrastructure has better adaptability than the traditional multi-tree infrastructure for highly dynamic group communication systems.
Diffusive mesh relaxation in ALE finite element numerical simulations
Dube, E.I.
1996-06-01
The theory for a diffusive mesh relaxation algorithm is developed for use in three-dimensional Arbitrary Lagrangian/Eulerian (ALE) finite element simulation techniques. This mesh relaxer is derived from a variational principle for an unstructured 3D grid using finite elements, and incorporates hourglass controls in the numerical implementation. The diffusive coefficients are based on the geometric properties of the existing mesh, and are chosen so as to allow for a smooth grid that retains the general shape of the original mesh. The diffusive mesh relaxation algorithm is then applied to an ALE code system, and results from several test cases are discussed.
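The report's relaxer is variational, 3D, and includes hourglass control; as a much-simplified analogue, the diffusive idea can be illustrated by damped Jacobi smoothing that moves each free node toward the average of its neighbors while boundary nodes stay fixed. Everything below (uniform diffusion weights, the `alpha` damping factor) is an assumption of this sketch, not the report's formulation.

```python
import numpy as np

def relax_mesh(nodes, neighbors, fixed, iters=200, alpha=0.5):
    """Jacobi-style diffusive relaxation of node positions.
    nodes:     (n, d) array of coordinates
    neighbors: dict mapping a free node index to its neighbor indices
    fixed:     set of node indices held in place (e.g. the boundary)"""
    pts = nodes.copy()
    for _ in range(iters):
        new = pts.copy()
        for i, nbrs in neighbors.items():
            if i in fixed:
                continue
            # diffuse toward the neighbor average, damped by alpha
            new[i] = (1 - alpha) * pts[i] + alpha * pts[nbrs].mean(axis=0)
        pts = new
    return pts
```

On a 1D chain with fixed endpoints, this converges to evenly spaced interior nodes, the discrete analogue of a smooth grid that keeps the original shape.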
Effects of mesh resolution on hypersonic heating prediction
[No author listed]
2011-01-01
Aeroheating prediction is a challenging and critical problem for the design and optimization of hypersonic vehicles. One challenge is that the solution of the Navier-Stokes equations strongly depends on the computational mesh. In this letter, the effect of mesh resolution on heat flux prediction is studied. It is found that mesh-independent solutions can be obtained using a fine mesh, whose accuracy is confirmed by results from kinetic particle simulation. It is analyzed that mesh-induced numerical error comes m...
Wireless experiments on a Motorola mesh testbed.
Riblett, Loren E., Jr.; Wiseman, James M.; Witzke, Edward L.
2010-06-01
Motomesh is a Motorola product that performs mesh networking at both the client and access point levels and allows broadband mobile data connections with or between clients moving at vehicular speeds. Sandia National Laboratories has extensive experience with this product and its predecessors in infrastructure-less mobile environments. This report documents experiments that characterize certain aspects of how the Motomesh network performs when mobile units are added to a fixed network infrastructure.
Airbag Mapped Mesh Auto-Flattening Method
ZHANG Jinhuan; MA Chunsheng; BAI Yuanli; HUANG Shilin
2005-01-01
Current software cannot easily model an airbag to be flattened without wrinkles. This paper improves the modeling efficiency using the initial metric method to design a mapped mesh auto-flattening algorithm. The element geometric transformation matrix was obtained using the theory of computer graphics. The algorithm proved to be practical for modeling a passenger-side airbag model. The efficiency and precision of modeling airbags are greatly improved by this method.
Gradient Domain Mesh Deformation - A Survey
Wei-Wei Xu; Kun Zhou
2009-01-01
This survey reviews the recent development of gradient domain mesh deformation methods. Unlike other deformation methods, the gradient domain deformation method is a surface-based, variational optimization method. It directly encodes the geometric details in differential coordinates, also called Laplacian coordinates in the literature. By preserving the Laplacian coordinates, the mesh details can be well preserved during deformation. Due to the locality of the Laplacian coordinates, the variational optimization problem can be cast into a sparse linear system. Fast sparse linear solvers can be adopted to generate the deformation result interactively, or even in real time. The nonlinear nature of gradient domain mesh deformation has led to two categories of deformation methods: linearization methods and nonlinear optimization methods. Basically, the linearization methods only need to solve the linear least-squares system once. They are fast, easy to understand and control, although the deformation result might be suboptimal. Nonlinear optimization methods can reach an optimal solution of the deformation energy function by iterative updating. Since the computation of nonlinear methods is expensive, reduced deformable models should be adopted to achieve interactive performance. The nonlinear optimization methods avoid the burden of specifying transformations at deformation handles, and they can be extended to incorporate various nonlinear constraints, such as volume constraints, skeleton constraints, and so on. We review representative methods and related approaches of each category comparatively and hope to help the reader understand the motivation behind the algorithms. Finally, we discuss the relation between physical simulation and gradient domain mesh deformation to reveal why it can achieve physically plausible deformation results.
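The linearization approach described above, preserving Laplacian (differential) coordinates while meeting handle positions in a single least-squares solve, can be sketched as follows. This toy uses a uniform umbrella Laplacian and dense NumPy linear algebra for clarity; real systems use cotangent weights and sparse solvers, and the soft-constraint weight `w` is an assumption of the sketch.

```python
import numpy as np

def laplacian_matrix(n, edges):
    # uniform (umbrella) graph Laplacian for n vertices
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def deform(verts, edges, handles, w=100.0):
    """Linearized gradient-domain deformation: minimize
    ||L x - delta||^2 + w^2 * sum ||x_h - target_h||^2,
    where delta are the Laplacian coordinates of the rest pose."""
    n = len(verts)
    L = laplacian_matrix(n, edges)
    delta = L @ verts                      # differential coordinates to preserve
    rows, rhs = [L], [delta]
    for i, target in handles.items():      # handle positions as soft constraints
        e = np.zeros(n); e[i] = w
        rows.append(e[None, :]); rhs.append(w * np.asarray(target)[None, :])
    A = np.vstack(rows); b = np.vstack(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

Translating both handles by the same offset reproduces a pure translation of the whole mesh, since the Laplacian coordinates are translation-invariant.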
Solid Mesh Registration for Radiotherapy Treatment Planning
Noe, Karsten Østergaard; Sørensen, Thomas Sangild
2010-01-01
We present an algorithm for solid organ registration of pre-segmented data represented as tetrahedral meshes. Registration of the organ surface is driven by force terms based on a distance field representation of the source and reference shapes. Registration of internal morphology is achieved usi...... to complete. The proposed method has many potential uses in image guided radiotherapy (IGRT) which relies on registration to account for organ deformation between treatment sessions....
Performance Evaluation of Coded Meshed Networks
Krigslund, Jeppe; Hansen, Jonas; Pedersen, Morten Videbæk
2013-01-01
of the former to enhance the gains of the latter. We first motivate our work through measurements in WiFi mesh networks. Later, we compare state-of-the-art approaches, e.g., COPE, RLNC, to CORE. Our measurements show the higher reliability and throughput of CORE over other schemes, especially, for asymmetric...... and/or high loss probabilities. We show that a store and forward scheme outperforms COPE under some channel conditions, while CORE yields 3dB gains....
Particle Mesh Hydrodynamics for Astrophysics Simulations
Chatelain, Philippe; Cottet, Georges-Henri; Koumoutsakos, Petros
We present a particle method for the simulation of three-dimensional compressible hydrodynamics based on a hybrid Particle-Mesh discretization of the governing equations. The method is rooted in the regularization of particle locations, as in remeshed Smoothed Particle Hydrodynamics (rSPH). The rSPH method was recently introduced to remedy problems associated with the distortion of computational elements in SPH, by periodically re-initializing the particle positions and by using high-order interpolation kernels. In the PMH formulation, the particles solely handle the convective part of the compressible Euler equations. The particle quantities are then interpolated onto a mesh, where the pressure terms are computed. PMH, like SPH, is free of the convective CFL condition while at the same time being more efficient, as derivatives are computed on a mesh rather than through particle-particle interactions. PMH does not detract from the adaptive character of SPH and allows for control of its accuracy. We present simulations of a benchmark astrophysics problem demonstrating the capabilities of this approach.
Mesh Learning for Classifying Cognitive Processes
Ozay, Mete; Öztekin, Uygar; Vural, Fatos T Yarman
2012-01-01
The major goal of this study is to model the encoding and retrieval operations of the brain during memory processing, using statistical learning tools. The suggested method assumes that the memory encoding and retrieval processes can be represented by a supervised learning system, which is trained with the brain data collected from functional Magnetic Resonance Imaging (fMRI) measurements during the encoding stage. Then, the system outputs the same class labels as those of the fMRI data collected during the retrieval stage. The most challenging problem in modeling such a learning system is the design of the interactions among the voxels to extract information about the underlying patterns of brain activity. In this study, we suggest a new method called Mesh Learning, which represents each voxel by a mesh of voxels in a neighborhood system. The nodes of the mesh are a set of neighboring voxels, whereas the arc weights are estimated by a linear regression model. The estimated arc weights are used to form Local Re...
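The arc-weight estimation step described above, a linear regression expressing each voxel's response as a combination of its mesh neighbors, can be sketched directly. The neighborhood structure, data, and function name below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def mesh_arc_weights(X, neighbors):
    """Estimate arc weights a_ij so that voxel i's response is the
    least-squares linear combination of its neighbors' responses.
    X:         (n_samples, n_voxels) response matrix (e.g. fMRI features)
    neighbors: dict mapping voxel index -> list of neighbor voxel indices"""
    weights = {}
    for i, nbrs in neighbors.items():
        A = X[:, nbrs]                          # neighbors' responses
        y = X[:, i]                             # voxel i's response
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        weights[i] = w                          # one arc weight per neighbor
    return weights
```

If a voxel truly is a linear mixture of its neighbors, the regression recovers the mixing coefficients exactly; in practice the residual carries the voxel-specific signal.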
Mesh deployable antenna mechanics testing method
Jiang, Li
Rapid development of space technologies and the continuous expansion of astronautics applications impose ever stricter requirements on space structures. Deployable space structures, a relatively new structural form, are being extensively adopted because of their deployability. The deployable mesh reflector antenna is a common type of deployable antenna. Its reflector consists of a metal mesh, and its electrical properties depend strongly on its mechanical parameters (including surface accuracy, angle, and position). These mechanical parameters therefore have to be calibrated. This paper presents a mesh antenna mechanics testing method that employs both an electronic theodolite and a laser tracker. The laser tracker is first used to measure the shape of the radial-rib deployable antenna. The measurement data are then fitted to a paraboloid by means of error compensation, yielding the focus and the focal axis of the paraboloid. The following step is to synchronize the coordinate systems of the electronic theodolite and the measured antenna. Finally, in a microwave anechoic chamber environment, the electromechanical axis is calibrated. Testing results verify the effectiveness of the presented method.
MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.
Mao, Yuqing; Lu, Zhiyong
2017-04-17
MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are not immediately indexed until 2 or 3 months later) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that MeSH Now is capable of processing PubMed scale documents within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/ .
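The precision, recall, and F1 metrics used in the benchmarks above can be reproduced per article as a simple set comparison of predicted versus indexer-assigned MeSH terms (the term strings below are illustrative, not from the paper's datasets):

```python
def prf1(predicted, gold):
    """Per-article precision, recall and F1 between a predicted
    MeSH-term set and the gold (human-indexed) set."""
    tp = len(set(predicted) & set(gold))            # correctly assigned terms
    p = tp / len(predicted) if predicted else 0.0   # precision
    r = tp / len(gold) if gold else 0.0             # recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0      # harmonic mean
    return p, r, f1
```

Averaging these per-article scores over a benchmark set gives figures directly comparable to the 0.610-0.612 F-scores reported.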
Data-Parallel Mesh Connected Components Labeling and Analysis
Harrison, Cyrus; Childs, Hank; Gaither, Kelly
2011-04-10
We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
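The labeling step builds on the classic Union-find structure; a serial sketch is below. The multi-stage cross-processor merging and spatial partitioning that make the paper's algorithm data-parallel are not shown here.

```python
def find(parent, x):
    # find the root representative, with path halving for near-constant cost
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    # merge the sets containing cells a and b
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def label_components(n_cells, adjacency):
    """Label connected sub-meshes of a cell-adjacency graph with
    compact ids 0..k-1 via Union-find."""
    parent = list(range(n_cells))
    for a, b in adjacency:
        union(parent, a, b)
    roots = [find(parent, i) for i in range(n_cells)]
    remap = {r: k for k, r in enumerate(sorted(set(roots)))}
    return [remap[r] for r in roots]
```

Marking each cell with its component label is exactly what enables the topology-based feature isolation the abstract mentions.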
Design and Implementation of a Single-Frequency Mesh Network Using OpenAirInterface
Kaltenberger Florian
2010-01-01
OpenAirInterface is an experimental open-source real-time hardware and software platform for experimentation in wireless communications and signal processing. With the help of OpenAirInterface, researchers can demonstrate novel ideas quickly and verify them in a realistic environment. Its current implementation provides a full open-source software modem comprising physical- and link-layer functionalities for cellular and mesh network topologies. The physical (PHY) layer of the platform targets fourth-generation wireless networks and thus uses orthogonal frequency division multiple access (OFDMA) together with multiple-input multiple-output (MIMO) techniques. The current hardware supports 5 MHz bandwidth and two transmit/receive antennas. The media access (MAC) layer of the platform supports abundant two-way signaling for enabling collaboration, scheduling protocols, as well as traffic and channel measurements. In this paper, we focus on the mesh topology and show how to implement a single-frequency mesh network with OpenAirInterface. The key ingredients to enable such a network are a dual-stream MIMO receiver structure and a distributed network synchronization algorithm. We show how to implement these two algorithms in real time on the OpenAirInterface platform. Furthermore, we provide results from field trials and compare them to the simulation results.
Promoting wired links in wireless mesh networks: an efficient engineering solution.
Barekatain, Behrang; Raahemifar, Kaamran; Ariza Quintana, Alfonso; Triviño Cabrera, Alicia
2015-01-01
Wireless Mesh Networks (WMNs) cannot completely guarantee good performance of traffic sources such as video streaming. To improve the network performance, this study proposes an efficient engineering solution named Wireless-to-Ethernet-Mesh-Portal-Passageway (WEMPP) that allows effective use of wired communication in WMNs. WEMPP permits transmitting data through wired and stable paths even when the destination is in the same network as the source (Intra-traffic). Tested with four popular routing protocols (Optimized Link State Routing or OLSR as a proactive protocol, Dynamic MANET On-demand or DYMO as a reactive protocol, DYMO with spanning tree ability and HWMP), WEMPP considerably decreases the end-to-end delay, jitter, contentions and interferences on nodes, even when the network size or density varies. WEMPP is also cost-effective and increases the network throughput. Moreover, in contrast to solutions proposed by previous studies, WEMPP is easily implemented by modifying the firmware of the actual Ethernet hardware without altering the routing protocols and/or the functionality of the IP/MAC/Upper layers. In fact, there is no need for modifying the functionalities of other mesh components in order to work with WEMPPs. The results of this study show that WEMPP significantly increases the performance of all routing protocols, thus leading to better video quality on nodes.
Toxicological characteristics of refinery streams used to manufacture lubricating oils.
Kane, M L; Ladov, E N; Holdsworth, C E; Weaver, N K
1984-01-01
In the past, reports on the tumorigenic potential of lubricating oils in experimental animals have poorly defined the materials under study. In this paper the results of mouse skin painting studies with 46 clearly defined samples of refinery streams associated with lubricating oil processing show that modern conventional solvent refining of distillates removes tumorigenic potential while conventional acid refining may not. Furthermore, dewaxing, hydrofinishing, and clay treatments do not appear to mitigate the tumorigenicity of the lubricant distillates. Lubricant processing has changed over the years and assessments of the carcinogenicity of present-day lubricating materials must be based on knowledge of modern processing.
Oral, intestinal, and skin bacteria in ventral hernia mesh implants
Odd Langbach
2016-07-01
Background: In ventral hernia surgery, mesh implants are used to reduce recurrence. Infection after mesh implantation can be a problem, and rates around 6–10% have been reported. Bacterial colonization of mesh implants in patients without clinical signs of infection has not been thoroughly investigated. Molecular techniques have proven effective in demonstrating bacterial diversity in various environments and are able to identify bacteria on a gene-specific level. Objective: The purpose of this study was to detect bacterial biofilm in mesh implants, analyze its bacterial diversity, and look for possible resemblance with bacterial biofilm from the periodontal pocket. Methods: Thirty patients referred to our hospital for recurrence after former ventral hernia mesh repair were examined for periodontitis in advance of new surgical hernia repair. Oral examination included periapical radiographs, periodontal probing, and subgingival plaque collection. A piece of mesh (1×1 cm) from the abdominal wall was harvested during the new surgical hernia repair and analyzed for bacteria by PCR and 16S rRNA gene sequencing. From patients with positive PCR mesh samples, subgingival plaque samples were analyzed with the same techniques. Results: A great variety of taxa were detected in 20 (66.7%) mesh samples, including typical oral commensals and periodontopathogens, enterics, and skin bacteria. Mesh and periodontal bacteria were further analyzed for similarity in 16S rRNA gene sequences. In 17 sequences, the level of resemblance between mesh and subgingival bacterial colonization was 98–100%, suggesting, but not proving, a transfer of oral bacteria to the mesh. Conclusion: The results show great bacterial diversity on mesh implants from the anterior abdominal wall, including oral commensals and periodontopathogens. Mesh can be reached by bacteria in several ways, including hematogenous spread from an oral site. However, other sites such as gut and skin may also
Protein structure refinement by optimization.
Carlsen, Martin; Røgen, Peter
2015-09-01
Knowledge-based protein potentials are simplified potentials designed to improve the quality of protein models, which is important as more accurate models are more useful for biological and pharmaceutical studies. Consequently, knowledge-based potentials often are designed to be efficient at ordering a given set of deformed structures, denoted decoys, according to how close they are to the relevant native protein structure. This, however, does not necessarily imply that energy minimization of such a potential will bring the decoys closer to the native structure. In this study, we introduce an iterative strategy to improve the convergence of decoy structures. It works by adding energy-optimized decoys to the pool of decoys used to construct the next, improved knowledge-based potential. We demonstrate that this strategy results in significantly improved decoy convergence on Titan high-resolution decoys and refinement targets from Critical Assessment of protein Structure Prediction competitions. Our potential is formulated in Cartesian coordinates and has a fixed backbone potential that restricts motions to be close to those of a dihedral model, a fixed hydrogen-bonding potential, and a variable coarse-grained carbon-alpha potential consisting of a pair potential and a novel solvent potential, both B-spline based, as we use explicit gradients and Hessians for efficient energy optimization.
Model Checking Linearizability via Refinement
Liu, Yang; Chen, Wei; Liu, Yanhong A.; Sun, Jun
Linearizability is an important correctness criterion for implementations of concurrent objects. Automatic checking of linearizability is challenging because it requires checking that 1) all executions of concurrent operations be serializable, and 2) the serialized executions be correct with respect to the sequential semantics. This paper describes a new method to automatically check linearizability based on refinement relations from abstract specifications to concrete implementations. Our method avoids the often difficult task of determining linearization points in implementations, but can also take advantage of linearization points if they are given. The method exploits model checking of finite state systems specified as concurrent processes with shared variables. Partial order reduction is used to effectively reduce the search space. The approach is built into a toolset that supports a rich set of concurrent operators. The tool has been used to automatically check a variety of implementations of concurrent objects, including the first algorithms for the mailbox problem and scalable NonZero indicators. Our system was able to find all known and injected bugs in these implementations.
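The refinement-based check the paper automates can be contrasted with the naive definition-chasing approach: search over every total order of the operations that respects real-time precedence, replaying each against the sequential specification. The sketch below does exactly that for a tiny counter object; it is exponential in history length, which is why model checking with partial order reduction matters. The history encoding and `counter_spec` are illustrative assumptions, not the paper's input format.

```python
from itertools import permutations

def linearizable(history, apply_op, init):
    """Brute-force linearizability check for a complete history.
    history: list of (invoke_time, return_time, op, result) tuples."""
    n = len(history)
    for perm in permutations(range(n)):
        pos = {idx: k for k, idx in enumerate(perm)}
        # real-time order: if a returns before b is invoked, a must precede b
        if any(history[a][1] < history[b][0] and pos[a] > pos[b]
               for a in range(n) for b in range(n)):
            continue
        state, ok = init, True
        for idx in perm:                       # replay against the sequential spec
            _, _, op, expected = history[idx]
            state, out = apply_op(state, op)
            if out != expected:
                ok = False
                break
        if ok:
            return True                        # found a valid linearization
    return False

def counter_spec(state, op):
    # sequential specification of a simple counter
    if op == "inc":
        return state + 1, None
    return state, state                        # "read" returns the current value
```

An overlapping increment and a read that observes it are linearizable; the same read claiming an impossible value is not.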