Adaptive Mesh Refinement in CTH
Crawford, David
1999-01-01
This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three improvement in memory and performance over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
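The 2:1 balance constraint mentioned above can be sketched in a few lines. This is a toy illustration of our own, not CTH's implementation: block refinement levels live on a 2D index grid, and any block more than one level coarser than a neighbor is refined until the hierarchy is balanced.

```python
# Toy sketch (not CTH's actual code): enforce the isotropic 2:1 balance
# constraint of block-based AMR, where adjacent blocks may differ by at
# most one refinement level.

def enforce_two_to_one(levels):
    """levels: 2D list of per-block refinement levels; returns a balanced copy."""
    rows, cols = len(levels), len(levels[0])
    out = [row[:] for row in levels]
    changed = True
    while changed:
        changed = False
        for i in range(rows):
            for j in range(cols):
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        # A neighbor more than one level coarser must be refined.
                        if out[i][j] - out[ni][nj] > 1:
                            out[ni][nj] = out[i][j] - 1
                            changed = True
    return out

# One deeply refined block forces a ring of intermediate levels around it.
balanced = enforce_two_to_one([[0, 0, 0], [0, 3, 0], [0, 0, 0]])
```

Refining a single block to level 3 propagates outward, so its edge neighbors end at level 2 and the corners at level 1.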
Multigrid for refined triangle meshes
Shapira, Yair
1997-02-01
A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
Local adaptive mesh refinement for shock hydrodynamics
Berger, M.J.; Colella, P. (Lawrence Livermore Laboratory, Livermore, California 94550)
1989-01-01
The aim of this work is the development of an automatic, adaptive mesh refinement strategy for solving hyperbolic conservation laws in two dimensions. There are two main difficulties in doing this. The first problem is due to the presence of discontinuities in the solution and the effect on them of discontinuities in the mesh. The second problem is how to organize the algorithm to minimize memory and CPU overhead. This is an important consideration and will continue to be important as more sophisticated algorithms that use data structures other than arrays are developed for use on vector and parallel computers. Copyright 1989 Academic Press, Inc.
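The flag-and-cluster step at the heart of this style of AMR can be illustrated with a minimal 1D sketch (our own, not the paper's code): cells where the solution jumps sharply are flagged, then enclosed in a buffered patch. Production codes use smarter clustering, such as the Berger-Rigoutsos algorithm.

```python
# Minimal sketch of AMR cell flagging and patch generation in 1D.

def flag_cells(u, tol):
    """Flag cell interfaces where the jump in u exceeds tol."""
    return [i for i in range(len(u) - 1) if abs(u[i + 1] - u[i]) > tol]

def bounding_patch(flags, buffer, n):
    """Enclose flagged cells in one patch, padded by `buffer` ghost cells."""
    if not flags:
        return None
    return (max(0, min(flags) - buffer), min(n - 1, max(flags) + buffer))

u = [1.0, 1.0, 1.0, 0.2, 0.2, 0.2]        # a discontinuity between cells 2 and 3
flags = flag_cells(u, 0.5)                # -> [2]
patch = bounding_patch(flags, 1, len(u))  # -> (1, 3): refine around the shock
```

The buffer cells ensure the discontinuity stays inside the fine patch for a few time steps before regridding is needed.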
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a production-level software package that applies the Adaptive Mesh Refinement methodology to numerical partial differential equations. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation on two grid configurations from a real application. Line counts of the code for both implementations are also provided.
Adaptive hybrid mesh refinement for multiphysics applications
Khamayseh, Ahmed; Almeida, Valmor de
2007-01-01
The accuracy and convergence of computational solutions of mesh-based methods are strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes composed of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multiphysics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either r-adaptive mesh optimization or h-adaptive mesh refinement on the initial isotropic or anisotropic meshes to equidistribute a weighted geometric and/or solution error function. We have successfully introduced r-h adaptivity into a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate modeling. This technology also applies to a wide range of disciplines in the computational sciences, most notably time-dependent multiphysics and multiscale modeling and simulation.
Adaptive mesh refinement for storm surge
Mandli, Kyle T.; Dawson, Clint N.
2014-01-01
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run. © 2014 Elsevier Ltd.
COSMOLOGICAL ADAPTIVE MESH REFINEMENT MAGNETOHYDRODYNAMICS WITH ENZO
Collins, David C.; Xu, Hao; Norman, Michael L.; Li, Hui; Li, Shengtai
2010-01-01
In this work, we present EnzoMHD, the extension of the cosmological code Enzo to include the effects of magnetic fields through the ideal magnetohydrodynamics approximation. We use a higher order Godunov method for the computation of interface fluxes. We use two constrained transport methods to compute the electric field from those interface fluxes, which simultaneously advances the induction equation and maintains the divergence of the magnetic field. A second-order divergence-free reconstruction technique is used to interpolate the magnetic fields in the block-structured adaptive mesh refinement framework already extant in Enzo. This reconstruction also preserves the divergence of the magnetic field to machine precision. We use operator splitting to include gravity and cosmological expansion. We then present a series of cosmological and non-cosmological test problems to demonstrate the quality of solution resulting from this combination of solvers.
Adaptive mesh refinement for shocks and material interfaces
Dai, William Wenlong [Los Alamos National Laboratory]
2010-01-01
There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats cells individually and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve the symmetries of physics problems. In this paper, we present an approach for patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.
Trajectory Optimization Based on Multi-Interval Mesh Refinement Method
Ningbo Li
2017-01-01
In order to improve the optimization accuracy and convergence rate for trajectory optimization of the air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall number of collocation points to improve convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover of midcourse and terminal guidance, and then the optimization model was built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.
Mesh Generation via Local Bisection Refinement of Triangulated Grids
2015-06-01
Defence Science and Technology Organisation, DSTO-TR-3095. This report provides a comprehensive implementation of an unstructured mesh generation method... their behaviour is critically linked to Maubach's method and the data structures N and T. The top-level mesh refinement algorithm is also presented.
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software for visualization, as generic visualization tools work on Cartesian grid data. This is why our team has also developed the PYMSES software. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run some code in parallel. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR: the first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of an MPI run. The load-balancing strategy has to be defined intelligently in order to achieve a good speedup in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Beckingsale, D. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Gaudin, W. P. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Hornung, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Herdman, J. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Jarvis, S. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
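The coarsen and refine mesh-transfer operators the paper mentions can be sketched in plain Python (the function names are ours; the library itself implements them as data-parallel GPU kernels). Coarsening averages fine cells into a coarse cell; refinement here uses piecewise-constant prolongation, so restriction exactly undoes it.

```python
# Illustrative 1D versions (not AWE's library) of the two AMR transfer
# operators: restriction (coarsen) and prolongation (refine), factor 2.

def coarsen(fine):
    """Average each pair of fine cells into one coarse cell."""
    return [(fine[2 * i] + fine[2 * i + 1]) / 2.0 for i in range(len(fine) // 2)]

def refine(coarse):
    """Piecewise-constant prolongation: copy each coarse value to two fine cells."""
    out = []
    for v in coarse:
        out.extend([v, v])
    return out

fine = [1.0, 3.0, 5.0, 7.0]
coarse = coarsen(fine)                     # -> [2.0, 6.0]
assert coarsen(refine(coarse)) == coarse   # restriction undoes prolongation
```

On a GPU these loops map naturally to one thread per coarse cell, which is what makes the operators attractive for the data-parallel setting the paper describes.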
On mesh refinement and accuracy of numerical solutions
Zhou, Hong; Peters, Maria; van Oosterom, Adriaan
1993-01-01
This paper investigates mesh refinement and its relation to the accuracy of the boundary element method (BEM) and the finite element method (FEM). To this end an isotropic homogeneous spherical volume conductor, for which the analytical solution is available, was used. The numerical results
Adaptive mesh refinement and adjoint methods in geophysics simulations
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. the Earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria for adaptation are most suitable. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times.
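The goal-oriented (dual-weighted residual) idea described above can be sketched as follows; the numbers and function names are ours, purely for illustration. Local residuals are weighted by an adjoint solution tied to the output functional, and the cells with the largest weighted error are marked for refinement.

```python
# Sketch of a dual-weighted residual refinement indicator. Values are
# made up for illustration; a real code computes residuals and the
# adjoint from the discretized PDE and the objective functional.

def goal_oriented_indicators(residuals, adjoint):
    """Per-cell error indicators eta_K = |r_K * z_K|."""
    return [abs(r * z) for r, z in zip(residuals, adjoint)]

def cells_to_refine(indicators, fraction):
    """Indices of the top `fraction` of cells ranked by indicator value."""
    k = max(1, int(len(indicators) * fraction))
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    return sorted(order[:k])

# A small residual paired with a large adjoint weight can dominate:
eta = goal_oriented_indicators([0.1, 0.5, 0.05, 0.2], [1.0, 0.1, 3.0, 0.5])
marked = cells_to_refine(eta, 0.25)   # one cell of four is refined
```

Note how cell 2 wins despite having the smallest raw residual: the adjoint weight encodes how strongly that cell's error pollutes the observable of interest.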
Improvement of neutronic calculations on a Masurca core using adaptive mesh refinement capabilities
Fournier, D.; Archier, P.; Le Tellier, R.; Suteau, C.
2011-01-01
The simulation of 3D cores with homogenized assemblies in transport theory remains time and memory consuming for production calculations. With a multigroup discretization for the energy variable and a discrete ordinates method for the angle, a system of about 10^4 coupled hyperbolic transport equations has to be solved. For these equations, we intend to optimize the spatial discretization. In the framework of the SNATCH solver used in this study, the spatial problem is dealt with by using a structured hexahedral mesh and applying a Discontinuous Galerkin Finite Element Method (DGFEM). This paper shows the improvements due to the development of Adaptive Mesh Refinement (AMR) methods. As the SNATCH solver uses a hierarchical polynomial basis, p-refinement is possible, but so is h-refinement thanks to non-conforming capabilities. Besides, as the spatial behavior of the flux is highly dependent on the energy, we propose to adapt the spatial discretization differently according to the energy group. To avoid dealing with too many meshes, some energy groups are joined and share the same mesh. The different energy-dependent AMR strategies are compared to each other, but also with the classical approach of a conforming and highly refined spatial mesh. This comparison is carried out on different quantities such as the multiplication factor, the flux and the current. The gain in time and memory is shown for 2D and 3D benchmarks coming from the ZONA2B experimental core configuration of the MASURCA mock-up at CEA Cadarache. (author)
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New adaptive mesh refinement (AMR) techniques allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to trace their evolution precisely. Our AMR code is thus well suited to thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
Mesh refinement of riser simulation with the aid of gamma transmission
Lima Filho, Hilario J.B. de; Benachour, Mohand; Dantas, Carlos C.; Brito, Marcio F.P.; Santos, Valdemir A. dos
2013-01-01
Vertical circulating fluidized bed reactors (CFBR), in which the particulate and gaseous phases flow upward (riser), have been widely used in gasification, combustion and fluid catalytic cracking (FCC) processes. The efficiency of these biphasic (gas-solid) reactors depends largely on their hydrodynamic characteristics, which show different behaviors in the axial and radial directions. The axial distribution of solids shows a higher concentration at the base, becoming more dilute toward the top. Radially, the solids concentration is characterized as core-annular: the central region is highly dilute, consisting of dispersed particles and fluid. In the present work, a two-dimensional (2D) geometry was developed using computational fluid dynamics (CFD) simulations to predict the gas-solid flow in the riser of a CFBR through transient modeling based on the kinetic theory of granular flow. Refining the computational mesh provides more information on the parameters studied, but may increase the processing time of the simulations. A minimum number of cells for the mesh construction was obtained by testing five meshes. The hydrodynamic parameters were validated using a 241Am gamma source and an NaI(Tl) detector. The numerical results were consistent with the experimental data, indicating that refining the computational mesh in a controlled manner improves the approximation to the expected results. (author)
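The grid-independence test described in the abstract (several candidate meshes, keep the coarsest one whose result has stopped changing) can be sketched like this; the stand-in solver and the tolerance are our own assumptions, not the authors'.

```python
# Sketch of a grid-independence (mesh refinement) study. The solver below
# is a stand-in whose result converges as the mesh is refined; a real study
# would run the CFD code on each mesh and monitor a quantity of interest.

def solve_on_mesh(n_cells):
    """Stand-in for a CFD run: the monitored quantity converges like 1/n."""
    return 1.0 + 1.0 / n_cells

def minimum_mesh(candidates, rel_tol):
    """First mesh whose result differs from the next finer one by < rel_tol."""
    for coarse, fine in zip(candidates, candidates[1:]):
        a, b = solve_on_mesh(coarse), solve_on_mesh(fine)
        if abs(a - b) / abs(b) < rel_tol:
            return coarse
    return candidates[-1]

meshes = [100, 200, 400, 800, 1600]   # five candidate meshes, as in the paper
chosen = minimum_mesh(meshes, 1e-3)   # coarsest mesh that is fine enough
```

The payoff is exactly the trade-off the abstract names: the chosen mesh is accurate enough, while every finer mesh only adds processing time.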
Mesh refinement for uncertainty quantification through model reduction
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive, because for discontinuous problems the expansion converges very slowly. An alternative to using higher-order terms in the expansion is to divide the random space into smaller elements where a lower-degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process, since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random-space mesh based on the use of a reduced model. The idea is that a good reduced model can accurately monitor, within a random-space element, the cascade of activity to higher-degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are most needed. For the Kraichnan–Orszag system, the prototypical system for studying discontinuities in random space, we present theoretical results which show why the proposed method is sound, and numerical results which corroborate the theory.
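The refinement decision can be sketched with a toy rule of our own devising (the paper uses a reduced model to monitor the coefficient cascade; here we test the decay directly): if the high-degree chaos coefficients of an element carry too much of the spectral energy, the expansion is converging slowly there and the element is bisected.

```python
# Toy sketch of random-space mesh refinement: split an element when the
# tail of its polynomial chaos coefficients carries too much energy.
# Thresholds and names are illustrative, not the paper's.

def needs_refinement(coeffs, tail_start, tol):
    """True if the high-degree terms carry more than `tol` of the energy."""
    total = sum(c * c for c in coeffs)
    tail = sum(c * c for c in coeffs[tail_start:])
    return total > 0 and tail / total > tol

def split(element):
    """Bisect a 1D random-space element (a, b)."""
    a, b = element
    mid = (a + b) / 2.0
    return [(a, mid), (mid, b)]

smooth = [1.0, 0.5, 0.01, 0.001]   # rapidly decaying coefficients -> keep
rough = [1.0, 0.8, 0.7, 0.6]       # slow decay near a discontinuity -> refine
```

Near a discontinuity the coefficients decay slowly, the rule fires, and each bisection halves the element until a low-degree polynomial suffices on each piece.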
Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms
William Y. Crutchfield
1993-01-01
We describe C++ classes that simplify the development of adaptive mesh refinement (AMR) algorithms. The classes divide into two groups: generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures and Fortran responsible for the low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.
CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM
Miniati, Francesco; Martin, Daniel F.
2011-01-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
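The divergence-preservation property of constrained transport can be demonstrated in a few lines. This is a toy 2D sketch of our own, not the CHARM implementation: face-centered B is advanced by the discrete curl of an edge-centered electric field, and the update telescopes so the cell-centered divergence of B cannot change.

```python
# Toy 2D constrained-transport update on a staggered grid: Bx on x-faces,
# By on y-faces, Ez on cell corners (edges in 2D). The discrete curl
# telescopes, so div B stays at machine zero for ANY Ez field.
import random

nx, ny, dx, dy, dt = 4, 4, 1.0, 1.0, 0.1
bx = [[0.0] * ny for _ in range(nx + 1)]        # x-face field
by = [[0.0] * (ny + 1) for _ in range(nx)]      # y-face field
ez = [[random.random() for _ in range(ny + 1)] for _ in range(nx + 1)]

# Advance the induction equation: dB/dt = -curl E (discrete, per face).
for i in range(nx + 1):
    for j in range(ny):
        bx[i][j] -= dt * (ez[i][j + 1] - ez[i][j]) / dy
for i in range(nx):
    for j in range(ny + 1):
        by[i][j] += dt * (ez[i + 1][j] - ez[i][j]) / dx

# Cell-centered divergence: the Ez contributions cancel pairwise.
div = [[(bx[i + 1][j] - bx[i][j]) / dx + (by[i][j + 1] - by[i][j]) / dy
        for j in range(ny)] for i in range(nx)]
max_div = max(abs(d) for row in div for d in row)
```

Each corner value of Ez enters the divergence of a cell four times with cancelling signs, which is the discrete analogue of div curl = 0; this is the property the AMR synchronization steps (restriction, prolongation, reflux-curl) must preserve across refinement boundaries.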
Liu, Hao
2016-01-01
This Ph.D. work takes place within the framework of studies on Pellet-Cladding mechanical Interaction (PCI), which occurs in the fuel rods of pressurized water reactors. This manuscript focuses on automatic mesh refinement to simulate this phenomenon more accurately while maintaining acceptable computational time and memory space for industrial calculations. An automatic mesh refinement strategy based on the combination of the Local Defect Correction multigrid method (LDC) with the Zienkiewicz and Zhu a posteriori error estimator is proposed. The estimated error is used to detect the zones to be refined, where the local sub-grids of the LDC method are generated. Several stopping criteria are studied to end the refinement process when the solution is accurate enough or when further refinement no longer improves the global solution accuracy. Numerical results for elastic 2D test cases with pressure discontinuity show the efficiency of the proposed strategy. The automatic mesh refinement in the case of unilateral contact problems is then considered. The strategy previously introduced can easily be adapted to multi-body refinement by estimating the solution error on each body separately. Post-processing is often necessary to ensure the conformity of the refined areas with regard to the contact boundaries. A variety of numerical experiments with elastic contact (with or without friction, with or without an initial gap) confirms the efficiency and adaptability of the proposed strategy. (author) [fr]
Local mesh refinement for incompressible fluid flow with free surfaces
Terasaka, H.; Kajiwara, H.; Ogura, K. [Tokyo Electric Power Company (Japan)] [and others]
1995-09-01
A new local mesh refinement (LMR) technique has been developed and applied to incompressible fluid flows with free surface boundaries. The LMR method embeds patches of fine grid in arbitrary regions of interest. Hence, more accurate solutions can be obtained with a smaller number of computational cells. This method is very suitable for the simulation of free surface movements, because free surface flow problems generally require a finer computational grid to obtain adequate results. By using this technique, one can place finer grids only near the surfaces, and therefore greatly reduce the total number of cells and the computational cost. This paper introduces LMR3D, a three-dimensional incompressible flow analysis code. Numerical examples calculated with the code clearly demonstrate the advantages of the LMR method.
Block-structured Adaptive Mesh Refinement - Theory, Implementation and Application
Deiterding Ralf
2011-12-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included, and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Baiges Aznar, Joan; Bayona Roa, Camilo Andrés
2017-01-01
In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions where the nodes of the mesh belong to a single processor, whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper a...
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of node-surface fluxes and explore the possibility of node refinement in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported.
Local multigrid mesh refinement in view of nuclear fuel 3D modelling in pressurised water reactors
Barbie, L.
2013-01-01
The aim of this study is to improve the performance, in terms of memory space and computational time, of the current modelling of Pellet-Cladding mechanical Interaction (PCI), a complex phenomenon which may occur during high power rises in pressurised water reactors. Among mesh refinement methods - methods dedicated to efficiently treating local singularities - a local multigrid approach was selected because it enables the use of a black-box solver while dealing with few degrees of freedom at each level. The Local Defect Correction (LDC) method, well suited to a finite element discretization, was first analysed and checked in linear elasticity on configurations resulting from PCI, since its use in solid mechanics is not widespread. Various strategies concerning the implementation of the multilevel algorithm were also compared. Coupling the LDC method with the Zienkiewicz-Zhu a posteriori error estimator, in order to automatically detect the zones to be refined, was then tested. Performances obtained on two-dimensional and three-dimensional cases are very satisfactory, since the proposed algorithm is more efficient than h-adaptive refinement methods. Lastly, the LDC algorithm was extended to nonlinear mechanics. Space/time refinement as well as transmission of the initial conditions during the re-meshing step were examined. The first results obtained are encouraging and show the interest of using the LDC method for PCI modelling. (author) [fr]
Direct numerical simulation of bubbles with parallelized adaptive mesh refinement
Talpaert, A.
2015-01-01
The study of two-phase thermal-hydraulics is a major topic in nuclear engineering, for both the safety and the efficiency of nuclear facilities. In addition to experiments, numerical modelling helps to determine precisely where bubbles appear and how they behave, in the core as well as in the steam generators. This work presents the finest scale of representation of two-phase flows, Direct Numerical Simulation of bubbles. We use the 'Di-phasic Low Mach Number' equation model, which is particularly well adapted to low Mach number flows, that is to say flows whose velocity is much slower than the speed of sound; this is very typical of nuclear thermal-hydraulics conditions. Because we study bubbles, we capture the front between the vapor and liquid phases using a downward flux-limiting numerical scheme. The specific discrete analysis technique this work introduces is well-balanced parallel Adaptive Mesh Refinement (AMR). With AMR, we refine the coarse grid on a set of patches in order to locally increase precision in the areas that matter most, and to capture fine changes in the front location and its topology. We show that patch-based AMR is well suited to parallel computing. We use a variety of physical examples: forced advection, heat transfer, phase changes represented by a Stefan model, as well as the combination of all these models. We present the results of these numerical simulations, as well as the speed-up compared to equivalent non-AMR simulations and to serial computation of the same problems. This document is made up of an abstract and the slides of the presentation. (author)
Conforming to interface structured adaptive mesh refinement: 3D algorithm and implementation
Nagarajan, Anand; Soghrati, Soheil
2018-03-01
A new non-iterative mesh generation algorithm named conforming to interface structured adaptive mesh refinement (CISAMR) is introduced for creating 3D finite element models of problems with complex geometries. CISAMR transforms a structured mesh composed of tetrahedral elements into a conforming mesh with low element aspect ratios. The construction of the mesh begins with the structured adaptive mesh refinement of elements in the vicinity of material interfaces. An r-adaptivity algorithm is then employed to relocate selected nodes of nonconforming elements, followed by face-swapping a small fraction of them to eliminate tetrahedrons with high aspect ratios. The final conforming mesh is constructed by sub-tetrahedralizing remaining nonconforming elements, as well as tetrahedrons with hanging nodes. In addition to studying the convergence and analyzing element-wise errors in meshes generated using CISAMR, several example problems are presented to show the ability of this method for modeling 3D problems with intricate morphologies.
Hydrodynamics in full general relativity with conservative adaptive mesh refinement
East, William E.; Pretorius, Frans; Stephens, Branson C.
2012-06-01
There is great interest in numerical relativity simulations involving matter due to the likelihood that binary compact objects involving neutron stars will be detected by gravitational wave observatories in the coming years, as well as to the possibility that binary compact object mergers could explain short-duration gamma-ray bursts. We present a code designed for simulations of hydrodynamics coupled to the Einstein field equations targeted toward such applications. This code has recently been used to study eccentric mergers of black hole-neutron star binaries. We evolve the fluid conservatively using high-resolution shock-capturing methods, while the field equations are solved in the generalized-harmonic formulation with finite differences. In order to resolve the various scales that may arise, we use adaptive mesh refinement (AMR) with grid hierarchies based on truncation error estimates. A noteworthy feature of this code is the implementation of the flux correction algorithm of Berger and Colella to ensure that the conservative nature of fluid advection is respected across AMR boundaries. We present various tests to compare the performance of different limiters and flux calculation methods, as well as to demonstrate the utility of AMR flux corrections.
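The Berger-Colella flux correction mentioned above can be sketched in one dimension. The real algorithm also accumulates fine-level fluxes over the refined time sub-steps; this simplified sketch shows only the spatial part of the conservation fix at a coarse-fine face:

```python
def flux_correction(coarse_flux, fine_fluxes):
    """Conservative flux correction at a coarse-fine AMR boundary (sketch).

    The coarse cell adjacent to the boundary was updated using `coarse_flux`
    through the shared face. Conservation requires that face to carry the
    average of the fine-level fluxes instead, so the returned correction
    (fine average minus coarse flux) is added to the coarse cell's update.
    """
    fine_average = sum(fine_fluxes) / len(fine_fluxes)
    return fine_average - coarse_flux
```

When coarse and fine fluxes already agree, the correction vanishes and the update is unchanged.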
Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement
Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim
2018-06-01
We have explored the evolution of gas distributions in cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, in order to assess the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of the ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed-grid approach, with more power on small scales in the AMR simulations, even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.
A simple nodal force distribution method in refined finite element meshes
Park, Jai Hak [Chungbuk National University, Chungju (Korea, Republic of); Shin, Kyu In [Gentec Co., Daejeon (Korea, Republic of); Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)
2017-05-15
In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to define the geometry accurately. After mesh refinement, equivalent nodal forces should be calculated at the nodes of the refined mesh. If the field variables and material properties are available at the integration points of each element, then accurate equivalent nodal forces can be calculated using an adequate numerical integration. However, in certain circumstances, equivalent nodal forces cannot be calculated because the field variable data are not available. In this study, a very simple nodal force distribution method is proposed. The nodal forces of the original finite element mesh are distributed to the nodes of the refined mesh so as to satisfy the equilibrium conditions. The effect of element size should also be considered in determining the magnitudes of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify its accuracy and effectiveness. The results show that accurate stress fields can be obtained from refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6% in models with 8-node hexahedral elements and less than 1% in models with 20-node hexahedral elements or 10-node tetrahedral elements.
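A minimal sketch of an equilibrium-preserving distribution like the one described, assuming the element-size effect enters through simple per-node weights (e.g. tributary areas); the weighting scheme is an assumption for illustration, not the paper's exact rule:

```python
def distribute_nodal_force(coarse_force, weights):
    """Distribute a coarse-mesh nodal force among refined-mesh nodes.

    `weights` are size-dependent weights (e.g. tributary areas) of the
    refined nodes receiving the force. Normalizing by the weight sum makes
    the distributed forces add up to the original force, so the global
    equilibrium conditions are preserved.
    """
    total = float(sum(weights))
    return [coarse_force * w / total for w in weights]
```

Larger adjacent elements receive proportionally more of the force, while the total is exactly conserved.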
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or the location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is given to applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservation properties of the underlying numerics. The effect on fourth- and sixth-order numerical fluxes is explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable
Radiation transport code with adaptive Mesh Refinement: acceleration techniques and applications
Velarde, Pedro; Garcia-Fernandez, Carlos; Portillo, David; Barbas, Alfonso
2011-01-01
We present a study of acceleration techniques for solving Sn radiation transport equations with Adaptive Mesh Refinement (AMR). Both DSA and TSA are considered, taking into account the influence of the interaction between different levels of the mesh structure and the order of approximation in angle. A hybrid method is proposed in order to obtain a better convergence rate and lower computing times. Some examples relevant to ICF and X-ray secondary sources are presented. (author)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach at hand of a parameter calibration problem for a model flow problem.
Gerya, T.; Duretz, T.; May, D. A.
2012-04-01
We present a new 2D adaptive mesh refinement (AMR) algorithm based on stress-conservative finite differences formulated for a non-uniform rectangular staggered grid. The refinement approach is based on repetitive cell splitting organized via a quad-tree construction (every parent cell is split into 4 daughter cells of equal size). Irrespective of the level of resolution, every cell has 5 staggered nodes (2 horizontal velocities, 2 vertical velocities and 1 pressure) for which the respective governing equations, boundary conditions and interpolation equations are formulated. The connectivity of the grid is achieved via cross-indexing of grid cells and the basic nodal points located in their corners: four corner nodes are indexed for every cell, and up to 4 surrounding cells are indexed for every node. The accuracy of the approach depends critically on the formulation of the stencil used at the "hanging" velocity nodes located at the boundaries between different levels of resolution. The most accurate results are obtained for the scheme based on the volume flux balance across the resolution boundary combined with stress-based interpolation of the velocity orthogonal to the boundary. We tested this new approach against a number of 2D variable-viscosity analytical solutions. Our tests demonstrate that the adaptive staggered grid formulation has convergence properties similar to those obtained with a standard, non-adaptive staggered grid formulation, and that the measured convergence rates are insensitive to whether the transition in grid resolution crosses sharp viscosity contrast interfaces. We compared various grid refinement strategies based on the distribution of different field variables such as viscosity, density and velocity. According to these tests, the refinement allows for a significant (0.5-1 order of magnitude) increase in the computational accuracy at the same
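The quad-tree cell splitting described above can be sketched as a single refinement step. Representing a cell by its lower-left corner and side length is an illustrative simplification of the paper's cell/node cross-indexing:

```python
def split_cell(x, y, size):
    """Split a parent cell into four equal daughter cells (quad-tree step).

    A cell is identified by its lower-left corner (x, y) and its side
    length. The four daughters tile the parent exactly, each with half
    the parent's side length.
    """
    h = size / 2.0
    return [(x,     y,     h),   # lower-left daughter
            (x + h, y,     h),   # lower-right daughter
            (x,     y + h, h),   # upper-left daughter
            (x + h, y + h, h)]   # upper-right daughter
```

Repeated application of this step to flagged cells builds the quad-tree hierarchy of resolution levels.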
Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.
2006-01-01
Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters, and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one- and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
Hornung, R.D. [Duke Univ., Durham, NC (United States)
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
A parallel adaptive mesh refinement algorithm for predicting turbulent non-premixed combusting flows
Gao, X.; Groth, C.P.T.
2005-01-01
A parallel adaptive mesh refinement (AMR) algorithm is proposed for predicting turbulent non-premixed combusting flows characteristic of gas turbine engine combustors. The Favre-averaged Navier-Stokes equations governing mixture and species transport for a reactive mixture of thermally perfect gases in two dimensions, the two transport equations of the k-ω turbulence model, and the time-averaged species transport equations are all solved using a fully coupled finite-volume formulation. A flexible block-based hierarchical data structure is used to maintain the connectivity of the solution blocks in the multi-block mesh and to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. This AMR approach allows for anisotropic mesh refinement, and the block-based data structure readily permits efficient and scalable implementations of the algorithm on multi-processor architectures. Numerical results for turbulent non-premixed diffusion flames, including cold- and hot-flow predictions for a bluff-body burner, are described and compared to available experimental data. The numerical results demonstrate the validity and potential of the parallel AMR approach for predicting complex non-premixed turbulent combusting flows. (author)
Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver
Kůs, Pavel; Šístek, Jakub
2017-01-01
Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737
Greg L. Bryan
2002-01-01
As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo, designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10¹² in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.
Adaptive mesh refinement with spectral accuracy for magnetohydrodynamics in two space dimensions
Rosenberg, D; Pouquet, A; Mininni, P D
2007-01-01
We examine the accuracy of high-order spectral element methods, with or without adaptive mesh refinement (AMR), in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang (OT) vortex made up of a magnetic X-point centred on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit, and uses the Elsaesser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere in the fluid context (Rosenberg et al 2006 J. Comput. Phys. 215 59-80); the code allows both statically refined and dynamically refined grids. Tests of the algorithm using analytic solutions are described, and comparisons of the OT solutions with pseudo-spectral computations are performed. We demonstrate for moderate Reynolds numbers that the algorithms using both static and refined grids reproduce the pseudo-spectral solutions quite well. We show that low-order truncation, even with a comparable number of global degrees of freedom, fails to correctly model some strong (sup-norm) quantities in this problem, even though it adequately satisfies the weak (integrated) balance diagnostics.
Penner, Joyce E.; Andronova, Natalia; Oehmke, Robert C.; Brown, Jonathan; Stout, Quentin F.; Jablonowski, Christiane; van Leer, Bram; Powell, Kenneth G.; Herzog, Michael
2007-07-01
One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.
GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS
Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong
2010-01-01
We present the newly developed code GAMER (GPU-accelerated Adaptive-MEsh-Refinement code), which adopts a novel approach to improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor through the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented on the GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in the GPU, and the computing time of the ghost-zone values for each patch is hidden by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run on a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations on different hardware implementations, in which detailed timing analyses provide a comparison between computations with and without GPU acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.
A new adaptive mesh refinement data structure with an application to detonation
Ji, Hua; Lien, Fue-Sang; Yee, Eugene
2010-11-01
A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the use of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobarry, R. deFainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 126 (2000) 330] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures, which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure, which was required in previous approaches for the organization of the AMR data, is no longer needed for this new data structure. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
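The index arithmetic that makes tree-free connectivity possible can be sketched as follows; CSAMR's exact index conventions may differ, so treat these formulas as illustrative:

```python
def parent(i, j, k, level):
    """Parent cell of (i, j, k) on the next-coarser level."""
    return (i // 2, j // 2, k // 2, level - 1)

def children(i, j, k, level):
    """The eight children of (i, j, k) on the next-finer level."""
    return [(2 * i + di, 2 * j + dj, 2 * k + dk, level + 1)
            for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]

def face_neighbors(i, j, k, level):
    """Same-level face neighbors, computed purely from the indices."""
    return [(i + 1, j, k, level), (i - 1, j, k, level),
            (i, j + 1, k, level), (i, j - 1, k, level),
            (i, j, k + 1, level), (i, j, k - 1, level)]
```

Because all connectivity follows from the indices themselves, only the indices need to be stored, e.g. as keys in a hash table, rather than explicit parent/child/neighbor pointers.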
Juan J. Garcia-Cantero
2017-06-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphics card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
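A minimal version of an interface-based refinement indicator like the one described can be sketched as a sign-change test on the level-set field sampled in a block; the exact form of the paper's root-block indicator is not reproduced here, so this test is an illustrative assumption:

```python
def block_needs_refinement(phi_values):
    """Refinement indicator for one AMR block (sketch).

    `phi_values` are level-set values sampled in the block (negative in
    one fluid, positive in the other). A sign change inside the block
    means the phase interface crosses it, so the block should be refined.
    """
    return min(phi_values) < 0.0 < max(phi_values)
```

Blocks away from the interface keep a single sign and stay coarse, which is what concentrates the mesh near the phase interface.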
3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks
Samtaney, S.; Jardin, S.C.; Colella, P.; Martin, D.F.
2003-01-01
We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises the single-fluid MHD equations with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the near-continuum deficiencies in terms of computational cost of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load-balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
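The dynamic operator switch described above can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the hard-sphere mean-free-path estimate is a textbook formula, but the switching threshold and reference diameter are illustrative assumptions.

```python
import math

def mean_free_path(n_density, d_ref=4.17e-10):
    """Hard-sphere estimate: lambda = 1 / (sqrt(2) * pi * d^2 * n).
    d_ref here is roughly a nitrogen molecular diameter (assumed value)."""
    return 1.0 / (math.sqrt(2.0) * math.pi * d_ref ** 2 * n_density)

def choose_operator(n_density, cell_size, kn_switch=0.05):
    """Per-cell switch on a local Knudsen number Kn = lambda / L:
    discrete DSMC collisions where the gas is rarefied, the Fokker-Planck
    operator (fixed cost per particle) near the continuum limit.
    kn_switch is an illustrative threshold, not the paper's criterion."""
    kn = mean_free_path(n_density) / cell_size
    return "dsmc" if kn > kn_switch else "fokker-planck"
```

For a near-continuum cell (high number density), the switch selects the Fokker-Planck operator, avoiding the per-collision cost that makes pure DSMC expensive there; dilute cells keep the discrete collision operator.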
Patched based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It has been shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will only grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution where fine-scale features develop and removing them where the solution behaves smoothly. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems.
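The core adaptation mechanism of a wavelet-based AMR method can be illustrated compactly. The sketch below uses one level of the Haar transform rather than whatever wavelet family the WAMR method actually employs, and the field, threshold, and function names are invented for illustration: detail coefficients are large only where the solution has fine-scale structure, and only those locations are flagged for refinement.

```python
import math

def haar_details(samples):
    """One level of the Haar transform: d_i = (s_{2i} - s_{2i+1}) / sqrt(2)."""
    return [(samples[2 * i] - samples[2 * i + 1]) / math.sqrt(2.0)
            for i in range(len(samples) // 2)]

def active_points(samples, eps):
    """Indices of coarse cells whose detail amplitude exceeds the threshold,
    i.e. the places where a wavelet-based AMR scheme would refine."""
    return [i for i, d in enumerate(haar_details(samples)) if abs(d) > eps]

# A field that is smooth everywhere except for a sharp front near x = 0.5.
n = 64
f = [math.tanh(50.0 * (i / n - 0.5)) for i in range(n)]
flagged = active_points(f, eps=1e-3)
```

Only the handful of cells straddling the front are flagged; the smooth regions contribute detail amplitudes far below the threshold, which is exactly why the degrees of freedom scale with the solution's fine-scale content rather than with the domain size.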
Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.
2012-01-01
We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is unlikely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower-speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s⁻¹ and 200 km, respectively), and we discuss the potential consequences for the first flames.
Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics
Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.
2006-06-01
Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
Direct numerical simulation of bubbles with adaptive mesh refinement with distributed algorithms
Talpaert, Arthur
2017-01-01
This PhD work presents the implementation of the simulation of two-phase flows under the conditions of water-cooled nuclear reactors, at the scale of individual bubbles. To achieve this, we study several models for thermal-hydraulic flows and focus on a technique for capturing the thin interface between liquid and vapour phases. We review possible techniques for Adaptive Mesh Refinement (AMR) and provide algorithmic and computational tools adapted to patch-based AMR, whose aim is to locally improve precision in regions of interest. More precisely, we introduce a patch-covering algorithm designed with balanced parallel computing in mind. This approach lets us finely capture changes located at the interface, as we show for advection test cases as well as for models with hyperbolic-elliptic coupling. The computations we present also include the simulation of the incompressible Navier-Stokes system, which models the shape changes of the interface between two non-miscible fluids. (author)
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
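The Jacobian-Free Newton-Krylov idea mentioned above is compact enough to sketch directly. This is a generic illustration, not the paper's AMR implementation: the Krylov solver never forms the Jacobian J, it only needs matrix-vector products J·v, which are approximated by a finite difference of the nonlinear residual F. The residual used in the example is an invented toy problem.

```python
def jfnk_matvec(F, u, v, eps=1e-7):
    """Approximate J(u) @ v as (F(u + eps*v) - F(u)) / eps,
    the directional-derivative trick at the heart of JFNK methods."""
    Fu = F(u)
    u_pert = [ui + eps * vi for ui, vi in zip(u, v)]
    return [(fp - f0) / eps for fp, f0 in zip(F(u_pert), Fu)]

# Toy residual: F(u)_i = u_i^2 - 4, whose exact Jacobian is diag(2 u_i).
F = lambda u: [ui * ui - 4.0 for ui in u]
u = [1.0, 3.0]
v = [1.0, 1.0]
Jv = jfnk_matvec(F, u, v)   # exact J @ v would be [2, 6]
```

Because only residual evaluations are required, the same machinery works unchanged on a composite AMR grid, where assembling an explicit Jacobian across refinement levels would be costly and intricate; the multilevel preconditioner then does the work of keeping Krylov iteration counts level-independent.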
Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement
Shervani-Tabar, Navid; Vasilyev, Oleg V.
2016-11-01
This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for finer resolution in the vicinity of the interface than in the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.
2011-01-01
Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics: extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.
Hummels, Cameron B.; Bryan, Greg L.
2012-01-01
We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find that a gas disk forms in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy to the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.
Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths
Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.
2018-04-01
We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
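A tree walk governed by a multipole acceptance criterion is the central operation of solvers like the one above. The sketch below is a generic Barnes-Hut-style opening-angle MAC in 1D, not the FLASH tree code's approximate partial error MAC, and the class layout is invented: a node is accepted as a single monopole when its size subtends a small enough angle at the target point; otherwise its children are opened and visited recursively.

```python
class Node:
    """Tree node: position, spatial size, total mass, child nodes (illustrative)."""
    def __init__(self, x, size, mass, children=()):
        self.x, self.size, self.mass, self.children = x, size, mass, children

def potential(node, x, theta=0.5, soft=1e-3):
    """Gravitational potential at x via a tree walk with an opening-angle MAC."""
    d = abs(node.x - x) + soft
    if not node.children or node.size / d < theta:
        return -node.mass / d          # accepted: monopole approximation
    return sum(potential(c, x, theta, soft) for c in node.children)

# Two unit masses close together, far from the evaluation point.
leaf1 = Node(0.1, 0.0, 1.0)
leaf2 = Node(0.2, 0.0, 1.0)
root = Node(0.15, 0.1, 2.0, (leaf1, leaf2))
far = potential(root, 10.0)              # node accepted: one monopole term
near = potential(root, 0.15, theta=0.1)  # node opened: children summed directly
```

Tightening the MAC (smaller theta) opens more nodes and approaches the direct sum; the paper's point is that a well-chosen MAC keeps the error small while visiting only a tiny fraction of the tree, and that only the locally needed parts of the tree have to be communicated between processors.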
De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico
2012-02-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r⁻ᵏ, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
Phillips, Carolyn L.
2014-09-01
In a complex self-organizing system, small changes in the interactions between the system's components can result in different emergent macrostructures or macrobehavior. In chemical engineering and material science, such spontaneously self-assembling systems, using polymers, nanoscale or colloidal-scale particles, DNA, or other precursors, are an attractive way to create materials that are precisely engineered at a fine scale. Changes to the interactions can often be described by a set of parameters. Different contiguous regions in this parameter space correspond to different ordered states. Since these ordered states are emergent, often experiment, not analysis, is necessary to create a diagram of ordered states over the parameter space. By issuing queries to points in the parameter space (e.g., performing a computational or physical experiment), ordered states can be discovered and mapped. Queries can be costly in terms of resources or time, however. In general, one would like to learn the most information using the fewest queries. Here we introduce a learning heuristic for issuing queries to map and search a two-dimensional parameter space. Using a method inspired by adaptive mesh refinement, the heuristic iteratively issues batches of queries to be executed in parallel based on past information. By adjusting the search criteria, different types of searches (for example, a uniform search, exploring boundaries, sampling all regions equally) can be flexibly implemented. We show that this method will densely search the space, while preferentially targeting certain features. Using numerical examples, including a study simulating the self-assembly of complex crystals, we show how this heuristic can discover new regions and map boundaries more accurately than a uniformly distributed set of queries.
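The batch-query idea described above can be sketched compactly. This is an illustrative reduction, not the paper's heuristic: starting from a coarse grid over a 2D parameter space, each round "queries" the corners of every active cell and subdivides only the cells whose corner labels disagree, so queries concentrate on the boundaries between ordered states. All names and the example phase diagram are invented.

```python
def refine_map(label, cells, rounds):
    """label(x, y) -> hashable phase id (stands in for one costly experiment);
    cells: list of (x0, y0, size) squares. Returns the cells left active after
    the given number of subdivision rounds, clustered along phase boundaries."""
    for _ in range(rounds):
        next_cells = []
        for (x0, y0, s) in cells:
            corners = {label(x0, y0), label(x0 + s, y0),
                       label(x0, y0 + s), label(x0 + s, y0 + s)}
            if len(corners) > 1:             # a phase boundary crosses this cell
                h = s / 2.0
                next_cells += [(x0, y0, h), (x0 + h, y0, h),
                               (x0, y0 + h, h), (x0 + h, y0 + h, h)]
        cells = next_cells
    return cells

# Two "phases" separated by the line x + y = 1 in the unit square.
label = lambda x, y: int(x + y > 1.0)
boundary_cells = refine_map(label, [(0.0, 0.0, 1.0)], rounds=5)
```

Each round's corner queries are independent and can be issued as one parallel batch, which is the property the adaptive-mesh-refinement-inspired heuristic exploits; changing the subdivision criterion (e.g. also keeping a fraction of agreeing cells) shifts the search between boundary-hunting and uniform exploration.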
Donmez, Orhan
We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive Mesh Refinement (AMR) and model an accretion disk around a black hole. To do this, the GRH equations are written in conservative form to exploit their hyperbolic character. The equations are solved numerically with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D and 3D. Second, we solve the GRH equations and use general relativistic test problems to compare the numerical solutions with analytic ones. To do so, we couple the flux part of the general relativistic hydrodynamic equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around a black hole. Finally, we apply this code to accretion disk problems around a black hole, using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, shock waves are created that destroy the accretion disk.
Essadki, Mohamed
2016-09-01
Predictive simulation of liquid fuel injection in automotive engines has become a major challenge for science and applications. The key issue in order to properly predict various combustion regimes and pollutant formation is to accurately describe the interaction between the carrier gaseous phase and the polydisperse evaporating spray produced through atomization. For this purpose, we rely on the EMSM (Eulerian Multi-Size Moment) Eulerian polydisperse model. It is based on a high order moment method in size, with an entropy-maximization technique in order to provide a smooth reconstruction of the distribution, derived from a Williams-Boltzmann mesoscopic model under the monokinetic assumption [O. Emre (2014) PhD Thesis, École Centrale Paris; O. Emre, R.O. Fox, M. Massot, S. Chaisemartin, S. Jay, F. Laurent (2014) Flow, Turbulence and Combustion 93, 689-722; O. Emre, D. Kah, S. Jay, Q.-H. Tran, A. Velghe, S. de Chaisemartin, F. Laurent, M. Massot (2015) Atomization Sprays 25, 189-254; D. Kah, F. Laurent, M. Massot, S. Jay (2012) J. Comput. Phys. 231, 394-422; D. Kah, O. Emre, Q.-H. Tran, S. de Chaisemartin, S. Jay, F. Laurent, M. Massot (2015) Int. J. Multiphase Flows 71, 38-65; A. Vié, F. Laurent, M. Massot (2013) J. Comp. Phys. 237, 277-310]. The present contribution relies on a major extension of this model [M. Essadki, S. de Chaisemartin, F. Laurent, A. Larat, M. Massot (2016) Submitted to SIAM J. Appl. Math.], with the aim of building a unified approach and coupling with a separated phases model describing the dynamics and atomization of the interface near the injector. The novelty is to be found in terms of modeling, numerical schemes and implementation. A new high order moment approach is introduced using fractional moments in surface, which can be related to geometrical quantities of the gas-liquid interface. We also provide a novel algorithm for an accurate resolution of the evaporation. Adaptive mesh refinement properly scaling on massively
AbouEisha, Hassan M.
2017-07-13
We consider a class of two- and three-dimensional h-refined meshes generated by an adaptive finite element method. We introduce an element partition tree, which controls the execution of the multi-frontal solver algorithm over these refined grids. We propose and study algorithms with polynomial computational cost for the optimization of these element partition trees. The trees provide an ordering for the elimination of unknowns. The algorithms automatically optimize the element partition trees using extensions of dynamic programming. The construction of the trees by the dynamic programming approach is expensive; the generated trees cannot be used in practice, but rather are utilized as a learning tool to propose fast heuristic algorithms. In this first part of our paper we focus on the dynamic programming approach and draw a sketch of the heuristic algorithm. The second part will be devoted to a more detailed analysis of the heuristic algorithm extended for the case of hp-adaptive
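The role of an element partition tree can be illustrated with a simple recursive-bisection stand-in (not the paper's optimized dynamic-programming construction): elements are split recursively by coordinate, and a post-order traversal of the resulting binary tree yields an elimination ordering for a multi-frontal solver, with children eliminated before their parent front. Element ids and centroids below are illustrative.

```python
# Toy element partition tree via recursive coordinate bisection.
# elements: list of (id, x, y) centroids; the tree is nested 2-tuples
# with element ids at the leaves.

def build_tree(elements):
    """Recursively bisect elements along their longest coordinate extent."""
    if len(elements) == 1:
        return elements[0][0]
    xs = [e[1] for e in elements]
    ys = [e[2] for e in elements]
    axis = 1 if max(xs) - min(xs) >= max(ys) - min(ys) else 2
    ordered = sorted(elements, key=lambda e: e[axis])
    mid = len(ordered) // 2
    return (build_tree(ordered[:mid]), build_tree(ordered[mid:]))

def elimination_order(tree, out=None):
    """Post-order traversal: children are eliminated before the parent front."""
    if out is None:
        out = []
    if isinstance(tree, tuple):
        elimination_order(tree[0], out)
        elimination_order(tree[1], out)
    else:
        out.append(tree)
    return out

# A 4x4 patch of elements on a unit grid (illustrative):
mesh = [(i * 4 + j, j * 1.0, i * 1.0) for i in range(4) for j in range(4)]
tree = build_tree(mesh)
order = elimination_order(tree)
```

The quality of such a tree (fill-in, flop count) is exactly what the paper's dynamic-programming algorithms optimize; the bisection above is only a cheap heuristic of the kind the authors aim to learn.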
Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers
Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.
2014-01-01
This paper explores the implementation of MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.
Vay, J.-L.; Colella, P.; McCorquodale, P.; Van Straalen, B.; Friedman, A.; Grote, D.P.
2002-01-01
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and simulation of the power plant as a whole, or even of the driver, is not yet possible. Despite the rapid progress in computer power, past and anticipated, one must consider the use of the most advanced numerical techniques if the goal is to be reached expeditiously. One of the difficulties of these simulations resides in the disparity of scales, in time and in space, which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven effective in other areas (e.g., fluid dynamics simulations) is the mesh refinement technique. The authors discuss the challenges posed by the implementation of this technique in plasma simulations (due to the presence of particles and electromagnetic waves) and present the prospects for, and projected benefits of, its application to heavy ion fusion, in particular to the simulation of the ion source and the final beam propagation in the chamber. A collaboration project is under way at LBNL between the Applied Numerical Algorithms Group (ANAG) and the HIF group to couple the Adaptive Mesh Refinement (AMR) library CHOMBO, developed by the ANAG group, to the Particle-In-Cell accelerator code WARP, developed by the HIF-VNL. The authors describe their progress and present their initial findings.
A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms
Hasbestan, Jaber J.; Senocak, Inanc
2017-12-01
Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on the construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh; at least eight pointers are required in an explicit approach to construct an octree, and parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three-dimensional visualization. Lewiner et al. [1] provide a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
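As a rough illustration of the two key concepts named above, a pointerless octree can be stored as a hash table keyed by a node's refinement level together with the Morton (Z-order) code of its grid indices; parent and child keys then follow from integer arithmetic instead of stored pointers. The bit-spreading routine below is the standard 3-D Morton encoding for 64-bit keys; the payload strings and the 21-bit-per-axis packing are illustrative assumptions.

```python
# Pointerless octree: a hash table keyed by (level, Morton code).

def part1by2(n):
    """Spread the bits of n so they occupy every third bit position."""
    n &= 0x1FFFFF                          # 21 bits per axis fit a 63-bit key
    n = (n | n << 32) & 0x1F00000000FFFF
    n = (n | n << 16) & 0x1F0000FF0000FF
    n = (n | n << 8) & 0x100F00F00F00F00F
    n = (n | n << 4) & 0x10C30C30C30C30C3
    n = (n | n << 2) & 0x1249249249249249
    return n

def morton(i, j, k):
    """Interleave three grid indices into a single Z-order key."""
    return part1by2(i) | part1by2(j) << 1 | part1by2(k) << 2

# The "tree" is just a dictionary; no child pointers are stored.
octree = {}
level, i, j, k = 3, 5, 2, 7
octree[(level, morton(i, j, k))] = "leaf data"

# The parent's key is obtained by halving the indices (integer arithmetic
# replaces the parent pointer of an explicit octree).
parent_key = (level - 1, morton(i // 2, j // 2, k // 2))
octree.setdefault(parent_key, "interior data")
```

Because Z-order keys of nearby cells are numerically close, this layout also gives the cache-friendly traversal order that pointerless AMR codes exploit.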
European oil refining: strategies for a competitive future
MacDonald, James.
1997-07-01
European Oil Refining investigates how the industry came to be in crisis and what the future holds. As well as an extensive analysis of past and present market shifts, the report predicts likely future developments and their consequences for investors. The report reviews the European oil sector in a global context, calculates the cost to refiners of key environmental legislation, assesses the problems caused by changing product demand and crude supply, examines possible solutions to the problems of low margins and overcapacity, evaluates the key players' main strategies to increase their competitiveness, analyses the western European oil refining industry by country, details the refinery operations of the major countries of central and eastern Europe, profiles 15 of the major oil companies and estimates the increase in investment required as a result of legislative and demand changes. (author)
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2018-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured, on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
Padmanabhan, R.; Oliveira, M. C.; Baptista, A. J.; Menezes, L. F.; Alves, J. L.
2007-01-01
The springback phenomenon associated with the elastic properties of sheet metals makes the design of forming dies a complex task. Thus, to develop consistent algorithms for springback compensation, an accurate prediction of the amount of springback is mandatory. Numerical simulation using the finite element method is generally regarded as the only feasible method to predict springback. However, springback prediction is a very complicated task and highly sensitive to various numerical parameters of the finite elements (FE), such as type, order, integration scheme, shape and size, as well as the time integration formulae and the unloading strategy. All these numerical parameters make the numerical simulation of springback more sensitive to numerical tolerances than the forming operation. In the case of unconstrained cylindrical bending, the in-plane to thickness FE size ratio is more relevant than the number of FE layers through the thickness for the numerical prediction of final stress and strain states, variables of paramount importance for an accurate springback prediction. The aim of the present work is to evaluate the influence of the refinement of a 3-D FE mesh, namely the in-plane mesh refinement and the number of through-thickness FE layers, on springback prediction. The selected example corresponds to the first stage of the 'Numisheet'05 Benchmark no. 3', which consists basically in the sheet forming of a channel section in an industrial-scale channel draw die. The physical drawbeads are accurately taken into account in the numerical model in order to accurately reproduce their influence during the forming process simulation. FEM simulations were carried out with the in-house code DD3IMP. Solid finite elements were used; they are recommended for accuracy in FE springback simulation when the ratio between the tool radius and blank thickness is lower than 5-6. In the selected example the drawbead radius is 4.0 mm. The influence of the FE mesh refinement in springback prediction is
Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky
2018-06-01
We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct-, quad- and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of the points of interest, such as interfaces, geometrical features, or flow discontinuities. The local increase in the expansion order (p-refinement) at areas of high strain rates or vorticity magnitude results in an increase of the order of accuracy in the region of shear layers and vortices. A graph of unitarian-trees, representing hexahedral, prismatic and tetrahedral elements, is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements, allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates the on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with a minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across the non-conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. Also, the AMR methodology is used for the modelling of the interaction between droplets and the carrier phase in a two-phase flow. This approach is applied to the analysis of a spray injected into a chamber of quiescent air, using the Eulerian
Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny
2018-02-01
A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
Le Tellier, R.; Fournier, D.; Suteau, C.
2011-01-01
Within the framework of a Discontinuous Galerkin spatial approximation of the multigroup discrete ordinates transport equation, we present a generalization of the exact standard perturbation formula that takes into account spatial discretization-induced reactivity changes. It encompasses in two separate contributions the nuclear data-induced reactivity change and the reactivity modification induced by two different spatial discretizations. The two potential uses of such a formulation when considering adaptive mesh refinement are discussed, and numerical results on a simple two-group Cartesian two-dimensional benchmark are provided. In particular, such a formulation is shown to be useful to filter out a more accurate estimate of nuclear data-related reactivity effects from initial and perturbed calculations based on independent adaptation processes. (authors)
Lopez-Camara, D.; Lazzati, Davide [Department of Physics, NC State University, 2401 Stinson Drive, Raleigh, NC 27695-8202 (United States); Morsony, Brian J. [Department of Astronomy, University of Wisconsin-Madison, 2535 Sterling Hall, 475 N. Charter Street, Madison, WI 53706-1582 (United States); Begelman, Mitchell C., E-mail: dlopezc@ncsu.edu [JILA, University of Colorado, 440 UCB, Boulder, CO 80309-0440 (United States)
2013-04-10
We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.
Truelove, J.K.; Klein, R.I.; McKee, C.F.; Holliman, J.H. II; Woods, D.T.; Howell, L.H.; Greenough, J.A.
1998-01-01
We describe a new code for numerical solution of three-dimensional self-gravitational hydrodynamics problems. This code utilizes the technique of local adaptive mesh refinement (AMR), employing multiple grids at multiple levels of resolution and automatically and dynamically adding and removing these grids as necessary to maintain adequate resolution. This technology allows solution of problems that would be prohibitively expensive with a code using fixed resolution, and it is more versatile and efficient than competing methods of achieving variable resolution. In particular, we apply this technique to simulate the collapse and fragmentation of a molecular cloud, a key step in star formation. The simulation involves many orders of magnitude of variation in length scale as fragments form at positions that are not a priori discernible from general initial conditions. In this paper, we describe the methodology behind this new code and present several illustrative applications. The criterion that guides the degree of adaptive mesh refinement is critical to the success of the scheme, and, for the isothermal problems considered here, we employ the Jeans condition for this purpose. By maintaining resolution finer than the local Jeans length, we set new benchmarks of accuracy by which to measure other codes on each problem we consider, including the uniform collapse of a finite pressured cloud. We find that the uniformly rotating, spherical clouds treated here first collapse to disks in the equatorial plane and then, in the presence of applied perturbations, form filamentary singularities that do not fragment while isothermal. Our results provide numerical confirmation of recent work by Inutsuka & Miyama on this scenario of isothermal filament formation. copyright 1998 The American Astronomical Society
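The Jeans condition mentioned above can be sketched as a per-cell refinement test: refine wherever the cell width is not small compared with the local Jeans length, lambda_J = sqrt(pi * c_s^2 / (G * rho)). The requirement of at least four cells per Jeans length follows Truelove et al.; the physical values below are illustrative assumptions, not taken from the paper.

```python
# Jeans-condition refinement flag for an isothermal, self-gravitating gas.
import math

G = 6.674e-8                      # gravitational constant, CGS units

def jeans_length(c_s, rho):
    """Jeans length for sound speed c_s (cm/s) and density rho (g/cm^3)."""
    return math.sqrt(math.pi * c_s ** 2 / (G * rho))

def needs_refinement(dx, c_s, rho, max_jeans_number=0.25):
    """True when the cell violates the condition dx < 0.25 * lambda_J."""
    return dx > max_jeans_number * jeans_length(c_s, rho)

# A dense region of a collapsing isothermal cloud (illustrative numbers):
c_s = 2.0e4                       # 0.2 km/s sound speed, in cm/s
rho = 1.0e-18                     # g/cm^3
dx = 5.0e16                       # cell width in cm, too coarse for this rho
flag = needs_refinement(dx, c_s, rho)
```

Because lambda_J shrinks as the density grows during collapse, cells that were adequately resolved earlier keep tripping this test, which is what drives the dynamic addition of finer grids.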
Angelidis, Dionysios; Sotiropoulos, Fotis
2015-11-01
The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.
Amaziane, Brahim; Bourgeois, Marc; El Fatini, Mohamed
2014-01-01
In this paper, we consider adaptive numerical simulation of miscible displacement problems in porous media, which are modeled by single phase flow equations. A vertex-centred finite volume method is employed to discretize the coupled system: the Darcy flow equation and the diffusion-convection concentration equation. The convection term is approximated with a Godunov scheme over the dual finite volume mesh, whereas the diffusion-dispersion term is discretized by piecewise linear conforming finite elements. We introduce two kinds of indicators, both of residual type. The first is related to time discretization and is local with respect to time: thus, at each time, it provides appropriate information for the choice of the next time step. The second is related to space discretization and is local with respect to both the time and space variables: the idea is that at each time it provides an efficient tool for mesh adaptivity. An error estimation procedure evaluates where additional refinement is needed and grid generation procedures dynamically create or remove fine-grid patches as resolution requirements change. The method was implemented in the software MELODIE, developed by the French Institute for Radiological Protection and Nuclear Safety (IRSN, Institut de Radioprotection et de Sûreté Nucléaire). The algorithm is then used to simulate the evolution of radionuclide migration from the waste packages through a heterogeneous disposal, demonstrating its capability to capture the complex behavior of the resulting flow. (authors)
Rising costs call for new European refining strategies
Sweeney, B.N.C.
1993-01-01
The outlook for the global refining industry is for increased spending and reduced margins, largely because of efforts to improve the environment. A look at these trends through the end of the decade is thus in order. Three main thrusts are proposed to see refiners through this uncertain period. First, fixed costs must be reduced by re-engineering business processes and re-examining noncore business units against total and marginal costs; in this respect the best refiners are well ahead of the good ones. Second, new cooperative ways of meeting regulations must be sought, to avoid wasteful overcapacity; joint ventures and alliances with competitors will be needed. Third, the cooperative principle upstream must be extended and new strategies must be sought to meet product demand changes and reduce feedstock costs. The picture that is presented is tough, largely because of the wish to improve the environment. The question that must be continually reviewed is: "Have governments got the right balance in these regulations between the environment and the downstream industry?"
Deng, Xiaolong; Dong, Haibo
2017-11-01
Developing a high-fidelity, high-efficiency numerical method for bio-inspired flow problems with flow-structure interaction is important for understanding the related physics and developing many bio-inspired technologies. To simulate a fast-swimming big fish with multiple finlets or fish schooling, we need fine grids and/or a big computational domain, which are big challenges for 3-D simulations. In the current work, based on the 3-D finite-difference sharp-interface immersed boundary method for incompressible flows (Mittal et al., JCP 2008), we developed an octree-like Adaptive Mesh Refinement (AMR) technique to enhance the computational ability and increase the computational efficiency. The AMR is coupled with a multigrid acceleration technique and an MPI+OpenMP hybrid parallelization. In this work, different AMR layers are treated separately, synchronization is performed in the buffer regions, and iterations are performed for the convergence of the solution. Each big region is calculated by an MPI process which then uses multiple OpenMP threads for further acceleration, so that the communication cost is reduced. With these acceleration techniques, various canonical and bio-inspired flow problems with complex boundaries can be simulated accurately and efficiently. This work is supported by the MURI Grant Number N00014-14-1-0533 and NSF Grant CBET-1605434.
Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS)
Amor, H.; Bourgeois, M.
2012-01-01
using an adaptive mesh refinement strategy was introduced in MELODIE for the simulation of groundwater flow and solute transport in saturated porous media in 2 dimensions. The selected estimator, based on the explicit residual error, is expected to allow local refinements and thus minimization of the discretization error at an optimal computational cost. Test case: a realistic heterogeneous case with fracturing. In addition to theoretical test cases, a more complex case was tested. The purpose of this test case was twofold: - to move from pure theoretical work to an illustrative case within a realistic generic context; however, parameter values for hydrodynamic characteristics were chosen so as to highlight the investigated phenomena; - to account for large time and space scales, representative of those required for the simulation of radioactive waste repositories. The general shape of the geological media was designed to cover the main features representative of sedimentary formations. Three distinct radionuclide source locations were chosen in order to obtain a set of flow and transport configurations. The entire layer sequence was structured into three hydrogeological units intersected by three sub-vertical faults. The vertical 2D cross-section dimensions are 5 km long by 500 m thick. Two source terms are located in a 100 m-thick layer in the right part of the domain and another one is located in a larger layer in the left part. These two 'host rock' layers consist of the same sedimentary unit with a low permeability, with an offset due to the middle fault. Faults are considered as conductive features. Radionuclides are assumed to be instantaneously released from the three source term locations at t = 0. The a posteriori error estimator and the adaptive mesh algorithm were applied to this heterogeneous problem. Preliminary calculations showed that the implemented a posteriori error estimator method is efficient to solve the equations of flow and advective
Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar
2017-11-01
A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.
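A gradient-driven quadtree of the kind described above can be sketched as follows: cells near a steep concentration gradient (the solid-liquid interface) are split, and flat regions are coarsened. The concentration field, tolerances and refinement depth below are toy assumptions, not the paper's solidification model.

```python
# Quadtree refinement/coarsening driven by a concentration gradient.
import math

class Cell:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine(self):
        h = self.size / 2
        self.children = [Cell(self.x + dx, self.y + dy, h)
                         for dx in (0, h) for dy in (0, h)]

    def coarsen(self):
        self.children = []

def gradient_magnitude(field, cell):
    """Central-difference estimate of |grad c| at the cell centre."""
    cx, cy, h = cell.x + cell.size / 2, cell.y + cell.size / 2, cell.size
    gx = (field(cx + h, cy) - field(cx - h, cy)) / (2 * h)
    gy = (field(cx, cy + h) - field(cx, cy - h)) / (2 * h)
    return math.hypot(gx, gy)

def adapt(cell, field, refine_tol=0.4, coarsen_tol=0.1, min_size=1 / 32):
    """Refine near steep gradients (the interface), coarsen in flat regions."""
    g = gradient_magnitude(field, cell)
    if cell.children:
        if g < coarsen_tol:
            cell.coarsen()
        else:
            for c in cell.children:
                adapt(c, field, refine_tol, coarsen_tol, min_size)
    elif g > refine_tol and cell.size / 2 >= min_size:
        cell.refine()

# Toy concentration field with a steep interface at x = 0.5:
field = lambda x, y: 1.0 / (1.0 + math.exp(-50.0 * (x - 0.5)))
root = Cell(0.0, 0.0, 1.0)
for _ in range(4):                 # each pass deepens the tree by one level
    adapt(root, field)
```

After a few passes the finest cells cluster in a band around x = 0.5 while the rest of the domain stays coarse, mirroring the refine-near-interface, coarsen-elsewhere behaviour the abstract describes.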
Success in Asian refining -- Strategies for a growth industry
Burke, B.F.
1994-01-01
Asia offers some of the best growth opportunities to the global refining industry. Many of its nations are in the process of industrializing, with a rapid rise in living standards and associated energy use. Growth trends are firmly established, with petroleum use in particular increasing rapidly as modern transport infrastructures develop. Of perhaps greater importance is Asia's potential for continued growth based on population and GDP trends. While the expected growth offers a wide range of opportunities, succeeding in the unique Asian marketplace will be a challenge mastered by only a few new entrants. This paper will examine some of the critical issues that will drive success and shape the refining business in Asia. Industry fundamentals will be reviewed, with a focus on growth and profit drivers. With the fundamental framework established, the unique challenges posed by the Asian environment, including cultural issues, investment requirements, regulatory trends and issues in market development will be discussed. The ability to succeed in Asia by identifying factors that create local advantage within a very diverse region will be discussed. The need to merge corporate capabilities and objectives with regional opportunities is the key requirement to succeed in entering or expanding in the region
Nonlinear iterative strategy for NEM refinement and extension
Engrand, P.R.; Maldonado, G.I.; Al-Chalabi, R.; Turinsky, P.J.
1992-01-01
The work discussed in this paper is related to the nonlinear iterative strategy developed by Smith to solve the nodal expansion method (NEM) representation of the neutron diffusion equations. The authors show how it is possible to save computation time by taking advantage of the reducibility of the matrices that have to be inverted when employing this strategy. In addition, they show how this strategy can be adapted in an easy and efficient manner to time-dependent problems
Torej, Allen J.; Rizwan-Uddin
2001-01-01
The nodal integral method (NIM) has been developed for several problems, including the Navier-Stokes equations, the convection-diffusion equation, and the multigroup neutron diffusion equations. The coarse-mesh efficiency of the NIM is not fully realized in problems characterized by a wide range of spatial scales. However, combining adaptive mesh refinement (AMR) capability with the NIM can recover the coarse-mesh efficiency by allowing high degrees of resolution in specific localized areas where it is needed, while using a lower resolution everywhere else. Furthermore, certain features of the NIM can be fruitfully exploited in the application of the AMR process. In this paper, we outline a general approach to couple nodal schemes with AMR and then apply it to the convection-diffusion (energy) equation. The development of the NIM with AMR capability (NIM-AMR) is based on the well-known Berger-Oliger method for structured AMR. In general, the main components of all AMR schemes are (1) the solver; (2) the level-grid hierarchy; (3) the selection algorithm; (4) the communication procedures; and (5) the governing algorithm. The first component, the solver, consists of the numerical scheme for the governing partial differential equations and the algorithm used to solve the resulting system of discrete algebraic equations. In the case of the NIM-AMR, the solver is the iterative approach to the solution of the set of discrete equations obtained by applying the NIM. Furthermore, in the NIM-AMR, the level-grid hierarchy (the second component) is based on the Hierarchical Adaptive Mesh Refinement (HAMR) system, and hence the details of the hierarchy are omitted here. In the selection algorithm, regions of the domain that require mesh refinement are identified. The criterion to select regions for mesh refinement can be based on the magnitude of the gradient or on the Richardson truncation error estimate. Although an excellent choice for the selection criterion, the Richardson
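The Richardson truncation error estimate named as a selection criterion can be illustrated on a second-order central difference: comparing results at spacings h and 2h isolates the leading error term, and points where the estimate exceeds a tolerance would be flagged for refinement. A hypothetical 1D sketch, not the NIM-AMR code itself:

```python
import math

def dcentral(f, x, h):
    # second-order central difference, error ~ C * h**2
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_error(f, x, h, p=2):
    """Richardson estimate of the error in the step-h result (order p)."""
    return (dcentral(f, x, 2 * h) - dcentral(f, x, h)) / (2 ** p - 1)

# Flag sample points whose estimated error exceeds the tolerance -- the
# role the estimate plays in an AMR selection algorithm.
h, tol = 0.1, 1e-4
points = [0.2 * k for k in range(1, 10)]
flagged = [x for x in points if abs(richardson_error(math.sin, x, h)) > tol]
```

For the smooth test function the estimate tracks the true discretization error closely, which is what makes it a usable refinement indicator.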
Non-linear iterative strategy for nem refinement and extension
Engrand, P.R.; Maldonado, G.I.; Al-Chalabi, R.; Turinsky, P.J.
1994-10-01
The following work is related to the non-linear iterative strategy developed by K. Smith to solve the Nodal Expansion Method (NEM) representation of the neutron diffusion equations. We show how to improve this strategy and how to adapt it to time-dependent problems. This work has been done in the NESTLE code, developed at North Carolina State University. When using Smith's strategy, one ends up with a two-node problem which corresponds to a matrix with a fixed structure and a size of 16 x 16 (for a 2-group representation). We show how to reduce this matrix to two equivalent systems whose sizes are 4 x 4 and 8 x 8. The whole problem requires the solution of many of these two-node problems; therefore, the gain in CPU time reaches 45% in the nodal part of the code. To adapt Smith's strategy to time-dependent problems, the idea is to obtain the same structure of the two-node problem system as in the steady-state calculation. To achieve this, one has to approximate the values of the past time step and of the previous one by a second-order polynomial and to treat it as a source term. We show here how to make this approximation consistent and accurate. (authors). 1 tab., 2 refs
Nicholas, Paul; Stasiuk, David; Nørgaard, Esben
2015-01-01
This paper describes the development of a modelling approach for the design and fabrication of an incrementally formed, stressed skin metal structure. The term incremental forming refers to a progression of localised plastic deformation to impart 3D form onto a 2D metal sheet, directly from 3D...... design data. A brief introduction presents this fabrication concept, as well as the context of structures whose skin plays a significant structural role. Existing research into ISF privileges either the control of forming parameters to minimise geometric deviation, or the more accurate measurement...... of the impact of the forming process at the scale of the grain. But enhancing structural performance for architectural applications requires that both aspects be considered synthetically. We demonstrate a mesh-based approach that incorporates critical parameters at the scales of structure, element...
Refining the focus: Alberta's international marketing strategy
NONE
2000-07-01
This strategic plan is the key initiative established under 'Alberta's Framework for International Strategies'. Its objective is to ensure the well-being of Albertans, sustain Alberta's environment, and foster economic growth by successfully taking advantage of Alberta's many opportunities for marketing its goods, products, and services. It is predicated on industry and government continuing to work together to sustain a strong market-driven economy, strengthen Alberta's economic advantages and build an economic environment conducive to investment and growth in quality jobs. At present more than 150 foreign markets buy Alberta's goods and services, but the obvious focus of any strategy must be those regions and sectors of industry that offer the greatest possibilities for new and expanded opportunities for Alberta business. Accordingly, this strategy identifies priorities, selects the best initiatives and develops activities to achieve economic growth. Adding value to Albertan commodities before they are shipped to export markets is a particular objective of the plan. An equally important consideration is to achieve growth through expanding existing investments, attracting new investment to the province, and increasing exports in response to international market and investment opportunities for Alberta's goods and services. Major topics discussed in the document include a discussion of the importance of trade and investment, a thorough analysis of the marketing priorities, the strategic framework, and priority market profiles for the United States, Japan, China, other Asia-Pacific markets, the European Union, Mexico, the Middle East and South Asia, and South America.
User-centric Query Refinement and Processing Using Granularity Based Strategies
Zeng, Y.; Zhong, N.; Wang, Y.; Qin, Y.; Huang, Z.; Zhou, H; Yao, Y; van Harmelen, F.A.H.
2011-01-01
Under the context of large-scale scientific literatures, this paper provides a user-centric approach for refining and processing incomplete or vague query based on cognitive- and granularity-based strategies. From the viewpoints of user interests retention and granular information processing, we
Multi-criteria Group Decision Making based on Linguistic Refined Neutrosophic Strategy
Kalyan Mondal; Surapati Pramanik; Bibhas C. Giri
2018-01-01
Multi-criteria group decision making (MCGDM) strategy, which consists of a group of experts acting collectively for the best selection among all possible alternatives with respect to some criteria, is the focus of this study. To develop the paper, we define the linguistic neutrosophic refined set.
Vay, J.-L.; Friedman, A.; Grote, D.P.
2002-01-01
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and, despite rapid progress in computer power, one must consider the use of the most advanced numerical techniques. One of the difficulties of these simulations resides in the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the Adaptive-Mesh-Refinement (AMR) technique. We follow in this article the progress accomplished in the last few months in the merging of the AMR technique with Particle-In-Cell (PIC) method. This includes a detailed modeling of the Lampel-Tiefenback solution for the one-dimensional diode using novel techniques to suppress undesirable numerical oscillations and an AMR patch to follow the head of the particle distribution. We also report new results concerning the modeling of ion sources using the axisymmetric WARPRZ-AMR prototype showing the utility of an AMR patch resolving the emitter vicinity and the beam edge
An adaptive grid refinement strategy for the simulation of negative streamers
Montijn, C.; Hundsdorfer, W.; Ebert, U.
2006-01-01
The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tip of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the decoupling of the grids for the continuity equations from those for the Poisson equation. Standard refinement methods in which the refinement criterion is based on local error monitors fail due to the pulled character of the streamer front that propagates into a linearly unstable state. We present a refinement method which deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented; they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers.
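As a toy illustration of why the front location, rather than a local error monitor, must drive the flagging for a pulled front, one can mark the cells spanning the ionization front and pad them with buffer cells ahead of it, where the front will move next. This sketch (hypothetical names, 1D, made-up thresholds) is not the authors' refinement scheme:

```python
import math

def flag_front_cells(density, floor=1e-8, bulk=0.99, buffer_cells=4):
    """Flag cells in the front region, plus a buffer ahead of the front."""
    in_front = [floor < d < bulk for d in density]
    flags = in_front[:]
    for i, hit in enumerate(in_front):
        if hit:  # pad downstream so the pulled leading edge stays refined
            for j in range(i, min(i + buffer_cells + 1, len(flags))):
                flags[j] = True
    return flags

# Mock electron-density profile: an ionized channel decaying ahead of the
# front position (cell index ~20), as a stand-in for a streamer profile.
density = [1.0 / (1.0 + math.exp((i - 20) / 2.0)) for i in range(60)]
flags = flag_front_cells(density)
```

The key point mirrored here is that cells with vanishingly small density ahead of the front are refined anyway, because that is where the linear instability grows.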
Pointer, William David [ORNL
2017-08-01
The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
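Of the two mesh-sensitivity measures named above, the Grid Convergence Index is easy to make concrete: with three systematically refined meshes one estimates the observed order of convergence and converts the fine-to-medium change into an uncertainty band. The numbers below are illustrative, not data from the Westinghouse tests, and the Least Squares method is not shown:

```python
import math

def observed_order(f1, f2, f3, r):
    """Apparent convergence order from three grid levels (f1 finest)."""
    return math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

def gci_fine(f1, f2, r, p, fs=1.25):
    """Grid Convergence Index of the fine-grid value, as a fraction."""
    return fs * abs((f2 - f1) / f1) / (r ** p - 1)

# Mock outlet-velocity magnitudes from fine, medium and coarse meshes
# with refinement ratio r = 2 (illustrative values only).
f1, f2, f3, r = 10.02, 10.08, 10.32, 2.0
p = observed_order(f1, f2, f3, r)
uncertainty = gci_fine(f1, f2, r, p)
```

As the abstract notes, this works for quantities tied to a fixed location; a global extremum can migrate between meshes, which breaks the premise of the comparison.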
Dynamics of the European refining and petrochemical industry. Strategies, structure and change
Steenbakkers, K.
1997-01-01
The changes in the market position of producers engaged in the oil refining and basic petrochemical industry on the Western European market are the central theme of this book. Analysis of this reshuffling process among these actors is conducted on three levels. First, research is carried out at the level of world regions. In order to understand the reorganization of oil refining and basic petrochemical production in Western Europe, it is necessary to explore the recent aggregate dynamics of these activities on a global scale. Second, the differences in strategic behaviour are examined at the level of groups of market participants, namely the major oil companies, the chemical companies, the state-owned companies from both consumer and producer countries, and the independents. Finally, the investment/disinvestment decisions in the Western European oil refining and basic petrochemical industry are investigated at the level of the individual firm. Particular emphasis is placed upon explaining why companies active in the sectors under study have followed different strategies, although they have been confronted with similar adverse market conditions in Western Europe during the last decades. 341 refs
Core, X.
2002-02-01
The isobar approximation for the system of balance equations of mass, momentum, energy and chemical species is a suitable approximation to represent low-Mach-number reactive flows. In this approximation, which neglects acoustic phenomena, the mixture is hydrodynamically incompressible and the thermodynamic effects lead to a uniform compression of the system. We present a novel numerical scheme for this approximation. An incremental projection method, which uses the original form of the mass balance equation, discretizes the Navier-Stokes equations in time. Spatial discretization is achieved through a finite volume approach on a MAC-type staggered mesh. A higher-order upwind (decentered) scheme is used to compute the convective fluxes. We associate with this discretization a local mesh refinement method based on the Flux Interface Correction technique. A first application concerns a forced flow with variable density which mimics a combustion problem. The second application is natural convection, first with small temperature variations and then beyond the limit of validity of the Boussinesq approximation. Finally, we treat a third application, a laminar diffusion flame. For each of these test problems, we demonstrate the robustness of the proposed numerical scheme, notably for the spatial variations of density. We analyze the gain in accuracy obtained with the local mesh refinement method. (author)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and the flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolutions. The overset grid assembly (OGA) process based on collection detection theory and an implicit hole-cutting algorithm achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used for obtaining a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, the advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
Toraya, H.
2000-01-01
The crystal structure of α-silicon nitride (Si3N4) was refined by the Rietveld method using synchrotron radiation powder diffraction data (wavelength = 1.2 Å) collected at station BL-4B2 in the Photon Factory. A refinement procedure that adopted a new weight function, w = 1/Y_o^e (where Y_o is the observed profile intensity and e ≈ 2), for the least-squares fitting [Toraya (1998). J. Appl. Cryst. 31, 333-343] was studied. The most reasonable structural parameters were obtained with e = 1.7. Crystal data of α-Si3N4: trigonal, P31c, a = 7.75193 (3), c = 5.61949 (4) Å, V = 292.447 (3) Å^3, Z = 4; R_p = 5.08, R_wp = 6.50, R_B = 3.36, R_F = 2.26%. The following five factors are considered equally important for deriving accurate structural parameters from powder diffraction data: (i) a sufficiently large sin θ/λ range of >0.8 Å^-1; (ii) adequate counting statistics; (iii) a correct profile model; (iv) proper weighting on observations to give a uniform distribution of the mean weighted squared residuals; (v) high-angular-resolution powder diffraction data. (orig.)
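The modified least-squares weight from the abstract, w = 1/Y_o^e, interpolates between conventional counting-statistics weighting (e = 1) and stronger down-weighting of intense points. A small illustrative computation of the weighted-profile R factor, using made-up intensities rather than the Si3N4 data:

```python
def weights(y_obs, e):
    # w_i = 1 / Y_o,i ** e ; e = 1 recovers counting-statistics weights
    return [1.0 / y ** e for y in y_obs]

def r_wp(y_obs, y_calc, w):
    """Weighted-profile R factor, as a fraction (multiply by 100 for %)."""
    num = sum(wi * (yo - yc) ** 2 for wi, yo, yc in zip(w, y_obs, y_calc))
    den = sum(wi * yo ** 2 for wi, yo in zip(w, y_obs))
    return (num / den) ** 0.5

y_obs = [120.0, 4000.0, 900.0, 15000.0]   # illustrative profile counts
y_calc = [118.0, 4080.0, 880.0, 14700.0]
w = weights(y_obs, 1.7)                   # the e ~ 1.7 found optimal above
```

With e > 1 the strong reflections contribute relatively less to the residual, which is the mechanism behind the more uniform distribution of weighted residuals mentioned in point (iv).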
Bogaers, Alfred EJ
2010-01-01
of Laplacian or Bi-harmonic equations [7], radial basis function (RBF) interpolation [3, 15] or through mesh optimization [1, 6]. Despite the successes of these algorithms in reducing the frequency and necessity for remeshing, they still account for a... simulations of a real system. What makes POD remarkable is that the selected modes are not only appropriate but make up the optimal linear basis for describing any given system. POD has been applied in a wide range of disciplines including image processing...
Surface meshing with curvature convergence
Li, Huibin; Zeng, Wei; Morvan, Jean-Marie; Chen, Liming; Gu, Xianfeng David
2014-01-01
Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.
2015-01-01
Mesh generation and visualization software based on the CGAL library. Folder content: drawmesh Visualize slices of the mesh (surface/volumetric) as wireframe on top of an image (3D). drawsurf Visualize surfaces of the mesh (surface/volumetric). img2mesh Convert isosurface in image to volumetric mesh (medit format). img2off Convert isosurface in image to surface mesh (off format). off2mesh Convert surface mesh (off format) to volumetric mesh (medit format). reduce Crop and resize 3D and stacks of images. data Example data to test the library on...
Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei
2015-12-28
Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is indispensable. To extend its application range and improve the performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields an enhanced performance for producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Similar to other unbounded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks need to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.
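The "simple parameter screening strategy" can be caricatured as a grid search that scores candidate parameters against experimental targets. Everything below is a mock stand-in: the `mock_*` functions are not real MD estimates, and the two targets are only approximate literature values for Mg(2+):

```python
TARGET_DG_SOLV = -437.4   # kcal/mol, approximate Mg2+ hydration free energy
TARGET_R_MG_O = 2.09      # angstrom, approximate first-shell Mg-O distance

def mock_dg_solv(sigma, eps):   # placeholder for an MD free-energy estimate
    return -500.0 + 40.0 * sigma - 20.0 * eps

def mock_r_mg_o(sigma, eps):    # placeholder for an MD structural estimate
    return 1.2 + 0.6 * sigma + 0.05 * eps

def score(sigma, eps):
    """Smaller is better: normalized misfit against the two targets."""
    dg_miss = abs(mock_dg_solv(sigma, eps) - TARGET_DG_SOLV) / 5.0
    r_miss = abs(mock_r_mg_o(sigma, eps) - TARGET_R_MG_O) / 0.05
    return dg_miss + r_miss

candidates = [(s / 10.0, e / 10.0) for s in range(10, 21) for e in range(1, 11)]
best = min(candidates, key=lambda p: score(*p))
```

In a real screen each score evaluation would be a full simulation, which is why the abstract stresses getting the experimental target (the revised solvation free energy) right before screening.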
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Michele Maruccia
2017-01-01
Extensive skin defects represent a real problem and major challenge in plastic and reconstructive surgery. On one hand, skin grafts offer a practical method to deal with skin defects despite their unsuitability for several complicated wounds. On the other hand, negative pressure wound therapy (NPWT), applied before skin grafting, promotes granulation tissue growth. The aim of the study is to evaluate the improvement in wound healing given by the merger of these two different approaches. We treated 23 patients with large wounds of multifactorial origin. Of these, 15 were treated with the application of V.A.C.® Therapy (KCI Medical S.r.l., Milan, Italy) in combination with skin grafts after a prior unsuccessful treatment of 4 weeks with mesh skin grafts and dressings. Another 8 were treated with only mesh skin grafts. Pain reduction and wound area reduction were found statistically significant (p<0.0009, p<0.0001). Infection was resolved in almost all patients. According to our study, the use of negative pressure wound therapy over mesh skin grafts is significantly effective, especially in wounds resistant to conventional therapies, thereby improving the rate of skin graft take.
Mesh Adaptation and Shape Optimization on Unstructured Meshes, Phase I
National Aeronautics and Space Administration — In this SBIR CRM proposes to implement the entropy adjoint method for solution adaptive mesh refinement into the Loci/CHEM unstructured flow solver. The scheme will...
Prescott, Tonya L; Phillips II, Gregory; DuBois, L Zachary; Bull, Sheana S; Mustanski, Brian; Ybarra, Michele L
2016-08-04
Using social networking websites to recruit research participants is increasingly documented in the literature, although few studies have leveraged these sites to reach those younger than 18 years. To discuss the development and refinement of a recruitment protocol to reach and engage adolescent gay, bisexual, and other teenaged men who have sex with men (AGBM). Participants were recruited for development and evaluation activities related to Guy2Guy, a text messaging-based human immunodeficiency virus infection prevention program. Eligibility criteria included being between 14 to 18 years old; being a cisgender male; self-identifying as gay, bisexual, and/or queer; being literate in English, exclusively owning a cell phone, enrolled in an unlimited text messaging plan, intending to keep their current phone number over the next 6 months, and having used text messaging for at least the past 6 months. Recruitment experiences and subsequent steps to refine the Internet-based recruitment strategy are discussed for 4 research activities: online focus groups, content advisory team, beta test, and randomized controlled trial (RCT). Recruitment relied primarily on Facebook advertising. To a lesser extent, Google AdWords and promotion through partner organizations working with AGBM youth were also utilized. Facebook advertising strategies were regularly adjusted based on preidentified recruitment targets for race, ethnicity, urban-rural residence, and sexual experience. The result was a diverse sample of participants, of whom 30% belonged to a racial minority and 20% were Hispanic. Facebook advertising was the most cost-effective method, and it was also able to reach diverse recruitment goals: recruitment for the first focus group cost an average of US $2.50 per enrolled participant, and it took 9 days to enroll 40 participants; the second focus group cost an average of US $6.96 per enrolled participant, and it took 11 days to enroll 40 participants. Recruitment for the
Optimal algebraic multilevel preconditioning for local refinement along a line
Margenov, S.D.; Maubach, J.M.L.
1995-01-01
The application of some recently proposed algebraic multilevel methods for the solution of two-dimensional finite element problems on nonuniform meshes is studied. The locally refined meshes are created by the newest vertex mesh refinement method. After the introduction of this refinement technique
Parallel adaptation of general three-dimensional hybrid meshes
Kavouklis, Christos; Kallinderis, Yannis
2010-01-01
A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements: tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. Additional complexity imposed by the presence of multiple types of elements affects especially data migration, updates of local data structures and interpartition data structures. Efficient partitioning of hybrid meshes has been accomplished by transforming them to suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary, and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced, which yields savings in computation time and relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.
Paryz, Roman W.
2014-01-01
Several upgrade projects have been completed at the NASA Langley Research Center National Transonic Facility over the last 1.5 years in an effort defined as STARBUKS - Subsonic Transonic Applied Refinements By Using Key Strategies. This multi-year effort was undertaken to improve NTF's overall capabilities by addressing Accuracy and Validation, Productivity, and Reliability areas at the NTF. This presentation will give a brief synopsis of each of these efforts.
Mock, Stephen; Reynolds, William S; Dmochowski, Roger R
2014-05-01
The use of polypropylene mesh to augment surgery aimed at correcting pelvic organ prolapse and stress urinary incontinence stems largely from the high recurrence rates of native tissue repairs. While objective outcomes were improved, mesh-related complications began to emerge, including mesh exposures, extrusions, dyspareunia and other pain issues. However, the indication for and benefit of surgical intervention(s) to address these complications are lacking. We aim to review the current literature regarding postoperative pain outcomes following vaginal mesh revision. Evidence-based literature indicates that mesh complications are not rare and that surgery aiming to address them generally has an overall benefit. However, the available studies are generally small retrospective case series with short follow-up. Some themes are evident: there is a long lag period from mesh insertion to removal; there is a lack of a true denominator of total mesh insertions, making it hard to gauge the real scope of the problem; and mesh material found not along the expected trocar path, or coursing close to neurovascular structures, raises the possibility of technical errors during insertion. Transvaginal mesh revision(s) for mesh complications generally have a positive effect on pain outcomes, but better controlled studies are needed. Additionally, since technical issues may be a factor in the development of mesh complications, rigorous training and sufficient surgical case volume should be emphasized. © 2014 Wiley Publishing Asia Pty Ltd.
Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team
2017-11-01
We present a consistent cell-averaged discretization for incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain high order of accuracy near the coarse-fine interface and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).
Ahmadi, M. [Heriot Watt Univ., Edinburgh (United Kingdom)
2008-10-15
This paper described a project in which a higher-order upwinding scheme was used to solve mass/energy conservation equations for simulating steam flood processes in an oil reservoir. Thermal recovery processes are among the most complex because they require a detailed accounting of thermal energy and chemical reaction kinetics. The numerical simulation of thermal recovery processes involves localized phenomena such as saturation and temperature fronts due to the hyperbolic features of the governing conservation laws. A second-order accurate FV method, improved by a moving mesh strategy to adjust for moving coordinates on a finely gridded domain, was used. The problem of steam injection was then tested using the derived solution frameworks on both mixed and moving coordinates. The benefits of using a higher-order Godunov solver instead of lower-order ones were quantified; the second-order correction resulted in better resolution of moving features. The preference for higher-order solvers over lower-order ones in terms of shock capturing is under further investigation. It was concluded that although this simulation study was limited to steam flooding processes, the newly presented approach may be suitable for other enhanced oil recovery processes such as VAPEX, SAGD and in situ combustion. 23 refs., 28 figs.
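The second-order Godunov-type correction mentioned above can be sketched on the simplest hyperbolic model problem, linear advection of a sharp front. This is an illustrative stand-in under assumed parameters, not the paper's thermal reservoir simulator: a slope-limited (minmod) upwind finite-volume update on a periodic grid.

```python
import numpy as np

# Illustrative sketch (not the paper's simulator): second-order slope-limited
# upwind finite-volume update for linear advection u_t + a u_x = 0, a > 0.
def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step(u, a, dx, dt):
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes per cell
    left = u + 0.5 * s                                   # right-edge value of each cell
    flux = a * np.roll(left, 1)                          # upwind flux at each cell's left face
    return u - dt / dx * (np.roll(flux, -1) - flux)      # conservative update

n, a = 200, 1.0
dx, dt = 1.0 / n, 0.4 / (n * a)                          # CFL = 0.4
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)            # step (front) profile
mass0 = u.sum()
for _ in range(100):
    u = step(u, a, dx, dt)
print(abs(u.sum() - mass0) < 1e-10)                      # True (mass is conserved)
```

The limiter keeps the advected front free of spurious oscillations while resolving it more sharply than first-order upwinding, which is the qualitative benefit the abstract reports for moving saturation and temperature fronts.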
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
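The hierarchical-surplus refinement the abstract contrasts against can be sketched in one dimension. This is a minimal assumed setup, not the authors' algorithm: a point's surplus is the difference between the function value there and the value interpolated from its parent points, and only intervals with large surplus are subdivided.

```python
import numpy as np

# Minimal 1D sketch of surplus-driven refinement (assumed setup, not the
# authors' code): subdivide an interval only where the hierarchical surplus,
# i.e. f(midpoint) minus the linear prediction from the endpoints, is large.
def refine(f, a, b, tol, depth=0, max_depth=10):
    """Return (point, surplus) pairs of the adaptively built hierarchical grid."""
    m = 0.5 * (a + b)
    surplus = f(m) - 0.5 * (f(a) + f(b))    # linear interpolation from parents
    pts = [(m, surplus)]
    if abs(surplus) > tol and depth < max_depth:
        pts += refine(f, a, m, tol, depth + 1, max_depth)
        pts += refine(f, m, b, tol, depth + 1, max_depth)
    return pts

f = lambda x: np.tanh(20 * (x - 0.3))       # steep layer near x = 0.3
pts = refine(f, 0.0, 1.0, tol=1e-2)
xs = np.array([p[0] for p in pts])
frac_near_layer = np.mean(np.abs(xs - 0.3) < 0.2)
print(frac_near_layer > 0.5)                 # True: points cluster at the layer
```

Surplus-based refinement concentrates points where the interpolant is poor; the paper's contribution is to drive refinement instead with adjoint-based error estimates that also account for the physical discretization error.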
The quasidiffusion method for transport problems on unstructured meshes
Wieselquist, William A.
2009-06-01
In this work, we develop a quasidiffusion (QD) method for solving radiation transport problems on unstructured quadrilateral meshes in 2D Cartesian geometry, for example hanging-node meshes from adaptive mesh refinement (AMR) applications or skewed quadrilateral meshes from radiation hydrodynamics with Lagrangian meshing. The main result of the work is a new low-order quasidiffusion (LOQD) discretization on arbitrary quadrilaterals and a strategy for the efficient iterative solution which uses Krylov methods and incomplete LU factorization (ILU) preconditioning. The LOQD equations are a non-symmetric set of first-order PDEs that in second-order form resembles convection-diffusion with a diffusion tensor, with the difference that the LOQD equations contain extra cross-derivative terms. Our finite volume (FV) discretization of the LOQD equations is compared with three LOQD discretizations from literature. We then present a conservative, short characteristics discretization based on subcell balances (SCSB) that uses polynomial exponential moments to achieve robust behavior in various limits (e.g. small cells and voids) and is second-order accurate in space. A linear representation of the isotropic component of the scattering source based on face-average and cell-average scalar fluxes is also proposed and shown to be effective in some problems. In numerical tests, our QD method with linear scattering source representation shows some advantages compared to other transport methods. We conclude with avenues for future research and note that this QD method may easily be extended to arbitrary meshes in 3D Cartesian geometry.
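The ILU-preconditioned Krylov strategy described above can be demonstrated with standard SciPy tools. The matrix here is a generic non-symmetric tridiagonal stand-in mimicking convection-diffusion, not the LOQD system; the solver pattern (incomplete LU as a preconditioner inside GMRES) is the point.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch of the solver strategy: ILU-preconditioned GMRES on a stand-in
# non-symmetric convection-diffusion-like matrix (not the LOQD equations).
n = 50
main = 2.0 * np.ones(n)
lower = -1.2 * np.ones(n - 1)    # asymmetry mimics the convection terms
upper = -0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                    # incomplete LU factorization
M = spla.LinearOperator((n, n), ilu.solve)             # wrap it as a preconditioner
x, info = spla.gmres(A, b, M=M)
print(info == 0, np.linalg.norm(A @ x - b) < 1e-3)     # converged, small residual
```

For genuinely sparse systems the incomplete factorization is far cheaper than a full LU while still clustering the spectrum enough for GMRES to converge quickly, which is the rationale given in the abstract.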
Watson, F.V.
1982-01-01
An adaptation of the alternating direction method for coarse mesh calculations is presented. The algorithm is applicable to two- and three-dimensional problems, the latter being the more interesting one. (E.G.)
Pandemic planning : oilsands operators and the regional municipality constantly refine strategy
Ball, C.G.
2008-06-15
The Alberta government anticipates that between 17 and 43 per cent of the province's population will be affected during a potential influenza pandemic. It is expected that between 3,000 and 12,000 Albertans will be hospitalized and up to 3,000 will die. This article discussed emergency plans made by the oil and gas industry for future pandemics. Oil sands operators in the Wood Buffalo municipality prepared plans based on guidelines from the World Health Organization (WHO) and various government bodies. The transient nature of the region's population and its limited health resources may increase the level of risk associated with a pandemic. The planning process adopted by the region has been designed to give staff the ability to deal with increased numbers of people visiting the hospital. The planning process includes training exercises that range from desktop drills to the setting up of triage areas. Other plans include the identification of operations and processes that would be at risk in the event of a pandemic, as well as the identification of key operations and roles. Plans are constantly being refined in order to identify new areas of risk. 1 fig.
2015-12-01
...as having exceeded their expectations with a 96 percent customer satisfaction rating. While the 2014 Strategy introduced 22 implementation plans...for each of the military services. GAO has previously reported that milestones provide decision makers with the information they need to assess...supply chain to effectively and efficiently provide spare parts, food, fuel, and other critical supplies in support of U.S. military forces. DOD's goal
Ralf Deiterding
2011-01-01
Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.
Fournier, Damien; Le-Tellier, Romain; Herbin, Raphaele
2013-01-01
This paper presents an hp-refinement method for a first-order scalar transport-reaction equation discretized by a discontinuous Galerkin method. First, the theoretical rates of convergence of h- and p-refinement are recalled and numerically tested. Then, in order to design suitable meshes, we propose two different estimators of the local error on the spatial domain. These quantities are analyzed and compared depending on the regularity of the solution, so as to find the best way to guide the refinement process and the best strategy for choosing between h- and p-refinement. Finally, the different possible refinement strategies are compared, first on analytical examples and then on realistic applications for neutron transport in a nuclear reactor core. (authors)
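A common way to choose between h- and p-refinement, which the abstract's regularity-dependent comparison echoes, is to estimate local smoothness from the decay of spectral coefficients on an element. The sketch below is a hypothetical illustration of that idea (the estimator, threshold and test functions are assumptions, not the paper's estimators): fast Legendre-coefficient decay suggests a smooth solution (p-refine), slow decay suggests low regularity (h-refine).

```python
import numpy as np
import numpy.polynomial.legendre as leg

# Hypothetical h- vs p-refinement decision: fit the decay rate sigma of the
# Legendre coefficients |c_k| ~ C * exp(-sigma * k) on a reference element.
def choose_refinement(f, deg=8):
    x = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # sample points
    c = np.maximum(np.abs(leg.legfit(x, f(x), deg)), 1e-16)
    k = np.arange(deg + 1)
    sigma = -np.polyfit(k, np.log(c), 1)[0]    # slope of log|c_k| versus k
    return "p-refine" if sigma > 1.0 else "h-refine"    # assumed threshold

print(choose_refinement(np.exp))                       # smooth: p-refine
print(choose_refinement(lambda x: np.abs(x - 0.3)))    # kink: h-refine
```

Analytic functions have exponentially decaying coefficients (large sigma), while a kink forces only algebraic decay, so the fitted rate cleanly separates the two regimes.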
6th International Meshing Roundtable '97
White, D.
1997-09-01
The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.
Honda, Hideo; Shimizu, Yasuo; Nitto, Yukari; Imai, Miho; Ozawa, Takeshi; Iwasa, Mitsuaki; Shiga, Keiko; Hira, Tomoko
2009-01-01
Background: For early detection of autism, it is difficult to maintain an efficient level of sensitivity and specificity based on observational data from a single screening. The Extraction and Refinement (E&R) Strategy utilizes a public children's health surveillance program to produce maximum efficacy in early detection of autism. In the…
Urogynecologic Surgical Mesh Implants
... procedures performed to treat pelvic floor disorders with surgical mesh: transvaginal mesh to treat POP; transabdominal mesh to treat ... address safety risks; Final Order for Reclassification of Surgical Mesh for Transvaginal Pelvic Organ Prolapse Repair; Final Order for Effective ...
Fesharaki, F.; Isaak, D.
1984-01-01
A review of changes in the oil refining industry since 1973 examines the drop in capacity use and its effect on the profits of the Organisation for Economic Co-operation and Development (OECD) countries compared to world refining. OPEC countries used their new oil revenues to expand Gulf refineries, which put additional pressure on OECD refiners. OPEC involvement in global marketing, however, could help to secure supplies. Scrapping some older OECD refineries could improve the percentage of capacity in use if new construction is kept to a minimum. Other issues facing refiners are the changes in oil demand patterns and government responses to the market. 2 tables.
Towards automated crystallographic structure refinement with phenix.refine
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Echols, Nathaniel; Headd, Jeffrey J.; Moriarty, Nigel W.; Mustyakimov, Marat; Terwilliger, Thomas C.; Urzhumtsev, Alexandre; Zwart, Peter H.; Adams, Paul D.
2012-01-01
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods.
Cartesian anisotropic mesh adaptation for compressible flow
Keats, W.A.; Lien, F.-S.
2004-01-01
Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This paper discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for compressible flow. This technique, developed for laminar flow by Ham, Lien and Strong, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this paper the method will be applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. The convection scheme used is the Advection Upstream Splitting Method (AUSM+), and the refinement/coarsening criteria are based on work done by Ham et al. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code is presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant. (author)
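The direction-split flagging idea behind anisotropic adaptation can be sketched with a simple gradient indicator. This is an assumed, simplified criterion (the paper follows Ham et al.'s more elaborate criteria): cell variation is measured separately along x and along y, so a feature aligned with one axis triggers refinement only in the direction across it.

```python
import numpy as np

# Sketch of direction-split (anisotropic) refinement flagging with a simple
# one-sided-difference indicator; the paper's actual criteria are more refined.
def flag_cells(u, dx, dy, tol):
    """Flag cells for refinement separately in the x- and y-directions."""
    gx = np.abs(np.diff(u, axis=1)) / dx    # variation along x
    gy = np.abs(np.diff(u, axis=0)) / dy    # variation along y
    refine_x = np.zeros_like(u, dtype=bool)
    refine_y = np.zeros_like(u, dtype=bool)
    refine_x[:, :-1] |= gx > tol            # flag both cells sharing a steep face
    refine_x[:, 1:] |= gx > tol
    refine_y[:-1, :] |= gy > tol
    refine_y[1:, :] |= gy > tol
    return refine_x, refine_y

# A shear layer varying only in y: cells are refined in y but not in x.
y, x = np.mgrid[0:32, 0:32]
u = np.tanh((y - 16.0) / 2.0)
rx, ry = flag_cells(u, 1.0, 1.0, tol=0.1)
print(rx.any(), ry.any())    # False True
```

Refining only across the layer, not along it, is exactly the cell-count saving over isotropic refinement that the abstract reports.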
Gorrieri, R.; Rensink, Arend; Bergstra, J.A.; Ponse, A.; Smolka, S.A.
2001-01-01
In this chapter, we give a comprehensive overview of the research results in the field of action refinement during the past 12 years. The different approaches that have been followed are outlined in detail and contrasted to each other in a uniform framework. We use two running examples to discuss
Rimsa, Vadim; Eadsforth, Thomas C. [University of Dundee, Dundee DD1 5EH, Scotland (United Kingdom); Joosten, Robbie P. [Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Hunter, William N., E-mail: w.n.hunter@dundee.ac.uk [University of Dundee, Dundee DD1 5EH, Scotland (United Kingdom)
2014-02-01
The structure of a bacterial M14-family carboxypeptidase determined exploiting microfocus synchrotron radiation and highly automated refinement protocols reveals its potential to act as a polyglutamylase. A potential cytosolic metallocarboxypeptidase from Burkholderia cenocepacia has been crystallized and a synchrotron-radiation microfocus beamline allowed the acquisition of diffraction data to 1.9 Å resolution. The asymmetric unit comprises a tetramer containing over 1500 amino acids, and the high-throughput automated protocols embedded in PDB-REDO were coupled with model–map inspections in refinement. This approach has highlighted the value of such protocols for efficient analyses. The subunit is constructed from two domains. The N-terminal domain has previously only been observed in cytosolic carboxypeptidase (CCP) proteins. The C-terminal domain, which carries the Zn²⁺-containing active site, serves to classify this protein as a member of the M14D subfamily of carboxypeptidases. Although eukaryotic CCPs possess deglutamylase activity and are implicated in processing modified tubulin, the function and substrates of the bacterial family members remain unknown. The B. cenocepacia protein did not display deglutamylase activity towards a furylacryloyl glutamate derivative, a potential substrate. Residues previously shown to coordinate the divalent cation and that contribute to peptide-bond cleavage in related enzymes such as bovine carboxypeptidase are conserved. The location of a conserved basic patch in the active site adjacent to the catalytic Zn²⁺, where an acetate ion is identified, suggests recognition of the carboxy-terminus in a similar fashion to other carboxypeptidases. However, there are significant differences that indicate the recognition of substrates with different properties. Of note is the presence of a lysine in the S1′ recognition subsite that suggests specificity towards an acidic substrate.
Lores, F.R.
2001-01-01
An overview of petroleum refining in Spain is presented (by Repsol YPF) and some views on future trends are discussed. Spain depends heavily on imports. Sub-headings in the article cover sources of crude imports, investments, logistics and marketing; detailed data for each are shown diagrammatically. Tables show: (1) economic indicators (e.g. total GDP, vehicle numbers and inflation) for 1998-2000; (2) crude oil imports for 1995-2000; (3) oil products balance for 1995-2000; (4) commodities demand, by product; (5) refining in Spain in terms of capacity per region; (6) outlets in Spain and other European countries in 2002 and (7) sales distribution channel by product.
Elias, Gabriel A; Bieszczad, Kasia M; Weinberger, Norman M
2015-12-01
Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS- tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was "bar-press from tone-onset-to-error signal" ("TOTE"). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error ("iTOTE"). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explaining why even a minor change in strategy would involve concomitant shifts of involved brain
Connectivity editing for quad-dominant meshes
Peng, Chihan; Wonka, Peter
2013-01-01
and illustrate the advantages and disadvantages of different strategies for quad-dominant mesh design. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
Finite element method for solving Kohn-Sham equations based on self-adaptive tetrahedral mesh
Zhang Dier; Shen Lihua; Zhou Aihui; Gong Xingao
2008-01-01
A finite element (FE) method with a self-adaptive mesh-refinement technique is developed for solving the density functional Kohn-Sham equations. The FE method adopts local piecewise polynomial basis functions, which produce sparse, structured Hamiltonian matrices. The method is well suited for parallel implementation without using Fourier transforms. In addition, the self-adaptive mesh-refinement technique can control the computational accuracy and efficiency with optimal mesh density in different regions.
Mesh Excision: Is Total Mesh Excision Necessary?
Wolff, Gillian F; Winters, J Christian; Krlin, Ryan M
2016-04-01
Nearly 29% of women will undergo a secondary, repeat operation for pelvic organ prolapse (POP) symptom recurrence following a primary repair, as reported by Abbott et al. (Am J Obstet Gynecol 210:163.e1-163.e1, 2014). In efforts to decrease the rates of failure, graft materials have been utilized to augment transvaginal repairs. Following the success of using polypropylene mesh (PPM) for stress urinary incontinence (SUI), the use of PPM in the transvaginal repair of POP increased. However, in recent years, significant concerns have been raised about the safety of PPM. Complications, some specific to mesh, such as exposures, erosion, dyspareunia, and pelvic pain, have been reported with increased frequency. In the current literature, there is no substantive evidence to suggest that PPM has intrinsic properties that warrant total mesh removal in the absence of complications. There are a number of complications that can occur after transvaginal mesh placement that do warrant surgical intervention after failure of conservative therapy. In aggregate, there are no high-quality controlled studies that clearly demonstrate that total mesh removal is consistently more likely to achieve pain reduction. In the cases of obstruction and erosion, it seems clear that definitive removal of the offending mesh is associated with resolution of symptoms in the majority of cases and is reasonable practice. There are a number of complications that can occur with removal of mesh, and patients should be informed of this as they formulate a choice of treatment. We will review these considerations as we examine the clinical question of whether total versus partial removal of mesh is necessary for the resolution of complications following transvaginal mesh placement.
Predicting mesh density for adaptive modelling of the global atmosphere.
Weller, Hilary
2009-11-28
The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
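A vorticity-gradient refinement indicator of the kind described above can be sketched directly. The indicator form and the synthetic vortex below are assumptions for illustration (the paper's criterion includes more than this simple magnitude); the point is that thresholding the indicator at its 80th percentile flags roughly the 20 per cent of cells quoted in the abstract, concentrated where the flow feature lives.

```python
import numpy as np

# Sketch of a gradient-of-vorticity refinement indicator (assumed simple form;
# the paper's criterion has additional components).
def refinement_indicator(vorticity, dx):
    gy, gx = np.gradient(vorticity, dx)     # numpy returns axis-0 then axis-1
    return np.hypot(gx, gy)

n, dx = 64, 1.0 / 64
yy, xx = np.mgrid[0:n, 0:n] * dx
vort = np.exp(-((xx - 0.5) ** 2 + (yy - 0.5) ** 2) / 0.01)  # one compact vortex
ind = refinement_indicator(vort, dx)
thresh = np.quantile(ind, 0.8)              # keep roughly the top 20% of cells
flagged = ind > thresh
dist = np.hypot(xx - 0.5, yy - 0.5)
print(np.all(dist[flagged] < 0.3))          # True: refinement hugs the vortex
```

Predicting this indicator on a coarse advance run, as the abstract describes, then lets the fine mesh be placed where the feature will be, not only where it currently is.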
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1997-05-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
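The preconditioned conjugate gradient pattern mentioned above can be shown with standard SciPy components. As a stand-in, a simple Jacobi preconditioner replaces the FAC multigrid preconditioner of the abstract, and the matrix is a generic 1D Poisson operator, purely for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in for the solver pattern described above: preconditioned conjugate
# gradient on a 1D Poisson matrix, with Jacobi (diagonal) preconditioning
# substituting for the FAC multigrid preconditioner.
n = 100
A = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csr")
b = np.ones(n)
M = spla.LinearOperator((n, n), lambda r: r / A.diagonal())   # Jacobi: D^{-1} r
x, info = spla.cg(A, b, M=M)
print(info == 0, np.linalg.norm(A @ x - b) < 1e-3)            # converged
```

Any symmetric positive definite preconditioner can be dropped into the same `M` slot; the multigrid variant simply reduces the iteration count far more for fine grids than Jacobi does.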
Wells, Jered R.; Segars, W. Paul; Kigongo, Christopher J. N.; Dobbins, James T., III
2011-03-01
This paper describes a recently developed post-acquisition motion correction strategy for application to lower-cost computed tomography (LCCT) for under-resourced regions of the world. Increased awareness regarding global health and its challenges has encouraged the development of more affordable healthcare options for underserved people worldwide. In regions such as sub-Saharan Africa, intermediate level medical facilities may serve millions with inadequate or antiquated equipment due to financial limitations. In response, the authors have proposed a LCCT design which utilizes a standard chest x-ray examination room with a digital flat panel detector (FPD). The patient rotates on a motorized stage between the fixed cone-beam source and FPD, and images are reconstructed using a Feldkamp algorithm for cone-beam scanning. One of the most important proofs-of-concept in determining the feasibility of this system is the successful correction of undesirable motion. A 3D motion correction algorithm was developed in order to correct for potential patient motion, stage instabilities and detector misalignments which can all lead to motion artifacts in reconstructed images. Motion will be monitored by the radiographic position of fiducial markers to correct for rigid body motion in three dimensions. Based on simulation studies, projection images corrupted by motion were re-registered with average errors of 0.080 mm, 0.32 mm and 0.050 mm in the horizontal, vertical and depth dimensions, respectively. The overall absence of motion artifacts in motion-corrected reconstructions indicates that reasonable amounts of motion may be corrected using this novel technique without significant loss of image quality.
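The fiducial-based rigid-body correction described above rests on estimating a rotation and translation from marker positions. The sketch below is a standard Kabsch/Procrustes fit under synthetic data, not the paper's full 3D pipeline: given markers before and after motion, recover the rigid transform by SVD.

```python
import numpy as np

# Standard Kabsch/Procrustes rigid fit from fiducial-marker positions
# (a generic building block; the paper's correction pipeline is more involved).
def rigid_fit(P, Q):
    """Find rotation R and translation t with R @ P[i] + t ≈ Q[i]."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(0)
P = rng.random((6, 3))                        # six marker positions
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true) and np.allclose(t, [0.1, -0.2, 0.05]))  # True
```

With noisy marker measurements the same fit gives the least-squares optimal rigid transform, which is what makes per-projection re-registration of the kind reported (sub-millimeter residuals) feasible.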
Mesh Optimization for Ground Vehicle Aerodynamics
Adrian Gaylard; Essam F Abo-Serie; Nor Elyana Ahmad
2010-01-01
A mesh optimization strategy for estimating the accurate drag of a ground vehicle is proposed, based on examining the effect of different mesh parameters. The optimized mesh parameters were selected using the design of experiments (DOE) method to be able to work in a...
Besse, Nicolas
2003-01-01
This work is dedicated to the mathematical and numerical study of the Vlasov equation on unstructured phase-space meshes. In the first part, new semi-Lagrangian methods are developed to solve the Vlasov equation on unstructured meshes of phase space. As the Vlasov equation describes multi-scale phenomena, we also propose original methods based on a wavelet multi-resolution analysis. The resulting algorithm leads to an adaptive mesh-refinement strategy. The new massively parallel computers make it possible to use these methods with several phase-space dimensions. In particular, these numerical schemes are applied to plasma physics and charged particle beams in the case of two-, three-, and four-dimensional Vlasov-Poisson systems. In the second part we prove the convergence and give error estimates for several numerical schemes applied to the Vlasov-Poisson system when strong and classical solutions are considered. First we show the convergence of a semi-Lagrangian scheme on an unstructured mesh of phase space, when the regularity hypotheses for the initial data are minimal. Then we demonstrate the convergence of classes of high-order semi-Lagrangian schemes in the framework of the regular classical solution. In order to reconstruct the distribution function, we consider symmetrical Lagrange polynomials, B-splines and wavelet bases. Finally we prove the convergence of a semi-Lagrangian scheme with propagation of gradients, yielding a high-order and stable reconstruction of the solution. (author)
MESHREF, Finite Elements Mesh Combination with Renumbering
1973-01-01
1 - Nature of physical problem solved: The program can assemble different meshes stored on tape or cards. Renumbering is performed in order to keep the band width low. Voids and/or local refinement are possible. 2 - Method of solution: Topology and geometry are read according to input specifications. Redundant nodes and elements are eliminated. The new topology and geometry are stored on tape. 3 - Restrictions on the complexity of the problem: Maximum number of nodes = 2000. Maximum number of elements = 1500
... knitted mesh or non-knitted sheet forms. The synthetic materials used can be absorbable, non-absorbable or a combination of absorbable and non-absorbable materials. Animal-derived meshes are made of animal tissue, such as intestine or skin, that has been processed and disinfected to be ...
Comprehensive adaptive mesh refinement in wrinkling prediction analysis
Selman, A.; Meinders, Vincent T.; Huetink, Han; van den Boogaard, Antonius H.
2002-01-01
A discretisation error indicator, a contact-free wrinkling indicator, and a wrinkling-with-contact indicator are, in a challenging task, brought together and used in a comprehensive approach to wrinkling prediction analysis in thin sheet metal forming processes.
Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution
Delzanno, G. L. (Los Alamos National Laboratory); Finn, J. M. (Los Alamos National Laboratory)
2009-01-01
Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to widespread deployment of the technique has been the formulation of robust, reliable mesh motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1, 2], the advantages of this approach with regard to these points were demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh motion approaches, without resorting to ad-hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3, 4]). We explore two distinct r-refinement implementations of MK: direct, where the current mesh relates to an initial, unchanging mesh, and sequential, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a paradigmatic hyperbolic PDE, the advection of a passive scalar. Imposed velocity flow fields of varying vorticity levels and flow shears are considered.
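The equidistribution idea behind such r-refinement schemes is easiest to see in one dimension, where mesh points are moved so that every cell holds an equal share of a monitor function. The de Boor-style sketch below illustrates equidistribution in general, not the paper's Monge-Kantorovich formulation, which generalizes volume equidistribution to 2D and 3D:

```python
# 1D equidistribution: place n+1 mesh points so that every cell
# contains an equal integral of a positive monitor function w(x).
# Illustrates the r-refinement principle only; the paper's
# Monge-Kantorovich approach handles the multidimensional case.

def equidistribute(w, a, b, n, samples=2000):
    # Sample the cumulative integral of w on a fine uniform grid.
    h = (b - a) / samples
    xs = [a + i * h for i in range(samples + 1)]
    cum = [0.0]
    for i in range(samples):
        cum.append(cum[-1] + 0.5 * (w(xs[i]) + w(xs[i + 1])) * h)
    total = cum[-1]
    # Invert the cumulative integral at equally spaced levels.
    mesh, j = [a], 0
    for k in range(1, n):
        target = total * k / n
        while cum[j + 1] < target:
            j += 1
        # Linear interpolation inside the bracketing fine cell.
        t = (target - cum[j]) / (cum[j + 1] - cum[j])
        mesh.append(xs[j] + t * h)
    mesh.append(b)
    return mesh

# A monitor concentrated near x = 0 pulls mesh points toward the
# origin, mimicking refinement near a feature of interest.
mesh = equidistribute(lambda x: 1.0 + 50.0 * (abs(x) < 0.1), -1.0, 1.0, 20)
```

Cells inside the high-monitor region come out roughly 50 times smaller than cells near the domain edges, with the same number of points overall, which is exactly the appeal of r-refinement stated in the abstract.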
Foy, Jean-Philippe; Tortereau, Antonin; Caulin, Carlos; Le Texier, Vincent; Lavergne, Emilie; Thomas, Emilie; Chabaud, Sylvie; Perol, David; Lachuer, Joël; Lang, Wenhua; Hong, Waun Ki; Goudot, Patrick; Lippman, Scott M; Bertolus, Chloé; Saintigny, Pierre
2016-06-14
A better understanding of the dynamics of molecular changes occurring during the early stages of oral tumorigenesis may help refine prevention and treatment strategies. We generated genome-wide expression profiles of microdissected normal mucosa, hyperplasia, dysplasia and tumors derived from the 4-NQO mouse model of oral tumorigenesis. Genes differentially expressed between tumor and normal mucosa defined the "tumor gene set" (TGS), including 4 non-overlapping gene subsets that characterize the dynamics of gene expression changes through different stages of disease progression. The majority of gene expression changes occurred early or progressively. The relevance of these mouse gene sets to human disease was tested in multiple datasets including the TCGA and the Genomics of Drug Sensitivity in Cancer project. The TGS was able to discriminate oral squamous cell carcinoma (OSCC) from normal oral mucosa in 3 independent datasets. The OSCC samples enriched in the mouse TGS displayed a high frequency of CASP8 mutations, 11q13.3 amplifications and a low frequency of PIK3CA mutations. Early changes observed in the 4-NQO model were associated with a trend toward shorter oral cancer-free survival in patients with oral preneoplasia that was not seen in multivariate analysis. Progressive changes observed in the 4-NQO model were associated with an increased sensitivity to 4 different MEK inhibitors in a panel of 51 squamous cell carcinoma cell lines of the aerodigestive tract. In conclusion, the dynamics of molecular changes in the 4-NQO model reveal that MEK inhibition may be relevant to prevention and treatment of a specific molecularly-defined subgroup of OSCC.
Stephanie Krueger
2016-12-01
Full Text Available A Review of: Kelly, M. (2015). Citation patterns of engineering, statistics, and computer science researchers: An internal and external citation analysis across multiple engineering subfields. College and Research Libraries, 76(7), 859-882. http://doi.org/10.5860/crl.76.7.859 Objective – To determine internal and external citation analysis methods and their potential applicability to the refinement of collection development strategies at both the institutional and cross-institutional levels for selected science, technology, engineering, and mathematics (STEM) subfields. Design – Multidimensional citation analysis; specifically, analysis of citations from (1) key scholarly journals in selected STEM subfields (external analysis) compared to those from (2) local doctoral dissertations in similar subfields (internal analysis). Setting – Medium-sized, STEM-dominant public research university in the United States of America. Subjects – Two citation datasets: (1) 14,149 external citations from 16 journals (i.e., 2 journals per subfield; citations from 2012 volumes) representing bioengineering, civil engineering, computer science (CS), electrical engineering, environmental engineering, operations research, statistics (STAT), and systems engineering; and (2) 8,494 internal citations from 99 doctoral dissertations (18-22 per subfield) published between 2008 and 2012 for CS, electrical and computer engineering (ECE), and applied information technology (AIT), and between 2005 and 2012 for systems engineering and operations research (SEOR) and STAT. Methods – Citations, including titles and publication dates, were harvested from source materials, stored in Excel, and then manually categorized according to format (book, book chapter, journal, conference proceeding, website, and several others). To analyze citations, percentages of occurrence by subfield were calculated for variables including format, age (years since date cited), journal distribution, and the
MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.
Mao, Yuqing; Lu, Zhiyong
2017-04-17
MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are not indexed until 2 or 3 months later) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved an F1-score over 0.60, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that MeSH Now is capable of processing PubMed-scale documents within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/
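The three-stage pipeline described in the abstract (candidate generation from multiple strategies, ranking, top-k selection) can be sketched as follows. The fixed-weight linear scorer and the feature names here are hypothetical stand-ins for the learned learning-to-rank model; only the pipeline shape follows the text:

```python
# Sketch of the pipeline shape described above: pool candidate
# terms, score each with a ranking function, keep the top-k.
# The linear scorer with hand-picked weights is a hypothetical
# stand-in for the learned learning-to-rank model in the paper.

def score(term, features, weights):
    # Linear combination of per-term features, e.g. how many
    # candidate-generation strategies proposed the term, and a
    # text-similarity score against the target article.
    return sum(w * features[term].get(name, 0.0)
               for name, w in weights.items())

def rank_candidates(features, weights, k):
    ranked = sorted(features, reverse=True,
                    key=lambda t: score(t, features, weights))
    return ranked[:k]

# Toy candidate set with two made-up features per term.
features = {
    "Humans":     {"n_strategies": 3, "similarity": 0.9},
    "Algorithms": {"n_strategies": 2, "similarity": 0.8},
    "Rare Term":  {"n_strategies": 1, "similarity": 0.2},
}
weights = {"n_strategies": 1.0, "similarity": 2.0}
top = rank_candidates(features, weights, 2)
```

In the real system the weights would be fit to indexer-assigned MeSH terms rather than chosen by hand, and a post-processing module would decide how many terms to keep per article.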
Mesh sensitivity effects on fatigue crack growth by crack-tip blunting and re-sharpening
Tvergaard, Viggo
2007-01-01
remeshing at several stages of the plastic deformation, with studies of the effect of overloads or compressive underloads. Recently published analyses for the first two cycles have shown folding of the crack surface in compression, leading to something that looks like striations. The influence of mesh refinement is used to study the possibility of this type of behaviour within the present method. Even with much refined meshes, no indication of crack surface folding is found here.
Lieberoth, J.
1975-06-15
The numerical solution of the neutron diffusion equation plays a very important role in the analysis of nuclear reactors. A wide variety of numerical procedures has been proposed, most of which are fundamentally based on the finite-difference approximation, where the partial derivatives are approximated by finite differences. For complex geometries, typical of practical reactor problems, the computational accuracy of the finite-difference method is seriously affected by the size of the mesh width relative to the neutron diffusion length and by the heterogeneity of the medium. Thus, a very large number of mesh points is generally required to obtain a reasonably accurate approximate solution of the multi-dimensional diffusion equation. Since the computation time is approximately proportional to the number of mesh points, a detailed multidimensional analysis based on the conventional finite-difference method is still expensive even with modern large-scale computers. Accordingly, there is a strong incentive to develop alternatives that can reduce the number of mesh points and still retain accuracy. One of the promising alternatives is the finite element method, which consists of the expansion of the neutron flux in piecewise polynomials. One of the advantages of this procedure is its flexibility in selecting the locations of the mesh points and the degree of the expansion polynomial. The small number of mesh points of the coarse grid makes it possible to store the results of several of the last outer iterations and to calculate well-extrapolated values from them by convenient formalisms. This holds especially if only one energy distribution of fission neutrons is assumed for all fission processes in the reactor, because the whole information of an outer iteration is contained in a field of fission rates which has the size of all mesh points of the coarse grid.
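The finite-difference discretization discussed above can be made concrete in one dimension. The sketch below discretizes a one-group diffusion equation -D u'' + Σa u = S on a uniform mesh with zero-flux boundaries and solves the resulting tridiagonal system; it is a minimal illustration of the method, not the multidimensional reactor solvers the abstract refers to:

```python
# One-group, 1D neutron diffusion: -D u'' + sig_a u = S on (0, L),
# u(0) = u(L) = 0, central differences on a uniform mesh. The number
# of unknowns equals the number of interior mesh points, which is
# the cost driver discussed in the abstract.

def solve_diffusion_1d(D, sig_a, S, L, n):
    h = L / (n + 1)
    # Tridiagonal coefficients of the discretized operator.
    a = [-D / h**2] * n             # sub-diagonal
    b = [2 * D / h**2 + sig_a] * n  # diagonal
    c = [-D / h**2] * n             # super-diagonal
    d = [S] * n
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# With sig_a = 0 the exact solution is u(x) = x(1 - x)/2, so the
# scheme can be checked directly; doubling n doubles the work, the
# trade-off noted in the abstract.
u = solve_diffusion_1d(D=1.0, sig_a=0.0, S=1.0, L=1.0, n=99)
```

Since the scheme is exact for quadratic solutions, the computed midpoint value matches the analytic maximum u(0.5) = 0.125 to rounding error.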
Adaptive mesh generation for image registration and segmentation
Fogtmann, Mads; Larsen, Rasmus
2013-01-01
measure. The method was tested on a T1-weighted MR volume of an adult brain and showed a 66% reduction in the number of mesh vertices compared to a red-subdivision strategy. The deformation capability of the mesh was tested by registration to five additional T1-weighted MR volumes.
Impact of Variable-Resolution Meshes on Regional Climate Simulations
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2014-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on 1 May 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
Interoperable mesh and geometry tools for advanced petascale simulations
Diachin, L; Bauer, A; Fix, B; Kraftcheck, J; Jansen, K; Luo, X; Miller, M; Ollivier-Gooch, C; Shephard, M S; Tautges, T; Trease, H
2007-01-01
SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. The Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) will deliver interoperable and interchangeable mesh, geometry, and field manipulation services that are of direct use to SciDAC applications. The premise of our technology development goal is to provide such services as libraries that can be used with minimal intrusion into application codes. To develop these technologies, we focus on defining a common data model and data-structure neutral interfaces that unify a number of different services such as mesh generation and improvement, front tracking, adaptive mesh refinement, shape optimization, and solution transfer operations. We highlight the use of several ITAPS services in SciDAC applications
Documentation for MeshKit - Reactor Geometry (&mesh) Generator
Jain, Rajeev (Argonne National Lab. (ANL), Argonne, IL, United States); Mahadevan, Vijay (Argonne National Lab. (ANL), Argonne, IL, United States)
2015-09-30
This report gives documentation for using MeshKit’s Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne’s SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of MeshKit source code, tools and other available algorithms is also presented for developers to extend and add new algorithms to MeshKit. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousands of axially varying material properties of instrumentation pins and other interstices meshes.
Connectivity editing for quad-dominant meshes
Peng, Chihan
2013-08-01
We propose a connectivity editing framework for quad-dominant meshes. In our framework, the user can edit the mesh connectivity to control the location, type, and number of irregular vertices (with more or fewer than four neighbors) and irregular faces (non-quads). We provide a theoretical analysis of the problem, discuss what edits are possible and impossible, and describe how to implement an editing framework that realizes all possible editing operations. In the results, we show example edits and illustrate the advantages and disadvantages of different strategies for quad-dominant mesh design. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
Vickers, Trevor
1992-01-01
On the Refinement Calculus gives one view of the development of the refinement calculus and its attempt to bring together - among other things - Z specifications and Dijkstra's programming language. It is an excellent source of reference material for all those seeking the background and mathematical underpinnings of the refinement calculus.
Anisotropic mesh adaptation for marine ice-sheet modelling
Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier
2017-04-01
Improving forecasts of the ice-sheet contribution to sea-level rise requires, amongst others, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component in obtaining reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target in the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method, as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and it has some flexibility, such as the ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables makes it possible to reduce the number of mesh nodes by more than one order of magnitude, for the same numerical accuracy, compared to a uniform mesh
Toward An Unstructured Mesh Database
Rezaei Mahdiraji, Alireza; Baumann, Peter
2014-05-01
Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
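The kinds of mesh queries listed above (iterate over cells, find cells incident to a given cell, look up coordinates, attach field data) can be sketched with a small incidence-based structure. This is an illustrative toy, not the ImG-Complexes model the abstract proposes:

```python
# A toy unstructured-mesh store supporting the queries mentioned
# above: iterate over cells, find neighbors via shared vertices,
# fetch coordinates, attach per-cell data. Illustrative only; not
# the ImG-Complexes data model proposed in the abstract.

class Mesh:
    def __init__(self):
        self.coords = {}   # vertex id -> (x, y)
        self.cells = {}    # cell id -> tuple of vertex ids
        self.data = {}     # cell id -> attached field values
        self._v2c = {}     # vertex id -> set of incident cell ids

    def add_vertex(self, vid, xy):
        self.coords[vid] = xy

    def add_cell(self, cid, vids):
        self.cells[cid] = tuple(vids)
        for v in vids:
            self._v2c.setdefault(v, set()).add(cid)

    def neighbors(self, cid):
        # Cells sharing at least one vertex with cid (incidence query).
        out = set()
        for v in self.cells[cid]:
            out |= self._v2c[v]
        out.discard(cid)
        return out

# Two triangles sharing the edge (1, 2).
m = Mesh()
for vid, xy in [(0, (0, 0)), (1, (1, 0)), (2, (0, 1)), (3, (1, 1))]:
    m.add_vertex(vid, xy)
m.add_cell("t0", (0, 1, 2))
m.add_cell("t1", (1, 3, 2))
```

A mesh database would expose queries like `neighbors` declaratively and choose the incidence index itself, rather than requiring C++-level code against a library-specific representation, which is the gap the abstract describes.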
SUPERIMPOSED MESH PLOTTING IN MCNP
J. HENDRICKS
2001-02-01
The capability to plot superimposed meshes has been added to MCNP™. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here, along with a description of other upgrades.
Wang, Xinheng
2008-01-01
Wireless telemedicine using GSM and GPRS technologies can only provide low bandwidth connections, which makes it difficult to transmit images and video. Satellite or 3G wireless transmission provides greater bandwidth, but the running costs are high. Wireless local area networks (WLANs) appear promising, since they can supply high bandwidth at low cost. However, WLAN technology has limitations, such as coverage. A new wireless networking technology named the wireless mesh network (WMN) overcomes some of the limitations of the WLAN. A WMN combines the characteristics of both a WLAN and ad hoc networks, thus forming an intelligent, large-scale and broadband wireless network. These features are attractive for telemedicine and telecare because of the ability to provide data, voice and video communications over a large area. One successful wireless telemedicine project which uses wireless mesh technology is the Emergency Room Link (ER-LINK) in Tucson, Arizona, USA. There are three key characteristics of a WMN: self-organization, including self-management and self-healing; dynamic changes in network topology; and scalability. What we may now see is a shift from mobile communication and satellite systems for wireless telemedicine to the use of wireless networks based on mesh technology, since the latter are very attractive in terms of cost, reliability and speed.
Error sensitivity to refinement: a criterion for optimal grid adaptation
Luchini, Paolo; Giannetti, Flavio; Citro, Vincenzo
2017-12-01
Most indicators used for automatic grid refinement are suboptimal, in the sense that they do not really minimize the global solution error. This paper concerns a new indicator, related to the sensitivity map of global stability problems, suitable for an optimal grid refinement that minimizes the global solution error. The new criterion is derived from the properties of the adjoint operator and provides a map of the sensitivity of the global error (or its estimate) to a local mesh refinement. Examples are presented for both a scalar partial differential equation and for the system of Navier-Stokes equations. In the latter case, we also present a grid-adaptation algorithm based on the new estimator and on the FreeFem++ software that improves the accuracy of the solution by almost two orders of magnitude by redistributing the nodes of the initial computational mesh.
Hybrid direct and iterative solvers for h refined grids with singularities
Paszyński, Maciej R.; Paszyńska, Anna; Dalcin, Lisandro; Calo, Victor M.
2015-01-01
on top of it. The hybrid solver is applied to two- or three-dimensional grids automatically h refined towards point or edge singularities. The automatic refinement is based on the relative error estimations between the coarse and fine mesh solutions [2]
Lucas, P.; Van Zuijlen, A.H.; Bijl, H.
2009-01-01
Mesh adaptation is a fairly established tool for obtaining numerically accurate solutions to flow problems. Computational efficiency is, however, not always guaranteed for the adaptation strategies found in the literature. Typically, excessive mesh growth diminishes the potential efficiency gain. This
Hybrid direct and iterative solvers for h refined grids with singularities
Paszyński, Maciej R.
2015-04-27
This paper describes a hybrid direct and iterative solver for two- and three-dimensional h adaptive grids with point singularities. The point singularities are eliminated by using a sequential linear computational cost solver O(N) on CPU [1]. The remaining Schur complements are submitted to an incomplete LU preconditioned conjugate gradient (ILUPCG) iterative solver. The approach is compared to the standard algorithm performing static condensation over the entire mesh and executing the ILUPCG algorithm on top of it. The hybrid solver is applied to two- or three-dimensional grids automatically h refined towards point or edge singularities. The automatic refinement is based on the relative error estimations between the coarse and fine mesh solutions [2], and the optimal refinements are selected using projection-based interpolation. The computational mesh is partitioned into sub-meshes with local point and edge singularities separated. This is done by using the following greedy algorithm.
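The static condensation step the solver builds on (eliminate one block of unknowns, leave a Schur complement on the rest) can be shown on a small blocked system. This dense sketch only illustrates the algebra; it is unrelated to the actual O(N) elimination of [1] or the ILUPCG stage, for which plain dense solves stand in here:

```python
# Static condensation on a 2x2-blocked symmetric system:
#     [A_ii  A_ib] [x_i]   [f_i]
#     [A_bi  A_bb] [x_b] = [f_b]
# Interior unknowns x_i are eliminated first; the Schur complement
# S = A_bb - A_bi A_ii^{-1} A_ib remains on the interface unknowns,
# which is where an iterative solver would be applied in the paper.

def solve_dense(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def condensed_solve(A_ii, A_ib, A_bi, A_bb, f_i, f_b):
    ni, nb = len(f_i), len(f_b)
    # Z[c] = A_ii^{-1} (column c of A_ib); y = A_ii^{-1} f_i.
    Z = [solve_dense(A_ii, [A_ib[r][c] for r in range(ni)])
         for c in range(nb)]
    y = solve_dense(A_ii, f_i)
    # Schur complement and reduced right-hand side.
    S = [[A_bb[r][c] - sum(A_bi[r][k] * Z[c][k] for k in range(ni))
          for c in range(nb)] for r in range(nb)]
    g = [f_b[r] - sum(A_bi[r][k] * y[k] for k in range(ni))
         for r in range(nb)]
    x_b = solve_dense(S, g)
    # Back-substitute for the interior unknowns.
    x_i = [y[k] - sum(Z[c][k] * x_b[c] for c in range(nb))
           for k in range(ni)]
    return x_i, x_b

# 1D Laplacian example: unknowns 0 and 1 are "interior", unknown 2
# is the "interface"; the full system has exact solution (1, 1, 1).
x_i, x_b = condensed_solve([[2.0, -1.0], [-1.0, 2.0]],  # A_ii
                           [[0.0], [-1.0]],             # A_ib
                           [[0.0, -1.0]],               # A_bi
                           [[2.0]],                     # A_bb
                           [1.0, 0.0], [1.0])
```

The payoff in the paper's setting is that the interior blocks around singularities can be eliminated cheaply and only the much smaller Schur system is handed to the iterative solver.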
Cobb, C.B.
2001-01-01
This article focuses on recent developments in the US refining industry and presents a model for improving the performance of refineries based on the analysis of the refining industry by Cap Gemini Ernst and Young. The identification of refineries at risk of failing, the construction of pipelines for refinery products from Gulf State refineries, mergers and acquisitions, and poor financial performance are discussed. Current challenges concerning the stagnant demand for refinery products, environmental regulations, and shareholder value are highlighted. The structure of the industry, the creation of value in refining, and the search for business models are examined. The top 25 US companies and US refining business groups are listed
Energy mesh optimization for multi-level calculation schemes
Mosca, P.; Taofiki, A.; Bellier, P.; Prevost, A.
2011-01-01
The industrial calculations of third generation nuclear reactors are based on sophisticated strategies of homogenization and collapsing at different spatial and energetic levels. An important issue in ensuring the quality of these calculation models is the choice of the collapsing energy mesh. In this work, we show a new approach to generate optimized energy meshes starting from the SHEM 281-group library. The optimization model is applied to 1D cylindrical cells and consists of finding an energy mesh which minimizes the errors between two successive collision probability calculations. The former is realized over the fine SHEM mesh with Livolant-Jeanpierre self-shielded cross sections and the latter is performed with collapsed cross sections over the energy mesh being optimized. The optimization is done by the particle swarm algorithm implemented in the code AEMC, and multigroup flux solutions are obtained from standard APOLLO2 solvers. By this new approach, a set of new optimized meshes spanning 10 to 50 groups has been defined for PWR and BWR calculations. This set will allow users to adapt the energy detail of the solution to the complexity of the calculation (assembly, multi-assembly, two-dimensional whole core). Some preliminary verifications, in which the accuracy of the new meshes is measured against a direct 281-group calculation, show that the 30-group optimized mesh offers a good compromise between simulation time and accuracy for a standard 17 × 17 UO2 assembly with and without control rods. (author)
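The collapsing step at the heart of this optimization, condensing fine-group cross sections onto a coarser energy mesh with flux weighting, can be sketched as follows. The fine-group numbers below are made up for illustration, not SHEM data:

```python
# Flux-weighted collapsing of fine-group cross sections onto a
# coarse energy mesh. The coarse-group value preserves the reaction
# rate: sigma_G = sum(sigma_g * phi_g) / sum(phi_g) over the fine
# groups g belonging to coarse group G. Numbers are illustrative,
# not actual SHEM 281-group data.

def collapse(sigma, phi, starts):
    # starts: index where each coarse group begins, e.g. [0, 2, 4]
    # collapses fine groups [0:2], [2:4], [4:] into three groups.
    edges = starts + [len(sigma)]
    out = []
    for lo, hi in zip(edges, edges[1:]):
        rate = sum(sigma[g] * phi[g] for g in range(lo, hi))
        flux = sum(phi[g] for g in range(lo, hi))
        out.append(rate / flux)
    return out

sigma_fine = [10.0, 8.0, 4.0, 2.0, 1.0, 0.5]   # fine-group sigma
phi_fine = [1.0, 1.0, 2.0, 2.0, 4.0, 4.0]      # fine-group flux
sigma_coarse = collapse(sigma_fine, phi_fine, [0, 2, 4])
```

The optimization described in the abstract searches over the choice of `starts` (the coarse-group boundaries) so that calculations on the collapsed mesh stay close to the fine-mesh reference.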
Mesh erosion after abdominal sacrocolpopexy.
Kohli, N; Walsh, P M; Roat, T W; Karram, M M
1998-12-01
To report our experience with erosion of permanent suture or mesh material after abdominal sacrocolpopexy. A retrospective chart review was performed to identify patients who underwent sacrocolpopexy by the same surgeon over 8 years. Demographic data, operative notes, hospital records, and office charts were reviewed after sacrocolpopexy. Patients with erosion of either suture or mesh were treated initially with conservative therapy followed by surgical intervention as required. Fifty-seven patients underwent sacrocolpopexy using synthetic mesh during the study period. The mean (range) postoperative follow-up was 19.9 (1.3-50) months. Seven patients (12%) had erosions after abdominal sacrocolpopexy with two suture erosions and five mesh erosions. Patients with suture erosion were asymptomatic compared with patients with mesh erosion, who presented with vaginal bleeding or discharge. The mean (± standard deviation) time to erosion was 14.0 ± 7.7 (range 4-24) months. Both patients with suture erosion were treated conservatively with estrogen cream. All five patients with mesh erosion required transvaginal removal of the mesh. Mesh erosion can follow abdominal sacrocolpopexy over a long time, and usually presents as vaginal bleeding or discharge. Although patients with suture erosion can be managed successfully with conservative treatment, patients with mesh erosion require surgical intervention. Transvaginal removal of the mesh with vaginal advancement appears to be an effective treatment in patients failing conservative management.
Notes on the Mesh Handler and Mesh Data Conversion
Lee, Sang Yong; Park, Chan Eok
2009-01-01
At the outset of the development of the thermal-hydraulic code (THC), efforts were made to utilize recent technology from computational fluid dynamics. Among these, the unstructured mesh approach was adopted to alleviate the restrictions of the grid handling system. As a natural consequence, a mesh handler (MH) has been developed to manipulate the complex mesh data from the mesh generator. The mesh generator Gambit was chosen at the beginning of the code's development, but a new mesh generator, Pointwise, was later introduced to obtain more flexible mesh generation capability. The open source code Paraview was chosen as a post-processor, which can handle unstructured as well as structured mesh data. The overall data processing system for THC is shown in Figure-1. There are various file formats for saving mesh data to permanent storage media; a couple dozen file formats are found even in the above-mentioned programs. A competent mesh handler should be able to import and export mesh data in as many formats as possible. In reality, however, two aspects make this competence difficult to achieve. The first is the time and effort needed to program the interface code. The second, which is even more difficult, is the fact that many mesh data file formats are proprietary information. In this paper, some experience from the development of the format conversion programs is presented. The file formats involved are the Gambit neutral format, Ansys-CFX grid file format, VTK legacy file format, Nastran format and CGNS
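As an illustration of one of the conversion targets mentioned, a quad mesh can be emitted in the ASCII VTK legacy format in a few lines. This is a minimal sketch, not the THC mesh handler itself, and the helper name is made up:

```python
def write_vtk_quad_mesh(points, quads, path):
    """Write a quad-only unstructured mesh in ASCII VTK legacy format."""
    lines = ["# vtk DataFile Version 3.0", "converted mesh", "ASCII",
             "DATASET UNSTRUCTURED_GRID",
             f"POINTS {len(points)} float"]
    lines += [" ".join(f"{c:g}" for c in p) for p in points]
    # CELLS header: cell count, then total number of integers that follow
    lines.append(f"CELLS {len(quads)} {sum(len(q) + 1 for q in quads)}")
    lines += [f"{len(q)} " + " ".join(map(str, q)) for q in quads]
    lines.append(f"CELL_TYPES {len(quads)}")
    lines += ["9"] * len(quads)       # 9 = VTK_QUAD
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# a single unit quad
write_vtk_quad_mesh([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
                    [(0, 1, 2, 3)], "quad.vtk")
```

The resulting file is readable by Paraview; other formats (Gambit neutral, CGNS) require considerably more structure.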
Development and verification of unstructured adaptive mesh technique with edge compatibility
Ito, Kei; Ohshima, Hiroyuki; Kunugi, Tomoaki
2010-01-01
In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is suppression of gas entrainment (GE) phenomena at a gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish an accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of the GE phenomena in JSFR, namely an unstructured adaptive mesh technique which can dynamically apply fine cells to the region where GE occurs in JSFR. In this paper, as a part of the development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional adaptive mesh technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The two-dimensional unstructured adaptive mesh technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the two-dimensional unstructured adaptive mesh technique succeeds in providing a high-precision solution, even when a poor-quality, distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on the structured mesh with a larger number of cells. (author)
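The isotropic refinement described above, where each cell is split into equal children and neighbouring leaves are kept within one level of each other, can be sketched on a structured 2D quadtree (a simplified stand-in for the unstructured technique; all names are hypothetical and connection cells are not modelled):

```python
class Cell:
    """Square cell of a 2D quadtree mesh."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self):
        """Isotropic refinement: split into four equal children."""
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h,
                              self.level + 1)
                         for j in (0, 1) for i in (0, 1)]
        return self.children

def leaves(cell):
    if not cell.children:
        return [cell]
    return [leaf for child in cell.children for leaf in leaves(child)]

root = Cell(0.0, 0.0, 1.0)
root.refine()                 # 4 leaves at level 1
root.children[0].refine()     # refine one child again: 3 + 4 = 7 leaves
levels = [leaf.level for leaf in leaves(root)]
```

In the paper's unstructured setting, the analogous step additionally inserts connection cells wherever a refined face meets an unrefined neighbour.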
Relational Demonic Fuzzy Refinement
Fairouz Tchier
2014-01-01
We use relational algebra to define a refinement fuzzy order called demonic fuzzy refinement, as well as the associated fuzzy operators, which are fuzzy demonic join (⊔fuz), fuzzy demonic meet (⊓fuz), and fuzzy demonic composition (□fuz). Our definitions and properties are illustrated by some examples using the Mathematica software (fuzzy logic).
Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.
2017-04-01
In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in the relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100 times more accurate than uniform refinement for the same amount of computational effort for a 67-group deep penetration shielding problem.
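The marking step that a goal-based AMR procedure of this kind relies on can be sketched with a generic adjoint-weighted indicator and a fixed-fraction strategy (synthetic data only; this is not the paper's NURBS implementation):

```python
import numpy as np

def goal_based_marking(residual, adjoint, fraction=0.2):
    """Fixed-fraction marking: refine the cells whose adjoint-weighted
    residual contributes most to the error in the goal functional."""
    eta = np.abs(residual * adjoint)            # per-cell error indicator
    n_mark = max(1, int(fraction * eta.size))
    return np.argsort(eta)[::-1][:n_mark]       # indices of cells to refine

# synthetic per-cell residuals and adjoint magnitudes
res = np.array([1e-3, 5e-2, 2e-4, 8e-2, 1e-2])
adj = np.array([0.5, 1.0, 2.0, 0.1, 1.5])
marked = goal_based_marking(res, adj, fraction=0.4)
```

Note how the cell with the largest raw residual (index 3) is not marked: its adjoint weight is small, so it barely affects the goal functional.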
Streaming simplification of tetrahedral meshes.
Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T
2007-01-01
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
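A quadric-based simplification of the kind mentioned rests on the Garland-Heckbert quadric error metric. A minimal sketch of that metric (not the authors' streaming coder) is:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental quadric of the plane ax + by + cz + d = 0."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Quadric error of a vertex: v_h^T Q v_h in homogeneous coordinates,
    i.e. the summed squared distance to the planes accumulated in Q."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# the origin lies at distance 1 from the plane z = 1, so its error is 1
Q = plane_quadric(0.0, 0.0, 1.0, -1.0)
err = vertex_error(Q, np.array([0.0, 0.0, 0.0]))
```

During simplification, quadrics of adjacent cells are summed and each candidate edge collapse is scored by the error of its replacement vertex; the streaming variant simply restricts this to the in-core portion of the mesh.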
Optimizing refiner operation with statistical modelling
Broderick, G [Noranda Research Centre, Pointe Claire, PQ (Canada)
1997-02-01
The impact of refining conditions on the energy efficiency of the process and on the handsheet quality of a chemi-mechanical pulp was studied as part of a series of pilot scale refining trials. Statistical models of refiner performance were constructed from these results and non-linear optimization of process conditions were conducted. Optimization results indicated that increasing the ratio of specific energy applied in the first stage led to a reduction of some 15 per cent in the total energy requirement. The strategy can also be used to obtain significant increases in pulp quality for a given energy input. 20 refs., 6 tabs.
Refining margins and prospects
Baudouin, C.; Favennec, J.P.
1997-01-01
Refining margins throughout the world remained low in 1996. In Europe, in spite of an improvement, particularly during the last few weeks, they are still not high enough to finance new investments. Although the demand for petroleum products is increasing, experts remain sceptical about any rapid recovery due to prevailing overcapacity and continuing capacity growth. After a historical review of margins and an analysis of margins by region, we analyse refining over-capacities in Europe and the imbalances between production and demand. We then discuss the current situation concerning barriers to rationalization, agreements between oil companies, and the consequences for the future of refining capacities and margins. (author)
Osten, James; Haltmaier, Susan
2000-01-01
This article examines the current status of the North American refining industry, and considers the North American economy and the growth in demand in the petroleum industry, petroleum product demand and quality, crude oil upgrading to meet product standards, and changes in crude oil feedstocks such as the use of heavier crudes and bitumens. Refining expansion, the declining profits in refining, and changes due to environmental standards are discussed. The Gross Domestic Product and oil demand for the USA, Canada, Mexico, and Venezuela for the years 1995-2020 are tabulated
Linearly Refined Session Types
Pedro Baltazar
2012-11-01
Session types capture precise protocol structure in concurrent programming, but do not specify properties of the exchanged values beyond their basic type. Refinement types are a form of dependent types that can address this limitation, combining types with logical formulae that may refer to program values and can constrain types using arbitrary predicates. We present a pi calculus with assume and assert operations, typed using a session discipline that incorporates refinement formulae written in a fragment of Multiplicative Linear Logic. Our original combination of session and refinement types, together with the well established benefits of linearity, allows very fine-grained specifications of communication protocols in which refinement formulae are treated as logical resources rather than persistent truths.
Refinement by interface instantiation
Hallerstede, Stefan; Hoang, Thai Son
2012-01-01
be easily refined. Our first contribution hence is a proposal for a new construct called interface that encapsulates the external variables, along with a mechanism for interface instantiation. Using the new construct and mechanism, external variables can be refined consistently. Our second contribution … is an approach for verifying the correctness of Event-B extensions using the supporting Rodin tool. We illustrate our approach by proving the correctness of interface instantiation.
Relational Demonic Fuzzy Refinement
Tchier, Fairouz
2014-01-01
We use relational algebra to define a refinement fuzzy order called demonic fuzzy refinement and also the associated fuzzy operators, which are fuzzy demonic join (⊔fuz), fuzzy demonic meet (⊓fuz), and fuzzy demonic composition (□fuz). Our definitions and properties are illustrated by some examples using ma…
Convergence study of global meshing on enamel-cement-bracket finite element model
Samshuri, S. F.; Daud, R.; Rojan, M. A.; Basaruddin, K. S.; Abdullah, A. B.; Ariffin, A. K.
2017-09-01
This paper presents a meshing convergence analysis of a finite element (FE) model for simulating enamel-cement-bracket fracture. Three different materials involving interface fracture are concerned in this study. The complex behavior of interface fracture due to stress concentration is the reason to have a well-constructed meshing strategy. In FE analysis, meshing size is a critical factor that influences the accuracy and computational time of the analysis. The convergence study employs a meshing scheme involving a critical area (CA) and a non-critical area (NCA) to ensure that optimum meshing sizes are acquired for this FE model. For NCA meshing, the areas of interest are the back of the enamel, the bracket ligature groove and the bracket wing. For CA meshing, the areas of interest are the enamel area close to the cement layer, the cement layer and the bracket base. The constant NCA meshing sizes tested are 1 and 0.4, and the constant CA meshing sizes tested are 0.4 and 0.1. Manipulative variables are randomly selected and must abide by the rule that the NCA size must be higher than the CA size. This study employs first principal stresses due to the brittle failure nature of the materials used. The best meshing size is selected according to a convergence error analysis. Results show that constant CA meshing is more stable than constant NCA meshing. A constant CA meshing of 0.05 was then tested to assess the accuracy of smaller meshing; however, this gave an unpromising result, as the errors increased. Thus, a constant CA of 0.1 with an NCA mesh of 0.15 to 0.3 is the most stable meshing, as the error in this region is lowest. A convergence test was conducted on three selected coarse, medium and fine meshes in the range of NCA mesh sizes of 0.15 to 0.3, with the CA mesh size held constant at 0.1. The results show that at the coarse mesh of 0.3, the error is 0.0003%, compared to the 3% acceptable error. Hence, the global meshing converges at a meshing size of CA 0.1 and NCA 0.15 for this model.
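The convergence error analysis used to select a meshing size can be sketched as the relative change between successive mesh solutions (the numbers below are illustrative only, chosen to mirror the 0.0003% vs. 3% comparison reported):

```python
def convergence_errors(values):
    """Relative change between successive mesh refinements, in percent."""
    return [abs(b - a) / abs(b) * 100.0
            for a, b in zip(values, values[1:])]

# hypothetical first-principal-stress results on coarse, medium, fine meshes
stresses = [98.2, 101.5, 101.503]
errs = convergence_errors(stresses)
converged = errs[-1] < 3.0    # 3 % acceptance threshold, as in the study
```

Once the change between two successive refinements falls well under the threshold, the coarser of the two meshes is taken as adequate.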
Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport
Howell, L H
2004-01-01
Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common
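The distribution of AMR grids among processors mentioned above can be sketched with a greedy least-loaded heuristic (a common load-balancing baseline, not necessarily what BoxLib/Raptor does; names are hypothetical):

```python
def distribute_grids(grid_costs, n_procs):
    """Greedy longest-processing-time assignment of grids to processors."""
    loads = [0.0] * n_procs
    for cost in sorted(grid_costs, reverse=True):
        loads[loads.index(min(loads))] += cost   # give grid to least loaded
    return loads

# six grids of varying estimated cost spread over three processors
loads = distribute_grids([8, 7, 5, 3, 2, 1], 3)
```

In a real AMR code the cost estimate would account for sweep work per cell and the distribution would be recomputed as grids appear and disappear between regrids.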
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
Glass, H. [Cellnet, Alpharetta, GA (United States)
2006-07-01
Mesh network applications are used by utilities for metering, demand response, and mobile workforce management. This presentation provided an overview of a multi-dimensional mesh application designed to offer improved scalability and higher throughput in advanced metering infrastructure (AMI) systems. Mesh applications can be used in AMI for load balancing and forecasting, as well as for distribution and transmission planning. New revenue opportunities can be realized through the application's ability to improve notification and monitoring services, and customer service communications. Mesh network security features include data encryption, data fragmentation and the automatic re-routing of data. In order to use mesh network applications, networks must have sufficient bandwidth and provide flexibility at the endpoint layer to support multiple devices from multiple vendors, as well as multiple protocols. It was concluded that smart meters will not enable energy response solutions without an underlying AMI that is reliable, scalable and self-healing. refs., tabs., figs.
A mesh density study for application to large deformation rolling process evaluation
Martin, J.A.
1997-12-01
When addressing large deformation through an elastic-plastic analysis, the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies considering accuracy of results and computer resource needs as applied to large deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. The effects of both mesh density and adaptive meshing on final results are evaluated for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length.
Zhang, Ai-Yong, E-mail: ayzhang@hfut.edu.cn; He, Yuan-Yi; Lin, Tan; Huang, Nai-Hui; Xu, Qiao; Feng, Jing-Wei, E-mail: jingweifeng@hfut.edu.cn
2017-05-15
Highlights: • A simple strategy was proposed to improve Cu₂O photochemical performance. • The photocatalysis-driven Fenton was developed for advanced water treatment. • The novel system had superior performance under visible light irradiation. • The catalytic mechanisms of the novel system were elucidated and clearly presented. - Abstract: Visible-light-driven photocatalysis is a promising technology for advanced water treatment, but it usually exhibits a low efficiency. Cu₂O is a low-cost semiconductor with a narrow band gap, high absorption coefficient and suitable conduction band, but suffers from low charge mobility, poor quantum yield and weak catalytic performance. Herein, the Cu₂O catalytic capacity for the degradation of refractory pollutants is drastically improved by a simple and effective strategy. By virtue of the synergistic effects between photocatalysis and Fenton, a novel and efficient photocatalysis-driven Fenton system, PFC, is originally proposed and experimentally validated using Cu₂O/Nano-C hybrids. The synergistic PFC is highly Nano-C-dependent and exhibits a significant superiority for the removal of rhodamine B and p-nitrophenol, two typical refractory pollutants in wastewater. The PFC superiority is mainly attributed to: (1) the rapid photo-electron transfer driven by a Schottky-like junction, (2) the selective O₂ reduction mediated by semi-metallic Nano-C for efficient H₂O₂ generation, (3) the specific H₂O₂ activation and large ·OH generation catalyzed by the Haber-Weiss Fenton mechanism, and (4) the accelerated Fe²⁺/Fe³⁺ cycling and robust Fe²⁺ regeneration via two additional pathways. Our findings might provide a new chance to overcome the intrinsic challenges of both photocatalysis and Fenton, as well as to develop novel technology for advanced water treatment.
Agneta eMontgomery
2016-01-01
The incidence of deep infection using a synthetic mesh in inguinal hernia repair is low, reported to be well below 1%. This is in contrast to incisional hernia surgery, where the reported incidence is 3% and 13%, respectively, comparing laparoscopic to open mesh repair in a Cochrane review. The main risk factors were long operation time, surgical site contamination and early wound complications. An infected mesh can be preserved using conservative treatment, where negative pressure wound therapy (VAC®) could play an important role. If this strategy fails, the mesh needs to be removed. This review aims to look at the evidence for situations where a biological mesh would work as a replacement for a removed infected synthetic mesh. Material and Methods: A literature search of the Medline database was performed using the PubMed search engine. Twenty publications were found relevant for this review. Results: For the studies reviewed, three options are presented: removal of the infected synthetic mesh alone, or replacement with either a new synthetic or a new biological mesh. Operations were all performed at specialist centers. Removal of the mesh alone was an option limited to inguinal hernias. In ventral/incisional hernias, the use of a biological mesh for replacement resulted in a very high recurrence rate if bridging was required. Either a synthetic or a biological mesh seems to work as a replacement when fascial closure can be achieved, though the evidence is very low. Conclusion: When required, either a synthetic or a biological mesh seems to work as a replacement for an infected synthetic mesh if the defect can be closed. It is, however, not recommended to use a biological mesh for bridging. Mesh replacement surgery is demanding and is recommended to be performed in a specialist center.
Streaming Compression of Hexahedral Meshes
Isenburg, M; Courbet, C
2010-02-03
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice, this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
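The memory behaviour enabled by topological finalization tags can be sketched with a toy stream processor (the stream encoding and names are invented for illustration; real streams carry full 8-vertex hexahedra and compressed output):

```python
def stream_peak_memory(stream):
    """Replay a vertex/cell/finalize stream, dropping each vertex as soon
    as its finalization tag arrives; return peak and final residency."""
    resident = {}
    peak = 0
    for tag, payload in stream:
        if tag == "v":                       # vertex: (id, position)
            vid, pos = payload
            resident[vid] = pos
        elif tag == "c":                     # cell: tuple of vertex ids
            assert all(v in resident for v in payload), "unfinalized use"
        elif tag == "f":                     # finalize: vertex id
            del resident[payload]
        peak = max(peak, len(resident))
    return peak, len(resident)

stream = [("v", (0, (0, 0, 0))), ("v", (1, (1, 0, 0))), ("c", (0, 1)),
          ("f", 0), ("v", (2, (0, 1, 0))), ("c", (1, 2)), ("f", 1), ("f", 2)]
peak, remaining = stream_peak_memory(stream)
```

Because each vertex is released at its finalization tag, the resident set stays far smaller than the full mesh, which is the property that lets such coders handle meshes larger than memory.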
Refining margins: recent trends
Baudoin, C.; Favennec, J.P.
1999-01-01
Despite a business environment that was globally mediocre, due primarily to the Asian crisis and to a mild winter in the northern hemisphere, the signs of improvement noted in the refining activity in 1996 were borne out in 1997. But the situation is not yet satisfactory in this sector: the low return on invested capital and the financing of environmental protection expenditure are giving cause for concern. In 1998, the drop in crude oil prices and the concomitant fall in petroleum product prices were ultimately rather favorable to margins. Two elements tended to put a damper on this relative optimism. First of all, margins continue to be extremely volatile and, secondly, the worsening of the economic and financial crisis observed during the summer caused a sharp decline in margins in all geographic regions, especially Asia. Since the beginning of 1999, refining margins have been weak and utilization rates of refining capacities have decreased. (authors)
Constancio, Silva
2006-07-01
In 2004, refining margins showed a clear improvement that persisted throughout the first three quarters of 2005. This enabled oil companies to post significantly higher earnings for their refining activity in 2004 compared to 2003, with the results of the first half of 2005 confirming this trend. As for petrochemicals, despite a steady rise in the naphtha price, higher cash margins enabled a turnaround in 2004 as well as a clear improvement in oil company financial performance that should continue in 2005, judging by the net income figures reported for the first half-year. Despite this favorable business environment, capital expenditure in refining and petrochemicals remained at a low level, especially investment in new capacity, but a number of projects are being planned for the next five years. (author)
Singh, I.J.
2002-01-01
The author discusses the history of the Indian refining industry and ongoing developments under the headings: the present state; refinery configuration; Indian capabilities for refinery projects; and reforms in the refining industry. Tables list India's petroleum refineries, giving location and capacity; new refinery projects together with location and capacity; and expansion projects of Indian petroleum refineries. The Indian refinery industry has undergone substantial expansion as well as technological changes over the past years. There has been progressive technology upgrading, improved energy efficiency, better environmental control and improved capacity utilisation. Major reform processes have been set in motion by the government of India: converting the refining industry from a centrally controlled, public-sector-dominated industry to a delicensed regime in a competitive market economy with the introduction of a liberal exploration policy; dismantling the administered price mechanism; and a 25-year hydrocarbon vision. (UK)
杨军
2012-01-01
With the development of full-service and 3G operations, extensive (coarse-grained) marketing has significantly lagged behind the development of the telecommunications market. Applying refined marketing, data mining, marketing management concepts and intelligent computing methods to the marketing of telecommunications products, a refined marketing strategy model based on customer behavior analysis in mobile communications was built. Based on customer behavior, the model builds a customer value model, a customer stickiness model, a customer churn model and a customer demand discrimination model; these four models form a marketing matrix used to segment customers, so that different retention or marketing strategies can be implemented for different customers, and the implementation of the model is then assessed. The precision marketing strategy can provide the marketing team with regular marketing support services and enhance the market competitiveness of the enterprise.
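The marketing-matrix segmentation described above can be sketched as a simple two-axis classification (the scores, thresholds and labels are entirely hypothetical; the paper's matrix uses four models rather than two):

```python
def segment(value_score, stickiness_score, threshold=0.5):
    """Place a customer in a 2x2 value/stickiness marketing matrix."""
    v = "high-value" if value_score >= threshold else "low-value"
    s = "sticky" if stickiness_score >= threshold else "at-risk"
    return f"{v}/{s}"

# hypothetical model outputs per customer: (value, stickiness) in [0, 1]
customers = {"A": (0.8, 0.9), "B": (0.8, 0.2), "C": (0.3, 0.1)}
segments = {name: segment(v, s) for name, (v, s) in customers.items()}
```

Each quadrant would then be mapped to a distinct retention or marketing strategy.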
2008-01-01
Investment rallied in 2007, and many distillation and conversion projects likely to reach the industrial stage were announced. With economic growth sustained in 2006 and still pronounced in 2007, oil demand remained strong - especially in emerging countries - and refining margins stayed high. Despite these favorable business conditions, tensions persisted in the refining sector, which has fallen far behind in terms of investing in refinery capacity. It will take renewed efforts over a long period to catch up. Looking at recent events that have affected the economy in many countries (e.g. the sub-prime crisis), prudence remains advisable
Mersiline mesh in premaxillary augmentation.
Foda, Hossam M T
2005-01-01
Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in which Mersiline mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersiline mesh were used: the Mersiline roll technique, for the cases with central symmetric deficiencies, and the Mersiline packing technique, for the cases with asymmetric deficiencies. Premaxillary augmentation with Mersiline mesh proved to be simple technically, easy to perform, and not associated with any complications. Periodic follow-up evaluation for a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained with no clinically detectable resorption of the mesh implant.
GENERATION OF IRREGULAR HEXAGONAL MESHES
Vlasov Aleksandr Nikolaevich
2012-07-01
Decomposition is performed in a constructive way and, as an option, it involves a meshless representation. Further, this mapping method is used to generate the calculation mesh. In this paper, the authors analyze different cases of mapping onto simply connected and bi-connected canonical domains. They present forward and backward mapping techniques. Their potential application to the generation of nonuniform meshes within the framework of the asymptotic homogenization theory is also demonstrated, to assess and project effective characteristics of heterogeneous materials (composites).
Cignoni, Paolo; Pietroni, Nico; Malomo, Luigi
2014-01-01
Mesh joinery is an innovative method to produce illustrative shape approximations suitable for fabrication. Mesh joinery is capable of producing complex fabricable structures in an efficient and visually pleasing manner. We represent an input geometry as a set of planar pieces arranged to compose a rigid structure, by exploiting an efficient slit mechanism. Since slices are planar, to fabricate them a standard 2D cutting system is enough. We automatically arrange slices according to a smooth ...
Marion, Pierre; Saint-Antonin, Valerie
2011-11-01
The major uncertainty characterizing the global energy landscape impacts particularly on transport, which remains the virtually exclusive bastion of the oil industry. The industry must therefore respond to increasing demand for mobility against a background marked by the emergence of alternatives to oil-based fuels and the need to reduce emissions of pollutants and greenhouse gases (GHG). It is in this context that the 'Refining 2030' study conducted by IFP Energies Nouvelles (IFPEN) forecasts what the global supply and demand balance for oil products could be, and highlights the type and geographical location of the refinery investment required. Our study shows that the bulk of the refining investment will be concentrated in the emerging countries (mainly those in Asia), whilst the areas historically strong in refining (Europe and North America) face reductions in capacity. In this context, the drastic reduction in the sulphur specification of bunker oil emerges as a structural issue for European refining, in the same way as increasingly restrictive regulation of refinery CO2 emissions (quotas/taxation) and the persistent imbalance between gasoline and diesel fuels. (authors)
Cai, Sixiang; Liu, Jie; Zha, Kaiwen; Li, Hongrui; Shi, Liyi; Zhang, Dengsong
2017-05-04
Owing to their advantages of strong mechanical stability, plasticity, thermal conductivity and mass transfer ability, metal foams or meshes are considered promising monolith supports for de-NOx applications. In this work, we developed a facile method for the decoration of porous Mn-Co bi-metal oxides on Fe meshes. The block-like structure was derived from in situ coating, and simultaneous nucleation and growth of the Mn-Co hydroxide precursor, while the porous Mn-Co oxides were formed via the calcination process. Moreover, the decoration of the high-purity Co2MnO4 spinel could lead to enhanced reducibility and adsorption behaviors, which are crucial to the catalytic process. Of note is the fact that the Fe mesh used in the synthesis procedure could be substituted by various metal supports including Ti mesh, Cu foam and Ni foam. Driven by the above motivations, metal supports decorated with Mn-Co oxides were evaluated as monolith de-NOx catalysts for the first time. Inspiringly, these catalysts demonstrate outstanding low-temperature catalytic activity, desirable stability and excellent H2O resistance. This work might open up a new path for the design and development of high-performance de-NOx monolith catalysts.
Method and system for mesh network embedded devices
Wang, Ray (Inventor)
2009-01-01
A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.
Mesh versus non-mesh repair of ventral abdominal hernias
Jawaid, M.A.; Talpur, A.H.
2008-01-01
To investigate the relative effectiveness of mesh and suture repair of ventral abdominal hernias in terms of clinical outcome, quality of life and rate of recurrence in both techniques. This is a retrospective descriptive analysis of 236 patients with mesh and non-mesh repair of primary ventral hernias performed between January 2000 and December 2004 at the Surgery Department, Liaquat University of Medical and Health Sciences, Jamshoro. The record sheets of the patients were analyzed and data retrieved to compare the short-term and long-term results of both techniques. The retrieved data were statistically analyzed using SPSS version 11. There were 43 (18.22%) males and 193 (81.77%) females with a mean age of 51.79 years and an age range of 22-81 years. Para-umbilical hernia was the commonest ventral hernia, accounting for 49.8% (n=118) of the total study population, followed by incisional hernia comprising 24% (n=57) of the total number. There was a significant difference in the recurrence rate at the 3-year interval, with 23/101 (22.77%) recurrences in suture-repaired subjects compared to 10/135 (7.40%) in the mesh repair group. Chronic pain lasting up to 1-2 years was noted in 14 patients with suture repair. Wound infection was comparatively more common (8.14%) in the mesh group. Other variables such as operative and postoperative complications, total hospital stay and quality of life are also discussed. Mesh repair of ventral hernia is much superior to non-mesh suture repair in terms of recurrence and overall outcome. (author)
User Manual for the PROTEUS Mesh Tools
Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-06-01
This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific to a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.
Yamaguchi, N.D.
1998-01-01
The paper reviews the history, present position and future prospects of the petroleum industry in the USA. The main focus is on supply and demand, the high quality of the products, refinery capacity and product trade balances. Diagrams show historical trends in output, product demand, demand for transport fuels and oil, refinery capacity, refinery closures, and imports and exports. Some particularly salient points brought out were (i) production of US crude shows a marked downward trend but imports of crude will continue to increase, (ii) product demand will continue to grow even though the levels are already high, (iii) the demand is dominated by those products that typically yield the highest income for the refiner, (i.e. high quality transport fuels for environmental compliance), (iv) refinery capacity has decreased since 1980 and (v) refining will continue to have financial problems but will still be profitable. (UK)
Boje, G.
1998-01-01
The petroleum supply and demand balance was discussed and a comparison between Canadian and U.S. refineries was provided. The impact of changing product specifications on the petroleum industry was also discussed. The major changes include sulphur reductions in gasoline, benzene and MMT additives. These changes have been made in an effort to satisfy environmental needs. Geographic margin variations in refineries between east and west were reviewed. An overview of findings from the Solomon Refining Study of Canadian and American refineries, which has been very complimentary of the Canadian refining industry, was provided. From this writer's point of view refinery utilization has improved but there is a threat from increasing efficiency of US competitors. Environmental issues will continue to impact upon the industry and while the chances for making economic returns on investment are good for the years ahead, it will be a challenge to maintain profitability
Calvet, B.
1993-01-01
Over recent years, the refining industry has had to grapple with a growing burden of environmental and safety regulations concerning not only its plants and other facilities, but also its end products. At the same time, it has had to bear the effects of the reduction of the special status that used to apply to petroleum, and the consequences of economic freedom, to which we should add, as specifically concerns the French market, the impact of energy policy and the pro-nuclear option. The result is a drop in heavy fuel oil from 36 million tonnes per year in 1973 to 6.3 million in 1992, and in home-heating fuel from 37 to 18 million per year. This fast-moving market is highly competitive. The French market in particular is wide open to imports, but the refining companies are still heavy exporters for those products with high added-value, like lubricants, jet fuel, and lead-free gasolines. The competition has led the refining companies to commit themselves to quality, and to publicize their efforts in this direction. This is why the long-term perspectives for petroleum fuels are still wide open. This is supported by the probable expectation that the goal of economic efficiency is likely to soften the effects of the energy policy, which penalizes petroleum products, in that they have now become competitive again. In the European context, with the challenge of environmental protection and the decline in heavy fuel outlets, French refining has to keep on improving the quality of its products and plants, which means major investments. The industry absolutely must return to a more normal level of profitability, in order to sustain this financial effort, and generate the prosperity of its high-performance plants and equipment. 1 fig., 5 tabs
Process for refining hydrocarbons
Risenfeld, E H
1924-11-26
A process is disclosed for the refining of hydrocarbons or other mixtures through treatment in vapor form with metal catalysts, characterized in that the catalysts used are metals obtained by reduction of the oxides of minerals of the iron group, and in that the hydrocarbon vapors, in the presence of water vapor, are led over these catalysts at temperatures from 200 to 300°C.
2008-01-01
For oil companies to invest in new refining and conversion capacity, favorable conditions over time are required. In other words, refining margins must remain high and demand sustained over a long period. That was the situation prevailing before the onset of the financial crisis in the second half of 2008. The economic conjuncture has taken a substantial turn for the worse since then and the forecasts for 2009 do not look bright. Oil demand is expected to decrease in the OECD countries and to grow much more slowly in the emerging countries. It is anticipated that refining margins will fall in 2009 - in 2008, they slipped significantly in the United States - as a result of increasingly sluggish demand, especially for light products. The next few months will probably be unfavorable to investment. In addition to a gloomy business outlook, there may also be a problem of access to sources of financing. As for investment projects, a mainstream trend has emerged in the last few years: a shift away from the regions that have historically been most active (the OECD countries) towards certain emerging countries, mostly in Asia or the Middle East. The new conjuncture will probably not change this trend
Refining discordant gene trees.
Górecki, Pawel; Eulenstein, Oliver
2014-01-01
Evolutionary studies are complicated by discordance between gene trees and the species tree in which they evolved. Dealing with discordant trees often relies on comparison costs between gene and species trees, including the well-established Robinson-Foulds, gene duplication, and deep coalescence costs. While these costs have provided credible results for binary rooted gene trees, corresponding cost definitions for non-binary unrooted gene trees, which occur frequently in practice, are challenged by biological realism. We propose a natural extension of the well-established costs for comparing unrooted and non-binary gene trees with rooted binary species trees using a binary refinement model. For the duplication cost we describe an efficient algorithm that is based on a linear time reduction and also computes an optimal rooted binary refinement of the given gene tree. Finally, we show that similar reductions lead to solutions for computing the deep coalescence and the Robinson-Foulds costs. Our binary refinement of Robinson-Foulds, gene duplication, and deep coalescence costs for unrooted and non-binary gene trees, together with the linear time reductions provided here for computing these costs, significantly extends the range of trees that can be incorporated into approaches dealing with discordance.
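As a minimal illustration of one of the costs named above (not the authors' unrooted/non-binary extension), the classic Robinson-Foulds cost between two rooted binary trees can be computed as the size of the symmetric difference of their clade sets. The nested-tuple tree encoding here is a hypothetical simplification chosen for brevity:

```python
# Sketch: Robinson-Foulds cost for rooted binary trees given as nested
# tuples, e.g. (("a", "b"), ("c", "d")). Leaves are strings.

def clades(tree):
    """Return (leaf set, set of non-trivial clades) for a nested-tuple tree."""
    if isinstance(tree, str):                      # a leaf: no clade of its own
        return frozenset([tree]), set()
    left_leaves, left_clades = clades(tree[0])
    right_leaves, right_clades = clades(tree[1])
    leaves = left_leaves | right_leaves
    return leaves, left_clades | right_clades | {frozenset(leaves)}

def robinson_foulds(t1, t2):
    """Symmetric-difference (RF) cost: clades present in one tree but not the other."""
    _, c1 = clades(t1)
    _, c2 = clades(t2)
    return len(c1 ^ c2)

t1 = (("a", "b"), ("c", "d"))
t2 = (("a", "c"), ("b", "d"))
print(robinson_foulds(t1, t1))  # identical trees -> 0
print(robinson_foulds(t1, t2))  # all internal clades differ -> 4
```

The paper's contribution is precisely the harder case this sketch avoids: defining and computing such costs when the gene tree is unrooted and non-binary.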
Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D
2005-01-01
We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications
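The abstract above reduces layout computation to combinatorial optimization; a much simpler stand-in for the same intuition (spatially nearby elements should be nearby in memory) is to order triangles along a Morton (Z-order) curve of their centroids. This is a hedged illustration of the general idea, not the authors' cache-oblivious metric or multilevel minimizer:

```python
# Sketch: reorder triangles by the Morton (Z-order) code of their centroids,
# so spatially close triangles tend to land close together in memory.

def morton2(x, y, bits=16):
    """Interleave the bits of two non-negative integer coordinates."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def layout(triangles):
    """Order triangles (lists of three 2D points in [0,1)^2) along a Z-order curve."""
    def key(tri):
        cx = sum(p[0] for p in tri) / 3.0
        cy = sum(p[1] for p in tri) / 3.0
        return morton2(int(cx * 65535), int(cy * 65535))
    return sorted(triangles, key=key)
```

After `layout`, triangles from the same spatial region occupy contiguous index ranges, which is the kind of coherence a cache-aware renderer or collision-detection pass benefits from.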
Connectivity editing for quadrilateral meshes
Peng, Chihan; Zhang, Eugene; Kobayashi, Yoshihiro; Wonka, Peter
2011-01-01
We propose new connectivity editing operations for quadrilateral meshes with the unique ability to explicitly control the location, orientation, type, and number of the irregular vertices (valence not equal to four) in the mesh while preserving sharp edges. We provide theoretical analysis on which editing operations are possible and impossible and introduce three fundamental operations to move and re-orient a pair of irregular vertices. We argue that our editing operations are fundamental, because they change the quad mesh only in the smallest possible region and involve the fewest irregular vertices (i.e., two). The irregular vertex movement operations are supplemented by operations for the splitting, merging, canceling, and aligning of irregular vertices. We explain how the proposed high-level operations are realized through graph-level editing operations such as quad collapses, edge flips, and edge splits. The utility of these mesh editing operations is demonstrated by improving the connectivity of quad meshes generated from state-of-the-art quadrangulation techniques. © 2011 ACM.
Oil refining expansion criteria for Brazil
Tavares, M.E.E.; Szklo, A.S.; Machado, G.V.; Schaeffer, R.; Mariano, J.B.; Sala, J.F.
2006-01-01
This paper assesses different strategies for the expansion of Brazil's oil refining segment, using criteria that range from energy security (reducing imports and vulnerability for key products) through to maximizing the profitability of this sector (boosting the output of higher value oil products) and adding value to Brazil's oil production (reducing exports of heavy acid oil). The development prospects are analyzed for conventional fuel production technology routes, sketching out three possible refining schemes for Brazilian oil and a GTL plant for producing gasoil from natural gas. Market scenario simulations indicate that investments will be required in Brazil's oil refining segment over and above those allocated to planned modifications in its current facilities, reducing the nation's vulnerability in terms of gasoil and petrochemical naphtha imports. Although not economically attractive, oil refining is a key activity that is crucial to oil company strategies. The decision to invest in this segment depends on local infrastructure conditions, environmental constraints and fuel specifications, in addition to oil company strategies, steady growth in demand and the definition of a government policy that eases institutional risks. (author)
Nahavandi, N.; Minuchehr, A.; Zolfaghari, A.; Abbasi, M.
2015-01-01
Highlights: • A powerful hp-SEM refinement approach for the P_N neutron transport equation is presented. • The method provides great geometrical flexibility at lower computational cost. • Arbitrarily high-order and non-uniform meshes can be used. • Both a posteriori and a priori local error estimation approaches have been employed. • Highly accurate results are compared against other common adaptive and uniform grids. - Abstract: In this work we present the adaptive hp-SEM approach, obtained by combining the Spectral Element Method (SEM) with adaptive hp refinement. The SEM nodal discretization and adaptive hp grid refinement for the even-parity Boltzmann neutron transport equation yield a powerful refinement approach with highly accurate solutions. In this regard a computer code has been developed to solve the multi-group neutron transport equation in one-dimensional geometry using even-parity transport theory. The spatial dependence of the flux is represented via the SEM with Lobatto orthogonal polynomials. Two common error estimation approaches, a posteriori and a priori, have been implemented. The incorporation of SEM nodal discretization and adaptive hp grid refinement leads to highly accurate solutions. The efficiency of coarser meshes and the significant reduction in program runtime, in comparison with other common refinement methods and uniform meshing approaches, are demonstrated on several well-known transport benchmarks
Reactor physics verification of the MCNP6 unstructured mesh capability
Burke, T. P.; Kiedrowski, B. C.; Martz, R. L.; Martin, W. R.
2013-01-01
The Monte Carlo software package MCNP6 has the ability to transport particles on unstructured meshes generated from the Computer-Aided Engineering software Abaqus. Verification is performed using benchmarks with features relevant to reactor physics - Big Ten and the C5G7 computational benchmark. Various meshing strategies are tested and results are compared to reference solutions. Computational performance results are also given. The conclusions show MCNP6 is capable of producing accurate calculations for reactor physics geometries and the computational requirements for small lattice benchmarks are reasonable on modern computing platforms. (authors)
Adaptive temporal refinement in injection molding
Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek
2018-05-01
Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is the temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in the spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to a higher simulation precision, while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used for verifying our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves to provide reliable timing measurements and to illustrate the efficiency aspects of the filling simulation of complex 3D molds when adaptive temporal refinement is applied.
Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan
2014-09-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å² as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.
Benazzi, E.
2003-01-01
Down sharply in 2002, refining margins showed a clear improvement in the first half-year of 2003. As a result, the earnings reported by oil companies for financial year 2002 were significantly lower than in 2001, but the prospects are brighter for 2003. In the petrochemicals sector, slow demand and higher feedstock prices eroded margins in 2002, especially in Europe and the United States. The financial results for the first part of 2003 seem to indicate that sector profitability will not improve before 2004. (author)
Benazzi, E.; Alario, F.
2004-01-01
In 2003, refining margins showed a clear improvement that continued throughout the first three quarters of 2004. Oil companies posted significantly higher earnings in 2003 compared to 2002, with the results of first quarter 2004 confirming this trend. Due to higher feedstock prices, the implementation of new capacity and more intense competition, the petrochemicals industry was not able to boost margins in 2003. In such difficult business conditions, aggravated by soaring crude prices, the petrochemicals industry is not likely to see any improvement in profitability before the second half of 2004. (author)
1946-07-05
A process is described for refining raw oils such as mineral oils, shale oils, tar, and their fractions and derivatives, by extraction with a selected solvent or a mixture of solvents containing water, the solvent being more favorable to the hydrocarbons poor in hydrogen than to the hydrocarbons rich in hydrogen. The process is characterized by the addition of an auxiliary solvent for the water, which can be mixed with or dissolved in the water and in the solvent or the dissolving mixture, thereby increasing the solubility of the water in the solvent or the dissolving mixture.
Tetrahedral meshing via maximal Poisson-disk sampling
Guo, Jianwei
2016-02-15
In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
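The core primitive the abstract builds on, Poisson-disk sampling, can be sketched with naive dart throwing in the unit square: accept a uniform candidate only if it keeps at least distance r from every accepted sample. This is a simplified 2D stand-in, not the authors' maximal (MPS) algorithm, which additionally guarantees full domain coverage:

```python
# Sketch: dart-throwing Poisson-disk sampling in the unit square.
import random

def poisson_disk_darts(r, n_candidates=2000, seed=1):
    """Accept candidates that keep distance >= r to all accepted samples."""
    random.seed(seed)
    samples = []
    for _ in range(n_candidates):
        p = (random.random(), random.random())
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r
               for q in samples):
            samples.append(p)
    return samples

# every pair of accepted samples ends up at least r = 0.1 apart
pts = poisson_disk_darts(0.1)
```

In the paper's pipeline, such well-distributed samples on the boundary and interior of the domain become the vertex set from which a 3D Delaunay (or regular) triangulation extracts the tetrahedral mesh.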
Atlantic Basin refining profitability
Jones, R.J.
1998-01-01
A review of the profitability margins of oil refining in the Atlantic Basin was presented. Petroleum refiners face the continuous challenge of balancing supply with demand. It would appear that the profitability margins in the Atlantic Basin will increase significantly in the near future because of shrinking supply surpluses. Refinery capacity utilization has reached higher levels than ever before. The American Petroleum Institute reported that in August 1997, U.S. refineries used 99 per cent of their capacity for several weeks in a row. U.S. gasoline inventories have also declined as the industry has focused on reducing capital costs. This is further evidence that supply and demand are tightly balanced. Some of the reasons for tightening supplies were reviewed. It was predicted that U.S. gasoline demand will continue to grow in the near future. Gasoline demand has not declined as expected because new vehicles are not any more fuel efficient today than they were a decade ago. Although federally-mandated fuel efficiency standards were designed to lower gasoline consumption, they may actually have prevented consumption from falling. Atlantic margins were predicted to continue moving up because of the supply and demand evidence: high capacity utilization rates, low operating inventories, limited capacity addition resulting from lower capital spending, continued U.S. gasoline demand growth, and steady total oil demand growth. 11 figs
Petroleum refining industry in China
Walls, W.D.
2010-01-01
The oil refining industry in China has faced rapid growth in oil imports of increasingly sour grades of crude with which to satisfy growing domestic demand for a slate of lighter and cleaner finished products sold at subsidized prices. At the same time, the world petroleum refining industry has been moving from one that serves primarily local and regional markets to one that serves global markets for finished products, as world refining capacity utilization has increased. Globally, refined product markets are likely to experience continued globalization until refining investments significantly expand capacity in key demand regions. We survey the oil refining industry in China in the context of the world market for heterogeneous crude oils and growing world trade in refined petroleum products. (author)
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
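The equidistribution idea underlying such moving-mesh methods can be shown in one dimension: place grid points so that each cell carries an equal share of the integral of a monitor function M(x), which concentrates points where M is large (e.g. near a sharp front). This is a static illustrative sketch, not the paper's MMPDE formulation; the Gaussian monitor below is a made-up example:

```python
# Sketch: 1D mesh equidistribution of a monitor function M(x) on [a, b].
import math

def equidistribute(monitor, n, a=0.0, b=1.0, resolution=10000):
    """Return n+1 mesh points so each cell holds an equal share of integral(M)."""
    xs = [a + (b - a) * i / resolution for i in range(resolution + 1)]
    # cumulative trapezoidal integral of the monitor on a fine background grid
    cum = [0.0]
    for i in range(resolution):
        h = xs[i + 1] - xs[i]
        cum.append(cum[-1] + 0.5 * h * (monitor(xs[i]) + monitor(xs[i + 1])))
    total = cum[-1]
    # invert the cumulative integral at n+1 equal increments
    mesh, j = [], 0
    for k in range(n + 1):
        target = total * k / n
        while j < resolution and cum[j + 1] < target:
            j += 1
        t = 0.0 if cum[j + 1] == cum[j] else (target - cum[j]) / (cum[j + 1] - cum[j])
        mesh.append(xs[j] + t * (xs[j + 1] - xs[j]))
    return mesh

# monitor peaks near x = 0.5, so cells cluster around the "front"
mesh = equidistribute(lambda x: 1.0 + 50.0 * math.exp(-200 * (x - 0.5) ** 2), 40)
```

In the moving-mesh PDE approach the same equidistribution principle is imposed continuously in time, with the mesh equation solved alternately with the physical PDE, rather than by a one-shot inversion as here.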
Resterilized Polypropylene Mesh for Inguinal Hernia Repair
2018-04-19
Apr 19, 2018 ... Conclusion: The use of sterilized polypropylene mesh for the repair of inguinal ... and nonabsorbable materials to reduce the tissue–mesh. INTRODUCTION ... which we have been practicing in our center since we introduced ...
Management of complications of mesh surgery.
Lee, Dominic; Zimmern, Philippe E
2015-07-01
Transvaginal placements of synthetic mid-urethral slings and vaginal meshes have largely superseded traditional tissue repairs in the current era because of presumed efficacy and ease of implant with device 'kits'. The use of synthetic material has generated novel complications including mesh extrusion, pelvic and vaginal pain and mesh contraction. In this review, our aim is to discuss the management, surgical techniques and outcomes associated with mesh removal. Recent publications have seen an increase in presentation of these mesh-related complications, and reports from multiple tertiary centers have suggested that not all patients benefit from surgical intervention. Although the true incidence of mesh complications is unknown, recent publications can serve to guide physicians and inform patients of the surgical outcomes from mesh-related complications. In addition, the literature highlights the growing need for a registry to account for a more accurate reporting of these events and to counsel patients on the risk and benefits before proceeding with mesh surgeries.
Textile properties of synthetic prolapse mesh in response to uniaxial loading
Barone, William R.; Moalli, Pamela A.; Abramowitch, Steven D.
2016-01-01
, with values decreasing by as much as 87% (P mesh products that were tested were found to have porosities that approached 0% and 0 pores with diameters >1 mm. CONCLUSION: In this study, it was shown that the pore size of current prolapse meshes dramatically decreases in response to mechanical loading. These findings suggest that prolapse meshes, which are more likely to experience tensile forces in vivo relative to hernia repair meshes, have pores that are unfavorable for tissue integration after surgical tensioning and/or loading in urogynecologic surgeries. Such decreases in pore geometry support the hypothesis that regional increases in the concentration of mesh lead to an enhanced local foreign body response. Although pore deformation in transvaginal meshes requires further characterization, the findings presented here provide a mechanical understanding that can be used to recognize potential areas of concern for complex mesh geometries. Understanding mesh mechanics in response to surgical and in vivo loading conditions may provide improved design criteria for mesh and a refinement of surgical techniques, ultimately leading to better patient outcomes. PMID:27001219
Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems
Ibrahim, A. M.; Peplow, D. E.; Mosher, S. W.; Wagner, J. C.; Evans, T. M.; Wilson, P. P.; Sawan, M. E.
2013-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class supercomputer. (authors)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
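The weight window coarsening idea above — a variance-reduction map stored on a coarser mesh than the deterministic solution — can be sketched very simply: fine-mesh weight-window bounds are collapsed onto coarser cells. This is a hypothetical 1D illustration with block averaging as the collapse operator; the actual CADIS/FW-CADIS implementations use their own mesh structures and collapse rules.

```python
import numpy as np

def coarsen_weight_windows(ww_fine, factor):
    """Collapse a fine weight-window map onto a coarser mesh by averaging
    blocks of cells (1D sketch; hypothetical interface). Decoupling the
    weight-window mesh from the deterministic mesh trades some
    variance-reduction fidelity for a much smaller map in memory."""
    n = len(ww_fine) - len(ww_fine) % factor  # drop any ragged tail for simplicity
    blocks = np.asarray(ww_fine[:n]).reshape(-1, factor)
    return blocks.mean(axis=1)

fine = np.array([1.0, 1.0, 2.0, 2.0, 4.0, 4.0, 8.0, 8.0])
coarse = coarsen_weight_windows(fine, 2)  # one bound per pair of fine cells
```

The coarse map here has half the storage of the fine one, which is the memory effect the abstract describes, independent of the deterministic mesh resolution.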
User Manual for the PROTEUS Mesh Tools
Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-09-19
PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given an input mesh format acceptable to PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e. mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input is specific to a given mesh tool (such as .axial
Comparing Refinements for Failure and Bisimulation Semantics
Eshuis, H.; Fokkinga, M.M.
2002-01-01
Refinement in bisimulation semantics is defined differently from refinement in failure semantics: in bisimulation semantics refinement is based on simulations between labelled transition systems, whereas in failure semantics refinement is based on inclusions between failure systems. There exist
Voltammetry at micro-mesh electrodes
Wadhawan Jay D.
2003-01-01
The voltammetry at three micro-mesh electrodes is explored. It is found that at sufficiently short experimental durations, the micro-mesh working electrode first behaves as an ensemble of microband electrodes, then follows the behaviour anticipated for an array of diffusion-independent micro-ring electrodes of the same perimeter as individual grid-squares within the mesh. During prolonged electrolysis, the micro-mesh electrode follows that behaviour anticipated theoretically for a cubically-packed partially-blocked electrode. Application of the micro-mesh electrode for the electrochemical determination of carbon dioxide in DMSO electrolyte solutions is further illustrated.
22nd International Meshing Roundtable
Staten, Matthew
2014-01-01
This volume contains the articles presented at the 22nd International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on Oct 13-16, 2013 in Orlando, Florida, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics and visualization.
21st International Meshing Roundtable
Weill, Jean-Christophe
2013-01-01
This volume contains the articles presented at the 21st International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on October 7–10, 2012 in San Jose, CA, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics, and visualization.
Commercial refining in the Mediterranean
Packer, P.
1999-01-01
About 9% of the world's oil refining capacity is on the Mediterranean: some of the world's biggest and most advanced refineries are on Sicily and Sardinia. The Mediterranean refineries are important suppliers to southern Europe and N. Africa. The article discusses commercial refining in the Mediterranean under the headings of (i) historic development, (ii) product demand, (iii) refinery configurations, (iv) refined product trade, (v) financial performance and (vi) future outlook. Although some difficulties are foreseen, refining in the Mediterranean is likely to continue to be important well into the 21st century. (UK)
An efficient approach to unstructured mesh hydrodynamics on the Cell Broadband Engine (U)
Ferenbaugh, Charles R [Los Alamos National Laboratory
2010-12-14
Unstructured mesh physics for the Cell Broadband Engine (CBE) has received little or no attention to date, largely because the CBE architecture poses particular challenges for unstructured mesh algorithms. SPU memory management strategies such as data preloading cannot be applied to the irregular memory storage patterns of unstructured meshes; and the SPU vector instruction set does not support the indirect addressing needed by connectivity arrays. This paper presents an approach to unstructured mesh physics that addresses these challenges, by creating a new mesh data structure and reorganizing code to give efficient CBE performance. The approach is demonstrated on the FLAG production hydrodynamics code using standard test problems, and results show an average speedup of more than 5x over the original code.
An efficient approach to unstructured mesh hydrodynamics on the Cell Broadband Engine
Ferenbaugh, Charles R [Los Alamos National Laboratory
2010-01-01
Unstructured mesh physics for the Cell Broadband Engine (CBE) has received little or no attention to date, largely because the CBE architecture poses particular challenges for unstructured mesh algorithms. The most common SPU memory management strategies cannot be applied to the irregular memory access patterns of unstructured meshes, and the SPU vector instruction set does not support the indirect addressing needed by connectivity arrays. This paper presents an approach to unstructured mesh physics that addresses these challenges, by creating a new mesh data structure and reorganizing code to give efficient CBE performance. The approach is demonstrated on the FLAG production hydrodynamics code using standard test problems, and results show an average speedup of more than 5x over the original code.
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High-order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
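The memory saving behind the two-step projection described above can be illustrated with the standard tensor-product identity: applying a lower-dimensional projection matrix in two sweeps is algebraically equivalent to applying the full Kronecker-product operator, without ever forming it. The matrices below are random stand-ins, not real mortar projections.

```python
import numpy as np

def two_step_project(P, F):
    """Apply a 2D face projection as two 1D sweeps (rows, then columns).
    Equivalent to kron(P, P) @ vec(F), but never forms the big matrix."""
    return P @ F @ P.T

rng = np.random.default_rng(0)
n, m = 6, 4
P = rng.standard_normal((m, n))   # stand-in for a projection derived in lower dimension
F = rng.standard_normal((n, n))   # face data on a nonconforming element

G_two_step = two_step_project(P, F)                          # stores only m*n entries of P
G_direct = (np.kron(P, P) @ F.reshape(-1)).reshape(m, m)     # stores (m*n)^2 entries
```

For 3-D spectral elements the gap widens further: the explicit operator scales with the square of the face point count, while the swept form only ever stores the small 2-D matrices, which is the saving the abstract claims.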
On Modal Refinement and Consistency
Nyman, Ulrik; Larsen, Kim Guldstrand; Wasowski, Andrzej
2007-01-01
Almost 20 years after the original conception, we revisit several fundamental questions about modal transition systems. First, we demonstrate the incompleteness of the standard modal refinement using a counterexample due to Hüttel. Deciding any refinement, complete with respect to the standard...
Crystal structure refinement with SHELXL
Sheldrick, George M., E-mail: gsheldr@shelx.uni-ac.gwdg.de [Department of Structural Chemistry, Georg-August Universität Göttingen, Tammannstraße 4, Göttingen 37077 (Germany)
2015-01-01
New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.
Jennings Jason
2010-01-01
Laparoscopic inguinal herniorraphy via a transabdominal preperitoneal (TAPP) approach using polypropylene mesh (Mesh) and staples is an accepted technique. Mesh induces a localised inflammatory response that may extend to, and involve, adjacent abdominal and pelvic viscera such as the appendix. We present an interesting case of suspected Mesh-induced appendicitis treated successfully with laparoscopic appendicectomy, without Mesh removal, in an elderly gentleman who presented with symptoms and signs of acute appendicitis 18 months after laparoscopic inguinal hernia repair. Possible mechanisms for Mesh-induced appendicitis are briefly discussed.
Y. FASSIN
2008-01-01
The popularity of the stakeholder model has been achieved thanks to its powerful visual scheme and its very simplicity. Stakeholder management has become an important tool for transferring ethics to management practice and strategy. Nevertheless, legitimate criticism continues to insist on clarification and emphasises the perfectible nature of the model. Here, rather than building on the discussion from a philosophical or theoretical point of view, a different and innovative approach has been c...
Parallel Performance Optimizations on Unstructured Mesh-based Simulations
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid
2015-01-01
© The Authors. Published by Elsevier B.V. This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
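The cache-efficiency idea above — ordering mesh cells so that neighbours end up close in memory — can be sketched with a simple breadth-first renumbering of the cell adjacency graph. This is a generic stand-in for the paper's predictive ordering, not the MPAS-Ocean implementation; the function name and the toy ring mesh are illustrative assumptions.

```python
from collections import deque

def bfs_reorder(adjacency):
    """Breadth-first renumbering of mesh cells: cells that share an edge
    are placed near each other in the new ordering, improving locality
    when unstructured loops walk the connectivity."""
    n = len(adjacency)
    order, seen = [], [False] * n
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        q = deque([start])
        while q:
            c = q.popleft()
            order.append(c)
            for nb in adjacency[c]:
                if not seen[nb]:
                    seen[nb] = True
                    q.append(nb)
    return order  # order[k] = old index of the cell placed at new slot k

# toy mesh: a ring of 6 cells, each touching its two neighbours
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
new_order = bfs_reorder(adj)
```

After renumbering, gather/scatter loops over cell neighbours touch nearby memory locations far more often, which is the reuse effect the abstract exploits.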
High-order discrete ordinate transport in non-conforming 2D Cartesian meshes
Gastaldo, L.; Le Tellier, R.; Suteau, C.; Fournier, D.; Ruggieri, J. M.
2009-01-01
We present in this paper a numerical scheme for solving the time-independent first-order form of the Boltzmann equation in non-conforming 2D Cartesian meshes. The flux solution technique used here is the discrete ordinate method and the spatial discretization is based on discontinuous finite elements. In order to have p-refinement capability, we have chosen a hierarchical polynomial basis based on Legendre polynomials. The h-refinement capability is also available, and the element interface treatment has been simplified by the use of special functions decomposed over the mesh entities of an element. The comparison to a classical SN method using the Diamond Differencing scheme as spatial approximation confirms the good behaviour of the method. (authors)
INGEN: a general-purpose mesh generator for finite element codes
Cook, W.A.
1979-05-01
INGEN is a general-purpose mesh generator for two- and three-dimensional finite element codes. The basic parts of the code are surface and three-dimensional region generators that use linear-blending interpolation formulas. These generators are based on an i, j, k index scheme that is used to number nodal points, construct elements, and develop displacement and traction boundary conditions. This code can generate truss elements (2 nodal points); plane stress, plane strain, and axisymmetric two-dimensional continuum elements (4 to 8 nodal points); plate elements (4 to 8 nodal points); and three-dimensional continuum elements (8 to 21 nodal points). The traction loads generated are consistent with the element generated. The expansion--contraction option is of special interest. This option makes it possible to change an existing mesh such that some regions are refined and others are made coarser than the original mesh. 9 figures
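An i, j, k index scheme of the kind the abstract describes maps each index triple to a global node number and builds element connectivity arithmetically. The sketch below assumes an i-fastest numbering and 8-node bricks; it illustrates the general idea only and is not INGEN's actual numbering convention.

```python
def node_id(i, j, k, ni, nj):
    """Map an (i, j, k) index triple to a global node number,
    with i varying fastest (illustrative convention)."""
    return i + ni * (j + nj * k)

def hex_connectivity(ni, nj, nk):
    """8-node brick element connectivity for an ni x nj x nk node grid."""
    elems = []
    for k in range(nk - 1):
        for j in range(nj - 1):
            for i in range(ni - 1):
                n0 = node_id(i, j, k, ni, nj)
                elems.append([n0, n0 + 1,                       # bottom face
                              n0 + ni + 1, n0 + ni,
                              n0 + ni * nj, n0 + ni * nj + 1,   # top face
                              n0 + ni * nj + ni + 1, n0 + ni * nj + ni])
    return elems

elems = hex_connectivity(3, 2, 2)   # 2 x 1 x 1 = 2 brick elements
```

Because elements, boundary faces, and traction patches are all index ranges in this scheme, operations like the expansion/contraction option reduce to remapping index intervals rather than editing explicit node lists.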
Sierra toolkit computational mesh conceptual model
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-01-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Anisotropic evaluation of synthetic surgical meshes.
Saberski, E R; Orenstein, S B; Novitsky, Y W
2011-02-01
The material properties of meshes used in hernia repair contribute to the overall mechanical behavior of the repair. The anisotropic potential of synthetic meshes, representing a difference in material properties (e.g., elasticity) in different material axes, is not well defined to date. Haphazard orientation of anisotropic mesh material can contribute to inconsistent surgical outcomes. We aimed to characterize and compare anisotropic properties of commonly used synthetic meshes. Six different polypropylene (Trelex®, ProLite™, Ultrapro™), polyester (Parietex™), and PTFE-based (Dualmesh®, Infinit) synthetic meshes were selected. Longitudinal and transverse axes were defined for each mesh, and samples were cut in each axis orientation. Samples underwent uniaxial tensile testing, from which the elastic modulus (E) in each axis was determined. The degree of anisotropy (λ) was calculated as a logarithmic expression of the ratio between the elastic modulus in each axis. Five of six meshes displayed significant anisotropic behavior. Ultrapro™ and Infinit exhibited approximately 12- and 20-fold differences between perpendicular axes, respectively; Trelex®, ProLite™, and Parietex™ exhibited 2.3- to 2.4-fold differences. Dualmesh® was the least anisotropic mesh, without marked difference between the axes. Anisotropy of synthetic meshes has been underappreciated. In this study, we found striking differences between elastic properties of perpendicular axes for most commonly used synthetic meshes. Indiscriminate orientation of anisotropic mesh may adversely affect hernia repairs. Proper labeling of all implants by manufacturers should be mandatory. Understanding the specific anisotropic behavior of synthetic meshes should allow surgeons to employ rational implant orientation to maximize outcomes of hernia repair.
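The degree of anisotropy the abstract describes — "a logarithmic expression of the ratio between the elastic modulus in each axis" — admits a simple reading sketched below. The base-10 choice and the absolute value are assumptions on my part; the paper's exact convention may differ.

```python
import math

def degree_of_anisotropy(E_long, E_trans):
    """Log-ratio measure of anisotropy: 0 for identical axes,
    growing as the two moduli diverge (one plausible reading of
    the definition stated in the abstract)."""
    return abs(math.log10(E_long / E_trans))

lam_iso = degree_of_anisotropy(10.0, 10.0)   # identical axes -> 0
lam_20x = degree_of_anisotropy(20.0, 1.0)    # ~20-fold modulus difference
```

A log-ratio is a natural choice here because it is symmetric under swapping the axes (a 20-fold stiffer longitudinal axis scores the same as a 20-fold stiffer transverse axis).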
Unstructured mesh adaptivity for urban flooding modelling
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes around the buildings and steep regions are placed when the flooding water reaches these regions. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.
Meshes optimized for discrete exterior calculus (DEC).
Mousley, Sarah C. [Univ. of Illinois, Urbana-Champaign, IL (United States); Deakin, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knupp, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-12-01
We study the optimization of an energy function used by the meshing community to measure and improve mesh quality. This energy is non-traditional because it is dependent on both the primal triangulation and its dual Voronoi (power) diagram. The energy is a measure of the mesh's quality for usage in Discrete Exterior Calculus (DEC), a method for numerically solving PDEs. In DEC, the PDE domain is triangulated and this mesh is used to obtain discrete approximations of the continuous operators in the PDE. The energy of a mesh gives an upper bound on the error of the discrete diagonal approximation of the Hodge star operator. In practice, one begins with an initial mesh and then makes adjustments to produce a mesh of lower energy. However, we have discovered several shortcomings in directly optimizing this energy, e.g. its non-convexity, and we show that the search for an optimized mesh may lead to mesh inversion (malformed triangles). We propose a new energy function to address some of these issues.
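The mesh-inversion failure mode noted above (malformed triangles produced during energy optimization) is typically detected with a signed-area test: a triangle whose vertices come out clockwise has negative signed area and is inverted. The following is a generic 2D sketch of that check, not the authors' DEC energy or optimizer.

```python
def signed_area(a, b, c):
    """Twice the signed area of triangle (a, b, c); a non-positive
    value means the triangle is degenerate or inverted."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def has_inverted(points, triangles):
    """True if any triangle in the mesh is inverted or degenerate."""
    return any(signed_area(points[i], points[j], points[k]) <= 0
               for i, j, k in triangles)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ok = has_inverted(pts, [(0, 1, 2)])    # counter-clockwise triangle: valid
bad = has_inverted(pts, [(0, 2, 1)])   # flipped orientation: inverted
```

An optimizer that relocates vertices to decrease a non-convex mesh energy would run a guard like this after each step and reject (or damp) moves that flip any element.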
Transrectal Mesh Erosion Requiring Bowel Resection.
Kemp, Marta Maria; Slim, Karem; Rabischong, Benoît; Bourdel, Nicolas; Canis, Michel; Botchorishvili, Revaz
To report a case of a transrectal mesh erosion as complication of laparoscopic promontofixation with mesh repair, necessitating bowel resection and subsequent surgical interventions. Sacrocolpopexy has become a standard procedure for vaginal vault prolapse [1], and the laparoscopic approach has gained popularity owing to more rapid recovery and less morbidity [2,3]. Mesh erosion is a well-known complication of surgical treatment for prolapse as reported in several negative evaluations, including a report from the US Food and Drug Administration in 2011 [4]. Mesh complications are more common after surgeries via the vaginal approach [5]; nonetheless, the incidence of vaginal mesh erosion after laparoscopic procedures is as high as 9% [6]. The incidence of transrectal mesh exposure after laparoscopic ventral rectopexy is roughly 1% [7]. The diagnosis may be delayed because of its rarity and variable presentation. In addition, polyester meshes, such as the mesh used in this case, carry a higher risk of exposure [8]. A 57-year-old woman experiencing genital prolapse, with the cervix classified as +3 according to the Pelvic Organ Prolapse Quantification system, underwent laparoscopic standard sacrocolpopexy using polyester mesh. Subtotal hysterectomy and bilateral adnexectomy were performed concomitantly. A 3-year follow-up consultation demonstrated no signs or symptoms of erosion of any type. At 7 years after the surgery, however, the patient presented with rectal discharge, diagnosed as infectious rectocolitis with the isolation of Clostridium difficile. She underwent a total of 5 repair surgeries in a period of 4 months, including transrectal resection of exposed mesh, laparoscopic ablation of mesh with digestive resection, exploratory laparoscopy with abscess drainage, and exploratory laparoscopy with ablation of residual mesh and transverse colostomy. She recovered well after the last intervention, exhibiting no signs of vaginal or rectal fistula and no recurrence
RGG: Reactor geometry (and mesh) generator
Jain, R.; Tautges, T.
2012-01-01
The reactor geometry (and mesh) generator RGG takes advantage of information about repeated structures in both assembly and core lattices to simplify the creation of geometry and mesh. It is released as open source software as a part of the MeshKit mesh generation library. The methodology operates in three stages. First, assembly geometry models of various types are generated by a tool called AssyGen. Next, the assembly model or models are meshed by using MeshKit tools or the CUBIT mesh generation toolkit, optionally based on a journal file output by AssyGen. After one or more assembly model meshes have been constructed, a tool called CoreGen uses a copy/move/merge process to arrange the model meshes into a core model. In this paper, we present the current state of tools and new features in RGG. We also discuss the parallel-enabled CoreGen, which in several cases achieves super-linear speedups since the problems fit in available RAM at higher processor counts. Several RGG applications - 1/6 VHTR model, 1/4 PWR reactor core, and a full-core model for Monju - are reported. (authors)
Parallel adaptive simulations on unstructured meshes
Shephard, M S; Jansen, K E; Sahni, O; Diachin, L A
2007-01-01
This paper discusses methods being developed by the ITAPS center to support the execution of parallel adaptive simulations on unstructured meshes. The paper first outlines the ITAPS approach to the development of interoperable mesh, geometry and field services to support the needs of SciDAC applications in these areas. The paper then demonstrates the ability of unstructured adaptive meshing methods built on such interoperable services to effectively solve important physics problems. Attention is then focused on ITAPS' developing ability to solve adaptive unstructured mesh problems on massively parallel computers.
Ragusa, Jean C.
2015-01-01
In this paper, we propose a piece-wise linear discontinuous (PWLD) finite element discretization of the diffusion equation for arbitrary polygonal meshes. It is based on the standard diffusion form and uses the symmetric interior penalty technique, which yields a symmetric positive definite linear system matrix. A preconditioned conjugate gradient algorithm is employed to solve the linear system. Piece-wise linear approximations also allow a straightforward implementation of local mesh adaptation by allowing unrefined cells to be interpreted as polygons with an increased number of vertices. Several test cases, taken from the literature on the discretization of the radiation diffusion equation, are presented: random, sinusoidal, Shestakov, and Z meshes are used. The last numerical example demonstrates the application of the PWLD discretization to adaptive mesh refinement
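The solver choice mentioned in the abstract, a preconditioned conjugate gradient method for the symmetric positive definite interior-penalty system, can be illustrated with a bare-bones sketch (Jacobi preconditioner as a stand-in; this is schematic, not the paper's implementation):

```python
def pcg(A, b, tol=1e-10, maxit=200):
    """Preconditioned CG for a dense SPD matrix A (Jacobi preconditioner)."""
    n = len(b)
    x = [0.0] * n
    def mv(M, v): return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    def dot(u, v): return sum(a * c for a, c in zip(u, v))
    r = [bi - yi for bi, yi in zip(b, mv(A, x))]
    z = [ri / A[i][i] for i, ri in enumerate(r)]   # Jacobi: divide by diagonal
    p, rz = z[:], dot(r, z)
    for _ in range(maxit):
        Ap = mv(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [ri / A[i][i] for i, ri in enumerate(r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD example: exact solution is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
x = pcg(A, [1.0, 2.0])
```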
Refinement of boards' role required.
Umbdenstock, R J
1987-01-01
The governing board's role in health care is not changing, but new competitive forces necessitate a refinement of the board's approach to fulfilling its role. In a free-standing, community, not-for-profit hospital, the board functions as though it were the "owner." Although it does not truly own the facility in the legal sense, the board does have legal, fiduciary, and financial responsibilities conferred on it by the state. In a religious-sponsored facility, the board fulfills these same obligations on behalf of the sponsoring institute, subject to the institute's reserved powers. In multi-institutional systems, the hospital board's power and authority depend on the role granted it by the system. Boards in all types of facilities are currently faced with the following challenges: Fulfilling their basic responsibilities, such as legal requirements, financial duties, and obligations for the quality of care. Encouraging management and the board itself to "think strategically" in attacking new competitive market forces while protecting the organization's traditional mission and values. Assessing recommended strategies in light of consequences if constituencies think the organization is abandoning its commitments. Boards can take several steps to match their mode of operation with the challenges of the new environment. Boards must rededicate themselves to the hospital's mission. Trustees must expand their understanding of health care trends and issues and their effect on the organization. Boards must evaluate and help strengthen management's performance, rather than acting as a "watchdog" in an adversarial position. Boards must think strategically, rather than focusing solely on operational details. Boards must evaluate the methods they use for conducting business.
Ibrahim, Ahmad M.; Wilson, Paul P.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Grove, Robert E.
2014-01-01
Highlights: •Calculate the prompt dose rate everywhere throughout the entire fusion energy facility. •Utilize FW-CADIS to accurately perform difficult neutronics calculations for fusion energy systems. •Develop three mesh adaptivity algorithms to enhance FW-CADIS efficiency in fusion-neutronics calculations. -- Abstract: Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer
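The second algorithm, refinement capped by a maximum element count, can be sketched as a greedy loop that always splits the cell with the largest error indicator until the cap is reached (a 1-D stand-in with a hypothetical indicator, not the authors' geometry-driven criterion):

```python
import heapq

def refine_capped(cells, error, max_cells):
    """Greedily split the highest-error cell (into two halves here, for
    simplicity) until the total cell count reaches `max_cells`."""
    heap = [(-error(lo, hi), lo, hi) for (lo, hi) in cells]
    heapq.heapify(heap)
    n = len(cells)
    while n + 1 <= max_cells:
        _, lo, hi = heapq.heappop(heap)   # cell with the largest indicator
        mid = 0.5 * (lo + hi)
        for a, b in ((lo, mid), (mid, hi)):
            heapq.heappush(heap, (-error(a, b), a, b))
        n += 1
    return sorted((lo, hi) for _, lo, hi in heap)

# An indicator concentrated near x = 0 drives refinement toward that end.
mesh = refine_capped([(0.0, 1.0)], lambda a, b: (b - a) / (1 + 10 * a), 8)
```

The result is a partition of [0, 1] with exactly the permitted number of cells, graded toward the high-indicator region.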
Tensile Behaviour of Welded Wire Mesh and Hexagonal Metal Mesh for Ferrocement Application
Tanawade, A. G.; Modhera, C. D.
2017-08-01
Tension tests were conducted on welded mesh and hexagonal metal mesh. Welded mesh is available in the market in different sizes. Two types were analysed, Ø 2.3 mm and Ø 2.7 mm welded mesh, having opening sizes of 31.75 mm × 31.75 mm and 25.4 mm × 25.4 mm respectively. Tensile strength tests were performed on samples of welded mesh in three orientations, 0°, 30° and 45° to the loading axis, and on hexagonal metal mesh of Ø 0.7 mm with a 19.05 mm × 19.05 mm opening. The objective of this study was to investigate the behaviour of the welded mesh and the hexagonal metal mesh. The results show that the tension load carrying capacity of the Ø 2.7 mm welded mesh in the 0° orientation is good compared with the Ø 2.3 mm mesh, and that the hexagonal metal mesh exhibits good ductility.
Zhang, Fang; Merrill, Matthew D.; Tokash, Justin C.; Saito, Tomonori; Cheng, Shaoan; Hickner, Michael A.; Logan, Bruce E.
2011-01-01
that the mesh properties of these cathodes can significantly affect performance. Cathodes made from the coarsest mesh (30-mesh) achieved the highest maximum power of 1616 ± 25 mW m-2 (normalized to cathode projected surface area; 47.1 ± 0.7 W m-3 based on liquid
Intravesical midurethral sling mesh erosion secondary to transvaginal mesh reconstructive surgery
Sukanda Bin Jaili
2015-05-01
Conclusion: Repeated vaginal reconstructive surgery may jeopardize a primary mesh or sling, and pose a high risk of mesh erosion, which may be delayed for several years. Removal of the mesh erosion and bladder repair are feasible pervaginally with good outcome.
South Korea - oil refining overview
Hayes, D.
1999-01-01
Following the economic problems of the 1990s, the petroleum refining industry of South Korea underwent much involuntary restructuring in 1999 with respect to takeovers and mergers, and these are discussed. The demand for petroleum has now largely recovered. The reasons for fluctuating prices in the 1990s, how the new structure should be cushioned against changes in the future, and the potential for South Korea to export refined petroleum are all discussed
Steel refining possibilities in LF
Dumitru, M. G.; Ioana, A.; Constantin, N.; Ciobanu, F.; Pollifroni, M.
2018-01-01
This article presents the main possibilities for steel refining in the Ladle Furnace (LF). The following are presented: steelmaking stages, steel refining through argon bottom stirring, online control of the bottom stirring, the bottom stirring diagram during LF treatment of a heat, the influence of the porous plug on argon stirring, the bottom stirring porous plug, an analysis of the placement of porous plugs on the ladle bottom surface, bottom stirring simulation with ANSYS, and bottom stirring simulation with Autodesk CFD.
Stable grid refinement and singular source discretization for seismic wave simulations
Petersson, N A; Sjogreen, B
2009-10-30
An energy conserving discretization of the elastic wave equation in second order formulation is developed for a composite grid, consisting of a set of structured rectangular component grids with hanging nodes on the grid refinement interface. Previously developed summation-by-parts properties are generalized to devise a stable second order accurate coupling of the solution across mesh refinement interfaces. The discretization of singular source terms of point force and point moment tensor type are also studied. Based on enforcing discrete moment conditions that mimic properties of the Dirac distribution and its gradient, previous single grid formulas are generalized to work in the vicinity of grid refinement interfaces. These source discretization formulas are shown to give second order accuracy in the solution, with the error being essentially independent of the distance between the source and the grid refinement boundary. Several numerical examples are given to illustrate the properties of the proposed method.
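The moment-condition idea for discretizing a point source can be illustrated in one dimension (a schematic sketch, not the paper's formulas): weights on a few nearby grid nodes are chosen so that the discrete moments of the Dirac distribution are reproduced, sum_i w_i (x_i - x0)^k = delta_k0 for k = 0..p-1, which amounts to solving a small Vandermonde system.

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return x

def delta_weights(xs, x0):
    """Node weights reproducing the first len(xs) moments of delta(x - x0)."""
    p = len(xs)
    V = [[(x - x0) ** k for x in xs] for k in range(p)]
    rhs = [1.0] + [0.0] * (p - 1)
    return solve(V, rhs)

# Source located between grid points of a uniform mesh.
nodes = [0.3, 0.4, 0.5, 0.6]
w = delta_weights(nodes, 0.437)
```

With four nodes, the zeroth through third discrete moments of the delta are matched, giving a higher-order source stencil in the same spirit as the moment conditions described above.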
Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.
Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa
2013-01-01
Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good quality geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.
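The landmark-based morphing step can be sketched as a nearest-vertex projection (a simplification: the constraint of moving each node along its normal is dropped here, and all names are illustrative):

```python
def morph(source_nodes, target_vertices):
    """Shift each exterior source node to the closest target surface vertex."""
    def nearest(p):
        return min(target_vertices,
                   key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))
    return [nearest(p) for p in source_nodes]

# Two exterior nodes of a source mesh snapped onto a target surface.
src = [(0.1, 0.0, 0.0), (0.0, 0.9, 0.0)]
tgt = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
moved = morph(src, tgt)
```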
Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa
2012-08-01
Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.
Autotuning of Adaptive Mesh Refinement PDE Solvers on Shared Memory Architectures
Nogina, Svetlana
2012-01-01
Many multithreaded, grid-based, dynamically adaptive solvers for partial differential equations permanently have to traverse subgrids (patches) of different and changing sizes. The parallel efficiency of this traversal depends on the interplay of the patch size, the architecture used, the operations triggered throughout the traversal, and the grain size, i.e. the size of the subtasks the patch is broken into. We propose an oracle mechanism delivering grain sizes on-the-fly. It takes historical runtime measurements for different patch and grain sizes as well as the traversal's operations into account, and it yields reasonable speedups. Neither magic configuration settings nor an expensive pre-tuning phase are necessary. It is an autotuning approach. © 2012 Springer-Verlag.
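The oracle described can be sketched as a lookup over historical runtime measurements (a hypothetical interface, not the authors' code): untried grain sizes are explored once, after which the grain size with the best mean runtime for a given patch size is returned.

```python
from collections import defaultdict

class GrainOracle:
    """Suggest a grain size per patch size from recorded runtimes."""
    def __init__(self, candidates):
        self.candidates = candidates
        self.history = defaultdict(list)   # (patch, grain) -> runtimes

    def suggest(self, patch):
        for g in self.candidates:          # explore untried grains first
            if not self.history[(patch, g)]:
                return g
        return min(self.candidates, key=lambda g:
                   sum(self.history[(patch, g)]) / len(self.history[(patch, g)]))

    def record(self, patch, grain, runtime):
        self.history[(patch, grain)].append(runtime)

oracle = GrainOracle(candidates=[8, 16, 32])
for g, t in [(8, 3.0), (16, 1.5), (32, 2.4)]:
    oracle.record(64, g, t)
```

After one measurement per candidate for patch size 64, the oracle exploits the best-performing grain size; for an unseen patch size it falls back to exploration.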
Three-Dimensional Adaptive Mesh Refinement Simulations of Point-Symmetric Nebulae
Rijkhorst, E.-J.; Icke, V.; Mellema, G.; Meixner, M.; Kastner, J.H.; Balick, B.; Soker, N.
2004-01-01
Previous analytical and numerical work shows that the generalized interacting stellar winds model can explain the observed bipolar shapes of planetary nebulae very well. However, many circumstellar nebulae have a multipolar or point-symmetric shape. With two-dimensional calculations, Icke showed
A dynamic mesh refinement technique for Lattice Boltzmann simulations on octree-like grids
Neumann, Philipp; Neckel, Tobias
2012-01-01
computations in two and three dimensions. An extension to dynamically changing grids and a spatially adaptive approach to fluctuating hydrodynamics, allowing for the thermalisation of the fluid in particular regions of interest, is proposed. Both dynamic
Autotuning of Adaptive Mesh Refinement PDE Solvers on Shared Memory Architectures
Nogina, Svetlana; Unterweger, Kristof; Weinzierl, Tobias
2012-01-01
runtime measurements for different patch and grain sizes as well as the traverse's operations into account, and it yields reasonable speedups. Neither magic configuration settings nor an expensive pre-tuning phase are necessary. It is an autotuning
Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement
Layton, A.T.
2004-01-01
In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)
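The basic semi-implicit splitting that the integral deferred correction procedure then corrects can be shown on a scalar model problem (a minimal stand-in, not the paper's scheme): the non-stiff term is advanced explicitly and the stiff reaction implicitly via backward Euler, which remains stable for reaction rates where a fully explicit step would blow up.

```python
def split_step(u, dt, a=1.0, k=1000.0):
    """One Lie-split step for u' = -a*u - k*u with stiff k."""
    u = u + dt * (-a * u)     # explicit step for the non-stiff part
    u = u / (1.0 + dt * k)    # implicit (backward Euler) step for the stiff part
    return u

# Ten steps with dt*k = 10: stable despite the stiffness.
u, dt = 1.0, 0.01
for _ in range(10):
    u = split_step(u, dt)
```

The splitting introduces an O(dt) error per step; in the method above, repeated deferred-correction sweeps reduce exactly this splitting error while keeping the implicit treatment of each stiff process separate.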
A dynamic mesh refinement technique for Lattice Boltzmann simulations on octree-like grids
Neumann, Philipp
2012-04-27
In this contribution, we present our new adaptive Lattice Boltzmann implementation within the Peano framework, with special focus on nanoscale particle transport problems. With the continuum hypothesis not holding anymore on these small scales, new physical effects - such as Brownian fluctuations - need to be incorporated. We explain the overall layout of the application, including memory layout and access, and shortly review the adaptive algorithm. The scheme is validated by different benchmark computations in two and three dimensions. An extension to dynamically changing grids and a spatially adaptive approach to fluctuating hydrodynamics, allowing for the thermalisation of the fluid in particular regions of interest, is proposed. Both dynamic adaptivity and adaptive fluctuating hydrodynamics are validated separately in simulations of particle transport problems. The application of this scheme to an oscillating particle in a nanopore illustrates the importance of Brownian fluctuations in such setups. © 2012 Springer-Verlag.
woptic: Optical conductivity with Wannier functions and adaptive k-mesh refinement
Assmann, E.; Wissgott, P.; Kuneš, Jan; Toschi, A.; Blaha, P.; Held, K.
2016-01-01
Vol. 202, May (2016), pp. 1-11. ISSN 0010-4655. Institutional support: RVO:68378271. Keywords: optical spectra; Wannier orbital. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 3.936, year: 2016
Ueyama, H.
2008-01-01
Hilly and mountainous areas, defined as those extending from the outer plains to the mountains, occupy approximately 70% of Japan, and the area of farmland in these regions is decreasing. The development of strategies for the revitalization of local agriculture in hilly and mountainous areas is therefore a significant problem in Japan. Systematic agriculture is efficient in hilly and mountainous areas, and distribution maps are effective planning tools for evaluating the meteorological conditions of individual farms in areas where farms are small and interspersed. Public agricultural research centers in each prefecture of Japan have developed mesh meteorological data maps with grid cell resolutions of some kilometers for local agriculture, and many studies have been made using such maps. However, critical variations exist between estimated mesh data and the actual meteorological conditions within the area of each grid cell. To address this problem, methods of estimating air temperature and solar radiation on a 50 m mesh (latitude 1.5 sec x longitude 2.25 sec) were developed. Although many studies with mesh meteorological data maps have been made, concrete examples of their utility for agricultural activity in hilly and mountainous areas have been few. This paper therefore presents some studies facilitating the use of mesh meteorological data maps in hilly and mountainous areas. Furthermore, some guidelines are proposed, with concrete examples, for using mesh meteorological data maps to revitalize agricultural activity in hilly and mountainous areas
Laparoscopic Pelvic Floor Repair Using Polypropylene Mesh
Shih-Shien Weng
2008-09-01
Conclusion: Laparoscopic pelvic floor repair using a single piece of polypropylene mesh combined with uterosacral ligament suspension appears to be a feasible procedure for the treatment of advanced vaginal vault prolapse and enterocele. Fewer mesh erosions and postoperative pain syndromes were seen in patients who had no previous pelvic floor reconstructive surgery.
Robust diamond meshes with unique wettability properties.
Yang, Yizhou; Li, Hongdong; Cheng, Shaoheng; Zou, Guangtian; Wang, Chuanxi; Lin, Quan
2014-03-18
Robust diamond meshes with excellent superhydrophobic and superoleophilic properties have been fabricated. Superhydrophobicity is observed for water with varying pH from 1 to 14 with good recyclability. Reversible superhydrophobicity and hydrophilicity can be easily controlled. The diamond meshes show highly efficient water-oil separation and water pH droplet transference.
Mesh-graft urethroplasty: a case report
田中, 敏博; 滝川, 浩; 香川, 征; 長江, 浩朗
1987-01-01
We used a meshed free-foreskin transplant in a two-stage procedure for reconstruction of the extended stricture of urethra after direct vision urethrotomy. The results were excellent. Mesh-graft urethroplasty is a useful method for patients with extended strictures of the urethra or recurrent strictures after several operations.
7th International Meshing Roundtable '98
Eldred, T.J.
1998-10-01
The goal of the 7th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries.
Postoperative pain outcomes after transvaginal mesh revision.
Danford, Jill M; Osborn, David J; Reynolds, W Stuart; Biller, Daniel H; Dmochowski, Roger R
2015-01-01
Although the current literature discusses mesh complications including pain, as well as suggesting different techniques for removing mesh, there is little literature regarding pain outcomes after surgical removal or revision. The purpose of this study is to determine if surgical removal or revision of vaginal mesh improves patient's subjective complaints of pelvic pain associated with original placement of mesh. After obtaining approval from the Vanderbilt University Medical Center Institutional Review Board, a retrospective review of female patients with pain secondary to previous mesh placement who underwent excision or revision of vaginal mesh from January 2000 to August 2012 was performed. Patient age, relevant medical history including menopause status, previous hysterectomy, smoking status, and presence of diabetes, fibromyalgia, interstitial cystitis, and chronic pelvic pain, was obtained. Patients' postoperative pain complaints were assessed. Of the 481 patients who underwent surgery for mesh revision, removal or urethrolysis, 233 patients met our inclusion criteria. One hundred and sixty-nine patients (73 %) reported that their pain improved, 19 (8 %) reported that their pain worsened, and 45 (19 %) reported that their pain remained unchanged after surgery. Prior history of chronic pelvic pain was associated with increased risk of failure of the procedure to relieve pain (OR 0.28, 95 % CI 0.12-0.64, p = 0.003). Excision or revision of vaginal mesh appears to be effective in improving patients' pain symptoms most of the time. Patients with a history of chronic pelvic pain are at an increased risk of no improvement or of worsening pain.
Converting skeletal structures to quad dominant meshes
Bærentzen, Jakob Andreas; Misztal, Marek Krzysztof; Welnicka, Katarzyna
2012-01-01
We propose the Skeleton to Quad-dominant polygonal Mesh algorithm (SQM), which converts skeletal structures to meshes composed entirely of polar and annular regions. Both types of regions have a regular structure where all faces are quads except for a single ring of triangles at the center of each...
Code meshing: Online bilingual tutoring in Higher Education
Batyi, Thelma Thokozile
2016-12-01
Students’ academic writing literacies are required to express their knowledge, as academic writing is the common mode of assessment in higher education. 28 isiXhosa-speaking first-year diploma students, who failed an academic literacies admission test evaluating the level of their academic writing literacies in the Business faculty, participated once a week over a period of eight months in a course including the practice of code meshing. In the June and November Tourism Communication tests, which also evaluated their academic writing literacies, there was a significant difference in the mean scores when compared to the admission test in the Business faculty. Their academic writing had also improved, according to their assignment marks. The researcher in this project provides evidence that code meshing as a bi/multilingual strategy could be used to improve academic writing literacies in students.
Refinement of Parallel and Reactive Programs
Back, R. J. R.
1992-01-01
We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...
No. 351-Transvaginal Mesh Procedures for Pelvic Organ Prolapse.
Larouche, Maryse; Geoffrion, Roxana; Walter, Jens-Erik
2017-11-01
This guideline reviews the evidence related to the risks and benefits of using transvaginal mesh in pelvic organ prolapse repairs in order to update recommendations initially made in 2011. Gynaecologists, residents, urologists, urogynaecologists, and other health care providers who assess, counsel, and care for women with pelvic organ prolapse. Adult women with symptomatic pelvic organ prolapse considering surgery and those who have previously undergone transvaginal mesh procedures for the treatment of pelvic organ prolapse. The discussion relates to transvaginal mesh procedures compared with other surgical options for pelvic organ prolapse (mainly about vaginal native tissue repairs and minimally about other alternatives such as biological and absorbable vaginal mesh and abdominally placed surgical mesh). The outcomes of interest are objective and subjective success rates and intraoperative and postoperative complications, such as adjacent organ injury (urinary, gastrointestinal), infection, hematoma/bleeding, vaginal mesh exposure, persistent pain, dyspareunia, de novo stress urinary incontinence, and reoperation. PubMed, Medline, the Cochrane Database, and EMBASE were searched using the key words pelvic organ prolapse/surgery*, prolapse/surgery*, surgical mesh, surgical mesh*/adverse effects, transvaginal mesh, and pelvic organ prolapse. Searches were restricted to the English or French language and human research. Articles obtained through this search strategy were included until the end of June 2016. Pertinent new studies were added up to September 2016. Grey literature was not searched. Clinical practice guidelines and guidelines of specialty societies were reviewed. Systematic reviews were included when available. Randomized controlled trials and observational studies were included when evidence for the outcome of interest or in the target population was not available from systematic reviews. New studies not yet included in systematic reviews were also included. Only
Automatic mesh generation with QMESH program
Ise, Takeharu; Tsutsui, Tsuneo
1977-05-01
Usage of the two-dimensional self-organizing mesh generation program QMESH is presented, together with descriptions and experience, as it has recently been converted and reconstructed from the NEACPL version to the FACOM. The program package consists of the QMESH code to generate quadrilateral meshes with smoothing techniques, the QPLOT code to plot the data obtained from QMESH on the graphic COM, and the RENUM code to renumber the meshes by using a bandwidth minimization procedure. The technique of mesh restructuring coupled with smoothing techniques is especially useful when one generates meshes for computer codes based on the finite element method. Several typical examples are given for easy access to the QMESH program, which is registered in the R.B-disks of JAERI for users. (auth.)
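The smoothing technique such packages rely on can be illustrated with plain Laplacian smoothing (an illustrative sketch, not QMESH's actual algorithm): each interior node is repeatedly moved to the average of its neighbours' positions.

```python
def smooth(coords, neighbours, interior, iters=10):
    """Laplacian smoothing: relax interior nodes toward neighbour centroids."""
    coords = dict(coords)
    for _ in range(iters):
        new = {}
        for n in interior:
            pts = [coords[m] for m in neighbours[n]]
            new[n] = (sum(x for x, _ in pts) / len(pts),
                      sum(y for _, y in pts) / len(pts))
        coords.update(new)
    return coords

# 3x3 grid of nodes; the centre node starts off-position and relaxes
# toward the centroid of its four neighbours.
coords = {(i, j): (float(i), float(j)) for i in range(3) for j in range(3)}
coords[(1, 1)] = (1.7, 0.2)
nbrs = {(1, 1): [(0, 1), (2, 1), (1, 0), (1, 2)]}
out = smooth(coords, nbrs, interior=[(1, 1)])
```

Boundary nodes are held fixed; only the listed interior nodes move, which is why the perturbed centre node returns to the regular grid position.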
Fog water collection effectiveness: Mesh intercomparisons
Fernandez, Daniel; Torregrosa, Alicia; Weiss-Penzias, Peter; Zhang, Bong June; Sorensen, Deckard; Cohen, Robert; McKinley, Gareth; Kleingartner, Justin; Oliphant, Andrew; Bowman, Matthew
2018-01-01
To explore fog water harvesting potential in California, we conducted long-term measurements involving three types of mesh using standard fog collectors (SFC). Volumetric fog water measurements from SFCs and wind data were collected and recorded in 15-minute intervals over three summertime fog seasons (2014–2016) at four California sites. SFCs were deployed with: standard 1.00 m2 double-layer 35% shade coefficient Raschel; stainless steel mesh coated with the MIT-14 hydrophobic formulation; and FogHa-Tin, a German manufactured, 3-dimensional spacer fabric deployed in two orientations. Analysis of 3419 volumetric samples from all sites showed strong relationships between mesh efficiency and wind speed. Raschel mesh collected 160% more fog water than FogHa-Tin at wind speeds less than 1 m s–1 and 45% less for wind speeds greater than 5 m s–1. MIT-14 coated stainless-steel mesh collected more fog water than Raschel mesh at all wind speeds: at low wind speeds it collected 3% more, and at wind speeds of 4–5 m s–1 it collected 41% more. FogHa-Tin collected 5% more fog water when the warp of the weave was oriented vertically, per manufacturer specification, than when the warp of the weave was oriented horizontally. Time series measurements of the three distinct meshes across similar wind regimes revealed inconsistent lags in fog water collection and inconsistent performance. Since such differences occurred under similar wind-speed regimes, we conclude that other factors play important roles in mesh performance, including in-situ fog event and aerosol dynamics that affect droplet-size spectra and droplet-to-mesh surface interactions.
Daniel Pérez-Grande
2016-11-01
This manuscript explores numerical errors in highly anisotropic diffusion problems. First, the paper addresses the use of regular structured meshes in numerical solutions versus meshes aligned with the preferential directions of the problem. Numerical diffusion in structured meshes is quantified by solving the classical anisotropic diffusion problem; the analysis is exemplified with the application to a numerical model of conducting fluids under magnetic confinement, where rates of transport in directions parallel and perpendicular to a magnetic field are quite different. Numerical diffusion errors in this problem promote the use of magnetic field aligned meshes (MFAM). The generation of this type of meshes presents some challenges; several meshing strategies are implemented and analyzed in order to provide insight into achieving acceptable mesh regularity. Second, gradient reconstruction methods for magnetically aligned meshes are addressed and numerical errors are compared for the structured and magnetically aligned meshes. It is concluded that using the latter provides a more correct and straightforward approach to solving problems where anisotropy is present, especially if the anisotropy level is high or difficult to quantify. The conclusions of the study may be extrapolated to the study of anisotropic flows different from conducting fluids.
Biomolecular structure refinement using the GROMOS simulation software
Schmid, Nathan; Allison, Jane R.; Dolenc, Jožica; Eichenberger, Andreas P.; Kunz, Anna-Pitschna E.; Gunsteren, Wilfred F. van
2011-01-01
For the understanding of cellular processes the molecular structure of biomolecules has to be accurately determined. Initial models can be significantly improved by structure refinement techniques. Here, we present the refinement methods and analysis techniques implemented in the GROMOS software for biomolecular simulation. The methodology and some implementation details of the computation of NMR NOE data, 3 J-couplings and residual dipolar couplings, X-ray scattering intensities from crystals and solutions and neutron scattering intensities used in GROMOS is described and refinement strategies and concepts are discussed using example applications. The GROMOS software allows structure refinement combining different types of experimental data with different types of restraining functions, while using a variety of methods to enhance conformational searching and sampling and the thermodynamically calibrated GROMOS force field for biomolecular simulation.
A moving mesh finite difference method for equilibrium radiation diffusion equations
Yang, Xiaobo, E-mail: xwindyb@126.com [Department of Mathematics, College of Science, China University of Mining and Technology, Xuzhou, Jiangsu 221116 (China); Huang, Weizhang, E-mail: whuang@ku.edu [Department of Mathematics, University of Kansas, Lawrence, KS 66045 (United States); Qiu, Jianxian, E-mail: jxqiu@xmu.edu.cn [School of Mathematical Sciences and Fujian Provincial Key Laboratory of Mathematical Modeling and High-Performance Scientific Computing, Xiamen University, Xiamen, Fujian 361005 (China)
2015-10-01
An efficient moving mesh finite difference method is developed for the numerical solution of equilibrium radiation diffusion equations in two dimensions. The method is based on the moving mesh partial differential equation approach and moves the mesh continuously in time using a system of meshing partial differential equations. The mesh adaptation is controlled through a Hessian-based monitor function and the so-called equidistribution and alignment principles. Several challenging issues in the numerical solution are addressed. In particular, the radiation diffusion coefficient depends highly nonlinearly on the energy density. This nonlinearity is treated using a predictor-corrector and lagged-diffusion strategy. Moreover, the nonnegativity of the energy density is maintained using a cutoff method, which is known in the literature to retain the accuracy and convergence order of finite difference approximations for parabolic equations. Numerical examples with multi-material, multiple-spot-concentration situations are presented. Numerical results show that the method works well for radiation diffusion equations and can produce numerical solutions of good accuracy. It is also shown that a two-level mesh movement strategy can significantly improve the efficiency of the computation.
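The lagged-diffusion idea in the abstract above (freeze the nonlinear coefficient at the previous iterate, solve a linear system, repeat, and apply a cutoff for nonnegativity) can be sketched on a fixed uniform 1D mesh. This is a minimal illustration, not the paper's full 2D moving-mesh scheme, and the D(E) form used in the test is an assumption:

```python
import numpy as np

def lagged_diffusion_step(E, dx, dt, D, n_corr=2):
    """One implicit time step of E_t = (D(E) E_x)_x on a uniform 1D mesh.
    The nonlinear coefficient D(E) is lagged: frozen at the previous
    iterate while the linear system is solved, then updated
    (predictor-corrector). Boundary values are held fixed."""
    n = len(E)
    E_new = E.copy()
    for _ in range(n_corr):
        Dh = D(E_new)                          # lagged coefficient
        Dm = 0.5 * (Dh[:-1] + Dh[1:])          # face-centred values
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            A[i, i - 1] = -dt * Dm[i - 1] / dx**2
            A[i, i + 1] = -dt * Dm[i] / dx**2
            A[i, i] = 1.0 + dt * (Dm[i - 1] + Dm[i]) / dx**2
        A[0, 0] = A[-1, -1] = 1.0              # Dirichlet boundary rows
        E_new = np.linalg.solve(A, E)
        E_new = np.maximum(E_new, 0.0)         # cutoff keeps E nonnegative
    return E_new
```

Each corrector pass re-evaluates D at the newest iterate, which is the lagging strategy the abstract refers to.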
Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A
2018-05-01
An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach makes it possible to control, and thus to reduce, the error while maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications for increasing the accuracy, and controlling the computational expense, of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.
Romanian refining industry assesses restructuring
Tanasescu, D.G.
1991-01-01
The Romanian crude oil refining industry, like all other economic sectors, faces the problems accompanying the transition from a centrally planned economy to a market economy. At present, all refineries have registered as joint-stock companies and all are coordinated and assisted by Rafirom S.A., from both a legal and a production point of view. Rafirom S.A. is a joint-stock company that holds shares in refineries and other stock companies with activities related to oil refining. Such activities include technological research, development, design, transportation, storage, and domestic and foreign marketing. This article outlines the market forces that are expected to drive the rationalization and restructuring of refining operations and defines the targets toward which the reconfigured refineries should strive.
Data refinement for true concurrency
Brijesh Dongol
2013-05-01
The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components modify and observe the system state with fine-grained atomicity. Many systems (e.g., multi-core processors, real-time controllers) also exhibit truly concurrent behaviour, where multiple events can occur simultaneously. This paper presents data refinement defined in terms of an interval-based framework, which includes high-level operators that capture non-deterministic expression evaluation. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and continuous systems. We present an interval-based encoding of forward simulation, then prove that our forward simulation rule is sound with respect to our data refinement definition. A number of rules for decomposing forward simulation proofs over both sequential and parallel composition are developed.
Bauxite Mining and Alumina Refining
Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes to the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet, and for some operations tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720
[CLINICAL EVALUATION OF THE NEW ANTISEPTIC MESHES].
Gogoladze, M; Kiladze, M; Chkhikvadze, T; Jiqia, D
2016-12-01
Improving the results of hernia treatment and prevention of complications became the goal of our research, which included two parts, experimental and clinical. Histomorphological and bacteriological research showed that the best result among the 3 control groups was obtained when the implant was covered with "Coladerm" plus chlorhexidine. Based on the experimental results, work continued in the clinic in order to test and introduce the new "Coladerm"+chlorhexidine-covered polypropylene meshes into practice. For clinical illustration, 60 patients who had hernioplasty procedures with different meshes were enrolled in the research: group I, standard meshes+"Coladerm"+chlorhexidine, 35 patients; group II, standard meshes+"Coladerm", 15 patients; group III, standard meshes, 10 patients. Assessment of the wound and echo-control was done post-surgery on the 8th, 30th and 90th days. This clinical research, based on the experimental results, once again showed the best antimicrobial features of the new antiseptic polymeric biocomposite meshes (standard meshes+"Coladerm"+chlorhexidine): timely termination of regeneration and reparation processes without any post-surgery suppurative complications. We hope that the new antiseptic polymeric biocomposite meshes presented by us will be successfully used in the surgical practice of hernia treatment, based on and supported by experimental-clinical research.
Fog water collection effectiveness: Mesh intercomparisons
Fernandez, Daniel; Torregrosa, Alicia; Weiss-Penzias, Peter; Zhang, Bong June; Sorensen, Deckard; Cohen, Robert; McKinley, Gareth; Kleingartner, Justin; Oliphant, Andrew; Bowman, Matthew
2018-01-01
To explore fog water harvesting potential in California, we conducted long-term measurements involving three types of mesh using standard fog collectors (SFCs). Volumetric fog water measurements from SFCs and wind data were collected and recorded in 15-minute intervals over three summertime fog seasons (2014-2016) at four California sites. SFCs were deployed with: standard 1.00 m² double-layer 35% shade coefficient Raschel mesh; stainless steel mesh coated with the MIT-14 hydrophobic formulation; and FogHa-Tin, a German-manufactured, 3-dimensional spacer fabric deployed in two orientations. Analysis of 3419 volumetric samples from all sites showed strong relationships between mesh efficiency and wind speed. Raschel mesh collected 160% more fog water than FogHa-Tin at wind speeds less than 1 m s⁻¹ and 45% less at wind speeds greater than 5 m s⁻¹. MIT-14 coated stainless-steel mesh collected more fog water than Raschel mesh at all wind speeds; at wind speeds of 4-5 m s⁻¹ it collected 41% more. FogHa-Tin collected 5% more fog water when the warp of the weave was oriented vertically, per manufacturer specification, than when it was oriented horizontally. Time series measurements of the three distinct meshes across similar wind regimes revealed inconsistent lags in fog water collection and inconsistent performance. Since such differences occurred under similar wind-speed regimes, we conclude that other factors play important roles in mesh performance, including in-situ fog event and aerosol dynamics that affect droplet-size spectra and droplet-to-mesh surface interactions.
Transvaginal mesh procedures for pelvic organ prolapse.
Walter, Jens-Erik
2011-02-01
To provide an update on transvaginal mesh procedures, newly available minimally invasive surgical techniques for pelvic floor repair. The discussion is limited to minimally invasive transvaginal mesh procedures. PubMed and Medline were searched for articles published in English, using the key words "pelvic organ prolapse," "transvaginal mesh," and "minimally invasive surgery." Results were restricted to systematic reviews, randomized control trials/controlled clinical trials, and observational studies. Searches were updated on a regular basis, and articles were incorporated in the guideline to May 2010. Grey (unpublished) literature was identified through searching the websites of health technology assessment and health technology assessment-related agencies, clinical practice guideline collections, clinical trial registries, and national and international medical specialty societies. The quality of evidence was rated using the criteria described in the Report of the Canadian Task Force on Preventive Health Care. Recommendations for practice were ranked according to the method described in that report (Table 1). Counselling for the surgical treatment of pelvic organ prolapse should consider all benefits, harms, and costs of the surgical procedure, with particular emphasis on the use of mesh. 1. Patients should be counselled that transvaginal mesh procedures are considered novel techniques for pelvic floor repair that demonstrate high rates of anatomical cure in uncontrolled short-term case series. (II-2B) 2. Patients should be informed of the range of success rates until stronger evidence of superiority is published. (II-2B) 3. Training specific to transvaginal mesh procedures should be undertaken before procedures are performed. (III-C) 4. Patients should undergo thorough preoperative counselling regarding (a) the potential serious adverse sequelae of transvaginal mesh repairs, including mesh exposure, pain, and dyspareunia; and (b) the limited data available
Zhang, Fang
2011-02-01
Mesh current collectors made of stainless steel (SS) can be integrated into microbial fuel cell (MFC) cathodes constructed of a reactive carbon black and Pt catalyst mixture and a poly(dimethylsiloxane) (PDMS) diffusion layer. It is shown here that the mesh properties of these cathodes can significantly affect performance. Cathodes made from the coarsest mesh (30-mesh) achieved the highest maximum power of 1616 ± 25 mW m⁻² (normalized to cathode projected surface area; 47.1 ± 0.7 W m⁻³ based on liquid volume), while the finest mesh (120-mesh) had the lowest power density (599 ± 57 mW m⁻²). Electrochemical impedance spectroscopy showed that charge transfer and diffusion resistances decreased with increasing mesh opening size. In MFC tests, the cathode performance was primarily limited by reaction kinetics, and not mass transfer. Oxygen permeability increased with mesh opening size, accounting for the decreased diffusion resistance. At higher current densities, diffusion became a limiting factor, especially for fine mesh with low oxygen transfer coefficients. These results demonstrate the critical nature of the mesh size used for constructing MFC cathodes. © 2010 Elsevier B.V. All rights reserved.
Polygonal Prism Mesh in the Viscous Layers for the Polyhedral Mesh Generator, PolyGen
Lee, Sang Yong; Park, Chan Eok; Kim, Shin Whan
2015-01-01
Polyhedral meshes have been known to have some benefits over tetrahedral meshes. Efforts have been made to set up a polyhedral mesh generation system with the open source programs SALOME and TetGen. The evaluation has shown that the polyhedral mesh generation system is promising, but it is necessary to extend the capability of the system to handle viscous layers in order to be a generalized mesh generator. A brief review of previous work on mesh generation for viscous layers is given in section 2, and several challenging issues for polygonal prism mesh generation are discussed as well. The procedure to generate a polygonal prism mesh is discussed in detail in section 3, and conclusions follow in section 4. A procedure to generate meshes in the viscous layers with PolyGen has been successfully designed, but more effort has to be exercised to find the best way of generating meshes for viscous layers. Using the extrusion direction of the STL data will be the first of the trials in the near future.
Engagement of Metal Debris into Gear Mesh
Handschuh, Robert F.; Krantz, Timothy L.
2010-01-01
A series of bench-top experiments was conducted to determine the effects of metallic debris being dragged through meshing gear teeth. A test rig that is typically used to conduct contact fatigue experiments was used for these tests. Several sizes of drill material, shim stock and pieces of gear teeth were introduced and then driven through the meshing region. The level of torque required to drive the "chip" through the gear mesh was measured. From the data gathered, chip size sufficient to jam the mechanism can be determined.
Mesh requirements for neutron transport calculations
Askew, J.R.
1967-07-01
Fine-structure calculations are reported for a cylindrical natural uranium-graphite cell using different solution methods (discrete ordinate and collision probability codes) and varying the spatial mesh. It is suggested that, of the formulations assuming the source constant within a mesh interval, the differential approach is generally to be preferred. Owing to cancellation between the approximations made in the derivation of the finite difference equations and the errors from neglecting source variation, the discrete ordinate code gave a more accurate estimate of fine structure for a given mesh, even for unusually coarse representations. (author)
Sentís, Manuel Lorenzo; Gable, Carl W.
2017-11-01
There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refinement, de-refinement, smoothing), and assignment of material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, and TOUGH2 (Pruess et al., 1999) to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed; it is presented here and will be included in a future release of LaGriT. In this paper an alternative method to generate a Voronoi mesh for TOUGH2 with LaGriT is presented, and thanks to the modular, command-based structure of LaGriT, this method is well suited to generating meshes for complex models.
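The two-point flux approximation mentioned in the abstract above reduces, for a pair of cells sharing a face, to a single transmissibility built from harmonically averaged half-transmissibilities. The sketch below is illustrative (the function name and the simple isotropic form are assumptions, not LaGriT or TOUGH2 code):

```python
def tpfa_transmissibility(area, k1, k2, d1, d2):
    """Two-point flux transmissibility between two Voronoi cells sharing
    a face. area: face area; k1, k2: cell permeabilities; d1, d2:
    distances from each cell centre to the face. The flux between the
    cells is then T * (p1 - p2)."""
    t1 = area * k1 / d1          # half-transmissibility of cell 1
    t2 = area * k2 / d2          # half-transmissibility of cell 2
    return t1 * t2 / (t1 + t2)   # harmonic combination
```

On a Voronoi mesh the segment joining cell centres is orthogonal to the shared face, which is why this two-point form is consistent there.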
Refining Nodes and Edges of State Machines
Hallerstede, Stefan; Snook, Colin
2011-01-01
State machines are hierarchical automata that are widely used to structure complex behavioural specifications. We develop two notions of refinement of state machines, node refinement and edge refinement. We compare the two notions by means of examples and argue that, by adopting simple conventions...... refinement theory and UML-B state machine refinement influences the style of node refinement. Hence we propose a method with direct proof of state machine refinement avoiding the detour via Event-B that is needed by UML-B....
Process for refining shale bitumen
Plauson, H
1920-09-19
A process is disclosed for refining shale bitumen for use as heavy mineral oil, characterized by mixtures of blown hard shale pitch and heavy mineral oil being blown with hot air at temperatures of 120 to 150°, with 1 to 3 percent sulfur and, if necessary, with 0.5 to 3 percent of an aldehyde.
Panorama 2007: Refining and Petrochemicals
Silva, C.
2007-01-01
The year 2005 saw a new improvement in refining margins that continued during the first three quarters of 2006. The restoration of margins in the last three years has allowed the refining sector to regain its profitability. In this context, the oil companies reported earnings for fiscal year 2005 that were up significantly compared to 2004, and the figures for the first half of 2006 confirm this trend. Despite this favorable business environment, investments saw only a minimal increase in 2005, and the improvement expected for 2006 should remain fairly limited. Looking to 2010-2015, it would appear that the planned investment projects with the highest probability of reaching completion will be barely adequate to cover the increase in demand. The refining sector should therefore continue to find itself under pressure. As for petrochemicals, despite a steady up-trend in the naphtha price, the restoration of margins consolidated a comeback that started in 2005. All in all, capital expenditure remained fairly low in both the refining and petrochemicals sectors, but many projects are planned for the next ten years. (author)
Obtuse triangle suppression in anisotropic meshes
Sun, Feng; Choi, Yi King; Wang, Wen Ping; Yan, Dongming; Liu, Yang; Lévy, Bruno
2011-01-01
Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with much fewer obtuse triangles than using existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.
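The centroidal Voronoi tessellation the abstract above builds on can be sketched with one standard Euclidean Lloyd iteration over a point sampling of the domain. The hexagonal Minkowski metric is the paper's contribution and is not reproduced here; names and data are illustrative:

```python
import numpy as np

def lloyd_step(sites, samples):
    """One Lloyd iteration for a centroidal Voronoi tessellation:
    assign each sample point to its nearest site, then move every site
    to the centroid of its cluster. A CVT is a fixed point of this map."""
    d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)                  # nearest-site assignment
    new_sites = sites.copy()
    for j in range(len(sites)):
        cell = samples[owner == j]
        if len(cell):
            new_sites[j] = cell.mean(axis=0)   # centroid update
    return new_sites
```

Replacing the squared Euclidean distance in `d2` with an orientation-sensitive metric is what lets the paper's variant steer triangle shapes away from obtuse configurations.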
Grid adaptation using chimera composite overlapping meshes
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
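The intergrid communication the abstract above describes interpolates donor-grid values to fringe points of the overset grid. A 2D bilinear analogue of that trilinear transfer can be sketched as follows (function and variable names are illustrative):

```python
def bilinear(f00, f10, f01, f11, s, t):
    """Bilinear interpolation inside one donor cell of the base grid,
    used to transfer the solution to a fringe point of the overset grid
    at local coordinates (s, t) in [0, 1]^2. f00..f11 are the values at
    the four cell corners."""
    return ((1 - s) * (1 - t) * f00 + s * (1 - t) * f10
            + (1 - s) * t * f01 + s * t * f11)
```

The 3D trilinear version simply adds a third local coordinate and the four remaining corners of the donor hexahedron.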
Grid adaption using Chimera composite overlapping meshes
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
Shape space exploration of constrained meshes
Yang, Yongliang
2011-12-12
We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.
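The first-order approximant the abstract above mentions, the tangent space of a constraint manifold {x : C(x) = 0}, is the null space of the constraint Jacobian at a feasible point. A minimal sketch using an SVD-based null space follows; the function name and the single linear constraint in the example are assumptions for illustration:

```python
import numpy as np

def tangent_basis(jacobian, tol=1e-10):
    """Orthonormal basis of the tangent space of {x : C(x) = 0} at a
    feasible point: the null space of the constraint Jacobian C'(x),
    read off from the trailing right singular vectors of its SVD."""
    _, s, vt = np.linalg.svd(jacobian)
    rank = int((s > tol).sum())
    return vt[rank:].T          # columns span the null space

# One linear constraint x + y + z = 0 in R^3 leaves a 2D tangent space.
J = np.array([[1.0, 1.0, 1.0]])
T = tangent_basis(J)
```

For a PQ mesh, each row of the Jacobian would encode the planarity constraint of one quad, and the (typically high-dimensional) null space parameterizes the design freedom the paper explores.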
Mesh Processing in Medical Image Analysis
The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation.
Capacity Analysis of Wireless Mesh Networks
M. I. Gumel
2012-06-01
Next-generation wireless networks experienced great development with the emergence of wireless mesh networks (WMNs), which can be regarded as a realistic solution that provides wireless broadband access. The limited available bandwidth makes capacity analysis of the network essential. While the network offers broadband wireless access to community and enterprise users, the problems that limit the network capacity must be addressed to exploit the optimum network performance. The wireless mesh network capacity analysis shows that the throughput of each mesh node degrades on the order of 1/n with an increasing number of nodes (n) in a linear topology. The degradation is found to be higher in a fully meshed network as a result of increased interference and MAC-layer contention in the network.
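The 1/n degradation in the abstract above can be illustrated with a trivially small model in which n contending nodes share a fixed channel capacity; the capacity value is an arbitrary assumption, not a figure from the paper:

```python
def per_node_throughput(capacity, n):
    """Per-node throughput in a linear wireless mesh where the shared
    medium divides a fixed capacity among n contending nodes, i.e. the
    O(1/n) degradation described in the capacity analysis."""
    return capacity / n

# Doubling the node count halves what each node gets.
rates = [per_node_throughput(54.0, n) for n in (1, 2, 4, 8)]
```

Real WMNs degrade faster than this idealization once multi-hop interference and MAC contention are included, which is the fully-meshed case the abstract notes.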
Energy-efficient wireless mesh networks
Ntlatlapa, N
2007-06-01
This paper outlines the objectives of a recently formed research group at the Meraka Institute. The authors consider the application of wireless mesh networks in rural, infrastructure-deficient parts of the African continent, where nodes operate on batteries...
LR: Compact connectivity representation for triangle meshes
Gurung, T; Luffel, M; Lindstrom, P; Rossignac, J
2011-01-28
We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.
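To put the 26.2 bits per triangle reported in the abstract above in context, a plain indexed triangle list stores three vertex references per triangle. The arithmetic below is a rough, assumption-laden comparison (the 32-bit reference width is an assumption, not from the paper):

```python
def bits_per_triangle_indexed(vertex_ref_bits=32):
    """Connectivity cost of a plain indexed triangle list: 3 vertex
    references per triangle, at an assumed reference width."""
    return 3 * vertex_ref_bits

plain = bits_per_triangle_indexed()   # 96 bits per triangle
factor = plain / 26.2                 # rough compression vs. LR's figure
```

Note this compares connectivity storage only; structures supporting constant-time adjacency queries (e.g. corner tables) cost several times more than the plain list, making LR's savings larger in practice.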
Seeking new surgical predictors of mesh exposure after transvaginal mesh repair.
Wu, Pei-Ying; Chang, Chih-Hung; Shen, Meng-Ru; Chou, Cheng-Yang; Yang, Yi-Ching; Huang, Yu-Fang
2016-10-01
The purpose of this study was to explore new preventable risk factors for mesh exposure. A retrospective review of 92 consecutive patients treated with transvaginal mesh (TVM) in the urogynecological unit of our university hospital was performed. An analysis of perioperative predictors was conducted in patients after vaginal repairs using a type 1 mesh. Mesh complications were recorded according to International Urogynecological Association (IUGA) definitions. Mesh-exposure-free durations were calculated using the Kaplan-Meier method and compared between different closure techniques using the log-rank test. Hazard ratios (HR) of predictors for mesh exposure were estimated by univariate and multivariate analyses using Cox proportional hazards regression models. The median surveillance interval was 24.1 months. Two late occurrences were found beyond 1 year post operation. No statistically significant correlation was observed between mesh exposure and concomitant hysterectomy. Exposure risks were significantly higher in patients with interrupted whole-layer closure in univariate analysis. In the multivariate analysis, hematoma (HR 5.42, 95% confidence interval (CI) 1.26-23.35, P = 0.024), Prolift mesh (HR 5.52, 95% CI 1.15-26.53, P = 0.033), and interrupted whole-layer closure (HR 7.02, 95% CI 1.62-30.53, P = 0.009) were the strongest predictors of mesh exposure. Findings indicate that the risks of mesh exposure and reoperation may be reduced by avoiding hematoma, a large amount of mesh, or interrupted whole-layer closure in TVM surgeries. If these risk factors are avoided, hysterectomy may not be a relative contraindication for TVM use. We also provide evidence regarding mesh exposure and the necessity of more than 1 year of follow-up and of preoperative counselling.
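The Kaplan-Meier method used in the study above estimates the exposure-free survival curve from follow-up times with censoring. A minimal sketch of the estimator follows; the patient data in the example are entirely fabricated for illustration and ties are not specially handled:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each event time.
    times: follow-up durations; events: 1 = mesh exposure observed,
    0 = censored. Returns (time, S(t)) pairs at observed events."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve = len(times), 1.0, []
    for i in order:
        if events[i]:                        # exposure observed
            s *= (at_risk - 1) / at_risk     # multiply survival factor
            curve.append((times[i], s))
        at_risk -= 1                         # leave the risk set either way
    return curve

# Five hypothetical patients: exposures at months 3 and 8,
# censoring (exposure-free at last visit) at 6, 12 and 24 months.
curve = kaplan_meier([3, 6, 8, 12, 24], [1, 0, 1, 0, 0])
```

The log-rank test the study cites then compares such curves between closure techniques.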
Pollution prevention in the petroleum refining industry - bibliography
Fournier, M.
1995-03-01
The Great Lakes Pollution Prevention Centre has compiled a list of references to assist the petroleum refining industry in adopting pollution prevention as an important environmental management strategy. Items included were divided into 14 categories of pollution types, such as air emissions, alternative fuels, chemical substitution, groundskeeping, leaks and spills, paints, waste management planning, and others.
MHD simulations on an unstructured mesh
Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Sugiyama, L.E.
1998-01-01
Two reasons for using an unstructured computational mesh are adaptivity and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.
Towards Blockchain-enabled Wireless Mesh Networks
Selimi, Mennan; Kabbinale, Aniruddh Rao; Ali, Anwaar; Navarro, Leandro; Sathiaseelan, Arjuna
2018-01-01
Mesh networking and blockchain are currently two of the hottest technologies in the telecommunications industry. Combining both can reformulate Internet access and make connecting to the Internet not only easy, but affordable too. Hyperledger Fabric (HLF) is a blockchain framework implementation and one of the Hyperledger projects hosted by The Linux Foundation. We evaluate HLF in a real production mesh network and in the laboratory, quantify its performance, bottlenecks and limitations of th...
Unstructured Mesh Movement and Viscous Mesh Generation for CFD-Based Design Optimization, Phase II
National Aeronautics and Space Administration — The innovations proposed are twofold: 1) a robust unstructured mesh movement method able to handle isotropic (Euler), anisotropic (viscous), mixed element (hybrid)...
MHD simulations on an unstructured mesh
Strauss, H.R.; Park, W.
1996-01-01
We describe work on a full MHD code using an unstructured mesh. MH3D++ is an extension of the PPPL MH3D resistive full MHD code. MH3D++ replaces the structured mesh and finite-difference/Fourier discretization of MH3D with an unstructured mesh and finite-element/Fourier discretization. Low-level routines which perform differential operations, solution of PDEs such as Poisson's equation, and graphics are encapsulated in C++ objects to isolate the finite element operations from the higher-level code. The high-level code is the same, whether it is run in structured or unstructured mesh versions. This allows the unstructured mesh version to be benchmarked against the structured mesh version. As a preliminary example, disruptions in DIII-D reverse shear equilibria are studied numerically with the MH3D++ code. Numerical equilibria were first produced starting with an EQDSK file containing equilibrium data of a DIII-D L-mode negative central shear discharge. Using these equilibria, the linearized equations are time advanced to get the toroidal mode number n = 1 linear growth rate and eigenmode, which is resistively unstable. The equilibrium and linear mode are used to initialize 3D nonlinear runs. An example shows poloidal slices of 3D pressure surfaces: initially, on the left, and at an intermediate time, on the right.
How to model wireless mesh networks topology
Sanni, M L; Hashim, A A; Anwar, F; Ali, S; Ahmed, G S M
2013-01-01
The specification of a network connectivity model, or topology, is the starting point of design and analysis in computer network research. A Wireless Mesh Network is an autonomic network that is dynamically self-organised and self-configured, with the mesh nodes establishing automatic connectivity with adjacent nodes in the relay network of wireless backbone routers. Research in Wireless Mesh Networks ranges from node deployment to internetworking issues with sensor, Internet and cellular networks. This research requires modelling of relationships and interactions among nodes, including technical characteristics of the links, while satisfying the architectural requirements of the physical network. However, the existing topology generators model geographic topologies which constitute different architectures, and thus may not be suitable in Wireless Mesh Network scenarios. The existing methods of topology generation are explored and analysed, and parameters for their characterisation are identified. Furthermore, an algorithm for the design of Wireless Mesh Network topology based on a square grid model is proposed in this paper. The performance of the topology generated is also evaluated. This research is particularly important for generating a close-to-real topology, ensuring the relevance of designs to the intended network and the validity of results obtained in Wireless Mesh Network research.
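The paper's square-grid model is not detailed in the abstract, but the basic idea of such a generator can be sketched as follows. Details (radio range of one grid step, bidirectional links) are assumptions for illustration, not taken from the paper.

```python
# Sketch of a square-grid topology generator for wireless mesh simulation.
# Nodes sit on an n x n lattice; links connect horizontal and vertical
# neighbours, modelling a radio range of one grid step.

def square_grid_topology(n):
    """Return (nodes, links) for an n x n wireless mesh grid."""
    nodes = [(x, y) for x in range(n) for y in range(n)]
    links = []
    for x, y in nodes:
        if x + 1 < n:
            links.append(((x, y), (x + 1, y)))   # horizontal link
        if y + 1 < n:
            links.append(((x, y), (x, y + 1)))   # vertical link
    return nodes, links

# A 4 x 4 grid has 16 nodes and 2*n*(n-1) = 24 bidirectional links
nodes, links = square_grid_topology(4)
```

A generator like this makes the link count and node degree distribution fully predictable, which is what distinguishes grid models from the geographic topology generators the paper criticises.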
[Implants for genital prolapse : Contra mesh surgery].
Hampel, C
2017-12-01
Alloplastic transvaginal meshes have become very popular in the surgery of pelvic organ prolapse (POP) as did alloplastic suburethral slings in female stress incontinence surgery, but without adequate supporting data. The simplicity of the mesh procedure facilitates its propagation with acceptance of higher revision and complication rates. Since attending physicians do more and more prolapse surgeries without practicing or teaching alternative techniques, expertise in these alternatives, which might be very useful in cases of recurrence, persistence or complications, is permanently lost. It is doubtful that proper and detailed information about alternatives, risks, and benefits of transvaginal alloplastic meshes is provided to every single prolapse patient according to the recommendations of the German POP guidelines, since the number of implanted meshes exceeds the number of properly indicated mesh candidates by far. Although there is no dissent internationally about the available mesh data, thousands of lawsuits in the USA, insolvency of companies due to claims for compensation and unambiguous warnings from foreign urological societies leave German urogynecologists still unimpressed. The existing literature in pelvic organ prolapse exclusively focusses on POP stage and improvement of that stage with surgical therapy. Instead, typical prolapse symptoms should trigger therapy and improvement of these symptoms should be the utmost treatment goal. It is strongly recommended for liability reasons to obtain specific written informed consent.
Asian oil refining. Demand growth and deregulation - an uncertain future
Sameer Nawaz.
1996-01-01
The objective of the report is to identify the most important features of the oil refining industry in Asia. Major developments in consumption patterns and changes in the regional importance of countries are discussed, highlighting potential future developments. The first chapter introduces the various refining processes and presents a simple model for the analysis of complex refineries. Chapter 2 examines the development of the Asian refining industry against a background of economic growth and analyses trends in consumption of all products in Asian countries. In Chapter 3, the key issues concerning the refining industry are examined, among them the forces driving consumption, including the importance of economic development, and electricity and transport demand. The importance of product imports and international trade is discussed, and the extent of government involvement and the effects of changing retail and market prices are analysed. Chapter 4 looks at the strategies that oil and gas companies are following in the Asian refining industry. Particular significance is attached to the vertical integration of the oil majors, Japanese and Middle Eastern oil companies. A brief overview of the importance of the petrochemical industry is presented. The countries of Asia that are involved in the refining industry are profiled in Chapter 5. The future trend in oil consumption is examined in Chapter 6. There follows a brief discussion of the plans to expand crude refining capacity in the various countries and a forecast of the state of overcapacity which will result. In the final chapter, brief profiles of some of the most important companies in the Asian refining industry are presented, discussing their major activities and future plans. (Author)
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement have suffered certain drawbacks, for example deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National Ground Water Association.
Fire performance of basalt FRP mesh reinforced HPC thin plates
Hulin, Thomas; Hodicky, Kamil; Schmidt, Jacob Wittrup
2013-01-01
An experimental program was carried out to investigate the influence of basalt FRP (BFRP) reinforcing mesh on the fire behaviour of thin high performance concrete (HPC) plates applied to sandwich elements. Samples with BFRP mesh were compared to samples with no mesh, samples with steel mesh...
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
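The iterative solver the authors use for the global error problem is the symmetric Gauss-Seidel method: a forward sweep followed by a backward sweep. A minimal sketch on a small SPD system is below; in the mesh-adaptation context, the matrix and right-hand side come from the hierarchical error estimator, which this sketch does not model.

```python
# Symmetric Gauss-Seidel: one forward sweep then one backward sweep per
# iteration. A few sweeps give a usable approximation, which is the point
# made in the paper: the global error problem need not be solved exactly.

def sgs_sweeps(A, b, x, n_sweeps):
    n = len(b)
    for _ in range(n_sweeps):
        for i in range(n):                         # forward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        for i in reversed(range(n)):               # backward sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# 1D Laplacian test system with exact solution x = [1, 1, 1]
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = sgs_sweeps(A, b, [0.0, 0.0, 0.0], 5)
```

Five sweeps already bring every component within a few percent of the exact solution here, illustrating why a handful of GS iterations suffice for directional information even though the iterate is far from machine precision.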
Parallel octree-based hexahedral mesh generation for Eulerian to Lagrangian conversion
Staten, Matthew L.; Owen, Steven James
2010-09-01
Computational simulation must often be performed on domains where materials are represented as scalar quantities or volume fractions at cell centers of an octree-based grid. Common examples include bio-medical, geotechnical or shock physics calculations where interface boundaries are represented only as discrete statistical approximations. In this work, we introduce new methods for generating Lagrangian computational meshes from Eulerian-based data. We focus specifically on shock physics problems that are relevant to ASC codes such as CTH and Alegra. New procedures for generating all-hexahedral finite element meshes from volume fraction data are introduced. A new primal-contouring approach is introduced for defining a geometric domain. New methods for refinement, node smoothing, resolving non-manifold conditions and defining geometry are also introduced as well as an extension of the algorithm to handle tetrahedral meshes. We also describe new scalable MPI-based implementations of these procedures. We describe a new software module, Sculptor, which has been developed for use as an embedded component of CTH. We also describe its interface and its use within the mesh generation code, CUBIT. Several examples are shown to illustrate the capabilities of Sculptor.
European refining: evolution or revolution?
Cuthbert, N.
1999-01-01
A recent detailed analysis of the refining business in Europe (by Purvin & Gertz) was used to highlight some key issues facing the industry. The article was written under five sub-sections: (i) economic environment (assessment of the economic prospects for Europe), (ii) energy efficiency and global warming (lists the four points of the EU car makers' voluntary agreement), (iii) fuel quality and refinery investment, (iv) refinery capacity and utilisation, and (v) industry structure and development. Diagrams show GDP per capita for East and West, European road fuel demand to 2015, European net trade, and European refinery ownership by crude capacity. It was concluded that the future of refining in Europe is 'exciting and challenging' and that there are likely to be more large joint-venture refineries. (UK)
Prolapse Recurrence after Transvaginal Mesh Removal.
Rawlings, Tanner; Lavelle, Rebecca S; Coskun, Burhan; Alhalabi, Feras; Zimmern, Philippe E
2015-11-01
We determined the rate of pelvic organ prolapse recurrence after transvaginal mesh removal. Following institutional review board approval, a longitudinally collected database of women undergoing transvaginal mesh removal for complications after transvaginal mesh placement, with at least a 1-year minimum followup, was queried for pelvic organ prolapse recurrence. Recurrent prolapse was defined as greater than stage 1 on examination or the need for reoperation at the site of transvaginal mesh removal. Outcome measures were based on POP-Q (Pelvic Organ Prolapse Quantification System) at the last visit. Patients were grouped into 3 groups, including group 1--recurrent prolapse in the same compartment as transvaginal mesh removal, 2--persistent prolapse and 3--prolapse in a compartment different than transvaginal mesh removal. Of 73 women 52 met study inclusion criteria from 2007 to 2013, including 73% who presented with multiple indications for transvaginal mesh removal. The mean interval between insertion and removal was 45 months (range 10 to 165). Overall mean followup after transvaginal mesh removal was 30 months (range 12 to 84). In group 1 (recurrent prolapse) the rate was 15% (6 of 40 patients). Four women underwent surgery for recurrent prolapse at a mean of 7 months (range 5 to 10). Two patients elected observation. The rate of persistent prolapse (group 2) was 23% (12 of 52 patients). Three women underwent prolapse reoperation at a mean of 10 months (range 8 to 12). In group 3 (de novo/different compartment prolapse) the rate was 6% (3 of 52 patients). One woman underwent surgical repair at 52 months. At a mean 2.5-year followup 62% of patients (32 of 52) did not have recurrent or persistent prolapse after transvaginal mesh removal and 85% (44 of 52) did not undergo any further procedure for prolapse. Specifically for pelvic organ prolapse in the same compartment as transvaginal mesh removal 12% of patients had recurrence, of whom 8% underwent prolapse repair
Uranium refining by solvent extraction
Kraikaew, J.
1996-01-01
The yellow cake refining process was studied at both laboratory and semi-pilot scales. The process units mainly consist of dissolution and filtration, solvent extraction, and precipitation and filtration. The effect of flow ratio (organic flow rate/aqueous flow rate) on the working efficiency of the solvent extraction process was studied. Detailed studies were carried out on the extraction, scrubbing and stripping processes. The purity of the yellow cake product obtained is as high as 90.32% U3O8.
Process for refining naphthalene, etc
Petroff, G
1922-05-13
A process is described for the refining of naphthalene, its distillates, and mineral oils by the use of dilute sulfuric acid, characterized in that the oils are oxidized with oxygen of the air and thereafter are treated with 65 to 75 percent sulfuric acid to separate the unsaturated hydrocarbons in the form of polymerized products whereby, if necessary, heating and application of usual or higher pressure can take place.
Preparation of refined oils, etc
1931-02-03
A process is disclosed for the preparation of refined sulfur-containing oils from sulfur-containing crude oils obtained by distillation of bituminous limestone, characterized by this crude oil being first subjected to a purification by distillation with steam in the known way, then treated with lime and chloride of lime and distilled preferably in the presence of zinc powder, whereby in this purification a rectification can be added for the purpose of recovering definite fractions.
Bauxite Mining and Alumina Refining
Donoghue, A. Michael; Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet and for some operations tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust,...
The Charfuel coal refining process
Meyer, L.G.
1991-01-01
The patented Charfuel coal refining process employs fluidized hydrocracking to produce char and liquid products from virtually all types of volatile-containing coals, including low rank coal and lignite. It is not gasification or liquefaction which require the addition of expensive oxygen or hydrogen or the use of extreme heat or pressure. It is not the German pyrolysis process that merely 'cooks' the coal, producing coke and tar-like liquids. Rather, the Charfuel coal refining process involves thermal hydrocracking which results in the rearrangement of hydrogen within the coal molecule to produce a slate of co-products. In the Charfuel process, pulverized coal is rapidly heated in a reducing atmosphere in the presence of internally generated process hydrogen. This hydrogen rearrangement allows refinement of various ranks of coals to produce a pipeline transportable, slurry-type, environmentally clean boiler fuel and a slate of value-added traditional fuel and chemical feedstock co-products. Using coal and oxygen as the only feedstocks, the Charfuel hydrocracking technology economically removes much of the fuel nitrogen, sulfur, and potential air toxics (such as chlorine, mercury, beryllium, etc.) from the coal, resulting in a high heating value, clean burning fuel which can increase power plant efficiency while reducing operating costs. The paper describes the process, its thermal efficiency, its use in power plants, its pipeline transport, co-products, environmental and energy benefits, and economics
A Macdonald refined topological vertex
Foda, Omar; Wu, Jian-Feng
2017-07-01
We consider the refined topological vertex of Iqbal et al (2009 J. High Energy Phys. JHEP10(2009)069), as a function of two parameters (x, y), and deform it by introducing the Macdonald parameters (q, t), as in the work of Vuletić on plane partitions (Vuletić M 2009 Trans. Am. Math. Soc. 361 2789-804), to obtain 'a Macdonald refined topological vertex'. In the limit q → t, we recover the refined topological vertex of Iqbal et al, and in the limit x → y, we obtain a qt-deformation of the original topological vertex of Aganagic et al (2005 Commun. Math. Phys. 25 425-78). Copies of the vertex can be glued to obtain qt-deformed 5D instanton partition functions that have well-defined 4D limits and, for generic values of (q, t), contain infinite towers of poles for every pole present in the limit q → t.
Anon.
1992-01-01
This paper reports that, at a time when profit margins are slim and gasoline demand is down, the U.S. petroleum-refining industry is facing one of its greatest challenges: how to meet new federal and state laws for reformulated gasoline, oxygenated fuels, low-sulfur diesel and other measures to improve the environment. The American Petroleum Institute (API) estimates that the industry will spend between $15 and $23 billion by the end of the decade to meet the U.S. Clean Air Act Amendments (CAAA) of 1990 and other legislation. ENSR Consulting and Engineering's capital-spending figure runs to between $70 and $100 billion this decade, including $24 billion to produce reformulated fuels and $10-12 billion to reduce refinery emissions. M.W. Kellogg Co. estimates that refiners may have to spend up to $30 billion this decade to meet the demand for reformulated gasoline. The estimates are wide-ranging because refiners are still studying their options and delaying final decisions as long as they can, to try to ensure they make the best and least costly decisions. Oxygenated fuels will be required next winter, but federal regulations for reformulated gasoline won't go into effect until 1995, while California's tougher reformulated-fuels law will kick in the following year.
Southeast Asian oil markets and refining
Yamaguchi, N.D. [FACTS, Inc., Honolulu, Hawaii (United States)
1999-09-01
An overview of the Southeast Asian oil markets and refining is presented concentrating on Brunei, Malaysia, the Philippines, Singapore and Thailand refiners. Key statistics of the refiners in this region are tabulated. The demand and the quality of Indonesian, Malaysian, Philippine, Singapore and Thai petroleum products are analysed. Crude distillation unit capacity trends in the Southeastern Asian refining industry are discussed along with cracking to distillation ratios, refining in these countries, and the impact of changes in demand and refining on the product trade.
Feature-Sensitive Tetrahedral Mesh Generation with Guaranteed Quality
Wang, Jun; Yu, Zeyun
2012-01-01
Tetrahedral meshes are being extensively used in finite element methods (FEM). This paper proposes an algorithm to generate feature-sensitive and high-quality tetrahedral meshes from an arbitrary surface mesh model. A top-down octree subdivision is conducted on the surface mesh and a set of tetrahedra are constructed using adaptive body-centered cubic (BCC) lattices. Special treatments are given to the tetrahedra near the surface such that the quality of the resulting tetrahedral mesh is prov...
Bessel smoothing filter for spectral-element mesh
Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.
2017-06-01
Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the
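The key idea above is that applying the Bessel smoothing filter is replaced by solving the elliptic PDE associated with its inverse. A minimal 1D finite-difference analogue of that idea is sketched below; the paper works on a spectral-element mesh with variable, anisotropic coherent lengths, none of which this sketch models.

```python
# 1D analogue of "smoothing = elliptic solve": instead of convolving with a
# smoothing kernel, solve the inverse-filter equation (I - L^2 d2/dx2) u = f
# by conjugate gradient. The operator is SPD, so CG applies directly.

def apply_operator(u, L2, h2):
    """(I - L^2 d2/dx2) u, finite differences, zero-Dirichlet boundaries."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out[i] = u[i] - L2 * (left - 2.0 * u[i] + right) / h2
    return out

def conjugate_gradient(f, L2, h2, iters=200, tol=1e-12):
    n = len(f)
    u = [0.0] * n
    r = f[:]                          # residual (u starts at zero)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = apply_operator(p, L2, h2)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        u = [ui + alpha * pi for ui, pi in zip(u, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return u

# Smoothing a spike: the solve spreads it over roughly L neighbouring points
f = [0.0] * 21
f[10] = 1.0
u = conjugate_gradient(f, L2=4.0, h2=1.0)
```

The coherent length enters only through the coefficient L^2, which is why spatially variable and oriented smoothing is easy to express in this formulation, whereas an explicit windowed convolution would need a different window at every point.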
Roth, Ted M; Reight, Ian
2012-07-01
Sacral colpopexy may be complicated by mesh exposure, and the surgical treatment of mesh exposure typically results in minor postoperative morbidity and few delayed complications. A 75-year-old woman presented 7 years after a laparoscopic sacral colpopexy, with Mersilene mesh, with an apical mesh exposure. She underwent an uncomplicated transvaginal excision and was asymptomatic until 8 months later when she presented with vaginal drainage and a sacral abscess. This was successfully treated with laparoscopic enterolysis, drainage of the abscess, and explantation of the remaining mesh. Incomplete excision of exposed colpopexy mesh can lead to ascending infection and sacral abscess. Laparoscopic drainage and mesh removal may be considered in these patients.
Biomaterials Evaluation: Conceptual Refinements and Practical Reforms.
Masaeli, Reza; Zandsalimi, Kavosh; Tayebi, Lobat
2018-01-01
Given the widespread and ever-increasing applications of biomaterials in different medical fields, their accurate assessment is of great importance. Since the safety and efficacy of biomaterials are confirmed only through the evaluation process, the way that process is conducted has direct effects on public health. Although every biomaterial undergoes rigorous premarket evaluation, the regulatory agencies receive a considerable number of complication and adverse event reports annually. The main factors that challenge the process of biomaterials evaluation are dissimilar regulations, asynchrony between biomaterials evaluation and biomaterials development, inherent biases of postmarketing data, and cost and timing issues. Several pieces of evidence indicate that current medical device regulations need to be improved so that they can be used more effectively in the evaluation of biomaterials. This article provides suggested conceptual refinements and practical reforms to increase the efficiency and effectiveness of the existing regulations. The main focus of the article is on strategies for evaluating biomaterials in the US and then in the EU.
Ridgeway, Beri; Walters, Mark D; Paraiso, Marie Fidela R; Barber, Matthew D; McAchran, Sarah E; Goldman, Howard B; Jelovsek, J Eric
2008-12-01
The purpose of this study was to determine the complications, treatments, and outcomes in patients choosing to undergo removal of mesh previously placed with a mesh procedural kit. This was a retrospective review of all patients who underwent surgical removal of transvaginal mesh for mesh-related complications during a 3-year period at Cleveland Clinic. At last follow-up, patients reported degree of pain, level of improvement, sexual activity, and continued symptoms. Nineteen patients underwent removal of mesh during the study period. Indications for removal included chronic pain (6/19), dyspareunia (6/19), recurrent pelvic organ prolapse (8/19), mesh erosion (12/19), and vesicovaginal fistula (3/19), with most patients (16/19) citing more than 1 reason. There were few complications related to the mesh removal. Most patients reported significant relief of symptoms. Mesh removal can be technically difficult but appears to be safe with few complications and high relief of symptoms, although some symptoms can persist.
Mesh networks: an optimum solution for AMR
Mimno, G.
2003-12-01
Characteristics of mesh networks and the advantage of using them in automatic meter reading equipment (AMR) are discussed. Mesh networks are defined as being similar to a fishing net made of knots and links. In mesh networks the knots represent meter sites and the links are the radio paths between the meter sites and the neighbourhood concentrator. In mesh networks any knot in the communications chain can link to any other and the optimum path is calculated by the network by hopping from meter to meter until the radio message reaches a concentrator. This mesh communications architecture is said to be vastly superior to many older types of radio-based meter reading technologies; its main advantage is that it not only significantly improves the economics of fixed network deployment, but also supports time-of-use metering, remote disconnect services and advanced features, such as real-time pricing, demand response, and other efficiency measures, providing a better return on investment and reliability.
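The "hopping from meter to meter until the radio message reaches a concentrator" behaviour described above amounts to finding a minimum-hop route over the mesh's radio links, which breadth-first search computes. This is an illustrative sketch only; real AMR mesh protocols also weigh link quality and congestion, not just hop count.

```python
# Minimum-hop routing in a mesh of meters and concentrators via BFS.

from collections import deque

def min_hop_path(links, start, concentrators):
    """links: list of (a, b) radio links; returns node path start -> concentrator."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in concentrators:          # BFS: first one reached is min-hop
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                            # no concentrator reachable

# Hypothetical mesh: meters m1..m4, concentrator c1
links = [("m1", "m2"), ("m2", "m3"), ("m3", "c1"), ("m1", "m4")]
path = min_hop_path(links, "m1", {"c1"})
```

Because any knot can relay for any other, losing one link merely reroutes the BFS through a different neighbour, which is the resilience argument the article makes for mesh AMR deployments.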
Gear selectivity of large-mesh nets and drumlines used to catch ...
Catches of sharks and bycatch in large-mesh nets and baited drumlines used by the Queensland Shark Control Program were examined to determine the efficacy of both gear types and assess fishing strategies that minimise their impacts. There were few significant differences in the size of both sharks and bycatch in the ...
Mellano, Erin M; Nakamura, Leah Y; Choi, Judy M; Kang, Diana C; Grisales, Tamara; Raz, Shlomo; Rodriguez, Larissa V
2016-01-01
Vaginal mesh complications necessitating excision are increasingly prevalent. We aim to study whether subclinical chronically infected mesh contributes to the development of delayed-onset mesh complications or recurrent urinary tract infections (UTIs). Women undergoing mesh removal from August 2013 through May 2014 were identified by surgical code for vaginal mesh removal. Only women undergoing removal of anti-incontinence mesh were included. Exclusion criteria included any women undergoing simultaneous prolapse mesh removal. We abstracted preoperative and postoperative information from the medical record and compared mesh culture results from patients with and without mesh extrusion, de novo recurrent UTIs, and delayed-onset pain. One hundred seven women with only anti-incontinence mesh removed were included in the analysis. Onset of complications after mesh placement was within the first 6 months in 70 (65%) of 107 and delayed (≥6 months) in 37 (35%) of 107. A positive culture from the explanted mesh was obtained from 82 (77%) of 107 patients, and 40 (37%) of 107 were positive with potential pathogens. There were no significant differences in culture results when comparing patients with delayed-onset versus immediate pain, extrusion with no extrusion, and de novo recurrent UTIs with no infections. In this large cohort of patients with mesh removed for a diverse array of complications, cultures of the explanted vaginal mesh demonstrate frequent low-density bacterial colonization. We found no differences in culture results from women with delayed-onset pain versus acute pain, vaginal mesh extrusions versus no extrusions, or recurrent UTIs using standard culture methods. Chronic prosthetic infections in other areas of medicine are associated with bacterial biofilms, which are resistant to typical culture techniques. Further studies using culture-independent methods are needed to investigate the potential role of chronic bacterial infections in delayed vaginal mesh
Latin American oil markets and refining
Yamaguchi, N.D.; Obadia, C.
1999-01-01
This paper provides an overview of the oil markets and refining in Argentina, Brazil, Chile, Colombia, Ecuador, Mexico, Peru and Venezuela, and examines the production of crude oil in these countries. Details are given of Latin American refiners highlighting trends in crude distillation unit capacity, cracking to distillation ratios, and refining in the different countries. Latin American oil trade is discussed, and charts are presented illustrating crude production, oil consumption, crude refining capacity, cracking to distillation ratios, and oil imports and exports
Lee, Lawrence; Saleem, Abdulaziz; Landry, Tara; Latimer, Eric; Chaudhury, Prosanto; Feldman, Liane S
2014-01-01
Parastomal hernia (PSH) is common after stoma formation. Studies have reported that mesh prophylaxis reduces PSH, but there are no cost-effectiveness data. Our objective was to determine the cost effectiveness of mesh prophylaxis vs no prophylaxis to prevent PSH in patients undergoing abdominoperineal resection with permanent colostomy for rectal cancer. Using a cohort Markov model, we modeled the costs and effectiveness of mesh prophylaxis vs no prophylaxis at the index operation in a cohort of 60-year-old patients undergoing abdominoperineal resection for rectal cancer during a time horizon of 5 years. Costs were expressed in 2012 Canadian dollars (CAD$) and effectiveness in quality-adjusted life years. Deterministic and probabilistic sensitivity analyses were performed. In patients with stage I to III rectal cancer, prophylactic mesh was dominant (less costly and more effective) compared with no mesh. In patients with stage IV disease, mesh prophylaxis was associated with higher cost (CAD$495 more) and minimally increased effectiveness (0.05 additional quality-adjusted life years), resulting in an incremental cost-effectiveness ratio of CAD$10,818 per quality-adjusted life year. On sensitivity analyses, the decision was sensitive to the probability of mesh infection and the cost of the mesh, and method of diagnosing PSH. In patients undergoing abdominoperineal resection with permanent colostomy for rectal cancer, mesh prophylaxis might be the less costly and more effective strategy compared with no mesh to prevent PSH in patients with stage I to III disease, and might be cost effective in patients with stage IV disease. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Grain refinement of aluminum and its alloys
Zaid, A.I.O.
2001-01-01
Grain refinement of aluminum and its alloys by the binary Al-Ti and ternary Al-Ti-B master alloys is reviewed and discussed. The importance of grain refining to the casting industry and the parameters affecting it are presented and discussed. These include parameters related to the cast, parameters related to the grain-refining alloy, and parameters related to the process. The different mechanisms suggested in the literature for the process of grain refining are presented and discussed, from which it is found that although the mechanism of refining by the binary Al-Ti is well established, the mechanism of grain refining by the ternary Al-Ti-B is still a controversial matter and some research work is still needed in this area. The effect of the addition of other alloying elements in the presence of the grain refiner on the grain-refining efficiency is also reviewed and discussed. It is found that some elements, e.g. V, Mo and C, improve the grain-refining efficiency, whereas others, e.g. Cr, Zr and Ta, poison the grain refinement. Based on the parameters affecting grain refinement and its mechanism, a criterion for selecting the optimum grain refiner is put forward and discussed. (author)
Neutron Powder Diffraction and Constrained Refinement
Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.
1977-01-01
The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement is...
Guo, Zhikui; Chen, Chao; Tao, Chunhui
2016-04-01
Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides on the Southwest Indian Ridge (SWIR), acquiring both gravity data and bathymetry data on the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and their gradients can be used to extract unknown density-structure information (e.g. crustal thickness) beneath the surface of the earth, but they contain the effect of all mass below the observation point. Therefore, how to obtain an accurate gravity and gravity-gradient effect for the known density structure (e.g. terrain) is a key issue. Using the bathymetry data or the ETOPO1 model (http://www.ngdc.noaa.gov/mgg/global/global.html) at full resolution to calculate the terrain effect would require too much computation time. We expect to develop an efficient method that takes less time but still yields the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, based on the work of Tsoulis (2012). According to the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge global topography data and multi-beam bathymetry data. The local coarsening, or size of the mesh, depends on a user-defined accuracy and the terrain variation (Davis et al., 2011). To depict the terrain better, triangular and rectangular surface elements are used in the fine and coarse meshes, respectively. This strategy can also be applied in spherical coordinates at regional and global scales. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), mantle Bouguer anomaly (MBA) and their vertical gradients on the SWIR. Further, we compared the result with previous results in the literature. Both synthetic model
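The distance- and variance-driven refinement described in this abstract can be sketched as a quadtree-style recursion over square terrain cells. This is a hedged illustration only: the function names, the specific splitting criterion, and the thresholds are assumptions, not the authors' method.

```python
import numpy as np

def refine(cell, obs, bathy_std, tol, min_size):
    """Recursively decide whether a square terrain cell needs splitting.

    cell = (x0, y0, size); obs = (x, y) observation point.
    A cell is split when it is close to the observation point or its
    bathymetry varies strongly, mirroring the distance/variance criteria
    in the abstract (criterion and thresholds are illustrative).
    """
    x0, y0, size = cell
    cx, cy = x0 + size / 2, y0 + size / 2
    dist = np.hypot(cx - obs[0], cy - obs[1])
    # gravity falls off with distance, so allow coarser cells far away
    if size <= min_size or bathy_std(cell) * size / max(dist, size) < tol:
        return [cell]
    half = size / 2
    children = [(x0, y0, half), (x0 + half, y0, half),
                (x0, y0 + half, half), (x0 + half, y0 + half, half)]
    return [leaf for c in children
            for leaf in refine(c, obs, bathy_std, tol, min_size)]
```

With a flat seafloor (`bathy_std` returning 0) the recursion keeps a single coarse cell; with strong bathymetry variation it refines down toward `min_size` near the observation point.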
ZONE: a finite element mesh generator
Burger, M.J.
1976-05-01
The ZONE computer program is a finite-element mesh generator which produces the nodes and element description of any two-dimensional geometry. The geometry is subdivided into a mesh of quadrilateral and triangular zones arranged sequentially in an ordered march through the geometry. The order of march can be chosen so that the minimum bandwidth is obtained. The node points are defined in terms of the x and y coordinates in a global rectangular coordinate system. The zones generated are quadrilaterals or triangles defined by four node points in a counterclockwise sequence. Node points defining the outside boundary are generated to describe pressure boundary conditions. The mesh that is generated can be used as input to any two-dimensional as well as any axisymmetrical structure program. The output from ZONE is essentially the input file to NAOS, HONDO, and other axisymmetric finite element programs. 14 figures
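The ordered march and counterclockwise zone definition described for ZONE can be sketched for the simplest case, a structured quadrilateral mesh on a rectangle. This is an illustrative toy, not ZONE itself, which handles arbitrary 2-D geometries, triangles, and bandwidth-minimizing march orders.

```python
def quad_mesh(nx, ny, lx=1.0, ly=1.0):
    """Structured mesh of nx-by-ny quadrilateral zones on a rectangle.

    Returns (nodes, zones): global (x, y) node coordinates and, for each
    zone, four node indices listed counterclockwise, as in ZONE's output
    convention described in the abstract.
    """
    nodes = [(i * lx / nx, j * ly / ny)
             for j in range(ny + 1) for i in range(nx + 1)]
    zones = []
    for j in range(ny):          # march row by row through the geometry
        for i in range(nx):
            n0 = j * (nx + 1) + i              # lower-left corner
            zones.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))
    return nodes, zones
```

For a 2-by-2 subdivision this yields 9 nodes and 4 zones, with the first zone's corners (0, 1, 4, 3) traversed counterclockwise.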
Open preperitoneal groin hernia repair with mesh
Andresen, Kristoffer; Rosenberg, Jacob
2017-01-01
Background For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. Data sources...... A systematic review was conducted and reported according to the PRISMA statement. PubMed, Cochrane library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days follow up following repair of an inguinal hernia with an open preperitoneal mesh......-analysis. Open preperitoneal techniques with placement of a mesh through an open approach seem promising compared with the standard anterior techniques. This systematic review provides an overview of these techniques together with a description of surgical methods and clinical outcomes....
Unstructured Adaptive Meshes: Bad for Your Memory?
Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob
2003-01-01
This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.
MUSIC: a mesh-unrestricted simulation code
Bonalumi, R.A.; Rouben, B.; Dastur, A.R.; Dondale, C.S.; Li, H.Y.H.
1978-01-01
A general formalism to solve the G-group neutron diffusion equation is described. The G-group flux is represented by complementing an "asymptotic" mode with (G-1) "transient" modes. A particular reduction-to-one-group technique gives a high computational efficiency. MUSIC, a 2-group code using the above formalism, is presented. MUSIC is demonstrated on a fine-mesh calculation and on 2 coarse-mesh core calculations: a heavy-water reactor (HWR) problem and the 2-D light-water reactor (LWR) IAEA benchmark. Comparison is made to finite-difference results
Mesh removal following transvaginal mesh placement: a case series of 104 operations.
Marcus-Braun, Naama; von Theobald, Peter
2010-04-01
The objective of the study was to reveal the way we treat vaginal mesh complications in a trained referral center. This is a retrospective review of all patients who underwent surgical removal of transvaginal mesh for mesh-related complications during a 5-year period. Eighty-three patients underwent 104 operations including 61 complete mesh removal, 14 partial excision, 15 section of sub-urethral sling, and five laparoscopies. Main indications were erosion, infection, granuloma, incomplete voiding, and pain. Fifty-eight removals occurred more than 2 years after the primary mesh placement. Mean operation time was 21 min, and there were two intraoperative and ten minor postoperative complications. Stress urinary incontinence (SUI) recurred in 38% and cystocele in 19% of patients. In a trained center, mesh removal was found to be a quick and safe procedure. Mesh-related complications may frequently occur more than 2 years after the primary operation. Recurrence was mostly associated with SUI and less with genital prolapse.
Niobium-base grain refiner for aluminium
Silva Pontes, P. da; Robert, M.H.; Cupini, N.L.
1980-01-01
A new chemical grain refiner for aluminium has been developed, using inoculation of a niobium-base compound. When a bath of molten aluminium is inoculated with this refiner, an intermetallic aluminium-niobium compound is formed which acts as a powerful nucleant, producing extremely fine structures comparable to those obtained by means of the traditional grain refiners based on titanium and boron. It was found that the refinement of the structure depends upon the weight percentage of the new refiner inoculated as well as the holding time of the bath after inoculation and before pouring, but mainly on the inoculation temperature. (Author) [pt
Pure transvaginal excision of mesh erosion involving the bladder.
Firoozi, Farzeen; Goldman, Howard B
2013-06-01
We present a pure transvaginal approach to the removal of eroded mesh involving the bladder secondary to placement of transvaginal mesh for management of pelvic organ prolapse (POP) using a mesh kit. Although technically challenging, we demonstrate the feasibility of a purely transvaginal approach, avoiding a potentially more morbid transabdominal approach. The video presents the surgical technique of pure transvaginal excision of mesh erosion involving the bladder after mesh placement using a prolapse kit was performed. This video shows that purely transvaginal removal of mesh erosion involving the bladder can be done safely and is feasible.
Seker, D; Oztuna, D; Kulacoglu, H; Genc, Y; Akcil, M
2013-04-01
Small mesh size has been recognized as one of the factors responsible for recurrence after Lichtenstein hernia repair, due to insufficient coverage or mesh shrinkage. The Lichtenstein Hernia Institute recommends a 7 × 15 cm mesh that can be trimmed up to 2 cm from the lateral side. We performed a systematic review to determine surgeons' mesh size preference for the Lichtenstein hernia repair and a meta-analysis to determine the effect of mesh size, mesh type, and length of follow-up on recurrence. Two medical databases, PubMed and ISI Web of Science, were systematically searched using the key word "Lichtenstein repair." All full-text papers were selected. Publications mentioning mesh size were brought forward for further analysis. A mesh surface area of 90 cm(2) was accepted as the threshold for defining the mesh as small or large. Also, a subgroup analysis of the pooled recurrence proportion according to mesh size, mesh type, and follow-up period was done. In total, 514 papers were obtained. There were no prospective or retrospective clinical studies comparing mesh size and clinical outcome. A total of 141 papers were duplicated in both databases. As a result, 373 papers were obtained. The full text was available for over 95 % of papers. Only 41 (11.2 %) papers discussed mesh size. In 29 studies, a mesh larger than 90 cm(2) was used. The most frequently preferred commercial mesh size was 7.5 × 15 cm. No papers mentioned the size of the mesh after trimming. There was no information about the relationship between mesh size and patient BMI. The pooled recurrence proportion for small meshes was 0.0019 (95 % confidence interval: 0.007-0.0036), favoring large meshes to decrease the chance of recurrence. Recurrence becomes more marked when the follow-up period is longer than 1 year (p < 0.001). Heavy meshes also decreased recurrence (p = 0.015). This systematic review demonstrates that the size of the mesh used in Lichtenstein hernia repair is rarely
Properties of meshes used in hernia repair: a comprehensive review of synthetic and biologic meshes.
Ibrahim, Ahmed M S; Vargas, Christina R; Colakoglu, Salih; Nguyen, John T; Lin, Samuel J; Lee, Bernard T
2015-02-01
Data on the mechanical properties of the adult human abdominal wall have been difficult to obtain, rendering manufacture of the ideal mesh for ventral hernia repair a challenge. An ideal mesh would need to exhibit greater biomechanical strength and elasticity than that of the abdominal wall. The aim of this study is to quantitatively compare the biomechanical properties of the most commonly used synthetic and biologic meshes in ventral hernia repair and to present a comprehensive literature review. A narrative review of the literature was performed using the PubMed database, spanning articles from 1982 to 2012 and including a review of company Web sites, to identify all available information relating to the biomechanical properties of various synthetic and biologic meshes used in ventral hernia repair. There exist differences in the mechanical properties and the chemical nature of different meshes. In general, most synthetic materials have greater stiffness and elasticity than what is required for abdominal wall reconstruction; however, each exhibits unique properties that may be beneficial for clinical use. By contrast, biologic meshes are more elastic but less stiff and with a lower tensile strength than their synthetic counterparts. The current standard of practice for the treatment of ventral hernias is the use of permanent synthetic mesh material. Recently, biologic meshes have become more frequently used. Most meshes exhibit biomechanical properties over the known abdominal wall thresholds. Augmenting strength requires increasing amounts of material, contributing to more stiffness and foreign body reaction, which is not necessarily an advantage. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Fournier, D.; Le Tellier, R.; Suteau, C.; Herbin, R.
2011-01-01
The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinates method. A set of spatially coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time- and memory-consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimate in each cell. If the error is large in a given region, the mesh has to be refined (h-refinement) or the polynomial basis order increased (p-refinement). This paper is concerned with the choice between the two types of refinement. Two ways of estimating the error are compared on different benchmarks. Analyzing the differences, an hp-refinement method is proposed and tested. (author)
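The choice between h- and p-refinement discussed in this abstract is often driven by a local smoothness estimate (e.g. the decay rate of the hierarchical basis coefficients): a smooth local solution rewards a higher polynomial order, a rough one rewards cell splitting. The sketch below is a generic illustration of that decision rule; the indicator, its name, and the thresholds are assumptions, not the paper's specific estimators.

```python
def hp_decision(cell_error, smoothness, err_tol, smooth_tol=1.0):
    """Choose a refinement action for one cell (illustrative sketch).

    cell_error: local error estimate for the cell.
    smoothness: estimated decay rate of the hierarchical polynomial
    coefficients; large values indicate a locally smooth solution.
    """
    if cell_error <= err_tol:
        return "keep"            # error already acceptable
    # smooth solution: raise the basis order; rough solution: split cell
    return "p-refine" if smoothness > smooth_tol else "h-refine"
```

A driver loop would apply this per cell each iteration until every cell returns "keep".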
Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.
2003-01-01
An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)
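The parallel-performance index defined in this abstract is a simple ratio, sketched below. The concrete units of granularity and coupling are not specified in the abstract, so this is only the formula as stated.

```python
def parallel_performance_index(granularity, coupling):
    """PPI as defined in the abstract: the ratio of granularity (local
    work per processor) to degree-of-coupling (communication among
    processors).  A large index indicates a high-granularity algorithm
    with relatively low coupling among processors.
    """
    if coupling <= 0:
        raise ValueError("degree-of-coupling must be positive")
    return granularity / coupling
```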
[Expandable metal mesh stents for treatment of tracheal stenoses and tracheomalacia].
Müller, C; Dienemann, H; Hoffmann, H; Berger, H; Storck, M; Jolk, A; Schildberg, F W
1993-01-01
The treatment of tracheo-bronchial stenosis or tracheomalacia is mainly carried out by means of resection or tracheoplastic operative strategies. Since the introduction of metal-mesh stents, definitive endoluminal therapy has to be considered under new aspects. Six patients with malignant stenosis or tracheomalacia due to compression were treated by implantation of Palmaz or Wallstents. Immediately after implantation, patients were relieved of dyspnoea and the forced inspiratory volume-1 (FIV1) normalized. All implanted stents were well tolerated, even in long-term follow-up (19 months). Bronchoscopic control showed overgrowth of the metal meshes by respiratory epithelium. The implantation of metal-mesh stents is an adequate alternative in the treatment of malignant stenosis and tracheomalacia.
Wang, Tianyang; Chu, Fulei; Han, Qinkai
2017-03-01
Identifying the differences between the spectra or envelope spectra of a faulty signal and a healthy baseline signal is an efficient planetary gearbox local fault detection strategy. However, causes other than local faults can also generate the characteristic frequency of a ring gear fault; this may further affect the detection of a local fault. To address this issue, a new filtering algorithm based on the meshing resonance phenomenon is proposed. In detail, the raw signal is first decomposed into different frequency bands and levels. Then, a new meshing index and an MRgram are constructed to determine which bands belong to the meshing resonance frequency band. Furthermore, an optimal filter band is selected from this MRgram. Finally, the ring gear fault can be detected according to the envelope spectrum of the band-pass filtering result. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
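The final step of the pipeline in this abstract, taking the envelope spectrum of a band-pass-filtered vibration signal, is a standard construction that can be sketched with an FFT-based analytic signal. This is a generic illustration only; the paper's meshing index, MRgram, and band-selection logic are not reproduced here.

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope spectrum of a (band-pass-filtered) vibration signal.

    The analytic signal is formed via the FFT (Hilbert-transform
    construction), its magnitude gives the envelope, and the envelope's
    spectrum reveals fault modulation frequencies such as the ring-gear
    characteristic frequency mentioned in the abstract.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0        # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0            # Nyquist bin for even n
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)
    env = env - env.mean()          # drop the DC component
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, np.abs(np.fft.rfft(env)) / n
```

For a 100 Hz tone amplitude-modulated at 7 Hz, the envelope spectrum peaks at 7 Hz, the modulation frequency rather than the carrier.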
Highly Symmetric and Congruently Tiled Meshes for Shells and Domes
Rasheed, Muhibur; Bajaj, Chandrajit
2016-01-01
We describe the generation of all possible shell and dome shapes that can be uniquely meshed (tiled) using a single type of mesh face (tile), and following a single meshing (tiling) rule that governs the mesh (tile) arrangement with maximal vertex, edge and face symmetries. Such tiling arrangements, or congruently tiled meshed shapes, are frequently found in chemical forms (fullerenes or Bucky balls, crystals, quasi-crystals, virus nano-shells or capsids) and synthetic shapes (cages, sports domes, modern architectural facades). Congruently tiled meshes are both aesthetic and complete, as they support maximal mesh symmetries with minimal complexity and possess simple generation rules. Here, we generate congruent tilings and meshed shape layouts that satisfy these optimality conditions. Further, the congruent meshes are uniquely mappable to an almost regular 3D polyhedron (or its dual polyhedron) which exhibits face-transitive (and edge-transitive) congruency with at most two types of vertices (each type transitive to the other). The family of all such congruently meshed polyhedra creates a new class of meshed shapes, beyond the well-studied regular, semi-regular and quasi-regular classes and their duals (Platonic, Catalan and Johnson). While our new mesh class is infinite, we prove that there exists a unique mesh parametrization, where each member of the class can be represented by two integer lattice variables, and is moreover efficiently constructible. PMID:27563368
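A classic precedent for a two-integer-lattice parametrization of congruently tiled shells is the Caspar-Klug indexing of icosahedral virus capsids, where a pair (h, k) determines the triangulation number T = h^2 + hk + k^2 and hence 20T triangular faces. The sketch below shows that analogy only; the paper's own parametrization may differ.

```python
def triangulation_number(h, k):
    """Caspar-Klug style lattice indexing: (h, k) -> T = h^2 + h*k + k^2.

    Shown as an analogy for the abstract's two-integer-lattice mesh
    parametrization; an icosahedral capsid indexed by (h, k) has 20*T
    congruent triangular tiles.
    """
    if h < 0 or k < 0 or h + k == 0:
        raise ValueError("need non-negative integers, not both zero")
    return h * h + h * k + k * k
```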
Markov Random Fields on Triangle Meshes
Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas
2010-01-01
In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to a MRF smoothness prior, while an independent edge process label...
Performance Evaluation of Coded Meshed Networks
Krigslund, Jeppe; Hansen, Jonas; Pedersen, Morten Videbæk
2013-01-01
of the former to enhance the gains of the latter. We first motivate our work through measurements in WiFi mesh networks. Later, we compare state-of-the-art approaches, e.g., COPE, RLNC, to CORE. Our measurements show the higher reliability and throughput of CORE over other schemes, especially, for asymmetric...
Solid Mesh Registration for Radiotherapy Treatment Planning
Noe, Karsten Østergaard; Sørensen, Thomas Sangild
2010-01-01
We present an algorithm for solid organ registration of pre-segmented data represented as tetrahedral meshes. Registration of the organ surface is driven by force terms based on a distance field representation of the source and reference shapes. Registration of internal morphology is achieved usi...
A node-centered local refinement algorithm for poisson's equation in complex geometries
McCorquodale, Peter; Colella, Phillip; Grote, David P.; Vay, Jean-Luc
2004-01-01
This paper presents a method for solving Poisson's equation with Dirichlet boundary conditions on an irregular bounded three-dimensional region. The method uses a nodal-point discretization and adaptive mesh refinement (AMR) on Cartesian grids, and the AMR multigrid solver of Almgren. The discrete Laplacian operator at internal boundaries comes from either linear or quadratic (Shortley-Weller) extrapolation, and the two methods are compared. It is shown that either way, solution error is second order in the mesh spacing. Error in the gradient of the solution is first order with linear extrapolation, but second order with Shortley-Weller. Examples are given with comparison with the exact solution. The method is also applied to a heavy-ion fusion accelerator problem, showing the advantage of adaptivity
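The Shortley-Weller treatment this abstract compares against linear extrapolation is, in one dimension, a second-derivative stencil on unequal spacings next to an irregular boundary. The sketch below shows that 1-D stencil; it is a textbook form, not the paper's full 3-D nodal discretization.

```python
def shortley_weller_d2(u_left, u_c, u_right, h_left, h_right):
    """Second-derivative stencil on non-uniform spacing (Shortley-Weller).

    h_left and h_right are the distances to the left and right neighbors
    (unequal next to a cut boundary).  The stencil is exact for
    quadratics, which underlies the second-order gradient accuracy
    reported in the abstract.
    """
    return 2.0 * (u_left / (h_left * (h_left + h_right))
                  - u_c / (h_left * h_right)
                  + u_right / (h_right * (h_left + h_right)))
```

Applied to u(x) = x^2 sampled at x = -0.3, 0, 0.5, it returns the exact second derivative 2 despite the unequal spacings.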
Materials refining on the Moon
Landis, Geoffrey A.
2007-05-01
Oxygen, metals, silicon, and glass are raw materials that will be required for long-term habitation and for the production of structural materials and solar arrays on the Moon. A process sequence is proposed for refining these materials from lunar regolith, consisting of separating the required materials from lunar rock with fluorine. The fluorine is brought to the Moon in the form of potassium fluoride and is liberated from the salt by electrolysis in a eutectic salt melt. Tetrafluorosilane produced by this process is reduced to silicon in a plasma reduction stage; the fluorine salts are reduced to metals by reaction with metallic potassium. Fluorine is recovered from residual MgF2 and CaF2 by reaction with K2O.
Refining shale-oil distillates
Altpeter, J
1952-03-17
A process is described for refining distillates from shale oil, brown coal, tar, and other tar products by extraction with selective solvents, such as lower alcohols, halogen-hydrins, dichlorodiethyl ether, liquid sulfur dioxide, and so forth, as well as by treating with alkali solution. It is characterized in that the distillate is first treated with completely or almost completely recovered phenol or cresolate solution; the oil is separated from the phenolate and extracted with a solvent, for example a lower alcohol (concentrated or adjusted to a given water content), furfural, a halogen-hydrin, dichlorodiethyl ether, or liquid sulfur dioxide; and the raffinate is separated from the extract layer, if necessary after distillation or washing out of the solvent, and freed by alkali solution from residual phenol or cresol.
Pushing the dinosaurs [Competition in the refining industry]
Cobb, C B [Ernst and Young/Wright Killen (United States)]
1999-03-01
The need for change in the business of oil refining is expressed. Since 1981, only three years have yielded high profit margins. The future is said to be in maximising performance from existing assets. In the past, the industry focused on the asset-based strategy of refining crude and getting it into the pipelines as early as possible but apparently the future lies in identifying customer needs and satisfying those needs as quickly as possible. In other words, selling the most product at the highest price. The strategy and tactics for achieving these goals are itemised and discussed. In short, it is essential that oil and gas companies make the transformation from asset focus to customer focus. (UK)
Mikhaylov, Rebecca; Dawson, Douglas; Kwack, Eug
2014-01-01
NASA's Earth-observing Soil Moisture Active & Passive (SMAP) mission is scheduled to launch in November 2014 into a 685 km near-polar, sun-synchronous orbit. SMAP will provide comprehensive global mapping measurements of soil moisture and freeze/thaw state in order to enhance understanding of the processes that link the water, energy, and carbon cycles. The primary objectives of SMAP are to improve worldwide weather and flood forecasting, enhance climate prediction, and refine drought and agriculture monitoring during its 3-year mission. The SMAP instrument architecture incorporates an L-band radar and an L-band radiometer which share a common feed horn and parabolic mesh reflector. The instrument rotates about the nadir axis at approximately 15 rpm, thereby providing a conically scanning wide-swath antenna beam that is capable of achieving global coverage within 3 days. In order to make the necessary precise surface emission measurements from space, temperature knowledge of the mesh reflector to within 60 deg C is required. In order to show compliance, a thermal vacuum test was conducted using a portable solar simulator to illuminate a non-flight, but flight-like, test article through the quartz window of the vacuum chamber. The molybdenum wire of the antenna mesh is too fine to accommodate thermal sensors for direct temperature measurements. Instead, the mesh temperature was inferred from resistance measurements made during the test. The test article was rotated to five separate angles between 10 deg and 90 deg via chamber breaks to simulate the maximum expected on-orbit solar loading during the mission. The resistance measurements were converted to temperature via a resistance-versus-temperature calibration plot that was constructed from data collected in a separate calibration test. A simple thermal model of two different representations of the mesh (plate and torus) was created to correlate the mesh temperature predictions to within 60 deg C. The on-orbit mesh
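The resistance-to-temperature conversion described in this abstract amounts to interpolating a measured resistance on a calibration curve. The sketch below shows that step with linear interpolation; the calibration values in the test are made up for illustration, not SMAP data.

```python
import numpy as np

def mesh_temperature(resistance, calib_r, calib_t):
    """Infer mesh temperature from a measured wire resistance using a
    resistance-versus-temperature calibration table, as in the thermal
    vacuum test described in the abstract.

    calib_r must be sorted ascending; linear interpolation is assumed
    between calibration points (illustrative choice).
    """
    return float(np.interp(resistance, calib_r, calib_t))
```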
Vertex Normals and Face Curvatures of Triangle Meshes
Sun, Xiang; Jiang, Caigui; Wallner, Johannes; Pottmann, Helmut
2016-01-01
This study contributes to the discrete differential geometry of triangle meshes, in combination with discrete line congruences associated with such meshes. In particular we discuss when a congruence defined by linear interpolation of vertex normals
Recurrence and Pain after Mesh Repair of Inguinal Hernias
Abstract. Background: Surgery for inguinal hernias has ... repair. Methods: The study was conducted on all inguinal hernia patients operated between 1st. October ... bilateral (1.6%). Only 101 .... Open Mesh Versus Laparoscopic Mesh. Repair ...
Surgical Management of Pelvic floor Prolapse in women using Mesh
RAH
polytetrafluoroethylene) . This article reviews our experience with polypropylene mesh in pelvic floor repair at the. Southern General Hospital Glasgow. The objective was to determine the safety and effectiveness of the prolene mesh in the repair ...
VARIABLE MESH STIFFNESS OF SPUR GEAR TEETH USING ...
gear engagement. A gear mesh kinematic simulation ... model is appropriate for VMS of a spur gear tooth. The assumptions for ... This process has been continued until one complete tooth meshing cycle is ..... Element Method. Using MATLAB,.
Implicit Geometry Meshing for the simulation of Rotary Friction Welding
Schmicker, D.; Persson, P.-O.; Strackeljan, J.
2014-08-01
The simulation of Rotary Friction Welding (RFW) is a challenging task, since it states a coupled problem of phenomena like large plastic deformations, heat flux, contact and friction. In particular the mesh generation and its restoration when using a Lagrangian description of motion is of significant severity. In this regard Implicit Geometry Meshing (IGM) algorithms are promising alternatives to the more conventional explicit methods. Because of the implicit description of the geometry during remeshing, the IGM procedure turns out to be highly robust and generates spatial discretizations of high quality regardless of the complexity of the flash shape and its inclusions. A model for efficient RFW simulation is presented, which is based on a Carreau fluid law, an Augmented Lagrange approach in mapping the incompressible deformations, a penalty contact approach, a fully regularized Coulomb-/fluid friction law and a hybrid time integration strategy. The implementation of the IGM algorithm using 6-node triangular finite elements is described in detail. The techniques are demonstrated on a fairly complex friction welding problem, demonstrating the performance and the potentials of the proposed method. The techniques are general and straight-forward to implement, and offer the potential of successful adoption to a wide range of other engineering problems.
To mesh or not to mesh: a review of pelvic organ reconstructive surgery
Dällenbach, Patrick
2015-01-01
Pelvic organ prolapse (POP) is a major health issue with a lifetime risk of undergoing at least one surgical intervention estimated at close to 10%. In the 1990s, the risk of reoperation after primary standard vaginal procedure was estimated to be as high as 30% to 50%. In order to reduce the risk of relapse, gynecological surgeons started to use mesh implants in pelvic organ reconstructive surgery with the emergence of new complications. Recent studies have nevertheless shown that the risk of POP recurrence requiring reoperation is lower than previously estimated, being closer to 10% rather than 30%. The development of mesh surgery – actively promoted by the marketing industry – was tremendous during the past decade, and preceded any studies supporting its benefit for our patients. Randomized trials comparing the use of mesh to native tissue repair in POP surgery have now shown better anatomical but similar functional outcomes, and meshes are associated with more complications, in particular for transvaginal mesh implants. POP is not a life-threatening condition, but a functional problem that impairs quality of life for women. The old adage “primum non nocere” is particularly appropriate when dealing with this condition which requires no treatment when asymptomatic. It is currently admitted that a certain degree of POP is physiological with aging when situated above the landmark of the hymen. Treatment should be individualized and the use of mesh needs to be selective and appropriate. Mesh implants are probably an important tool in pelvic reconstructive surgery, but the ideal implant has yet to be found. The indications for its use still require caution and discernment. This review explores the reasons behind the introduction of mesh augmentation in POP surgery, and aims to clarify the risks, benefits, and the recognized indications for its use. PMID:25848324
McCoy, Olugbemisola; Vaughan, Taylor; Nickles, S Walker; Ashley, Matt; MacLachlan, Lara S; Ginsberg, David; Rovner, Eric
2016-08-01
We reviewed the outcomes of the autologous fascial pubovaginal sling as a salvage procedure for recurrent stress incontinence after intervention for polypropylene mesh erosion/exposure and/or bladder outlet obstruction in patients treated with prior transvaginal synthetic mesh for stress urinary incontinence. In a review of surgical databases at 2 institutions between January 2007 and June 2013 we identified 46 patients who underwent autologous fascial pubovaginal sling following removal of transvaginal synthetic mesh in simultaneous or staged fashion. This cohort of patients was evaluated for outcomes, including subjective and objective success, change in quality of life and complications between those who underwent staged vs concomitant synthetic mesh removal with autologous fascial pubovaginal sling placement. All 46 patients had received at least 1 prior mesh sling for incontinence and 8 (17%) had received prior transvaginal polypropylene mesh for pelvic organ prolapse repair. A total of 30 patients underwent concomitant mesh incision with or without partial excision and autologous sling placement while 16 underwent staged autologous sling placement. Mean followup was 16 months. Of the patients 22% required a mean of 1.8 subsequent interventions an average of 6.5 months after autologous sling placement with no difference in median quality of life at final followup. At last followup 42 of 46 patients (91%) and 35 of 46 (76%) had achieved objective and subjective success, respectively. There was no difference in subjective success between patients treated with a staged vs a concomitant approach (69% vs 80%, p = 0.48). Autologous fascial pubovaginal sling placement after synthetic mesh removal can be performed successfully in patients with stress urinary incontinence as a single or staged procedure. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Persistent pelvic pain following transvaginal mesh surgery: a cause for mesh removal.
Marcus-Braun, Naama; Bourret, Antoine; von Theobald, Peter
2012-06-01
Persistent pelvic pain after vaginal mesh surgery is an uncommon but serious complication that greatly affects women's quality of life. Our aim was to evaluate various procedures for mesh removal performed at a tertiary referral center in cases of persistent pelvic pain, and to evaluate the ensuing complications and outcomes. A retrospective study was conducted at the University Hospital of Caen, France, including all patients treated for removal or section of vaginal mesh due to pelvic pain as a primary cause, between January 2004 and September 2009. Ten patients met the inclusion criteria. Patients were diagnosed between 10 months and 3 years after their primary operation. Eight cases followed suburethral sling procedures and two followed mesh surgery for pelvic organ prolapse. Patients presented with obturator neuralgia (6), pudendal neuralgia (2), dyspareunia (1), and non-specific pain (1). The surgical treatment to release the mesh included: three cases of extra-peritoneal laparoscopy, four cases of complete vaginal mesh removal, one case of partial mesh removal and two cases of section of the suburethral sling. In all patients with obturator neuralgia, symptoms were resolved or improved, whereas in both cases of pudendal neuralgia the symptoms continued. There were no intra-operative complications. Post-operative Retzius hematoma was observed in one patient after laparoscopy. Mesh removal in a tertiary center is a safe procedure, necessary in some cases of persistent pelvic pain. Obturator neuralgia seems to be easier to treat than pudendal neuralgia. Early diagnosis is the key to success in prevention of chronic disease. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Laparoscopic removal of mesh used in pelvic floor surgery.
Khong, Su-Yen; Lam, Alan
2009-01-01
Various meshes are being used widely in clinical practice for pelvic reconstructive surgery despite the lack of evidence of their long-term safety and efficacy. Management of complications such as mesh erosion and dyspareunia can be challenging. Most mesh-related complications can probably be managed successfully via the transvaginal route; however, this may be impossible if surgical access is poor. This case report demonstrates the successful laparoscopic removal of mesh after several failed attempts via the vaginal route.
On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach
Liu, Zheng; Xue, Kaiping; Hong, Peilin
The peer-assisted streaming paradigm has been widely employed to distribute live video data on the Internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, caused by the handshaking process of advertising buffer-map messages, sending request messages, and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation; the results show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.
Santana, Jose; Marrero, Domingo; Macías, Elsa; Mena, Vicente; Suárez, Álvaro
2017-07-21
Ubiquitous sensing allows smart cities to take control of many parameters (e.g., road traffic, air or noise pollution levels, etc.). An inexpensive Wireless Mesh Network can be used as an efficient way to transport sensed data. When that mesh is autonomously powered (e.g., solar powered), it constitutes an ideal portable network system which can be deployed when needed. Nevertheless, its power consumption must be restrained to extend its operational cycle and for preserving the environment. To this end, our strategy fosters wireless interface deactivation among nodes which do not participate in any route. As we show, this contributes to a significant power saving for the mesh. Furthermore, our strategy is wireless-friendly, meaning that it gives priority to deactivation of nodes receiving (and also causing) interferences from (to) the rest of the smart city. We also show that a routing protocol can adapt to this strategy in which certain nodes deactivate their own wireless interfaces.
E Holzbecher
2016-03-01
Full Text Available In a classical paper Henry set up a conceptual model for simulating saltwater intrusion into coastal aquifers. Up to now the problem has been taken up by software developers and modellers as a benchmark for codes simulating coupled flow and transport in porous media. The Henry test case has been treated using different numerical methods based on various formulations of differential equations. We compare several of these approaches using multiphysics software. We model the problem using Finite Elements, utilizing the primitive variables and the streamfunction approach, both with and without using the Oberbeck-Boussinesq assumption. We compare directly coupled solvers with segregated solver strategies. Changing finite element orders and mesh refinement, we find that models based on the streamfunction converge 2-4 times faster than runs based on primitive variables. Concerning the solution strategy, we find an advantage of Picard iterations compared to monolithic Newton iterations.
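The solver comparison above contrasts segregated Picard iterations with monolithic Newton iterations. In its simplest scalar form, a Picard iteration is just a fixed-point sweep; a toy sketch (the function names and the test equation x = cos x are illustrative stand-ins, not the Henry benchmark itself):

```python
import math

def picard(g, x0, tol=1e-12, max_iter=200):
    """Fixed-point (Picard) iteration x_{k+1} = g(x_k), stopping when
    successive iterates differ by less than tol. Converges linearly
    when g is a contraction near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x = cos(x); |cos'(x)| < 1 near the root, so Picard converges.
root = picard(math.cos, 1.0)
print(round(root, 6))  # 0.739085
```

Newton's method would instead linearize g at each iterate, converging quadratically but requiring a Jacobian, which is the trade-off the abstract's monolithic/segregated comparison explores.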
Shah, Ketul; Nikolavsky, Dmitriy; Gilsdorf, Daniel; Flynn, Brian J
2013-12-01
We present our management of lower urinary tract (LUT) mesh perforation after mid-urethral polypropylene mesh sling using a novel combination of surgical techniques including total or near total mesh excision, urinary tract reconstruction, and concomitant pubovaginal sling with autologous rectus fascia in a single operation. We retrospectively reviewed the medical records of 189 patients undergoing transvaginal removal of polypropylene mesh from the lower urinary tract or vagina. The focus of this study is 21 patients with LUT mesh perforation after mid-urethral polypropylene mesh sling. We excluded patients with LUT mesh perforation from prolapse kits (n = 4) or sutures (n = 11), or mesh that was removed because of isolated vaginal wall exposure without concomitant LUT perforation (n = 164). Twenty-one patients underwent surgical removal of mesh through a transvaginal approach or combined transvaginal/abdominal approaches. The location of the perforation was the urethra in 14 and the bladder in 7. The mean follow-up was 22 months. There were no major intraoperative complications. All patients had complete resolution of the mesh complication and the primary symptom. Of the patients with urethral perforation, continence was achieved in 10 out of 14 (71.5 %). Of the patients with bladder perforation, continence was achieved in all 7. Total or near total removal of lower urinary tract (LUT) mesh perforation after mid-urethral polypropylene mesh sling can completely resolve LUT mesh perforation in a single operation. A concomitant pubovaginal sling can be safely performed in efforts to treat existing SUI or avoid future surgery for SUI.
21 CFR 870.3650 - Pacemaker polymeric mesh bag.
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Pacemaker polymeric mesh bag. 870.3650 Section 870...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Prosthetic Devices § 870.3650 Pacemaker polymeric mesh bag. (a) Identification. A pacemaker polymeric mesh bag is an implanted device used to hold a...
Multiphase flow of immiscible fluids on unstructured moving meshes
Misztal, Marek Krzysztof; Erleben, Kenny; Bargteil, Adam
2012-01-01
In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization op...
Refined isogeometric analysis for a preconditioned conjugate gradient solver
Garcia, Daniel
2018-02-12
Starting from a highly continuous Isogeometric Analysis (IGA) discretization, refined Isogeometric Analysis (rIGA) introduces C0 hyperplanes that act as separators for the direct LU factorization solver. As a result, the total computational cost required to solve the corresponding system of equations using a direct LU factorization solver dramatically reduces (up to a factor of 55) Garcia et al. (2017). At the same time, rIGA enriches the IGA spaces, thus improving the best approximation error. In this work, we extend the complexity analysis of rIGA to the case of iterative solvers. We build an iterative solver as follows: we first construct the Schur complements using a direct solver over small subdomains (macro-elements). We then assemble those Schur complements into a global skeleton system. Subsequently, we solve this system iteratively using Conjugate Gradients (CG) with an incomplete LU (ILU) preconditioner. For a 2D Poisson model problem with a structured mesh and a uniform polynomial degree of approximation, rIGA achieves moderate savings with respect to IGA in terms of the number of Floating Point Operations (FLOPs) and computational time (in seconds) required to solve the resulting system of linear equations. For instance, for a mesh with four million elements and polynomial degree p=3, the iterative solver is approximately 2.6 times faster (in time) when applied to the rIGA system than to the IGA one. These savings occur because the skeleton rIGA system contains fewer non-zero entries than the IGA one. The opposite situation occurs for 3D problems, and as a result, 3D rIGA discretizations provide no gains with respect to their IGA counterparts when considering iterative solvers.
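The iterative strategy above hinges on Conjugate Gradients with a preconditioner applied to the assembled skeleton system. A minimal self-contained sketch of preconditioned CG, with a Jacobi (diagonal) preconditioner standing in for the ILU factorization and a 1D Poisson matrix standing in for the rIGA skeleton system (all names and the test problem are illustrative assumptions):

```python
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=100):
    """Preconditioned Conjugate Gradients on a dense SPD matrix A
    (list of rows). M_inv_diag holds the inverse diagonal of the
    preconditioner; a Jacobi stand-in for the ILU used in the paper."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A x with x = 0
    z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(aij * pj for aij, pj in zip(row, p)) for row in A]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# 1D Poisson stiffness matrix (tridiagonal 2, -1): a tiny SPD stand-in.
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n
x = pcg(A, b, [1.0 / A[i][i] for i in range(n)])
print([round(v, 6) for v in x])  # [2.5, 4.0, 4.5, 4.0, 2.5]
```

In exact arithmetic CG terminates in at most n steps; the savings the abstract reports come from the skeleton system's sparsity, not from the CG recurrence itself.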
Prosthetic Mesh Repair for Incarcerated Inguinal Hernia
Cihad Tatar
2016-08-01
Full Text Available Background: Incarcerated inguinal hernia is a commonly encountered urgent surgical condition, and tension-free repair is a well-established method for the treatment of noncomplicated cases. However, due to the risk of prosthetic material-related infections, the use of mesh in the repair of strangulated or incarcerated hernia has often been subject to debate. Recent studies have demonstrated that biomaterials represent suitable materials for performing urgent hernia repair. Certain studies recommend mesh repair only for cases where no bowel resection is required; other studies, however, recommend mesh repair for patients requiring bowel resection as well. Aim: The aim of this study was to compare the outcomes of different surgical techniques performed for strangulated hernia, and to evaluate the effect of mesh use on postoperative complications. Study Design: Retrospective cross-sectional study. Methods: This retrospective study was performed with 151 patients who had been admitted to our hospital’s emergency department to undergo surgery for a diagnosis of incarcerated inguinal hernia. The patients were divided into two groups based on the applied surgical technique. Group 1 consisted of 112 patients treated with mesh-based repair techniques, while Group 2 consisted of 39 patients treated with tissue repair techniques. Patients in Group 1 were further divided into two sub-groups: one consisting of patients undergoing bowel resection (Group 3, and the other consisting of patients not undergoing bowel resection (Group 4. Results: In Group 1, it was observed that eight (7.14% of the patients had wound infections, while two (1.78% had hematomas, four (3.57% had seromas, and one (0.89% had relapse. In Group 2, one (2.56% of the patients had a wound infection, while three (7.69% had hematomas, one (2.56% had seroma, and none had relapses. There were no statistically significant differences between the two groups with respect to wound infection
Refined geometric transition and qq-characters
Kimura, Taro; Mori, Hironori; Sugimoto, Yuji
2018-01-01
We show the refinement of the prescription for the geometric transition in the refined topological string theory and, as its application, discuss a possibility to describe qq-characters from the string theory point of view. Though the suggested way to operate the refined geometric transition has passed through several checks, it is additionally found in this paper that the presence of the preferred direction brings a nontrivial effect. We provide the modified formula involving this point. We then apply our prescription of the refined geometric transition to proposing the stringy description of doubly quantized Seiberg-Witten curves called qq-characters in certain cases.
Automated knowledge-base refinement
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
Bruskin, Yu.A.; Gorokhov, V.V.; Kotler, L.D.; Kovalenko, N.F.; Spasskiy, Yu.B.; Titov, A.M.; Vlasenko, V.Ye.; Vytnov, V.A.
1983-01-01
In the method for refining oil through its distillation with the isolation of directly distilled gases and a benzine fraction (BS) with the use of a benzine fraction pyrolysis, in order to increase the output of the lower olefines and to reduce the energy expenditures, the distillation is conducted with the isolation of 10 to 40 percent of the benzine fraction from its potential content along with the directly distilled gases. The obtained mixture of the remaining part of the benzine fraction is absorbed at a pressure of 1.5 to 6 atmospheres with the feeding of the obtained saturated absorbent to pyrolysis and subsequent mixing of the obtained pyrolysis gas with the unabsorbed product and their joint gas division. As compared to the known method, the proposed method makes it possible to reduce the energy expenditures which is achieved through a reduction in the volume of irrigation in the tower, and to increase the output of the olefines through processing of the steam and gas mixture of the benzine and the directly distilled gases.
Partitioning of unstructured meshes for load balancing
Martin, O.C.; Otto, S.W.
1994-01-01
Many large-scale engineering and scientific calculations involve repeated updating of variables on an unstructured mesh. To do these types of computations on distributed memory parallel computers, it is necessary to partition the mesh among the processors so that the load balance is maximized and inter-processor communication time is minimized. This can be approximated by the problem of partitioning a graph so as to obtain a minimum cut, a well-studied combinatorial optimization problem. Graph partitioning algorithms are discussed that give good but not necessarily optimum solutions. These algorithms include local search methods, recursive spectral bisection, and more general-purpose methods such as simulated annealing. It is shown that a general procedure makes it possible to combine simulated annealing with Kernighan-Lin. The resulting algorithm is both very fast and extremely effective. (authors) 23 refs., 3 figs., 1 tab
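As a concrete illustration of the local-search flavor of partitioning discussed above, here is a much-simplified, one-swap-at-a-time variant in the spirit of Kernighan-Lin (the real algorithm tentatively swaps whole sequences of pairs per pass; the toy graph and function names here are illustrative):

```python
def cut_size(adj, part):
    """Number of edges crossing the partition; adj is a symmetric
    adjacency dict, part maps each node to side 0 or 1."""
    return sum(1 for u in adj for v in adj[u]
               if u < v and part[u] != part[v])

def greedy_bisect(adj, part, passes=5):
    """Repeatedly apply the single pair swap that most reduces the
    cut, keeping the two sides balanced; stop when no swap helps."""
    nodes = list(adj)
    for _ in range(passes):
        base = cut_size(adj, part)
        best = None
        for u in nodes:
            for v in nodes:
                if part[u] == 0 and part[v] == 1:
                    part[u], part[v] = 1, 0          # try the swap
                    gain = base - cut_size(adj, part)
                    part[u], part[v] = 0, 1          # undo it
                    if gain > 0 and (best is None or gain > best[0]):
                        best = (gain, u, v)
        if best is None:
            break
        _, u, v = best
        part[u], part[v] = 1, 0                      # commit best swap
    return part

# Two triangles joined by one edge; the natural bisection has cut 1.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}  # poor initial split, cut = 5
part = greedy_bisect(adj, part)
print(cut_size(adj, part))  # 1
```

Swapping pairs (rather than moving single nodes) preserves the balance constraint, which is what distinguishes partitioning from plain min-cut.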
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
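The direction dependence of averaged permeabilities noted above shows up already in the classical layered-medium limits: arithmetic averaging for flow along the layers, harmonic averaging for flow across them. A tiny illustrative sketch of those two bounds (not the Dual Mesh Method itself):

```python
def arithmetic_mean(ks):
    """Effective permeability for flow parallel to the layers."""
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    """Effective permeability for flow perpendicular to the layers."""
    return len(ks) / sum(1.0 / k for k in ks)

# Two layers with strongly contrasting permeabilities:
ks = [100.0, 1.0]
print(arithmetic_mean(ks))  # 50.5
print(harmonic_mean(ks))    # ~1.98: the tight layer dominates across-flow
```

The two means can differ by orders of magnitude for the same cell, which is why a single a priori upscaled value per coarse cell cannot capture both flow directions.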
Variational mesh segmentation via quadric surface fitting
Yan, Dongming
2012-11-01
We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
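The Lloyd iteration used above alternates between partitioning and proxy fitting. Stripped to its simplest setting, with centroid proxies in place of the paper's quadric surfaces, the same alternation is ordinary k-means (the names and the 1D example are illustrative):

```python
def lloyd(points, centers, iters=20):
    """Lloyd iteration: alternate (1) assign each point to its nearest
    proxy and (2) refit each proxy to its assigned points. Here the
    proxy is a centroid; the paper fits quadric surfaces instead."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two well-separated 1D clusters; the iteration locks onto their means.
centers = lloyd([0, 1, 2, 10, 11, 12], [0.0, 5.0])
print(centers)  # [1.0, 11.0]
```

Each step is monotone non-increasing in the energy (assignment and refitting each minimize it with one side fixed), which is why the interleaved scheme in the paper converges.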
Meshed split skin graft for extensive vitiligo
Srinivas C
2004-05-01
Full Text Available A 30-year-old female presented with generalized stable vitiligo involving large areas of the body. Since large areas were to be treated, it was decided to perform a meshed split skin graft. A phototoxic blister over the recipient site was induced by applying 8-MOP solution followed by exposure to UVA. The split skin graft was harvested from the donor area with a Padgett dermatome and meshed with an ampligreffe to increase the size of the graft by 4 times. Significant pigmentation of the depigmented skin was seen after 5 months. This procedure helps to cover large recipient areas when pigmented donor skin is limited, with minimal risk of scarring. The phototoxic blister enables easy separation of the epidermis, thus saving the time required for dermabrasion of the recipient site.
Energy-efficient wireless mesh infrastructures
Al-Hazmi, Y.; de Meer, Hermann; Hummel, Karin Anna; Meyer, Harald; Meo, Michela; Remondo Bueno, David
2011-01-01
The Internet comprises access segments with wired and wireless technologies. In the future, we can expect wireless mesh infrastructures (WMIs) to proliferate in this context. Due to the relatively low energy efficiency of wireless transmission, as compared to wired transmission, energy consumption of WMIs can represent a significant part of the energy consumption of the Internet as a whole. We explore different approaches to reduce energy consumption in WMIs, taking into accoun...
Symmetries and the coarse-mesh method
Makai, M.
1980-10-01
This report approaches the basic problem of the coarse-mesh method from a new side. Group theory is used for the determination of the space dependency of the flux. The result is a method called ANANAS after the analytic-analytic solution. This method was tested on two benchmark problems: one given by Melice and the IAEA benchmark. The ANANAS program is an experimental one. The method was intended for use in hexagonal geometry. (Auth.)
Wireless experiments on a Motorola mesh testbed.
Riblett, Loren E., Jr.; Wiseman, James M.; Witzke, Edward L.
2010-06-01
Motomesh is a Motorola product that performs mesh networking at both the client and access point levels and allows broadband mobile data connections with or between clients moving at vehicular speeds. Sandia National Laboratories has extensive experience with this product and its predecessors in infrastructure-less mobile environments. This report documents experiments which characterize certain aspects of how the Motomesh network performs when mobile units are added to a fixed network infrastructure.
FPGA Congestion-Driven Placement Refinement
Vicente de, J.
2005-07-01
The routing congestion usually limits the complete proficiency of the FPGA logic resources. A key question can be formulated regarding the benefits of estimating the congestion at the placement stage. In recent years, the idea of a detailed placement that takes congestion into account has been gaining acceptance. In this paper, we resort to the Thermodynamic Simulated Annealing (TSA) algorithm to perform a congestion-driven placement refinement on top of the common Bounding-Box pre-optimized solution. The adaptive properties of TSA allow the search to preserve the solution quality of the pre-optimized solution while improving other fine-grain objectives. Regarding the cost function, two approaches have been considered. In the first one, Expected Occupation (EO), a detailed probabilistic model to account for channel congestion is evaluated. We show that in spite of the minute detail of EO, the inherent uncertainty of this probabilistic model impedes relieving congestion beyond the sole application of the Bounding-Box cost function. In the second approach we resort to the fast Rectilinear Steiner Regions algorithm to perform not an estimation but a measurement of the global routing congestion. This second strategy allows us to successfully reduce the requested channel width for a set of benchmark circuits with respect to the widespread Versatile Place and Route (VPR) tool. (Author) 31 refs.
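The Bounding-Box cost that TSA starts from is the usual half-perimeter wirelength, and annealing moves are accepted by the Metropolis rule. A hedged sketch of those two ingredients (the data layout and names are illustrative, not VPR's actual API):

```python
import math
import random

def bbox_wirelength(nets, pos):
    """Half-perimeter bounding-box wirelength: for each net, the
    half-perimeter of the smallest rectangle enclosing its cells."""
    total = 0.0
    for net in nets:
        xs = [pos[cell][0] for cell in net]
        ys = [pos[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def accept_move(cost_old, cost_new, temperature):
    """Metropolis acceptance used by simulated annealing: always accept
    improvements, accept worsenings with probability exp(-delta/T)."""
    delta = cost_new - cost_old
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

# Two 2-pin nets over three cells placed on a grid:
nets = [[0, 1], [1, 2]]
pos = {0: (0, 0), 1: (1, 1), 2: (3, 1)}
print(bbox_wirelength(nets, pos))  # 4.0
```

A congestion-driven refinement would add a congestion term to this cost; TSA's contribution is adapting the temperature so the refinement does not destroy the pre-optimized wirelength.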
Combining 3D Volume and Mesh Models for Representing Complicated Heritage Buildings
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries, integrating the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that hybrid models reconstructed using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.
Current situation of transvaginal mesh repair for pelvic organ prolapse.
Zhu, Lan; Zhang, Lei
2014-09-01
Surgical mesh is a metallic or polymeric screen intended to be implanted to reinforce soft tissue or bone where weakness exists. Surgical mesh has been used since the 1950s to repair abdominal hernias. In the 1970s, gynecologists began using surgical mesh products indicated for hernia repair in the surgical repair of pelvic organ prolapse (POP), and in the 1990s, gynecologists began using surgical mesh designed specifically for POP. Subsequently, the U.S. Food and Drug Administration (FDA) approved the first surgical mesh product specifically for use in POP. Surgical mesh materials can be divided into several categories. Most surgical mesh devices cleared for POP procedures are composed of non-absorbable synthetic polypropylene. Mesh can be placed in the anterior vaginal wall to aid in the correction of cystocele (anterior repair), in the posterior vaginal wall to aid in the correction of rectocele (posterior repair), or attached to the top of the vagina to correct uterine prolapse or vaginal apical prolapse (apical repair). Over the past decades, surgical mesh products for transvaginal POP repair became incorporated into "kits" that included tools to aid in the delivery and insertion of the mesh. Surgical mesh kits continue to evolve, adding new insertion tools, tissue fixation anchors, surgical techniques, and absorbable and biological materials. The procedure has become widely performed, and its use has also increased in China. Recently, however, the technique has encountered problems that have shaken the field of urogynecology.
Topological patterns of mesh textures in serpentinites
Miyazawa, M.; Suzuki, A.; Shimizu, H.; Okamoto, A.; Hiraoka, Y.; Obayashi, I.; Tsuji, T.; Ito, T.
2017-12-01
Serpentinization is a hydration process that forms serpentine minerals and magnetite within the oceanic lithosphere. Microfractures crosscut these minerals during the reactions, and the resulting structures look like mesh textures. It is known that the patterns of microfractures and the evolution of the system are affected by the hydration reaction and by fluid transport in fractures and within matrices. This study aims at quantifying the topological patterns of the mesh textures and understanding possible conditions of fluid transport and reaction during serpentinization in the oceanic lithosphere. Two-dimensional simulation by the distinct element method (DEM) generates fracture patterns due to serpentinization. The microfracture patterns are evaluated by persistent homology, which measures features of connected components of a topological space and encodes multi-scale topological features in persistence diagrams. The persistence diagrams of the different mesh textures are evaluated by principal component analysis to bring out the strong patterns among the persistence diagrams. This approach helps to extract feature values of fracture patterns from high-dimensional and complex datasets.
Improved Mesh-Based Image Morphing
Mohammed Abdullah Taha
2017-11-01
Image morphing is a multi-step process that generates a sequence of transitions between two images. The idea is to obtain a sequence of intermediate images which, when assembled with the original images, represent the change from one image to the other. The process of morphing requires time and attention to detail in order to get good results. Image morphing requires at least two processes: warping and cross-dissolving. Warping is the geometric transformation of images. Cross-dissolving interpolates the color of each pixel from its value in the first image to its corresponding value in the second image over time. Image morphing techniques differ in the approach taken to the warping procedure. This work presents a survey of techniques for constructing morphed images by reviewing the different warping techniques. One of the predominant approaches to warping is mesh warping, which suffers from some problems, including ghosting. This work proposes and implements an improved mesh warping technique for constructing morphed images. The results show that the proposed approach can overcome the problems of the traditional mesh technique.
Cu mesh for flexible transparent conductive electrodes.
Kim, Won-Kyung; Lee, Seunghun; Hee Lee, Duck; Hee Park, In; Seong Bae, Jong; Woo Lee, Tae; Kim, Ji-Young; Hun Park, Ji; Chan Cho, Yong; Ryong Cho, Chae; Jeong, Se-Young
2015-06-03
Copper electrodes with a micromesh/nanomesh structure were fabricated on a polyimide substrate using UV lithography and wet etching to produce flexible transparent conducting electrodes (TCEs). Well-defined mesh electrodes were realized through the use of high-quality Cu thin films. The films were fabricated using radio-frequency (RF) sputtering with a single-crystal Cu target, a simple but innovative approach that overcame the low oxidation resistance of ordinary Cu. Hybrid Cu mesh electrodes were fabricated by adding a capping layer of either ZnO or Al-doped ZnO. The sheet resistance and the transmittance of the electrode with an Al-doped ZnO capping layer were 6.197 ohm/sq and 90.657%, respectively, and the figure of merit was 60.502 × 10⁻³/ohm, which remained relatively unchanged after thermal annealing at 200 °C and 1,000 cycles of bending. This fabrication technique enables the mass production of large-area flexible TCEs, and the stability and high performance of Cu mesh hybrid electrodes in harsh environments suggest they have strong potential for application in smart displays and solar cells.
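The figure of merit quoted above is consistent with Haacke's definition, FoM = T¹⁰/Rₛ; the abstract does not name the formula, so treating it as Haacke's FoM is an assumption that the following quick check supports:

```python
# Sanity check of the reported figure of merit, assuming Haacke's
# definition FoM = T^10 / R_s (an assumption; the abstract does not
# state which definition was used). T is transmittance as a fraction,
# R_s is sheet resistance in ohm/sq.

def haacke_fom(transmittance: float, sheet_resistance: float) -> float:
    """Haacke figure of merit, in 1/ohm."""
    return transmittance ** 10 / sheet_resistance

T = 0.90657   # reported transmittance, 90.657%
R_s = 6.197   # reported sheet resistance, ohm/sq
fom = haacke_fom(T, R_s)
print(f"FoM = {fom * 1e3:.3f} x 10^-3 / ohm")  # compare with reported 60.502
```

Under this assumption the computed value agrees closely with the reported 60.502 × 10⁻³/ohm, which supports the reading that Haacke's figure of merit was used.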
Numerical Investigation of Corrugated Wire Mesh Laminate
Jeongho Choi
2013-01-01
The aim of this work is to develop a numerical model of Corrugated Wire Mesh Laminate (CWML) capturing all its complexities, such as nonlinear material properties, nonlinear geometry and large-deformation behaviour, and frictional behaviour. Development of such a model will facilitate numerical simulation of the mechanical behaviour of the wire mesh structure under various types of loading, as well as variation of the CWML configuration parameters to tailor its mechanical properties to the intended application. Starting with a single-strand truss model consisting of four waves, with a bilinear stress-strain model to represent the plastic behaviour of stainless steel, the finite element model is gradually built up to study single-layer structures consisting of 18 strands of corrugated wire mesh, and double- and quadruple-layered laminates with alternating cross-ply orientations. The compressive behaviour of the CWML model is simulated using contact elements to model friction and is compared to the load-deflection behaviour determined experimentally in uniaxial compression tests. The numerical model of the CWML is then employed to establish the upper and lower bounds of stiffness and load capacity achievable by such structures.
Data-Parallel Mesh Connected Components Labeling and Analysis
Harrison, Cyrus; Childs, Hank; Gaither, Kelly
2011-04-10
We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
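The union-find primitive the abstract builds on can be sketched serially as follows (a minimal single-process illustration under stated assumptions; the paper's multi-stage distributed merging and spatial partitioning are not modeled here):

```python
# Serial sketch of the union-find primitive used for sub-mesh labeling.
# Cells are integers 0..n-1; an edge (a, b) means cells a and b are
# connected (e.g. share a face). The distributed multi-stage merging
# described in the paper is not shown.

def find(parent, x):
    """Find the root of x, halving paths as we go."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def label_components(n_cells, edges):
    """Return a label per cell; equal labels mean same sub-mesh."""
    parent = list(range(n_cells))
    for a, b in edges:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra  # union: merge the two sub-meshes
    return [find(parent, c) for c in range(n_cells)]

labels = label_components(6, [(0, 1), (1, 2), (3, 4)])
# cells 0,1,2 form one sub-mesh; 3,4 another; cell 5 is isolated
```

Because connectivity is transitive, cells 0 and 2 end up with the same label even though no edge joins them directly, which is exactly the property that makes the distributed case hard.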
Shuai Mo
2017-01-01
This paper studies the multiple-split load-sharing mechanism of gears in the two-stage external meshing planetary transmission system of an aeroengine. According to the eccentric error, gear tooth thickness error, pitch error, installation error, and bearing manufacturing error, we performed the meshing error analysis of equivalent angles, respectively, and also considered the floating meshing error caused by variation of the meshing backlash arising from the simultaneous floating of all gears. Finally, we obtained the comprehensive angular meshing error of the two-stage meshing line, established a refined mathematical computational model of the two-stage external three-split load-sharing coefficient in consideration of displacement compatibility, obtained the regular curves of the load-sharing coefficient and the load-sharing characteristic curve of the fully floating multiple-split, multiple-stage system, and derived the variation law of the floating track and floating quantity of the center wheel. These results provide a scientific basis for determining the load-sharing coefficient, reasonable load distribution, and control tolerances in aviation design and manufacturing.
Oral, intestinal, and skin bacteria in ventral hernia mesh implants
Odd Langbach
2016-07-01
Background: In ventral hernia surgery, mesh implants are used to reduce recurrence. Infection after mesh implantation can be a problem, and rates around 6–10% have been reported. Bacterial colonization of mesh implants in patients without clinical signs of infection has not been thoroughly investigated. Molecular techniques have proven effective in demonstrating bacterial diversity in various environments and are able to identify bacteria at a gene-specific level. Objective: The purpose of this study was to detect bacterial biofilm in mesh implants, analyze its bacterial diversity, and look for possible resemblance with bacterial biofilm from the periodontal pocket. Methods: Thirty patients referred to our hospital for recurrence after former ventral hernia mesh repair were examined for periodontitis in advance of new surgical hernia repair. Oral examination included periapical radiographs, periodontal probing, and subgingival plaque collection. A piece of mesh (1×1 cm) from the abdominal wall was harvested during the new surgical hernia repair and analyzed for bacteria by PCR and 16S rRNA gene sequencing. From patients with positive PCR mesh samples, subgingival plaque samples were analyzed with the same techniques. Results: A great variety of taxa were detected in 20 (66.7%) mesh samples, including typical oral commensals and periodontopathogens, enterics, and skin bacteria. Mesh and periodontal bacteria were further analyzed for similarity in 16S rRNA gene sequences. In 17 sequences, the level of resemblance between mesh and subgingival bacterial colonization was 98–100%, suggesting, but not proving, a transfer of oral bacteria to the mesh. Conclusion: The results show great bacterial diversity on mesh implants from the anterior abdominal wall, including oral commensals and periodontopathogens. Mesh can be reached by bacteria in several ways, including hematogenous spread from an oral site. However, other sites such as gut and skin may also
Refinement Checking on Parametric Modal Transition Systems
Benes, Nikola; Kretínsky, Jan; Larsen, Kim Guldstrand
2015-01-01
Modal transition systems (MTS) are a well-studied specification formalism for reactive systems supporting a step-wise refinement methodology. Despite its many advantages, the formalism, as well as its currently known extensions, is incapable of expressing some practically needed aspects in the refin...
Comparing Syntactic and Semantic Action Refinement
Goltz, Ursula; Gorrieri, Roberto; Rensink, Arend
The semantic definition of action refinement on labelled configuration structures is compared with the notion of syntactic substitution, which can be used as another notion of action refinement in a process algebraic setting. The comparison is done by studying a process algebra equipped with
On Syntactic and Semantic Action Refinement
Hagiya, M.; Goltz, U.; Mitchell, J.C.; Gorrieri, R.; Rensink, Arend
1994-01-01
The semantic definition of action refinement on labelled event structures is compared with the notion of syntactic substitution, which can be used as another notion of action refinement in a process algebraic setting. This is done by studying a process algebra equipped with the ACP sequential
Anomalies in the refinement of isoleucine
Berntsen, Karen R. M.; Vriend, Gert, E-mail: gerrit.vriend@radboudumc.nl [Radboud University Medical Center, Geert Grooteplein 26-28, 6525 GA Nijmegen (Netherlands)
2014-04-01
The side-chain torsion angles of isoleucines in X-ray protein structures are a function of resolution, secondary structure and refinement software. Detailing the standard torsion angles used in refinement software can improve protein structure refinement. A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ₁ and χ₂ dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ₁ and χ₂ values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved.
Refined large N duality for knots
Kameyama, Masaya; Nawata, Satoshi
We formulate large N duality of U(N) refined Chern-Simons theory with a torus knot/link in S³. By studying refined BPS states in M-theory, we provide the explicit form of low-energy effective actions of Type IIA string theory with D4-branes on the Ω-background. This form enables us to relate...
Grain refinement of zinc-aluminium alloys
Zaid, A.I.O.
2006-01-01
It is now well established that the structure of the zinc-aluminum die-casting alloys can be modified by the binary Al-Ti or the ternary Al-Ti-B master alloys. In this paper, grain refinement of zinc-aluminum alloys by rare earth materials is reviewed and discussed. The importance of grain refining of these alloys and the parameters affecting it are presented and discussed. These include parameters related to the Zn-Al alloy cast, parameters related to the grain-refining elements or alloys, and parameters related to the process. The effect of the addition of other alloying elements, e.g. Zr, either alone or in the presence of the main grain refiners Ti or Ti + B, on the grain-refining efficiency is also reviewed and discussed. Furthermore, based on the grain refinement and the parameters affecting it, a criterion for selection of the optimum grain refiner is suggested. Finally, recent research work on the effect of grain refiners on the mechanical behaviour, impact strength, wear resistance, and fatigue life of these alloys is presented and discussed. (author)
Refined Phenotyping of Modic Changes
Määttä, Juhani H.; Karppinen, Jaro; Paananen, Markus; Bow, Cora; Luk, Keith D.K.; Cheung, Kenneth M.C.; Samartzis, Dino
2016-01-01
Low back pain (LBP) is the world's most disabling condition. Modic changes (MC) are vertebral bone marrow changes adjacent to the endplates as noted on magnetic resonance imaging. The associations of specific MC types and patterns with prolonged, severe LBP and disability remain speculative. This study assessed the relationship of prolonged, severe LBP and back-related disability with the presence and morphology of lumbar MC in a large cross-sectional population-based study of Southern Chinese. We addressed the topographical and morphological dimensions of MC along with other magnetic resonance imaging phenotypes (e.g., disc degeneration and displacement) on the basis of axial T1 and sagittal T2-weighted imaging of L1-S1. Prolonged severe LBP was defined as LBP lasting ≥30 days during the past year and a visual analog scale severest pain intensity of at least 6/10. An Oswestry Disability Index score of 15% was regarded as significant disability. We also assessed subject demographics, occupation, and lifestyle factors. In total, 1142 subjects (63% females, mean age 53 years) were assessed. Of these, 282 (24.7%) had MC (7.1% type I, 17.6% type II). MC subjects were older (P = 0.003), had more frequent disc displacements (P disability. The strength of the associations increased with the number of MC. This large-scale study is the first to definitively note MC types and specific morphologies to be independently associated with prolonged severe LBP and back-related disability. This proposed refined MC phenotype may have direct implications in clinical decision-making as to the development and management of LBP. Understanding of these imaging biomarkers can lead to new preventative and personalized therapeutics related to LBP. PMID:27258491
North Dakota Refining Capacity Study
Dennis Hill; Kurt Swenson; Carl Tuura; Jim Simon; Robert Vermette; Gilberto Marcha; Steve Kelly; David Wells; Ed Palmer; Kuo Yu; Tram Nguyen; Juliam Migliavacca
2011-01-05
According to a 2008 report issued by the United States Geological Survey, North Dakota and Montana have an estimated 3.0 to 4.3 billion barrels of undiscovered, technically recoverable oil in an area known as the Bakken Formation. With the size and remoteness of the discovery, the question became: can a business case be made for increasing refining capacity in North Dakota, and, if so, what is the impact on existing players in the region? To answer the question, a study committee comprising leaders in the region's petroleum industry was brought together to define the scope of the study, hire a consulting firm and oversee the study. The study committee met frequently to provide input on the findings and modify the course of the study as needed. The study concluded that Petroleum Administration for Defense District II (PADD II) has an oversupply of gasoline. With that in mind, a niche market, naphtha, was identified. Naphtha is used as a diluent for pipelining bitumen (heavy crude) from Canada to crude markets. The study predicted there will continue to be an increase in the demand for naphtha through 2030. The study estimated the optimal configuration for the refinery at 34,000 barrels per day (BPD), producing 15,000 BPD of naphtha with a 52 percent refinery charge for jet and diesel yield. The financial modeling assumed the sponsor of a refinery would invest its own capital to pay for construction costs. Under this assumption, the internal rate of return is 9.2 percent, which is not sufficient to attract traditional investment given the risk factor of the project. Those interested in pursuing this niche market will therefore need to identify incentives to improve the rate of return.
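The internal rate of return cited above is the discount rate at which the project's net present value reaches zero. A generic bisection solver illustrates the computation; the cash flows below are invented placeholders, not the study's refinery model:

```python
# Internal rate of return: the discount rate r at which NPV(r) = 0.
# The cash flows are hypothetical (year-0 outlay, then equal annual
# returns); the study's actual cash-flow model is not public here.

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0):
    """Bisection on NPV; assumes one sign change of NPV on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid     # NPV still positive: the rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000.0] + [150.0] * 15   # invest 1000, receive 150/yr for 15 yr
print(f"IRR = {irr(flows):.1%}")
```

Bisection is used here because NPV decreases monotonically in the rate for this cash-flow shape, so the root is bracketed and convergence is guaranteed.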
Uranium refining by solvent extraction
Kraikaew, J.; Srinuttrakul, W.
2014-01-01
The solvent extraction process to produce higher-purity uranium from yellowcake was studied at laboratory scale. Yellowcake, in which the uranium purity is around 70% and the main impurity is thorium, was obtained from the monazite processing pilot plant of the Rare Earth Research and Development Center in Thailand. For the uranium re-extraction process, the extractant chosen was tributyl phosphate (TBP) in kerosene. It was found that the optimum concentration of TBP was 10% in kerosene and the optimum nitric acid concentration in the uranyl nitrate feed solution was 4 N. An increase in the concentrations of uranium and thorium in the feed solution resulted in a decrease in the distribution of both components into the extractant; however, the distribution of uranium into the extractant was found to be greater than that of thorium. The equilibrium behaviour of the extraction system, UO₂(NO₃)₂/4 N HNO₃ – 10% TBP/kerosene, was also investigated. Two extraction stages were calculated graphically for a feed solution of 100,000 ppm uranium with 90% extraction efficiency, with the flow ratio of the aqueous phase to the organic phase adjusted to 1.0. For the thorium impurity scrubbing process, 10% TBP in kerosene loaded with uranium and minor thorium from uranyl nitrate solution prepared from yellowcake was scrubbed with nitric acid of various low concentrations. The results showed that at nitric acid normality lower than 1 N, uranium distributed well to the aqueous phase. In conclusion, the optimum nitric acid concentration for the scrubbing process should be not less than 1 N, and dilute nitric acid or de-ionized water should be applied to strip uranium from the organic phase in the final refining process. (author)
Refining in the 1990's: Restructuring and resurgence
Cobb, C.B.
1994-01-01
After two years of uncertainty in dealing with the 1990 Clean Air Act Amendments, coupled with the shutdown of 5% of total US refining capacity, the industry is now positioning itself for continued operations throughout the remainder of the decade. However, refineries are experiencing a shift in the mode of operations to a period of more restructuring (closings, ventures, alliances, etc.) followed by a resurgence in financial performance. The purpose of this paper is to examine the current industry and highlight the reasons for industry's current plans. The authors also speculate about the strategies companies will choose to better their financial performance. Fundamentally, the characteristics of a mature domestic business remain the driving force that shapes decision making. In responding to the maturing of refining, the authors suggest that refiners will change the way they conduct business over the next few years. Building on the theme of the 1993 NPRA paper, strategies will target the domestic side of the business while simultaneously shifting to a global perspective.
Refining search terms for nanotechnology
Porter, Alan L.; Youtie, Jan; Shapira, Philip; Schoeneck, David J.
2008-01-01
The ability to delineate the boundaries of an emerging technology is central to obtaining an understanding of the technology's research paths and commercialization prospects. Nowhere is this more relevant than in the case of nanotechnology (hereafter identified as 'nano') given its current rapid growth and multidisciplinary nature. (Under the rubric of nanotechnology, we also include nanoscience and nanoengineering.) Past efforts have utilized several strategies, including simple term search for the prefix nano, complex lexical and citation-based approaches, and bootstrapping techniques. This research introduces a modularized Boolean approach to defining nanotechnology which has been applied to several research and patenting databases. We explain our approach to downloading and cleaning data, and report initial results. Comparisons of this approach with other nanotechnology search formulations are presented. Implications for search strategy development and profiling of the nanotechnology field are discussed.
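A modularized Boolean approach of the kind described can be pictured as named term-list modules combined with AND/OR/NOT over record text. The sketch below uses invented stand-in term lists, not the authors' published nano query:

```python
# Sketch of a modularized Boolean search: each named module is a list
# of patterns OR-ed together, and modules are combined with AND / NOT.
# The term lists here are illustrative stand-ins only.
import re

MODULES = {
    "nano_terms": [r"\bnano\w+"],            # simple prefix module
    "exclusions": [r"\bnanoseconds?\b"],     # false-positive filter
}

def matches_any(text, patterns):
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def is_nano_record(text):
    """Include a record iff a nano term hits AND no exclusion hits."""
    return (matches_any(text, MODULES["nano_terms"])
            and not matches_any(text, MODULES["exclusions"]))

print(is_nano_record("Synthesis of nanotube arrays"))   # True
print(is_nano_record("a 5 nanosecond laser pulse"))     # False
```

Keeping inclusion and exclusion lists as separate modules is what makes the query easy to tune per database, which is the point of the modularized design.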
Marijan Lužnik
2018-02-01
Background: Use of alloplastic mesh implants allows new urogynaecological surgical techniques to achieve a marked improvement in pelvic organ statics and pelvic floor function with minimally invasive transvaginal needle interventions such as the anterior transobturator mesh (ATOM) and posterior ischiorectal mesh (PIRM) procedures. Methods: In three years, between April 2006 and May 2009, we performed one hundred and eighty-four operative corrections of female pelvic organ prolapse (POP) and pelvic floor dysfunction (PFD) with mesh implants. The eighty-three patients with the surgical procedure TVT-O or Monarc as a solo intervention, indicated by stress urinary incontinence without POP, are not included in this number. In 97% of mesh operations, Gynemesh 10 × 15 cm was used. For correction of anterior vaginal prolapse with the ATOM procedure, Gynemesh was individually trimmed into a mesh with 6 free arms for tension-free transobturator application and a tension-free apical collar. The IVS (intravaginal sling) 04 Tunneller (Tyco) needle system was used for transobturator application of the 6 arms through 4 dermal incisions (2 on the right and 2 on the left). A minimal anterior median colpotomy was made in two separate parts. For correction of posterior vaginal prolapse with the PIRM procedure, Gynemesh was trimmed into a mesh with 4 free arms and a tension-free collar: two long ischiorectal arms for tension-free application through the fossa ischiorectalis, right and left, and two short arms for the perineal body, also on both sides. The IVS 02 Tunneller (Tyco) needle system was used for tension-free application of the 4 arms through 4 dermal incisions (2 on the right and 2 on the left) in PIRM. Results: All 184 procedures were performed relatively safely. In 9 cases of ATOM we had perforation of the bladder: in 5 by application of the anterior needle, in 3 by application of the posterior needle, and in one case with a pincette when the collar was inserted into the lateral vesico-vaginal space. In 2 cases of PIRM we had perforation of the rectum
Refinery production planning and scheduling: the refining core business
M. Joly
2012-06-01
Intelligent production planning and scheduling are of paramount importance to ensure refinery profitability, logistic reliability and safety at the local and corporate levels. In Brazil, such activities play a particularly critical role, since the Brazilian downstream model is moving towards a demand-driven model rather than a supply-driven one. Moreover, new and specialized non-linear constraints are continuously being incorporated into these large-scale problems: increases in oil prices implying the need for processing poor-quality crudes, increasing demand and new demand patterns for petroleum products, new stringent environmental regulations related to clean fuels, and the start-up of new production technologies embedded in more complex refining schemes. This paper aims at clarifying the central role of refinery planning and scheduling activities in the Petrobras refining business. Major past and present results are outlined, and corporate long-term strategies to deal with present and future challenges are presented.
Genomic multiple sequence alignments: refinement using a genetic algorithm
Lefkowitz Elliot J
2005-08-01
Background: Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences, as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To overcome the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results: We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as to refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that had initially been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only
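The score-mutate-keep loop at the heart of such a refiner can be caricatured in a few lines. This is a toy sketch, not GenAlignRefine: the scoring function below (fraction of fully conserved columns) is an invented stand-in for the COFFEE objective, and the mutation operator simply moves gaps:

```python
# Toy genetic-algorithm refinement loop in the spirit described:
# mutate the current best alignment and keep the candidate only if
# a scoring function improves. The score here is an illustrative
# stand-in for COFFEE; gap-shifting is a minimal mutation operator.
import random

def column_identity(alignment):
    """Fraction of columns whose characters all agree."""
    cols = list(zip(*alignment))
    return sum(1 for col in cols if len(set(col)) == 1) / len(cols)

def mutate(alignment):
    """Move one gap within one randomly chosen sequence."""
    seqs = [list(s) for s in alignment]
    s = random.choice(seqs)
    if "-" in s:
        i, j = s.index("-"), random.randrange(len(s))
        s[i], s[j] = s[j], s[i]
    return ["".join(s) for s in seqs]

def refine(alignment, generations=200, seed=0):
    """Keep mutations that raise the score; never accept a regression."""
    random.seed(seed)
    best, best_score = alignment, column_identity(alignment)
    for _ in range(generations):
        cand = mutate(best)
        cand_score = column_identity(cand)
        if cand_score > best_score:
            best, best_score = cand, cand_score
    return best, best_score

refined, score = refine(["AC-GT", "A-CGT", "ACG-T"])
```

Because candidates are only accepted on improvement, the score is monotonically non-decreasing over generations, mirroring the "search for an alignment that yields a better score" behaviour described above.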
Texturing of continuous LOD meshes with the hierarchical texture atlas
Birkholz, Hermann
2006-02-01
For the rendering of detailed virtual environments, trade-offs have to be made between image quality and rendering time. An immersive experience of virtual reality always demands high frame rates with the best reachable image quality. Continuous Level of Detail (cLoD) triangle meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real time. This enables rendering with a constant number of triangles and thus with constant frame rates. Normally the construction of such cLoD mesh representations leads to the loss of all texture information of the original mesh. To overcome this problem, a parameter domain can be created in order to map the surface properties (colour, texture, normal) to it. This parameter domain can be used to map the surface properties back to arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh to be parameterised. This limits the reachable simplification to the domain mesh, which has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and good property mapping.
Performance of the hybrid wireless mesh protocol for wireless mesh networks
Boye, Magnus; Staalhagen, Lars
2010-01-01
Wireless mesh networks offer a new way of providing end-user access and deploying network infrastructure. Though mesh networks offer a price-competitive solution to wired networks, they also come with a set of new challenges such as optimal path selection, channel utilization, and load balancing. ... and proactive. Two scenarios of different node density are considered for both path selection modes. The results presented in this paper are based on a simulation model of the HWMP specification in the IEEE 802.11s draft 4.0 implemented in OPNET Modeler.
Cosmos++: relativistic magnetohydrodynamics on unstructured grids with local adaptive refinement
Salmonson, Jay D; Anninos, Peter; Fragile, P Chris; Camarda, Karen
2007-01-01
A code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial-viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefit of Godunov methods in capturing high Lorentz boosted flows but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. Some recent studies will be summarized
On the Performance of Medical Information Retrieval using MeSH Terms – A Survey
Swetha S
2014-09-01
Internet users have increased everywhere, and searching for and retrieving documents is now commonplace. Retrieving relevant documents from search engines is a difficult task: to retrieve the correct documents, knowledge about the search topic is essential. Even though separate search engines exist to retrieve medical documents, users are not familiar with MeSH (Medical Subject Headings) terms. So, the search browser and the MeSH terms have to be integrated to make searching effective and efficient; to implement this integration, SimpleMed and MeSHMed were introduced. The MeSH terms have to be ranked to know how frequently they have been used and to know their importance; to rank them, a semi-automated tool called MeSHy was developed. The terms are extracted, filtered, ranked and displayed to the user. Classifiers have to be constructed to label documents as health and non-health, and three strategies were used to classify them. The errors commonly made by users also have to be identified; these were calculated from the queries presented by the user to the search browser.
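As an illustration of the extract, filter, and rank pipeline attributed to MeSHy, here is a minimal sketch that ranks vocabulary terms by frequency across retrieved documents. The tokenizer, the tiny vocabulary, and the sample documents are invented for the example; the real tool's term extraction and scoring are certainly more elaborate:

```python
from collections import Counter

def rank_mesh_terms(docs, vocabulary):
    """Extract known terms from each document, discard everything else,
    and rank the surviving terms by how often they occur."""
    counts = Counter()
    for doc in docs:
        for word in doc.lower().split():
            if word in vocabulary:
                counts[word] += 1
    return [term for term, _ in counts.most_common()]

# Hypothetical retrieved documents and a toy controlled vocabulary
docs = ["Asthma treatment with inhaled steroids",
        "Steroids and asthma in children",
        "Asthma prevalence survey"]
vocab = {"asthma", "steroids", "children"}
ranked = rank_mesh_terms(docs, vocab)
```

With these inputs "asthma" appears in all three documents, so it ranks first.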
Liu, Peter X.; Lai, Pinhua; Xu, Shaoping; Zou, Yanni
2018-01-01
To date, the majority of implemented virtual surgery simulation systems have been based on either a mesh or a meshless strategy for soft tissue modelling. To take full advantage of both mesh and meshless models, a novel coupled soft tissue cutting model is proposed. Specifically, the reconstructed virtual soft tissue consists of two essential components: one is a surface mesh that is convenient for surface rendering, and the other is a set of internal meshless point elements used to calculate the force feedback during cutting. To combine the two components in a seamless way, virtual points are introduced. During the simulation of cutting, a Bezier curve is used to produce a smooth and vivid incision on the surface mesh. At the same time, the deformation of internal soft tissue caused by the cutting operation is treated as displacements of the internal point elements. Furthermore, we discuss and prove the stability and convergence of the proposed approach theoretically. Real biomechanical tests verified the validity of the introduced model, and simulation experiments show that the proposed approach offers high computational efficiency and good visual effect, enabling cutting of soft tissue with high stability. PMID:29850006
An adaptively refined XFEM with virtual node polygonal elements for dynamic crack problems
Teng, Z. H.; Sun, F.; Wu, S. C.; Zhang, Z. B.; Chen, T.; Liao, D. M.
2018-02-01
By introducing the shape functions of virtual node polygonal (VP) elements into the standard extended finite element method (XFEM), a conforming elemental mesh can be created for the cracking process. Moreover, an adaptively refined meshing with the quadtree structure only at a growing crack tip is proposed without inserting hanging nodes into the transition region. A novel dynamic crack growth method termed as VP-XFEM is thus formulated in the framework of fracture mechanics. To verify the newly proposed VP-XFEM, both quasi-static and dynamic cracked problems are investigated in terms of computational accuracy, convergence, and efficiency. The research results show that the present VP-XFEM can achieve good agreement in stress intensity factor and crack growth path with the exact solutions or experiments. Furthermore, better accuracy, convergence, and efficiency of different models can be acquired, in contrast to standard XFEM and mesh-free methods. Therefore, VP-XFEM provides a suitable alternative to XFEM for engineering applications.
How MESSENGER Meshes Simulations and Games with Citizen Science
Hirshon, B.; Chapman, C. R.; Edmonds, J.; Goldstein, J.; Hallau, K. G.; Solomon, S. C.; Vanhala, H.; Weir, H. M.; Messenger Education; Public Outreach (Epo) Team
2010-12-01
How MESSENGER Meshes Simulations and Games with Citizen Science In the film The Last Starfighter, an alien civilization grooms their future champion—a kid on Earth—using a video game. As he gains proficiency in the game, he masters the skills he needs to pilot a starship and save their civilization. The NASA MESSENGER Education and Public Outreach (EPO) Team is using the same tactic to train citizen scientists to help the Science Team explore the planet Mercury. We are building a new series of games that appear to be designed primarily for fun, but that guide players through a knowledge and skill set that they will need for future science missions in support of MESSENGER mission scientists. As players score points, they gain expertise. Once they achieve a sufficiently high score, they will be invited to become participants in Mercury Zoo, a new program being designed by Zooniverse. Zooniverse created Galaxy Zoo and Moon Zoo, programs that allow interested citizens to participate in the exploration and interpretation of galaxy and lunar data. Scientists use the citizen interpretations to further refine their exploration of the same data, thereby narrowing their focus and saving precious time. Mercury Zoo will be designed with input from the MESSENGER Science Team. This project will not only support the MESSENGER mission, but it will also add to the growing cadre of informed members of the public available to help with other citizen science projects—building on the concept that engaged, informed citizens can help scientists make new discoveries. The MESSENGER EPO Team comprises individuals from the American Association for the Advancement of Science (AAAS); Carnegie Academy for Science Education (CASE); Center for Educational Resources (CERES) at Montana State University (MSU) - Bozeman; National Center for Earth and Space Science Education (NCESSE); Johns Hopkins University Applied Physics Laboratory (JHU/APL); National Air and Space Museum (NASM); Science
Refinement Types for TypeScript
Vekris, Panagiotis; Cosman, Benjamin; Jhala, Ranjit
2016-01-01
We present Refined TypeScript (RSC), a lightweight refinement type system for TypeScript, that enables static verification of higher-order, imperative programs. We develop a formal core of RSC that delineates the interaction between refinement types and mutability. Next, we extend the core to account for the imperative and dynamic features of TypeScript. Finally, we evaluate RSC on a set of real world benchmarks, including parts of the Octane benchmarks, D3, Transducers, and the TypeScript co...
Price implications for Russia's oil refining
Khartukov, Eugene M.
1998-01-01
Over the past several years, Russia's oil industry has undergone a radical transformation from a wholly state-run and generously subsidized oil distribution system toward a substantially privatized, cash-strapped, and quasi-market ''petropreneurship''. This fully applies to the industry's downstream sector. Still, unlike the more dynamic E and C operations, the country's refining has turned out to be better fenced off from competitive market forces and is less capable of responding to market imperatives. Consequently, jammed between depressed product prices and persistent feedstock costs, Russian refiners were badly hit by the world oil glut, which has made a radical modernization of the obsolete refining sector clearly a must. (author)
Unbiased Sampling and Meshing of Isosurfaces
Yan, Dongming
2014-05-07
In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x,y,z) = c, of a function F is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is that of treating the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and integrating / sampling from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
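The key idea, treating the isosurface inside a cell as a graph (height) function in one axis direction, can be sketched for the z direction: for fixed (x, y), a trilinear F is linear in z, so F = c can be solved directly. This sketch handles only one axis, uses a crude lower bound on |dF/dz| in place of the paper's slope restriction, and omits the area weighting needed for true unbiasedness:

```python
import random

def bilerp(f00, f10, f01, f11, x, y):
    """Bilinear interpolation of four corner values on a unit face."""
    return (f00*(1 - x) + f10*x)*(1 - y) + (f01*(1 - x) + f11*x)*y

def sample_isosurface(bottom, top, c, n, min_dfdz=0.5, seed=0):
    """Sample points on the trilinear isosurface F = c inside a unit cell,
    treating z as a graph z = g(x, y).  bottom/top are the corner values
    (f00, f10, f01, f11) of the z=0 and z=1 faces.  Requiring
    |dF/dz| >= min_dfdz is a crude stand-in for the slope restriction."""
    rng = random.Random(seed)
    pts, tries = [], 0
    while len(pts) < n and tries < 100*n:
        tries += 1
        x, y = rng.random(), rng.random()
        a = bilerp(*bottom, x, y)       # F(x, y, 0)
        b = bilerp(*top, x, y) - a      # dF/dz at (x, y), constant in z
        if abs(b) < min_dfdz:
            continue                    # graph too steep / face-parallel
        z = (c - a) / b                 # solve the linear equation F = c
        if 0.0 <= z <= 1.0:
            pts.append((x, y, z))
    return pts

# Invented test cell: F = -1 + 2z, so the c = 0 isosurface is the plane z = 0.5
pts = sample_isosurface((-1.0, -1.0, -1.0, -1.0), (1.0, 1.0, 1.0, 1.0), 0.0, 10)
```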
Performance of FACTS equipment in Meshed systems
Lerch, E.; Povh, D. [Siemens AG, Berlin (Germany)]
1994-12-31
Modern power electronic devices such as thyristors and GTOs have made it possible to design controllable network elements, which will play a considerable role in ensuring reliable economic operation of transmission systems as a result of their capability to rapidly change active and reactive power. A number of FACTS elements for high-speed active and reactive power control will be described. Control of power system fluctuations in meshed systems by modulation of active and reactive power will be demonstrated using a number of examples. (author) 7 refs., 11 figs.
Symbolic Block Decomposition In Hexahedral Mesh Generation
Andrzej Adamek
2005-01-01
Hexahedral mesh generation for three-dimensional solid objects is often done in stages. Usually an object is first subdivided into simple-shaped subregions, which are then filled with hexahedral finite elements. This article presents an automatic subdivision method for polyhedra with planar faces. The subdivision is based on the medial surface, axes and nodes of a solid. The main emphasis is put on creating a topology of subregions. Obtaining such a topology involves defining a graph structure OMG which contains the necessary information about medial-surface topology and object topology, followed by simple symbolic processing on it.
Shadowfax: Moving mesh hydrodynamical integration code
Vandenbroucke, Bert
2016-05-01
Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).
Moving mesh generation with a sequential approach for solving PDEs
In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are solved simultaneously, and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution procedure ... a simple and robust moving mesh algorithm in one or multiple dimensions. In this study, we propose a sequential solution procedure with two separate parts: a prediction step to obtain an approximate solution at the next time level (integration of the physical PDEs) and a regridding step at the next time level (mesh generation and solution interpolation). Convection terms, which appear in the physical PDEs and the mesh equation, are discretized by a WENO (Weighted Essentially Non-Oscillatory) scheme in conservative form. This sequential approach is to keep the advantages of robustness and simplicity for the static...
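The regridding step, generating a mesh that equidistributes a monitor function, can be illustrated in 1D by inverting the cumulative monitor integral at equally spaced levels (de Boor's algorithm). The arc-length monitor used below is an assumed example; the paper equidistributes a general error metric:

```python
import math

def equidistribute(x, u):
    """Place nodes so every cell carries an equal share of the monitor
    integral.  Monitor: arc length, M = sqrt(1 + u_x^2), per cell."""
    n = len(x)
    M = [math.sqrt(1.0 + ((u[i+1] - u[i]) / (x[i+1] - x[i]))**2)
         for i in range(n - 1)]
    mass = [0.0]                       # cumulative integral of M
    for i in range(n - 1):
        mass.append(mass[-1] + M[i]*(x[i+1] - x[i]))
    new_x, j = [x[0]], 0
    for k in range(1, n - 1):
        target = mass[-1]*k/(n - 1)    # equal mass per new cell
        while mass[j+1] < target:      # find the cell containing the level
            j += 1
        t = (target - mass[j]) / (mass[j+1] - mass[j])
        new_x.append(x[j] + t*(x[j+1] - x[j]))
    new_x.append(x[-1])
    return new_x

# Invented example: a steep front at x = 0.5 attracts the nodes
x = [i/20 for i in range(21)]
u = [math.tanh(20*(xi - 0.5)) for xi in x]
xs = equidistribute(x, u)
```

The returned mesh keeps the endpoints fixed and clusters nodes where the solution is steep.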
Improved mesh generator for the POISSON Group Codes
Gupta, R.C.
1987-01-01
This paper describes the improved mesh generator of the POISSON Group Codes. These improvements enable one to have full control over the way the mesh is generated and in particular the way the mesh density is distributed throughout the model. A higher mesh density in certain regions coupled with a successively lower mesh density in others keeps the accuracy of the field computation high and the requirements on the computer time and computer memory low. The mesh is generated with the help of codes AUTOMESH and LATTICE; both have gone through a major upgrade. Modifications have also been made in the POISSON part of these codes. We shall present an example of a superconducting dipole magnet to explain how to use this code. The results of field computations are found to be reliable within a few parts in a hundred thousand even in such complex geometries
Image-Based Geometric Modeling and Mesh Generation
2013-01-01
As a new interdisciplinary research area, “image-based geometric modeling and mesh generation” integrates image processing, geometric modeling and mesh generation with finite element method (FEM) to solve problems in computational biomedicine, materials sciences and engineering. It is well known that FEM is currently well-developed and efficient, but mesh generation for complex geometries (e.g., the human body) still takes about 80% of the total analysis time and is the major obstacle to reduce the total computation time. It is mainly because none of the traditional approaches is sufficient to effectively construct finite element meshes for arbitrarily complicated domains, and generally a great deal of manual interaction is involved in mesh generation. This contributed volume, the first for such an interdisciplinary topic, collects the latest research by experts in this area. These papers cover a broad range of topics, including medical imaging, image alignment and segmentation, image-to-mesh conversion,...
HypGrid2D. A 2-d mesh generator
Soerensen, N N
1998-03-01
The implementation of a hyperbolic mesh generation procedure, based on an equation for orthogonality and an equation for the cell face area, is described. The method is fast, robust and gives meshes with good smoothness and orthogonality. The procedure is implemented in a program called HypGrid2D. The HypGrid2D program is capable of generating C-, O- and H-meshes for use in connection with the EllipSys2D Navier-Stokes solver. To illustrate the capabilities of the program, some test examples are shown. First a series of C-meshes are generated around a NACA-0012 airfoil. Secondly a series of O-meshes are generated around a NACA-65-418 airfoil. Finally H-meshes are generated over a Gaussian hill and a linear escarpment. (au)
Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schofield, Samuel P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-06-21
A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
Doris Amanda Rosero
2008-09-01
Molecular biology methods offer an alternative for improving tuberculosis control strategies through M. tuberculosis strain typing techniques. The international literature shows that RFLP typing of the insertion element IS6110 is widely standardized internationally and has proved to be a useful tool to guide local tuberculosis control strategies. By means of a thorough literature review, this study aimed to determine whether this molecular method could contribute to the design and refinement of tuberculosis control strategies in Colombia. Results from studies published between 1993 and 2008 that used this technique in developing countries, including Colombia, were analyzed. The results suggest that in the Colombian context this technique can offer useful information to tuberculosis control programme directors and should therefore continue to be performed. Establishing the periodicity, target populations and other optimal conditions for applying the technique will require operational research studies that include cost-effectiveness and cost-utility analyses.
AUTOMATIC MESH GENERATION OF 3-D GEOMETRIC MODELS
刘剑飞
2003-01-01
In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate meshes for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate the mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.
A moving mesh method with variable relaxation time
Soheili, Ali Reza; Stockie, John M.
2006-01-01
We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time \\tau is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter \\tau. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...
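A minimal sketch of the role of the relaxation time: one explicit step of MMPDE5, x_t = (M x_xi)_xi / tau, on a uniform computational grid. Here tau is a fixed parameter and the monitor M is an invented Gaussian bump; the paper's contribution is precisely to compute tau adaptively alongside the solution:

```python
import math

def mmpde_step(x, monitor, tau, dt):
    """One explicit step of MMPDE5 with computational spacing d_xi = 1.
    Smaller tau relaxes the mesh toward equidistribution faster."""
    new = list(x)
    for i in range(1, len(x) - 1):
        m_r = 0.5*(monitor(x[i]) + monitor(x[i+1]))   # M at right half-edge
        m_l = 0.5*(monitor(x[i-1]) + monitor(x[i]))   # M at left half-edge
        new[i] = x[i] + (dt/tau)*(m_r*(x[i+1] - x[i]) - m_l*(x[i] - x[i-1]))
    return new

# Invented monitor peaked at x = 0.5; nodes drift toward the peak
M = lambda s: 1.0 + 10.0*math.exp(-((s - 0.5)/0.1)**2)
x = [i/20 for i in range(21)]
for _ in range(5000):                  # dt/tau chosen well inside stability
    x = mmpde_step(x, M, tau=1.0, dt=0.01)
```

At steady state M x_xi is constant, so cell widths scale like 1/M: the mesh concentrates where the monitor is large.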
Bilateral Laparoscopic Totally Extraperitoneal Repair Without Mesh Fixation
Dehal, Ahmed; Woodward, Brandon; Johna, Samir; Yamanishi, Frank
2014-01-01
Background and Objectives: Mesh fixation during laparoscopic totally extraperitoneal repair is thought to be necessary to prevent recurrence. However, mesh fixation may increase postoperative chronic pain. This study aimed to describe the experience of a single surgeon at our institution performing this operation. Methods: We performed a retrospective review of the medical records of all patients who underwent bilateral laparoscopic totally extraperitoneal repair without mesh fixation for ing...
Automated quadrilateral mesh generation for digital image structures
Anon.
2011-01-01
With the development of advanced imaging technology, digital images are widely used. This paper proposes an automatic quadrilateral mesh generation algorithm for multi-colour imaged structures. It takes an original arbitrary digital image as an input for automatic quadrilateral mesh generation, this includes removing the noise, extracting and smoothing the boundary geometries between different colours, and automatic all-quad mesh generation with the above boundaries as constraints. An application example is...
An Agent Based Collaborative Simplification of 3D Mesh Model
Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro
Large-volume mesh models face challenges in fast rendering and transmission over the Internet. Mesh models obtained by three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the mobile-C development platform. Communication among distributed agents includes grabbing images of the visualized mesh model, annotating grabbed images and instant messaging. Remote and collaborative simplification can be efficiently conducted over the Internet.
Robotic removal of eroded vaginal mesh into the bladder.
Macedo, Francisco Igor B; O'Connor, Jeffrey; Mittal, Vijay K; Hurley, Patrick
2013-11-01
Vaginal mesh erosion into the bladder after midurethral sling procedure or cystocele repair is uncommon, with only a few cases having been reported in the literature. The ideal surgical management is still controversial. Current options for removal of eroded mesh include: endoscopic, transvaginal or abdominal (either open or laparoscopic) approaches. We, herein, present the first case of robotic removal of a large eroded vaginal mesh into the bladder and discuss potential benefits and limitations of the technique. © 2013 The Japanese Urological Association.
Adaptive-mesh zoning by the equipotential method
Winslow, A.M.
1981-04-01
An adaptive mesh method is proposed for the numerical solution of differential equations which causes the mesh lines to move closer together in regions where higher resolution in some physical quantity T is desired. A coefficient D > 0 is introduced into the equipotential zoning equations, where D depends on the gradient of T . The equations are inverted, leading to nonlinear elliptic equations for the mesh coordinates with source terms which depend on the gradient of D. A functional form of D is proposed.
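The effect of the coefficient D can be sketched with a simple weighted relaxation: each interior node moves to the D-weighted average of its four neighbours, so mesh lines move closer together where D is large. The function D below is an assumed Gaussian peaking where the gradient of T would be steep; this is a cartoon of the inverted equipotential equations, not the paper's actual discretization:

```python
import math

def relax(xs, ys, D, sweeps=200):
    """Weighted Gauss-Seidel relaxation of a logically rectangular mesh.
    Boundary nodes are held fixed; high-D edges contract, clustering cells."""
    nx, ny = len(xs), len(xs[0])
    for _ in range(sweeps):
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                nb = [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                # D evaluated at the midpoint of each edge to a neighbour
                ws = [D(0.5*(xs[i][j] + xs[a][b]), 0.5*(ys[i][j] + ys[a][b]))
                      for a, b in nb]
                s = sum(ws)
                xs[i][j] = sum(w*xs[a][b] for w, (a, b) in zip(ws, nb))/s
                ys[i][j] = sum(w*ys[a][b] for w, (a, b) in zip(ws, nb))/s
    return xs, ys

# D is large near x = 0.5, standing in for a steep gradient of T there
D = lambda x, y: 1.0 + 20.0*math.exp(-((x - 0.5)/0.1)**2)
n = 11
xs = [[i/(n-1) for j in range(n)] for i in range(n)]
ys = [[j/(n-1) for j in range(n)] for i in range(n)]
xs, ys = relax(xs, ys, D)
hs = [xs[i+1][5] - xs[i][5] for i in range(n - 1)]   # spacing along mid row
```

After relaxation the mid-row spacing is clearly non-uniform, with the smallest cells straddling x = 0.5.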
Refinement for Transition Systems with Responses
Marco Carbone
2012-07-01
Motivated by the response pattern for property specifications and applications within flexible workflow management systems, we report upon an initial study of modal and mixed transition systems in which the must transitions are interpreted as must eventually, and in which implementations can contain may behaviors that are resolved at run-time. We propose Transition Systems with Responses (TSRs) as a suitable model for this study. We prove that TSRs correspond to a restricted class of mixed transition systems, which we refer to as the action-deterministic mixed transition systems. We show that TSRs allow for a natural definition of deadlocked and accepting states. We then transfer the standard definition of refinement for mixed transition systems to TSRs and prove that refinement does not preserve deadlock freedom. This leads to the proposal of safe refinements, which are those that preserve deadlock freedom. We exemplify the use of TSRs and (safe) refinements on a small medication workflow.
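The notions of pending responses and deadlocked states can be illustrated with a small reachability check, under the simplified reading that a state is deadlocked when it has unfulfilled responses but no outgoing transitions. The workflow, state names, and encoding are invented for the example, not taken from the paper:

```python
from collections import deque

def deadlock_free(init, trans, pending):
    """trans: state -> set of successor states; pending: state -> set of
    unfulfilled responses.  The system is deadlock-free if no reachable
    state has pending responses yet no way to make progress."""
    seen, q = {init}, deque([init])
    while q:
        s = q.popleft()
        if pending.get(s) and not trans.get(s):
            return False                      # reachable deadlocked state
        for t in trans.get(s, ()):
            if t not in seen:
                seen.add(t)
                q.append(t)
    return True

# Toy medication workflow: prescribe -> sign -> give
trans = {'prescribed': {'signed'}, 'signed': {'given'}}
pending = {'prescribed': {'sign'}, 'signed': {'give'}, 'given': set()}
ok = deadlock_free('prescribed', trans, pending)      # True: all responses met
broken = {'prescribed': {'signed'}}                   # refinement drops a step
bad = deadlock_free('prescribed', broken, pending)    # False: 'give' pending
```

The second call shows how a refinement that removes behaviour can break deadlock freedom, which is what the paper's safe refinements rule out.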
Taiwan: refined need for consuming population
Hayes, David.
1995-01-01
A brief discussion is given of the oil and gas industry in Taiwan. Topics covered include the possibility of privatization, refineries and refining contracts overseas, plans for a new petrochemical complex and an offshore submarine transmission pipeline. (UK)
1991 worldwide refining and gas processing directory
Anon.
1990-01-01
This book is an authority for immediate information on the industry. You can use it to find new business, analyze market trends, and stay in touch with existing contacts while making new ones. The possibilities for business applications are numerous. Arranged by country, all listings in the directory include address, phone, fax and telex numbers, a description of the company's activities, names of key personnel and their titles, corporate headquarters, branch offices and plant sites. This newly revised edition lists more than 2000 companies and nearly 3000 branch offices and plant locations. This easy-to-use reference also includes several of the most vital and informative surveys of the industry, including the U.S. Refining Survey; the Worldwide Construction Survey in Refining, Sulfur, Gas Processing and Related Fuels; the Worldwide Refining and Gas Processing Survey; the Worldwide Catalyst Report; and the U.S. and Canadian Lube and Wax Capacities Report from the National Petroleum Refiners Association
Development of a Refined Staff Group Trainer
Quensel, Susan
1999-01-01
As a follow-on effort to the previous SGT project, the goal was to refine a brigade-level staff training program to more effectively and efficiently coordinate the activities within and between the...
Monitoring and evaluation of wire mesh forming life
Enemuoh, Emmanuel U.; Zhao, Ping; Kadlec, Alec
2018-03-01
Forming tables are used with stainless steel wire mesh conveyor belts to produce a variety of products. The forming tables will typically run continuously for several days, with some hours of scheduled downtime for maintenance, cleaning and part replacement after several weeks of operation. The wire mesh conveyor belts show large variation in their remaining life due to associated variations in their nominal thicknesses. Currently the industry depends on seasoned operators to determine the replacement time for the wire mesh formers. The drawbacks of this approach are inconsistency in the judgements made by different operators and the lack of data that could support a more consistent decision-making system for wire mesh life prediction and replacement timing. In this study, diagnostic measurements of the health of the wire mesh former are investigated and developed. The wire mesh quality characteristics considered are thermal measurements, tension, gage thickness, and wire mesh wear. The results show that real-time thermal sensor and wear measurements provide suitable data for estimating wire mesh failure and can therefore be used as diagnostic parameters for developing a structural health monitoring (SHM) system for stainless steel wire mesh formers.
SALOME PLATFORM and TetGen for Polyhedral Mesh Generation
Lee, Sang Yong; Park, Chan Eok; Kim, Shin Whan [KEPCO E and C Company, Inc., Daejeon (Korea, Republic of)
2014-05-15
SPACE and CUPID use unstructured meshes, and they require a reliable mesh generation system. A combination of a CAD system and a mesh generation system is necessary to cope with a large number of cells and a complex fluid system with structural materials inside. In the past, the CAD system Pro/Engineer and the mesh generator Pointwise were evaluated for this application, but the cost of such commercial CAD and mesh generation tools is sometimes a great burden. Therefore, efforts have been made to set up a mesh generation system with open-source programs. TetGen has been evaluated with a focus on polyhedral mesh generation. In this paper, SALOME is evaluated in conjunction with TetGen. Section 2 reviews the CAD and mesh generation capability of SALOME. The SALOME and TetGen codes are being integrated to construct a robust polyhedral mesh generator. Edge removal on a flat surface and vertex reattachment to the solid are two challenging tasks. It is worthwhile to point out that the Python scripting capability of SALOME should be fully utilized in future investigations.
An Algorithm for Parallel Sn Sweeps on Unstructured Meshes
Pautz, Shawn D.
2002-01-01
A new algorithm for performing parallel Sn sweeps on unstructured meshes is developed. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with 'normal' mesh partitionings, nearly linear speedups on up to 126 processors are observed. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, no severe asymptotic degradation in the parallel efficiency is observed with modest (≤100) levels of parallelism. This result is a fundamental step in the development of efficient parallel Sn methods
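A sweep ordering of this kind can be sketched as a topological sort of the cell-dependence graph induced by one angular direction: a cell can be solved once all of its upwind neighbours are done. The Kahn-style ordering below is a generic stand-in on an invented 2x2 mesh, not the paper's low-complexity list heuristic:

```python
from collections import defaultdict, deque

def sweep_order(ncells, faces, omega):
    """Topologically order cells for one sweep direction omega.
    faces: (a, b, n) with face normal n pointing from cell a to cell b;
    a is upwind of b when dot(n, omega) > 0.  Returns fewer than ncells
    entries if the direction induces a dependence cycle."""
    succ = defaultdict(list)
    indeg = [0]*ncells
    for a, b, n in faces:
        d = sum(ni*oi for ni, oi in zip(n, omega))
        if d > 0:
            succ[a].append(b); indeg[b] += 1
        elif d < 0:
            succ[b].append(a); indeg[a] += 1
    q = deque(i for i in range(ncells) if indeg[i] == 0)
    order = []
    while q:
        c = q.popleft()
        order.append(c)
        for t in succ[c]:
            indeg[t] -= 1
            if indeg[t] == 0:
                q.append(t)
    return order

# 2x2 grid: cells 0 (SW), 1 (SE), 2 (NW), 3 (NE); normals point a -> b
faces = [(0, 1, (1, 0)), (2, 3, (1, 0)), (0, 2, (0, 1)), (1, 3, (0, 1))]
order = sweep_order(4, faces, (1.0, 1.0))   # sweep toward the NE corner
```

For the NE-pointing direction the SW cell has no upwind neighbours and is solved first; the NE cell depends on both of its neighbours and comes last.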
Reconfigurable lattice mesh designs for programmable photonic processors.
Pérez, Daniel; Gasulla, Ivana; Capmany, José; Soref, Richard A
2016-05-30
We propose and analyse two novel mesh design geometries for the implementation of tunable optical cores in programmable photonic processors: the hexagonal and the triangular lattice. They are compared here to a previously proposed square mesh topology in terms of a series of figures of merit relevant to on-chip integration of the mesh. We find that the hexagonal mesh is the most suitable of the three options considered for implementing the reconfigurable optical core in the programmable processor.
Symptom resolution after operative management of complications from transvaginal mesh.
Crosby, Erin C; Abernethy, Melinda; Berger, Mitchell B; DeLancey, John O; Fenner, Dee E; Morgan, Daniel M
2014-01-01
Complications from transvaginal mesh placed for prolapse often require operative management. The aim of this study is to describe the outcomes of vaginal mesh removal. A retrospective review was performed of all patients having surgery by the urogynecology group in the department of obstetrics and gynecology at our institution for a complication of transvaginal mesh placed for prolapse. Demographics, presenting symptoms, surgical procedures, and postoperative symptoms were abstracted. Comparative statistics were performed using the χ² or Fisher's exact test, with significance at P < .05. Of the patients who underwent removal of transvaginal mesh, 84 had follow-up data. The most common presenting signs and symptoms were mesh exposure, 62% (n=56); pain, 64% (n=58); and dyspareunia, 48% (n=43). During operative management, mesh erosion was encountered unexpectedly in a second area of the vagina in 5% (n=4), in the bladder in 1% (n=1), and in the bowel in 2% (n=2). After vaginal mesh removal, 51% (n=43) had resolution of all presenting symptoms. Mesh exposure was treated successfully in 95% of patients, whereas pain was successfully treated in only 51% of patients. Removal of vaginal mesh is helpful in relieving presenting symptoms. Patients can be reassured that exposed mesh can almost always be successfully managed surgically, but pain and dyspareunia resolve completely in only half of patients. Level of evidence: III.
The mesh controversy [version 1; referees: 2 approved]
Joshua A. Cohn
2016-09-01
Pelvic organ prolapse and stress urinary incontinence are common conditions for which approximately 11% of women will undergo surgical intervention in their lifetime. The use of vaginal mesh for pelvic organ prolapse and stress urinary incontinence rose rapidly in the early 2000s as over 100 mesh products were introduced into the clinical armamentarium with little regulatory oversight for their use. US Food and Drug Administration Public Health Notifications in 2008 and 2011, as well as reclassification of transvaginal mesh for prolapse to class III in early 2016, were a response to debilitating complications associated with transvaginal mesh placement in many women. The midurethral sling has not been subject to the same reclassification and continues to be endorsed as the “gold standard” for surgical management of stress urinary incontinence by subspecialty societies. However, litigators have not differentiated between mesh for prolapse and mesh for incontinence. As such, all mesh, including that placed for stress urinary incontinence, faces continued controversy amidst an uncertain future. In this article, we review the background of the mesh controversy, recent developments, and the anticipated role of mesh in surgery for prolapse and stress urinary incontinence going forward.
A Reconfigurable Mesh-Ring Topology for Bluetooth Sensor Networks
Ben-Yi Wang
2018-05-01
In this paper, a Reconfigurable Mesh-Ring (RMR) algorithm is proposed for Bluetooth sensor networks. The algorithm is designed in three stages to determine the optimal configuration of the mesh-ring network. First, a designated root advertises and discovers its neighboring nodes. Second, a scatternet criterion is built to compute the minimum number of piconets and to distribute the connection information for the piconets and the scatternet. Finally, a peak-search method is designed to determine the optimal mesh-ring configuration for various network sizes. To maximize network capacity, the research problem is formulated as determining the best connectivity of the available mesh links. During the formation and maintenance phases, three possible configurations (piconet, scatternet, and hybrid) are examined to determine the optimal placement of mesh links. The peak-search method is a systematic approach implemented in three functional blocks: the topology formation block generates the mesh-ring topology, the routing efficiency block computes the routing performance, and the optimum decision block introduces a decision-making criterion to determine the optimum number of mesh links. Simulation results demonstrate that the optimal mesh-ring configuration can be determined and that the scatternet case achieves better overall performance than the other two configurations. The RMR topology also outperforms conventional ring-based and cluster-based mesh methods in terms of throughput for Bluetooth configurable networks.
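The peak-search step can be pictured as a scan over the number of mesh links that stops once the capacity metric passes its peak. The sketch below is not the RMR algorithm itself; the capacity function and its unimodality are assumptions made for illustration:

```python
def peak_search(capacity, max_links):
    """Find the number of mesh links that maximises a capacity metric,
    assuming capacity(k) is unimodal in k (the premise of a peak search)."""
    best_k, best_c = 0, capacity(0)
    for k in range(1, max_links + 1):
        c = capacity(k)
        if c > best_c:
            best_k, best_c = k, c
        else:
            break  # past the peak of a unimodal curve: stop early
    return best_k, best_c

# Hypothetical metric: added links raise capacity until interference
# and MAC-layer contention start to dominate.
cap = lambda k: 10 * k - k * k
print(peak_search(cap, 10))  # (5, 25)
```

In the paper's setting, evaluating `capacity(k)` corresponds to running the topology formation and routing efficiency blocks for a candidate number of mesh links.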
Oil refining in South Asia and Australasia
Yamaguchi, N.D.
2000-01-01
An overview of the oil markets of Southeast Asia and Australasia is presented focussing on oil refining. Key statistics of both areas are tabulated, and figures providing information on GDP/capita, crude production, comparison of demand barrels, and product demand are provided. Crude oil production and supply, oil product demand, and the refining industries are examined with details given of evolution of capacity and cracking to distillation ratios
The present state of refining in France
1996-01-01
The European refining industry suffers from production over-capacity and closures are inevitable; the situation is even worse in France due to the imbalance between gas oil and gasoline prices and the weak margin for distributors. The French refining industry is nevertheless an important and essential link in the country's strategic fuel and petroleum product supply, and represents 17,000 jobs. Several measures have been introduced by the French Ministry of Industry towards restructuring, capacity reduction and fuel price harmonization.
Mesh-based parallel code coupling interface
Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)
2001-04-01
MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code owners with the ability to combine different simulation tools in one environment, creating new solutions for multidisciplinary problems and opening new application areas for existing simulation tools. This Book of Abstracts gives a short overview of ongoing activities in industry and research, all presented at the 2nd MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.)
Basic Algorithms for the Asynchronous Reconfigurable Mesh
Yosi Ben-Asher
2002-01-01
Many constant-time algorithms for various problems have been developed for the reconfigurable mesh (RM) in the past decade. All of these algorithms are designed for synchronous execution, with no regard for the fact that large RMs will probably be asynchronous. A similar observation about the PRAM model motivated many researchers to develop algorithms and complexity measures for the asynchronous PRAM (APRAM). In this work, we show how to define the asynchronous reconfigurable mesh (ARM) and how to measure the complexity of asynchronous algorithms executed on it. We show that connecting all processors in a row of an n×n ARM (the analog of barrier synchronization in the APRAM model) can be solved with complexity Θ(n log n); intuitively, this is the average work time for solving such a problem. Next, we describe a general technique for simulating T-step synchronous RM algorithms on the ARM with complexity Θ(T·n² log n). Finally, we consider the simulation of the classical synchronous algorithm for counting the number of non-zero bits in an n-bit vector using (k
Design of Grain Refiners for Aluminium Alloys
Tronche, A.; Greer, A. L.
The efficiency of a grain refiner can be quantified as the number of grains per nucleant particle in the solidified product. Even for effective refiners in aluminium, such as Al-5Ti-1B, it is known from experiments that efficiencies are very low, at best 10⁻³ to 10⁻². It is of interest to explore the reasons for such low values, and to assess the prospects for increased efficiency through design of refiners. Recently it has been shown [1] that a simple recalescence-based model can make quantitative predictions of grain size as a function of refiner addition level, cooling rate and solute content. In the model, the initiation of grains is limited by free growth from the nucleant particles, the size distribution of which is very important. The present work uses this model as the basis for discussing the effect of particle size distribution on grain refiner performance. Larger particles (of TiB2, in the case of present interest) promote greater efficiency, as do narrower size distributions. It is shown that even if the size distribution could be exactly specified, compromises would have to be made to balance efficiency (defined as above) with other desirable characteristics of a refiner.
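The free-growth criterion underlying this kind of recalescence-based model states that a nucleant particle of diameter d initiates a grain only once the undercooling exceeds ΔT_fg = 4σ/(ΔS_v·d), so the efficiency can be estimated as the fraction of particles whose ΔT_fg lies below the maximum undercooling reached before recalescence. A rough sketch, with illustrative (assumed) property values for aluminium and a hypothetical log-normal TiB2 size distribution, not the paper's fitted parameters:

```python
import math
import random

def free_growth_undercooling(d, sigma=0.158, dSv=1.112e6):
    """Free-growth undercooling (K) for a nucleant particle of diameter d (m).
    sigma (solid-liquid interfacial energy, J/m^2) and dSv (entropy of fusion
    per unit volume, J/K/m^3) are illustrative values for aluminium."""
    return 4.0 * sigma / (dSv * d)

def efficiency(diameters, dT_max):
    """Fraction of particles that initiate a grain before recalescence
    caps the melt undercooling at dT_max (K): grains per particle."""
    active = sum(1 for d in diameters if free_growth_undercooling(d) <= dT_max)
    return active / len(diameters)

# Hypothetical log-normal size distribution (median 1 um): at a modest
# undercooling only the largest particles fire, so the grains-per-particle
# efficiency comes out far below one, as the abstract describes.
random.seed(0)
sizes = [math.exp(random.gauss(math.log(1e-6), 0.5)) for _ in range(10000)]
print(f"efficiency at 0.2 K undercooling: {efficiency(sizes, 0.2):.3f}")
```

Shifting the distribution toward larger diameters or narrowing it raises the active fraction, which is the size-distribution effect the paper explores.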
Superior refining performance beyond 2000 -- Breaking traditional paradigms
Tassel, B. van [McKinsey and Co., Inc., Houston, TX (United States)]
1995-01-01
Over the last 5 years, refining companies have not performed well financially, generating returns below the cost of capital. Environmental regulations have caused the industry to invest significant amounts of capital, and while new regulations will cause the shutdown of between 500 thousand and 1.2 million barrels per day of capacity, the industry structure will remain poor and financial returns for the average player will likely be volatile, cyclical, and below the cost of capital. Based on this industry outlook, refining companies seeking superior performance will have to break the traditional paradigms and adopt world-class practices used in other industries. Changes required to significantly improve financial returns will include shifts in business strategy to accommodate growth and development of nontraditional services, as well as initiatives to dramatically reshape cost structure and improve profitability. Making the changes to become a superior performer in the refining business will require a clear vision and strong leadership at multiple levels in the organization. The transformation will also require changes in company culture and incentive plans that encourage managers to act as owners. In addition, superior performers will push accountability for results to low levels in the organization. Given the herd mentality of the oil industry, superior performers must take decisive, preemptive action to generate a substantial competitive advantage.
Refining SCJ Mission Specifications into Parallel Handler Designs
Frank Zeyda
2013-05-01
Full Text Available Safety-Critical Java (SCJ is a recent technology that restricts the execution and memory model of Java in such a way that applications can be statically analysed and certified for their real-time properties and safe use of memory. Our interest is in the development of comprehensive and sound techniques for the formal specification, refinement, design, and implementation of SCJ programs, using a correct-by-construction approach. As part of this work, we present here an account of laws and patterns that are of general use for the refinement of SCJ mission specifications into designs of parallel handlers used in the SCJ programming paradigm. Our notation is a combination of languages from the Circus family, supporting state-rich reactive models with the addition of class objects and real-time properties. Our work is a first step to elicit laws of programming for SCJ and fits into a refinement strategy that we have developed previously to derive SCJ programs.
Challenges and technological opportunities for the oil refining industry: A Brazilian refinery case
Castelo Branco, David A.; Gomes, Gabriel L.; Szklo, Alexandre S.
2010-01-01
The worldwide oil refining industry currently faces strong challenges related to uncertainties about future feedstock and characteristics of oil products. These challenges favor two main strategies for the sector: the first strategy is increasing refinery complexity and versatility; the second is integrating the refining and petrochemical industries, adding value to the crude oil while guaranteeing market share to premium oil products. Both strategies aim at increasing production of highly specified oil products, simultaneously reducing the environmental impacts of the refining industry. This paper analyses the case of a Brazilian refinery, Gabriel Passos Refinery (REGAP), by proposing additional investments to alter and/or expand its current production scheme. All the proposed options present relatively low investment rates of return. However, investments in a hydrocracking based configuration with a gasification unit providing hydrogen and power can further improve the operational profitability, due to reduced natural gas consumption.
Anterior lumbar fusion with titanium threaded and mesh interbody cages.
Rauzzino, M J; Shaffrey, C I; Nockels, R P; Wiggins, G C; Rock, J; Wagner, J
1999-12-15
metastatic breast cancer who had undergone an L-3 corpectomy with placement of a mesh cage. Although her back pain was immediately resolved, she died of systemic disease 3 months after surgery and before fusion could occur. Complications related to the anterior approach included two vascular injuries (two left common iliac vein lacerations); one injury to the sympathetic plexus; one case of superficial phlebitis; two cases of prolonged ileus (greater than 48 hours postoperatively); one anterior femoral cutaneous nerve palsy; and one superficial wound infection. No deaths were directly related to the surgical procedure. There were no cases of dural laceration and no nerve root injury. There were no cases of deep venous thrombosis, pulmonary embolus, retrograde ejaculation, abdominal hernia, bowel or ureteral injury, or deep wound infection. Fusion-related complications included an iliac crest hematoma and prolonged donor-site pain in one patient. There were no complications related to placement or migration of the cages, but there was one case of screw fracture of the Kaneda device that did not require revision. The authors conclude that anterior lumbar fusion performed using titanium interbody or mesh cages, packed with autologous bone, is an effective, safe method to achieve fusion in a wide variety of pathological conditions of the thoracolumbar spine. The fusion rate of 96% compares favorably with results reported in the literature. The complication rate mirrors the low morbidity rate associated with the anterior approach. A detailed study of clinical outcomes is in progress. Patient selection and strategies for avoiding complication are discussed.
MeshVoro: A Three-Dimensional Voronoi Mesh Building Tool for the TOUGH Family of Codes
Freeman, C. M.; Boyle, K. L.; Reagan, M.; Johnson, J.; Rycroft, C.; Moridis, G. J.
2013-09-30
Few tools exist for creating and visualizing complex three-dimensional simulation meshes, and those that do have limitations that restrict their application to particular geometries and circumstances. Mesh generation needs to trend toward ever more general applications. To that end, we have developed MeshVoro, a tool that is based on the Voro++ (Rycroft 2009) library and is capable of generating complex three-dimensional Voronoi tessellation-based (unstructured) meshes for the solution of problems of flow and transport in subsurface geologic media that are addressed by the TOUGH (Pruess et al. 1999) family of codes. MeshVoro, which includes built-in data visualization routines, is a particularly useful tool because it extends the applicability of the TOUGH family of codes by enabling the scientifically robust and relatively easy discretization of systems with challenging 3D geometries. We describe several applications of MeshVoro. We illustrate the ability of the tool to straightforwardly transform a complex geological grid into a simulation mesh that conforms to the specifications of the TOUGH family of codes. We demonstrate how MeshVoro can describe complex system geometries with a relatively small number of grid blocks, and we construct meshes for geometries that would have been practically intractable with a standard Cartesian grid approach. We also discuss the limitations and appropriate applications of this new technology.
Automated crack detection in conductive smart-concrete structures using a resistor mesh model
Downey, Austin; D'Alessandro, Antonella; Ubertini, Filippo; Laflamme, Simon
2018-03-01
Various nondestructive evaluation techniques are currently used to automatically detect and monitor cracks in concrete infrastructure. However, these methods often lack the scalability and cost-effectiveness over large geometries. A solution is the use of self-sensing carbon-doped cementitious materials. These self-sensing materials are capable of providing a measurable change in electrical output that can be related to their damage state. Previous work by the authors showed that a resistor mesh model could be used to track damage in structural components fabricated from electrically conductive concrete, where damage was located through the identification of high resistance value resistors in a resistor mesh model. In this work, an automated damage detection strategy that works through placing high value resistors into the previously developed resistor mesh model using a sequential Monte Carlo method is introduced. Here, high value resistors are used to mimic the internal condition of damaged cementitious specimens. The proposed automated damage detection method is experimentally validated using a 500 × 500 × 50 mm3 reinforced cement paste plate doped with multi-walled carbon nanotubes exposed to 100 identical impact tests. Results demonstrate that the proposed Monte Carlo method is capable of detecting and localizing the most prominent damage in a structure, demonstrating that automated damage detection in smart-concrete structures is a promising strategy for real-time structural health monitoring of civil infrastructure.
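The principle of locating damage through a resistor mesh model can be illustrated on a toy network: solve the nodal equations G v = i for a healthy and a damaged mesh, then flag the edge whose voltage drop deviates most from the baseline. This is a simplified stand-in for the paper's sequential Monte Carlo approach; the 2×2 grid and conductance values below are invented for illustration:

```python
def solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def node_voltages(n, edges, g, inject, ground=0):
    """Nodal analysis of a resistor network: G v = i, with 1 A injected
    at node `inject` and node `ground` held at 0 V; g maps edge -> conductance."""
    G = [[0.0] * n for _ in range(n)]
    for a, b in edges:
        G[a][a] += g[(a, b)]; G[b][b] += g[(a, b)]
        G[a][b] -= g[(a, b)]; G[b][a] -= g[(a, b)]
    i = [0.0] * n
    i[inject] = 1.0
    G[ground] = [1.0 if c == ground else 0.0 for c in range(n)]  # enforce v=0
    i[ground] = 0.0
    return solve(G, i)

# 2x2 resistor mesh (4 nodes, unit conductances); "damage" weakens edge 1-3.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
healthy = {e: 1.0 for e in edges}
damaged = {**healthy, (1, 3): 0.05}
v_ok = node_voltages(4, edges, healthy, inject=3)
v_dam = node_voltages(4, edges, damaged, inject=3)
drop = lambda v, e: abs(v[e[0]] - v[e[1]])
suspect = max(edges, key=lambda e: abs(drop(v_dam, e) - drop(v_ok, e)))
print("damaged edge located at", suspect)  # (1, 3)
```

The voltage drop across the weakened edge grows sharply as current reroutes around it, which is the signature the model-based localization exploits.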
Analysis and development of spatial hp-refinement methods for solving the neutron transport equation
Fournier, D.
2011-01-01
The various neutronic parameters must be calculated with higher accuracy in order to design fourth-generation reactor cores. As memory and computation time are limited, adaptive methods are one way to solve the neutron transport equation. The neutron flux, the solution of this equation, depends on energy, angle and space, and these variables are discretized successively: energy with a multigroup approach, which treats the relevant quantities as constant within each group, and angle by a collocation method known as the SN approximation. Once the energy and angle variables are discretized, a system of spatially dependent hyperbolic equations has to be solved. Discontinuous finite elements are used to make the development of hp-refinement methods possible: the accuracy of the solution can be improved by spatial refinement (h-refinement), which subdivides a cell into sub-cells, or by order refinement (p-refinement), which increases the order of the polynomial basis. In this thesis, the properties of these methods are analyzed, showing the importance of the regularity of the solution in choosing the type of refinement. Two error estimators are used to drive the refinement process: the first requires strong regularity hypotheses (an analytical solution), whereas the second assumes only the minimal hypotheses required for the solution to exist. The two estimators are compared on benchmarks whose analytic solutions are known through the method of manufactured solutions, so that the behaviour of the solution with respect to regularity can be studied. This leads to an hp-refinement method using the two estimators, which is then compared with other existing methods on simplified as well as realistic benchmarks drawn from nuclear cores. These adaptive methods considerably reduce the computational cost and memory footprint. To improve these two points further, an approach with energy-dependent meshes is proposed. Actually, as the
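A common way to make the h-versus-p decision from a regularity estimate, in the spirit described above (though not necessarily the thesis's own estimators), is to examine the decay rate of a cell's modal expansion coefficients: exponential decay signals a smooth local solution and favours p-refinement, while slow decay favours h-refinement. A minimal sketch:

```python
import math

def choose_refinement(coeffs, threshold=1.0):
    """Pick h- or p-refinement from the decay of a cell's modal (e.g.
    Legendre) coefficients. Fitting |c_k| ~ C*exp(-s*k) by least squares
    on log|c_k|, a large decay rate s means the local solution is smooth,
    so raising the polynomial order pays off; slow decay (low regularity)
    favours subdividing the cell instead."""
    pts = [(k, math.log(abs(c))) for k, c in enumerate(coeffs) if abs(c) > 1e-14]
    n = len(pts)
    mean_k = sum(k for k, _ in pts) / n
    mean_y = sum(y for _, y in pts) / n
    slope = (sum((k - mean_k) * (y - mean_y) for k, y in pts)
             / sum((k - mean_k) ** 2 for k, _ in pts))
    return "p-refine" if -slope > threshold else "h-refine"

print(choose_refinement([1.0, 0.1, 0.01, 0.001]))  # rapid decay -> p-refine
print(choose_refinement([1.0, 0.8, 0.7, 0.65]))    # slow decay  -> h-refine
```

The `threshold` separating "smooth enough for p" from "h is safer" is a tuning parameter, assumed here rather than taken from the thesis.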
Guariguata U., G.
1995-01-01
Since the enactment of the US Clean Air Act Amendments of 1990, the worldwide refining industry has become increasingly attuned to the future well-being of the environment. Refiners must now develop strategies which address careful selection of crude slates, significant increases and changes in product movements, and upgrading of facilities to meet growing demand -- in short, strategies which require substantial increases in capital investment. The objective of this paper is to determine the regional capital investments refiners must make in order to comply with environmental legislation. The methodology was founded on a comprehensive analysis of worldwide petroleum supply/demand and distribution patterns for the coming five years, and included evaluation of a set of linear programming (LP) models based on forecasts of regional product demands and projections of regional specifications. The models considered two scenarios, in which either (1) refinery expansion occurs chiefly in the market-consuming regions, or (2) crude producers take control of incremental crude volumes and further expand their planned refining projects and the marketing of refined products. The results of these models, coupled with an understanding of geopolitical situations and economic analyses, provided estimates of capital expenditures for the coming decade. Specifically, the following issues are addressed and discussed in this paper: the refined product trade outlook, crude supply, crude quality, shipping, and capital investments.
Multi-cell vortices observed in fine-mesh solutions to the incompressible Euler equations
Rizzi, A.
1986-01-01
Results are presented for a three-dimensional flow containing a vortex sheet shed from a delta wing. The numerical solution indicates that the shearing caused by the trailing edge of the wing sets up a torsional wave on the vortex core and produces a structure with multiple cells of vorticity. Although observed in coarse-grid solutions too, this effect becomes better resolved with mesh refinement to 614,000 grid volumes. In comparison with a potential solution in which the vortex sheet is fitted as a discontinuity, the results are analyzed for the position of the vortex features captured in the Euler flow field, the accuracy of the pressure field, and the diffusion of the vortex sheets.
The application of TINA in the MESH project
van Sinderen, Marten J.; Ferreira Pires, Luis; Pires, L.F.; Plagemann, Thomas; Goebel, Vera
1998-01-01
This paper discusses the application of TINA concepts, architectures and related design paradigms in the MESH project. MESH adopted TINA as a means to facilitate the design and implementation of a flexible platform for developing and providing interactive multimedia services. This paper reports on
Capacity analysis of wireless mesh networks
Gumel
Nigerian ...
... number of nodes (n) in a linear topology. The degradation is found to be higher in a fully meshed network as a result of increased interference and MAC-layer contention in the network. Key words: wireless mesh network (WMN), ad hoc network, network capacity analysis, bottleneck collision domain, medium access control ...
Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation
GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.
1999-09-27
Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces, each matched with an appropriate meshing algorithm, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is part of the feature-based meshing methodology, presents work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination, to extract decomposition features; (2) Cutting Surface Generation, to form the ''tailored'' cutting surfaces; (3) Body Decomposition, to obtain the imprinted volumes; and (4) Meshing Algorithm Assignment, to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm, which is extended here to be more general. Results are demonstrated on several parts with complicated topology and geometry.
Micro-mesh fabric pollination bags for switchgrass
Pollination bags for making controlled crosses between switchgrass plants were made from a polyester micro-mesh fabric with a mesh size of 41 µm, smaller than the mean reported 43 µm diameter of switchgrass pollen. When used in paired-plant crosses between switchgrass plants, the mean amoun...
Lagrangian fluid dynamics using the Voronoi-Delaunay mesh
Dukowicz, J.K.
1981-01-01
A Lagrangian technique for numerical fluid dynamics is described. This technique makes use of the Voronoi mesh to efficiently locate new neighbors, and it uses the dual (Delaunay) triangulation to define computational cells. This removes all topological restrictions and facilitates the solution of problems containing interfaces and multiple materials. To improve computational accuracy, a mesh smoothing procedure is employed.
CAPACITY ANALYSIS OF WIRELESS MESH NETWORKS
The limited available bandwidth makes capacity analysis of the network very essential. ... Wireless mesh networks can also be employed for a wide variety of applications such ... wireless mesh networks using OPNET (Optimized Network Engineering Tool) Modeller 14.5. The ... bps using an 11 Mbps data rate and 12000 bits.
Sending policies in dynamic wireless mesh using network coding
Pandi, Sreekrishna; Fitzek, Frank; Pihl, Jeppe
2015-01-01
This paper demonstrates the quick prototyping capabilities of the Python-Kodo library for network coding based performance evaluation and investigates the problem of data redundancy in a network-coded wireless mesh with opportunistic overhearing. By means of several wireless mesh architectures ...
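Redundancy in a network-coded mesh with opportunistic overhearing can be quantified by how many received coded packets fail to increase the rank of the receiver's coefficient matrix. The sketch below uses random linear network coding over GF(2); it is a generic illustration and does not use the actual Kodo library or its API:

```python
import random

def gf2_rank(vectors):
    """Rank over GF(2) of coding vectors represented as int bitmasks,
    via incremental Gaussian elimination on leading bits."""
    basis = {}  # leading-bit position -> reduced basis vector
    rank = 0
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                rank += 1
                break
            v ^= basis[lead]  # cancel the leading bit and keep reducing
    return rank

def overhear(k, received, seed=1):
    """A receiver overhears `received` random GF(2) combinations of k
    source packets; packets that do not raise the rank are redundant."""
    rng = random.Random(seed)
    coded = [rng.randrange(1, 1 << k) for _ in range(received)]
    innovative = gf2_rank(coded)
    return innovative, received - innovative

innovative, redundant = overhear(k=4, received=8)
print(f"innovative: {innovative}, redundant: {redundant}")
```

Decoding succeeds once the innovative count reaches k; every further overheard packet is pure redundancy, which is the cost the paper's sending policies aim to control.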
Plated nickel wire mesh makes superior catalyst bed
Sill, M.
1965-01-01
Porous nickel mesh screen catalyst bed produces gas evolution in hydrogen peroxide thrust chambers used for attitude control of space vehicles. The nickel wire mesh disks in the catalyst bed are plated in rugose form with a silver-gold coating.