WorldWideScience

Sample records for linear-scaling methods mesh

  1. A Linear-Elasticity Solver for Higher-Order Space-Time Mesh Deformation

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2018-01-01

    A linear-elasticity approach is presented for the generation of meshes appropriate for a higher-order space-time discontinuous finite-element method. The equations of linear-elasticity are discretized using a higher-order, spatially-continuous, finite-element method. Given an initial finite-element mesh, and a specified boundary displacement, we solve for the mesh displacements to obtain a higher-order curvilinear mesh. Alternatively, for moving-domain problems we use the linear-elasticity approach to solve for a temporally discontinuous mesh velocity on each time-slab and recover a continuous mesh deformation by integrating the velocity. The applicability of this methodology is presented for several benchmark test cases.

  2. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  3. Solution of the neutron transport equation by the Method of Characteristics using a linear representation of the source within a mesh

    International Nuclear Information System (INIS)

    Mazumdar, Tanay; Degweker, S.B.

    2017-01-01

    Highlights: • In the Method of Characteristics, the neutron source within a mesh is expanded up to the linear term. • This expansion reduces the number of meshes as compared to the flat source assumption. • Poor representation of circular geometry with coarser meshes is corrected. • A few benchmark problems are solved to show the advantages of the linear expansion of the source. • The advantage of the present formalism is quite visible in problems with large flux gradients. - Abstract: A common assumption in the solution of the neutron transport equation by the Method of Characteristics (MOC) is that the source (or flux) is constant within a mesh. This assumption is adequate provided the meshes are small enough so that the spatial variation of flux within a mesh may be ignored. Whether a mesh is small enough or not depends upon the flux gradient across a mesh, which in turn depends on factors like the presence of strong absorbers, localized sources or vacuum boundaries. The flat flux assumption often requires a very large number of meshes for solving the neutron transport equation with acceptable accuracy, as was observed in our earlier work on the subject. A significant reduction in the required number of meshes is attainable by using a higher order representation of the flux within a mesh. In this paper, we expand the source within a mesh up to first order (linear) terms, which permits the use of larger sized (and therefore fewer) meshes and thereby reduces the computation time without compromising the accuracy of calculation. Since the division of the geometry into meshes is through an automatic triangulation procedure using the Bowyer-Watson algorithm, representation of circular objects (cylindrical fuel rods) with coarse meshes is poorer and causes geometry-related errors. A numerical recipe is presented to make a correction to the automatic triangulation process and thereby eliminate this source of error. A number of benchmark problems are analyzed to emphasize the
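
    The key step behind the abstract is the analytic integration of the transport equation along one characteristic segment when the in-mesh source is linear rather than flat. A minimal sketch of that single-segment update is given below; the notation q(s) = q0 + q1*s, the function name, and the sample numbers are illustrative and not taken from the paper.

```python
# Attenuation of the angular flux along one characteristic segment when the source in
# the mesh is expanded to linear order, q(s) = q0 + q1*s (notation assumed).
# Solves d(psi)/ds + sigma_t*psi = q(s) exactly over a segment of given length.
import math

def segment_outflow(psi_in, sigma_t, q0, q1, length):
    tau = sigma_t * length
    e = math.exp(-tau)
    flat_term = q0 * (1.0 - e) / sigma_t              # integral of q0*exp(-sigma_t*(L-s))
    lin_term = q1 * (length * (1.0 - e) / sigma_t     # integral of q1*s*exp(-sigma_t*(L-s))
                     - (1.0 - (1.0 + tau) * e) / sigma_t**2)
    return psi_in * e + flat_term + lin_term

# Flat-source limit (q1 = 0) reproduces the usual step-characteristics update.
print(segment_outflow(psi_in=1.0, sigma_t=0.5, q0=0.2, q1=0.05, length=2.0))
```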

  4. Deploy production sliding mesh capability with linear solver benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thomas, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Overfelt, James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sprague, Mike [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rood, Jon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-02-01

    overall simulation time when using the full Tpetra solver stack and nearly 35% when using a mixed Tpetra-Hypre-based solver stack. The report also highlights the project achievement of surpassing the 1 billion element mesh scale for a production V27 hybrid mesh. A detailed timing breakdown is presented that again suggests work to be done in the setup events associated with the linear system. In order to mitigate these initialization costs, several application paths have been explored, all of which are designed to reduce the frequency of matrix reinitialization. Methods such as removing Jacobian entries on the dynamic matrix columns (in concert with increased inner equation iterations), and lagging of Jacobian entries have reduced setup times at the cost of numerical stability. Artificially increasing, or bloating, the matrix stencil to ensure that full Jacobians are included is developed with results suggesting that this methodology is useful in decreasing reinitialization events without loss of matrix contributions. With the above foundational advances in computational capability, the project is well positioned to begin scientific inquiry on a variety of wind-farm physics such as turbine/turbine wake interactions.

  5. The quasidiffusion method for transport problems on unstructured meshes

    Science.gov (United States)

    Wieselquist, William A.

    2009-06-01

    In this work, we develop a quasidiffusion (QD) method for solving radiation transport problems on unstructured quadrilateral meshes in 2D Cartesian geometry, for example hanging-node meshes from adaptive mesh refinement (AMR) applications or skewed quadrilateral meshes from radiation hydrodynamics with Lagrangian meshing. The main result of the work is a new low-order quasidiffusion (LOQD) discretization on arbitrary quadrilaterals and a strategy for the efficient iterative solution which uses Krylov methods and incomplete LU factorization (ILU) preconditioning. The LOQD equations are a non-symmetric set of first-order PDEs that in second-order form resembles convection-diffusion with a diffusion tensor, with the difference that the LOQD equations contain extra cross-derivative terms. Our finite volume (FV) discretization of the LOQD equations is compared with three LOQD discretizations from literature. We then present a conservative, short characteristics discretization based on subcell balances (SCSB) that uses polynomial exponential moments to achieve robust behavior in various limits (e.g. small cells and voids) and is second-order accurate in space. A linear representation of the isotropic component of the scattering source based on face-average and cell-average scalar fluxes is also proposed and shown to be effective in some problems. In numerical tests, our QD method with linear scattering source representation shows some advantages compared to other transport methods. We conclude with avenues for future research and note that this QD method may easily be extended to arbitrary meshes in 3D Cartesian geometry.
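
    The iterative solution strategy described above (Krylov iteration with incomplete LU preconditioning for the nonsymmetric LOQD system) maps directly onto standard sparse-solver tooling. The sketch below uses a generic nonsymmetric tridiagonal stand-in for the LOQD matrix, so the matrix itself is an assumption; only the GMRES-plus-ILU solution pattern is the point.

```python
# ILU-preconditioned GMRES applied to a nonsymmetric sparse system, mirroring the
# solution strategy described above. The matrix is a convection-diffusion-like
# stand-in, not the actual LOQD discretization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
A = sp.diags([-1.2 * np.ones(n - 1), 2.0 * np.ones(n), -0.8 * np.ones(n - 1)],
             [-1, 0, 1], format="csc")               # asymmetry mimics the cross-derivative coupling
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner as a linear operator

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))               # info == 0 means convergence
```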

  6. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

    We present a new method for extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The correspondence between salient features and the scale of interest can be established straightforwardly: detailed features appear on small scales, while features carrying more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications such as feature classification and viewpoint selection. Experiments show that our method, as a multi-scale analysis tool, is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  7. Implementation of LDG method for 3D unstructured meshes

    Directory of Open Access Journals (Sweden)

    Filander A. Sequeira Chavarría

    2012-07-01

    This paper describes an implementation of the Local Discontinuous Galerkin (LDG) method applied to elliptic problems in 3D. The implementation of the major operators is discussed, in particular the use of higher-order approximations and unstructured meshes. Efficient data structures that allow fast assembly of the linear system in the mixed formulation are described in detail. Keywords: Discontinuous finite element methods, high-order approximations, unstructured meshes, object-oriented programming. Mathematics Subject Classification: 65K05, 65N30, 65N55.

  8. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.
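
    Two of the pipeline stages above (the initial point-set smoothing and the final back-projection onto the raw data) are simple to sketch. In the snippet below, k-nearest-neighbour averaging stands in for the paper's scale-space (mean-curvature) motion, and back-projection is approximated by snapping each mesh vertex to its nearest original point; the function names, the choice of k, and the iteration count are illustrative assumptions.

```python
# Simplified smoothing and back-projection steps bracketing the Ball Pivoting
# reconstruction (which is assumed to be provided by an external routine).
import numpy as np
from scipy.spatial import cKDTree

def smooth_points(points, k=16, iterations=4):
    pts = points.copy()
    for _ in range(iterations):
        tree = cKDTree(pts)
        _, idx = tree.query(pts, k=k)
        pts = pts[idx].mean(axis=1)          # move each point toward its neighbourhood
    return pts

def back_project(mesh_vertices, original_points):
    tree = cKDTree(original_points)
    _, idx = tree.query(mesh_vertices, k=1)
    return original_points[idx]              # snap each vertex back onto the raw data

# Typical use:
#   smoothed = smooth_points(raw_points)
#   mesh = ball_pivoting(smoothed)           # external reconstruction step
#   interpolating_vertices = back_project(mesh_vertices, raw_points)
```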

  9. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to the system size hinders their applications to large systems. In this tutorial review, we focus on the latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states fall into two categories: time-domain and frequency-domain methods. The former solves the dynamics of the electronic systems in real time while the latter involves direct evaluation of the electronic response in the frequency domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in both the time and frequency domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using the non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of the convergence problem. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and

  10. MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.

    Science.gov (United States)

    Mao, Yuqing; Lu, Zhiyong

    2017-04-17

    MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are not immediately indexed until 2 or 3 months later) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that MeSH Now is capable of processing PubMed scale documents within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/.
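
    The three-stage structure described above (candidate generation from multiple strategies, ranking, post-processing selection) can be sketched as a small pipeline. The scoring rule, the toy strategies, and all function names below are placeholders; MeSH Now itself uses a trained learning-to-rank model over richer features.

```python
# Skeleton of a candidate-generation / ranking / selection pipeline for automatic
# MeSH indexing, in the spirit of the approach described above. The ranking feature
# here (vote count plus mean score) is a placeholder, not the actual trained model.
from collections import defaultdict

def generate_candidates(article, strategies):
    """Each strategy maps an article to a list of (term, score) suggestions."""
    pooled = defaultdict(list)
    for strategy in strategies:
        for term, score in strategy(article):
            pooled[term].append(score)
    return pooled

def rank_candidates(pooled):
    scored = [(len(scores) + sum(scores) / len(scores), term)
              for term, scores in pooled.items()]
    return [term for _, term in sorted(scored, reverse=True)]

def mesh_index(article, strategies, top_k=15):
    return rank_candidates(generate_candidates(article, strategies))[:top_k]

# Toy example with two candidate-generation strategies:
s1 = lambda a: [("Humans", 0.9), ("Neoplasms", 0.7)]
s2 = lambda a: [("Humans", 0.8), ("Genomics", 0.6)]
print(mesh_index("some abstract text", [s1, s2], top_k=2))   # ['Humans', ...]
```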

  11. Adaptive and dynamic meshing methods for numerical simulations

    Science.gov (United States)

    Acikgoz, Nazmiye

    -hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations
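
    The classical edge-spring analogy that this work uses as its baseline is easy to sketch: every mesh edge acts as a linear spring with stiffness proportional to the inverse of its length, boundary displacements are imposed, and the interior displacements come from a preconditioned conjugate-gradient solve. The sketch below implements only that baseline (with a simple Jacobi preconditioner); the paper's virtual springs, which oppose element collapse by tying each vertex to its projection on the ball entities, are not reproduced, and the function name and mesh format are assumptions.

```python
# Classical edge-spring analogy for mesh deformation with prescribed boundary motion.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spring_deform(nodes, edges, boundary_ids, boundary_disp):
    """nodes: (n, dim) float coords; edges: list of (i, j); boundary_disp: (nb, dim)."""
    n = len(nodes)
    boundary_ids = np.asarray(boundary_ids)
    rows, cols, vals = [], [], []
    for i, j in edges:
        k = 1.0 / np.linalg.norm(nodes[i] - nodes[j])     # edge-spring stiffness
        rows += [i, j, i, j]; cols += [i, j, j, i]; vals += [k, k, -k, -k]
    K = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    interior = np.setdiff1d(np.arange(n), boundary_ids)
    disp = np.zeros(nodes.shape)
    disp[boundary_ids] = boundary_disp
    K_ii = K[interior][:, interior]
    diag = K_ii.diagonal()
    M = spla.LinearOperator(K_ii.shape, matvec=lambda v: v / diag)   # Jacobi preconditioner
    for c in range(nodes.shape[1]):
        rhs = -K[interior][:, boundary_ids] @ disp[boundary_ids, c]
        disp[interior, c], _ = spla.cg(K_ii, rhs, M=M)
    return nodes + disp
```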

  12. Computational performance of Free Mesh Method applied to continuum mechanics problems

    Science.gov (United States)

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a kind of meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions of fluid and solid mechanics obtained by employing FMM as well as the Enriched Free Mesh Method (EFMM), which is a new version of FMM, including compressible flow and the sounding mechanism of air-reed instruments as applications to fluid mechanics, and automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigenfrequency analysis of an engine block as applications to solid mechanics. PMID:21558753

  13. An Algorithm for Parallel Sn Sweeps on Unstructured Meshes

    International Nuclear Information System (INIS)

    Pautz, Shawn D.

    2002-01-01

    A new algorithm for performing parallel Sn sweeps on unstructured meshes is developed. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with 'normal' mesh partitionings, nearly linear speedups on up to 126 processors are observed. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, no severe asymptotic degradation in the parallel efficiency is observed with modest (≤100) levels of parallelism. This result is a fundamental step in the development of efficient parallel Sn methods.
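
    For a single discrete-ordinates direction, a valid sweep order is any ordering in which every cell is processed after all of its upwind neighbours, i.e. a topological sort of the face-based dependency graph. The sketch below uses a plain Kahn's-algorithm ordering on a hypothetical face list; it is a simplified serial stand-in for the paper's low-complexity list-ordering heuristic and ignores partitioning and parallel scheduling entirely.

```python
# Sweep ordering for one ordinate direction: topological sort of upwind dependencies.
import numpy as np
from collections import deque

def sweep_order(faces, direction):
    """faces: iterable of (cell_a, cell_b, normal) with the normal pointing from
    cell_a toward cell_b (a hypothetical mesh format)."""
    deps, indeg = {}, {}
    for a, b, normal in faces:
        indeg.setdefault(a, 0); indeg.setdefault(b, 0)
        s = np.dot(normal, direction)
        if s > 0.0:                      # particles flow a -> b, so a must be swept first
            deps.setdefault(a, []).append(b); indeg[b] += 1
        elif s < 0.0:
            deps.setdefault(b, []).append(a); indeg[a] += 1
    ready = deque(c for c, d in indeg.items() if d == 0)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for nb in deps.get(c, []):
            indeg[nb] -= 1
            if indeg[nb] == 0:
                ready.append(nb)
    return order                         # cells in a valid sweep order for this direction
```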

  14. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    Science.gov (United States)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  15. Split-Cell Exponential Characteristic Transport Method for Unstructured Tetrahedral Meshes

    International Nuclear Information System (INIS)

    Brennan, Charles R.; Miller, Rodney L.; Mathews, Kirk A.

    2001-01-01

    The nonlinear, exponential characteristic (EC) method is extended to unstructured meshes of tetrahedral cells in three-dimensional Cartesian coordinates. The split-cell approach developed for the linear characteristic (LC) method on such meshes is used. Exponential distributions of the source within a cell and of the inflow flux on upstream faces of the cell are assumed. The coefficients of these distributions are determined by nonlinear root solving so as to match the zeroth and first moments of the source or entering flux. Good conditioning is achieved by casting the formulas for the moments of the source, inflow flux, and solution flux as sums of positive functions and by using accurate and robust algorithms for evaluation of those functions. Various test problems are used to compare the performance of the EC and LC methods. The EC method is somewhat less accurate than the LC method in regions of net out-leakage but is strictly positive and retains good accuracy with optically thick cells, as in shielding problems, unlike the LC method. The computational cost per cell is greater for the EC method, but the use of substantially coarser meshes can make the EC method less expensive in total cost. The EC method, unlike the LC method, may fail if negative cross sections or angular quadrature weights are used. It is concluded that the EC and LC methods should be practical, reliable, and complementary schemes for these meshes.

  16. Algebraic mesh generation for large scale viscous-compressible aerodynamic simulation

    International Nuclear Information System (INIS)

    Smith, R.E.

    1984-01-01

    Viscous-compressible aerodynamic simulation is the numerical solution of the compressible Navier-Stokes equations and associated boundary conditions. Boundary-fitted coordinate systems are well suited for the application of finite difference techniques to the Navier-Stokes equations. An algebraic approach to boundary-fitted coordinate systems is one where an explicit functional relation describes a mesh on which a solution is obtained. This approach has the advantage of rapid, precise mesh control. The basic mathematical structure of three algebraic mesh generation techniques is described. They are transfinite interpolation, the multi-surface method, and the two-boundary technique. The Navier-Stokes equations are transformed to a computational coordinate system where boundary-fitted coordinates can be applied. Large-scale computation implies that there is a large number of mesh points in the coordinate system. Computation of viscous compressible flow using boundary-fitted coordinate systems and the application of this computational philosophy on a vector computer are presented.
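
    Of the three algebraic techniques named, transfinite interpolation is the simplest to sketch: an interior structured mesh is written explicitly as a blend of the four parameterized boundary curves (a Coons patch), with no PDE solve. The function name and array layout below are assumptions.

```python
# Transfinite (Coons) interpolation of a structured 2D mesh from its four boundary curves.
import numpy as np

def transfinite_mesh(bottom, top, left, right):
    """bottom/top: (n, 2) arrays, left/right: (m, 2) arrays of boundary points;
    corners must agree: bottom[0]==left[0], bottom[-1]==right[0],
    top[0]==left[-1], top[-1]==right[-1]."""
    n, m = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, n)[None, :, None]     # varies along bottom/top
    eta = np.linspace(0.0, 1.0, m)[:, None, None]    # varies along left/right
    mesh = ((1 - eta) * bottom[None, :, :] + eta * top[None, :, :]
            + (1 - xi) * left[:, None, :] + xi * right[:, None, :]
            - (1 - xi) * (1 - eta) * bottom[0] - xi * (1 - eta) * bottom[-1]
            - (1 - xi) * eta * top[0] - xi * eta * top[-1])
    return mesh                                      # (m, n, 2) boundary-fitted grid
```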

  17. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    Science.gov (United States)

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
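
    The driving idea of the moving mesh PDE approach is equidistribution: nodes are relocated so that each cell carries an equal share of a monitor function built from the solution (e.g. its gradient). The sketch below performs a single static 1D equidistribution by inverting the cumulative integral of the monitor function; the paper instead evolves the mesh continuously through a moving mesh PDE, and the monitor choice here is only one common option.

```python
# Static 1D equidistribution of mesh points with respect to a monitor function.
import numpy as np

def equidistribute(x, monitor, n_new=None):
    """x: current 1D mesh (increasing), monitor: positive monitor values at x."""
    n_new = n_new or len(x)
    cell_mass = 0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x)   # trapezoidal rule
    cum = np.concatenate([[0.0], np.cumsum(cell_mass)])
    targets = np.linspace(0.0, cum[-1], n_new)                    # equal mass per new cell
    return np.interp(targets, cum, x)                             # invert the cumulative map

# Example: cluster points around a sharp front at x = 0.5
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))
monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)                   # arc-length monitor
x_new = equidistribute(x, monitor)
```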

  18. Mimetic finite difference method for the stokes problem on polygonal meshes

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, K. [Los Alamos National Laboratory]; Beirao da Veiga, L. [Dipartimento di Matematica]; Gyrya, V. [Pennsylvania State University]; Manzini, G. [Istituto di Matematica]

    2009-01-01

    Various approaches to extend the finite element methods to non-traditional elements (pyramids, polyhedra, etc.) have been developed over the last decade. Building basis functions for such elements is a challenging task and may require extensive geometry analysis. The mimetic finite difference (MFD) method has many similarities with low-order finite element methods. Both methods try to preserve fundamental properties of physical and mathematical models. The essential difference is that the MFD method uses only the surface representation of discrete unknowns to build stiffness and mass matrices. Since no extension inside the mesh element is required, practical implementation of the MFD method is simple for polygonal meshes that may include degenerate and non-convex elements. In this article, we develop an MFD method for the Stokes problem on arbitrary polygonal meshes. The method is constructed for tensor coefficients, which will allow it to be applied to the linear elasticity problem. The numerical experiments show second-order convergence for the velocity variable and first-order convergence for the pressure.

  19. A novel method of the image processing on irregular triangular meshes

    Science.gov (United States)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and a least-mean-square linear approximation is proposed for the basic interpolation within each triangle. Triangular numbers are used to simplify the use of local (barycentric) coordinates in the subsequent analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows the use of a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
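
    The core of the local-coordinate pixel indexing described above can be sketched with a barycentric inside test over a triangle's bounding box. The further step in the paper (representing each triangular element through four equilateral sub-triangles via triangular numbers, and the DCT/wavelet transforms built on top) is not reproduced, and the function name and generator interface are assumptions.

```python
# Visit the pixels inside one triangular mesh element using barycentric coordinates.
import math

def pixels_in_triangle(v0, v1, v2):
    """Yield (x, y, l0, l1, l2): integer pixel positions inside the triangle together
    with their barycentric coordinates."""
    xs = (v0[0], v1[0], v2[0]); ys = (v0[1], v1[1], v2[1])
    det = (v1[1] - v2[1]) * (v0[0] - v2[0]) + (v2[0] - v1[0]) * (v0[1] - v2[1])
    for y in range(math.floor(min(ys)), math.ceil(max(ys)) + 1):
        for x in range(math.floor(min(xs)), math.ceil(max(xs)) + 1):
            l0 = ((v1[1] - v2[1]) * (x - v2[0]) + (v2[0] - v1[0]) * (y - v2[1])) / det
            l1 = ((v2[1] - v0[1]) * (x - v2[0]) + (v0[0] - v2[0]) * (y - v2[1])) / det
            l2 = 1.0 - l0 - l1
            if min(l0, l1, l2) >= 0.0:            # pixel lies inside (or on) the triangle
                yield x, y, l0, l1, l2

# A linear approximation of the image over the triangle is then f = l0*f0 + l1*f1 + l2*f2,
# matching the least-mean-square linear interpolation mentioned in the abstract.
```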

  20. A coarse-mesh nodal method-diffusive-mesh finite difference method

    International Nuclear Information System (INIS)

    Joo, H.; Nichols, W.R.

    1994-01-01

    Modern nodal methods have been successfully used for conventional light water reactor core analyses where the homogenized, node average cross sections (XSs) and the flux discontinuity factors (DFs) based on equivalence theory can reliably predict core behavior. For other types of cores and other geometries characterized by tightly-coupled, heterogeneous core configurations, the intranodal flux shapes obtained from a homogenized nodal problem may not accurately portray steep flux gradients near fuel assembly interfaces or various reactivity control elements. This may require extreme values of DFs (either very large, very small, or even negative) to achieve a desired solution accuracy. Extreme values of DFs, however, can disrupt the convergence of the iterative methods used to solve for the node average fluxes, and can lead to a difficulty in interpolating adjacent DF values. Several attempts to remedy the problem have been made, but nothing has been satisfactory. A new coarse-mesh nodal scheme called the Diffusive-Mesh Finite Difference (DMFD) technique, as contrasted with the coarse-mesh finite difference (CMFD) technique, has been developed to resolve this problem. This new technique and the development of a few-group, multidimensional kinetics computer program are described in this paper

  1. Extension of the linear nodal method to large concrete building calculations

    International Nuclear Information System (INIS)

    Childs, R.L.; Rhoades, W.A.

    1985-01-01

    The implementation of the linear nodal method in the TORT code is described, and the results of a mesh refinement study to test the effectiveness of the linear nodal and weighted diamond difference methods available in TORT are presented

  2. Milestone Deliverable: FY18-Q1: Deploy production sliding mesh capability with linear solver benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This milestone was focused on deploying and verifying a “sliding-mesh interface,” and establishing baseline timings for blade-resolved simulations of a sub-MW-scale turbine. In the ExaWind project, we are developing both sliding-mesh and overset-mesh approaches for handling the rotating blades in an operating wind turbine. In the sliding-mesh approach, the turbine rotor and its immediate surrounding fluid are captured in a “disk” that is embedded in the larger fluid domain. The embedded fluid is simulated in a coordinate system that rotates with the rotor. It is important that the coupling algorithm (and its implementation) between the rotating and inertial discrete models maintains the accuracy of the numerical methods on either side of the interface, i.e., the interface is “design order.”

  3. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
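
    One of the linear-scaling ingredients listed above, purification of the density matrix, is compact enough to sketch. The snippet uses the McWeeny polynomial iteration on dense matrices for clarity; canonical purification additionally constrains the electron count, and linear scaling in practice comes from enforcing sparsity of the matrices at each step. The toy projector below is an illustrative assumption.

```python
# McWeeny-type purification: drive a near-idempotent density matrix P toward P^2 = P.
import numpy as np

def purify(P, iterations=30):
    for _ in range(iterations):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * P2 @ P          # P <- 3P^2 - 2P^3
    return P

# Toy check: perturb a rank-3 projector and recover idempotency.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
P0 = Q[:, :3] @ Q[:, :3].T + 1e-2 * rng.standard_normal((6, 6))
P0 = 0.5 * (P0 + P0.T)                       # keep it symmetric
P = purify(P0)
print(np.linalg.norm(P @ P - P))             # ~0 once converged
```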

  4. Linear Discontinuous Expansion Method using the Subcell Balances for Unstructured Geometry SN Transport

    International Nuclear Information System (INIS)

    Hong, Ser Gi; Kim, Jong Woon; Lee, Young Ouk; Kim, Kyo Youn

    2010-01-01

    The subcell balance methods have been developed for one- and two-dimensional SN transport calculations. In this paper, a linear discontinuous expansion method using sub-cell balances (LDEM-SCB) is developed for neutral particle SN transport calculations in 3D unstructured geometrical problems. At present, this method is applied to the tetrahedral meshes. As the name implies, this method assumes the linear distribution of the particle flux in each tetrahedral mesh and uses the balance equations for four sub-cells of each tetrahedral mesh to obtain the equations for the four sub-cell average fluxes which are unknowns. This method was implemented in the computer code MUST (Multi-group Unstructured geometry SN Transport). The numerical tests show that this method gives a more robust solution than DFEM (Discontinuous Finite Element Method).

  5. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Science.gov (United States)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈100× more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.

  6. Method and system for mesh network embedded devices

    Science.gov (United States)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  7. New finite volume methods for approximating partial differential equations on arbitrary meshes

    International Nuclear Information System (INIS)

    Hermeline, F.

    2008-12-01

    This dissertation presents some new methods of finite volume type for approximating partial differential equations on arbitrary meshes. The main idea lies in solving the problem to be dealt with twice. The methods address elliptic equations with variable (anisotropic, antisymmetric, discontinuous) coefficients, parabolic linear or non-linear equations (heat equation, radiative diffusion, magnetic diffusion with Hall effect), wave-type equations (Maxwell, acoustics), and the elasticity and Stokes equations. Numerous numerical experiments show the good behaviour of this type of method. (author)

  8. Generalized Coarse-Mesh Rebalance Method for Acceleration of Neutron Transport Calculations

    International Nuclear Information System (INIS)

    Yamamoto, Akio

    2005-01-01

    This paper proposes a new acceleration method for neutron transport calculations: the generalized coarse-mesh rebalance (GCMR) method. The GCMR method is a unified scheme of the traditional coarse-mesh rebalance (CMR) and the coarse-mesh finite difference (CMFD) acceleration methods. Namely, by using an appropriate acceleration factor, formulation of the GCMR method becomes identical to that of the CMR or CMFD method. This also indicates that the convergence property of the GCMR method can be controlled by the acceleration factor since the convergence properties of the CMR and CMFD methods are generally different. In order to evaluate the convergence property of the GCMR method, a linearized Fourier analysis was carried out for a one-group homogeneous medium, and the results clarified the relationship between the acceleration factor and the spectral radius. It was also shown that the spectral radius of the GCMR method is smaller than those of the CMR and CMFD methods. Furthermore, the Fourier analysis showed that when an appropriate acceleration factor was used, the spectral radius of the GCMR method did not exceed unity in this study, which was in contrast to the results of the CMR or the CMFD method. Application of the GCMR method to practical calculations will be easy when the CMFD acceleration is already adopted in a transport code. By multiplying a coefficient (DFD) of the finite difference formulation by a suitable acceleration factor, one can mitigate the numerical instability of the CMFD acceleration method.

  9. Cartesian Mesh Linearized Euler Equations Solver for Aeroacoustic Problems around Full Aircraft

    Directory of Open Access Journals (Sweden)

    Yuma Fukushima

    2015-01-01

    The linearized Euler equations (LEEs) solver for aeroacoustic problems has been developed on block-structured Cartesian mesh to address complex geometry. Taking advantage of the benefits of Cartesian mesh, we employ high-order schemes for spatial derivatives and for time integration. On the other hand, the difficulty of accommodating curved wall boundaries is addressed by the immersed boundary method. The resulting LEEs solver is robust to complex geometry and numerically efficient in a parallel environment. The accuracy and effectiveness of the present solver are validated by one-dimensional and three-dimensional test cases. Acoustic scattering around a sphere and noise propagation from the JT15D nacelle are computed. The results show good agreement with analytical, computational, and experimental results. Finally, noise propagation around fuselage-wing-nacelle configurations is computed as a practical example. The results show that the sound pressure level below the over-the-wing nacelle (OWN) configuration is much lower than that of the conventional DLR-F6 aircraft configuration due to the shielding effect of the OWN configuration.

  10. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
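
    The detection step mentioned above ("the fast Fourier transform is applied for detection") is the standard correlation-filter response: correlate the learned filter with the new image patch in the Fourier domain and take the peak. The sketch below shows that single-channel response only; the learned filter H, the patch size, and the toy template are assumptions, and the full DLSSVM tracker with its discriminative scale filter is not reproduced.

```python
# Single-channel correlation-filter response computed via the FFT.
import numpy as np

def filter_response(patch, H):
    """patch: 2D image patch; H: filter in the Fourier domain with the same shape."""
    F = np.fft.fft2(patch)
    response = np.real(np.fft.ifft2(np.conj(H) * F))   # circular cross-correlation
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return response, (dy, dx)                          # peak location = estimated shift

# Toy usage: the peak recovers a known shift of the template (modulo the patch size).
template = np.zeros((64, 64)); template[30:34, 28:32] = 1.0
H = np.fft.fft2(template)
shifted = np.roll(template, shift=(5, -3), axis=(0, 1))
_, peak = filter_response(shifted, H)
print(peak)                                            # (5, 61), i.e. (5, -3) mod 64
```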

  11. The linear characteristic method for spatially discretizing the discrete ordinates equations in (x,y)-geometry

    International Nuclear Information System (INIS)

    Larsen, E.W.; Alcouffe, R.E.

    1981-01-01

    In this article a new linear characteristic (LC) spatial differencing scheme for the discrete ordinates equations in (x,y)-geometry is described and numerical comparisons are given with the diamond difference (DD) method. The LC method is more stable with mesh size and is generally much more accurate than the DD method on both fine and coarse meshes, for eigenvalue and deep penetration problems. The LC method is based on computations involving the exact solution of a cell problem which has spatially linear boundary conditions and interior source. The LC method is coupled to the diffusion synthetic acceleration (DSA) algorithm in that the linear variations of the source are determined in part by the results of the DSA calculation from the previous inner iteration. An inexpensive negative-flux fixup is used which has very little effect on the accuracy of the solution. The storage requirements for LC are essentially the same as that for DD, while the computational times for LC are generally less than twice the DD computational times for the same mesh. This increase in computational cost is offset if one computes LC solutions on somewhat coarser meshes than DD; the resulting LC solutions are still generally much more accurate than the DD solutions.

  12. Linear circuit theory matrices in computer applications

    CERN Document Server

    Vlach, Jiri

    2014-01-01

    Basic Concepts; Nodal and Mesh Analysis; Matrix Methods; Dependent Sources; Network Transformations; Capacitors and Inductors; Networks with Capacitors and Inductors; Frequency Domain; Laplace Transformation; Time Domain; Network Functions; Active Networks; Two-Ports; Transformers; Modeling and Numerical Methods; Sensitivities; Modified Nodal Formulation; Fourier Series and Transformation; Appendix: Scaling of Linear Networks.

  13. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States)]; Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States)]; Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)]

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  14. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    Science.gov (United States)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  15. Coarse-mesh method for multidimensional, mixed-lattice diffusion calculations

    International Nuclear Information System (INIS)

    Dodds, H.L. Jr.; Honeck, H.C.; Hostetler, D.E.

    1977-01-01

    A coarse-mesh finite difference method has been developed for multidimensional, mixed-lattice reactor diffusion calculations, both statics and kinetics, in hexagonal geometry. Results obtained with the coarse-mesh (CM) method have been compared with a conventional mesh-centered finite difference method and with experiment. The results of this comparison indicate that the accuracy of the CM method for highly heterogeneous (mixed) lattices using one point per hexagonal mesh element (''hex'') is about the same as the conventional method with six points per hex. Furthermore, the computing costs (i.e., central processor unit time and core storage requirements) of the CM method with one point per hex are about the same as the conventional method with one point per hex

  16. Development of three-dimensional ENRICHED FREE MESH METHOD and its application to crack analysis

    International Nuclear Information System (INIS)

    Suzuki, Hayato; Matsubara, Hitoshi; Ezawa, Yoshitaka; Yagawa, Genki

    2010-01-01

    In this paper, we describe a method for highly accurate three-dimensional analysis of a crack included in a large-scale structure. The Enriched Free Mesh Method (EFMM) is a method for improving the accuracy of the Free Mesh Method (FMM), which is a kind of meshless method. First, we developed an algorithm for the three-dimensional EFMM. An elastic problem was analyzed using the EFMM, and we found that its accuracy compares favorably with that of the FMM and that the number of CG iterations is smaller. Next, we developed a method for calculating the stress intensity factor by employing the EFMM. A structure with a crack was analyzed using the EFMM, and the stress intensity factor was calculated by the developed method. The analysis results were in very good agreement with the reference solution. It was shown that the proposed method is very effective in the analysis of a crack included in a large-scale structure. (author)

  17. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  18. Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani

    2014-01-01

    This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals ...

  19. Analytic Coarse-Mesh Finite-Difference Method Generalized for Heterogeneous Multidimensional Two-Group Diffusion Calculations

    International Nuclear Information System (INIS)

    Garcia-Herranz, Nuria; Cabellos, Oscar; Aragones, Jose M.; Ahnert, Carol

    2003-01-01

    In order to take into account in a more effective and accurate way the intranodal heterogeneities in coarse-mesh finite-difference (CMFD) methods, a new equivalent parameter generation methodology has been developed and tested. This methodology accounts for the dependence of the nodal homogenized two-group cross sections and nodal coupling factors, with interface flux discontinuity (IFD) factors that account for heterogeneities on the flux-spectrum and burnup intranodal distributions as well as on neighbor effects. The methodology has been implemented in an analytic CMFD method, rigorously obtained for homogeneous nodes with transverse leakage and generalized now for heterogeneous nodes by including IFD heterogeneity factors. When intranodal mesh node heterogeneity vanishes, the heterogeneous solution tends to the analytic homogeneous nodal solution. On the other hand, when intranodal heterogeneity increases, a high accuracy is maintained since the linear and nonlinear feedbacks on equivalent parameters have been shown to be a very effective way of accounting for heterogeneity effects in two-group multidimensional coarse-mesh diffusion calculations.

  20. Adaptive-mesh zoning by the equipotential method

    Energy Technology Data Exchange (ETDEWEB)

    Winslow, A.M.

    1981-04-01

    An adaptive mesh method is proposed for the numerical solution of differential equations which causes the mesh lines to move closer together in regions where higher resolution in some physical quantity T is desired. A coefficient D > 0 is introduced into the equipotential zoning equations, where D depends on the gradient of T. The equations are inverted, leading to nonlinear elliptic equations for the mesh coordinates with source terms which depend on the gradient of D. A functional form of D is proposed.
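
    In the unweighted limit (D = 1), equipotential zoning on a logically rectangular mesh reduces to a Laplacian-type relaxation in which each interior node moves toward the average of its four neighbours; the adaptive coefficient D(grad T) then biases that averaging so mesh lines crowd together where resolution is wanted. Only the unweighted limit is sketched below, and the fixed-boundary treatment is an assumption.

```python
# D = 1 limit of equipotential zoning: Jacobi-style relaxation of node coordinates.
import numpy as np

def equipotential_smooth(x, y, iterations=200):
    """x, y: 2D arrays of node coordinates on a logically rectangular mesh.
    Boundary nodes stay fixed; interior nodes relax toward neighbour averages."""
    for _ in range(iterations):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```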

  1. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Directory of Open Access Journals (Sweden)

    Ningbo Li

    2017-01-01

    In order to improve the optimization accuracy and convergence rate for trajectory optimization of the air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall number of collocation points to improve the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover of midcourse and terminal guidance, and then the optimization model was built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  2. Adaptive hybrid mesh refinement for multiphysics applications

    International Nuclear Information System (INIS)

    Khamayseh, Ahmed; Almeida, Valmor de

    2007-01-01

    The accuracy and convergence of computational solutions of mesh-based methods is strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are comprised of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to equidistribute weighted geometric and/or solution error function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate modeling. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably, time-dependent multi-physics, multi-scale modeling and simulation

  3. Linear scaling of density functional algorithms

    International Nuclear Information System (INIS)

    Stechel, E.B.; Feibelman, P.J.; Williams, A.R.

    1993-01-01

    An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. They will discuss the conceptual issues involved, the convergence properties, and the scaling of their new algorithm.

  4. Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes

    International Nuclear Information System (INIS)

    Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.

    2005-01-01

    An inverse method for blade design based on the Navier-Stokes equations on adaptive unstructured meshes has been developed. Unlike methods based on the inviscid equations, the effect of viscosity is taken into account directly. The pressure (or pressure loading) is prescribed, and the design method computes the blade shape that would produce the prescribed target pressure distribution. The method is implemented using a cell-centered finite volume scheme, which solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh technique based on grid subdivision and local mesh adaptation is utilized to increase the accuracy. (author)

  5. A higher-order conservation element solution element method for solving hyperbolic differential equations on unstructured meshes

    Science.gov (United States)

    Bilyeu, David

    This dissertation presents an extension of the Conservation Element Solution Element (CESE) method from second- to higher-order accuracy. The new method retains the favorable characteristics of the original second-order CESE scheme, including (i) the use of the space-time integral equation for conservation laws, (ii) a compact mesh stencil, (iii) stability up to a CFL number of unity, (iv) a fully explicit, time-marching integration scheme, (v) true multidimensionality without directional splitting, and (vi) the ability to handle two- and three-dimensional geometries by using unstructured meshes. The algorithm has been thoroughly tested in one, two and three spatial dimensions and has been shown to obtain the desired order of accuracy for solving both linear and non-linear hyperbolic partial differential equations. The scheme has also shown its ability to accurately resolve discontinuities in the solutions. Higher-order unstructured methods such as the Discontinuous Galerkin (DG) and Spectral Volume (SV) methods have been developed for one-, two- and three-dimensional applications. Although these schemes have seen extensive development and use, certain drawbacks have been well documented. For example, the explicit versions of these two methods have a very stringent stability criterion, which requires that the time step be reduced as the order of the solver increases for a given simulation on a given mesh. The research presented in this dissertation builds upon the work of Chang, who developed a fourth-order CESE scheme to solve a scalar one-dimensional hyperbolic partial differential equation. The completed research has resulted in two key deliverables. The first is a detailed derivation of high-order CESE methods on unstructured meshes for solving the conservation laws in two- and three-dimensional spaces. The second is the implementation of these numerical methods in a computer code. For

  6. Multi-scale freeform surface texture filtering using a mesh relaxation scheme

    International Nuclear Information System (INIS)

    Jiang, Xiangqian; Abdul-Rahman, Hussein S; Scott, Paul J

    2013-01-01

    Surface filtering algorithms based on Fourier, Gaussian, wavelet and similar filters are well-established for simple Euclidean geometries. However, these filtration techniques cannot be applied to today's complex freeform surfaces, which have non-Euclidean geometries, without distortion of the results. This paper proposes a new multi-scale filtering algorithm for freeform surfaces that are represented by triangular meshes, based on a mesh relaxation scheme. The proposed algorithm is capable of decomposing a freeform surface into different scales and separating surface roughness, waviness and form from each other, as will be demonstrated throughout the paper. Results of applying the proposed algorithm to computer-generated as well as real surfaces are presented and compared with a lifting wavelet filtering algorithm. (paper)
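
    As a loose, hedged illustration of the relaxation idea underlying such filters (not the authors' algorithm), the sketch below low-pass filters a vertex-valued signal on a neighbour graph by repeatedly relaxing each value toward the mean of its neighbours; the relaxed signal plays the role of waviness/form and the residual the role of roughness. The function names, the toy chain "mesh" and all parameters are illustrative assumptions.

        import numpy as np

        def relax(values, neighbours, n_iter=50, lam=0.5):
            """Low-pass filter vertex values by repeated relaxation toward neighbour means."""
            v = values.astype(float).copy()
            for _ in range(n_iter):
                means = np.array([v[list(nb)].mean() for nb in neighbours])
                v += lam * (means - v)  # umbrella-operator style update
            return v

        if __name__ == "__main__":
            # Toy 1D chain of vertices carrying a smooth form plus roughness (noise).
            n = 50
            x = np.linspace(0.0, 1.0, n)
            rng = np.random.default_rng(0)
            signal = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(n)
            neighbours = [[max(i - 1, 0), min(i + 1, n - 1)] for i in range(n)]
            form = relax(signal, neighbours)
            roughness = signal - form  # scale decomposition: signal = form + roughness
            print("roughness std:", roughness.std())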

  7. Adaptive Mesh Iteration Method for Trajectory Optimization Based on Hermite-Pseudospectral Direct Transcription

    Directory of Open Access Journals (Sweden)

    Humin Lei

    2017-01-01

    Full Text Available An adaptive mesh iteration method based on Hermite pseudospectral direct transcription is described for trajectory optimization. The method uses the Legendre-Gauss-Lobatto points as interpolation points, and the state equations are approximated by Hermite interpolating polynomials. The method allows changes in both the number of mesh points and the number of mesh intervals and produces significantly smaller meshes for a given accuracy tolerance. The derived relative error estimate is then used to trade the number of mesh points against the number of mesh intervals. The adaptive mesh iteration method is applied successfully to trajectory optimization examples for a Maneuverable Reentry Research Vehicle, and the simulation results show that the method has many advantages.
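
    For readers unfamiliar with the collocation points named above, the short sketch below computes Legendre-Gauss-Lobatto (LGL) points on [-1, 1] as the roots of the derivative of the degree-N Legendre polynomial plus the two endpoints. It is a generic illustration, not code from the paper; the function name lgl_points is an assumption.

        import numpy as np
        from numpy.polynomial import legendre

        def lgl_points(n):
            """Return the n+1 Legendre-Gauss-Lobatto points on [-1, 1]."""
            # Degree-n Legendre polynomial in the Legendre basis: coefficients (0, ..., 0, 1).
            coeffs = np.zeros(n + 1)
            coeffs[-1] = 1.0
            interior = legendre.legroots(legendre.legder(coeffs))  # roots of P'_n
            return np.concatenate(([-1.0], np.sort(interior), [1.0]))

        if __name__ == "__main__":
            # For n = 4 the points are symmetric about 0: [-1, -0.6547, 0, 0.6547, 1].
            print(lgl_points(4))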

  8. Reactor calculation in coarse mesh by finite element method applied to matrix response method

    International Nuclear Information System (INIS)

    Nakata, H.

    1982-01-01

    The finite element method is applied to the solution of the modified formulation of the matrix-response method, with the aim of performing reactor calculations on a coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where heterogeneity is predominant and to depletion problems on coarse meshes in which the burnup varies within a single coarse mesh, making the cross sections vary spatially as the burnup evolves. (E.G.) [pt

  9. Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport

    International Nuclear Information System (INIS)

    Howell, L H

    2004-01-01

    Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common

  10. A novel finite volume discretization method for advection-diffusion systems on stretched meshes

    Science.gov (United States)

    Merrick, D. G.; Malan, A. G.; van Rooyen, J. A.

    2018-06-01

    This work is concerned with spatial advection and diffusion discretization technology within the field of Computational Fluid Dynamics (CFD). In this context, a novel method is proposed, which is dubbed the Enhanced Taylor Advection-Diffusion (ETAD) scheme. The model equation employed for the design of the scheme is the scalar advection-diffusion equation, the industrial application being incompressible laminar and turbulent flow. Developed to be implementable in finite volume codes, ETAD places specific emphasis on improving accuracy on stretched structured and unstructured meshes while considering both advection and diffusion aspects in a holistic manner. A vertex-centered structured and unstructured finite volume scheme is used, and only data available on either side of the volume face is employed. This includes the addition of a so-called mesh stretching metric. Additionally, non-linear blending with the existing NVSF scheme was performed in the interest of robustness and stability, particularly on equispaced meshes. The developed scheme is assessed in terms of accuracy, both analytically and numerically, via comparison to upwind methods including the popular QUICK and CUI techniques. Numerical tests involved the 1D scalar advection-diffusion equation, a 2D lid-driven cavity, and a turbulent flow case. Significant improvements in accuracy were achieved, with L2 error reductions of up to 75%.

  11. A moving mesh finite difference method for equilibrium radiation diffusion equations

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaobo, E-mail: xwindyb@126.com [Department of Mathematics, College of Science, China University of Mining and Technology, Xuzhou, Jiangsu 221116 (China); Huang, Weizhang, E-mail: whuang@ku.edu [Department of Mathematics, University of Kansas, Lawrence, KS 66045 (United States); Qiu, Jianxian, E-mail: jxqiu@xmu.edu.cn [School of Mathematical Sciences and Fujian Provincial Key Laboratory of Mathematical Modeling and High-Performance Scientific Computing, Xiamen University, Xiamen, Fujian 361005 (China)

    2015-10-01

    An efficient moving mesh finite difference method is developed for the numerical solution of equilibrium radiation diffusion equations in two dimensions. The method is based on the moving mesh partial differential equation approach and moves the mesh continuously in time using a system of meshing partial differential equations. The mesh adaptation is controlled through a Hessian-based monitor function and the so-called equidistribution and alignment principles. Several challenging issues in the numerical solution are addressed. In particular, the radiation diffusion coefficient depends highly nonlinearly on the energy density. This nonlinearity is treated using a predictor–corrector and lagged diffusion strategy. Moreover, the nonnegativity of the energy density is maintained using a cutoff method, which is known in the literature to retain the accuracy and convergence order of finite difference approximations for parabolic equations. Numerical examples with multi-material, multiple spot concentration situations are presented. Numerical results show that the method works well for radiation diffusion equations and can produce numerical solutions of good accuracy. It is also shown that a two-level mesh movement strategy can significantly improve the efficiency of the computation.
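
    The equidistribution principle mentioned above can be illustrated with a much simpler, static 1D sketch (the paper itself solves moving mesh PDEs in 2D): nodes are placed so that the integral of a monitor function is equal over every cell. All names and the monitor function below are illustrative assumptions.

        import numpy as np

        def equidistribute(monitor, a, b, n_cells, n_quad=2000):
            """Place n_cells+1 nodes in [a, b] that equidistribute the monitor function."""
            s = np.linspace(a, b, n_quad)
            m = monitor(s)
            # Cumulative integral of the monitor function (trapezoidal rule).
            cum = np.concatenate(([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(s))))
            targets = np.linspace(0.0, cum[-1], n_cells + 1)
            return np.interp(targets, cum, s)  # invert the cumulative integral

        if __name__ == "__main__":
            # Arc-length-like monitor concentrates nodes where u = tanh(50 x) varies rapidly.
            monitor = lambda x: np.sqrt(1.0 + (50.0 / np.cosh(50.0 * x) ** 2) ** 2)
            print(equidistribute(monitor, -1.0, 1.0, 10))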

  12. A moving mesh finite difference method for equilibrium radiation diffusion equations

    International Nuclear Information System (INIS)

    Yang, Xiaobo; Huang, Weizhang; Qiu, Jianxian

    2015-01-01

    An efficient moving mesh finite difference method is developed for the numerical solution of equilibrium radiation diffusion equations in two dimensions. The method is based on the moving mesh partial differential equation approach and moves the mesh continuously in time using a system of meshing partial differential equations. The mesh adaptation is controlled through a Hessian-based monitor function and the so-called equidistribution and alignment principles. Several challenging issues in the numerical solution are addressed. In particular, the radiation diffusion coefficient depends highly nonlinearly on the energy density. This nonlinearity is treated using a predictor–corrector and lagged diffusion strategy. Moreover, the nonnegativity of the energy density is maintained using a cutoff method, which is known in the literature to retain the accuracy and convergence order of finite difference approximations for parabolic equations. Numerical examples with multi-material, multiple spot concentration situations are presented. Numerical results show that the method works well for radiation diffusion equations and can produce numerical solutions of good accuracy. It is also shown that a two-level mesh movement strategy can significantly improve the efficiency of the computation.

  13. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  14. A novel three-dimensional mesh deformation method based on sphere relaxation

    International Nuclear Information System (INIS)

    Zhou, Xuan; Li, Shuixiang

    2015-01-01

    In our previous work (2013) [19], we developed a disk relaxation based mesh deformation method for two-dimensional mesh deformation. In this paper, the idea of the disk relaxation is extended to sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of elements with negative volume and to further improve the mesh quality. Three-dimensional numerical applications, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates deformed meshes of high quality, especially for complex boundaries and large deformations.

  15. A novel three-dimensional mesh deformation method based on sphere relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xuan [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China); Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China); Li, Shuixiang, E-mail: lsx@pku.edu.cn [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China)

    2015-10-01

    In our previous work (2013) [19], we developed a disk relaxation based mesh deformation method for two-dimensional mesh deformation. In this paper, the idea of the disk relaxation is extended to sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of elements with negative volume and to further improve the mesh quality. Three-dimensional numerical applications, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates deformed meshes of high quality, especially for complex boundaries and large deformations.

  16. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    Full Text Available We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton's method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for the case in which the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
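
    A hedged sketch of one ingredient described above: a Newton step in which the Hessian is modified by adding a multiple of the identity until a Cholesky factorization succeeds, guaranteeing a descent direction. The mesh-quality objective itself is problem-specific and not reproduced here; function names and parameters are illustrative.

        import numpy as np

        def modified_newton_step(grad, hess, tau0=1e-4, growth=10.0, max_tries=20):
            """Return a descent direction p solving (H + tau*I) p = -g with H + tau*I SPD."""
            n = grad.size
            tau = 0.0
            for _ in range(max_tries):
                try:
                    L = np.linalg.cholesky(hess + tau * np.eye(n))
                    # Solve (H + tau*I) p = -g via the Cholesky factors.
                    y = np.linalg.solve(L, -grad)
                    return np.linalg.solve(L.T, y)
                except np.linalg.LinAlgError:
                    tau = tau0 if tau == 0.0 else tau * growth
            raise RuntimeError("Hessian modification failed to produce an SPD matrix")

        if __name__ == "__main__":
            # Indefinite Hessian: the unmodified Newton step would not be a descent direction.
            H = np.array([[2.0, 0.0], [0.0, -1.0]])
            g = np.array([1.0, 1.0])
            p = modified_newton_step(g, H)
            print(p, "descent:", g @ p < 0)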

  17. Linear finite element method for one-dimensional diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Michele A.; Dominguez, Dany S.; Iglesias, Susana M., E-mail: micheleabrandao@gmail.com, E-mail: dany@labbi.uesc.br, E-mail: smiglesias@uesc.br [Universidade Estadual de Santa Cruz (LCC/DCET/UESC), Ilheus, BA (Brazil). Departamento de Ciencias Exatas e Tecnologicas. Laboratorio de Computacao Cientifica

    2011-07-01

    We describe in this paper the fundamentals of the Linear Finite Element Method (LFEM) applied to one-speed diffusion problems in slab geometry. We present the mathematical formulation for solving eigenvalue and fixed-source problems. First, we discretize the computational domain using a finite set of elements. At this point, we obtain the spatial balance equations for the zero-order and first-order spatial moments inside each element. Then, we introduce linear auxiliary equations to approximate the neutron flux and current inside each element and construct a numerical scheme to obtain the solution. We present numerical results for typical fixed-source model problems to illustrate the method's accuracy for coarse-mesh calculations in homogeneous and heterogeneous domains. We also compare the accuracy and computational performance of the LFEM formulation with the conventional Finite Difference Method (FDM). (author)
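
    The sketch below assembles and solves a one-speed, fixed-source diffusion problem in slab geometry with linear elements, in the spirit of the abstract but using a generic textbook assembly with zero-flux boundaries rather than the authors' specific LFEM formulation; all names and data are illustrative.

        import numpy as np

        def solve_slab_diffusion(D, sigma_a, S, length, n_elems):
            """Linear-element FEM solve of -D u'' + Sigma_a u = S with u = 0 at both ends."""
            n_nodes = n_elems + 1
            h = length / n_elems
            A = np.zeros((n_nodes, n_nodes))
            b = np.zeros(n_nodes)
            # Element matrices for linear (hat) basis functions on an element of size h.
            k_el = D / h * np.array([[1.0, -1.0], [-1.0, 1.0]])            # diffusion term
            m_el = sigma_a * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # absorption term
            f_el = S * h / 2.0 * np.array([1.0, 1.0])                      # uniform source
            for e in range(n_elems):
                idx = [e, e + 1]
                A[np.ix_(idx, idx)] += k_el + m_el
                b[idx] += f_el
            # Impose zero flux at both boundaries.
            for i in (0, n_nodes - 1):
                A[i, :] = 0.0
                A[i, i] = 1.0
                b[i] = 0.0
            return np.linalg.solve(A, b)

        if __name__ == "__main__":
            flux = solve_slab_diffusion(D=1.0, sigma_a=0.1, S=1.0, length=10.0, n_elems=20)
            print(flux.max())  # peak flux at the slab centre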

  18. Study on boundary search method for DFM mesh generation

    Directory of Open Access Journals (Sweden)

    Li Ri

    2012-08-01

    Full Text Available The boundary mesh of the casting model was determined by direct calculation on the triangular facets extracted from the STL file of the 3D model. The inner and outer grids of the model were then identified by an algorithm we named the Inner Seed Grid Method. Finally, a program to automatically generate a 3D FDM mesh was compiled. In the paper, a method named the Triangle Contraction Search Method (TCSM) was put forward to ensure that no boundary grids are lost, and an algorithm to search for inner seed grids to identify the inner/outer grids of the casting model was also presented. Our algorithm is simple, clear and easy to implement. Three examples of casting mesh generation testified to the validity of the program.

  19. Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding

    OpenAIRE

    Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2014-01-01

    This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals are achieved by design, using an on-the-fly network coding strategy. Our protocol also exploits relay nodes to increase the overall performance of individual links. Since our protocol naturally masks random p...

  20. Gradient Calculation Methods on Arbitrary Polyhedral Unstructured Meshes for Cell-Centered CFD Solvers

    Science.gov (United States)

    Sozer, Emre; Brehm, Christoph; Kiris, Cetin C.

    2014-01-01

    A survey of gradient reconstruction methods for cell-centered data on unstructured meshes is conducted within the scope of accuracy assessment. The formal order of accuracy, as well as error magnitudes for each of the studied methods, is evaluated on a complex mesh of various cell types through consecutive local scaling of an analytical test function. The tests highlighted several gradient operator choices that can consistently achieve first-order accuracy regardless of cell type and shape. The tests further offered error comparisons for given cell types, leading to the observation that the "ideal" gradient operator choice is not universal. Practical implications of the results are explored via CFD solutions of a 2D inviscid standing vortex, portraying the discretization error properties. A relatively naive, yet largely unexplored, approach of local curvilinear stencil transformation exhibited surprisingly favorable properties.
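
    One of the standard operator families covered by such surveys is least-squares gradient reconstruction from neighbouring cell-centroid values; the sketch below shows an unweighted version and is an illustration, not the paper's implementation. All names are assumptions.

        import numpy as np

        def lsq_gradient(xc, uc, xn, un):
            """Gradient at centroid xc with value uc from neighbour centroids xn, values un."""
            d = xn - xc                 # displacement vectors to neighbours, shape (k, dim)
            du = un - uc                # value differences, shape (k,)
            grad, *_ = np.linalg.lstsq(d, du, rcond=None)
            return grad

        if __name__ == "__main__":
            # Exact linear field u = 2x + 3y: the reconstruction should recover (2, 3).
            centre = np.array([0.0, 0.0])
            neighbours = np.array([[1.0, 0.2], [-0.8, 0.9], [0.1, -1.1], [0.7, 0.6]])
            u = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
            print(lsq_gradient(centre, u(centre), neighbours, u(neighbours)))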

  1. Adaptive upscaling with the dual mesh method

    Energy Technology Data Exchange (ETDEWEB)

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.

  2. Electromagnetic forward modelling for realistic Earth models using unstructured tetrahedral meshes and a meshfree approach

    Science.gov (United States)

    Farquharson, C.; Long, J.; Lu, X.; Lelievre, P. G.

    2017-12-01

    Real-life geology is complex, and so, even when allowing for the diffusive, low resolution nature of geophysical electromagnetic methods, we need Earth models that can accurately represent this complexity when modelling and inverting electromagnetic data. This is particularly the case for the scales, detail and conductivity contrasts involved in mineral and hydrocarbon exploration and development, but also for the larger scale of lithospheric studies. Unstructured tetrahedral meshes provide a flexible means of discretizing a general, arbitrary Earth model. This is important when wanting to integrate a geophysical Earth model with a geological Earth model parameterized in terms of surfaces. Finite-element and finite-volume methods can be derived for computing the electric and magnetic fields in a model parameterized using an unstructured tetrahedral mesh. A number of such variants have been proposed and have proven successful. However, the efficiency and accuracy of these methods can be affected by the "quality" of the tetrahedral discretization, that is, how many of the tetrahedral cells in the mesh are long, narrow and pointy. This is particularly the case if one wants to use an iterative technique to solve the resulting linear system of equations. One approach to deal with this issue is to develop sophisticated model and mesh building and manipulation capabilities in order to ensure that any mesh built from geological information is of sufficient quality for the electromagnetic modelling. Another approach is to investigate other methods of synthesizing the electromagnetic fields. One such example is a "meshfree" approach in which the electromagnetic fields are synthesized using a mesh that is distinct from the mesh used to parameterized the Earth model. There are then two meshes, one describing the Earth model and one used for the numerical mathematics of computing the fields. This means that there are no longer any quality requirements on the model mesh, which

  3. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    Science.gov (United States)

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
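
    The density-matrix purification with sparse algebra and numerical thresholding mentioned above can be illustrated with a toy grand-canonical McWeeny iteration on a small gapped tight-binding Hamiltonian. The production scheme in the paper (its purification variant, thresholds and parallel sparse algebra) is more elaborate; all names and parameters below are illustrative.

        import numpy as np
        import scipy.sparse as sp

        def purify_density_matrix(H, mu, n_iter=40, threshold=1e-6):
            """Return the zero-temperature density matrix of H at chemical potential mu."""
            n = H.shape[0]
            # Map the spectrum of H into [0, 1] so that occupied states sit above 1/2.
            lam = np.linalg.norm(H.toarray() - mu * np.eye(n), 2)
            P = sp.identity(n, format="csr") * 0.5 - (H - mu * sp.identity(n, format="csr")) / (2.0 * lam)
            for _ in range(n_iter):
                P2 = P @ P
                P = 3.0 * P2 - 2.0 * (P2 @ P)              # McWeeny step: 3 P^2 - 2 P^3
                P.data[np.abs(P.data) < threshold] = 0.0   # numerical threshold keeps P sparse
                P.eliminate_zeros()
            return P

        if __name__ == "__main__":
            # 1D tight-binding chain with alternating on-site energies +/- 1 and hopping -0.2 (gapped).
            n = 40
            diag = np.array([1.0 if i % 2 else -1.0 for i in range(n)])
            hop = -0.2 * np.ones(n - 1)
            H = sp.diags([diag, hop, hop], [0, -1, 1], format="csr")
            P = purify_density_matrix(H, mu=0.0)
            print("occupation:", P.diagonal().sum(),
                  "idempotency error:", abs((P @ P - P).toarray()).max())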

  4. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Furthermore, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.

  5. Symmetries and the coarse-mesh method

    International Nuclear Information System (INIS)

    Makai, M.

    1980-10-01

    This report approaches the basic problem of the coarse-mesh method from a new angle. Group theory is used to determine the space dependency of the flux. The result is a method called ANANAS, after the analytic-analytic solution. The method was tested on two benchmark problems: one given by Melice and the IAEA benchmark. The ANANAS program is an experimental one. The method was intended for use in hexagonal geometry. (Auth.)

  6. An Angular Method with Position Control for Block Mesh Squareness Improvement

    Energy Technology Data Exchange (ETDEWEB)

    Yao, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Stillman, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-19

    We optimize a target function defined by angular properties, with a position control term, for a basic stencil on a block-structured mesh in order to improve element squareness in 2D and 3D. Comparison with the condition number method shows that, in addition to achieving a similar mesh quality with respect to orthogonality, the new method converges faster and provides a more uniform global mesh spacing in our numerical tests.

  7. Mesh joinery: a method for building fabricable structures

    OpenAIRE

    Cignoni, Paolo; Pietroni, Nico; Malomo, Luigi; Scopigno, Roberto

    2015-01-01

    Mesh joinery is an innovative method to produce illustrative shape approximations suitable for fabrication. Mesh joinery is capable of producing complex fabricable structures in an efficient and visually pleasing manner. We represent an input geometry as a set of planar pieces arranged to compose a rigid structure by exploiting an efficient slit mechanism. Since slices are planar, a standard 2D cutting system is sufficient to fabricate them.

  8. Kinetic solvers with adaptive mesh in phase space

    Science.gov (United States)

    Arslanbekov, Robert R.; Kolobov, Vladimir I.; Frolova, Anna A.

    2013-12-01

    An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a “tree of trees” (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with a dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.

  9. Coarse mesh code development

    Energy Technology Data Exchange (ETDEWEB)

    Lieberoth, J.

    1975-06-15

    The numerical solution of the neutron diffusion equation plays a very important role in the analysis of nuclear reactors. A wide variety of numerical procedures has been proposed; most of the frequently used methods are fundamentally based on the finite-difference approximation, in which the partial derivatives are replaced by finite differences. For the complex geometries typical of practical reactor problems, the computational accuracy of the finite-difference method is seriously affected by the size of the mesh width relative to the neutron diffusion length and by the heterogeneity of the medium. Thus, a very large number of mesh points is generally required to obtain a reasonably accurate approximate solution of the multi-dimensional diffusion equation. Since the computation time is approximately proportional to the number of mesh points, a detailed multidimensional analysis based on the conventional finite-difference method is still expensive even with modern large-scale computers. Accordingly, there is a strong incentive to develop alternatives that can reduce the number of mesh points and still retain accuracy. One promising alternative is the finite element method, which expands the neutron flux in piecewise polynomials. One of the advantages of this procedure is its flexibility in selecting the locations of the mesh points and the degree of the expansion polynomial. The small number of mesh points of the coarse grid makes it possible to store the results of several of the latest outer iterations and to compute well-extrapolated values from them with convenient formalisms. This holds especially if only one energy distribution of fission neutrons is assumed for all fission processes in the reactor, because the whole information of an outer iteration is then contained in a field of fission rates with the size of the coarse-grid mesh.
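
    As a hedged, minimal illustration of the kind of calculation discussed above (and not of the report's coarse-mesh or extrapolation machinery), the sketch below solves a one-group, one-dimensional finite-difference diffusion eigenvalue problem by plain outer (power) iteration on the fission source; all data and names are illustrative.

        import numpy as np

        def keff_power_iteration(D, sigma_a, nu_sigma_f, length, n, tol=1e-8):
            h = length / (n + 1)
            # Finite-difference loss operator A = -D d2/dx2 + Sigma_a with zero-flux boundaries.
            A = (np.diag(np.full(n, 2.0 * D / h**2 + sigma_a))
                 + np.diag(np.full(n - 1, -D / h**2), 1)
                 + np.diag(np.full(n - 1, -D / h**2), -1))
            phi = np.ones(n)
            k = 1.0
            for _ in range(1000):
                source = nu_sigma_f * phi                   # fission source F*phi
                phi_new = np.linalg.solve(A, source / k)    # one outer iteration
                k_new = k * np.sum(nu_sigma_f * phi_new) / np.sum(source)
                if abs(k_new - k) < tol:
                    return k_new, phi_new
                k, phi = k_new, phi_new
            return k, phi

        if __name__ == "__main__":
            # Bare slab check: analytic k = nu_Sigma_f / (Sigma_a + D * (pi / L)^2).
            k, _ = keff_power_iteration(D=1.0, sigma_a=0.07, nu_sigma_f=0.08, length=50.0, n=200)
            print(k, 0.08 / (0.07 + 1.0 * (np.pi / 50.0) ** 2))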

  10. Numerical methods and analysis of the nonlinear Vlasov equation on unstructured meshes of phase space

    International Nuclear Information System (INIS)

    Besse, Nicolas

    2003-01-01

    This work is dedicated to the mathematical and numerical study of the Vlasov equation on phase-space unstructured meshes. In the first part, new semi-Lagrangian methods are developed to solve the Vlasov equation on unstructured meshes of phase space. As the Vlasov equation describes multi-scale phenomena, we also propose original methods based on a wavelet multi-resolution analysis. The resulting algorithm leads to an adaptive mesh-refinement strategy. The new massively-parallel computers allow these methods to be used with several phase-space dimensions. In particular, these numerical schemes are applied to plasma physics and charged particle beams in the case of two-, three-, and four-dimensional Vlasov-Poisson systems. In the second part we prove convergence and give error estimates for several numerical schemes applied to the Vlasov-Poisson system when strong and classical solutions are considered. First we show the convergence of a semi-Lagrangian scheme on an unstructured mesh of phase space, when the regularity hypotheses for the initial data are minimal. Then we demonstrate the convergence of classes of high-order semi-Lagrangian schemes in the framework of the regular classical solution. In order to reconstruct the distribution function, we consider symmetrical Lagrange polynomials, B-splines and wavelet bases. Finally we prove the convergence of a semi-Lagrangian scheme with propagation of gradients, yielding a high-order and stable reconstruction of the solution. (author) [fr
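
    The semi-Lagrangian idea used throughout the thesis can be illustrated, in a heavily simplified setting, by one backward step for 1D constant advection on a periodic uniform grid with linear interpolation; the thesis itself uses high-order Lagrange, B-spline and wavelet reconstructions on unstructured phase-space meshes. Names and parameters are illustrative.

        import numpy as np

        def semi_lagrangian_step(u, a, dt, dx):
            """One backward semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid."""
            n = u.size
            x = np.arange(n) * dx
            x_dep = (x - a * dt) % (n * dx)        # departure points of the characteristics
            j = np.floor(x_dep / dx).astype(int)   # left neighbour index
            w = x_dep / dx - j                     # linear interpolation weight
            return (1.0 - w) * u[j % n] + w * u[(j + 1) % n]

        if __name__ == "__main__":
            n, dx, a, dt = 200, 1.0 / 200, 1.0, 0.01   # CFL = a*dt/dx = 2, allowed here
            x = np.arange(n) * dx
            u = np.exp(-200.0 * (x - 0.5) ** 2)
            for _ in range(100):                       # advect for exactly one period
                u = semi_lagrangian_step(u, a, dt, dx)
            print("peak after one period:", u.max())   # slightly damped by linear interpolation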

  11. A simple nodal force distribution method in refined finite element meshes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jai Hak [Chungbuk National University, Chungju (Korea, Republic of); Shin, Kyu In [Gentec Co., Daejeon (Korea, Republic of); Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2017-05-15

    In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to accurately define the geometry. After mesh refinement, equivalent nodal forces should be calculated at the nodes of the refined mesh. If field variables and material properties are available at the integration points of each element, then accurate equivalent nodal forces can be calculated using an adequate numerical integration. However, in certain circumstances, equivalent nodal forces cannot be calculated because field variable data are not available. In this study, a very simple nodal force distribution method is proposed. Nodal forces of the original finite element mesh are distributed to the nodes of the refined mesh so as to satisfy the equilibrium conditions. The effect of element size should also be considered in determining the magnitude of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify its accuracy and effectiveness. The results show that an accurate stress field can be obtained on refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6% in models with 8-node hexahedral elements and less than 1% in models with 20-node hexahedral elements or 10-node tetrahedral elements.
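
    One plausible reading of such a distribution rule, sketched below for a single 1D element, is to smear the coarse element's nodal forces into an equivalent constant traction and recompute consistent nodal forces on the refined nodes; the resultant force is preserved exactly and the refined element sizes enter the weights. This is an illustrative assumption, not the authors' exact rule, and all names are made up.

        import numpy as np

        def redistribute_element_forces(coarse_nodes, coarse_forces, fine_nodes):
            """Map nodal forces on a coarse 1D element onto the nodes of its refinement."""
            length = coarse_nodes[1] - coarse_nodes[0]
            traction = coarse_forces.sum() / length          # equivalent constant traction
            fine_forces = np.zeros_like(fine_nodes, dtype=float)
            for a, b in zip(fine_nodes[:-1], fine_nodes[1:]):
                h = b - a
                # Consistent nodal load of a constant traction on a linear element: h*t/2 per node.
                fine_forces[fine_nodes == a] += 0.5 * traction * h
                fine_forces[fine_nodes == b] += 0.5 * traction * h
            return fine_forces

        if __name__ == "__main__":
            coarse = np.array([0.0, 1.0])
            forces = np.array([0.5, 0.5])                    # total force 1.0
            fine = np.array([0.0, 0.2, 0.5, 1.0])            # non-uniform refinement
            f = redistribute_element_forces(coarse, forces, fine)
            print(f, "total:", f.sum())                      # the resultant (1.0) is preserved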

  12. Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems

    KAUST Repository

    Frohne, Jö rg; Heister, Timo; Bangerth, Wolfgang

    2015-01-01

    © 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.

  13. Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems

    KAUST Repository

    Frohne, Jörg

    2015-08-06

    © 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.

  14. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, namely diagonalization, because it operates within a low-dimensional subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  15. HypGrid2D. A 2-d mesh generator

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, N N

    1998-03-01

    The implementation of a hyperbolic mesh generation procedure, based on an equation for orthogonality and an equation for the cell face area, is described. The method is fast and robust and gives meshes with good smoothness and orthogonality. The procedure is implemented in a program called HypGrid2D. The HypGrid2D program is capable of generating C-, O- and 'H'-meshes for use in connection with the EllipSys2D Navier-Stokes solver. To illustrate the capabilities of the program, some test examples are shown. First, a series of C-meshes are generated around a NACA-0012 airfoil. Secondly, a series of O-meshes are generated around a NACA-65-418 airfoil. Finally, 'H'-meshes are generated over a Gaussian hill and a linear escarpment. (au)
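
    A heavily simplified, hedged sketch of the marching idea behind such hyperbolic generators: with the orthogonality condition x_xi . x_eta = 0 and a prescribed cell area dA, each point of the current front advances along its local unit normal by dA / |x_xi|. Real generators, including HypGrid2D, add implicit smoothing and stretching control; all names below are illustrative.

        import numpy as np

        def march_front(front, cell_area):
            """Advance a closed 2D front (n, 2) by one layer using orthogonality + area conditions."""
            tangent = np.gradient(front, axis=0)                  # approximates x_xi
            t_norm = np.linalg.norm(tangent, axis=1, keepdims=True)
            unit_t = tangent / t_norm
            # For a counter-clockwise front, (t_y, -t_x) is the outward unit normal.
            normal = np.column_stack((unit_t[:, 1], -unit_t[:, 0]))
            step = cell_area / t_norm                             # area condition fixes the step size
            return front + step * normal

        if __name__ == "__main__":
            # Grow an O-type mesh layer by layer, starting from a unit circle.
            theta = np.linspace(0.0, 2.0 * np.pi, 73)[:-1]
            layers = [np.column_stack((np.cos(theta), np.sin(theta)))]
            for _ in range(10):
                layers.append(march_front(layers[-1], cell_area=0.01))
            print("mean outer radius:", np.linalg.norm(layers[-1], axis=1).mean())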

  16. Adjoint-based Mesh Optimization Method: The Development and Application for Nuclear Fuel Analysis

    International Nuclear Information System (INIS)

    Son, Seongmin; Lee, Jeong Ik

    2016-01-01

    In this research, a method for optimizing the mesh distribution is proposed. The proposed method uses an adjoint-based optimization method (the adjoint method). The optimized result is obtained by applying this meshing technique to the existing code input deck and is compared to the results produced with a uniform meshing method. Numerical solutions are calculated from an in-house 1D Finite Difference Method code, neglecting axial conduction. The radial nodes in the fuel are first optimized to best match the Fuel Centerline Temperature (FCT); the axial nodes are then optimized to best match the Peak Cladding Temperature (PCT). After obtaining the optimized radial and axial nodes, the nodalization is implemented in the system analysis code and transient analyses are performed to assess the performance of the optimized nodalization. The adjoint-based mesh optimization method developed in this study is applied to MARS-KS, a nuclear system analysis code. Results show that the newly established method yields better results than the uniform meshing method from the numerical point of view. It is again stressed that a mesh optimized for the steady state can also give better numerical results during a transient analysis.

  17. Development of polygon elements based on the scaled boundary finite element method

    International Nuclear Information System (INIS)

    Chiong, Irene; Song Chongmin

    2010-01-01

    We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of the polygonal finite element is highly anticipated in computational mechanics as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher order approximation and better transition elements in finite element meshes. Polygon elements of arbitrary number of edges and order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes and outcomes indicate the ability of the scaled boundary polygonal element to express rigid body and constant strain modes. Numerical tests are presented; the patch test is passed and constant strain modes verified. Accuracy and convergence of the method are also presented and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh and accuracy and convergence are achieved from fewer nodes. The proposed method is also shown to be truly flexible, and applies to arbitrary n-gons formed of irregular and non-convex polygons.

  18. Iterative linear solvers in a 2D radiation-hydrodynamics code: Methods and performance

    International Nuclear Information System (INIS)

    Baldwin, C.; Brown, P.N.; Falgout, R.; Graziani, F.; Jones, J.

    1999-01-01

    Computer codes containing both hydrodynamics and radiation play a central role in simulating both astrophysical and inertial confinement fusion (ICF) phenomena. A crucial aspect of these codes is that they require an implicit solution of the radiation diffusion equations. The authors present in this paper the results of a comparison of five different linear solvers on a range of complex radiation and radiation-hydrodynamics problems. The linear solvers used are diagonally scaled conjugate gradient, GMRES with incomplete LU preconditioning, conjugate gradient with incomplete Cholesky preconditioning, multigrid, and multigrid-preconditioned conjugate gradient. These problems involve shock propagation, opacities varying over 5-6 orders of magnitude, tabular equations of state, and dynamic ALE (Arbitrary Lagrangian Eulerian) meshes. They perform a problem-size scalability study by comparing linear solver performance over a wide range of problem sizes from 1,000 to 100,000 zones. The fundamental question they address in this paper is: Is it more efficient to invert the matrix in many inexpensive steps (like diagonally scaled conjugate gradient) or in fewer expensive steps (like multigrid)? In addition, what is the answer to this question as a function of problem size, and is the answer problem dependent? They find that the diagonally scaled conjugate gradient method performs poorly as the problem size grows, increasing in both iteration count and overall CPU time with the size of the problem and also increasing for larger time steps. For all problems considered, the multigrid algorithms scale almost perfectly (i.e., the iteration count is approximately independent of problem size and problem time step). For pure radiation flow problems (i.e., no hydrodynamics), they see speedups in CPU time of factors of ∼15-30 for the largest problems when comparing the multigrid solvers to diagonally scaled conjugate gradient.
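
    As a point of reference for the simplest solver in this comparison, the sketch below implements conjugate gradient with diagonal (Jacobi) scaling as the preconditioner on a small SPD test matrix; the radiation-hydrodynamics code's solvers and test problems are of course far more involved, and all names here are illustrative.

        import numpy as np
        import scipy.sparse as sp

        def diag_scaled_cg(A, b, tol=1e-10, max_iter=10000):
            """Preconditioned CG with M^-1 = diag(A)^-1 (diagonal scaling)."""
            d_inv = 1.0 / A.diagonal()
            x = np.zeros_like(b)
            r = b - A @ x
            z = d_inv * r
            p = z.copy()
            rz = r @ z
            for k in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                z = d_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iter

        if __name__ == "__main__":
            # 1D diffusion-like SPD test matrix.
            n = 500
            A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")
            b = np.ones(n)
            x, iters = diag_scaled_cg(A, b)
            print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))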

  19. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. However, their computational complexity is exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper; it is named the revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its termination condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of the method are also discussed and analyzed. Algorithm analysis and an example study show that a properly chosen safety factor parameter, accuracy parameter, and initial interior point may reduce the number of iterations, and that these can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
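
    The linear ranking function referred to above orders fuzzy numbers by mapping each to a crisp value. A commonly used linear ranking for a trapezoidal fuzzy number (a, b, c, d) is the average of its four defining points; the specific function adopted in the paper may differ, so the sketch below is illustrative only.

        from dataclasses import dataclass

        @dataclass
        class TrapezoidalFuzzyNumber:
            a: float  # left support
            b: float  # left core
            c: float  # right core
            d: float  # right support

            def rank(self) -> float:
                """Linear ranking: R(A) = (a + b + c + d) / 4."""
                return (self.a + self.b + self.c + self.d) / 4.0

            def __le__(self, other: "TrapezoidalFuzzyNumber") -> bool:
                # Fuzzy comparison induced by the linear ranking function.
                return self.rank() <= other.rank()

        if __name__ == "__main__":
            cost1 = TrapezoidalFuzzyNumber(1.0, 2.0, 3.0, 4.0)   # rank 2.5
            cost2 = TrapezoidalFuzzyNumber(0.0, 2.5, 3.5, 5.0)   # rank 2.75
            print(cost1.rank(), cost2.rank(), cost1 <= cost2)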

  20. Application of mesh free lattice Boltzmann method to the analysis of very high temperature reactor lower plenum

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jong Woon [Dongguk Univ., Gyeongju (Korea, Republic of). Dept. of Energy and Environment

    2011-11-15

    Inside a helium-cooled very high temperature reactor (VHTR) lower plenum, hot gas jets from the upper fuel channels at very high velocities and temperatures and mixes before flowing out. One of the major concerns is local hot spots in the plenum due to inefficient mixing of the helium exiting from differentially heated fuel channels, which involves complex fluid flow physics. For this situation, a mesh-free technique, in particular the Lattice Boltzmann Method (LBM), is of particular interest owing to its merit of requiring no mesh generation. As an attempt to assess the efficiency of the method for such a problem, the three-dimensional flow field inside a scaled test model of the VHTR lower plenum is computed with the commercial XFLOW code. Large eddy simulation (LES) and classical Smagorinsky eddy viscosity (EV) turbulence models are employed to investigate the capability of the LBM in capturing large-scale vortex shedding. (orig.)

  1. Numerical homogenization of concrete microstructures without explicit meshes

    International Nuclear Information System (INIS)

    Sanahuja, Julien; Toulemonde, Charles

    2011-01-01

    Life management of electric hydro or nuclear power plants requires estimating long-term concrete properties of the facilities, for obvious safety and serviceability reasons. Decades-old structures are foreseen to be operational for several more decades. As a large number of different concrete formulations are found in EDF facilities, empirical models based on many experiments cannot be an option for a large fleet of power plant buildings. To build predictive models, homogenization techniques offer an appealing alternative. To properly upscale creep, especially at long term, a rather precise description of the microstructure is required. However, the complexity of the morphology of concrete poses several challenges. In particular, concrete is formulated to maximize the packing density of the granular skeleton, leading to aggregates spanning several length scales with small inter-particle spacings. Thus, explicit meshing of realistic concrete microstructures is either out of reach of current meshing algorithms or would produce a number of degrees of freedom far beyond the capabilities of current generic FEM codes. This paper proposes a method to deal with complex matrix-inclusion microstructures such as the ones encountered at the mortar or concrete scales, without explicitly meshing them. The microstructure is superimposed on an independent mesh, which is a regular Cartesian grid. This inevitably yields so-called 'gray elements', spanning multiple phases. As the reliability of the estimate of the effective properties depends strongly on the behavior assigned to these gray elements, special attention is paid to them. As far as the question of the solvers is concerned, generic FEM codes are found to lack efficiency: they cannot reach high enough levels of discretization with classical free meshes, and they do not take advantage of the regular structure of the mesh. Thus, a specific finite differences/finite volumes solver has been developed. At first, generic off

  2. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  3. Does Attorney Advertising Influence Patient Perceptions of Pelvic Mesh?

    Science.gov (United States)

    Tippett, Elizabeth; King, Jesse; Lucent, Vincent; Ephraim, Sonya; Murphy, Miles; Taff, Eileen

    2018-01-01

    To measure the relative influence of attorney advertising on patient perceptions of pelvic mesh compared with a history of surgery and a first urology visit. A 52-item survey was administered to 170 female patients in 2 urology offices between 2014 and 2016. Multiple survey items were combined to form scales for benefit and risk perceptions of pelvic mesh, perceptions of the advertising, attitudes toward pelvic mesh, and knowledge of pelvic mesh and underlying medical conditions. Data were analyzed using hierarchical linear regression models. Exposure to attorney advertising was quite high; 88% reported seeing a mesh-related attorney advertisement in the last 6 months. Over half of patients reported seeing attorney advertisements more than once per week. A history of prior mesh implant surgery was the strongest predictor of benefit and risk perceptions of pelvic mesh. Exposure to attorney advertising was associated with higher risk perceptions but did not significantly affect perceptions of benefits. Past urologist visits increased perceptions of benefits but had no effect on risk perceptions. Attorney advertising appears to have some influence on risk perceptions, but personal experience and discussions with a urogynecologist or urologist also influence patient perceptions. Implications, limitations, and future research are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  5. A Parallel, Multi-Scale Watershed-Hydrologic-Inundation Model with Adaptively Switching Mesh for Capturing Flooding and Lake Dynamics

    Science.gov (United States)

    Ji, X.; Shen, C.

    2017-12-01

    Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocity both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. The model applies a semi-implicit, semi-Lagrangian (SISL) scheme to solve the dynamic wave equations, and, with the assistance of the multi-mesh method, it also adaptively applies the dynamic wave equation only in areas of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.

  6. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and the inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  7. Universal Linear Scaling of Permeability and Time for Heterogeneous Fracture Dissolution

    Science.gov (United States)

    Wang, L.; Cardenas, M. B.

    2017-12-01

    Fractures change dynamically over geological time scales due to mechanical deformation and chemical reactions. However, the latter mechanism remains poorly understood with respect to fracture expansion, which positively couples the flow and reactive transport processes, i.e., as a fracture expands, so does its permeability (k), and thus so do the flow and reactive transport processes. To unravel this coupling, we consider a self-enhancing process in which fracture expansion is caused by acidic fluid, i.e., CO2-saturated brine dissolving a calcite fracture. We rigorously derive a theory, for the first time, showing that fracture permeability increases linearly with time [Wang and Cardenas, 2017]. To validate this theory, we resort to direct simulation that solves the Navier-Stokes and advection-diffusion equations with a mesh that moves according to the dynamic dissolution process in two-dimensional (2D) fractures. We find that k increases slowly at first until the dissolution front breaks through the outlet boundary, at which point we observe a rapid increase in k, i.e., the linear time-dependence of k appears. The theory agrees well with numerical observations across a broad range of Peclet and Damkohler numbers for homogeneous and heterogeneous 2D fractures. Moreover, the linear scaling relationship between k and time matches well with experimental observations of the dissolution of three-dimensional (3D) fractures. To further attest to our theory's universality for 3D heterogeneous fractures across a broad range of roughness and correlation length of the aperture field, we develop a depth-averaged model that simulates the process-based reactive transport. The simulation results show that, regardless of a wide variety of dissolution patterns such as the presence of dissolution fingers and preferential dissolution paths, the linear scaling relationship between k and time holds. Our theory sheds light on predicting permeability evolution in many geological settings when the self
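
    A minimal sketch of how the reported linear k(t) scaling can be checked numerically: fit a least-squares line to permeability samples taken after breakthrough and inspect the goodness of fit. The data below are synthetic, not from the study.

      # Hedged sketch: test a linear k(t) scaling by fitting a line to (synthetic)
      # permeability data recorded after the dissolution front breaks through.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 100.0, 200)                      # time after breakthrough
      k = 1.0e-12 + 4.0e-14 * t * (1.0 + 0.02 * rng.standard_normal(t.size))

      slope, intercept = np.polyfit(t, k, deg=1)            # least-squares line
      k_fit = slope * t + intercept
      r2 = 1.0 - np.sum((k - k_fit) ** 2) / np.sum((k - k.mean()) ** 2)
      print(f"dk/dt = {slope:.3e}, R^2 = {r2:.4f}")          # R^2 near 1 supports linear scaling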

  8. Bessel smoothing filter for spectral-element mesh

    Science.gov (United States)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the
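
    The sketch below illustrates, in one dimension with finite differences rather than spectral elements, the key idea of applying the smoother through its inverse filter: solve (I - L^2 d^2/dx^2) m_s = m with a conjugate-gradient solver instead of convolving explicitly. The grid sizes and coherent length L are arbitrary choices, and the operator is only a Bessel-type stand-in for the filter used in the paper.

      # Hedged 1-D finite-difference sketch (not the paper's spectral-element code):
      # apply a Bessel-type smoother by solving the elliptic problem
      #   (I - L^2 d^2/dx^2) m_smooth = m
      # with conjugate gradients instead of an explicit convolution.
      import numpy as np
      from scipy.sparse import diags, identity
      from scipy.sparse.linalg import cg

      n, dx, L = 400, 10.0, 60.0                 # grid size, spacing, coherent length
      x = np.arange(n) * dx
      m = np.where((x > 1500) & (x < 2500), 1.0, 0.0)    # rough model to be smoothed
      m += 0.1 * np.random.default_rng(2).standard_normal(n)

      lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
      A = identity(n) - (L**2) * lap             # SPD operator of the inverse filter

      m_smooth, info = cg(A, m)
      assert info == 0                            # 0 means CG converged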

  9. MHD simulations on an unstructured mesh

    International Nuclear Information System (INIS)

    Strauss, H.R.; Park, W.

    1996-01-01

    We describe work on a full MHD code using an unstructured mesh. MH3D++ is an extension of the PPPL MH3D resistive full MHD code. MH3D++ replaces the structured mesh and finite difference / Fourier discretization of MH3D with an unstructured mesh and finite element / Fourier discretization. Low level routines which perform differential operations, solution of PDEs such as Poisson's equation, and graphics, are encapsulated in C++ objects to isolate the finite element operations from the higher level code. The high level code is the same, whether it is run in structured or unstructured mesh versions. This allows the unstructured mesh version to be benchmarked against the structured mesh version. As a preliminary example, disruptions in DIII-D reverse shear equilibria are studied numerically with the MH3D++ code. Numerical equilibria were first produced starting with an EQDSK file containing equilibrium data of a DIII-D L-mode negative central shear discharge. Using these equilibria, the linearized equations are time advanced to get the toroidal mode number n = 1 linear growth rate and eigenmode, which is resistively unstable. The equilibrium and linear mode are used to initialize 3D nonlinear runs. An example shows poloidal slices of 3D pressure surfaces: initially, on the left, and at an intermediate time, on the right

  10. Linear source approximation scheme for method of characteristics

    International Nuclear Information System (INIS)

    Tang Chuntao

    2011-01-01

    The method of characteristics (MOC) for solving the neutron transport equation based on unstructured meshes has already become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with the flat source approximation, called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme and its corresponding modification for negative source distributions are proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)
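
    For orientation, the sketch below shows the flat-source (step characteristics) segment update that the proposed LS scheme improves upon; the linear-source update itself, which additionally carries a linear spatial expansion of the source within each mesh, is not reproduced here. The numbers are arbitrary.

      # Hedged sketch of the flat-source (step characteristics, SC) update along one
      # characteristic segment, i.e. the baseline that the linear-source (LS) scheme
      # in the abstract improves on by also tracking an in-cell linear source expansion.
      import math

      def sc_segment(psi_in, q_flat, sigma_t, length):
          """Return (psi_out, psi_avg) for one track segment of given length with
          constant (flat) source q_flat and total cross-section sigma_t."""
          tau = sigma_t * length
          att = math.exp(-tau)
          psi_out = psi_in * att + (q_flat / sigma_t) * (1.0 - att)
          # segment-averaged angular flux from a neutron balance over the segment
          psi_avg = q_flat / sigma_t + (psi_in - psi_out) / tau
          return psi_out, psi_avg

      print(sc_segment(psi_in=1.0, q_flat=0.5, sigma_t=0.8, length=1.2))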

  11. Grid adaption using Chimera composite overlapping meshes

    Science.gov (United States)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communication through boundary interfaces between the separate grids is carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
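
    A self-contained sketch of the tri-linear interpolation used to transfer data through the overset boundary interfaces: given the eight corner values of a donor cell and the local coordinates of a receptor point, the interpolated value is a weighted sum of the corners. The data are illustrative.

      # Hedged sketch of tri-linear interpolation between overset (Chimera) grids:
      # interpolate a value inside one donor cell from its eight corner values,
      # given local coordinates (xi, eta, zeta) in [0, 1].
      def trilinear(corners, xi, eta, zeta):
          """corners[i][j][k] is the value at the cell corner (i, j, k) in {0, 1}^3."""
          value = 0.0
          for i in (0, 1):
              wi = xi if i else 1.0 - xi
              for j in (0, 1):
                  wj = eta if j else 1.0 - eta
                  for k in (0, 1):
                      wk = zeta if k else 1.0 - zeta
                      value += wi * wj * wk * corners[i][j][k]
          return value

      # Corner values of a donor cell and a receptor point at its centre.
      cell = [[[0.0, 1.0], [2.0, 3.0]], [[4.0, 5.0], [6.0, 7.0]]]
      print(trilinear(cell, 0.5, 0.5, 0.5))   # 3.5, the average of the corners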

  12. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity using the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (approximately 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, which is therefore generally given only as an estimate. The linearization method, by contrast, calculates from the algorithm of the PRNG itself and can thus determine a lower bound on the linear complexity.
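
    For comparison, the sketch below implements the Berlekamp-Massey algorithm over GF(2), i.e. the sequence-based approach the abstract contrasts with; the linearization method itself works from the algebraic description of the PRNG and is not reproduced here. The example LFSR is arbitrary.

      def berlekamp_massey_gf2(s):
          """Linear complexity of a binary sequence s (list of 0/1), computed from the
          output sequence itself: the O(N^2) approach the abstract compares against."""
          n = len(s)
          c = [0] * n; b = [0] * n
          c[0] = b[0] = 1
          L, m = 0, -1
          for i in range(n):
              # discrepancy between the sequence and the current LFSR prediction
              d = s[i]
              for j in range(1, L + 1):
                  d ^= c[j] & s[i - j]
              if d:
                  t = c[:]
                  for j in range(n - i + m):       # c(x) += x^(i-m) * b(x)
                      c[i - m + j] ^= b[j]
                  if 2 * L <= i:
                      L, m, b = i + 1 - L, i, t
          return L

      # A sequence from the LFSR x_t = x_{t-3} XOR x_{t-4} has linear complexity 4.
      seq = [1, 0, 1, 1]
      for t in range(4, 20):
          seq.append(seq[t - 3] ^ seq[t - 4])
      print(berlekamp_massey_gf2(seq))   # -> 4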

  13. Finite element method for solving Kohn-Sham equations based on self-adaptive tetrahedral mesh

    International Nuclear Information System (INIS)

    Zhang Dier; Shen Lihua; Zhou Aihui; Gong Xingao

    2008-01-01

    A finite element (FE) method with a self-adaptive mesh-refinement technique is developed for solving the density functional Kohn-Sham equations. The FE method adopts local piecewise polynomial basis functions, which produce sparsely structured Hamiltonian matrices. The method is well suited for parallel implementation without using Fourier transforms. In addition, the self-adaptive mesh-refinement technique can control the computational accuracy and efficiency with optimal mesh density in different regions

  14. An {Mathematical expression} iteration bound primal-dual cone affine scaling algorithm for linear programming

    NARCIS (Netherlands)

    J.F. Sturm; J. Zhang (Shuzhong)

    1996-01-01

    In this paper we introduce a primal-dual affine scaling method. The method uses a search-direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction neither coincides with known primal-dual affine scaling directions (Jansen et al., 1993;

  15. QMESH RENUM QPLOT, Mesh Generator on 2-D Bodies for Finite Element Method Analysis, with Plot Utility

    International Nuclear Information System (INIS)

    Jones, R.E.; Schkade, A.F.; Eyberger, L.R.

    1991-01-01

    1 - Description of problem or function: A set of five programs which make up a self-organising mesh generation package. QMESH generates meshes having quadrilateral elements on arbitrarily-shaped, two-dimensional (planar or axisymmetric) bodies. It is designed for use with two-dimensional finite element analysis applications. A flexible hierarchical input scheme is used to describe bodies to QMESH as collections of regions. A mesh for each region is developed independently, with the final assembly and bandwidth minimization performed by the independent program, RENUM or RENUM8. RENUM is applied when four-node elements are desired. Eight-node elements (with mid-side nodes) may be obtained with RENUM8. QPLOT and QPLOT8 are plot programs for meshes generated by the QMESH/RENUM and QMESH/RENUM8 program pairs, respectively. QPLOT and QPLOT8 automatically section the mesh into appropriately-sized sections for legible display of node and element numbers. An overall plot showing the position of the selected plot areas is produced. 2 - Method of solution: The mesh generating process for each individual region begins with the installation of an initial mesh which is a transformation of a regular grid on the unit square. The dimensions and orientation of the initial mesh may be defined by the user or, optionally, may be chosen by QMESH. Various smoothing algorithms may be applied to the initial mesh. Then, the mesh may be 'restructured' using an iterative scheme involving 'element pair restructuring', 'acute element deletion', and smoothing. In element pair restructuring, the interface side between two elements is removed and placed between two different nodes belonging to the pair of elements, provided that the change produces an overall improvement in the shapes of the two elements. In acute element deletion, an element having one diagonal much shorter than the other is deleted by collapsing the short diagonal to zero length. The exact order in which restructuring, element

  16. Two-dimensional isostatic meshes in the finite element method

    OpenAIRE

    Martínez Marín, Rubén; Samartín, Avelino

    2002-01-01

    In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh, and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion, different objective functions have been chosen (total potential energy and average quadratic error) and the number of nodes and dof's...

  17. Adaptive mesh refinement and adjoint methods in geophysics simulations

    Science.gov (United States)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  18. Second-order particle-in-cell (PIC) computational method in the one-dimensional variable Eulerian mesh system

    International Nuclear Information System (INIS)

    Pyun, J.J.

    1981-01-01

    As part of an effort to incorporate the variable Eulerian mesh into the second-order PIC computational method, a truncation error analysis was performed to calculate the second-order error terms for the variable Eulerian mesh system. The results show that the maximum mesh size increment/decrement is limited to α(Δr_i)^2, where Δr_i is the non-dimensional mesh size of the i-th cell and α is a constant of order one. The numerical solutions of Burgers' equation by the second-order PIC method in the variable Eulerian mesh system were compared with its exact solution. It was found that the second-order accuracy of the PIC method was maintained under the above condition. Additional problems were analyzed using the second-order PIC method in both variable and uniform Eulerian mesh systems. The results indicate that the second-order PIC method in the variable Eulerian mesh system can provide substantial computational time saving with no loss in accuracy

  19. Numerical form-finding method for large mesh reflectors with elastic rim trusses

    Science.gov (United States)

    Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli

    2018-06-01

    Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Because of the large aperture, light weight and high accuracy required of spaceborne reflectors, the rim truss deformation is in fact not negligible. In order to design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. The form-finding method is then embedded into an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the cable-net form-finding with boundary conditions updated for the rim-truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical rim truss.
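
    The sketch below shows one force-density form-finding step for a small cable net with a rigid (fixed) boundary; it only illustrates the kind of form-finding that the proposed approach iterates, and omits the update of the boundary from the elastic rim-truss deformation. The grid size, force densities and boundary heights are arbitrary.

      # Hedged sketch: force-density form-finding of a small cable net with fixed
      # boundary nodes (the elastic rim-truss coupling of the paper is omitted).
      import numpy as np

      n_div = 4                                              # 5 x 5 grid of net nodes
      nodes = [(i, j) for j in range(n_div + 1) for i in range(n_div + 1)]
      index = {p: k for k, p in enumerate(nodes)}

      edges = []                                             # cable segments along grid lines
      for (i, j) in nodes:
          if i < n_div:
              edges.append((index[(i, j)], index[(i + 1, j)]))
          if j < n_div:
              edges.append((index[(i, j)], index[(i, j + 1)]))

      fixed = [k for k, (i, j) in enumerate(nodes) if i in (0, n_div) or j in (0, n_div)]
      free = [k for k in range(len(nodes)) if k not in fixed]

      xyz = np.array([(float(i), float(j), 0.0) for (i, j) in nodes])
      for k in fixed:                                        # saddle-shaped fixed boundary
          i, j = nodes[k]
          xyz[k, 2] = 2.0 * ((i / n_div - 0.5) ** 2 - (j / n_div - 0.5) ** 2)

      q = np.ones(len(edges))                                # uniform force densities
      C = np.zeros((len(edges), len(nodes)))                 # branch-node incidence matrix
      for e, (a, b) in enumerate(edges):
          C[e, a], C[e, b] = 1.0, -1.0

      Q = np.diag(q)
      Cf, Cb = C[:, free], C[:, fixed]
      D = Cf.T @ Q @ Cf                                      # free-free force-density matrix

      # Equilibrium positions of the free nodes (no external load).
      xyz[free] = np.linalg.solve(D, -Cf.T @ Q @ Cb @ xyz[fixed])
      print(np.round(xyz[free], 3))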

  20. A novel partitioning method for block-structured adaptive meshes

    Science.gov (United States)

    Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-07-01

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  1. A novel partitioning method for block-structured adaptive meshes

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Lin, E-mail: lin.fu@tum.de; Litvinov, Sergej, E-mail: sergej.litvinov@aer.mw.tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-07-15

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  2. Coarse-mesh rebalancing acceleration for eigenvalue problems

    International Nuclear Information System (INIS)

    Asaoka, T.; Nakahara, Y.; Miyasaka, S.

    1974-01-01

    The coarse-mesh rebalance method is adopted in Monte Carlo schemes with the aim of accelerating the convergence of the source iteration process. At every completion of the Monte Carlo game for one batch of neutron histories, the scaling factor for the neutron flux is calculated to achieve neutron balance in each coarse-mesh zone into which the total system is divided. This rebalance factor is applied to the weight of each fission source neutron in the coarse-mesh zone when playing the next Monte Carlo game. The numerical examples have shown that the coarse-mesh rebalance Monte Carlo calculation gives a good estimate of the eigenvalue already after several batches, with negligible extra computer time compared to the standard Monte Carlo. 5 references. (U.S.)

  3. A linear multiple balance method for discrete ordinates neutron transport equations

    International Nuclear Information System (INIS)

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the cell-center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ^4) whereas that of diamond differencing is O(Δ^2). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the solution method of linear multiple balance is effective and sufficiently efficient

  4. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace, and the most important issue for applications is the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.

  5. New Method for Mesh Moving Based on Radial Basis Function Interpolation

    NARCIS (Netherlands)

    De Boer, A.; Van der Schoot, M.S.; Bijl, H.

    2006-01-01

    A new point-by-point mesh movement algorithm is developed for the deformation of unstructured grids. The method is based on using radial basis functions (RBFs) to interpolate the displacements of the boundary nodes to the whole flow mesh. A small system of equations has to be solved, only involving
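
    A small sketch of the idea using SciPy's RBFInterpolator (the paper's own formulation and kernel may differ): fit an interpolant to the prescribed boundary-node displacements, then evaluate it at every interior node of the flow mesh. The geometry and kernel choice here are illustrative assumptions.

      # Hedged sketch of RBF mesh moving, not the paper's implementation: interpolate
      # boundary displacements to the interior nodes with a radial basis function.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(3)

      # Boundary nodes on the unit circle, deformed into an ellipse.
      theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
      boundary = np.column_stack([np.cos(theta), np.sin(theta)])
      boundary_disp = np.column_stack([0.3 * np.cos(theta), -0.1 * np.sin(theta)])

      # Interior nodes of the (unstructured) flow mesh, here just random points.
      interior = rng.uniform(-0.7, 0.7, size=(500, 2))

      # A small dense system is solved once for the RBF weights, then evaluation is cheap.
      rbf = RBFInterpolator(boundary, boundary_disp, kernel='thin_plate_spline')
      moved_interior = interior + rbf(interior)
      print(moved_interior[:3])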

  6. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique can be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.

  7. hp-version discontinuous Galerkin methods on polygonal and polyhedral meshes

    CERN Document Server

    Cangiani, Andrea; Georgoulis, Emmanuil H; Houston, Paul

    2017-01-01

    Over the last few decades discontinuous Galerkin finite element methods (DGFEMs) have witnessed tremendous interest as a computational framework for the numerical solution of partial differential equations. Their success is due to their extreme versatility in the design of the underlying meshes and local basis functions, while retaining key features of both (classical) finite element and finite volume methods. Somewhat surprisingly, DGFEMs on general tessellations consisting of polygonal (in 2D) or polyhedral (in 3D) element shapes have received little attention within the literature, despite the potential computational advantages. This volume introduces the basic principles of hp-version (i.e., locally varying mesh-size and polynomial order) DGFEMs over meshes consisting of polygonal or polyhedral element shapes, presents their error analysis, and includes an extensive collection of numerical experiments. The extreme flexibility provided by the locally variable element shapes, element sizes, and elemen...

  8. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have a hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape, in each level can be extracted in an advancing scheme. In this paper, several examples were used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  9. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    Science.gov (United States)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
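
    The sketch below illustrates the general idea with the PyWavelets library rather than the patented system itself: a raster height field is wavelet-encoded with a 2-D discrete wavelet transform, and a coarser level of detail is reconstructed by zeroing the finest detail bands. The terrain data and wavelet choice are assumptions.

      # Hedged illustration (not the patented method): wavelet-encode a synthetic
      # raster height field and reconstruct a coarser level of detail by zeroing
      # the finest detail coefficients before the inverse transform.
      import numpy as np
      import pywt

      x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
      heights = (np.exp(-30 * ((x - 0.3) ** 2 + (y - 0.4) ** 2))
                 + 0.5 * np.exp(-20 * ((x - 0.7) ** 2 + (y - 0.7) ** 2))
                 + 0.01 * np.random.default_rng(4).standard_normal(x.shape))

      coeffs = pywt.wavedec2(heights, wavelet='bior2.2', level=3)   # encoded representation

      coarse = list(coeffs)
      for lvl in (-1, -2):                                   # zero the two finest detail bands
          coarse[lvl] = tuple(np.zeros_like(c) for c in coarse[lvl])
      lod_terrain = pywt.waverec2(coarse, wavelet='bior2.2')

      print(heights.shape, lod_terrain.shape)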

  10. Transmission probability method based on triangle meshes for solving unstructured geometry neutron transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Wu Hongchun [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)]. E-mail: hongchun@mail.xjtu.edu.cn; Liu Pingping [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Zhou Yongqiang [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Cao Liangzhi [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)

    2007-01-15

    In advanced reactors, fuel assemblies or cores with unstructured geometry are frequently used, and the transmission probability method (TPM) has been widely used for calculating such fuel assemblies. However, rectangular or hexagonal meshes are mainly used in TPM codes for the normal core structure. Triangle meshes are the most useful for expressing complicated unstructured geometry. Even though the finite element method and the Monte Carlo method are very good at solving unstructured geometry problems, they are very time consuming. We therefore developed a TPM code based on triangle meshes. The TPM code based on triangle meshes was applied to a hybrid fuel geometry and compared with the results of the MCNP code and other codes. The results were consistent with each other. The TPM with triangle meshes would thus be expected to be applicable to arbitrary two-dimensional fuel assemblies.

  11. Kinetic mesh-free method for flutter prediction in turbomachines

    Indian Academy of Sciences (India)

    -based mesh-free method for unsteady flows. ... Council for Scientific and Industrial Research, National Aerospace Laboratories, Computational and Theoretical Fluid Dynamics Division, Bangalore 560 017, India; Engineering Mechanics Unit, ...

  12. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time

  13. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  14. An Efficient Mesh Generation Method for Fractured Network System Based on Dynamic Grid Deformation

    Directory of Open Access Journals (Sweden)

    Shuli Sun

    2013-01-01

    The meshing quality of the discrete model influences the accuracy, convergence, and efficiency of the solution for fractured network systems in geological problems. However, modeling and meshing of such fractured network systems are usually tedious and difficult due to the geometric complexity of the computational domain induced by the existence and extension of fractures. Traditional meshing methods that deal with fractures usually involve a boundary recovery operation based on topological transformation, which relies on many complicated techniques and skills. This paper presents an alternative and efficient approach for meshing fractured network systems. The method first presets points on the fractures and then performs Delaunay triangulation to obtain a preliminary mesh with a point-by-point centroid insertion algorithm. The fractures are then exactly recovered by local correction with a revised dynamic grid deformation approach. A smoothing algorithm is finally applied to improve the quality of the mesh. The proposed approach is efficient, easy to implement, and applicable to cases of initially existing fractures and of fracture extension. The method is successfully applied to the modeling of two- and three-dimensional discrete fractured network (DFN) systems in geological problems to demonstrate its effectiveness and high efficiency.
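
    The first stage of the approach (presetting points along the fracture traces and Delaunay-triangulating them together with background points) can be sketched with SciPy as below; the local correction by dynamic grid deformation that exactly recovers the fractures is not reproduced. The fracture geometry is made up for illustration.

      # Hedged sketch of the first stage only: preset points along fracture traces,
      # add background points, and Delaunay-triangulate the combined point set.
      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(5)

      def points_on_fracture(p0, p1, n):
          """n points spaced along the straight fracture segment p0 -> p1."""
          t = np.linspace(0.0, 1.0, n)[:, None]
          return (1.0 - t) * np.asarray(p0) + t * np.asarray(p1)

      fractures = [points_on_fracture((0.1, 0.2), (0.9, 0.8), 25),
                   points_on_fracture((0.2, 0.9), (0.8, 0.1), 25)]
      background = rng.uniform(0.0, 1.0, size=(200, 2))

      pts = np.vstack(fractures + [background])
      tri = Delaunay(pts)
      print(f"{len(pts)} points, {tri.simplices.shape[0]} triangles")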

  15. The application of finite volume methods for modelling three-dimensional incompressible flow on an unstructured mesh

    Science.gov (United States)

    Lonsdale, R. D.; Webster, R.

    This paper demonstrates the application of a simple finite volume approach to a finite element mesh, combining the economy of the former with the geometrical flexibility of the latter. The procedure is used to model a three-dimensional flow on a mesh of linear eight-node brick elements (hexahedra). Simulations are performed for a wide range of flow problems, some in excess of 94,000 nodes. The resulting computer code, ASTEC, which incorporates these procedures, is described.

  16. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  17. Interoperable mesh components for large-scale, distributed-memory simulations

    International Nuclear Information System (INIS)

    Devine, K; Leung, V; Diachin, L; Miller, M

    2009-01-01

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications.

  18. Smooth Bézier surfaces over unstructured quadrilateral meshes

    CERN Document Server

    Bercovier, Michel

    2017-01-01

    Using an elegant mixture of geometry, graph theory and linear analysis, this monograph completely solves a problem lying at the interface of Isogeometric Analysis (IgA) and Finite Element Methods (FEM). The recent explosion of IgA, strongly tying Computer Aided Geometry Design to Analysis, does not easily apply to the rich variety of complex shapes that engineers have to design and analyse. Therefore new developments have studied the extension of IgA to unstructured unions of meshes, similar to those one can find in FEM. The following problem arises: given an unstructured planar quadrilateral mesh, construct a C1-surface, by piecewise Bézier or B-Spline patches defined over this mesh. This problem is solved for C1-surfaces defined over plane bilinear Bézier patches, the corresponding results for B-Splines then being simple consequences. The method can be extended to higher-order quadrilaterals and even to three dimensions, and the most recent developments in this direction are also mentioned here.

  19. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, the iterative methods for the linear finite element equations suffer from slow or no convergence. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method, and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, the convergence deteriorates. There is some research addressing this issue, but it is not suitable for cases of complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-material problems, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Some numerical results are included which indicate that the proposed method has robust convergence with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
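
    The sketch below isolates the diagonal-scaling (Jacobi) ingredient only, not the BDD substructuring: on a crude stand-in for a multi-material stiffness matrix whose diagonal varies by several orders of magnitude, preconditioning conjugate gradients with diag(A)^-1 typically cuts the iteration count sharply. The matrix construction and sizes are assumptions.

      # Hedged sketch of diagonal scaling (Jacobi preconditioning) alone, not the
      # Scaled-BDD method: A = S T S with S = diag(sqrt(coeff)) and T the 1-D
      # Laplacian stencil, so A is SPD but badly scaled by the material contrast.
      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg, LinearOperator

      n = 1000
      rng = np.random.default_rng(6)
      coeff = 10.0 ** rng.uniform(0, 6, size=n)              # ~1e6 material contrast

      main = 2.0 * coeff
      off = -np.sqrt(coeff[:-1] * coeff[1:])
      A = diags([off, main, off], [-1, 0, 1], format='csr')
      b = rng.standard_normal(n)

      iters = {'plain': 0, 'jacobi': 0}
      def make_counter(key):
          def callback(_xk):
              iters[key] += 1
          return callback

      _, info_plain = cg(A, b, maxiter=5000, callback=make_counter('plain'))
      M = LinearOperator((n, n), matvec=lambda r: r / main)  # M^-1 = diag(A)^-1
      _, info_jacobi = cg(A, b, M=M, maxiter=5000, callback=make_counter('jacobi'))

      print(iters, info_plain, info_jacobi)                  # Jacobi needs far fewer iterations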

  20. Coarse mesh finite element method for boiling water reactor physics analysis

    International Nuclear Information System (INIS)

    Ellison, P.G.

    1983-01-01

    A coarse mesh method is formulated for the solution of Boiling Water Reactor physics problems using two-group diffusion theory. No fuel assembly cross-section homogenization is required; water gaps, control blades and fuel pins of varying enrichments are treated explicitly. The method combines constrained finite element discretization with infinite lattice supercell trial functions to obtain coarse mesh solutions for which the only approximations are along the boundaries between fuel assemblies. The method is applied to benchmark Boiling Water Reactor problems to obtain both the eigenvalue and detailed flux distributions. The solutions to these problems indicate the method is useful in predicting detailed power distributions and eigenvalues for Boiling Water Reactor physics problems

  1. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    Science.gov (United States)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  2. A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRS

    International Nuclear Information System (INIS)

    Clifford, I; Ivanov, K; Avramova, M.

    2011-01-01

    Coarse mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine mesh single phase flows as well as coarse mesh flows in homogenized regions containing both fluid and solid materials. (author)

  3. Hybrid meshes and domain decomposition for the modeling of oil reservoirs; Maillages hybrides et decomposition de domaine pour la modelisation des reservoirs petroliers

    Energy Technology Data Exchange (ETDEWEB)

    Gaiffe, St

    2000-03-23

    In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate consideration of the singularities requires the use of adapted meshes. After having shown the limits of classic meshes, we present the future prospects offered by hybrid and flexible meshes. Next, we consider the possibilities for generalizing the numerical schemes traditionally used in reservoir simulation and we identify two available approaches: mixed finite elements and U-finite volumes. Since the investigated phenomena are also characterized by different time scales, special treatments in terms of time discretization on various parts of the domain are required. We think that the combination of domain decomposition methods with operator splitting techniques may provide a promising approach to obtain high flexibility for local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows greater flexibility in the management of local space and time steps. To conclude, a priori estimates and error estimates for the two variables of interest, namely the pressure and the velocity, are proposed. (author)

  4. Parallel Fast Multipole Boundary Element Method for crustal dynamics

    International Nuclear Information System (INIS)

    Quevedo, Leonardo; Morra, Gabriele; Mueller, R Dietmar

    2010-01-01

    Crustal faults and sharp material transitions in the crust are usually represented as triangulated surfaces in structural geological models. The complex range of volumes separating such surfaces is typically meshed three-dimensionally in order to solve equations that describe crustal deformation with the finite-difference (FD) or finite-element (FEM) methods. We show here how the Boundary Element Method, combined with the Multipole approach, can revolutionise the calculation of stress and strain, solving the problem of computational scalability from reservoir to basin scales. The Fast Multipole Boundary Element Method (Fast BEM) tackles the difficulty of handling the intricate volume meshes and high resolution of crustal data that has put classical 3D finite approaches in a performance crisis. The two main performance enhancements of this method, namely the reduction of required mesh elements from cubic to quadratic growth with linear size and a linear-logarithmic runtime, reduce the memory and runtime requirements and allow the treatment of a new scale of geodynamic models. This approach was recently tested and applied in a series of papers by [1, 2, 3] for regional and global geodynamics, using KD trees for fast identification of near- and far-field interacting elements, and MPI-parallelised code on distributed memory architectures, and is now in active development for crustal dynamics. As the method is based on a free surface, it allows easy data transfer to geological visualisation tools where only changes in boundaries and material properties are required as input parameters. In addition, easy volume mesh sampling of physical quantities enables direct integration with existing FD/FEM code.

  5. Coarse-mesh discretized low-order quasi-diffusion equations for subregion averaged scalar fluxes

    International Nuclear Information System (INIS)

    Anistratov, D. Y.

    2004-01-01

    In this paper we develop homogenization procedure and discretization for the low-order quasi-diffusion equations on coarse grids for core-level reactor calculations. The system of discretized equations of the proposed method is formulated in terms of the subregion averaged group scalar fluxes. The coarse-mesh solution is consistent with a given fine-mesh discretization of the transport equation in the sense that it preserves a set of average values of the fine-mesh transport scalar flux over subregions of coarse-mesh cells as well as the surface currents, and eigenvalue. The developed method generates numerical solution that mimics the large-scale behavior of the transport solution within assemblies. (authors)

  6. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.

  7. RGG: Reactor geometry (and mesh) generator

    International Nuclear Information System (INIS)

    Jain, R.; Tautges, T.

    2012-01-01

    The reactor geometry (and mesh) generator RGG takes advantage of information about repeated structures in both assembly and core lattices to simplify the creation of geometry and mesh. It is released as open source software as a part of the MeshKit mesh generation library. The methodology operates in three stages. First, assembly geometry models of various types are generated by a tool called AssyGen. Next, the assembly model or models are meshed by using MeshKit tools or the CUBIT mesh generation tool-kit, optionally based on a journal file output by AssyGen. After one or more assembly model meshes have been constructed, a tool called CoreGen uses a copy/move/merge process to arrange the model meshes into a core model. In this paper, we present the current state of tools and new features in RGG. We also discuss the parallel-enabled CoreGen, which in several cases achieves super-linear speedups since the problems fit in available RAM at higher processor counts. Several RGG applications - 1/6 VHTR model, 1/4 PWR reactor core, and a full-core model for Monju - are reported. (authors)

  8. Comparison of three different methods for effective introduction of platelet-rich plasma on PLGA woven mesh.

    Science.gov (United States)

    Lee, Ji-Hye; Nam, Jinwoo; Kim, Hee Joong; Yoo, Jeong Joon

    2015-03-11

    For successful tissue regeneration, effective cell delivery to defect site is very important. Various types of polymer biomaterials have been developed and applied for effective cell delivery. PLGA (poly lactic-co-glycolic acid), a synthetic polymer, is a commercially available and FDA approved material. Platelet-rich plasma (PRP) is an autologous growth factor cocktail containing various growth factors including PDGF, TGFβ-1 and BMPs, and has shown positive effects on cell behaviors. We hypothesized that PRP pretreatment on PLGA mesh using different methods would cause different patterns of platelet adhesion and stages which would modulate cell adhesion and proliferation on the PLGA mesh. In this study, we pretreated PRP on PLGA using three different methods including simple dripping (SD), dynamic oscillation (DO) and centrifugation (CE), then observed the amount of adhered platelets and their activation stage distribution. The highest amount of platelets was observed on CE mesh and calcium treated CE mesh. Moreover, calcium addition after PRP coating triggered dramatic activation of platelets which showed large and flat morphologies of platelets with rich fibrin networks. Human chondrocytes (hCs) and human bone marrow stromal cells (hBMSCs) were next cultured on PRP-pretreated PLGA meshes using different preparation methods. CE mesh showed a significant increase in the initial cell adhesion of hCs and proliferation of hBMSCs compared with SD and DO meshes. The results demonstrated that the centrifugation method can be considered as a promising coating method to introduce PRP on PLGA polymeric material which could improve cell-material interaction using a simple method.

  9. Comparison of three different methods for effective introduction of platelet-rich plasma on PLGA woven mesh

    International Nuclear Information System (INIS)

    Lee, Ji-Hye; Nam, Jinwoo; Kim, Hee Joong; Yoo, Jeong Joon

    2015-01-01

    For successful tissue regeneration, effective cell delivery to the defect site is very important. Various types of polymer biomaterials have been developed and applied for effective cell delivery. PLGA (poly(lactic-co-glycolic acid)), a synthetic polymer, is a commercially available and FDA-approved material. Platelet-rich plasma (PRP) is an autologous growth factor cocktail containing various growth factors, including PDGF, TGFβ-1 and BMPs, and has shown positive effects on cell behavior. We hypothesized that PRP pretreatment of PLGA mesh using different methods would cause different patterns of platelet adhesion and activation stages, which would modulate cell adhesion and proliferation on the PLGA mesh. In this study, we pretreated PLGA with PRP using three different methods, simple dripping (SD), dynamic oscillation (DO) and centrifugation (CE), and then observed the amount of adhered platelets and their activation stage distribution. The highest amount of platelets was observed on the CE mesh and the calcium-treated CE mesh. Moreover, calcium addition after PRP coating triggered dramatic activation of platelets, which showed large, flat platelet morphologies with rich fibrin networks. Human chondrocytes (hCs) and human bone marrow stromal cells (hBMSCs) were next cultured on PRP-pretreated PLGA meshes prepared with the different methods. The CE mesh showed a significant increase in the initial cell adhesion of hCs and the proliferation of hBMSCs compared with the SD and DO meshes. The results demonstrated that the centrifugation method can be considered a promising coating method for introducing PRP onto PLGA polymeric material, improving the cell-material interaction in a simple way. (paper)

  10. Interior-Point Methods for Linear Programming: A Review

    Science.gov (United States)

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  11. A nonlinear equivalent circuit method for analysis of passive intermodulation of mesh reflectors

    Directory of Open Access Journals (Sweden)

    Jiang Jie

    2014-08-01

    Full Text Available Passive intermodulation (PIM) has gradually become a serious source of electromagnetic interference with the development of high-power, high-sensitivity RF/microwave communication systems, especially large deployable mesh reflector antennas. This paper proposes a field-circuit coupling method to analyze the PIM level of mesh reflectors. Because mesh reflectors contain many metal–metal (MM) contacts, contact nonlinearity is the main source of PIM generation. To analyze these potential PIM sources, an equivalent circuit model including nonlinear components is constructed for a single MM contact, so that the transient current through the MM contact point induced by incident electromagnetic waves can be calculated. Taking this electric current as a new electromagnetic wave source, the far-field scattering can be obtained by electromagnetic numerical methods or the communication link method. Finally, a comparison between simulation and experimental results is presented to verify the validity of the proposed method.
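
    The role of the nonlinear contact can be pictured with a minimal sketch: a weakly nonlinear metal–metal junction driven by two carrier tones produces third-order intermodulation products at 2f1−f2 and 2f2−f1. The polynomial I–V coefficients, tone frequencies and amplitudes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative two-tone excitation; frequencies and amplitudes are assumptions.
fs = 1.0e6                      # sample rate (Hz)
t = np.arange(0, 0.02, 1.0 / fs)
f1, f2 = 100e3, 110e3
v = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# Weakly nonlinear contact current, i = g1*v + g3*v**3 (hypothetical coefficients).
g1, g3 = 1.0, 0.05
i = g1 * v + g3 * v**3

# The cubic term produces third-order PIM products at 2*f1 - f2 and 2*f2 - f1.
spectrum = np.abs(np.fft.rfft(i)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

for f_im in (2 * f1 - f2, 2 * f2 - f1):
    k = np.argmin(np.abs(freqs - f_im))
    print(f"PIM product near {freqs[k] / 1e3:.0f} kHz, amplitude {spectrum[k]:.3e}")
```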

  12. Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling

    International Nuclear Information System (INIS)

    Gorman, G.J.; Pain, Ch. C.; Oliveira, C.R.E. de; Umpleby, A.P.; Goddard, A.J.H.

    2003-01-01

    In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry), or internal boundaries (regions), of the domain. The objective is to obtain a mesh that has a uniform interpolation error in every direction and elements of good shape quality. This is accomplished with the use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality, which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)
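
    A common way to turn a recovered Hessian into such an anisotropic metric is to take its eigen-decomposition, replace the eigenvalues by their absolute values, convert them to directional edge lengths, and clamp. The sketch below follows that generic recipe; the error tolerance and length bounds are illustrative assumptions, not the scaling used in the paper.

```python
import numpy as np

def hessian_metric(H, eps=1e-2, h_min=1e-3, h_max=1.0):
    """Build an anisotropic metric tensor from a recovered Hessian.

    eps is a target interpolation-error tolerance; h_min/h_max clamp the
    admissible edge lengths.  All three are illustrative parameters.
    """
    evals, evecs = np.linalg.eigh(0.5 * (H + H.T))       # symmetrize defensively
    lam = np.abs(evals)
    # Edge length along each eigendirection ~ sqrt(eps / |lambda|), clamped.
    h = np.clip(np.sqrt(eps / np.maximum(lam, 1e-30)), h_min, h_max)
    # Metric eigenvalues are 1/h^2; reassemble M = V diag(1/h^2) V^T.
    return evecs @ np.diag(1.0 / h**2) @ evecs.T

# Example: a solution varying much faster in x than in y.
H = np.array([[400.0, 0.0],
              [0.0,   4.0]])
print(hessian_metric(H))
```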

  13. A Tissue Relevance and Meshing Method for Computing Patient-Specific Anatomical Models in Endoscopic Sinus Surgery Simulation

    Science.gov (United States)

    Audette, M. A.; Hertel, I.; Burgert, O.; Strauss, G.

    This paper presents on-going work on a method for determining which subvolumes of a patient-specific tissue map, extracted from CT data of the head, are relevant to simulating endoscopic sinus surgery of that individual, and for decomposing these relevant tissues into triangles and tetrahedra whose mesh size is well controlled. The overall goal is to limit the complexity of the real-time biomechanical interaction while ensuring the clinical relevance of the simulation. Relevant tissues are determined as the union of the pathology present in the patient, of critical tissues deemed to be near the intended surgical path or pathology, and of bone and soft tissue near the intended path, pathology or critical tissues. The processing of tissues, prior to meshing, is based on the Fast Marching method applied under various guises, in a conditional manner that is related to tissue classes. The meshing is based on an adaptation of a meshing method of ours, which combines the Marching Tetrahedra method and the discrete Simplex mesh surface model to produce a topologically faithful surface mesh with well controlled edge and face size as a first stage, and Almost-regular Tetrahedralization of the same prescribed mesh size as a last stage.

  14. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

    We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time \\tau is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter \\tau. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...
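
    For orientation, a minimal sketch of a 1D moving-mesh PDE of the kind referred to above (often written x_t = (1/τ)(M x_ξ)_ξ) is given below, stepped explicitly with a fixed relaxation time τ. The monitor function, time step and τ value are illustrative assumptions, and the adaptive recomputation of τ proposed by the authors is not reproduced.

```python
import numpy as np

def mmpde_step(x, monitor, tau, dt):
    """One explicit step of a 1D moving-mesh PDE, x_t = (1/tau) * d/dxi (M dx/dxi).

    x       : current node positions (endpoints held fixed)
    monitor : monitor function M evaluated at the nodes
    tau     : relaxation time (constant here; adaptive in the paper)
    dt      : time step
    """
    n = len(x)
    dxi = 1.0 / (n - 1)                                   # uniform computational coordinate
    x_new = x.copy()
    M_half = 0.5 * (monitor[1:] + monitor[:-1])           # monitor at cell midpoints
    flux = M_half * (x[1:] - x[:-1]) / dxi                # M * dx/dxi at midpoints
    x_new[1:-1] += dt / tau * (flux[1:] - flux[:-1]) / dxi
    return x_new

# Example: cluster nodes near a steep front.  In a real solver the physical PDE
# would update u on the moving mesh; here a fixed profile stands in for it.
x = np.linspace(0.0, 1.0, 21)
for _ in range(1000):
    u = np.tanh(50 * (x - 0.5))
    monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)       # arc-length-type monitor
    x = mmpde_step(x, monitor, tau=1e-2, dt=2e-7)
print(np.round(x, 3))
```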

  15. Aranha: a 2D mesh generator for triangular finite elements

    International Nuclear Information System (INIS)

    Fancello, E.A.; Salgado, A.C.; Feijoo, R.A.

    1990-01-01

    A method for generating unstructured meshes for linear and quadratic triangular finite elements is described in this paper. Some topics on the C language data structure used in the development of the program Aranha are also presented. The applicability for adaptive remeshing is shown and finally several examples are included to illustrate the performance of the method in irregular connected planar domains. (author)

  16. Vertex Normals and Face Curvatures of Triangle Meshes

    KAUST Repository

    Sun, Xiang; Jiang, Caigui; Wallner, Johannes; Pottmann, Helmut

    2016-01-01

    This study contributes to the discrete differential geometry of triangle meshes, in combination with discrete line congruences associated with such meshes. In particular we discuss when a congruence defined by linear interpolation of vertex normals

  17. The Role of Chronic Mesh Infection in Delayed-Onset Vaginal Mesh Complications or Recurrent Urinary Tract Infections: Results From Explanted Mesh Cultures.

    Science.gov (United States)

    Mellano, Erin M; Nakamura, Leah Y; Choi, Judy M; Kang, Diana C; Grisales, Tamara; Raz, Shlomo; Rodriguez, Larissa V

    2016-01-01

    Vaginal mesh complications necessitating excision are increasingly prevalent. We aim to study whether subclinical chronically infected mesh contributes to the development of delayed-onset mesh complications or recurrent urinary tract infections (UTIs). Women undergoing mesh removal from August 2013 through May 2014 were identified by surgical code for vaginal mesh removal. Only women undergoing removal of anti-incontinence mesh were included. Exclusion criteria included any women undergoing simultaneous prolapse mesh removal. We abstracted preoperative and postoperative information from the medical record and compared mesh culture results from patients with and without mesh extrusion, de novo recurrent UTIs, and delayed-onset pain. One hundred seven women with only anti-incontinence mesh removed were included in the analysis. Onset of complications after mesh placement was within the first 6 months in 70 (65%) of 107 and delayed (≥6 months) in 37 (35%) of 107. A positive culture from the explanted mesh was obtained from 82 (77%) of 107 patients, and 40 (37%) of 107 were positive with potential pathogens. There were no significant differences in culture results when comparing patients with delayed-onset versus immediate pain, extrusion with no extrusion, and de novo recurrent UTIs with no infections. In this large cohort of patients with mesh removed for a diverse array of complications, cultures of the explanted vaginal mesh demonstrate frequent low-density bacterial colonization. We found no differences in culture results from women with delayed-onset pain versus acute pain, vaginal mesh extrusions versus no extrusions, or recurrent UTIs using standard culture methods. Chronic prosthetic infections in other areas of medicine are associated with bacterial biofilms, which are resistant to typical culture techniques. Further studies using culture-independent methods are needed to investigate the potential role of chronic bacterial infections in delayed vaginal mesh

  18. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller–Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
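
    The sparse-map idea itself is easy to picture: a map from one index set to another stored as, for each source index, the set of coupled target indices, with inversion, chaining and intersection as the elementary operations. The toy sketch below illustrates only this data-structure level with made-up index sets; the actual implementation described in the paper operates on tensor block sparsity in a compiled code library.

```python
from collections import defaultdict

def chain(map_ab, map_bc):
    """Chain A->B with B->C to obtain A->C (union over intermediate indices)."""
    map_ac = defaultdict(set)
    for a, bs in map_ab.items():
        for b in bs:
            map_ac[a] |= map_bc.get(b, set())
    return dict(map_ac)

def invert(map_ab):
    """Invert A->B into B->A."""
    map_ba = defaultdict(set)
    for a, bs in map_ab.items():
        for b in bs:
            map_ba[b].add(a)
    return dict(map_ba)

def intersect(m1, m2):
    """Element-wise intersection of two maps over the same source index set."""
    return {a: m1.get(a, set()) & m2.get(a, set()) for a in set(m1) | set(m2)}

# Made-up example: atoms -> nearby basis shells, shells -> auxiliary functions.
atom_to_shell = {0: {0, 1}, 1: {1, 2}}
shell_to_aux = {0: {10, 11}, 1: {11, 12}, 2: {12, 13}}
print(chain(atom_to_shell, shell_to_aux))   # atoms -> auxiliary functions
print(invert(atom_to_shell))                # shells -> atoms
```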

  19. A Nonlinear Dynamic Model and Free Vibration Analysis of Deployable Mesh Reflectors

    Science.gov (United States)

    Shi, H.; Yang, B.; Thomson, M.; Fang, H.

    2011-01-01

    This paper presents a dynamic model of deployable mesh reflectors, in which geometric and material nonlinearities of such a space structure are fully described. Then, by linearization around an equilibrium configuration of the reflector structure, a linearized model is obtained. With this linearized model, the natural frequencies and mode shapes of a reflector can be computed. The nonlinear dynamic model of deployable mesh reflectors is verified by using commercial finite element software in numerical simulation. As shall be seen, the proposed nonlinear model is useful for shape (surface) control of deployable mesh reflectors under thermal loads.
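
    The linearized-model step described above amounts, once stiffness and mass matrices K and M have been assembled about the equilibrium configuration, to the generalized eigenproblem K φ = ω² M φ. The sketch below illustrates that step on a made-up two-degree-of-freedom system; the matrices are not from the reflector model.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical linearized stiffness and mass matrices about an equilibrium
# configuration (a 2-DOF stand-in for the reflector structure).
K = np.array([[ 50.0, -20.0],
              [-20.0,  30.0]])
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Generalized eigenproblem K * phi = omega^2 * M * phi.
omega2, modes = eigh(K, M)
freqs_hz = np.sqrt(omega2) / (2.0 * np.pi)
print("natural frequencies (Hz):", np.round(freqs_hz, 3))
print("mode shapes (columns):\n", np.round(modes, 3))
```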

  20. Convergence diagnostics for Eigenvalue problems with linear regression model

    International Nuclear Information System (INIS)

    Shi, Bo; Petrovic, Bojan

    2011-01-01

    Although the Monte Carlo method has been extensively used for criticality/eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or even disregard it. We propose to employ the detailed cycle-by-cycle local flux evolution obtained by using the mesh tally mechanism to assess source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We demonstrate this method on two problems and obtain promising diagnostic results. (author)
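
    A minimal sketch of the per-mesh regression idea is given below: fit a straight line to each cell's cycle-by-cycle tally and declare global convergence only when every fitted slope is negligible relative to the cell mean. The threshold and the synthetic tally data are illustrative assumptions, not the authors' criterion.

```python
import numpy as np

def mesh_converged(tally_history, slope_tol=0.05):
    """Flag convergence by fitting a line to each mesh cell's cycle-by-cycle tally.

    tally_history has shape (n_cycles, n_cells).  A cell passes when the total
    drift implied by its fitted slope is small relative to its mean value; the
    global criterion requires all cells to pass.  slope_tol is illustrative.
    """
    n_cycles, _ = tally_history.shape
    cycles = np.arange(n_cycles)
    slopes = np.polyfit(cycles, tally_history, deg=1)[0]   # slope per cell
    means = np.maximum(tally_history.mean(axis=0), 1e-30)
    relative_drift = np.abs(slopes) * n_cycles / means
    return np.all(relative_drift < slope_tol), relative_drift

# Synthetic data: one region still drifting, the others statistically flat.
rng = np.random.default_rng(0)
cycles, cells = 200, 5
flat = 1.0 + 0.01 * rng.standard_normal((cycles, cells - 1))
drifting = np.linspace(0.8, 1.0, cycles)[:, None] + 0.01 * rng.standard_normal((cycles, 1))
converged, drift = mesh_converged(np.hstack([flat, drifting]))
print(converged, np.round(drift, 3))
```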

  1. Mesh Plug Repair of Inguinal Hernia; Single Surgeon Experience

    Directory of Open Access Journals (Sweden)

    Ahmet Serdar Karaca

    2013-10-01

    Full Text Available Aim: Mesh repair of inguinal hernia has been shown to be an effective and reliable method. In this study, a single surgeon's experience with the plug-mesh method of inguinal hernia repair is reported. Material and Method: 587 patients underwent plug-mesh repair of inguinal hernia; preoperative age, body mass index and comorbid disease were recorded on a standard form. For all patients, preoperative and postoperative hernia classification, duration of operation, antibiotic use, perioperative complications, early and late postoperative complications, infection, recurrence rates, time to return to normal daily activity, and postoperative pain (verbal pain scales) were evaluated, together with long-term pain. The presence of wound infection was assessed by the presence of purulent discharge from the incision. Pain status was measured with a visual analog scale. Results: 587 patients underwent plug-mesh repair of primary inguinal hernia; 439 (74%) of them completed follow-up. Patients' ages ranged from 18 to 86 years, with a mean of 47±18.07. The follow-up period ranged from 3 to 55 months, with a mean of 28.2±13.4 months. Mean duration of surgery was 35.07±4.00 min (range 22-52 min). Regarding complications, recurrence developed in 2 patients (0.5%), hematoma in 6 patients (1.4%), infection in 11 patients (2.5%), and long-term groin pain in 4 patients (0.9%). Discussion: In our experience, plug-mesh repair of primary inguinal hernia is safe and effective and can be used with low recurrence and complication rates.

  2. Comments on the comparison of global methods for linear two-point boundary value problems

    International Nuclear Information System (INIS)

    de Boor, C.; Swartz, B.

    1977-01-01

    A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of "condensation of parameters" can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear systems associated with two competing collocation methods; namely, collocation with smooth splines and collocation of the equivalent first-order system with continuous piecewise polynomials

  3. Linear-scaling implementation of the direct random-phase approximation

    International Nuclear Information System (INIS)

    Kállay, Mihály

    2015-01-01

    We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller–Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10 000 basis functions on a single processor

  4. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks

    Directory of Open Access Journals (Sweden)

    Raja Jurdak

    2008-11-01

    Full Text Available Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  5. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.

    Science.gov (United States)

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-11-24

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  6. Capacity Analysis of Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    M. I. Gumel

    2012-06-01

    Full Text Available Next-generation wireless networks have experienced great development with the emergence of wireless mesh networks (WMNs), which can be regarded as a realistic solution that provides wireless broadband access. The limited available bandwidth makes capacity analysis of the network essential. While the network offers broadband wireless access to community and enterprise users, the problems that limit the network capacity must be addressed to exploit the optimum network performance. The wireless mesh network capacity analysis shows that the throughput of each mesh node degrades on the order of 1/n with an increasing number of nodes (n) in a linear topology. The degradation is found to be higher in a fully meshed network as a result of increased interference and MAC-layer contention in the network.
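
    The 1/n behaviour can be made concrete with a back-of-the-envelope estimate: in a chain topology where every node relays traffic towards a single gateway, the bottleneck collision domain near the gateway carries all n flows, so each node receives roughly W/n of the nominal link capacity W. The numbers below are illustrative, not measurements from the paper.

```python
# Rough per-node throughput estimate for a linear (chain) mesh topology in which
# every node forwards traffic toward a single gateway.  W is the nominal link
# capacity; the bottleneck collision domain carries traffic from all n nodes.
W = 54.0  # Mbit/s, an illustrative 802.11g-class link rate
for n in (2, 4, 8, 16, 32):
    print(f"n = {n:2d} nodes -> per-node throughput ~ {W / n:5.2f} Mbit/s")
```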

  7. On the economical solution method for a system of linear algebraic equations

    Directory of Open Access Journals (Sweden)

    Jan Awrejcewicz

    2004-01-01

    Full Text Available The present work proposes a novel optimal and exact method of solving large systems of linear algebraic equations. In the approach under consideration, the solution of a system of linear algebraic equations is found as the point of intersection of hyperplanes, which needs a minimal amount of computer operating storage. Two examples are given. In the first example, the boundary value problem for a three-dimensional stationary heat transfer equation in a parallelepiped in ℝ³ is considered, where boundary conditions of the first, second, or third kind, or their combinations, are taken into account. The governing differential equations are reduced to algebraic ones with the help of the finite element and boundary element methods with different meshes applied. The obtained results are compared with known analytical solutions. The second example concerns computation of a nonhomogeneous, shallow, physically and geometrically nonlinear shell subject to a transversal uniformly distributed load. The partial differential equations are reduced to a system of nonlinear algebraic equations with an error of O(h_x1^2 + h_x2^2). The linearization process is realized through either the Newton method or differentiation with respect to a parameter. In consequence, the relations of the boundary condition variations along the shell side and the conditions for the solution matching are reported.
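
    One low-storage way to realize the hyperplane-intersection idea is a Kaczmarz-style sweep that cyclically projects the current iterate onto each hyperplane a_i·x = b_i, touching only one row at a time. The sketch below shows that classical scheme on a small made-up system; it is offered as an illustration of the idea, not as the authors' exact algorithm.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Solve A x = b by cyclically projecting onto the hyperplanes a_i . x = b_i.

    Only one row of A is needed at a time, so the working storage beyond A and b
    is a single solution vector -- in the spirit of the low-memory hyperplane
    intersection idea, though not the authors' exact scheme.
    """
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i      # orthogonal projection
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = kaczmarz(A, b)
print(x, np.allclose(A @ x, b, atol=1e-6))
```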

  8. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    Science.gov (United States)

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
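
    The two ingredients named above, straight-line propagation through mesh cells and energy deposition under the continuous slowing-down approximation, can be sketched in one dimension as below. The cell layout, step size and stopping-power model are made-up illustrative choices, not the code's physics data.

```python
import numpy as np

def track_particle(energy, position, direction, cell_edges, stopping_power, step=0.01):
    """Straight-line tracking with continuous slowing down on a 1D cell mesh.

    energy         : initial particle energy (MeV)
    cell_edges     : monotone array of cell boundaries (cm)
    stopping_power : callable S(E) giving energy loss per unit length (MeV/cm);
                     the model used below is a crude illustrative fit, not data.
    Returns the energy deposited in each cell.
    """
    deposit = np.zeros(len(cell_edges) - 1)
    while energy > 1e-3 and cell_edges[0] <= position <= cell_edges[-1]:
        cell = np.searchsorted(cell_edges, position) - 1
        cell = min(max(cell, 0), len(deposit) - 1)
        dE = min(stopping_power(energy) * step, energy)   # CSDA energy loss
        deposit[cell] += dE
        energy -= dE
        position += direction * step                      # straight-line move
    return deposit

# Example: 3.5 MeV alpha moving to the right through four equal cells.
edges = np.linspace(0.0, 0.2, 5)                          # cm
S = lambda E: 30.0 * np.sqrt(max(E, 1e-3))                # hypothetical S(E), MeV/cm
print(np.round(track_particle(3.5, 0.0, +1.0, edges, S), 3))
```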

  9. Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes

    Science.gov (United States)

    Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.

    2015-01-01

    Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems that increase in complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks co-exist with unsteady waves that display a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the

  10. Transonic Airfoil Flow Simulation. Part I: Mesh Generation and Inviscid Method

    Directory of Open Access Journals (Sweden)

    Vladimir CARDOS

    2010-06-01

    Full Text Available A calculation method for the subsonic and transonic viscous flow over an airfoil using the displacement surface concept is described. Part I presents a mesh generation method for the computational grid and a finite volume method for the time-dependent Euler equations. The inviscid solution is used for the inviscid-viscous coupling procedure presented in Part II.

  11. Capacity analysis of wireless mesh networks | Gumel | Nigerian ...

    African Journals Online (AJOL)

    ... number of nodes (n) in a linear topology. The degradation is found to be higher in a fully mesh network as a result of increase in interference and MAC layer contention in the network. Key words: Wireless mesh network (WMN), Adhoc network, Network capacity analysis, Bottleneck collision domain, Medium access control ...

  12. Simulation of geothermal water extraction in heterogeneous reservoirs using dynamic unstructured mesh optimisation

    Science.gov (United States)

    Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.

    2017-12-01

    It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near-wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretized using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretize space, and the spatial location of the well is specified via a line vector, ensuring its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by the use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields (preserving the geometry of the geologic domains), such as pressure, velocity or temperature; this also increases the quality of the solutions by placing higher resolution where required

  13. A simplified density matrix minimization for linear scaling self-consistent field theory

    International Nuclear Information System (INIS)

    Challacombe, M.

    1999-01-01

    A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained floating-point operation rates as high as 50% of theoretical peak, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation. copyright 1999 American Institute of Physics
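
    The McWeeny purification mentioned above is the recursion P ← 3P² − 2P³, which drives the eigenvalues of a nearly idempotent density matrix to 0 and 1. The dense-matrix sketch below illustrates the recursion on a made-up 2×2 matrix; a linear-scaling code would apply the same recursion with thresholded sparse matrix products.

```python
import numpy as np

def mcweeny_purify(P, tol=1e-10, max_iter=100):
    """Iterate P <- 3 P^2 - 2 P^3 until P is idempotent (P^2 = P).

    Dense matrices are used for clarity; linear-scaling implementations apply
    the same recursion with thresholded sparse matrix algebra.
    """
    for _ in range(max_iter):
        P2 = P @ P
        if np.linalg.norm(P2 - P) < tol:
            break
        P = 3.0 * P2 - 2.0 * P2 @ P
    return P

# Example: a nearly idempotent 2x2 matrix (occupations slightly off 0 and 1).
P0 = np.array([[0.95, 0.10],
               [0.10, 0.05]])
P = mcweeny_purify(P0)
print(np.round(P, 6))
print("idempotency error:", np.linalg.norm(P @ P - P))
```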

  14. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  15. Application of coarse-mesh methods to fluid dynamics equations

    International Nuclear Information System (INIS)

    Romstedt, P.; Werner, W.

    1977-01-01

    An Asymmetric Weighted Residual (ASWR) method for fluid dynamics equations is described. It leads to local operators with a 7-point Finite Difference (FD) structure, which is independent of the degree of the approximating polynomials. A 1-dimensional problem was solved by both this ASWR-method and a commonly used FD-method. The numerical results demonstrate that the ASWR-method combines high accuracy on a coarse computational mesh with short computing time per space point. The possibility of using fewer space points consequently brings about a considerable reduction in total running time for the ASWR-method as compared with conventional FD-methods. (orig.)

  16. Computerized implementation of higher-order electron-correlation methods and their linear-scaling divide-and-conquer extensions.

    Science.gov (United States)

    Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi

    2017-11-05

    We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theory (MPPT) methods, as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and of the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than the conventional implementations. © 2017 Wiley Periodicals, Inc.

  17. Linear-scaling evaluation of the local energy in quantum Monte Carlo

    International Nuclear Information System (INIS)

    Austin, Brian; Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Lester, William A. Jr.

    2006-01-01

    For atomic and molecular quantum Monte Carlo calculations, most of the computational effort is spent in the evaluation of the local energy. We describe a scheme for reducing the computational cost of the evaluation of the Slater determinants and correlation function for the correlated molecular orbital (CMO) ansatz. A sparse representation of the Slater determinants makes possible efficient evaluation of molecular orbitals. A modification to the scaled distance function facilitates a linear scaling implementation of the Schmidt-Moskowitz-Boys-Handy (SMBH) correlation function that preserves the efficient matrix multiplication structure of the SMBH function. For the evaluation of the local energy, these two methods lead to asymptotic linear scaling with respect to the molecule size

  18. ZONE, Finite Elements Method Quadrilateral and Triangular Mesh Generator for 2-D Axisymmetric Geometry

    International Nuclear Information System (INIS)

    Burger, M. J.

    1981-01-01

    1 - Description of problem or function: The ZONE program is a finite element mesh generator which produces the nodes and element description of any two-dimensional geometry. The geometry is divided into a mesh of quadrilateral and triangular zones defined by node points taken in a counter-clockwise sequence. The zones are arranged sequentially in an ordered march through the geometry. The order can be chosen so that the minimum bandwidth is obtained. The mesh that is generated can be used as input to any two-dimensional as well as any axisymmetric structure program. 2 - Method of solution: The basic concept used is the definition of a two-dimensional structure by the intersection of two sets of lines which describe the geometric and material boundaries. A set of lines called meridians defines the geometric and material boundaries; the meridians generally run in the same direction. Another set of linear line segments, called rays, which intersect the meridians, is also defined at the material and geometric boundaries. The section of the structure between successive rays is called a region. The ray segment between any two consecutive ray-meridian intersections or void areas in the structure is called a layer and is described as passing through, or bounding, a material. The boundaries can be directly defined as a sequence of straight line segments or can be computed in terms of elliptic segments or circular arcs. A meridian or ray can also be made to follow a previously-defined meridian or ray at a fixed distance by invoking an offset option. 3 - Restrictions on the complexity of the problem: The following are limited only by a DIMENSION statement. The code currently has maxima of: 100 coordinate points defining a meridian or ray, 40 meridians, 40 layers. There are no limits on the number of zones or nodes for any problem.

  19. LR: Compact connectivity representation for triangle meshes

    Energy Technology Data Exchange (ETDEWEB)

    Gurung, T; Luffel, M; Lindstrom, P; Rossignac, J

    2011-01-28

    We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.

  20. A Wrapping Method for Inserting Titanium Micro-Mesh Implants in the Reconstruction of Blowout Fractures

    Directory of Open Access Journals (Sweden)

    Tae Joon Choi

    2016-01-01

    Full Text Available Titanium micro-mesh implants are widely used in orbital wall reconstructions because they have several advantageous characteristics. However, the rough and irregular marginal spurs of the cut edges of the titanium mesh sheet impede the efficacious and minimally traumatic insertion of the implant, because these spurs may catch or hook the orbital soft tissue, skin, or conjunctiva during the insertion procedure. In order to prevent this problem, we developed an easy method of inserting a titanium micro-mesh, in which it is wrapped with the aseptic transparent plastic film that is used to pack surgical instruments or is attached to one side of the inner suture package. Fifty-four patients underwent orbital wall reconstruction using a transconjunctival or transcutaneous approach. The wrapped implant was easily inserted without catching or injuring the orbital soft tissue, skin, or conjunctiva. In most cases, the implant was inserted in one attempt. Postoperative computed tomographic scans showed excellent placement of the titanium micro-mesh and adequate anatomic reconstruction of the orbital walls. This wrapping insertion method may be useful for making the insertion of titanium micro-mesh implants in the reconstruction of orbital wall fractures easier and less traumatic.

  1. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    Science.gov (United States)

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2013-05-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio
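
    The representation described above, intensities stored at mesh nodes and linearly interpolated inside each tetrahedron, reduces to evaluating barycentric coordinates of the query point. A small sketch of that interpolation step is given below with a made-up tetrahedron and nodal values.

```python
import numpy as np

def barycentric_interpolate(vertices, node_values, point):
    """Linearly interpolate nodal intensities at a point inside a tetrahedron.

    vertices    : (4, 3) array of tetrahedron corner coordinates
    node_values : (4,) intensities at those corners
    point       : (3,) query position
    """
    # Solve for barycentric weights: sum(w_i * v_i) = point and sum(w_i) = 1.
    A = np.vstack([vertices.T, np.ones(4)])          # 4x4 system
    rhs = np.append(point, 1.0)
    w = np.linalg.solve(A, rhs)
    if np.any(w < -1e-12):
        raise ValueError("point lies outside this tetrahedron")
    return w @ node_values

verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])
print(barycentric_interpolate(verts, values, np.array([0.25, 0.25, 0.25])))
```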

  2. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    International Nuclear Information System (INIS)

    Boutchko, R; Gullberg, G T; Sitek, A

    2013-01-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio

  3. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
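
    The iterative step mentioned above, a few symmetric Gauss-Seidel sweeps instead of an exact solve of the global error problem, is sketched generically below on a small symmetric positive-definite system standing in for the hierarchical-basis error equations; the matrix and sweep count are illustrative.

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x0=None, sweeps=3):
    """A few symmetric Gauss-Seidel sweeps (forward then backward) on A x = b.

    In the mesh-adaptation context, A would stand for the hierarchical-basis
    error system; a handful of sweeps already yields a usable approximation.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(sweeps):
        for i in range(n):                       # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        for i in reversed(range(n)):             # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(symmetric_gauss_seidel(A, b, sweeps=3), "exact:", np.linalg.solve(A, b))
```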

  4. Computational methods and modeling. 3. Adaptive Mesh Refinement for the Nodal Integral Method and Application to the Convection-Diffusion Equation

    International Nuclear Information System (INIS)

    Torej, Allen J.; Rizwan-Uddin

    2001-01-01

    The nodal integral method (NIM) has been developed for several problems, including the Navier-Stokes equations, the convection-diffusion equation, and the multigroup neutron diffusion equations. The coarse-mesh efficiency of the NIM is not fully realized in problems characterized by a wide range of spatial scales. However, the combination of adaptive mesh refinement (AMR) capability with the NIM can recover the coarse-mesh efficiency by allowing high degrees of resolution in specific localized areas where it is needed and by using a lower resolution everywhere else. Furthermore, certain features of the NIM can be fruitfully exploited in the application of the AMR process. In this paper, we outline a general approach to couple nodal schemes with AMR and then apply it to the convection-diffusion (energy) equation. The development of the NIM with AMR capability (NIM-AMR) is based on the well-known Berger-Oliger method for structured AMR. In general, the main components of all AMR schemes are (1) the solver; (2) the level-grid hierarchy; (3) the selection algorithm; (4) the communication procedures; and (5) the governing algorithm. The first component, the solver, consists of the numerical scheme for the governing partial differential equations and the algorithm used to solve the resulting system of discrete algebraic equations. In the case of the NIM-AMR, the solver is the iterative approach to the solution of the set of discrete equations obtained by applying the NIM. Furthermore, in the NIM-AMR, the level-grid hierarchy (the second component) is based on the Hierarchical Adaptive Mesh Refinement (HAMR) system, and hence the details of the hierarchy are omitted here. In the selection algorithm, regions of the domain that require mesh refinement are identified; the criterion to select regions for refinement can be based on the magnitude of the gradient or on the Richardson truncation error estimate. Although an excellent choice for the selection criterion, the Richardson
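
    The selection step can be pictured with a small sketch: flag a cell when either the local gradient magnitude or a Richardson-type estimate (difference between the fine solution and a coarsened-and-reinterpolated copy) exceeds a threshold. The 1D field and both thresholds below are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def flag_cells(u_fine, threshold_grad=0.5, threshold_rich=0.05):
    """Flag 1D cells for refinement using gradient and Richardson-type criteria.

    u_fine is the solution on the current mesh.  The Richardson-style estimate
    compares the fine solution with a coarsened-and-reinterpolated copy; both
    thresholds are illustrative, not values from the paper.
    """
    grad = np.abs(np.gradient(u_fine))
    # Coarsen by averaging pairs, then interpolate back to the fine cells.
    u_coarse = 0.5 * (u_fine[0::2] + u_fine[1::2])
    u_back = np.repeat(u_coarse, 2)
    richardson = np.abs(u_fine - u_back)
    return (grad > threshold_grad) | (richardson > threshold_rich)

x = np.linspace(0.0, 1.0, 64)
u = np.tanh(40 * (x - 0.5))          # sharp front that should trigger refinement
print("cells flagged for refinement:", np.flatnonzero(flag_cells(u)))
```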

  5. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

    Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m.

  6. Predicting mesh density for adaptive modelling of the global atmosphere.

    Science.gov (United States)

    Weller, Hilary

    2009-11-28

    The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.

  7. Boltzmann Solver with Adaptive Mesh in Velocity Space

    International Nuclear Information System (INIS)

    Kolobov, Vladimir I.; Arslanbekov, Robert R.; Frolova, Anna A.

    2011-01-01

    We describe the implementation of direct Boltzmann solver with Adaptive Mesh in Velocity Space (AMVS) using quad/octree data structure. The benefits of the AMVS technique are demonstrated for the charged particle transport in weakly ionized plasmas where the collision integral is linear. We also describe the implementation of AMVS for the nonlinear Boltzmann collision integral. Test computations demonstrate both advantages and deficiencies of the current method for calculations of narrow-kernel distributions.

  8. Topological patterns of mesh textures in serpentinites

    Science.gov (United States)

    Miyazawa, M.; Suzuki, A.; Shimizu, H.; Okamoto, A.; Hiraoka, Y.; Obayashi, I.; Tsuji, T.; Ito, T.

    2017-12-01

    Serpentinization is a hydration process that forms serpentine minerals and magnetite within the oceanic lithosphere. Microfractures crosscut these minerals during the reactions, and the structures look like mesh textures. It has been known that the patterns of microfractures and the system evolutions are affected by the hydration reaction and fluid transport in fractures and within matrices. This study aims at quantifying the topological patterns of the mesh textures and understanding possible conditions of fluid transport and reaction during serpentinization in the oceanic lithosphere. Two-dimensional simulation by the distinct element method (DEM) generates fracture patterns due to serpentinization. The microfracture patterns are evaluated by persistent homology, which measures features of connected components of a topological space and encodes multi-scale topological features in the persistence diagrams. The persistence diagrams of the different mesh textures are evaluated by principal component analysis to bring out the strong patterns of persistence diagrams. This approach help extract feature values of fracture patterns from high-dimensional and complex datasets.

  9. Partitioning of unstructured meshes for load balancing

    International Nuclear Information System (INIS)

    Martin, O.C.; Otto, S.W.

    1994-01-01

    Many large-scale engineering and scientific calculations involve repeated updating of variables on an unstructured mesh. To do these types of computations on distributed memory parallel computers, it is necessary to partition the mesh among the processors so that the load balance is maximized and inter-processor communication time is minimized. This can be approximated by the problem of partitioning a graph so as to obtain a minimum cut, a well-studied combinatorial optimization problem. Graph partitioning algorithms are discussed that give good but not necessarily optimum solutions. These algorithms include local search methods, recursive spectral bisection, and more general-purpose methods such as simulated annealing. It is shown that a general procedure enables simulated annealing to be combined with Kernighan-Lin. The resulting algorithm is both very fast and extremely effective. (authors) 23 refs., 3 figs., 1 tab
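
    One of the ingredients named above, spectral bisection, splits a graph using the Fiedler vector of its Laplacian. The sketch below shows only that ingredient on a tiny made-up graph; it is not the combined annealing/Kernighan-Lin algorithm of the paper.

```python
import numpy as np

def spectral_bisection(adjacency):
    """Split a graph in two using the Fiedler vector of its Laplacian.

    adjacency : symmetric 0/1 numpy array.  Returns a boolean partition label
    per vertex.  This illustrates only the spectral-bisection ingredient.
    """
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    evals, evecs = np.linalg.eigh(laplacian)
    fiedler = evecs[:, 1]                     # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= np.median(fiedler)      # median split balances the halves

# Two 3-vertex cliques joined by a single edge: the cut should separate them.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(spectral_bisection(A))
```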

  10. A point-centered diffusion differencing for unstructured meshes in 3-D

    International Nuclear Information System (INIS)

    Palmer, T.S.

    1994-01-01

    We describe a point-centered diffusion discretization for 3-D unstructured meshes of polyhedra. The method has several attractive qualities, including second-order accuracy and preservation of linear solutions. A potential drawback to the scheme is that the diffusion matrix is asymmetric, in general. Results of numerical test problems illustrate the behavior of the scheme

  11. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    International Nuclear Information System (INIS)

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-01-01

    Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One promising method is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After the mesh generation, the updated motion model and the other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire 4D-CBCT reconstruction process is implemented on GPU, which significantly increases computational efficiency owing to its massive parallel computing capability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The resulting images show that both bone structures and the inside of the lung are well preserved and that the tumor position is captured well. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and produces images equivalent to those of the previous voxel-based SMEIR approach, with significantly improved computational speed

  12. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    Science.gov (United States)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. A newer technique, adaptive mesh refinement (AMR), allows local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to trace their evolution precisely. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
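
    The record relies on an octree-based AMR library; purely as a toy illustration of the refinement idea, the sketch below recursively subdivides a quadtree cell wherever a prescribed field varies strongly across it. The field, threshold, and maximum level are assumed values with no connection to the actual convection code.

```python
import numpy as np

def refine(x0, y0, size, field, max_level, level=0, threshold=0.1):
    """Recursively refine a quadtree cell wherever the field varies strongly
    across the cell; return the list of leaf cells (x0, y0, size, level)."""
    corners = [field(x0, y0), field(x0 + size, y0),
               field(x0, y0 + size), field(x0 + size, y0 + size)]
    if level >= max_level or max(corners) - min(corners) < threshold:
        return [(x0, y0, size, level)]
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            leaves += refine(x0 + dx, y0 + dy, half, field,
                             max_level, level + 1, threshold)
    return leaves

# A sharp boundary-layer-like profile near y = 0 triggers local refinement there.
field = lambda x, y: np.tanh(20.0 * y)
leaves = refine(0.0, 0.0, 1.0, field, max_level=6)
print(len(leaves), "leaf cells; finest level:", max(cell[3] for cell in leaves))
```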

  13. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang; Yang, Yijun; Pottmann, Helmut; Mitra, Niloy J.

    2011-01-01

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  14. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  15. 3D CSEM inversion based on goal-oriented adaptive finite element method

    Science.gov (United States)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine mesh generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multi-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is computed efficiently by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with

  16. Automated hexahedral mesh generation from biomedical image data: applications in limb prosthetics.

    Science.gov (United States)

    Zachariah, S G; Sanders, J E; Turkiyyah, G M

    1996-06-01

    A general method to generate hexahedral meshes for finite element analysis of residual limbs and similar biomedical geometries is presented. The method utilizes skeleton-based subdivision of cross-sectional domains to produce simple subdomains in which structured meshes are easily generated. Application to a below-knee residual limb and external prosthetic socket is described. The residual limb was modeled as consisting of bones, soft tissue, and skin. The prosthetic socket model comprised a socket wall with an inner liner. The geometries of these structures were defined using axial cross-sectional contour data from X-ray computed tomography, optical scanning, and mechanical surface digitization. A tubular surface representation, using B-splines to define the directrix and generator, is shown to be convenient for definition of the structure geometries. Conversion of cross-sectional data to the compact tubular surface representation is direct, and the analytical representation simplifies geometric querying and numerical optimization within the mesh generation algorithms. The element meshes remain geometrically accurate since boundary nodes are constrained to lie on the tubular surfaces. Several element meshes of increasing mesh density were generated for two residual limbs and prosthetic sockets. Convergence testing demonstrated that approximately 19 elements are required along a circumference of the residual limb surface for a simple linear elastic model. A model with the fibula absent compared with the same geometry with the fibula present showed differences suggesting higher distal stresses in the absence of the fibula. Automated hexahedral mesh generation algorithms for sliced data represent an advancement in prosthetic stress analysis since they allow rapid modeling of any given residual limb and optimization of mesh parameters.

  17. Seeking new surgical predictors of mesh exposure after transvaginal mesh repair.

    Science.gov (United States)

    Wu, Pei-Ying; Chang, Chih-Hung; Shen, Meng-Ru; Chou, Cheng-Yang; Yang, Yi-Ching; Huang, Yu-Fang

    2016-10-01

    The purpose of this study was to explore new preventable risk factors for mesh exposure. A retrospective review of 92 consecutive patients treated with transvaginal mesh (TVM) in the urogynecological unit of our university hospital. An analysis of perioperative predictors was conducted in patients after vaginal repairs using a type 1 mesh. Mesh complications were recorded according to International Urogynecological Association (IUGA) definitions. Mesh-exposure-free durations were calculated by using the Kaplan-Meier method and compared between different closure techniques using log-rank test. Hazard ratios (HR) of predictors for mesh exposure were estimated by univariate and multivariate analyses using Cox proportional hazards regression models. The median surveillance interval was 24.1 months. Two late occurrences were found beyond 1 year post operation. No statistically significant correlation was observed between mesh exposure and concomitant hysterectomy. Exposure risks were significantly higher in patients with interrupted whole-layer closure in univariate analysis. In the multivariate analysis, hematoma [HR 5.42, 95 % confidence interval (CI) 1.26-23.35, P = 0.024), Prolift mesh (HR 5.52, 95 % CI 1.15-26.53, P = 0.033), and interrupted whole-layer closure (HR 7.02, 95 % CI 1.62-30.53, P = 0.009) were the strongest predictors of mesh exposure. Findings indicate the risks of mesh exposure and reoperation may be prevented by avoiding hematoma, large amount of mesh, or interrupted whole-layer closure in TVM surgeries. If these risk factors are prevented, hysterectomy may not be a relative contraindication for TVM use. We also provide evidence regarding mesh exposure and the necessity for more than 1 year of follow-up and preoperative counselling.

  18. Linear Algebraic Method for Non-Linear Map Analysis

    International Nuclear Information System (INIS)

    Yu, L.; Nash, B.

    2009-01-01

    We present a newly developed method to analyze some non-linear dynamics problems, such as the Henon map, using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition, which is widely used in conventional linear algebra.
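
    A minimal numerical sketch of the linear-algebra viewpoint is given below, assuming the area-preserving (accelerator-style) form of the Henon map, i.e., a rotation applied after a quadratic kick; the abstract does not specify which form is used, so this choice, the tune of 0.205, and the tracking amplitude are all assumptions. The eigenvalues of the linearization at the origin give the linear tune, and an FFT of a tracked orbit exposes the tune-amplitude dependence mentioned above.

```python
import numpy as np

def henon(x, p, mu):
    """Area-preserving (accelerator-style) Henon map: a rotation by the phase
    advance mu applied after a quadratic, sextupole-like kick (assumed form)."""
    c, s = np.cos(mu), np.sin(mu)
    xk, pk = x, p + x * x
    return c * xk + s * pk, -s * xk + c * pk

mu = 2.0 * np.pi * 0.205          # illustrative linear tune of 0.205

# The linearization (Jacobian) at the origin is a pure rotation matrix;
# its complex eigenvalues exp(+/- i*mu) encode the linear tune.
J = np.array([[np.cos(mu), np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])
lam = np.linalg.eigvals(J)
print("linear tune from eigenvalues:", abs(np.angle(lam[0])) / (2.0 * np.pi))

# Track a finite-amplitude particle and read its tune from the FFT peak;
# any shift away from 0.205 reflects the tune-amplitude dependence.
n_turns, x, p = 4096, 0.05, 0.0
xs = np.empty(n_turns)
for n in range(n_turns):
    xs[n] = x
    x, p = henon(x, p, mu)
spectrum = np.abs(np.fft.rfft(xs - xs.mean()))
tunes = np.fft.rfftfreq(n_turns, d=1.0)
print("tune at amplitude 0.05:", tunes[np.argmax(spectrum)])
```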

  19. The application of the mesh-free method in the numerical simulations of the higher-order continuum structures

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yuzhou, E-mail: yuzhousun@126.com; Chen, Gensheng; Li, Dongxia [School of Civil Engineering and Architecture, Zhongyuan University of Technology, Zhengzhou (China)

    2016-06-08

    This paper studies the application of the mesh-free method to the numerical simulation of higher-order continuum structures. A higher-order bending beam accounts for the effect of the third-order derivative of the deflection and can be viewed as a one-dimensional higher-order continuum structure. The moving least-squares method is used to construct shape functions with the higher-order continuum property; the curvature and the third-order derivative of the deflection are interpolated directly from the nodal variables and the second- and third-order derivatives of the shape functions, and a mesh-free computational scheme is established for beams. The couple stress theory is introduced to describe the special constitutive response of layered rock mass, in which the bending effect of thin layers is considered. The strain and the curvature are interpolated directly from the nodal variables, and the mesh-free method is established for the layered rock mass. Good computational efficiency is achieved with the developed mesh-free method, and some key issues are discussed.

  20. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such hexahedral meshes contain several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and require deep C++ knowledge for query implementations. Furthermore, due to the high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi

  1. Mapping method for generating three-dimensional meshes: past and present

    International Nuclear Information System (INIS)

    Cook, W.A.; Oakes, W.R.

    1982-01-01

    Two transformations are derived in this paper. One is a mapping of a unit square onto a surface and the other is a mapping of a unit cube onto a three-dimensional region. Two meshing computer programs that use these mappings are then discussed. The first is INGEN, which has been used to calculate three-dimensional meshes for approximately 15 years. This meshing program uses an index scheme to number boundaries, surfaces, and regions. With such an index scheme, it is possible to control nodal points, elements, and boundary conditions. The second is ESCHER, a meshing program now being developed. Two primary considerations governing the development of ESCHER are that meshes graded using quadrilaterals are required and that edge-line geometry defined by Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM) systems will be a major source of geometry definition. This program separates the processes of nodal-point connectivity generation, computation of nodal-point mapping-space coordinates, and mapping of nodal points into model space
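
    The simplest instance of the square-to-surface mappings discussed above is a bilinear interpolation of four corner points; the sketch below uses it to place a structured grid of nodal points on a quadrilateral patch. It is only a schematic stand-in for the paper's transformations, which also honour curved edges, and the corner coordinates and grid sizes are arbitrary.

```python
import numpy as np

def bilinear_map(u, v, corners):
    """Map (u, v) in the unit square onto a quadrilateral patch defined by its
    four corner points, ordered P00, P10, P11, P01."""
    p00, p10, p11, p01 = (np.asarray(c, float) for c in corners)
    return ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
            + u * v * p11 + (1 - u) * v * p01)

def structured_quad_mesh(corners, nu, nv):
    """Generate an (nu+1) x (nv+1) grid of nodal points on the mapped patch."""
    us = np.linspace(0.0, 1.0, nu + 1)
    vs = np.linspace(0.0, 1.0, nv + 1)
    return np.array([[bilinear_map(u, v, corners) for v in vs] for u in us])

# Illustrative non-planar patch in 3D.
corners = [(0, 0, 0), (2, 0, 0.5), (2, 1.5, 0), (0, 1, 0.2)]
nodes = structured_quad_mesh(corners, nu=4, nv=3)
print(nodes.shape)   # (5, 4, 3): a 5 x 4 grid of nodal points with xyz coordinates
```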

  2. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.

  3. A Novel Mesh Quality Improvement Method for Boundary Elements

    Directory of Open Access Journals (Sweden)

    Hou-lin Liu

    2012-01-01

    In order to improve the boundary mesh quality while maintaining the essential characteristics of discrete surfaces, a new approach combining optimization-based smoothing and topology optimization is developed. The smoothing objective function is modified to include two functions, denoting boundary and interior quality respectively, and a weight coefficient controlling the boundary quality. In addition, the existing smoothing algorithm can improve the mesh quality only by repositioning vertices of the interior mesh; without destroying boundary conformity, bad elements with all their vertices on the boundary cannot be eliminated. Topology optimization is therefore employed, and those elements are converted into other types of elements whose quality can be improved by smoothing. The practical application shows that the worst elements can be eliminated and that, as the weight coefficient increases, the average quality of the boundary mesh can also be improved. Results obtained with the combined approach are compared with a common existing approach and clearly show that it performs better.
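
    For orientation, the sketch below shows a plain Laplacian smoothing pass with fixed boundary vertices, which is a much simpler operation than the combined optimization-based smoothing and topology optimization described above; the toy mesh and relaxation parameters are illustrative.

```python
import numpy as np

def laplacian_smooth(points, triangles, boundary, iterations=10, relax=0.5):
    """Very simple smoothing pass: move each interior vertex toward the
    centroid of its neighbours, keeping boundary vertices fixed."""
    points = np.array(points, dtype=float)
    neighbours = {i: set() for i in range(len(points))}
    for a, b, c in triangles:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    for _ in range(iterations):
        for i, nbrs in neighbours.items():
            if i in boundary or not nbrs:
                continue
            centroid = points[list(nbrs)].mean(axis=0)
            points[i] += relax * (centroid - points[i])
    return points

# Toy patch: 4 fixed boundary corners plus one badly placed interior vertex.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.9, 0.9)]
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
smoothed = laplacian_smooth(pts, tris, boundary={0, 1, 2, 3})
print(smoothed[4])   # interior vertex pulled toward the patch centre (0.5, 0.5)
```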

  4. Coarse mesh and one-cell block inversion based diffusion synthetic acceleration

    Science.gov (United States)

    Kim, Kang-Seog

    DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (Cell Block Inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent

  5. Mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods

    International Nuclear Information System (INIS)

    Baker, A.R.

    1982-07-01

    A study has been performed of mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods. As the objective was to illuminate the issues, the study was performed for a 1D slab model of a reactor with one neutron-energy group, for which analytical solutions were possible. A computer code, SLAB, was specially written to perform the finite-difference and finite-element calculations and also to obtain the analytical solutions. The standard finite-difference equations were obtained by starting with an expansion of the neutron current in powers of the mesh size, h, and keeping terms as far as h^2. It was confirmed that these equations led to the well-known result that the criticality parameter varied with the square of the mesh size. An improved form of the finite-difference equations was obtained by continuing the expansion for the neutron current as far as the term in h^4. In this case, the criticality parameter varied as the fourth power of the mesh size. The finite-element solutions for 2 and 3 nodes per element revealed that the criticality parameter varied as the square and fourth power of the mesh size, respectively. Numerical results are presented for a bare reactive core of uniform composition with 2 zones of different uniform mesh and for a reactive core with an absorptive reflector. (author)
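
    In the same spirit as the slab study above, the sketch below solves a one-group, one-dimensional bare-slab criticality problem with standard (h^2-accurate) finite differences and compares the computed criticality parameter with the analytic value as the mesh is refined. The cross sections and slab width are illustrative values, and the code is unrelated to the SLAB program mentioned in the abstract.

```python
import numpy as np

def k_effective(n_cells, a=100.0, D=1.0, sigma_a=0.01, nu_sigma_f=0.012):
    """One-group diffusion criticality for a bare slab of width a, discretized
    with standard central finite differences and zero-flux boundary conditions."""
    h = a / n_cells
    n = n_cells - 1                       # interior mesh points
    main = np.full(n, 2.0 * D / h**2 + sigma_a)
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # Eigenproblem A*phi = (1/k) * nu_sigma_f * phi, so k is nu_sigma_f
    # times the largest eigenvalue of A^-1 (the fundamental mode).
    eigvals = np.linalg.eigvals(np.linalg.inv(A) * nu_sigma_f)
    return eigvals.real.max()

# Analytic bare-slab result: k = nu_sigma_f / (sigma_a + D * (pi/a)^2).
k_exact = 0.012 / (0.01 + 1.0 * (np.pi / 100.0) ** 2)
for n in (10, 20, 40, 80):
    print(n, k_effective(n) - k_exact)    # error shrinks roughly as h^2
```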

  6. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schofield, Samuel P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.

  7. Numerical analysis of splashing fluid using hybrid method of mesh-based and particle-based modelings

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Ogawara, Takuya; Kaneda, Takeshi; Maseguchi, Ryo

    2009-01-01

    In order to simulate splashing and scattering fluid behaviors, we developed a hybrid method combining a mesh-based model for the large-scale continuum fluid with a particle-based model for small-scale discrete fluid particles. As the solver for the continuum fluid, we adopt the CIVA RefIned Multiphase SimulatiON (CRIMSON) code to evaluate two-phase flow behaviors based on recent computational fluid dynamics (CFD) techniques. A phase field model has been introduced into CRIMSON in order to solve the problem of losing phase-interface sharpness in long-term calculations. As the solver for the discrete fluid droplets, we applied the idea of the Smoothed Particle Hydrodynamics (SPH) method. The continuum fluid and the discrete fluid interact with each other through a drag interaction force. We verified our method by applying it to the popular benchmark problem of the collapse of a water column, focusing especially on the splashing and scattering fluid behaviors after the column collided with the wall. We confirmed that the gross splashing and scattering behaviors were well reproduced by the introduction of the particle model, while the detailed behaviors of the particles differed slightly from the experimental results. (author)

  8. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    Energy Technology Data Exchange (ETDEWEB)

    Pavanello, Michele [Department of Chemistry, Rutgers University, Newark, New Jersey 07102-1811 (United States); Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307 (United States); Visscher, Lucas [Amsterdam Center for Multiscale Modeling, VU University, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Neugebauer, Johannes [Theoretische Organische Chemie, Organisch-Chemisches Institut der Westfaelischen Wilhelms-Universitaet Muenster, Corrensstrasse 40, 48149 Muenster (Germany)

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  9. Parallel adaptive simulations on unstructured meshes

    International Nuclear Information System (INIS)

    Shephard, M S; Jansen, K E; Sahni, O; Diachin, L A

    2007-01-01

    This paper discusses methods being developed by the ITAPS center to support the execution of parallel adaptive simulations on unstructured meshes. The paper first outlines the ITAPS approach to the development of interoperable mesh, geometry and field services to support the needs of SciDAC applications in these areas. The paper then demonstrates the ability of unstructured adaptive meshing methods built on such interoperable services to effectively solve important physics problems. Attention is then focused on ITAPS' developing ability to solve adaptive unstructured mesh problems on massively parallel computers

  10. Comparative study on triangular and quadrilateral meshes by a finite-volume method with a central difference scheme

    KAUST Repository

    Yu, Guojun

    2012-10-01

    In this article, comparative studies on the computational accuracies and convergence rates of triangular and quadrilateral meshes are carried out in the framework of the finite-volume method. By theoretical analysis, we conclude that the number of triangular cells needs to be 4/3 times that of quadrilateral cells to obtain similar accuracy. The conclusion is verified by a number of numerical examples. In addition, the convergence rates of the triangular meshes are found to be slower than those of the quadrilateral meshes when the same accuracy is obtained with these two mesh types. © 2012 Taylor and Francis Group, LLC.

  11. Comparative study on triangular and quadrilateral meshes by a finite-volume method with a central difference scheme

    KAUST Repository

    Yu, Guojun; Yu, Bo; Sun, Shuyu; Tao, Wenquan

    2012-01-01

    In this article, comparative studies on the computational accuracies and convergence rates of triangular and quadrilateral meshes are carried out in the framework of the finite-volume method. By theoretical analysis, we conclude that the number of triangular cells needs to be 4/3 times that of quadrilateral cells to obtain similar accuracy. The conclusion is verified by a number of numerical examples. In addition, the convergence rates of the triangular meshes are found to be slower than those of the quadrilateral meshes when the same accuracy is obtained with these two mesh types. © 2012 Taylor and Francis Group, LLC.

  12. New methods to interpolate large volume of data from points or particles (Mesh-Free) methods application for its scientific visualization

    International Nuclear Information System (INIS)

    Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.

    2009-01-01

    In this paper we develop a new method to interpolate large volumes of scattered data, focused mainly on the results of applying mesh-free methods, point methods and particle methods. We use local radial basis functions as the interpolating functions, together with an octree as the data structure that accelerates locating the data points that influence the interpolated value at a new point. This speeds up the application of scientific visualization techniques that generate images from the large data volumes produced by mesh-free, point and particle methods in the solution of diverse physical-mathematical models. As an example, the results obtained after applying this method using Shepard's local interpolation functions are shown. (Author) 22 refs
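
    The interpolation kernel itself, Shepard's inverse-distance weighting restricted to a local search radius, can be sketched as below; the tree-based acceleration described above is replaced here by a brute-force neighbour search, and the radius, power and test function are assumed values.

```python
import numpy as np

def shepard_interpolate(points, values, query, radius=1.0, power=2.0):
    """Local Shepard (inverse-distance-weighted) interpolation: only scattered
    data points inside `radius` of the query point contribute."""
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-12):                      # query coincides with a data point
        return float(values[np.argmin(d)])
    local = d < radius
    if not np.any(local):
        raise ValueError("no data points inside the search radius")
    w = 1.0 / d[local] ** power
    return float(np.dot(w, values[local]) / w.sum())

# Scattered samples of f(x, y) = x + 2y on the unit square.
rng = np.random.default_rng(3)
pts = rng.random((200, 2))
vals = pts[:, 0] + 2.0 * pts[:, 1]
print(shepard_interpolate(pts, vals, np.array([0.4, 0.6]), radius=0.2))
# close to the exact value 1.6
```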

  13. Vertex Normals and Face Curvatures of Triangle Meshes

    KAUST Repository

    Sun, Xiang

    2016-08-12

    This study contributes to the discrete differential geometry of triangle meshes, in combination with discrete line congruences associated with such meshes. In particular we discuss when a congruence defined by linear interpolation of vertex normals deserves to be called a 'normal' congruence. Our main results are a discussion of various definitions of normality, a detailed study of the geometry of such congruences, and a concept of curvatures and shape operators associated with the faces of a triangle mesh. These curvatures are compatible with both normal congruences and the Steiner formula.
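
    The starting point for such congruences is a per-vertex normal field; the sketch below shows only the standard area-weighted accumulation of face normals at the vertices of a triangle mesh, not the normality analysis itself.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals of a triangle mesh: each face's
    (un-normalized) cross product is accumulated at its three vertices,
    then the accumulated vectors are normalized."""
    vertices = np.asarray(vertices, float)
    normals = np.zeros_like(vertices)
    for a, b, c in faces:
        fn = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals[a] += fn
        normals[b] += fn
        normals[c] += fn
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

# Two triangles covering the unit square in the z = 0 plane.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
print(vertex_normals(verts, faces))   # every vertex normal is (0, 0, 1)
```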

  14. Data-Parallel Mesh Connected Components Labeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
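
    The algorithm above is distributed and multi-stage; stripped of the parallel merging, its core is the Union-find structure sketched below, which labels connected sub-meshes from a toy list of face-sharing cell pairs. The data layout is purely illustrative.

```python
class UnionFind:
    """Union-find with path compression, as used to merge sub-mesh labels."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path compression
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def label_components(n_cells, shared_faces):
    """Label connected sub-meshes: cells sharing a face get the same label."""
    uf = UnionFind(n_cells)
    for a, b in shared_faces:
        uf.union(a, b)
    roots, labels = {}, []
    for c in range(n_cells):
        r = uf.find(c)
        labels.append(roots.setdefault(r, len(roots)))
    return labels

# Six cells forming two connected sub-meshes {0,1,2} and {3,4}, plus isolated cell 5.
print(label_components(6, [(0, 1), (1, 2), (3, 4)]))   # -> [0, 0, 0, 1, 1, 2]
```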

  15. Obtuse triangle suppression in anisotropic meshes

    KAUST Repository

    Sun, Feng; Choi, Yi King; Wang, Wen Ping; Yan, Dongming; Liu, Yang; Lévy, Bruno L.

    2011-01-01

    Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with much fewer obtuse triangles than using existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.

  16. Obtuse triangle suppression in anisotropic meshes

    KAUST Repository

    Sun, Feng

    2011-12-01

    Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with much fewer obtuse triangles than using existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.

  17. Dual-scale Galerkin methods for Darcy flow

    Science.gov (United States)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  18. Adaptive radial basis function mesh deformation using data reduction

    Science.gov (United States)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the different studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.

  19. Discontinuous finite element solution of the radiation diffusion equation on arbitrary polygonal meshes and locally adapted quadrilateral grids

    International Nuclear Information System (INIS)

    Ragusa, Jean C.

    2015-01-01

    In this paper, we propose a piece-wise linear discontinuous (PWLD) finite element discretization of the diffusion equation for arbitrary polygonal meshes. It is based on the standard diffusion form and uses the symmetric interior penalty technique, which yields a symmetric positive definite linear system matrix. A preconditioned conjugate gradient algorithm is employed to solve the linear system. Piece-wise linear approximations also allow a straightforward implementation of local mesh adaptation by allowing unrefined cells to be interpreted as polygons with an increased number of vertices. Several test cases, taken from the literature on the discretization of the radiation diffusion equation, are presented: random, sinusoidal, Shestakov, and Z meshes are used. The last numerical example demonstrates the application of the PWLD discretization to adaptive mesh refinement
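
    The linear-solver stage can be illustrated with a Jacobi-preconditioned conjugate gradient iteration on a small symmetric positive definite model matrix; this is only a sketch of the solver idea, not the PWLD/SIP assembly, and the Jacobi preconditioner is an assumed choice (the paper only states that a preconditioned CG algorithm is used).

```python
import numpy as np

def preconditioned_cg(A, b, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for a symmetric positive definite
    system, using a Jacobi (diagonal) preconditioner for illustration."""
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Laplacian as a small SPD stand-in for a diffusion system matrix.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = preconditioned_cg(A, b)
print(np.linalg.norm(A @ x - b))      # residual below the requested tolerance
```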

  20. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    Science.gov (United States)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2014-12-01

    The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal in analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for the nesting and nudging techniques applied at the edges of the computational domain in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  1. Simulation of transients with space-dependent feedback by coarse mesh flux expansion method

    International Nuclear Information System (INIS)

    Langenbuch, S.; Maurer, W.; Werner, W.

    1975-01-01

    For the simulation of the time-dependent behaviour of large LWR cores, even the most efficient Finite-Difference (FD) methods require a prohibitive amount of computing time in order to achieve results of acceptable accuracy. Static coarse-mesh (CM) solutions computed with a mesh size corresponding to the fuel element structure (about 20 cm) are at least as accurate as FD solutions computed with about 5 cm mesh size. For 3d calculations this results in a reduction of storage requirements by a factor of 60 and of computing costs by a factor of 40, relative to FD methods. These results were obtained for pure neutronic calculations, where feedback is not taken into account. In this paper it is demonstrated that the method retains its accuracy also in kinetic calculations, even in the presence of strong space-dependent feedback. (orig./RW)

  2. Opfront: mesh

    DEFF Research Database (Denmark)

    2015-01-01

    Mesh generation and visualization software based on the CGAL library. Folder content: drawmesh Visualize slices of the mesh (surface/volumetric) as wireframe on top of an image (3D). drawsurf Visualize surfaces of the mesh (surface/volumetric). img2mesh Convert isosurface in image to volumetric mesh (medit format). img2off Convert isosurface in image to surface mesh (off format). off2mesh Convert surface mesh (off format) to volumetric mesh (medit format). reduce Crop and resize 3D and stacks of images. data Example data to test the library on.

  3. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with a nonlinear objective function and linear constraints. We compare several different methods based on decomposition and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition, and the better-known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large-scale problems. (authors) (27 refs.)

  4. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for Large Eddy Simulation.
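
    The core of an FFT-based Poisson solver on a fully periodic domain can be sketched as below; it does not include the regularized Green's-function kernels or the mixed open/periodic treatment described above, and the grid size and manufactured solution are illustrative.

```python
import numpy as np

def poisson_fft_periodic(rhs, length=2.0 * np.pi):
    """Solve -laplacian(u) = rhs on a periodic square by diagonalizing the
    Laplacian in Fourier space (the zero-mean mode is fixed to zero)."""
    n = rhs.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)    # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    rhs_hat = np.fft.fft2(rhs)
    u_hat = np.zeros_like(rhs_hat)
    nonzero = k2 > 0
    u_hat[nonzero] = rhs_hat[nonzero] / k2[nonzero]
    return np.real(np.fft.ifft2(u_hat))

# Manufactured solution u = sin(x) cos(2y)  =>  -laplacian(u) = 5 u.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(2.0 * Y)
u = poisson_fft_periodic(5.0 * u_exact)
print(np.max(np.abs(u - u_exact)))    # spectrally accurate: error near round-off
```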

  5. Documentation for MeshKit - Reactor Geometry (&mesh) Generator

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-30

    This report gives documentation for using MeshKit's Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in the modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne's SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of the MeshKit source code, tools and other available algorithms is also presented for developers to extend MeshKit and add new algorithms to it. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousand axially varying material properties of instrumentation pins, and other interstitial meshes.

  6. A finite element method with overlapping meshes for free-boundary axisymmetric plasma equilibria in realistic geometries

    Science.gov (United States)

    Heumann, Holger; Rapetti, Francesca

    2017-04-01

    Existing finite element implementations for the computation of free-boundary axisymmetric plasma equilibria approximate the unknown poloidal flux function by standard lowest order continuous finite elements with discontinuous gradients. As a consequence, the location of critical points of the poloidal flux, that are of paramount importance in tokamak engineering, is constrained to nodes of the mesh leading to undesired jumps in transient problems. Moreover, recent numerical results for the self-consistent coupling of equilibrium with resistive diffusion and transport suggest the necessity of higher regularity when approximating the flux map. In this work we propose a mortar element method that employs two overlapping meshes. One mesh with Cartesian quadrilaterals covers the vacuum chamber domain accessible by the plasma and one mesh with triangles discretizes the region outside. The two meshes overlap in a narrow region. This approach gives the flexibility to achieve easily and at low cost higher order regularity for the approximation of the flux function in the domain covered by the plasma, while preserving accurate meshing of the geometric details outside this region. The continuity of the numerical solution in the region of overlap is weakly enforced by a mortar-like mapping.

  7. Three-dimensional dynamic rupture simulation with a high-order discontinuous Galerkin method on unstructured tetrahedral meshes

    KAUST Repository

    Pelties, Christian

    2012-02-18

    Accurate and efficient numerical methods to simulate dynamic earthquake rupture and wave propagation in complex media and complex fault geometries are needed to address fundamental questions in earthquake dynamics, to integrate seismic and geodetic data into emerging approaches for dynamic source inversion, and to generate realistic physics-based earthquake scenarios for hazard assessment. Modeling of spontaneous earthquake rupture and seismic wave propagation by a high-order discontinuous Galerkin (DG) method combined with an arbitrarily high-order derivatives (ADER) time integration method was introduced in two dimensions by de la Puente et al. (2009). The ADER-DG method enables high accuracy in space and time and discretization by unstructured meshes. Here we extend this method to three-dimensional dynamic rupture problems. The high geometrical flexibility provided by the usage of tetrahedral elements and the lack of spurious mesh reflections in the ADER-DG method allows the refinement of the mesh close to the fault to model the rupture dynamics adequately while concentrating computational resources only where needed. Moreover, ADER-DG does not generate spurious high-frequency perturbations on the fault and hence does not require artificial Kelvin-Voigt damping. We verify our three-dimensional implementation by comparing results of the SCEC TPV3 test problem with two well-established numerical methods, finite differences, and spectral boundary integral. Furthermore, a convergence study is presented to demonstrate the systematic consistency of the method. To illustrate the capabilities of the high-order accurate ADER-DG scheme on unstructured meshes, we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes curved faults, fault branches, and surface topography. Copyright 2012 by the American Geophysical Union.

  8. Frequency scaling of linear super-colliders

    International Nuclear Information System (INIS)

    Mondelli, A.; Chernin, D.; Drobot, A.; Reiser, M.; Granatstein, V.

    1986-06-01

    The development of electron-positron linear colliders in the TeV energy range will be facilitated by the development of high-power rf sources at frequencies above 2856 MHz. Present S-band technology, represented by the SLC, would require a length in excess of 50 km per linac to accelerate particles to energies above 1 TeV. By raising the rf driving frequency, the rf breakdown limit is increased, thereby allowing the length of the accelerators to be reduced. Currently available rf power sources set the realizable gradient limit in an rf linac at frequencies above S-band. This paper presents a model for the frequency scaling of linear colliders, with luminosity scaled in proportion to the square of the center-of-mass energy. Since wakefield effects are the dominant deleterious effect, a separate single-bunch simulation model is described which calculates the evolution of the beam bunch with specified wakefields, including the effects of using programmed phase positioning and Landau damping. The results presented here have been obtained for a SLAC structure, scaled in proportion to wavelength

  9. Splitting Method for Solving the Coarse-Mesh Discretized Low-Order Quasi-Diffusion Equations

    International Nuclear Information System (INIS)

    Hiruta, Hikaru; Anistratov, Dmitriy Y.; Adams, Marvin L.

    2005-01-01

    In this paper, we present the development of a splitting method that can efficiently solve coarse-mesh discretized low-order quasi-diffusion (LOQD) equations. The LOQD problem can reproduce exactly the transport scalar flux and current. To solve the LOQD equations efficiently, a splitting method is proposed. The presented method splits the LOQD problem into two parts: (a) the D problem, which captures a significant part of the transport solution in the central parts of assemblies and can be reduced to a diffusion-type equation, and (b) the Q problem, which accounts for the complicated behavior of the transport solution near assembly boundaries. Independent coarse-mesh discretizations are applied: the D problem equations are approximated by means of a finite element method, whereas the Q problem equations are discretized using a finite volume method. Numerical results demonstrate the efficiency of the presented methodology. This methodology can be used to modify existing diffusion codes for full-core calculations (which already solve a version of the D problem) to account for transport effects

  10. Mesh association by projection along smoothed-normal-vector fields: association of closed manifolds

    NARCIS (Netherlands)

    Brummelen, van E.H.

    2008-01-01

    The necessity to associate two geometrically distinct meshes arises in many engineering applications. Current mesh-association algorithms have generally been developed for piecewise-linear geometry approximations, and their extension to the high-order geometry representations corresponding to

  11. Runge-Kutta discontinuous Galerkin method using a new type of WENO limiters on unstructured meshes

    Science.gov (United States)

    Zhu, Jun; Zhong, Xinghui; Shu, Chi-Wang; Qiu, Jianxian

    2013-09-01

    In this paper we generalize a new type of limiters based on the weighted essentially non-oscillatory (WENO) finite volume methodology for the Runge-Kutta discontinuous Galerkin (RKDG) methods solving nonlinear hyperbolic conservation laws, which were recently developed in [32] for structured meshes, to two-dimensional unstructured triangular meshes. The key idea of such limiters is to use the entire polynomials of the DG solutions from the troubled cell and its immediate neighboring cells, and then apply the classical WENO procedure to form a convex combination of these polynomials based on smoothness indicators and nonlinear weights, with suitable adjustments to guarantee conservation. The main advantage of this new limiter is its simplicity in implementation, especially for the unstructured meshes considered in this paper, as only information from immediate neighbors is needed and the usage of complicated geometric information of the meshes is largely avoided. Numerical results for both scalar equations and Euler systems of compressible gas dynamics are provided to illustrate the good performance of this procedure.
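
    The nonlinear-weighting idea underlying the limiter is easiest to see in its classical one-dimensional finite-volume form; the sketch below implements the standard WENO5 reconstruction with Jiang-Shu smoothness indicators, which is only a simplified relative of the unstructured, polynomial-based limiter described above.

```python
import numpy as np

def weno5_reconstruct(f, eps=1e-6):
    """Classical WENO5 reconstruction of the cell-interface value at i+1/2
    from five cell averages f = (f[i-2], ..., f[i+2]): three candidate
    reconstructions are blended with nonlinear weights built from
    smoothness indicators, the same convex-combination idea the RKDG
    limiter above applies to neighbouring DG polynomials."""
    fm2, fm1, f0, fp1, fp2 = f

    # Third-order candidate reconstructions on the three sub-stencils.
    q0 = (2 * fm2 - 7 * fm1 + 11 * f0) / 6.0
    q1 = (-fm1 + 5 * f0 + 2 * fp1) / 6.0
    q2 = (2 * f0 + 5 * fp1 - fp2) / 6.0

    # Jiang-Shu smoothness indicators.
    b0 = 13 / 12 * (fm2 - 2 * fm1 + f0) ** 2 + 0.25 * (fm2 - 4 * fm1 + 3 * f0) ** 2
    b1 = 13 / 12 * (fm1 - 2 * f0 + fp1) ** 2 + 0.25 * (fm1 - fp1) ** 2
    b2 = 13 / 12 * (f0 - 2 * fp1 + fp2) ** 2 + 0.25 * (3 * f0 - 4 * fp1 + fp2) ** 2

    # Nonlinear weights from the linear weights d = (0.1, 0.6, 0.3).
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2])) ** 2
    w = a / a.sum()
    return w @ np.array([q0, q1, q2])

print(weno5_reconstruct([1.0, 1.0, 1.0, 1.0, 1.0]))   # smooth data: exactly 1.0
print(weno5_reconstruct([0.0, 0.0, 0.0, 1.0, 1.0]))   # near a jump: stays close to 0
```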

  12. Meshed doped silicon photonic crystals for manipulating near-field thermal radiation

    Science.gov (United States)

    Elzouka, Mahmoud; Ndao, Sidy

    2018-01-01

    The ability to control and manipulate heat flow is of great interest to thermal management and thermal logic and memory devices. In particular, near-field thermal radiation presents a unique opportunity to enhance heat transfer while allowing its characteristics to be tailored (e.g., spectral selectivity). However, achieving the nanometric gaps necessary for near-field operation has been, and remains, a formidable challenge. Here, we demonstrate significant enhancement of near-field heat transfer through meshed photonic crystals with separation gaps above 0.5 μm. Using a first-principles method, we investigate the meshed photonic structures numerically via the finite-difference time-domain technique (FDTD) along with the Langevin approach. Results for doped-silicon meshed structures show a significant enhancement in heat transfer: 26 times over the non-meshed corrugated structures. This is especially important for thermal management and thermal rectification applications. The results also support the premise that thermal radiation at the micro scale is a bulk (rather than a surface) phenomenon: the increase in heat transfer between two meshed-corrugated surfaces relative to the flat surface (a factor of 8.2) was not proportional to the increase in surface area due to the corrugations (a factor of 9). The results were further validated through good agreement between the resonant modes predicted from the dispersion relation (calculated using a finite-element method) and the transmission factors (calculated from FDTD).

  13. Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs

    International Nuclear Information System (INIS)

    Kiwiel, K. C.

    1998-01-01

    We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions).
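    A minimal, generic instance of the problem class treated above (not Kiwiel's B-function algorithm) is the minimization of a strictly convex quadratic plus the piecewise linear l1 penalty; its exact coordinatewise minimizer is the familiar soft-threshold, applied component by component. All numbers below are illustrative.

    import numpy as np

    def soft_threshold(z, lam):
        """Closed-form minimizer of 0.5*(x - z)**2 + lam*|x|."""
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def minimize_separable(a, lam):
        """Minimize 0.5*||x - a||^2 + lam*||x||_1 coordinate by coordinate.
        The objective is separable, so one pass of exact coordinate
        minimization already gives the global optimum."""
        return soft_threshold(np.asarray(a, dtype=float), lam)

    a = np.array([3.0, -0.2, 0.7, -5.0])
    print(minimize_separable(a, lam=0.5))   # [ 2.5, -0. ,  0.2, -4.5]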

  14. PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils

    Science.gov (United States)

    Johnson, Scott; Walton, Otis; Settgast, Randolph

    2013-01-01

    PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.

  15. Cache-Oblivious Mesh Layouts

    International Nuclear Information System (INIS)

    Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D

    2005-01-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications

  16. An implementation analysis of the linear discontinuous finite element method

    International Nuclear Information System (INIS)

    Becker, T. L.

    2013-01-01

    This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low - and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory constraints against any
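    The first implementation choice discussed above - how to solve the per-element 4 x 4 system Ax = b - can be sketched generically. The snippet below is a hedged illustration (not the Jaguar solver code) contrasting Cramer's rule with a library LU solve; the test matrix is arbitrary.

    import numpy as np

    def solve_cramer_4x4(A, b):
        """Solve a 4x4 system by Cramer's rule: x_i = det(A_i) / det(A),
        where A_i is A with column i replaced by b."""
        detA = np.linalg.det(A)
        x = np.empty(4)
        for i in range(4):
            Ai = A.copy()
            Ai[:, i] = b
            x[i] = np.linalg.det(Ai) / detA
        return x

    rng = np.random.default_rng(0)
    A = rng.random((4, 4)) + 4.0 * np.eye(4)   # well-conditioned test matrix
    b = rng.random(4)

    x_cramer = solve_cramer_4x4(A, b)
    x_lu     = np.linalg.solve(A, b)           # LAPACK dgesv under the hood
    print(np.allclose(x_cramer, x_lu))         # True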

  17. An implementation analysis of the linear discontinuous finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Becker, T. L. [Bechtel Marine Propulsion Corporation, Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301-1072 (United States)

    2013-07-01

    This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low - and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory

  18. Adaptive mesh refinement for a finite volume method for flow and transport of radionuclides in heterogeneous porous media

    International Nuclear Information System (INIS)

    Amaziane, Brahim; Bourgeois, Marc; El Fatini, Mohamed

    2014-01-01

    In this paper, we consider adaptive numerical simulation of miscible displacement problems in porous media, which are modeled by single phase flow equations. A vertex-centred finite volume method is employed to discretize the coupled system: the Darcy flow equation and the diffusion-convection concentration equation. The convection term is approximated with a Godunov scheme over the dual finite volume mesh, whereas the diffusion-dispersion term is discretized by piecewise linear conforming finite elements. We introduce two kinds of indicators, both of them of residual type. The first one is related to time discretization and is local with respect to the time discretization: thus, at each time, it provides appropriate information for the choice of the next time step. The second is related to space discretization and is local with respect to both the time and space variables; the idea is that at each time it is an efficient tool for mesh adaptivity. An error estimation procedure evaluates where additional refinement is needed and grid generation procedures dynamically create or remove fine-grid patches as resolution requirements change. The method was implemented in the software MELODIE, developed by the French Institute for Radiological Protection and Nuclear Safety (IRSN, Institut de Radioprotection et de Surete Nucleaire). The algorithm is then used to simulate the evolution of radionuclide migration from the waste packages through a heterogeneous disposal, demonstrating its capability to capture complex behavior of the resulting flow. (authors)
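    The flavour of a space indicator of residual type can be conveyed with a deliberately simple sketch (not the MELODIE estimators): a local jump of the cell-averaged concentration serves as the error proxy, and cells are flagged for refinement or coarsening relative to the largest indicator. The thresholds below are arbitrary.

    import numpy as np

    def refinement_flags(c, refine_frac=0.5, coarsen_frac=0.1):
        """Very simple space indicator for a 1D cell-centred field c:
        use the largest jump to a neighbouring cell as a local error proxy,
        then flag cells for refinement/coarsening relative to the maximum."""
        jumps = np.abs(np.diff(c))
        eta = np.zeros_like(c)
        eta[:-1] = np.maximum(eta[:-1], jumps)
        eta[1:]  = np.maximum(eta[1:],  jumps)
        refine  = eta > refine_frac  * eta.max()
        coarsen = eta < coarsen_frac * eta.max()
        return refine, coarsen

    # Sharp concentration front: only cells near the front get refined.
    x = np.linspace(0.0, 1.0, 101)
    c = 0.5 * (1.0 - np.tanh((x - 0.4) / 0.02))
    refine, coarsen = refinement_flags(c)
    print(x[refine])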

  19. Mesh Excision: Is Total Mesh Excision Necessary?

    Science.gov (United States)

    Wolff, Gillian F; Winters, J Christian; Krlin, Ryan M

    2016-04-01

    Nearly 29% of women will undergo a secondary, repeat operation for pelvic organ prolapse (POP) symptom recurrence following a primary repair, as reported by Abbott et al. (Am J Obstet Gynecol 210:163.e1-163.e1, 2014). In efforts to decrease the rates of failure, graft materials have been utilized to augment transvaginal repairs. Following the success of using polypropylene mesh (PPM) for stress urinary incontinence (SUI), the use of PPM in the transvaginal repair of POP increased. However, in recent years, significant concerns have been raised about the safety of PPM mesh. Complications, some specific to mesh, such as exposures, erosion, dyspareunia, and pelvic pain, have been reported with increased frequency. In the current literature, there is not substantive evidence to suggest that PPM has intrinsic properties that warrant total mesh removal in the absence of complications. There are a number of complications that can occur after transvaginal mesh placement that do warrant surgical intervention after failure of conservative therapy. In aggregate, there are no high-quality controlled studies that clearly demonstrate that total mesh removal is consistently more likely to achieve pain reduction. In the cases of obstruction and erosion, it seems clear that definitive removal of the offending mesh is associated with resolution of symptoms in the majority of cases and reasonable practice. There are a number of complications that can occur with removal of mesh, and patients should be informed of this as they formulate a choice of treatment. We will review these considerations as we examine the clinical question of whether total versus partial removal of mesh is necessary for the resolution of complications following transvaginal mesh placement.

  20. Root-cause analysis of the better performance of the coarse-mesh finite-difference method for CANDU-type reactors

    International Nuclear Information System (INIS)

    Shen, W.

    2012-01-01

    Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question if the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)

  1. Root-cause analysis of the better performance of the coarse-mesh finite-difference method for CANDU-type reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shen, W. [Candu Energy Inc., 2285 Speakman Dr., Mississauga, ON L5B 1K (Canada)

    2012-07-01

    Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question if the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)

  2. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts, the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts and these show some skill even at decadal scales. We also compare
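    The forecasting idea sketched above - exploit the long memory of fractional Gaussian noise by conditioning on the observed past - reduces to standard linear prediction with the fGn autocovariance. The snippet below is a generic illustration rather than the SLIM code; the exponent H and the series length are arbitrary choices.

    import numpy as np

    def fgn_autocov(k, H):
        """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
        k = np.abs(k).astype(float)
        return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

    def fgn_sample(n, H, rng):
        """Sample a length-n fGn path via Cholesky of the covariance matrix."""
        cov = fgn_autocov(np.subtract.outer(np.arange(n), np.arange(n)), H)
        return np.linalg.cholesky(cov) @ rng.standard_normal(n)

    def fgn_forecast(past, H, steps=1):
        """Best linear predictor of the next values given the observed past."""
        n = len(past)
        cov_pp = fgn_autocov(np.subtract.outer(np.arange(n), np.arange(n)), H)
        preds = []
        for s in range(1, steps + 1):
            cov_pf = fgn_autocov(np.arange(n)[::-1] + s, H)  # cov(past, value s steps ahead)
            weights = np.linalg.solve(cov_pp, cov_pf)
            preds.append(weights @ past)
        return np.array(preds)

    rng = np.random.default_rng(1)
    H = 0.9                      # strong memory, macroweather-like
    x = fgn_sample(240, H, rng)  # e.g. 20 years of monthly anomalies
    print(fgn_forecast(x, H, steps=12))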

  3. A third-order moving mesh cell-centered scheme for one-dimensional elastic-plastic flows

    Science.gov (United States)

    Cheng, Jun-Bo; Huang, Weizhang; Jiang, Song; Tian, Baolin

    2017-11-01

    A third-order moving mesh cell-centered scheme without the remapping of physical variables is developed for the numerical solution of one-dimensional elastic-plastic flows with the Mie-Grüneisen equation of state, the Wilkins constitutive model, and the von Mises yielding criterion. The scheme combines the Lagrangian method with the MMPDE moving mesh method and adaptively moves the mesh to better resolve shock and other types of waves while preventing the mesh from crossing and tangling. It can be viewed as a direct arbitrary Lagrangian-Eulerian method but can also be degenerated to a purely Lagrangian scheme. It treats the relative velocity of the fluid with respect to the mesh as constant in time between time steps, which allows high-order approximation of free boundaries. A time dependent scaling is used in the monitor function to avoid possible sudden movement of the mesh points due to the creation or diminishing of shock and rarefaction waves or the steepening of those waves. A two-rarefaction Riemann solver with elastic waves is employed to compute the Godunov values of the density, pressure, velocity, and deviatoric stress at cell interfaces. Numerical results are presented for three examples. The third-order convergence of the scheme and its ability to concentrate mesh points around shock and elastic rarefaction waves are demonstrated. The obtained numerical results are in good agreement with those in the literature. The new scheme is also shown to be more accurate in resolving shock and rarefaction waves than an existing third-order cell-centered Lagrangian scheme.

  4. Feedforward Control of Gear Mesh Vibration Using Piezoelectric Actuators

    Directory of Open Access Journals (Sweden)

    Gerald T. Montague

    1994-01-01

    Full Text Available This article presents a novel means for suppressing gear mesh related vibrations. The key components in this approach are piezoelectric actuators and a high-frequency, analog feed forward controller. Test results are presented and show up to a 70% reduction in gear mesh acceleration and vibration control up to 4500 Hz. The principle of the approach is explained by an analysis of a harmonically excited, general linear vibratory system.

  5. Users manual for Opt-MS : local methods for simplicial mesh smoothing and untangling.

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.

    1999-07-20

    Creating meshes containing good-quality elements is a challenging, yet critical, problem facing computational scientists today. Several researchers have shown that the size of the mesh, the shape of the elements within that mesh, and their relationship to the physical application of interest can profoundly affect the efficiency and accuracy of many numerical approximation techniques. If the application contains anisotropic physics, the mesh can be improved by considering both local characteristics of the approximate application solution and the geometry of the computational domain. If the application is isotropic, regularly shaped elements in the mesh reduce the discretization error, and the mesh can be improved a priori by considering geometric criteria only. The Opt-MS package provides several local node point smoothing techniques that improve elements in the mesh by adjusting grid point location using geometric criteria. The package is easy to use; only three subroutine calls are required for the user to begin using the software. The package is also flexible; the user may change the technique, function, or dimension of the problem at any time during the mesh smoothing process. Opt-MS is designed to interface with C and C++ codes, and examples for both two- and three-dimensional meshes are provided.
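    A minimal flavour of local node-point smoothing, assuming the simplest possible update (plain Laplacian averaging rather than the quality-measure optimization that Opt-MS actually performs): each free node is moved toward the average of its neighbours while boundary nodes stay fixed.

    import numpy as np

    def laplacian_smooth(coords, triangles, fixed, n_sweeps=10):
        """Simple Laplacian smoothing of a 2D triangular mesh.
        coords:    (n, 2) node coordinates
        triangles: (m, 3) node indices per triangle
        fixed:     boolean mask of nodes that must not move (e.g. boundary)."""
        coords = coords.copy()
        # Build node -> neighbour adjacency from the triangle connectivity.
        neighbours = [set() for _ in range(len(coords))]
        for a, b, c in triangles:
            neighbours[a].update((b, c))
            neighbours[b].update((a, c))
            neighbours[c].update((a, b))
        for _ in range(n_sweeps):
            for i, nbrs in enumerate(neighbours):
                if not fixed[i] and nbrs:
                    coords[i] = coords[list(nbrs)].mean(axis=0)
        return coords

    # Four boundary nodes, one badly placed interior node.
    coords = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.9, 0.9]])
    tris = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
    fixed = np.array([True, True, True, True, False])
    print(laplacian_smooth(coords, tris, fixed)[4])   # moves toward the centroid (0.5, 0.5)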

  6. Mesh-Sequenced Realizations for Evaluation of Subgrid-Scale Models for Turbulent Combustion (Short Term Innovative Research Program)

    Science.gov (United States)

    2018-02-15

    conservation equations. The closure problem hinges on the evaluation of the filtered chemical production rates. In MRA/MSR, simultaneous large-eddy ... simultaneous, constrained large-eddy simulations at three different mesh levels as a means of connecting reactive scalar information at different ... functions of a locally normalized subgrid Damköhler number (a measure of the distribution of inverse chemical time scales in the neighborhood of a

  7. Sodium flow rate measurement method of annular linear induction pumps

    International Nuclear Information System (INIS)

    Araseki, Hideo; Kirillov, Igor R.; Preslitsky, Gennady V.

    2012-01-01

    Highlights: ► We found a new method of flow rate monitoring of electromagnetic pump. ► The method is very simple and does not require a large space. ► The method was verified with an experiment and a numerical analysis. ► The experimental data and the numerical results are in good agreement. - Abstract: The present paper proposes a method for measuring sodium flow rate of annular linear induction pumps. The feature of the method lies in measuring the leaked magnetic field with measuring coils near the stator end on the outlet side and in correlating it with the sodium flow rate. This method is verified through an experiment and a numerical analysis. The data obtained in the experiment reveals that the correlation between the leaked magnetic field and the sodium flow rate is almost linear. The result of the numerical analysis agrees with the experimental data. The present method will be particularly effective for sodium flow rate monitoring of each one of plural annular linear induction pumps arranged in parallel in a vessel which forms a large-scale pump unit.

  8. New procedure for criticality search using coarse mesh nodal methods

    International Nuclear Information System (INIS)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S.

    2011-01-01

    The coarse mesh nodal methods have as their primary goal the calculation of the neutron flux inside the reactor core. Many computer systems use a specific form of calculation, which is called the nodal method. In classical computing systems, the criticality search is made after the complete convergence of the iterative process of calculating the neutron flux. In this paper, we propose a new method for the criticality calculation, in which the criticality condition is sought over the course of the iterative process of calculating the neutron flux itself. Thus, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)

  9. New procedure for criticality search using coarse mesh nodal methods

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S., E-mail: wneto@con.ufrj.b, E-mail: fernando@con.ufrj.b, E-mail: Aquilino@lmp.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    The coarse mesh nodal methods have as their primary goal the calculation of the neutron flux inside the reactor core. Many computer systems use a specific form of calculation, which is called the nodal method. In classical computing systems, the criticality search is made after the complete convergence of the iterative process of calculating the neutron flux. In this paper, we propose a new method for the criticality calculation, in which the criticality condition is sought over the course of the iterative process of calculating the neutron flux itself. Thus, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)
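    The standard way criticality enters such calculations - an outer fission-source (power) iteration that updates k-eff while the flux converges - can be illustrated with a bare one-group, one-dimensional finite-difference model. This is a generic textbook sketch, not the accelerated procedure proposed above; all cross sections and dimensions below are arbitrary illustrative numbers.

    import numpy as np

    def keff_power_iteration(D=1.0, sig_a=0.06, nu_sig_f=0.07, width=60.0,
                             n=120, tol=1e-8, max_outer=500):
        """One-group, 1D slab criticality by fission-source (power) iteration.
        Zero-flux boundaries; illustrative cross sections in cm^-1, width in cm."""
        h = width / n
        # Finite-difference diffusion-removal operator (tridiagonal).
        main = np.full(n, 2.0 * D / h**2 + sig_a)
        off  = np.full(n - 1, -D / h**2)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        phi = np.ones(n)
        k = 1.0
        for _ in range(max_outer):
            source = nu_sig_f * phi / k
            phi_new = np.linalg.solve(A, source)
            k_new = k * (nu_sig_f * phi_new).sum() / (nu_sig_f * phi).sum()
            converged = abs(k_new - k) < tol
            phi, k = phi_new / phi_new.max(), k_new
            if converged:
                break
        return k, phi

    k, phi = keff_power_iteration()
    print(round(k, 5))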

  10. Unified Modeling Language description of the object-oriented multi-scale adaptive finite element method for Step-and-Flash Imprint Lithography Simulations

    International Nuclear Information System (INIS)

    Paszynski, Maciej; Gurgul, Piotr; Sieniek, Marcin; Pardo, David

    2010-01-01

    In the first part of the paper we present the multi-scale simulation of the Step-and-Flash Imprint Lithography (SFIL), a modern patterning process. The simulation utilizes the hp adaptive Finite Element Method (hp-FEM) coupled with the Molecular Statics (MS) model. Thus, we consider the multi-scale problem, with molecular statics applied in the areas of the mesh where the highest accuracy is required, and the continuous linear elasticity with thermal expansion coefficient applied in the remaining part of the domain. The degrees of freedom from macro-scale element's nodes located on the macro-scale side of the interface have been identified with particles from nano-scale elements located on the nano-scale side of the interface. In the second part of the paper we present the Unified Modeling Language (UML) description of the resulting multi-scale application (hp-FEM coupled with MS). We investigated classical, procedural codes from the point of view of the object-oriented (O-O) programming paradigm. The discovered hierarchical structure of classes and algorithms makes the UML project as independent of the spatial dimension of the problem as possible. The O-O UML project was defined at an abstract level, independent of the programming language used.

  11. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  12. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
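    The core calculation - a scaling exponent estimated from a multi-point log-log scan together with its standard error from linear regression - takes only a few lines. The sketch below is generic and the five data points are made up for illustration.

    import numpy as np

    def scaling_exponent(x, y):
        """Fit ln(y) = a + b*ln(x) and return the exponent b with its
        standard error from ordinary least squares."""
        lx, ly = np.log(x), np.log(y)
        n = len(lx)
        b, a = np.polyfit(lx, ly, 1)
        resid = ly - (a + b * lx)
        s2 = resid @ resid / (n - 2)               # residual variance
        se_b = np.sqrt(s2 / ((lx - lx.mean())**2).sum())
        return b, se_b

    # Hypothetical 5-point scan against a dimensionless parameter.
    x = np.array([1.0, 1.6, 2.5, 4.0, 6.3])
    y = np.array([0.95, 1.48, 2.30, 3.60, 5.80])
    b, se = scaling_exponent(x, y)
    print(f"exponent = {b:.3f} +/- {se:.3f}")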

  13. Multi-phase Volume Segmentation with Tetrahedral Mesh

    DEFF Research Database (Denmark)

    Nguyen Trung, Tuan; Dahl, Vedrana Andersen; Bærentzen, Jakob Andreas

    Volume segmentation is efficient for reconstructing material structure, which is important for several analyses, e.g. simulation with the finite element method, measurement of quantitative information like surface area, surface curvature, volume, etc. We are concerned about the representations of the 3D volumes, which can be categorized into two groups: fixed voxel grids [1] and unstructured meshes [2]. Among these two representations, the voxel grids are more popular since manipulating a fixed grid is easier than an unstructured mesh, but they are less efficient for quantitative measurements... In many cases, the voxel grids are converted to explicit meshes, however the conversion may reduce the accuracy of the segmentations, and the effort for meshing is also not trivial. On the other side, methods using unstructured meshes have difficulty in handling topology changes. To reduce the complexity

  14. Non-linear triangle-based polynomial expansion nodal method for hexagonal core analysis

    International Nuclear Information System (INIS)

    Cho, Jin Young; Cho, Byung Oh; Joo, Han Gyu; Zee, Sung Qunn; Park, Sang Yong

    2000-09-01

    This report describes the implementation of the triangle-based polynomial expansion nodal (TPEN) method in the MASTER code in conjunction with the coarse mesh finite difference (CMFD) framework for hexagonal core design and analysis. The TPEN method is a variation of the higher order polynomial expansion nodal (HOPEN) method that solves the multi-group neutron diffusion equation in the hexagonal-z geometry. In contrast with the HOPEN method, only two-dimensional intranodal expansion is considered in the TPEN method for a triangular domain. The axial dependence of the intranodal flux is incorporated separately and is determined by the nodal expansion method (NEM) for a hexagonal node. For consistency with the node geometry of the MASTER code, which is based on hexagons, the TPEN solver is coded to solve one hexagonal node, composed of 6 triangular nodes, directly with a Gauss elimination scheme. To solve the CMFD linear system efficiently, the stabilized bi-conjugate gradient (BiCG) algorithm and the Wielandt eigenvalue shift method are adopted. For the construction of an efficient preconditioner for the BiCG algorithm, the incomplete LU (ILU) factorization scheme, which has been widely used in two-dimensional problems, is used. To apply the ILU factorization scheme to three-dimensional problems, a symmetric Gauss-Seidel factorization scheme is used. In order to examine the accuracy of the TPEN solution, several eigenvalue benchmark problems and two transient problems, i.e., realistic VVER1000 and VVER440 rod ejection benchmark problems, were solved and compared with their respective references. The results of the eigenvalue benchmark problems indicate that the non-linear TPEN method is very accurate, showing less than 15 pcm of eigenvalue error and 1% of maximum power error, and fast enough to solve the three-dimensional VVER-440 problem within 5 seconds on a 733MHz PENTIUM-III. In the case of the transient problems, the non-linear TPEN method also shows good results within a few minute of

  15. A new method for simplification and compression of 3D meshes

    OpenAIRE

    Attene, Marco

    2001-01-01

    We focus on the lossy compression of manifold triangle meshes. Our SwingWrapper approach partitions the surface of an original mesh M into simply-connected regions, called triangloids. We compute a new mesh M'. Each triangle of M' is a close approximation of a pseudo-triangle of M. By construction, the connectivity of M' is fairly regular and can be compressed to less than a bit per triangle using EdgeBreaker or one of the other recently developed schemes. The locations of the vertices of M' ...

  16. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  17. Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes

    OpenAIRE

    Baiges Aznar, Joan; Bayona Roa, Camilo Andrés

    2017-01-01

    In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions where the nodes of the mesh belong to a single processor, whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper a...

  18. Sodium flow rate measurement method of annular linear induction pump

    International Nuclear Information System (INIS)

    Araseki, Hideo

    2011-01-01

    This report describes a method for measuring sodium flow rate of annular linear induction pumps arranged in parallel and its verification result obtained through an experiment and a numerical analysis. In the method, the leaked magnetic field is measured with measuring coils at the stator end on the outlet side and is correlated with the sodium flow rate. The experimental data and the numerical result indicate that the leaked magnetic field at the stator edge keeps almost constant when the sodium flow rate changes and that the leaked magnetic field change arising from the flow rate change is small compared with the overall leaked magnetic field. It is shown that the correlation between the leaked magnetic field and the sodium flow rate is almost linear due to this feature of the leaked magnetic field, which indicates the applicability of the method to small-scale annular linear induction pumps. (author)

  19. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

    This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of stone blocks is much larger than the size of mortar layers and a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level (the structure) and the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.

  20. r-Adaptive mesh generation for shell finite element analysis

    International Nuclear Information System (INIS)

    Cho, Maenghyo; Jun, Seongki

    2004-01-01

    An r-adaptive method or moving grid technique relocates a grid so that it becomes concentrated in the desired region. This concentration improves the accuracy and efficiency of finite element solutions. We apply the r-adaptive method to the computational mesh of shell surfaces, which is initially regular and uniform. The r-adaptive method, given by Liao and Anderson [Appl. Anal. 44 (1992) 285], aggregates the grid in the region with a relatively high weight function without any grid-tangling. The stress error estimator is calculated in the initial uniform mesh for a weight function. However, since the r-adaptive method is a method that moves the grid, shell surface geometry errors such as curvature error and mesh distortion error will increase. Therefore, to represent the exact geometry of a shell surface and to prevent surface geometric errors, we use Naghdi's shell theory and express the shell surface by a B-spline patch. In addition, using a nine-node element, which is relatively less sensitive to mesh distortion, we try to diminish mesh distortion error in the application of an r-adaptive method. In the numerical examples, it is shown that the values of the error estimator for a cylinder, hemisphere, and torus in the overall domain can be reduced effectively by using the mesh generated by the r-adaptive method. Also, the reductions of the estimated relative errors are demonstrated in the numerical examples. In particular, a new functional is proposed to construct an adjusted mesh configuration by considering a mesh distortion measure as well as the stress error function. The proposed weight function provides a reliable mesh adaptation method after a parameter value in the weight function is properly chosen.

  1. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2017-02-25

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.

  2. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2017-01-01

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.

  3. Confinement of electron beams by mesh arrays in a relativistic klystron amplifier

    International Nuclear Information System (INIS)

    Wang Pingshan; Gu Binlin

    1998-01-01

    Theoretical and experimental results of intense beam confinement by conducting meshes in a relativistic klystron amplifier (RKA) are presented. Electron motions in a steady intense electron beam confined by conducting meshes are analyzed with an approximate space charge field distribution, and the conditions for steady beam transportation are discussed. Experimental results of a long distance (60 cm) transportation of an intense beam (400 kV, 2.5 kA) generated by a linear induction accelerator are presented, as are experimental results of modulated beam transportation confined by the mesh array. The results show that the focusing ability of the conducting meshes is not very sensitive to the beam energy, and that the meshes can be used effectively in an RKA to replace the magnetic field system.

  4. Simulating control rod and fuel assembly motion using moving meshes

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, D. [Department of Electrical and Computer Engineering, McMaster University, 1280 Main Street West, Hamilton Ontario, L8S 4K1 (Canada)], E-mail: gilbertdw1@gmail.com; Roman, J.E. [Departamento de Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Garland, Wm. J. [Department of Engineering Physics, McMaster University, 1280 Main Street West, Hamilton Ontario, L8S 4K1 (Canada); Poehlman, W.F.S. [Department of Computing and Software, McMaster University, 1280 Main Street West, Hamilton Ontario, L8S 4K1 (Canada)

    2008-02-15

    A prerequisite for designing a transient simulation experiment which includes the motion of control and fuel assemblies is the careful verification of a steady state model which computes k{sub eff} versus assembly insertion distance. Previous studies in nuclear engineering have usually approached the problem of the motion of control rods with the use of nonlinear nodal models. Nodal methods employ special approximations for the leading and trailing cells of the moving assemblies to avoid the rod cusping problem which results from the naive volume weighted cell cross-section approximation. A prototype framework called the MOOSE has been developed for modeling moving components in the presence of diffusion phenomena. A linear finite difference model is constructed, solutions for which are computed by SLEPc, a high performance parallel eigenvalue solver. Design techniques for the implementation of a patched non-conformal mesh which links groups of sub-meshes that can move relative to one another are presented. The generation of matrices which represent moving meshes which conserve neutron current at their boundaries, and the performance of the framework when applied to model reactivity insertion experiments is also discussed.

  5. Simulation of the Beam-Beam Effects in e+e- Storage Rings with a Method of Reducing the Region of Mesh

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Yunhai

    2000-08-31

    A highly accurate self-consistent particle code to simulate the beam-beam collision in e{sup +}e{sup -} storage rings has been developed. It adopts a method of solving the Poisson equation with an open boundary. The method consists of two steps: assigning the potential on a finite boundary using the Green's function, and then solving the potential inside the boundary with a fast Poisson solver. Since the solution of the Poisson equation is unique, the authors' solution is exactly the same as the one obtained by simply using the Green's function. The method allows us to select a much smaller region of mesh and therefore increase the resolution of the solver. The better resolution makes the calculation of the dynamics in the core of the beams more accurate. The luminosity simulated with this method agrees quantitatively with the measurement for the PEP-II B-factory ring in the linear and nonlinear beam current regimes, demonstrating its predictive capability in detail.
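    The two-step idea described above - assign the potential on a finite boundary from the free-space Green's function, then solve the interior Dirichlet problem - can be sketched on a small 2D grid. This is a schematic illustration in arbitrary units (with a plain Jacobi iteration standing in for the fast Poisson solver), not the beam-beam code itself.

    import numpy as np

    def poisson_open_boundary(rho, h, n_iter=5000):
        """Solve laplacian(phi) = -rho on a 2D grid with an open boundary:
        step 1: boundary values from the free-space Green's function -ln(r)/(2*pi);
        step 2: interior Dirichlet solve (a plain Jacobi iteration stands in
        for a fast Poisson solver)."""
        ny, nx = rho.shape
        y, x = np.mgrid[0:ny, 0:nx] * h
        phi = np.zeros_like(rho)

        # Step 1: direct Green's-function summation on the boundary points only.
        src = np.argwhere(rho != 0.0)
        q = rho[src[:, 0], src[:, 1]] * h * h
        edge = np.zeros_like(rho, dtype=bool)
        edge[0, :] = edge[-1, :] = edge[:, 0] = edge[:, -1] = True
        for i, j in np.argwhere(edge):
            r = np.hypot(x[i, j] - x[src[:, 0], src[:, 1]],
                         y[i, j] - y[src[:, 0], src[:, 1]])
            phi[i, j] = np.sum(-q * np.log(r)) / (2.0 * np.pi)

        # Step 2: Jacobi sweeps on the interior with the boundary values held fixed.
        for _ in range(n_iter):
            phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                      phi[1:-1, :-2] + phi[1:-1, 2:] +
                                      h * h * rho[1:-1, 1:-1])
        return phi

    rho = np.zeros((33, 33))
    rho[14:19, 14:19] = 1.0                 # a small uniform "beam" patch
    phi = poisson_open_boundary(rho, h=0.1)
    print(phi[16, 16], phi[0, 0])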

  6. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scale theory tries to build a bridge between real numbers and integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation on integer values through time scales. Therefore, we implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients related to the model. Here, there exist two coefficients originating from the forward and backward jump operators relevant to the same model, which are different from each other. The occurrence of such a situation is equal to the total number of values of vertical deviation between the regression equations and the observation values of the forward and backward jump operators divided by two. We also estimated the coefficients for the model using the ordinary least squares method. As a result, we made an introduction to the least squares method on time scales. We think that time scale theory would offer a new vision for least squares, especially when the assumptions of linear regression are violated.

  7. Notes on the Mesh Handler and Mesh Data Conversion

    International Nuclear Information System (INIS)

    Lee, Sang Yong; Park, Chan Eok

    2009-01-01

    At the outset of the development of the thermal-hydraulic code (THC), efforts have been made to utilize the recent technology of computational fluid dynamics. Among many of them, the unstructured mesh approach was adopted to alleviate the restriction of the grid handling system. As a natural consequence, a mesh handler (MH) has been developed to manipulate the complex mesh data from the mesh generator. The mesh generator, Gambit, was chosen at the beginning of the development of the code. But a new mesh generator, Pointwise, was introduced to get more flexible mesh generation capability. An open source code, Paraview, was chosen as a post processor, which can handle unstructured as well as structured mesh data. The overall data processing system for THC is shown in Figure-1. There are various file formats to save the mesh data in the permanent storage media. A couple of dozen file formats are found even in the above mentioned programs. A competent mesh handler should have the capability to import or export mesh data in as many formats as possible. But, in reality, there are two aspects that make it difficult to achieve this competence. The first aspect to consider is the time and effort to program the interface code. The second aspect, which is an even more difficult one, is the fact that many mesh data file formats are proprietary information. In this paper, some experience of the development of the format conversion programs will be presented. The file formats involved are the Gambit neutral format, Ansys-CFX grid file format, VTK legacy file format, Nastran format and CGNS.

  8. AUTOMATIC MESH GENERATION OF 3-D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate mesh for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.

  9. The use and optimization of stainless steel mesh cathodes in microbial electrolysis cells

    KAUST Repository

    Zhang, Yimin

    2010-11-01

    Microbial electrolysis cells (MECs) provide a high-yield method for producing hydrogen from renewable biomass. One challenge for commercialization of the technology is a low-cost and highly efficient cathode. Stainless steel (SS) is very inexpensive, and cathodes made of this material with high specific surface areas can achieve performance similar to carbon cathodes containing a platinum catalyst in MECs. SS mesh cathodes were examined here as a method to provide a higher surface area material than flat plate electrodes. Cyclic voltammetry tests showed that the electrochemically active surface area of certain sized mesh could be three times larger than a flat sheet. The relative performance of SS mesh in linear sweep voltammetry at low bubble coverages (low current densities) was also consistent with performance on this basis in MEC tests. The best SS mesh size (#60) in MEC tests had a relatively thick wire size (0.02 cm), a medium pore size (0.02 cm), and a specific surface area of 66 m2/m3. An applied voltage of 0.9 V produced a high hydrogen recovery (98 ± 4%) and overall energy efficiency (74 ± 4%), with a hydrogen production rate of 2.1 ± 0.3 m3 H2/m3/d (current density of 8.08 A/m2, volumetric current density of 188 ± 19 A/m3). These studies show that SS in mesh format shows great promise for the development of lower cost MEC systems for hydrogen production. © 2010 Professor T. Nejat Veziroglu. Published by Elsevier Ltd. All rights reserved.

  10. Feature-Sensitive Tetrahedral Mesh Generation with Guaranteed Quality

    OpenAIRE

    Wang, Jun; Yu, Zeyun

    2012-01-01

    Tetrahedral meshes are being extensively used in finite element methods (FEM). This paper proposes an algorithm to generate feature-sensitive and high-quality tetrahedral meshes from an arbitrary surface mesh model. A top-down octree subdivision is conducted on the surface mesh and a set of tetrahedra are constructed using adaptive body-centered cubic (BCC) lattices. Special treatments are given to the tetrahedra near the surface such that the quality of the resulting tetrahedral mesh is prov...

  11. Adaptive Meshing for Bi-directional Information Flows

    DEFF Research Database (Denmark)

    Nicholas, Paul; Zwierzycki, Mateusz; Stasiuk, David

    2016-01-01

    This paper describes a mesh-based modelling approach that supports the multiscale design of a panelised, thin-skinned metal structure. The term multi-scale refers to the decomposition of a design modelling problem into distinct but interdependent models associated with particular scales, and the ...

  12. Mesh Adaptation and Shape Optimization on Unstructured Meshes, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR CRM proposes to implement the entropy adjoint method for solution adaptive mesh refinement into the Loci/CHEM unstructured flow solver. The scheme will...

  13. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    Science.gov (United States)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. The described difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions typically are used for chemical kinetic mechanism description. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution that introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. The described numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in the locations of fine scale development and in the locations of smooth solution behavior, accordingly. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems.
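    The core adaptation step - keep fine resolution only where wavelet detail coefficients exceed a threshold - can be shown with the simplest wavelet, the Haar detail on a dyadic 1D grid. This is a toy sketch, not the WAMR algorithm; the threshold and the test profile are arbitrary.

    import numpy as np

    def haar_refinement_mask(u, threshold):
        """Flag coarse cells whose Haar detail coefficient exceeds the threshold.
        u is sampled on a uniform fine grid with an even number of points."""
        u = np.asarray(u, dtype=float)
        pairs = u.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # Haar detail per coarse cell
        return np.abs(detail) > threshold

    # A smooth background plus a sharp plume edge: only the edge keeps fine cells.
    x = np.linspace(0.0, 1.0, 256)
    u = np.sin(2 * np.pi * x) + np.where(np.abs(x - 0.7) < 0.01, 5.0, 0.0)
    mask = haar_refinement_mask(u, threshold=0.1)
    print(np.nonzero(mask)[0])    # indices of coarse cells to keep refined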

  14. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-01

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N.Collier, A.-L.Haji-Ali, F. Nobile, and R. Tempone.

  15. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-08

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N.Collier, A.-L.Haji-Ali, F. Nobile, and R. Tempone.
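    The basic multilevel estimator underlying the hierarchy-optimization question can be sketched for E[X_T] of a geometric Brownian motion discretized by Euler-Maruyama on a geometric mesh hierarchy. This is a generic textbook sketch, not the optimized hierarchies discussed in the talk; the number of levels and the per-level sample sizes below are illustrative rather than optimal.

    import numpy as np

    def euler_gbm_pair(n_paths, n_steps, T, mu, sigma, x0, rng):
        """Simulate X_T with Euler-Maruyama on a fine grid of n_steps and on the
        coarse grid of n_steps//2 driven by the same Brownian increments."""
        dt = T / n_steps
        dw = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
        xf = np.full(n_paths, x0)
        xc = np.full(n_paths, x0)
        for k in range(n_steps):
            xf += mu * xf * dt + sigma * xf * dw[:, k]
            if k % 2 == 1:  # coarse step uses the summed increments
                xc += mu * xc * 2 * dt + sigma * xc * (dw[:, k - 1] + dw[:, k])
        return xf, xc

    def mlmc_estimate(levels, samples, T=1.0, mu=0.05, sigma=0.2, x0=1.0, seed=0):
        """Multilevel Monte Carlo estimate of E[X_T] on a geometric hierarchy
        with n_steps = 2**(level+1); the level sums telescope to the finest bias."""
        rng = np.random.default_rng(seed)
        est = 0.0
        for level, n in zip(range(levels), samples):
            n_steps = 2 ** (level + 1)
            xf, xc = euler_gbm_pair(n, n_steps, T, mu, sigma, x0, rng)
            if level == 0:
                est += xf.mean()            # coarsest level: plain MC
            else:
                est += (xf - xc).mean()     # correction between consecutive levels
        return est

    print(mlmc_estimate(levels=5, samples=[20000, 10000, 5000, 2500, 1250]))
    # exact E[X_T] = x0 * exp(mu * T) ~ 1.0513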

  16. Study on TVD parameters sensitivity of a crankshaft using multiple scale and state space method considering quadratic and cubic non-linearities

    Directory of Open Access Journals (Sweden)

    R. Talebitooti

    Full Text Available In this paper the effect of quadratic and cubic non-linearities of the system consisting of the crankshaft and torsional vibration damper (TVD) is taken into account. The TVD consists of a non-linear elastomer material used for controlling the torsional vibration of the crankshaft. The method of multiple scales is used to solve the governing equations of the system. Meanwhile, the frequency response of the system for both harmonic and sub-harmonic resonances is extracted. In addition, the effects of detuning parameters and other dimensionless parameters for a case of harmonic resonance are investigated. Moreover, the external forces, including both inertia and gas forces, are simultaneously applied to the model. Finally, in order to study the effectiveness of the parameters, the dimensionless governing equations of the system are solved using the state space method. Then, the effects of the torsional damper as well as all corresponding parameters of the system are discussed.

  17. The Effect of Cyclic Loading on the Mechanical Performance of Surgical Mesh

    Directory of Open Access Journals (Sweden)

    Ho Y.C.

    2010-06-01

    Polymeric meshes in the form of knitted nets are commonly used in the surgical repair of pelvic organ prolapses. Although a number of these prosthetic meshes are commercially available, there is little published data on their mechanical performance, in particular on the change in stiffness under the repeated loading experienced in vivo. In this in vitro study, cyclic tensile loading was applied to rectangular strips of four different commercially available meshes. The applied force and resultant displacement were monitored throughout the tests in order to evaluate the change in stiffness. In addition, each mesh was randomly marked using indelible ink in order to permit the use of three-dimensional digital image correlation to evaluate local displacements during the tests. However, the scale and form of the deformation experienced by some of the meshes made correlation difficult, so that the values of stiffness could only be confirmed for two meshes. The results demonstrate that all the meshes experience an increase in stiffness during cyclic loading, that in most cases cyclic creep occurs, and that in some cases large-scale, irreversible reorganisation of the mesh structure occurs after as few as 200 cycles at loads of the order of 10 N.

  18. An adaptive mesh refinement approach for average current nodal expansion method in 2-D rectangular geometry

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.

    2013-01-01

    Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori error estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as the assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of the node-surface fluxes and to explore the possibility of node refinement in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain and thus without unduly increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported.
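
    The refinement test itself — flag a node for subdivision when its relative surface net leakage is large — can be sketched in a few lines of Python. The ratio used and the tolerance are illustrative assumptions patterned on the description above, not the ANRNE-2D implementation.

        import numpy as np

        def flag_nodes_for_refinement(net_leakage, surface_flux, tol=0.05):
            # Refine a node when the magnitude of its surface net leakage is
            # large relative to the corresponding surface flux.
            ratio = np.abs(net_leakage) / np.maximum(np.abs(surface_flux), 1e-30)
            return ratio > tol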

  19. Recent development of linear scaling quantum theories in GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Cheol Ho [Kyungpook National Univ., Daegu (Korea, Republic of)

    2003-06-01

    Linear scaling quantum theories are reviewed, focusing especially on the method adopted in GAMESS. The three key translation equations of the fast multipole method (FMM) are deduced from the general polypolar expansions given earlier by Steinborn and Rudenberg. Simplifications are introduced for the rotation-based FMM that lead to a very compact FMM formalism. The OPS (optimum parameter searching) procedure, a stable and efficient way of obtaining the optimum set of FMM parameters, is established with complete control over the tolerable error {epsilon}. In addition, a new parallel FMM algorithm, requiring virtually no inter-node communication, is suggested; it is suitable for the parallel construction of Fock matrices in electronic structure calculations.
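
    As background, the multipole machinery that FMM builds on rests on expansions of the Coulomb kernel; the textbook identity below is shown for orientation only (the paper works with the more general polypolar translation equations):

        \frac{1}{|\mathbf{r} - \mathbf{r}'|} = \sum_{l=0}^{\infty} \frac{r'^{\,l}}{r^{\,l+1}} P_l(\cos\gamma), \qquad r' < r,

    where P_l are the Legendre polynomials and \gamma is the angle between \mathbf{r} and \mathbf{r}'.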

  20. Approximating second-order vector differential operators on distorted meshes in two space dimensions

    International Nuclear Information System (INIS)

    Hermeline, F.

    2008-01-01

    A new finite volume method is presented for approximating second-order vector differential operators in two space dimensions. This method allows distorted triangular or quadrilateral meshes to be used without unduly altering the numerical results. The matrices that need to be inverted are symmetric positive definite; therefore, the most powerful linear solvers can be applied. The method has been tested on a few second-order vector partial differential equations arising in elasticity and fluid mechanics. These numerical experiments show that it is second-order accurate and locking-free. (authors)

  1. Simple and fast fabrication of superhydrophobic metal wire mesh for efficiently gravity-driven oil/water separation.

    Science.gov (United States)

    Song, Botao

    2016-12-15

    Superhydrophobic metal wire mesh (SMWM) has frequently been applied for the selective and efficient separation of oil/water mixtures due to its porous structure and special wettability. However, current methods for making metal wire mesh superhydrophobic suffer from complex experimental procedures or time-consuming processes. In this study, a very simple, time-saving and single-step electrospray method was proposed to fabricate SMWM, and the whole procedure required only about 2 min. The morphology, surface composition and wettability of the SMWM were all evaluated, and the oil/water separation ability was further investigated. In addition, a commercially available sponge covered with SMWM was fabricated as an oil adsorbent for the purpose of oil recovery. This study demonstrated a convenient and fast method to make metal wire mesh superhydrophobic, and such a simple method might find practical applications in the large-scale removal of oils. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Comparison of various spring analogy related mesh deformation techniques in two-dimensional airfoil design optimization

    Science.gov (United States)

    Yang, Y.; Özgen, S.

    2017-06-01

    During the last few decades, CFD (Computational Fluid Dynamics) has developed greatly and has become a more reliable tool for the conceptual phase of aircraft design. This tool is generally combined with an optimization algorithm. In the optimization phase, the need to regenerate the computational mesh can become cumbersome, especially when the number of design parameters is high. For this reason, several mesh generation and deformation techniques have been developed in the past decades. One of the most widely used techniques is the Spring Analogy. There are numerous spring analogy related techniques reported in the literature: linear spring analogy, torsional spring analogy, semitorsional spring analogy, and ball vertex spring analogy. This paper explains the linear spring analogy method and the inclusion of angles in the spring analogy method. In the latter case, two different solution methods are proposed. The best feasible method is later used for two-dimensional (2D) airfoil design optimization, with the objective of minimizing sectional drag for a required lift coefficient at different speeds. Design variables used in the optimization include the camber and thickness distribution of the airfoil. SU2 CFD is chosen as the flow solver during the optimization procedure. The optimization is done by using the Phoenix ModelCenter Optimization Tool.
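
    The linear (vertex) spring analogy itself is compact enough to sketch: each mesh edge is a spring with stiffness inversely proportional to its length, boundary nodes are displaced as prescribed, and interior nodes relax to the stiffness-weighted average of their neighbours. The Python sketch below is a generic textbook version with Jacobi relaxation, not the implementation compared in the paper.

        import numpy as np

        def spring_analogy_deform(nodes, edges, fixed, displ, n_iter=200):
            # nodes: (N, 2) coordinates; edges: list of (i, j) index pairs;
            # fixed: boolean mask of prescribed (boundary) nodes;
            # displ: (N, 2) prescribed displacements (used where fixed is True).
            x = nodes.copy()
            x[fixed] = nodes[fixed] + displ[fixed]
            for _ in range(n_iter):
                num = np.zeros_like(x)
                den = np.zeros(len(x))
                for i, j in edges:
                    k = 1.0 / np.linalg.norm(nodes[i] - nodes[j])  # spring stiffness
                    num[i] += k * x[j]; den[i] += k
                    num[j] += k * x[i]; den[j] += k
                interior = ~fixed
                x[interior] = num[interior] / den[interior][:, None]
            return x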

  3. Meshes optimized for discrete exterior calculus (DEC).

    Energy Technology Data Exchange (ETDEWEB)

    Mousley, Sarah C. [Univ. of Illinois, Urbana-Champaign, IL (United States); Deakin, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knupp, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    We study the optimization of an energy function used by the meshing community to measure and improve mesh quality. This energy is non-traditional because it is dependent on both the primal triangulation and its dual Voronoi (power) diagram. The energy is a measure of the mesh's quality for usage in Discrete Exterior Calculus (DEC), a method for numerically solving PDEs. In DEC, the PDE domain is triangulated and this mesh is used to obtain discrete approximations of the continuous operators in the PDE. The energy of a mesh gives an upper bound on the error of the discrete diagonal approximation of the Hodge star operator. In practice, one begins with an initial mesh and then makes adjustments to produce a mesh of lower energy. However, we have discovered several shortcomings in directly optimizing this energy, e.g. its non-convexity, and we show that the search for an optimized mesh may lead to mesh inversion (malformed triangles). We propose a new energy function to address some of these issues.

  4. Wireless mesh networks.

    Science.gov (United States)

    Wang, Xinheng

    2008-01-01

    Wireless telemedicine using GSM and GPRS technologies can only provide low-bandwidth connections, which makes it difficult to transmit images and video. Satellite or 3G wireless transmission provides greater bandwidth, but the running costs are high. Wireless local area networks (WLANs) appear promising, since they can supply high bandwidth at low cost. However, the WLAN technology has limitations, such as coverage. A new wireless networking technology named the wireless mesh network (WMN) overcomes some of the limitations of the WLAN. A WMN combines the characteristics of both a WLAN and ad hoc networks, thus forming an intelligent, large-scale and broadband wireless network. These features are attractive for telemedicine and telecare because of the ability to provide data, voice and video communications over a large area. One successful wireless telemedicine project which uses wireless mesh technology is the Emergency Room Link (ER-LINK) in Tucson, Arizona, USA. There are three key characteristics of a WMN: self-organization, including self-management and self-healing; dynamic changes in network topology; and scalability. What we may now see is a shift from mobile communication and satellite systems for wireless telemedicine to the use of wireless networks based on mesh technology, since the latter are very attractive in terms of cost, reliability and speed.

  5. A characteristic based multiple balance approach for SN on arbitrary polygonal meshes

    International Nuclear Information System (INIS)

    Grove, R.E.; Pevey, R.E.

    1995-01-01

    The authors introduce a new approach for characteristic-based SN transport on arbitrary polygonal meshes in XY geometry. They approximate a general surface as an arbitrary polygon and rotate to a coordinate system aligned with the direction of particle travel. They use exact moment balance equations on whole cells and on subregions called slices, and close the system by analytically solving the characteristic equation. The authors assume spatial functions for boundary conditions and cell sources and formulate analogous functions for outgoing edge and cell angular fluxes which exactly preserve spatial moments of the analytic solution. In principle, their approach provides the framework to extend characteristic methods formulated on rectangular grids to arbitrary polygonal meshes. The authors derive schemes based on step and linear spatial approximations. Their step characteristic scheme is mathematically equivalent to the Extended Step Characteristic (ESC) method, but their approach and scheme differ in the geometry rotation and in the solution form. Their solutions are simple and permit edge-based transport sweep ordering.
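
    For orientation, the analytic solution of the characteristic equation that closes such balance systems has, in the flat-source (step) case, the familiar form

        \psi(s) = \psi_{\mathrm{in}}\, e^{-\sigma_t s} + \frac{q}{\sigma_t}\left(1 - e^{-\sigma_t s}\right),

    where \psi_{\mathrm{in}} is the incoming angular flux where the characteristic enters the cell, \sigma_t the total cross section, q the (flat) source along the characteristic, and s the distance travelled within the cell. This is the standard textbook relation, not the authors' polygonal generalization with higher spatial moments.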

  6. A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms

    Science.gov (United States)

    Hasbestan, Jaber J.; Senocak, Inanc

    2017-12-01

    Mesh adaptivity is an indispensable capability for tackling multiphysics problems with large disparities in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on the construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three-dimensional visualization. Lewiner et al. [1] provide a concise review of pointerless approaches to generate an octree. The use of a hash table and a Z-order curve are two key concepts in pointerless methods, which we briefly discuss next.
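
    The two concepts combine naturally: interleaving the bits of a cell's integer indices yields a Z-order (Morton) key, and a hash table keyed on (level, key) then replaces explicit parent/child pointers, since those relationships reduce to bit shifts on the key. A minimal Python sketch (illustrative only, not the data structure of the note or of the cited work):

        def morton3d(i, j, k, level):
            # Interleave the bits of the (i, j, k) cell indices at the given
            # octree level to form a Z-order (Morton) key.
            key = 0
            for b in range(level):
                key |= ((i >> b) & 1) << (3 * b)
                key |= ((j >> b) & 1) << (3 * b + 1)
                key |= ((k >> b) & 1) << (3 * b + 2)
            return key

        def parent(level, key):
            return level - 1, key >> 3            # drop the last refinement step

        def children(level, key):
            return [(level + 1, (key << 3) | c) for c in range(8)]

        octree = {}                               # hash table: (level, key) -> cell data
        octree[(3, morton3d(2, 5, 1, 3))] = {"refined": False}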

  7. Automatic mesh generation with QMESH program

    International Nuclear Information System (INIS)

    Ise, Takeharu; Tsutsui, Tsuneo

    1977-05-01

    Usage of the two-dimensional self-organizing mesh generation program QMESH is presented, together with a description of the package and experience with it, as it has recently been converted and reconstructed from the NEACPL version for the FACOM. The program package consists of the QMESH code to generate quadrilateral meshes with smoothing techniques, the QPLOT code to plot the data obtained from QMESH on the graphic COM, and the RENUM code to renumber the meshes by using a bandwidth minimization procedure. The technique of mesh restructuring coupled with smoothing techniques is especially useful when one generates meshes for computer codes based on the finite element method. Several typical examples are given for easy access to the QMESH program, which is registered in the R.B-disks of JAERI for users. (auth.)

  8. A Reconfigurable Mesh-Ring Topology for Bluetooth Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ben-Yi Wang

    2018-05-01

    In this paper, a Reconfigurable Mesh-Ring (RMR) algorithm is proposed for Bluetooth sensor networks. The algorithm is designed in three stages to determine the optimal configuration of the mesh-ring network. Firstly, a designated root advertises and discovers its neighboring nodes. Secondly, a scatternet criterion is built to compute the minimum number of piconets and to distribute the connection information for the piconets and the scatternet. Finally, a peak-search method is designed to determine the optimal mesh-ring configuration for various sizes of networks. To maximize the network capacity, the research problem is formulated as determining the best connectivity of the available mesh links. During the formation and maintenance phases, three possible configurations (piconet, scatternet, and hybrid) are examined to determine the optimal placement of mesh links. The peak-search method is a systematic approach and is implemented by three functional blocks: the topology formation block generates the mesh-ring topology, the routing efficiency block computes the routing performance, and the optimum decision block introduces a decision-making criterion to determine the optimum number of mesh links. Simulation results demonstrate that the optimal mesh-ring configuration can be determined and that the scatternet case achieves better overall performance than the other two configurations. The RMR topology also outperforms the conventional ring-based and cluster-based mesh methods in terms of throughput performance for Bluetooth configurable networks.

  9. Cell-centered particle weighting algorithm for PIC simulations in a non-uniform 2D axisymmetric mesh

    Science.gov (United States)

    Araki, Samuel J.; Wirz, Richard E.

    2014-09-01

    Standard area weighting methods for particle-in-cell simulations result in systematic errors in particle densities on a non-uniform mesh in cylindrical coordinates. These errors can be significantly reduced by using weighted cell volumes for the density calculations. A detailed description of the corrected volume calculations and of the cell-centered weighting algorithm on a non-uniform mesh is provided. The simple formulas for the corrected volume can be used for any type of quadrilateral and/or triangular mesh in cylindrical coordinates. Density errors arising from the cell-centered weighting algorithm are computed for uniform, linearly decreasing, and Bessel-function radial density profiles on an adaptive Cartesian mesh and an unstructured mesh. For all the density profiles, it is shown that the weighting algorithm provides a significant improvement in the density calculations. However, relatively large density errors may persist at the outermost cells for monotonically decreasing density profiles. A further analysis has been performed to investigate the effect of the density errors on potential calculations, and it is shown that the error at the outermost cell does not propagate into the potential solution for the density profiles investigated.
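
    The corrected-volume idea can be illustrated with Pappus's centroid theorem: the volume swept by revolving a planar (r, z) cell about the z-axis equals 2*pi times its centroid radius times its area, rather than the value obtained by treating the cell as if it were Cartesian. The Python sketch below is a generic illustration of this weighting, not the paper's specific formulas.

        import numpy as np

        def cell_volume_rz(r, z):
            # Exact ring volume of a polygonal cell in the (r, z) plane revolved
            # about the z-axis, via Pappus's theorem V = 2*pi*r_centroid*Area.
            # Vertices must be listed in order around the polygon.
            r, z = np.asarray(r, float), np.asarray(z, float)
            r2, z2 = np.roll(r, -1), np.roll(z, -1)
            cross = r * z2 - r2 * z                              # shoelace terms
            a_signed = 0.5 * np.sum(cross)                       # signed planar area
            r_c = np.sum((r + r2) * cross) / (6.0 * a_signed)    # radial centroid
            return 2.0 * np.pi * r_c * abs(a_signed)

        # Square cell spanning r in [1, 2], z in [0, 1]: exact volume is 3*pi.
        print(cell_volume_rz([1, 2, 2, 1], [0, 0, 1, 1]))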

  10. Cell Adhesion Minimization by a Novel Mesh Culture Method Mechanically Directs Trophoblast Differentiation and Self-Assembly Organization of Human Pluripotent Stem Cells.

    Science.gov (United States)

    Okeyo, Kennedy Omondi; Kurosawa, Osamu; Yamazaki, Satoshi; Oana, Hidehiro; Kotera, Hidetoshi; Nakauchi, Hiromitsu; Washizu, Masao

    2015-10-01

    Mechanical methods for inducing differentiation and directing lineage specification will be instrumental in the application of pluripotent stem cells. Here, we demonstrate that minimization of cell-substrate adhesion can initiate and direct the differentiation of human pluripotent stem cells (hiPSCs) into cyst-forming trophoblast lineage cells (TLCs) without stimulation with cytokines or small molecules. To precisely control the cell-substrate adhesion area, we developed a novel culture method in which cells are cultured on microstructured mesh sheets suspended in a culture medium, such that cells on the mesh are completely out of contact with the culture dish. We used microfabricated mesh sheets that consisted of open meshes (100∼200 μm in pitch) with narrow mesh strands (3-5 μm in width) to provide support for initial cell attachment and growth. We demonstrate that minimization of the cell adhesion area achieved by this culture method can trigger a sequence of morphogenetic transformations that begin with individual hiPSCs attached on the mesh strands proliferating to form cell sheets by self-assembly organization and ultimately differentiating, after 10-15 days of mesh culture, to generate spherical cysts that secreted human chorionic gonadotropin (hCG) hormone and expressed caudal-related homeobox 2 factor (CDX2), a specific marker of the trophoblast lineage. Thus, this study demonstrates a simple and direct mechanical approach to induce trophoblast differentiation and generate cysts for application in the study of early human embryogenesis and in drug development and screening.

  11. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  12. Field-aligned mesh joinery

    OpenAIRE

    Cignoni, Paolo; Pietroni, Nico; Malomo, Luigi

    2014-01-01

    Mesh joinery is an innovative method to produce illustrative shape approximations suitable for fabrication. Mesh joinery is capable of producing complex fabricable structures in an efficient and visually pleasing manner. We represent an input geometry as a set of planar pieces arranged to compose a rigid structure, by exploiting an efficient slit mechanism. Since slices are planar, a standard 2D cutting system is enough to fabricate them. We automatically arrange slices according to a smooth ...

  13. Mesh Optimization for Ground Vehicle Aerodynamics

    OpenAIRE

    Adrian Gaylard; Essam F Abo-Serie; Nor Elyana Ahmad

    2010-01-01

    A mesh optimization strategy for accurately estimating the drag of a ground vehicle is proposed, based on examining the effect of different mesh parameters. The optimized mesh parameters were selected using the design of experiments (DOE) method to be able to work in a...

  14. A study of coarse mesh collision probability correction factors in slab lattices

    International Nuclear Information System (INIS)

    Buckler, A.N.

    1975-07-01

    Calculations of collision probability leakage estimates are performed in one-dimensional slab geometry with one neutron group to gain some insight into methods of correction for the coarseness of the mesh H. The chief result is that the correction factor, beta, can be written as CD/H, where C → 4 in the diffusion limit. An explicit expression for C is derived in terms of the E3 function, for a linear flux variation across the slabs. (author)

  15. Texturing of continuous LOD meshes with the hierarchical texture atlas

    Science.gov (United States)

    Birkholz, Hermann

    2006-02-01

    For the rendering of detailed virtual environments, trade-offs have to be made between image quality and rendering time. An immersive experience of virtual reality always demands high frame-rates with the best reachable image quality. Continuous Level of Detail (cLoD) triangle meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real-time. This enables rendering with a constant number of triangles and thus with constant frame-rates. Normally the construction of such cLoD mesh representations leads to the loss of all texture information of the original mesh. To overcome this problem, a parameter domain can be created in order to map the surface properties (colour, texture, normal) to it. This parameter domain can be used to map the surface properties back to arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh to be parameterised. This limits the reachable simplification to the domain mesh, which has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and good property mapping.

  16. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    Science.gov (United States)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally on climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States, using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full-physics configurations to evaluate variable-mesh simulations against uniform high-resolution and uniform low-resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high-resolution tests. More recent versions of the atmospheric physics, including the cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full-physics model produces a climate comparable to the global low-resolution mesh and similar high-frequency statistics in the high-resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high-resolution results, making them an effective tool for regional climate studies, and are available in CESM2.

  17. Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh

    International Nuclear Information System (INIS)

    Drumm, C.R.; Lorenz, J.

    1999-01-01

    A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers

  18. Mersiline mesh in premaxillary augmentation.

    Science.gov (United States)

    Foda, Hossam M T

    2005-01-01

    Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in whom Mersiline mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersiline mesh were used: the Mersiline roll technique for the cases with central symmetric deficiencies, and the Mersiline packing technique for the cases with asymmetric deficiencies. Premaxillary augmentation with Mersiline mesh proved to be technically simple, easy to perform, and not associated with any complications. Periodic follow-up evaluation for a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained with no clinically detectable resorption of the mesh implant.

  19. Mesh-graft urethroplasty: a case report

    OpenAIRE

    田中, 敏博; 滝川, 浩; 香川, 征; 長江, 浩朗

    1987-01-01

    We used a meshed free-foreskin transplant in a two-stage procedure for reconstruction of an extended stricture of the urethra after direct-vision urethrotomy. The results were excellent. Mesh-graft urethroplasty is a useful method for patients with extended strictures of the urethra or recurrent strictures after several operations.

  20. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.

    1989-01-01

    One of the iterative methods used to solve the discretized transport equation is the Source Iteration (SI) method. The SI method converges very slowly for problems with optically thick regions and scattering ratios (σs/σt) near unity. The Diffusion Synthetic Acceleration (DSA) method is one of the methods that have been devised to improve the convergence rate of the SI method. The DSA method is a good tool to accelerate the SI method if the particle being dealt with is a neutron, because the scattering process for neutrons is not severely anisotropic. However, if the particle is a charged particle (electron), DSA becomes ineffective as an acceleration device, because here the scattering process is severely anisotropic. To improve the DSA algorithm for electron transport, the author approaches the problem in two different ways in this thesis. He develops the first approach by accelerating more angular moments (φ0, φ1, φ2, φ3, ...) than is done in DSA; he calls this approach the Modified PN Synthetic Acceleration (MPSA) method. In the second approach he modifies the definition of the transport sweep, using the physics of the scattering; he calls this approach the Modified Diffusion Synthetic Acceleration (MDSA) method. In general, he has developed, analyzed, and implemented the MPSA and MDSA methods in this thesis and has shown that, for a high-order quadrature set and mesh widths of about 1.0 cm, they are each about 34 times faster (clock time) than the DSA method. Also, he has found that the MDSA spectral radius decreases as the mesh size increases. This makes the MDSA method a better choice for large spatial meshes.

  1. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    Science.gov (United States)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-10-01

    We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi: http://dx.doi.org/10.17632/cpchkfty4w.1 Licensing provisions: GNU General Public License Programming language: Fortran 90 External routines/libraries: BLAS, LAPACK, MPI (optional) Nature of problem: Direct implementation of the GW method scales as N^4 with the system size, which quickly becomes prohibitively time-consuming even on modern computers. Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N^3. Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems with up to 15 atoms per unit cell.

  2. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  3. User Manual for the PROTEUS Mesh Tools

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-09-19

    PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable to PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e. mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial

  4. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and can also potentially be used to assess habitat connectivity.
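
    The modelling idea — adding a mapped linear-feature covariate (e.g. hedgerow length per site) to an otherwise coarse land-cover abundance model — can be sketched as follows in Python. The column names, the synthetic data, and the Poisson count model are illustrative assumptions, not the authors' exact specification.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        sites = pd.DataFrame({
            "arable_area": rng.uniform(0, 1, n),      # coarse land-cover covariate
            "hedgerow_km": rng.gamma(2.0, 1.0, n),    # mapped linear-feature covariate
        })
        # Synthetic bird counts, generated here purely for illustration.
        mu = np.exp(0.2 + 0.5 * sites["arable_area"] + 0.3 * sites["hedgerow_km"])
        sites["count"] = rng.poisson(mu)

        X = sm.add_constant(sites[["arable_area", "hedgerow_km"]])
        fit = sm.GLM(sites["count"], X, family=sm.families.Poisson()).fit()
        print(fit.params)   # does adding hedgerow_km improve the abundance model?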

  5. Unstructured Finite Elements and Dynamic Meshing for Explicit Phase Tracking in Multiphase Problems

    Science.gov (United States)

    Chandra, Anirban; Yang, Fan; Zhang, Yu; Shams, Ehsan; Sahni, Onkar; Oberai, Assad; Shephard, Mark

    2017-11-01

    Multi-phase processes involving phase change at interfaces, such as evaporation of a liquid or combustion of a solid, represent an interesting class of problems with varied applications. A large density ratio across phases, discontinuous fields at the interface, and rapidly evolving geometries are some of the inherent challenges which influence the numerical modeling of multi-phase phase-change problems. In this work, a mathematically consistent and robust computational approach to address these issues is presented. We use stabilized finite element methods on mixed-topology unstructured grids for solving the compressible Navier-Stokes equations. Appropriate jump conditions derived from conservation laws across the interface are handled by using discontinuous interpolations, while the continuity of temperature and tangential velocity is enforced using a penalty parameter. The arbitrary Lagrangian-Eulerian (ALE) technique is utilized to explicitly track the interface motion. The mesh at the interface is constrained to move with the interface, while elsewhere it is moved using the linear elasticity analogy. Repositioning is applied to the layered mesh so that it maintains its structure and normal resolution. In addition, mesh modification is used to preserve the quality of the volumetric mesh. This work is supported by the U.S. Army Grants W911NF1410301 and W911NF16C0117.

  6. Percutaneous Vertebral Augmentation with Polyethylene Mesh and Allograft Bone for Traumatic Thoracolumbar Fractures

    Directory of Open Access Journals (Sweden)

    C. Schulz

    2015-01-01

    Purpose. In cases of traumatic thoracolumbar fractures, percutaneous vertebral augmentation can be used in addition to posterior stabilisation. The use of an augmentation technique with a bone-filled polyethylene mesh as a stand-alone treatment for traumatic vertebral fractures has not yet been investigated. Methods. In this retrospective study, 17 patients with acute type A3.1 fractures of the thoracic or lumbar spine underwent stand-alone augmentation with mesh and allograft bone and were followed up for one year using pain scales and sagittal endplate angles. Results. From before surgery to 12 months after surgery, pain and physical function improved significantly, as indicated by an improvement in the median VAS score and in the median pain and work scale scores. From before to immediately after surgery, all patients showed a significant improvement in mean mono- and bisegmental kyphoses. During the one-year period, there was a significant loss of correction. Conclusions. Based on these data, a stand-alone approach with vertebral augmentation with polyethylene mesh and allograft bone is not a suitable therapy option for incomplete burst fractures in a young patient population.

  7. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  8. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  9. Clinical observation of a modified surgical method: posterior vaginal mesh suspension of female rectocele with intractable constipation.

    Science.gov (United States)

    Hong, Ling; Li, Huai-Fang; Sun, Jing; Zhu, Jian-Long; Ai, Gui-hai; Li, Li; Zhang, Bo; Chi, Feng-li; Tong, Xiao-Wen

    2012-01-01

    To explore the feasibility and effectiveness of a modified posterior vaginal mesh suspension method in treating female rectocele with intractable constipation. Descriptive study (Canadian Task Force classification II-3). The study was performed in the Study Center for Female Pelvic Dysfunction Disease, Department of Obstetrics and Gynecology, Tongji Hospital, Tongji University School of Medicine, Shanghai, China. The Study Center includes 15 physicians, most of whom have received advanced training in pelvic floor dysfunctional disease and can skillfully perform many types of operations in patients with such disease. Almost 1500 operations to treat pelvic floor dysfunctional disease are performed every year at the center. Thirty-six women with rectocele and intractable constipation. Posterior vaginal mesh suspension. All patients were followed up for 15 to 36 months. In 29 patients, the condition was cured completely; in 5 patients it had improved; and in 2 patients, the intervention had no effect. Counting both cured and improved results, the overall effectiveness rate was 94.4%. Posterior vaginal mesh suspension is an effective, harmless, and convenient method for treatment of female rectocele with intractable constipation. It has positive short-term curative effects, with few complications and sequelae. However, the long-term effects of posterior vaginal mesh suspension should be evaluated. Copyright © 2012 AAGL. Published by Elsevier Inc. All rights reserved.

  10. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  11. Multiphase flow of immiscible fluids on unstructured moving meshes

    DEFF Research Database (Denmark)

    Misztal, Marek Krzysztof; Erleben, Kenny; Bargteil, Adam

    2012-01-01

    In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization op...

  12. Multiphase Flow of Immiscible Fluids on Unstructured Moving Meshes

    DEFF Research Database (Denmark)

    Misztal, Marek Krzysztof; Erleben, Kenny; Bargteil, Adam

    2013-01-01

    In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization op...

  13. Outcome of transvaginal mesh and tape removed for pain only.

    Science.gov (United States)

    Hou, Jack C; Alhalabi, Feras; Lemack, Gary E; Zimmern, Philippe E

    2014-09-01

    Because there is reluctance to operate for pain, we evaluated midterm outcomes of vaginal mesh and synthetic suburethral tape removed for pain as the only indication. After receiving institutional review board approval we reviewed a prospective database of women without a neurogenic condition who underwent surgery for vaginal mesh or suburethral tape removal with a focus on pain as the single reason for removal and a minimum 6-month followup. The primary outcome was pain level assessed by a visual analog scale (range 0 to 10) at baseline and at each subsequent visit with the score at the last visit used for analysis. Parameters evaluated included demographics, mean time to presentation and type of mesh or tape inserted. From 2005 to 2013, 123 patients underwent surgical removal of mesh (69) and suburethral tape (54) with pain as the only indication. Mean followup was 35 months (range 6 to 59) in the tape group and 22 months (range 6 to 47) in the mesh group. The visual analog scale score decreased from a mean preoperative level of 7.9 to 0.9 postoperatively (p = 0.0014) in the mesh group and from 5.3 to 1.5 (p = 0.00074) in the tape group. Pain-free status, considered a score of 0, was achieved in 81% of tape and 67% of mesh cases, respectively. No statistically significant difference was found between the groups. When pain is the only indication for suburethral tape or vaginal mesh removal, a significant decrease in the pain score can be durably expected after removal in most patients at midterm followup. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  14. In-vitro examination of the biocompatibility of fibroblast cell lines on alloplastic meshes and sterilized polyester mosquito mesh.

    Science.gov (United States)

    Wiessner, R; Kleber, T; Ekwelle, N; Ludwig, K; Richter, D-U

    2017-06-01

    as well as the pH value of the fibroblasts showed no significant differences between the tested meshes. The examination of oxidative stress via measurement of the H2O2 concentration showed values in the normal range for the commercial alloplastic meshes and the mosquito mesh. Our examination showed no significant difference with regard to biocompatibility between the officially approved and cost-intensive meshes and the sterilized (autoclaved) mosquito mesh. Due to the proven strength and stability of the mosquito mesh and its proven compatibility, the implantation of the sterilized mosquito mesh in additional in vivo studies must be considered. Wide-scale and cost-effective treatment of hernias could thus be guaranteed, not only in Third World countries.

  15. Mesh requirements for neutron transport calculations

    International Nuclear Information System (INIS)

    Askew, J.R.

    1967-07-01

    Fine-structure calculations are reported for a cylindrical natural uranium-graphite cell using different solution methods (discrete ordinates and collision probability codes) and varying the spatial mesh. It is suggested that, of the formulations which assume the source to be constant in a mesh interval, the differential approach is generally to be preferred. Due to cancellation between the approximations made in the derivation of the finite difference equations and the errors incurred by neglecting source variation, the discrete ordinates code gave a more accurate estimate of the fine structure for a given mesh, even for unusually coarse representations. (author)

  16. Simulation of 2-D Compressible Flows on a Moving Curvilinear Mesh with an Implicit-Explicit Runge-Kutta Method

    KAUST Repository

    AbuAlSaud, Moataz

    2012-07-01

    The purpose of this thesis is to solve the unsteady two-dimensional compressible Navier-Stokes equations on a moving mesh using an implicit-explicit (IMEX) Runge-Kutta scheme. The moving mesh is implemented in the equations using the Arbitrary Lagrangian-Eulerian (ALE) formulation. The inviscid part of the equations is solved explicitly using a second-order Godunov method, whereas the viscous part is treated implicitly. We simulate subsonic compressible flow over a static NACA-0012 airfoil at different angles of attack. Finally, the moving mesh is examined by oscillating the airfoil harmonically between angles of attack of 0 and 20 degrees. It is observed that the numerical solution matches the experimental and numerical results in the literature to within 20%.
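
    The IMEX idea — advancing the non-stiff (here inviscid) terms explicitly and the stiff (here viscous) terms implicitly within one time step — is illustrated below with the simplest first-order member of the family, applied to a generic ODE system with a stiff linear part. This is a schematic Python sketch of the splitting only, not the second-order ALE scheme of the thesis.

        import numpy as np

        def imex_euler_step(u, dt, A, explicit_rhs):
            # One forward-backward Euler step for du/dt = explicit_rhs(u) + A @ u:
            #   (I - dt*A) u_new = u + dt * explicit_rhs(u)
            n = len(u)
            return np.linalg.solve(np.eye(n) - dt * A, u + dt * explicit_rhs(u))

        # Example: stiff linear decay treated implicitly, mild nonlinearity explicitly.
        A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
        u = np.array([1.0, 1.0])
        for _ in range(100):
            u = imex_euler_step(u, 1e-2, A, lambda v: 0.1 * np.sin(v))
        print(u)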

  17. Fitting boxes to Manhattan scenes using linear integer programming

    KAUST Repository

    Li, Minglei

    2016-02-19

    We propose an approach for the automatic generation of building models by assembling a set of boxes using a Manhattan-world assumption. The method first aligns the point cloud with a per-building local coordinate system, and then fits axis-aligned planes to the point cloud through an iterative regularization process. The refined planes partition the space of the data into a series of compact cubic cells (candidate boxes) spanning the entire 3D space of the input data. We then choose to approximate the target building by the assembly of a subset of these candidate boxes using a binary linear programming formulation. The objective function is designed to maximize the point cloud coverage and the compactness of the final model. Finally, all selected boxes are merged into a lightweight polygonal mesh model, which is suitable for interactive visualization of large-scale urban scenes. Experimental results and a comparison with state-of-the-art methods demonstrate the effectiveness of the proposed framework.
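
    The selection step can be written as a small binary linear program; a generic formulation of this kind (the symbols below are illustrative, not the paper's exact objective or constraints) is

        \max_{x \in \{0,1\}^n} \; \sum_{i=1}^{n} c_i x_i - \lambda \sum_{i=1}^{n} a_i x_i \quad \text{subject to linear consistency constraints on } x,

    where x_i indicates whether candidate box i is selected, c_i measures the point-cloud coverage credited to box i, a_i is a surface-area (compactness) penalty, and \lambda trades coverage against compactness. Off-the-shelf integer programming solvers handle problems of this form directly.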

  18. A THREE-YEAR EXPERIENCE WITH ANTERIOR TRANSOBTURATOR MESH (ATOM AND POSTERIOR ISCHIORECTAL MESH (PIRM

    Directory of Open Access Journals (Sweden)

    Marijan Lužnik

    2018-02-01

    Background. The use of alloplastic mesh implants allows new urogynecological surgical techniques to achieve a marked improvement in pelvic organ statics and pelvic floor function with minimally invasive transvaginal needle interventions such as the anterior transobturator mesh (ATOM) and the posterior ischiorectal mesh (PIRM) procedures. Methods. In three years, between April 2006 and May 2009, we performed one hundred and eighty-four operative corrections of female pelvic organ prolapse (POP) and pelvic floor dysfunction (PFD) with mesh implants. The eighty-three patients who had a TVT-O or Monarc procedure as a solo intervention, indicated by stress urinary incontinence without POP, are not included in this number. In 97% of the mesh operations, Gynemesh 10 × 15 cm was used. For correction of anterior vaginal prolapse with the ATOM procedure, Gynemesh was individually trimmed into a mesh with 6 free arms for tension-free transobturator application and a tension-free apical collar. The IVS (Intravaginal Sling) 04 Tunneller (Tyco) needle system was used for transobturator application of the 6 arms through 4 dermal incisions (2 on the right and 2 on the left). A minimal anterior median colpotomy was made in two separate parts. For correction of posterior vaginal prolapse with the PIRM procedure, Gynemesh was trimmed into a mesh with 4 free arms and a tension-free collar: two long ischiorectal arms for tension-free application through the fossa ischiorectalis, right and left, and two short arms for the perineal body, also on both sides. The IVS 02 Tunneller (Tyco) needle system was used for tension-free application of the 4 arms through 4 dermal incisions (2 on the right and 2 on the left) in PIRM. Results. All 184 procedures were performed relatively safely. In 9 cases of ATOM we had perforation of the bladder: in 5 by application of the anterior needle, in 3 by application of the posterior needle, and in one case with a pincette when the collar was inserted into the lateral vesico-vaginal space. In 2 cases of PIRM we had perforation of the rectum

  19. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de [Max Planck Institut für Chemische Energiekonversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F. [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24014 (United States)

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed

  20. Charged particle tracking through electrostatic wire meshes using the finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Devlin, L. J.; Karamyshev, O.; Welsch, C. P., E-mail: carsten.welsch@cockcroft.ac.uk [The Cockcroft Institute, Daresbury Laboratory, Warrington (United Kingdom); Department of Physics, University of Liverpool, Liverpool (United Kingdom)

    2016-06-15

    Wire meshes are used across many disciplines to accelerate and focus charged particles; however, analytical solutions are non-exact, and few codes exist which simulate the exact fields around a mesh with physical sizes. A tracking code, based in Matlab-Simulink and using field maps generated with finite element software, has been developed which tracks electrons or ions through electrostatic wire meshes. The fields around such a geometry are presented as an analytical expression using several basic assumptions; however, it is apparent that computational calculations are required to obtain realistic values of the electric potential and fields, particularly when multiple wire meshes are deployed. The tracking code is flexible in that any quantitatively describable particle distribution can be used for both electrons and ions, and it offers other benefits such as ease of export to other programs for analysis. The code is made freely available and physical examples are highlighted where this code could be beneficial for different applications.
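
    The Python sketch below illustrates the general field-map-plus-tracking workflow described above; it is not the authors' Matlab-Simulink code, and the grid, field values, initial conditions and step size are made-up placeholders (a real field map would be exported from finite element software and would carry the wire-mesh structure):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical field map on a regular grid (V/m).
        x = np.linspace(-1.0e-3, 1.0e-3, 101)
        y = np.linspace(-1.0e-3, 1.0e-3, 101)
        Ex = np.full((x.size, y.size), -1.0e4)          # toy uniform accelerating field
        Ey = np.zeros((x.size, y.size))
        interp_Ex = RegularGridInterpolator((x, y), Ex)
        interp_Ey = RegularGridInterpolator((x, y), Ey)

        q_over_m = -1.758820e11                         # electron charge-to-mass ratio (C/kg)
        r = np.array([-0.8e-3, 0.1e-3])                 # initial position (m)
        v = np.array([1.0e6, 0.0])                      # initial velocity (m/s)
        dt = 1.0e-12                                    # time step (s)

        for step in range(5000):                        # explicit Euler push through the map
            E = np.array([interp_Ex(r)[0], interp_Ey(r)[0]])
            v = v + q_over_m * E * dt
            r = r + v * dt
            if not (x[0] < r[0] < x[-1] and y[0] < r[1] < y[-1]):
                break                                   # particle left the mapped region
        print("steps taken:", step, "final position:", r, "final velocity:", v)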

  1. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    Science.gov (United States)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution
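
    A small Python sketch of the random, dynamic down-sampling of soundings described above; the linear schedule tying the subset size to the regularization parameter is an illustrative assumption, not the paper's adaptive algorithm:

        import numpy as np

        def pick_sounding_subset(n_soundings, beta, beta_max, n_min=50, rng=None):
            """Randomly select a subset of soundings for one inversion iteration.
            The subset size is tied (here, linearly - an illustrative choice) to the
            current regularization parameter beta: strong regularization -> fewer
            soundings, weak regularization -> more soundings."""
            rng = np.random.default_rng() if rng is None else rng
            frac = 1.0 - 0.9 * min(beta / beta_max, 1.0)       # hypothetical schedule
            n_use = max(n_min, int(frac * n_soundings))
            return rng.choice(n_soundings, size=min(n_use, n_soundings), replace=False)

        # One subset per iteration, re-drawn every time:
        for it, beta in enumerate([100.0, 50.0, 10.0, 1.0]):
            subset = pick_sounding_subset(n_soundings=20000, beta=beta, beta_max=100.0)
            print(f"iteration {it}: using {subset.size} of 20000 soundings")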

  2. Grouper: a compact, streamable triangle mesh data structure.

    Science.gov (United States)

    Luffel, Mark; Gurung, Topraj; Lindstrom, Peter; Rossignac, Jarek

    2014-01-01

    We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As a part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle--i.e., less than the three vertex references stored with each triangle in a conventional indexed mesh format--Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access. We demonstrate the versatility and performance benefits of Grouper using a suite of example meshes and processing kernels.
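
    The following Python sketch illustrates only the record layout described above (two adjacent triangles plus a shared vertex per record) using a naive greedy pairing; the paper's construction solves the underlying NP-hard assignment problem with a more careful heuristic and also interleaves geometry with connectivity:

        # Greedily pack triangles into records of (shared_vertex, tri_a, tri_b).
        def build_records(triangles):
            records, used = [], [False] * len(triangles)
            for i, ti in enumerate(triangles):
                if used[i]:
                    continue
                partner = None
                for j in range(i + 1, len(triangles)):
                    if not used[j] and set(ti) & set(triangles[j]):
                        partner = j
                        break
                if partner is None:
                    records.append((ti[0], i, None))          # unpaired triangle
                    used[i] = True
                else:
                    shared = (set(ti) & set(triangles[partner])).pop()
                    records.append((shared, i, partner))
                    used[i] = used[partner] = True
            return records

        tris = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (5, 6, 7)]
        print(build_records(tris))   # e.g. [(1, 0, 1), (2, 2, None), ...]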

  3. Grouper: A Compact, Streamable Triangle Mesh Data Structure

    Energy Technology Data Exchange (ETDEWEB)

    Luffel, Mark [Georgia Inst. of Technology, Atlanta, GA (United States). Visualization and Usability Center (GVU); Gurung, Topraj [Georgia Inst. of Technology, Atlanta, GA (United States). Visualization and Usability Center (GVU); Lindstrom, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rossignac, Jarek [Georgia Inst. of Technology, Atlanta, GA (United States). Visualization and Usability Center (GVU)

    2014-01-01

    Here, we present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We also present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. In this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle-i.e., less than the three vertex references stored with each triangle in a conventional indexed mesh format-Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access. We demonstrate the versatility and performance benefits of Grouper using a suite of example meshes and processing kernels.

  4. Combination of ray-tracing and the method of moments for electromagnetic radiation analysis using reduced meshes

    Science.gov (United States)

    Delgado, Carlos; Cátedra, Manuel Felipe

    2018-05-01

    This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.

  5. Multiphase flow modelling of volcanic ash particle settling in water using adaptive unstructured meshes

    Science.gov (United States)

    Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R. G.

    2013-02-01

    Small-scale experiments of volcanic ash particle settling in water have demonstrated that ash particles can either settle slowly and individually, or rapidly and collectively as a gravitationally unstable ash-laden plume. This has important implications for the emplacement of tephra deposits on the seabed. Numerical modelling has the potential to extend the results of laboratory experiments to larger scales and explore the conditions under which plumes may form and persist, but many existing models are computationally restricted by the fixed mesh approaches that they employ. In contrast, this paper presents a new multiphase flow model that uses an adaptive unstructured mesh approach. As a simulation progresses, the mesh is optimized to focus numerical resolution in areas important to the dynamics and decrease it where it is not needed, thereby potentially reducing computational requirements. Model verification is performed using the method of manufactured solutions, which shows the correct solution convergence rates. Model validation and application considers 2-D simulations of plume formation in a water tank which replicate published laboratory experiments. The numerically predicted settling velocities for both individual particles and plumes, as well as instability behaviour, agree well with experimental data and observations. Plume settling is clearly hindered by the presence of a salinity gradient, and its influence must therefore be taken into account when considering particles in bodies of saline water. Furthermore, individual particles settle in the laminar flow regime while plume settling is shown (by plume Reynolds numbers greater than unity) to be in the turbulent flow regime, which has a significant impact on entrainment and settling rates. Mesh adaptivity maintains solution accuracy while providing a substantial reduction in computational requirements when compared to the same simulation performed using a fixed mesh, highlighting the benefits of an

  6. Image-Based Geometric Modeling and Mesh Generation

    CERN Document Server

    2013-01-01

    As a new interdisciplinary research area, “image-based geometric modeling and mesh generation” integrates image processing, geometric modeling and mesh generation with finite element method (FEM) to solve problems in computational biomedicine, materials sciences and engineering. It is well known that FEM is currently well-developed and efficient, but mesh generation for complex geometries (e.g., the human body) still takes about 80% of the total analysis time and is the major obstacle to reduce the total computation time. It is mainly because none of the traditional approaches is sufficient to effectively construct finite element meshes for arbitrarily complicated domains, and generally a great deal of manual interaction is involved in mesh generation. This contributed volume, the first for such an interdisciplinary topic, collects the latest research by experts in this area. These papers cover a broad range of topics, including medical imaging, image alignment and segmentation, image-to-mesh conversion,...

  7. Tensile Behaviour of Welded Wire Mesh and Hexagonal Metal Mesh for Ferrocement Application

    Science.gov (United States)

    Tanawade, A. G.; Modhera, C. D.

    2017-08-01

    Tension tests were conducted on welded mesh and hexagonal metal mesh. Welded mesh is available in the market in different sizes. Two types were analysed, viz. Ø 2.3 mm and Ø 2.7 mm welded mesh, having opening sizes of 31.75 mm × 31.75 mm and 25.4 mm × 25.4 mm respectively. Tensile strength tests were performed on samples of welded mesh in three different orientations, namely 0°, 30° and 45° to the loading axis, and on hexagonal metal mesh of Ø 0.7 mm, having an opening of 19.05 mm × 19.05 mm. Experimental tests were conducted on samples of these meshes. The objective of this study was to investigate the behaviour of the welded mesh and the hexagonal metal mesh. The results show that the tensile load-carrying capacity of the Ø 2.7 mm welded mesh in the 0° orientation is better than that of the Ø 2.3 mm mesh, and that the hexagonal metal mesh shows good ductility.

  8. An Effective Wormhole Attack Defence Method for a Smart Meter Mesh Network in an Intelligent Power Grid

    Directory of Open Access Journals (Sweden)

    Jungtaek Seo

    2012-08-01

    Full Text Available Smart meters are one of the key components of intelligent power grids. Wireless mesh networks based on smart meters could provide customer-oriented information on electricity use to the operational control systems, which monitor power grid status and estimate electric power demand. Using this information, an operational control system could regulate devices within the smart grid in order to provide electricity in a cost-efficient manner. Ensuring the availability of the smart meter mesh network is therefore a critical factor in securing the soundness of an intelligent power system. Wormhole attacks can be one of the most difficult-to-address threats to the availability of mesh networks, and although many methods to nullify wormhole attacks have been tried, these have been limited by high computational resource requirements and unnecessary overhead, as well as by the lack of ability of such methods to respond to attacks. In this paper, an effective defense mechanism that both detects and responds to wormhole attacks is proposed. In the proposed system, each device maintains information on its neighbors, allowing each node to identify replayed packets. The effectiveness and efficiency of the proposed method is analyzed in light of additional computational message and memory complexities.
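
    A toy Python sketch of the per-node bookkeeping described above (a neighbour table plus per-source freshness information used to flag replayed or tunnelled packets); the field names and acceptance logic are illustrative assumptions rather than the paper's protocol:

        class MeterNode:
            """Toy smart-meter node that tracks its one-hop neighbours and the highest
            sequence number seen per source, so replayed or tunnelled packets can be
            flagged. Purely illustrative."""

            def __init__(self, node_id, neighbours):
                self.node_id = node_id
                self.neighbours = set(neighbours)
                self.last_seq = {}                 # source id -> highest sequence number seen

            def accept(self, src, prev_hop, seq):
                if prev_hop not in self.neighbours:
                    return False                   # forwarded by a node that is not a real neighbour
                if seq <= self.last_seq.get(src, -1):
                    return False                   # sequence number not fresh: possible replay
                self.last_seq[src] = seq
                return True

        n = MeterNode("m7", neighbours={"m3", "m9"})
        print(n.accept("m1", "m3", seq=10))   # True: fresh packet via a known neighbour
        print(n.accept("m1", "m3", seq=10))   # False: replayed sequence number
        print(n.accept("m1", "m42", seq=11))  # False: previous hop is not a neighbour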

  9. Analysis of 2D reactor core using linear perturbation theory and nodal finite element methods

    International Nuclear Information System (INIS)

    Adrian Mugica; Edmundo del Valle

    2005-01-01

    In this work the multigroup steady state neutron diffusion equations are solved using the nodal finite element method (NFEM) and the Linear Perturbation Theory (LPT) for XY geometry. The NFEM used corresponds to the Raviart-Thomas schemes RT0 and RT1, interpolating 5 and 12 parameters respectively in each node of the space discretization. The accuracy of these methods is related to the dimension of the space approximation and the mesh size. Therefore, using fine meshes and the RT0 or RT1 nodal methods leads to a large and interesting eigenvalue problem. The finite element method used to discretize the weak formulation of the diffusion equations is the Galerkin one. The algebraic structure of the discrete eigenvalue problem is obtained and solved using the Wielandt technique and the BGSTAB iterative method from the SPARSKIT package developed by Yousef Saad. The results obtained with LPT show good agreement with the results obtained directly for the perturbed problem. In fact, the CPU time to solve a single problem, the unperturbed and the perturbed one, is practically the same, but when one is focused on repeatedly shuffling two different assemblies in the core, the LPT technique becomes quite useful for obtaining good approximations in a short time. This particular problem was solved for one quarter-core with NFEM. Thus, the computer program based on LPT can be used as an analysis tool in fuel reload optimization or combinatorial analysis to obtain reload patterns in nuclear power plants, once the thermal-hydraulic aspects needed to simulate a real problem accurately have been incorporated. The maximum differences between the NFEM and LPT for the three LWR reactor cores are about 250 pcm. This quantity is considered an acceptable value for this kind of analysis. (authors)
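
    For reference, the standard first-order perturbation expression on which such LPT reload analyses are commonly based can be written (in conventional notation, not necessarily that of the authors) as

        \Delta\!\left(\frac{1}{k}\right) \;\approx\;
        \frac{\left\langle \phi^{\dagger},\,\left(\frac{1}{k}\,\delta F-\delta A\right)\phi\right\rangle}
             {\left\langle \phi^{\dagger},\,F\,\phi\right\rangle},

    where \phi and \phi^{\dagger} are the unperturbed forward and adjoint fluxes, A and F are the loss and fission-production operators, and \delta A, \delta F are the perturbations introduced by shuffling assemblies.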

  10. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement

    Directory of Open Access Journals (Sweden)

    Juan J. Garcia-Cantero

    2017-06-01

    Full Text Available Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been

  11. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  12. A Generic Mesh Data Structure with Parallel Applications

    Science.gov (United States)

    Cochran, William Kenneth, Jr.

    2009-01-01

    High performance, massively-parallel multi-physics simulations are built on efficient mesh data structures. Most data structures are designed from the bottom up, focusing on the implementation of linear algebra routines. In this thesis, we explore a top-down approach to design, evaluating the various needs of many aspects of simulation, not just…

  13. A prospective randomised trial comparing mesh types and fixation in totally extraperitoneal inguinal hernia repairs.

    Science.gov (United States)

    Cristaudo, Adam; Nayak, Arun; Martin, Sarah; Adib, Reza; Martin, Ian

    2015-05-01

    The totally extraperitoneal (TEP) approach for surgical repair of inguinal hernias has emerged as a popular technique. We conducted a prospective randomised trial to compare patient comfort scores using different mesh types and fixation using this technique. Over a 14 month period, 146 patients underwent 232 TEP inguinal hernia repairs. We compared the comfort scores of patients who underwent these procedures using different types of mesh and fixation. A non-absorbable 15 × 10 cm anatomical mesh fixed with absorbable tacks (Control group) was compared with either a non-absorbable 15 × 10 cm folding slit mesh with absorbable tacks (Group 2), a partially-absorbable 15 × 10 cm mesh with absorbable tacks (Group 3) or a non-absorbable 15 × 10 cm anatomical mesh fixed with 2 ml fibrin sealant (Group 4). Outcomes were compared at 1, 2, 4 and 12 weeks using the Carolina Comfort Scale (CCS) scores. At 1, 2, 4 and 12 weeks, the median global CCS scores were low for all treatment groups. Statistically significant differences were seen only for median CCS scores and subscores with the use of partially-absorbable mesh with absorbable tacks (Group 3) at weeks 2 and 4. However, these were no longer significant at week 12. In this study, the TEP inguinal hernia repair with minimal fixation results in low CCS scores. There were no statistical differences in CCS scores when comparing types of mesh, configuration of the mesh or fixation methods. Copyright © 2015 IJS Publishing Group Limited. Published by Elsevier Ltd. All rights reserved.

  14. 3D Mesh Compression and Transmission for Mobile Robotic Applications

    Directory of Open Access Journals (Sweden)

    Bailin Yang

    2016-01-01

    Full Text Available Mobile robots are useful for environment exploration and rescue operations. In such applications, it is crucial to accurately analyse and represent an environment, providing appropriate inputs for motion planning in order to support robot navigation and operations. 2D mapping methods are simple but cannot handle multilevel or multistory environments. To address this problem, 3D mapping methods generate structural 3D representations of the robot operating environment and its objects by 3D mesh reconstruction. However, they face the challenge of efficiently transmitting those 3D representations to system modules for 3D mapping, motion planning, and robot operation visualization. This paper proposes a quality-driven mesh compression and transmission method to address this. Our method is efficient, as it compresses a mesh by quantizing its transformed vertices without the need to spend time constructing an a-priori structure over the mesh. A visual distortion function is developed to govern the level of quantization, allowing mesh transmission to be controlled under different network conditions or time constraints. Our experiments demonstrate how the visual quality of a mesh can be manipulated by the visual distortion function.
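
    A Python sketch of the vertex-quantization step described above; the uniform quantizer and the RMS error model standing in for the paper's visual distortion function are assumptions:

        import numpy as np

        def quantize_vertices(vertices, max_rms_error):
            """Uniformly quantize vertex coordinates, choosing the coarsest bit depth
            whose estimated RMS error stays below max_rms_error."""
            vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
            extent = float(np.max(vmax - vmin))
            for bits in range(4, 25):                        # try progressively finer grids
                step = extent / (2**bits - 1)
                rms = step / np.sqrt(12.0)                   # RMS error of uniform quantization
                if rms <= max_rms_error:
                    q = np.round((vertices - vmin) / step).astype(np.int32)
                    return q, vmin, step, bits
            raise ValueError("requested distortion bound too tight for 24-bit quantization")

        verts = np.random.rand(1000, 3)
        q, origin, step, bits = quantize_vertices(verts, max_rms_error=1e-4)
        recovered = origin + q * step
        print(bits, "bits per coordinate, max abs error", np.abs(recovered - verts).max())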

  15. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    International Nuclear Information System (INIS)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS)
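
    For reference, the CADIS relations referred to above are usually written as follows (standard notation; the FW-CADIS variant discussed in the paper additionally weights the adjoint source with an inverse forward-flux estimate):

        \hat{q}(\vec r,E)=\frac{\phi^{\dagger}(\vec r,E)\,q(\vec r,E)}{\displaystyle\iint \phi^{\dagger}\,q\;\mathrm{d}E\,\mathrm{d}\vec r},
        \qquad
        \bar w(\vec r,E)=\frac{\displaystyle\iint \phi^{\dagger}\,q\;\mathrm{d}E\,\mathrm{d}\vec r}{\phi^{\dagger}(\vec r,E)},

    where q is the true source, \phi^{\dagger} is the adjoint flux for the response of interest, \hat{q} is the biased source and \bar w is the target weight used to set the weight windows.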

  16. An optimization-based framework for anisotropic simplex mesh adaptation

    Science.gov (United States)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
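
    The mesh-metric duality mentioned above rests on the standard notion of edge length measured in a Riemannian metric field M(x), quoted here only for orientation (the specific error functional optimized by the authors is given in the paper):

        L_M(e)=\int_0^1 \sqrt{\,e^{\mathsf T}\, M\big(x(t)\big)\, e\,}\;\mathrm{d}t ,

    and a mesh is regarded as a unit mesh with respect to M when every edge e satisfies L_M(e)\approx 1, so that optimizing the metric field is equivalent to optimizing element sizes and anisotropy.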

  17. VARIABLE MESH STIFFNESS OF SPUR GEAR TEETH USING ...

    African Journals Online (AJOL)

    ... gear engagement. A gear mesh kinematic simulation ... model is appropriate for VMS of a spur gear tooth. The assumptions for ... This process has been continued until one complete tooth meshing cycle is ... Element Method. Using MATLAB, ...

  18. Polarization properties of linearly polarized parabolic scaling Bessel beams

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Mengwen; Zhao, Daomu, E-mail: zhaodaomu@yahoo.com

    2016-10-07

    The intensity profiles for the dominant polarization, cross polarization, and longitudinal components of modified parabolic scaling Bessel beams with linear polarization are investigated theoretically. The transverse intensity distributions of the three electric components are intimately connected to the topological charge. In particular, the intensity patterns of the cross polarization and longitudinal components near the apodization plane reflect the sign of the topological charge. - Highlights: • We investigated the polarization properties of modified parabolic scaling Bessel beams with linear polarization. • We studied the evolution of transverse intensity profiles for the three components of these beams. • The intensity patterns of the cross polarization and longitudinal components can reflect the sign of the topological charge.

  19. GENERATION OF IRREGULAR HEXAGONAL MESHES

    Directory of Open Access Journals (Sweden)

    Vlasov Aleksandr Nikolaevich

    2012-07-01

    Decomposition is performed in a constructive way and, as an option, it involves a meshless representation. Further, this mapping method is used to generate the calculation mesh. In this paper, the authors analyze different cases of mapping onto simply connected and bi-connected canonical domains. They present forward and backward mapping techniques. Their potential application to the generation of nonuniform meshes within the framework of the asymptotic homogenization theory, used to assess and project effective characteristics of heterogeneous materials (composites), is also demonstrated.

  20. Bilateral Laparoscopic Totally Extraperitoneal Repair Without Mesh Fixation

    OpenAIRE

    Dehal, Ahmed; Woodward, Brandon; Johna, Samir; Yamanishi, Frank

    2014-01-01

    Background and Objectives: Mesh fixation during laparoscopic totally extraperitoneal repair is thought to be necessary to prevent recurrence. However, mesh fixation may increase postoperative chronic pain. This study aimed to describe the experience of a single surgeon at our institution performing this operation. Methods: We performed a retrospective review of the medical records of all patients who underwent bilateral laparoscopic totally extraperitoneal repair without mesh fixation for ing...

  1. Coupling of a 3-D vortex particle-mesh method with a finite volume near-wall solver

    Science.gov (United States)

    Marichal, Y.; Lonfils, T.; Duponcheel, M.; Chatelain, P.; Winckelmans, G.

    2011-11-01

    This coupling aims at improving the computational efficiency of high Reynolds number bluff body flow simulations by using two complementary methods and exploiting their respective advantages in distinct parts of the domain. Vortex particle methods are particularly well suited for free vortical flows such as wakes or jets (the computational domain -with non zero vorticity- is then compact and dispersion errors are negligible). Finite volume methods, however, can handle boundary layers much more easily due to anisotropic mesh refinement. In the present approach, the vortex method is used in the whole domain (overlapping domain technique) but its solution is highly underresolved in the vicinity of the wall. It thus has to be corrected by the near-wall finite volume solution at each time step. Conversely, the vortex method provides the outer boundary conditions for the near-wall solver. A parallel multi-resolution vortex particle-mesh approach is used here along with an Immersed Boundary method in order to take the walls into account. The near-wall flow is solved by OpenFOAM® using the PISO algorithm. We validate the methodology on the flow past a sphere at a moderate Reynolds number. F.R.S. - FNRS Research Fellow.

  2. Unstructured mesh adaptivity for urban flooding modelling

    Science.gov (United States)

    Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.

    2018-05-01

    Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes around the buildings and steep regions are placed when the flooding water reaches these regions. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.

  3. Variationally derived coarse mesh methods using an alternative flux representation

    International Nuclear Information System (INIS)

    Wojtowicz, G.; Holloway, J.P.

    1995-01-01

    Investigation of a previously reported variational technique for the solution of the 1-D, 1-group neutron transport equation in reactor lattices has inspired the development of a finite element formulation of the method. Compared to conventional homogenization methods in which node homogenized cross sections are used, the coefficients describing this system take on greater spatial dependence. However, the methods employ an alternative flux representation which allows the transport equation to be cast into a form whose solution has only a slow spatial variation and, hence, requires relatively few variables to describe. This alternative flux representation and the stationary property of a variational principle define a class of coarse mesh discretizations of transport theory capable of achieving order of magnitude reductions of eigenvalue and pointwise scalar flux errors as compared with diffusion theory while retaining diffusion theory's relatively low cost. Initial results of a 1-D spectral element approach are reviewed and used to motivate the finite element implementation which is more efficient and almost as accurate; one and two group results of this method are described

  4. Kinetic mesh-free method for flutter prediction in turbomachines

    Indian Academy of Sciences (India)

    Mesh-free kinetic upwind scheme; unsteady flows; modified CIR splitting ... scheme for solving the inviscid compressible Euler equations of gas ... typically carried out for about five cycles, in which the periodic behaviour of the flow is captured.

  5. A new preconditioner update strategy for the solution of sequences of linear systems in structural mechanics: application to saddle point problems in elasticity

    Science.gov (United States)

    Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier

    2017-12-01

    Many applications in structural mechanics require the numerical solution of sequences of linear systems typically arising from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take into account mechanical constraints. The resulting matrices then exhibit a saddle point structure and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is then to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider updating an existing algebraic or application-based preconditioner, using specific available information that exploits knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.

  6. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using the MPI and PETSc parallel libraries.

  7. Solving one-dimensional phase change problems with moving grid method and mesh free radial basis functions

    International Nuclear Information System (INIS)

    Vrankar, L.; Turk, G.; Runovc, F.; Kansa, E.J.

    2006-01-01

    Many heat-transfer problems involve a change of phase of material due to solidification or melting. Applications include: the safety studies of nuclear reactors (molten core concrete interaction), the drilling of high ice-content soil, the storage of thermal energy, etc. These problems are often called Stefan or moving boundary value problems. Mathematically, the interface motion is expressed implicitly in an equation for the conservation of thermal energy at the interface (Stefan conditions). This introduces a non-linear character to the system which treats each problem somewhat uniquely. The exact solution of phase change problems is limited exclusively to cases in which, e.g., the heat-transfer regions are infinite or semi-infinite one-dimensional spaces. Therefore, the solution is obtained either by approximate analytical methods or by numerical methods. Finite-difference methods and finite-element techniques have been used extensively for the numerical solution of moving boundary problems. Recently, the numerical methods have focused on the idea of using a mesh-free methodology for the numerical solution of partial differential equations based on radial basis functions. In our case we will study solid-solid transformation. The numerical solutions will be compared with analytical solutions. In our work we will examine the usefulness of radial basis functions (especially the multiquadric, MQ) for one-dimensional Stefan problems. The position of the moving boundary will be simulated by the moving grid method. The resultant system of RBF-PDE will be solved by affine space decomposition. (author)
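
    A minimal Python sketch of multiquadric (MQ) radial basis function interpolation, the building block referred to above; the node layout, the test function and the shape parameter c are arbitrary choices, and no Stefan condition or moving grid is included:

        import numpy as np

        def multiquadric_interpolate(x_nodes, f_nodes, x_eval, c=0.5):
            """Interpolate 1-D data with multiquadric RBFs phi(r) = sqrt(r**2 + c**2).
            The shape parameter c is an assumed value."""
            A = np.sqrt((x_nodes[:, None] - x_nodes[None, :])**2 + c**2)   # collocation matrix
            weights = np.linalg.solve(A, f_nodes)                          # expansion coefficients
            B = np.sqrt((x_eval[:, None] - x_nodes[None, :])**2 + c**2)
            return B @ weights

        x_nodes = np.linspace(0.0, 1.0, 15)
        f_nodes = np.exp(-5.0 * (x_nodes - 0.4)**2)        # stand-in temperature profile
        x_eval = np.linspace(0.0, 1.0, 200)
        f_eval = multiquadric_interpolate(x_nodes, f_nodes, x_eval)
        print("max error at the nodes:",
              np.abs(multiquadric_interpolate(x_nodes, f_nodes, x_nodes) - f_nodes).max())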

  8. Enriching Triangle Mesh Animations with Physically Based Simulation.

    Science.gov (United States)

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.

  9. [TVM (transvaginal mesh) surgical method for complex resolution of pelvic floor defects].

    Science.gov (United States)

    Adamík, Z

    2006-01-01

    Assessment of the effects of a new surgical method for complex resolution of pelvic floor defects. Case study. Department of Obstetrics and Gynaecology, Bata Hospital, Zlín. We evaluated the procedures and results of the new TVM (transvaginal mesh) surgical method, which we used in a group of 12 patients. Ten patients had vaginal prolapse following vaginal hysterectomy, and in two cases there was uterine prolapse with vaginal prolapse. In only one case was there a small protrusion of about 0.5 cm, which we resolved by removing the protruding section. The resulting anatomic effect was very good in all cases.

  10. Treatment of Mesh Skin Grafted Scars Using a Plasma Skin Regeneration System

    Directory of Open Access Journals (Sweden)

    Takamitsu Higashimori

    2010-01-01

    Full Text Available Objectives. Several modalities have been advocated to treat traumatic scars, including surgical techniques and laser resurfacing. Recently, a plasma skin regeneration (PSR) system has been investigated. There are no reports on plasma treatment of mesh skin grafted scars. The objective of our study is to evaluate the effectiveness and complications of plasma treatment of mesh skin grafted scars in Asian patients. Materials and Methods. Four Asian patients with mesh skin grafted scars were enrolled in the study. The plasma treatments were performed at monthly intervals with PSR, using energy settings of 3 to 4 J. Improvement was determined by patient questionnaires and physician evaluation of digital photographs taken prior to treatment and at 3 months post treatment. The patients were also evaluated for any side effects from the treatment. Results. All patients showed more than 50% improvement. The average pain score on a 10-point scale was 6.9 ± 1.2 (SD) and all patients tolerated the treatments. Temporary, localized hypopigmentation was observed in two patients. Hyperpigmentation and worsening of scarring were not observed. Conclusions. Plasma treatment is clinically effective and is associated with minimal complications when used to treat mesh skin grafted scars in Asian patients.

  11. Stability analysis of CMFD acceleration for the wavelet expansion method of neutron transport equation

    International Nuclear Information System (INIS)

    Zheng Youqi; Wu Hongchun; Cao Liangzhi

    2013-01-01

    This paper describes the stability analysis for the coarse mesh finite difference (CMFD) acceleration used in the wavelet expansion method. The nonlinear CMFD acceleration scheme is transformed by linearization and the Fourier ansatz is introduced into the linearized formulae. The spectral radius is defined as the stability criterion, which is the least upper bound (LUB) of the largest eigenvalue of the Fourier analysis matrix. The stability analysis considers the effect of mesh size (spectral length), coarse mesh division and scattering ratio. The results show that for the wavelet expansion method, the CMFD acceleration is conditionally stable. A small fine-mesh size brings stability and fast convergence. As the mesh size increases, the stability becomes worse. The scattering ratio does not noticeably impact the stability, which makes the CMFD acceleration highly efficient in strongly scattering cases. The results of the Fourier analysis are verified by numerical tests based on a homogeneous slab problem.
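
    The stability criterion used in such Fourier analyses can be summarized as follows (generic form; the specific amplification matrix depends on the linearized CMFD-accelerated wavelet scheme):

        \rho=\sup_{\omega}\;\big|\lambda_{\max}\big(T(\omega)\big)\big| < 1 ,

    where T(\omega) is the Fourier-transformed error-amplification matrix of the linearized iteration and \rho is the spectral radius; values of \rho below one indicate a (conditionally) stable, convergent acceleration.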

  12. Urogynecologic Surgical Mesh Implants

    Science.gov (United States)

    [Web-page fragment; only partial content is recoverable: procedures performed to treat pelvic floor disorders with surgical mesh include transvaginal mesh to treat POP and transabdominal mesh to treat ...; the page also addresses safety risks and the Final Order for Reclassification of Surgical Mesh for Transvaginal Pelvic Organ Prolapse Repair.]

  13. Linear dynamic coupling in geared rotor systems

    Science.gov (United States)

    David, J. W.; Mitchell, L. D.

    1986-01-01

    The effects of high-frequency oscillations caused by the gear mesh on components of a geared system that can be modeled as rigid discs are analyzed using linear dynamic coupling terms. The coupled, nonlinear equations of motion for a disc attached to a rotating shaft are presented. The results of a trial problem analysis show that the inclusion of the linear dynamic coupling terms can produce significant changes in the predicted response of geared rotor systems, and that the produced sideband responses are greater than the unbalanced response. The method is useful in designing gear drives for heavy-lift helicopters, industrial speed reducers, naval propulsion systems, and heavy off-road equipment.

  14. Cartesian anisotropic mesh adaptation for compressible flow

    International Nuclear Information System (INIS)

    Keats, W.A.; Lien, F.-S.

    2004-01-01

    Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This paper discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for compressible flow. This technique, developed for laminar flow by Ham, Lien and Strong, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this paper the method will be applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. The convection scheme used is the Advection Upstream Splitting Method (AUSM+), and the refinement/coarsening criteria are based on work done by Ham et al. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code is presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant. (author)

  15. Unstructured Mesh Movement and Viscous Mesh Generation for CFD-Based Design Optimization, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovations proposed are twofold: 1) a robust unstructured mesh movement method able to handle isotropic (Euler), anisotropic (viscous), mixed element (hybrid)...

  16. Linear methods in band theory

    DEFF Research Database (Denmark)

    Andersen, O. Krogh

    1975-01-01

    of Korringa-Kohn-Rostoker, linear-combination-of-atomic-orbitals, and cellular methods; the secular matrix is linear in energy, the overlap integrals factorize as potential parameters and structure constants, the latter are canonical in the sense that they neither depend on the energy nor the cell volume...

  17. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    International Nuclear Information System (INIS)

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of QN methods is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements. However, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while retaining the Hessian properties. One major motivation for our research is that a QN method may be good at solving certain types of minimization problems, but its efficiency degenerates when it is applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The algorithm is coded in the Occam 2 language and run on a transputer machine.
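
    For orientation, one standard QN update from which such parallel variants build their search directions is the BFGS update of the inverse Hessian approximation H_k (quoted as background; the paper explores several updates and line searches concurrently):

        H_{k+1}=\left(I-\rho_k\,s_k\,y_k^{\mathsf T}\right) H_k \left(I-\rho_k\,y_k\,s_k^{\mathsf T}\right)+\rho_k\,s_k\,s_k^{\mathsf T},
        \qquad
        \rho_k=\frac{1}{y_k^{\mathsf T} s_k},\quad s_k=x_{k+1}-x_k,\quad y_k=\nabla f(x_{k+1})-\nabla f(x_k),

    which keeps H_{k+1} positive definite whenever y_k^{\mathsf T}s_k>0.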

  18. ITMETH, Iterative Routines for Linear System

    International Nuclear Information System (INIS)

    Greenbaum, A.

    1989-01-01

    1 - Description of program or function: ITMETH is a collection of iterative routines for solving large, sparse linear systems. 2 - Method of solution: ITMETH solves general linear systems of the form AX=B using a variety of methods: Jacobi iteration; Gauss-Seidel iteration; incomplete LU decomposition or matrix splitting with iterative refinement; diagonal scaling, matrix splitting, or incomplete LU decomposition with the conjugate gradient method for the problem AA'Y=B, X=A'Y; bi-conjugate gradient method with diagonal scaling, matrix splitting, or incomplete LU decomposition; and ortho-min method with diagonal scaling, matrix splitting, or incomplete LU decomposition. ITMETH also solves symmetric positive definite linear systems AX=B using the conjugate gradient method with diagonal scaling or matrix splitting, or the incomplete Cholesky conjugate gradient method
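
    A self-contained NumPy sketch of one of the option combinations listed above, conjugate gradient with diagonal scaling (Jacobi) preconditioning for a symmetric positive definite system; this is an illustration, not the ITMETH routines:

        import numpy as np

        def diag_scaled_cg(A, b, tol=1e-10, max_iter=500):
            """Preconditioned conjugate gradient for SPD A with diagonal scaling."""
            Minv = 1.0 / np.diag(A)                # inverse of the diagonal preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = Minv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = Minv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 200                                    # small 1-D Laplacian test system
        A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
        b = np.ones(n)
        x = diag_scaled_cg(A, b)
        print("residual norm:", np.linalg.norm(b - A @ x))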

  19. Intravesical midurethral sling mesh erosion secondary to transvaginal mesh reconstructive surgery

    Directory of Open Access Journals (Sweden)

    Sukanda Bin Jaili

    2015-05-01

    Conclusion: Repeated vaginal reconstructive surgery may jeopardize a primary mesh or sling and pose a high risk of mesh erosion, which may be delayed for several years. Removal of the eroded mesh and bladder repair are feasible transvaginally with a good outcome.

  20. Mesh versus non-mesh repair of ventral abdominal hernias

    International Nuclear Information System (INIS)

    Jawaid, M.A.; Talpur, A.H.

    2008-01-01

    To investigate the relative effectiveness of mesh and suture repair of ventral abdominal hernias in terms of clinical outcome, quality of life and rate of recurrence of both techniques. This is a retrospective descriptive analysis of 236 patients with mesh and non-mesh repair of primary ventral hernias performed between January 2000 and December 2004 at the Surgery Department, Liaquat University of Medical and Health Sciences, Jamshoro. The record sheets of the patients were analyzed and data retrieved to compare the short-term and long-term results of both techniques. The retrieved data were statistically analyzed using SPSS version 11. There were 43 (18.22%) males and 193 (81.77%) females with a mean age of 51.79 years and an age range of 59 years (22-81). Para-umbilical hernia was the commonest ventral hernia, accounting for 49.8% (n=118) of the total study population, followed by incisional hernia comprising 24% (n=57) of the total number. There was a significant difference in the recurrence rate at the 3-year interval, with 23/101 (22.77%) recurrences in suture-repaired subjects compared to 10/135 (7.40%) in the mesh repair group. Chronic pain lasting up to 1-2 years was noted in 14 patients with suture repair. Wound infection was comparatively more common (8.14%) in the mesh group. Other variables such as operative and postoperative complications, total hospital stay and quality of life are also discussed. Mesh repair of ventral hernia is much superior to non-mesh suture repair in terms of recurrence and overall outcome. (author)

  1. [Current state of transvaginal meshes by resolution of pelvic organ prolapse].

    Science.gov (United States)

    Jírová, J; Pán, M

    Treatment of pelvic organ prolapse with transvaginal mesh kits is nowadays a widespread surgical method, which has partially replaced classic operations due to its high success rate and low number of recurrences. Just like any other surgical method, the placement of transvaginal mesh is associated with complications. In this article we attempt to review the more and less well-known facts about transvaginal meshes, their efficacy, recurrence rates and the spectrum of complications, and we compare this technique with the traditional surgical methods used to treat pelvic organ prolapse (without graft materials). Review. Department of Obstetrics and Gynecology, Regional hospital Mladá Boleslav a.s., Mladá Boleslav. Overview of the results of recent studies published in Czech and English in recent years. Pelvic organ prolapse repair with vaginal mesh generally has a lower recurrence rate, especially in patients with a wide genital hiatus and major levator ani avulsion. The spectrum of complications differs from that of classical techniques because of the presence of synthetic nonabsorbable material. Some of the specific complications not encountered during classical operations include vaginal mesh erosion, mesh infection associated with chronic pelvic pain, dyspareunia, protrusion of the mesh into adjacent organs, and rejection with progressive extrusion of the mesh. Initial enthusiasm has now been replaced by concerns about major complications. Future efforts should therefore be aimed at minimizing the rate of complications associated with transvaginal meshes. Besides using well-known and safe materials and providing specialized training of physicians in each mesh implantation technique, other precautions outlined in this article should help, such as a closer specification of the indications for the application of transvaginal mesh.

  2. DNA rendering of polyhedral meshes at the nanoscale

    Science.gov (United States)

    Benson, Erik; Mohammed, Abdulmelik; Gardell, Johan; Masich, Sergej; Czeizler, Eugen; Orponen, Pekka; Högberg, Björn

    2015-07-01

    It was suggested more than thirty years ago that Watson-Crick base pairing might be used for the rational design of nanometre-scale structures from nucleic acids. Since then, and especially since the introduction of the origami technique, DNA nanotechnology has enabled increasingly more complex structures. But although general approaches for creating DNA origami polygonal meshes and design software are available, there are still important constraints arising from DNA geometry and sense/antisense pairing, necessitating some manual adjustment during the design process. Here we present a general method of folding arbitrary polygonal digital meshes in DNA that readily produces structures that would be very difficult to realize using previous approaches. The design process is highly automated, using a routeing algorithm based on graph theory and a relaxation simulation that traces scaffold strands through the target structures. Moreover, unlike conventional origami designs built from close-packed helices, our structures have a more open conformation with one helix per edge and are therefore stable under the ionic conditions usually used in biological assays.

  3. Constructing C1 Continuous Surface on Irregular Quad Meshes

    Institute of Scientific and Technical Information of China (English)

    HE Jun; GUO Qiang

    2013-01-01

    A new method is proposed for surface construction on irregular quad meshes as an extension of uniform B-spline surfaces. Given a number of control points, which form a regular or irregular quad mesh, a weight function is constructed for each control point. The weight function is defined on a local domain and is C1 continuous. Then the whole surface is constructed by the weighted combination of all the control points. A property of the new method is that the surface is defined by piecewise C1 bi-cubic rational parametric polynomials on each quad face. It is an extension of uniform B-spline surfaces in the sense that its definition is analogous to that of the B-spline surface, and it produces a uniform bi-cubic B-spline surface if the control mesh is a regular quad mesh. Examples produced by the new method are also included.
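
    The following Python sketch mimics only the general construction described above (a normalized, weighted combination of control points with local, C1 weight functions); the simple bump weights used here are an assumption and are not the rational bi-cubic weights of the paper:

        import numpy as np

        def bump(t):
            """C1 'smoothstep' bump on [-1, 1]: value and slope vanish at the ends."""
            t = np.clip(np.abs(t), 0.0, 1.0)
            return 1.0 - t * t * (3.0 - 2.0 * t)

        def surface_point(u, v, control_points, radius=1.5):
            """Evaluate the surface as a normalized, weighted combination of control
            points, each weight being a local C1 bump centred at that point's site."""
            num, den = np.zeros(3), 0.0
            for (ui, vi), p in control_points:
                w = bump((u - ui) / radius) * bump((v - vi) / radius)
                num += w * np.asarray(p, dtype=float)
                den += w
            return num / den if den > 0 else num

        # A 3x3 control grid with a raised centre point.
        ctrl = [((i, j), (i, j, 1.0 if (i, j) == (1, 1) else 0.0))
                for i in range(3) for j in range(3)]
        print(surface_point(1.0, 1.0, ctrl))   # near the raised control point
        print(surface_point(0.2, 0.3, ctrl))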

  4. An efficient Adaptive Mesh Refinement (AMR) algorithm for the Discontinuous Galerkin method: Applications for the computation of compressible two-phase flows

    Science.gov (United States)

    Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky

    2018-06-01

    We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct- quad- and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of the points of interest such as interfaces, geometrical features, or flow discontinuities. The local increase in the expansion order (p-refinement) at areas of high strain rates or vorticity magnitude results in an increase of the order of accuracy in the region of shear layers and vortices. A graph of unitarian-trees, representing hexahedral, prismatic and tetrahedral elements is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates the on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with a minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across the non conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. Also, the AMR methodology is used for the modelling of the interaction between droplets and the carrier phase in a two-phase flow. This approach is applied to the analysis of a spray injected into a chamber of quiescent air, using the Eulerian
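
    The tree-based bookkeeping sketched in this abstract (cells that grow branches of self-similar children, with the active mesh recovered from the leaves) can be illustrated with a minimal quadtree in Python. The class and refinement criterion below are hypothetical and are not part of ForestDG; they only show the general h-refinement idea.

```python
# Minimal quadtree sketch of h-refinement bookkeeping (illustrative only;
# names and the refinement criterion are hypothetical, not ForestDG's).

class QuadCell:
    def __init__(self, x0, y0, size, level=0, parent=None):
        self.x0, self.y0, self.size = x0, y0, size   # lower-left corner and edge length
        self.level = level                           # refinement level
        self.parent = parent
        self.children = []                           # empty list -> leaf (active cell)

    def refine(self):
        """Split this cell into four self-similar children (h-refinement)."""
        h = self.size / 2.0
        self.children = [QuadCell(self.x0 + i * h, self.y0 + j * h, h,
                                  self.level + 1, parent=self)
                         for j in (0, 1) for i in (0, 1)]

    def leaves(self):
        """Yield the active (leaf) cells of the tree."""
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

# Refine wherever an indicator (here: proximity to a point 'feature') is large.
root = QuadCell(0.0, 0.0, 1.0)
feature = (0.3, 0.7)
for _ in range(4):
    for cell in list(root.leaves()):
        cx, cy = cell.x0 + cell.size / 2, cell.y0 + cell.size / 2
        if abs(cx - feature[0]) < cell.size and abs(cy - feature[1]) < cell.size:
            cell.refine()

print("active cells:", sum(1 for _ in root.leaves()))
```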

  5. Conservation Properties of the Hamiltonian Particle-Mesh method for the Quasi-Geostrophic Equations on a sphere

    NARCIS (Netherlands)

    H. Thorsdottir (Halldora)

    2011-01-01

    The Hamiltonian particle-mesh (HPM) method is used to solve the Quasi-Geostrophic model generalized to a sphere, using the Spherepack modeling package to solve the Helmholtz equation on a colatitude-longitude grid with spherical harmonics. The predicted energy conservation of a

  6. Comparative efficacy of Prolene and Prolene-Vicryl composite mesh for experimental ventral hernia repair in dogs.

    Science.gov (United States)

    Anjum, H; Bokhari, S G; Khan, M A; Awais, M; Mughal, Z U; Shahzad, H K; Ijaz, F; Siddiqui, M I; Khan, I U; Chaudhry, A S; Akhtar, R; Aslam, S; Akbar, H; Asif, M; Maan, M K; Khan, M A; Noor, A; Khan, W A; Ullah, A; Hayat, M A

    2016-01-01

    In this study, the efficacy of two hernia mesh implants, conventional Prolene and a novel Prolene-Vicryl composite mesh, was assessed for experimental ventral hernia repair in dogs. Twelve healthy mongrel dogs were selected and randomly divided into three groups, A, B and C (n=4). In all groups, an experimental laparotomy was performed; thereafter, the posterior rectus sheath and peritoneum were sutured together, while a 5 × 5 cm defect was created in the rectus muscle belly and anterior rectus sheath. For sublay hernioplasty, the hernia mesh (Prolene: group A; Prolene-Vicryl composite mesh: group B) was implanted over the posterior rectus sheath. In group C (control), no mesh was implanted; instead the laparotomy incision was closed after a herniorrhaphy. Post-operative pain, mesh shrinkage and adhesion formation were assessed as short-term complications. Post-operatively, pain at the surgical site was significantly less (P<0.001) in group B (composite mesh); mesh shrinkage was also significantly less in group B (21.42%, P<0.05) than in group A (Prolene mesh shrinkage: 58.18%). Group B (composite mesh) also showed less than 25% adhesions (Mean ± SE: 0.75 ± 0.50 scores, P≤0.013) when assessed on the basis of a Quantitative Modified Diamond scale; a Qualitative Adhesion Tenacity scale likewise showed either no adhesions (n=2) or only flimsy adhesions (n=2) in group B (composite mesh), in contrast to group A (Prolene), which manifested greater adhesion formation and dense adhesions requiring blunt dissection. In conclusion, the Prolene-Vicryl composite mesh proved superior to the Prolene mesh in terms of less mesh contraction, fewer adhesions and no short-term follow-up complications.

  7. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    International Nuclear Information System (INIS)

    Kutepov, A. L.

    2017-01-01

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on a linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  8. A posteriori estimator and adaptive mesh refinement for finite volume finite element method for monophasic flow and solute transport in porous media

    International Nuclear Information System (INIS)

    Amor, H.; Bourgeois, M.

    2012-01-01

    Document available in extended abstract form only. The disposal of high level, long lived waste in deep underground clay formations is investigated by several countries including France. In the safety assessment of such geological repositories, a thoughtful consideration must be given to the mechanisms and possible pathways of migration of radionuclides released from waste packages. However, when modelling the transfer of radionuclides throughout the disposal facilities and geological formations, the numerical simulations must take into consideration, in addition to long durations of concern, the variety in the properties as well as in geometrical scales of the different components of the overall disposal, including the host formation. This task presents significant computational challenges. Numerical methods used in the MELODIE software The MELODIE software is developed by IRSN, and constantly upgraded, with the aim to assess the long-term containment capabilities of underground and surface radioactive waste repositories. The MELODIE software models water flow and the phenomena involved in the transport of radionuclides in saturated and unsaturated porous media in 2 and 3 dimensions; chemical processes are represented by a retardation factor and a solubility limit, for sorption and solubility respectively, integrated in the computational equations. These equations are discretized using a so-called Finite Volume Finite Element method (FVFE), which is based on a Galerkin method to discretize time and variables, together with a Finite Volume method using the Godunov scheme for the convection term. The FVFE method is used to convert partial differential equations into a finite number of algebraic equations that match the number of nodes in the mesh used to model the considered domain. It is also used to stabilise the numerical scheme. In order to manage the variety in properties and geometrical scales of underground disposal components, an a posteriori error estimator

  9. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    Science.gov (United States)

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
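
    The two main ingredients of this workflow, surface extraction by marching cubes and random-walk synthesis of a diffusion signal, can be sketched in a few lines of Python. The snippet below assumes scikit-image is available, uses a synthetic volume instead of confocal data, and lets the walkers diffuse freely, so it only illustrates the pipeline rather than the paper's constrained intra-mesh simulation.

```python
# Sketch: extract a surface mesh from a 3D image with marching cubes and
# synthesize a diffusion-MRI-like signal from a free random walk.
# Illustrative only; the paper restricts the walkers to the mesh interior.
import numpy as np
from skimage.measure import marching_cubes

# Synthetic "microscopy" volume: a solid sphere.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)

verts, faces, normals, values = marching_cubes(volume, level=0.5)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")

# Free random walk (Gaussian steps) and phase accumulation for one q-value.
rng = np.random.default_rng(0)
n_walkers, n_steps, dt, D = 10_000, 200, 1e-4, 2.0e-3    # toy units
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_walkers, n_steps, 3))
displacement = steps.sum(axis=1)                          # net displacement
q = np.array([0.0, 0.0, 50.0])                            # gradient wave vector
signal = np.abs(np.mean(np.exp(1j * displacement @ q)))   # E(q) = <exp(i q.r)>
print(f"normalized diffusion signal E(q) = {signal:.3f}")
```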

  10. Effect of mesh-peel ply variation on mechanical properties of E-glas composite by infusion vacuum method

    Science.gov (United States)

    Abdurohman, K.; Siahaan, Mabe

    2018-04-01

    Composite materials made of EW-135 glass fiber and epoxy Lycal resin were fabricated by the vacuum infusion method. The dried glass fiber is arranged in a mold, which is then connected to a vacuum machine and a resin tube. The vacuum machine is turned on and the resin is simultaneously drawn in and flowed into the mold. This paper reports on the effect of using a single mesh-peel ply on the upper side of the laminate (called A) and of using double mesh-peel ply on the upper and lower sides of the laminate (called B), with normal and ±45° glass fiber arrangements, in the vacuum infusion process. Tensile test specimens were then manufactured and their tensile strength tested with a 100 kN Tensilon RTF 2410 universal testing machine at room temperature with constant crosshead speed. The tensile test results for single and double layers showed that double mesh-peel ply can increase tensile strength by 14% and Young's modulus by 17%.

  11. Watermarking on 3D mesh based on spherical wavelet transform.

    Science.gov (United States)

    Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng

    2004-03-01

    In this paper we propose a robust watermarking algorithm for 3D meshes. The algorithm is based on the spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using the spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, watermark embedding, spherical wavelet inverse transform, and finally resampling the watermarked mesh to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.

  12. Mesh removal following transvaginal mesh placement: a case series of 104 operations.

    Science.gov (United States)

    Marcus-Braun, Naama; von Theobald, Peter

    2010-04-01

    The objective of the study was to reveal the way we treat vaginal mesh complications in a trained referral center. This is a retrospective review of all patients who underwent surgical removal of transvaginal mesh for mesh-related complications during a 5-year period. Eighty-three patients underwent 104 operations including 61 complete mesh removal, 14 partial excision, 15 section of sub-urethral sling, and five laparoscopies. Main indications were erosion, infection, granuloma, incomplete voiding, and pain. Fifty-eight removals occurred more than 2 years after the primary mesh placement. Mean operation time was 21 min, and there were two intraoperative and ten minor postoperative complications. Stress urinary incontinence (SUI) recurred in 38% and cystocele in 19% of patients. In a trained center, mesh removal was found to be a quick and safe procedure. Mesh-related complications may frequently occur more than 2 years after the primary operation. Recurrence was mostly associated with SUI and less with genital prolapse.

  13. Open preperitoneal groin hernia repair with mesh

    DEFF Research Database (Denmark)

    Andresen, Kristoffer; Rosenberg, Jacob

    2017-01-01

    Background For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. Data sources...... A systematic review was conducted and reported according to the PRISMA statement. PubMed, Cochrane library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days follow up following repair of an inguinal hernia with an open preperitoneal mesh......-analysis. Open preperitoneal techniques with placement of a mesh through an open approach seem promising compared with the standard anterior techniques. This systematic review provides an overview of these techniques together with a description of surgical methods and clinical outcomes....

  14. Open preperitoneal groin hernia repair with mesh

    DEFF Research Database (Denmark)

    Andresen, Kristoffer; Rosenberg, Jacob

    2017-01-01

    BACKGROUND: For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. DATA SOURCES......: A systematic review was conducted and reported according to the PRISMA statement. PubMed, Cochrane library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days follow up following repair of an inguinal hernia with an open preperitoneal mesh......-analysis. Open preperitoneal techniques with placement of a mesh through an open approach seem promising compared with the standard anterior techniques. This systematic review provides an overview of these techniques together with a description of surgical methods and clinical outcomes....

  15. Surface orientation effects on bending properties of surgical mesh are independent of tensile properties.

    Science.gov (United States)

    Simon, David D; Andrews, Sharon M; Robinson-Zeigler, Rebecca; Valdes, Thelma; Woods, Terry O

    2018-02-01

    Current mechanical testing of surgical mesh focuses primarily on tensile properties even though implanted devices are not subjected to pure tensile loads. Our objective was to determine the flexural (bending) properties of surgical mesh and determine if they correlate with mesh tensile properties. The flexural rigidity values of 11 different surgical mesh designs were determined along three textile directions (machine, cross-machine, and 45° to machine; n = 5 for each) using ASTM D1388-14 while tracking surface orientation. Tensile testing was also performed on the same specimens using ASTM D882-12. Linear regressions were performed to compare mesh flexural rigidity to mesh thickness, areal mass density, filament diameter, ultimate tensile strength, and maximum extension. Of 33 mesh specimen groups, 30 had significant differences in flexural rigidity values when comparing surface orientations (top and bottom). Flexural rigidity and mesh tensile properties also varied with textile direction (machine and cross-machine). There was no strong correlation between the flexural and tensile properties, with mesh thickness having the best overall correlation with flexural rigidity. Currently, surface orientation is not indicated on marketed surgical mesh, and a single mesh may behave differently depending on the direction of loading. The lack of correlation between flexural stiffness and tensile properties indicates the need to examine mesh bending stiffness to provide a more comprehensive understanding of surgical mesh mechanical behaviors. Further investigation is needed to determine if these flexural properties result in the surgical mesh behaving mechanically different depending on implantation direction. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 854-862, 2018. © 2017 Wiley Periodicals, Inc.
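
    The correlation analysis described above amounts to simple linear regressions between paired mechanical measurements. A sketch with SciPy is shown below; the numbers are made up for illustration and are not the study's data.

```python
# Sketch of the type of linear regression used to compare mesh properties.
# The values are invented for illustration; they are not the study's data.
import numpy as np
from scipy.stats import linregress

thickness_mm      = np.array([0.30, 0.45, 0.50, 0.55, 0.60, 0.70, 0.75])
flexural_rigidity = np.array([1.2, 2.9, 3.5, 4.1, 5.0, 7.2, 8.1])    # arbitrary units
tensile_strength  = np.array([55, 48, 60, 52, 58, 49, 57])            # N/cm, arbitrary

for name, x in [("thickness", thickness_mm), ("tensile strength", tensile_strength)]:
    fit = linregress(x, flexural_rigidity)
    print(f"flexural rigidity vs {name}: R^2 = {fit.rvalue**2:.2f}, "
          f"slope = {fit.slope:.2f}, p = {fit.pvalue:.3f}")
```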

  16. Strong-stability-preserving additive linear multistep methods

    KAUST Repository

    Hadjimichael, Yiannis

    2018-02-20

    The analysis of strong-stability-preserving (SSP) linear multistep methods is extended to semi-discretized problems for which different terms on the right-hand side satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain larger monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding nonadditive SSP linear multistep methods.

  17. Glue versus suture for mesh fixation in inguinal hernia repair.

    Science.gov (United States)

    Chandrasekar, Shruthi; Jeyakumar, S; Ganapathy, Tharun

    2018-03-22

    glue was used to fix the mesh instead of sutures. The tissue glue used in this study was N-butyl-2-cyanoacrylate. All patients in the study underwent surgery by a single group of surgeons to maintain homogeneity and were observed in the hospital for 72 h. Pain on the VAS scale was noted at 12, 24, 48 and 72 h, 1 week, 1 month, 3 months and 6 months. Operative time and any complications were also recorded. Results analysed with SPSS software show a significant difference in the time taken by the two methods, with glue taking significantly less time than sutures. A significant difference was also seen in immediate and chronic post-operative pain between the two groups. However, the complication rates in both groups were found to be the same. It can thus be concluded from this study that tissue-glue mesh fixation is superior to suture mesh fixation in open inguinal repair in terms of operative time and immediate and chronic post-operative pain. Copyright © 2018. Published by Elsevier Ltd.

  18. How to model wireless mesh networks topology

    International Nuclear Information System (INIS)

    Sanni, M L; Hashim, A A; Anwar, F; Ali, S; Ahmed, G S M

    2013-01-01

    The specification of a network connectivity model or topology is the starting point of design and analysis in computer network research. A Wireless Mesh Network is an autonomic network that is dynamically self-organised and self-configured, with the mesh nodes establishing automatic connectivity with adjacent nodes in the relay network of wireless backbone routers. Research in Wireless Mesh Networks ranges from node deployment to internetworking issues with sensor, Internet and cellular networks. This research requires modelling of the relationships and interactions among nodes, including technical characteristics of the links, while satisfying the architectural requirements of the physical network. However, the existing topology generators model geographic topologies which constitute different architectures, and thus may not be suitable for Wireless Mesh Network scenarios. The existing methods of topology generation are explored and analysed, and parameters for their characterisation are identified. Furthermore, an algorithm for the design of Wireless Mesh Network topology based on a square grid model is proposed in this paper; a generic sketch of such a square-grid topology is given below. The performance of the generated topology is also evaluated. This research is particularly important for generating close-to-real topologies, ensuring the relevance of designs to the intended network and the validity of results obtained in Wireless Mesh Network research.
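
    One plausible reading of a square-grid topology model is routers placed on a jittered grid with links created whenever two nodes fall within radio range. The Python sketch below (using networkx) follows that reading; the grid size, jitter and range parameters are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of a square-grid Wireless Mesh Network topology: routers on a jittered
# grid, links created whenever two nodes are within radio range.  Parameters
# are illustrative, not taken from the paper.
import itertools
import random
import networkx as nx

def square_grid_mesh(n=5, spacing=100.0, jitter=10.0, radio_range=130.0, seed=1):
    rng = random.Random(seed)
    G = nx.Graph()
    for i, j in itertools.product(range(n), repeat=2):
        G.add_node((i, j), pos=(i * spacing + rng.uniform(-jitter, jitter),
                                j * spacing + rng.uniform(-jitter, jitter)))
    for u, v in itertools.combinations(G.nodes, 2):
        (x1, y1), (x2, y2) = G.nodes[u]["pos"], G.nodes[v]["pos"]
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= radio_range:
            G.add_edge(u, v)
    return G

G = square_grid_mesh()
print(G.number_of_nodes(), "routers,", G.number_of_edges(), "links")
print("average node degree:", 2 * G.number_of_edges() / G.number_of_nodes())
```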

  19. Recurrence and Pain after Mesh Repair of Inguinal Hernias

    African Journals Online (AJOL)

    Abstract. Background: Surgery for inguinal hernias has ... repair. Methods: The study was conducted on all inguinal hernia patients operated between 1st. October ... bilateral (1.6%). Only 101 .... Open Mesh Versus Laparoscopic Mesh. Repair ...

  20. Polygonal Prism Mesh in the Viscous Layers for the Polyhedral Mesh Generator, PolyGen

    International Nuclear Information System (INIS)

    Lee, Sang Yong; Park, Chan Eok; Kim, Shin Whan

    2015-01-01

    Polyhedral meshes are known to have some benefits over tetrahedral meshes. Efforts have been made to set up a polyhedral mesh generation system with the open source programs SALOME and TetGen. The evaluation has shown that the polyhedral mesh generation system is promising, but it is necessary to extend the capability of the system to handle viscous layers in order to be a generalized mesh generator. A brief review of previous work on mesh generation for viscous layers is given in section 2, and several challenging issues for polygonal prism mesh generation are discussed as well. The procedure to generate a polygonal prism mesh is discussed in detail in section 3, and conclusions follow in section 4. A procedure to generate meshes in the viscous layers with PolyGen has been successfully designed, but more effort has to be made to find the best way of generating meshes for viscous layers. Using the extrusion direction of the STL data will be the first of the trials in the near future.

  1. Highly parallel demagnetization field calculation using the fast multipole method on tetrahedral meshes with continuous sources

    Science.gov (United States)

    Palmesi, P.; Exl, L.; Bruckner, F.; Abert, C.; Suess, D.

    2017-11-01

    The long-range magnetic field is the most time-consuming part of micromagnetic simulations. Computational improvements can relieve problems related to this bottleneck. This work presents an efficient implementation of the Fast Multipole Method (FMM) for the magnetic scalar potential as used in micromagnetics. The novelty lies in extending FMM to linearly magnetized tetrahedral sources, making it interesting also for other areas of computational physics. We treat the near field directly and use (exact) numerical integration of the multipole expansion in the far field. This approach tackles important issues like the vectorial and continuous nature of the magnetic field. By using FMM the calculations scale linearly in time and memory.

  2. Use of mesh in laparoscopic paraesophageal hernia repair

    DEFF Research Database (Denmark)

    Müller-Stich, Beat P.; Kenngott, Hannes G.; Gondan, Matthias

    2015-01-01

    Introduction. Mesh augmentation seems to reduce recurrences following laparoscopic paraesophageal hernia repair (LPHR). However, there is an uncertain risk of mesh-associated complications. Risk-benefit analysis might solve the dilemma. Materials and Methods. A systematic literature search...... potential benefits of LMAH. All data regarding LMAH were used to estimate risk of mesh-associated complications. Risk-benefit analysis was performed using a Markov Monte Carlo decision-analytic model. Results. Meta-analysis of 3 RCTs and 9 OCSs including 915 patients revealed a significantly lower...

  3. INGEN: a general-purpose mesh generator for finite element codes

    International Nuclear Information System (INIS)

    Cook, W.A.

    1979-05-01

    INGEN is a general-purpose mesh generator for two- and three-dimensional finite element codes. The basic parts of the code are surface and three-dimensional region generators that use linear-blending interpolation formulas. These generators are based on an i, j, k index scheme that is used to number nodal points, construct elements, and develop displacement and traction boundary conditions. The code can generate truss elements (2 nodal points); plane stress, plane strain, and axisymmetric two-dimensional continuum elements (4 to 8 nodal points); plate elements (4 to 8 nodal points); and three-dimensional continuum elements (8 to 21 nodal points). The traction loads generated are consistent with the elements generated. The expansion-contraction option is of special interest: it makes it possible to change an existing mesh so that some regions are refined and others are made coarser than the original mesh. 9 figures
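
    The linear-blending interpolation underlying surface and region generators of this kind is essentially transfinite (Coons-patch) interpolation of the boundary curves onto an (i, j)-indexed grid. The sketch below shows the generic 2D form in Python; it is only an illustration of the idea, not INGEN's actual implementation.

```python
# Generic 2D transfinite (Coons / linear-blending) interpolation: given four
# boundary curves of a region, generate an (i, j)-indexed structured mesh.
# Illustrative sketch only, not the INGEN code.
import numpy as np

def coons_patch(bottom, top, left, right, ni, nj):
    """bottom/top/left/right: callables mapping [0,1] -> (x, y)."""
    u = np.linspace(0.0, 1.0, ni)[:, None, None]   # shape (ni, 1, 1)
    v = np.linspace(0.0, 1.0, nj)[None, :, None]   # shape (1, nj, 1)
    B = lambda f, t: np.array([f(s) for s in np.ravel(t)]).reshape(-1, 2)
    Pb, Pt = B(bottom, u)[:, None, :], B(top, u)[:, None, :]
    Pl, Pr = B(left, v)[None, :, :], B(right, v)[None, :, :]
    c00, c10, c01, c11 = map(np.array, (bottom(0), bottom(1), top(0), top(1)))
    # Linear blending of the two rulings minus the bilinear corner correction.
    return ((1 - v) * Pb + v * Pt + (1 - u) * Pl + u * Pr
            - ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
               + (1 - u) * v * c01 + u * v * c11))

# Example: a quarter annulus between radii 1 and 2.
theta = lambda s: s * np.pi / 2
mesh = coons_patch(lambda s: (np.cos(theta(s)), np.sin(theta(s))),          # inner arc
                   lambda s: (2 * np.cos(theta(s)), 2 * np.sin(theta(s))),  # outer arc
                   lambda t: (1 + t, 0.0),                                   # radial edge
                   lambda t: (0.0, 1 + t),                                   # radial edge
                   ni=9, nj=5)
print(mesh.shape)   # (9, 5, 2): x, y coordinates indexed by (i, j)
```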

  4. Parallel Performance Optimizations on Unstructured Mesh-based Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-01-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
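
    Partitioning an unstructured mesh for load balance is, at its core, graph partitioning of the mesh connectivity. The sketch below illustrates that generic step with recursive Kernighan-Lin bisection in networkx; it is only a toy stand-in, not the partitioner actually used for MPAS-Ocean.

```python
# Sketch: partition an unstructured-mesh connectivity graph for load balance by
# recursive Kernighan-Lin bisection.  Generic illustration, not the MPAS-Ocean tooling.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def recursive_bisect(G, n_parts):
    """Split the node set of G into n_parts (a power of two) balanced parts."""
    parts = [set(G.nodes)]
    while len(parts) < n_parts:
        part = parts.pop(0)
        a, b = kernighan_lin_bisection(G.subgraph(part), seed=0)
        parts.extend([set(a), set(b)])
    return parts

# Stand-in for a mesh dual graph: a triangular lattice of cells.
G = nx.triangular_lattice_graph(20, 20)
parts = recursive_bisect(G, 4)
cut_edges = sum(1 for u, v in G.edges
                if next(i for i, p in enumerate(parts) if u in p)
                != next(i for i, p in enumerate(parts) if v in p))
print("part sizes:", [len(p) for p in parts], "| cut edges:", cut_edges)
```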

  5. MHD simulations on an unstructured mesh

    International Nuclear Information System (INIS)

    Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Sugiyama, L.E.

    1998-01-01

    Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D

  6. GALAXY CLUSTER RADIO RELICS IN ADAPTIVE MESH REFINEMENT COSMOLOGICAL SIMULATIONS: RELIC PROPERTIES AND SCALING RELATIONSHIPS

    International Nuclear Information System (INIS)

    Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.

    2011-01-01

    Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics-extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.

  7. Linearly scaling and almost Hamiltonian dielectric continuum molecular dynamics simulations through fast multipole expansions

    Energy Technology Data Exchange (ETDEWEB)

    Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 München (Germany)

    2015-11-14

    Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.

  8. An h-adaptive mesh method for Boltzmann-BGK/hydrodynamics coupling

    International Nuclear Information System (INIS)

    Cai Zhenning; Li Ruo

    2010-01-01

    We introduce a coupled method for hydrodynamic and kinetic equations on 2-dimensional h-adaptive meshes. We adopt the Euler equations with a fast kinetic solver in the region near thermodynamical equilibrium, while use the Boltzmann-BGK equation in kinetic regions where fluids are far from equilibrium. A buffer zone is created around the kinetic regions, on which a gradually varying numerical flux is adopted. Based on the property of a continuously discretized cut-off function which describes how the flux varies, the coupling will be conservative. In order for the conservative 2-dimensional specularly reflective boundary condition to be implemented conveniently, the discrete Maxwellian is approximated by a high order continuous formula with improved accuracy on a disc instead of on a square domain. The h-adaptive method can work smoothly with a time-split numerical scheme. Through h-adaptation, the cell number is greatly reduced. This method is particularly suitable for problems with hydrodynamics breakdown on only a small part of the whole domain, so that the total efficiency of the algorithm can be greatly improved. Three numerical examples are presented to validate the proposed method and demonstrate its efficiency.
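
    The buffer-zone coupling described above, a numerical flux that varies gradually between the kinetic and hydrodynamic descriptions through a continuously discretized cut-off function, can be sketched as follows. The flux profiles and the smoothstep cut-off used here are placeholders chosen for illustration, not the paper's scheme.

```python
# Sketch of a buffer-zone flux blend between a kinetic and a hydrodynamic
# solver: F = s(x) * F_kinetic + (1 - s(x)) * F_hydro, with a smooth cut-off s.
# The flux functions and cut-off profile are illustrative placeholders.
import numpy as np

def cutoff(x, x_start, x_end):
    """Smooth C^1 ramp from 0 (hydro region) to 1 (kinetic region)."""
    t = np.clip((x - x_start) / (x_end - x_start), 0.0, 1.0)
    return 3 * t**2 - 2 * t**3                     # smoothstep

x = np.linspace(0.0, 1.0, 201)
flux_hydro   = np.sin(2 * np.pi * x)               # placeholder hydrodynamic flux
flux_kinetic = np.sin(2 * np.pi * x) + 0.05 * np.cos(20 * np.pi * x)  # placeholder

s = cutoff(x, x_start=0.4, x_end=0.6)              # buffer zone on [0.4, 0.6]
flux_coupled = s * flux_kinetic + (1.0 - s) * flux_hydro

# Because s varies continuously across the buffer, the blend introduces no jump
# at either end of the buffer zone.
print("mismatch outside the buffer:",
      abs(flux_coupled[x <= 0.4] - flux_hydro[x <= 0.4]).max(),
      abs(flux_coupled[x >= 0.6] - flux_kinetic[x >= 0.6]).max())
```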

  9. Bayes linear statistics, theory & methods

    CERN Document Server

    Goldstein, Michael

    2007-01-01

    Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers:The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...

  10. Optimal control linear quadratic methods

    CERN Document Server

    Anderson, Brian D O

    2007-01-01

    This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the

  11. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    Science.gov (United States)

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas' volume image to the subject's one, and establishes a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas' mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT-database is used to assess the capability to accurately capture the complexity of each subject-specific Lung's geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery.

  12. Mesh optimization for microbial fuel cell cathodes constructed around stainless steel mesh current collectors

    KAUST Repository

    Zhang, Fang

    2011-02-01

    Mesh current collectors made of stainless steel (SS) can be integrated into microbial fuel cell (MFC) cathodes constructed of a reactive carbon black and Pt catalyst mixture and a poly(dimethylsiloxane) (PDMS) diffusion layer. It is shown here that the mesh properties of these cathodes can significantly affect performance. Cathodes made from the coarsest mesh (30-mesh) achieved the highest maximum power of 1616 ± 25 mW m-2 (normalized to cathode projected surface area; 47.1 ± 0.7 W m-3 based on liquid volume), while the finest mesh (120-mesh) had the lowest power density (599 ± 57 mW m-2). Electrochemical impedance spectroscopy showed that charge transfer and diffusion resistances decreased with increasing mesh opening size. In MFC tests, the cathode performance was primarily limited by reaction kinetics, and not mass transfer. Oxygen permeability increased with mesh opening size, accounting for the decreased diffusion resistance. At higher current densities, diffusion became a limiting factor, especially for fine mesh with low oxygen transfer coefficients. These results demonstrate the critical nature of the mesh size used for constructing MFC cathodes. © 2010 Elsevier B.V. All rights reserved.

  13. The strategy of alternate direction adapted to a coarse mesh method for the solution of neutron diffusion problems

    International Nuclear Information System (INIS)

    Watson, F.V.

    1982-01-01

    An adaptation of the alternate direction method for coarse mesh calculations is presented. The algorithm is applicable to two- and three-dimensional problems, the latter being the more interesting one. (E.G.) [pt
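
    The alternating-direction idea itself, sweeping implicitly in one coordinate direction at a time so that only tridiagonal systems need to be solved, is easy to illustrate on a 2D diffusion problem. The sketch below is the generic Peaceman-Rachford ADI scheme for the heat equation; it is not the coarse-mesh neutron-diffusion algorithm of the report.

```python
# Generic Peaceman-Rachford ADI sweep for 2D diffusion u_t = D*(u_xx + u_yy):
# alternate implicit tridiagonal solves in x and y.  Illustrates the
# alternating-direction idea only; not the coarse-mesh algorithm itself.
import numpy as np
from scipy.linalg import solve_banded

def d2x(v):  # second difference in x with homogeneous Dirichlet ghost values
    vp = np.pad(v, ((1, 1), (0, 0)))
    return vp[2:, :] - 2 * vp[1:-1, :] + vp[:-2, :]

def d2y(v):  # second difference in y with homogeneous Dirichlet ghost values
    vp = np.pad(v, ((0, 0), (1, 1)))
    return vp[:, 2:] - 2 * vp[:, 1:-1] + vp[:, :-2]

def adi_step(u, r):
    n = u.shape[0]
    ab = np.zeros((3, n))                       # banded form of (I - r*d2)
    ab[0, 1:], ab[1, :], ab[2, :-1] = -r, 1 + 2 * r, -r
    rhs = u + r * d2y(u)                        # sweep 1: implicit in x, explicit in y
    half = np.column_stack([solve_banded((1, 1), ab, rhs[:, j]) for j in range(n)])
    rhs = half + r * d2x(half)                  # sweep 2: implicit in y, explicit in x
    return np.vstack([solve_banded((1, 1), ab, rhs[i, :]) for i in range(n)])

n, D, dt = 63, 1.0, 1e-4
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h                     # interior points of the unit square
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
for _ in range(100):
    u = adi_step(u, D * dt / (2 * h * h))
print("peak after t=0.01:", u.max(), " exact:", np.exp(-2 * np.pi**2 * D * 0.01))
```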

  14. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed; Kadoura, Ahmad Salim; Sun, Shuyu

    2016-01-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation

  15. Grid adaptation using chimera composite overlapping meshes

    Science.gov (United States)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communication through the boundary interfaces between the separated grids is carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are presented. With the present method, the salient features are well resolved.
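
    The interface communication between overset grids amounts to trilinear interpolation of donor-grid data at the fringe points of the receiving grid. The snippet below illustrates that single ingredient with SciPy's regular-grid interpolator; the donor field and fringe points are made up, and this is not the chimera scheme itself.

```python
# Sketch: trilinear interpolation of donor-grid data onto the fringe points of
# an overset (chimera) grid, using SciPy's regular-grid interpolator.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Donor (background) grid and a smooth field defined on it.
x = np.linspace(0.0, 1.0, 41)
y = np.linspace(0.0, 1.0, 41)
z = np.linspace(0.0, 1.0, 41)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
field = np.sin(np.pi * X) * np.cos(np.pi * Y) * Z

interp = RegularGridInterpolator((x, y, z), field, method="linear")

# Fringe points of an embedded fine grid (here: random points in a small box).
rng = np.random.default_rng(3)
fringe = 0.4 + 0.2 * rng.random((10, 3))
values = interp(fringe)                          # trilinear interpolation
exact = np.sin(np.pi * fringe[:, 0]) * np.cos(np.pi * fringe[:, 1]) * fringe[:, 2]
print("max interpolation error at fringe points:", np.abs(values - exact).max())
```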

  16. Anisotropic mesh adaptation for marine ice-sheet modelling

    Science.gov (United States)

    Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier

    2017-04-01

    Improving forecasts of the ice-sheet contribution to sea-level rise requires, amongst other things, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component to obtain reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables allows the number of mesh nodes to be reduced by more than one order of magnitude, for the same numerical accuracy, when compared to uniform mesh
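
    Interpolation-error estimators of this kind typically reduce to a metric built from the absolute value of the Hessian of the chosen variable, rescaled to a target error. The numpy sketch below follows that general Hessian-metric recipe for a single 2D field; it is a rough illustration, not the exact Frey-Alauzet estimator nor the Elmer/Ice routines.

```python
# Rough sketch of an anisotropic, Hessian-based mesh metric for one 2D field:
# M(x) = R |Lambda| R^T rescaled so that the estimated interpolation error is
# roughly uniform.  Generic recipe only; not the estimator used in Elmer/Ice.
import numpy as np

def hessian_metric(field, dx, dy, target_err=1e-2, h_min=1e-3, h_max=1.0):
    fy, fx = np.gradient(field, dy, dx)
    fyy, fyx = np.gradient(fy, dy, dx)
    fxy, fxx = np.gradient(fx, dy, dx)
    H = np.stack([np.stack([fxx, 0.5 * (fxy + fyx)], -1),
                  np.stack([0.5 * (fxy + fyx), fyy], -1)], -2)   # (ny, nx, 2, 2)
    lam, vec = np.linalg.eigh(H)
    lam = np.clip(np.abs(lam) / target_err, 1.0 / h_max**2, 1.0 / h_min**2)
    # Metric eigenvalues 1/h_i^2 give the target mesh size along each eigenvector.
    M = vec @ (lam[..., None] * np.swapaxes(vec, -1, -2))
    return M, 1.0 / np.sqrt(lam)       # metric tensor field and target sizes

# Example field with a sharp transition mimicking a grounding-line-like feature.
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
thickness = np.tanh((x - 0.5) / 0.02)
M, sizes = hessian_metric(thickness, dx=0.01, dy=0.01)
print("smallest target mesh size:", sizes.min(), "largest:", sizes.max())
```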

  17. Adaptive mesh generation for image registration and segmentation

    DEFF Research Database (Denmark)

    Fogtmann, Mads; Larsen, Rasmus

    2013-01-01

    measure. The method was tested on a T1 weighted MR volume of an adult brain and showed a 66% reduction in the number of mesh vertices compared to a red-subdivision strategy. The deformation capability of the mesh was tested by registration to five additional T1-weighted MR volumes....

  18. The Disability Impact and Associated Cost per Disability in Women Who Underwent Surgical Revision of Transvaginal Mesh Kits for Prolapse Repair.

    Science.gov (United States)

    Javadian, Pouya; Shobeiri, S Abbas

    2017-09-13

    The aim of this study was to investigate the disability impact on patients, and the cost to the families of patients, who had complications of transvaginal prolapse mesh kits and underwent surgical revision. Patients who developed complications of transvaginal mesh kits for prolapse and who had undergone vaginal prolapse mesh surgical revision/removal between 2009 and 2014 at a single institution were identified by Current Procedural Terminology codes. The group was invited to complete a phone survey pertaining to the initial vaginal mesh used for prolapse surgery, utilizing the Sheehan Disability Scale (scale 0-10) and Years of Life Lived with Disability (YLDs) questionnaires. The data collected were used to estimate the disability and for cost analysis. We used our data to estimate the economic and quality-of-life impact of vaginal mesh complications on patients in the United States. RESULTS: Sixty-two patients (62/198 [31.2%]) consented to participate and completed the questionnaires by phone. Eighteen (29%) of 62 cases were extremely disabled, and 5 (8%) of 62 reported no disability after vaginal mesh surgery. The median overall disability score after the vaginal mesh procedure was 8 (which reflects marked disability on a scale of 0-10). The majority of patients missed a median of 12 months of school or work because of their mesh complications. Thirty-seven (59.6%) of 62 did not improve after mesh removal. Twenty-one (33.9%) of 62 stated that their family income dropped because of productivity loss related to mesh complications. The mean time between vaginal mesh surgery and the mesh removal procedure was 4.7 years. Sheehan Disability Scale scores are significantly correlated with YLDs outcomes. Patients' overall disability score showed a significant correlation with YLDs scores. Complications of transvaginal mesh for prolapse had a sustained disability impact that continued despite mesh removal. Likewise, the complications were associated with

  19. AUTOMATIC MESH GENERATION OF 3—D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate mesh for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.
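
    The first two steps of that scheme, scattering nodes with a minimum separation and then connecting them by Delaunay triangulation, can be sketched with SciPy as follows. The dart-throwing node placement below is a simplified stand-in for ball packing, not the paper's algorithm.

```python
# Sketch of the ball-packing-then-triangulate idea: scatter nodes with a
# minimum separation (dart throwing), then connect them with a Delaunay
# triangulation.  Simplified 3D illustration, not the paper's algorithm.
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def pack_nodes(n_target, radius, seed=0, max_tries=20000):
    rng = np.random.default_rng(seed)
    nodes = []
    for _ in range(max_tries):
        p = rng.random(3)                       # candidate node in the unit cube
        if not nodes or cKDTree(np.array(nodes)).query(p)[0] >= 2 * radius:
            nodes.append(p)
        if len(nodes) == n_target:
            break
    return np.array(nodes)

nodes = pack_nodes(n_target=300, radius=0.04)
tri = Delaunay(nodes)                            # tetrahedralize the packed nodes
print(len(nodes), "nodes,", len(tri.simplices), "tetrahedra")
```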

  20. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  1. Development of finite volume methods for fluid dynamics; Developpement de methodes de volumes finis pour la mecanique des fluides

    Energy Technology Data Exchange (ETDEWEB)

    Delcourte, S

    2007-09-15

    We aim to develop a finite volume method which applies to a greater class of meshes than other finite volume methods, which are restricted by orthogonality constraints. We build discrete differential operators over the three staggered tessellations needed for the construction of the method. These operators satisfy properties analogous to those of the continuous operators. At first, the method is applied to the Div-Curl problem, which can be viewed as a building block of the Stokes problem. Then, the Stokes problem is dealt with under various boundary conditions. It is well known that when the computational domain is polygonal and non-convex, the order of convergence of numerical methods deteriorates. Consequently, we have studied how an appropriate local refinement is able to restore the optimal order of convergence for the Laplacian problem. Finally, we have discretized the non-linear Navier-Stokes problem, using the rotational formulation of the convection term associated with the Bernoulli pressure. With an iterative algorithm, we are led to solve a saddle-point problem at each iteration. We give particular attention to this linear problem by testing some preconditioners issued from finite elements, which we adapt to our method. Each problem is illustrated by numerical results on arbitrary meshes, such as strongly non-conforming meshes. (author)

  2. Automatic mesh refinement and local multigrid methods for contact problems: application to the Pellet-Cladding mechanical Interaction

    International Nuclear Information System (INIS)

    Liu, Hao

    2016-01-01

    This Ph.D. work takes place within the framework of studies on Pellet-Cladding mechanical Interaction (PCI) which occurs in the fuel rods of pressurized water reactors. This manuscript focuses on automatic mesh refinement to simulate this phenomenon more accurately while maintaining acceptable computational time and memory space for industrial calculations. An automatic mesh refinement strategy based on the combination of the Local Defect Correction multigrid method (LDC) with the Zienkiewicz and Zhu a posteriori error estimator is proposed. The estimated error is used to detect the zones to be refined, where the local sub-grids of the LDC method are generated. Several stopping criteria are studied to end the refinement process when the solution is accurate enough or when the refinement no longer improves the global solution accuracy. Numerical results for elastic 2D test cases with pressure discontinuity show the efficiency of the proposed strategy. The automatic mesh refinement in the case of unilateral contact problems is then considered. The strategy previously introduced can be easily adapted to multi-body refinement by estimating the solution error on each body separately. Post-processing is often necessary to ensure the conformity of the refined areas with regard to the contact boundaries. A variety of numerical experiments with elastic contact (with or without friction, with or without an initial gap) confirms the efficiency and adaptability of the proposed strategy. (author) [fr

  3. H-Morph: An indirect approach to advancing front hex meshing

    Energy Technology Data Exchange (ETDEWEB)

    OWEN,STEVEN J.; SAIGAL,SUNIL

    2000-05-30

    H-Morph is a new automatic algorithm for the generation of a hexahedral-dominant finite element mesh for arbitrary volumes. The H-Morph method starts with an initial tetrahedral mesh and systematically transforms and combines tetrahedra into hexahedra. It uses an advancing front technique where the initial front consists of a set of prescribed quadrilateral surface facets. Fronts are individually processed by recovering each of the six quadrilateral faces of a hexahedron from the tetrahedral mesh. Recovery techniques similar to those used in boundary constrained Delaunay mesh generation are used. Tetrahedra internal to the six hexahedral faces are then removed and a hexahedron is formed. At any time during the H-Morph procedure a valid mixed hexahedral-tetrahedral mesh exists within the volume. The procedure continues until no tetrahedra remain within the volume, or until the remaining tetrahedra cannot be transformed or combined into valid hexahedral elements. Any remaining tetrahedra are typically towards the interior of the volume, generally a less critical region for analysis. The transition from tetrahedra to hexahedra in the final mesh is accomplished through pyramid-shaped elements. Advantages of the proposed method include its ability to conform to an existing quadrilateral surface mesh, its ability to mesh without the need to decompose or recognize special classes of geometry, and its characteristic well-aligned layers of elements parallel to the boundary. Example test cases are presented on a variety of models.

  4. Moving mesh generation with a sequential approach for solving PDEs

    DEFF Research Database (Denmark)

    In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are solved simultaneously and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution procedure...... a simple and robust moving mesh algorithm in one or multiple dimensions. In this study, we propose a sequential solution procedure including two separate parts: a prediction step to obtain an approximate solution at the next time level (integration of the physical PDEs) and a regridding step at the next time level (mesh...... generation and solution interpolation). Convection terms, which appear in the physical PDEs and the mesh equation, are discretized by a WENO (Weighted Essentially Non-Oscillatory) scheme in conservative form. This sequential approach is intended to keep the advantages of robustness and simplicity of the static
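
    The equidistribution principle behind such mesh equations, placing nodes so that the integral of the monitor function is the same over every cell, can be sketched in one dimension with a cumulative integral and inverse interpolation. The static sketch below illustrates only the regridding step, under an assumed arc-length monitor and an assumed solution profile.

```python
# 1D equidistribution sketch for the regridding step of a moving-mesh method:
# place nodes so each cell carries an equal share of the monitor-function
# integral.  The monitor choice and the solution are illustrative assumptions.
import numpy as np

def equidistribute(x_fine, u_fine, n_nodes, alpha=1.0):
    # Arc-length-type monitor: M = sqrt(1 + alpha * u_x^2)
    monitor = np.sqrt(1.0 + alpha * np.gradient(u_fine, x_fine) ** 2)
    cumulative = np.concatenate(([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1])
                                                  * np.diff(x_fine))))
    targets = np.linspace(0.0, cumulative[-1], n_nodes)
    return np.interp(targets, cumulative, x_fine)   # invert the cumulative map

# Steep front (e.g. a travelling-wave-like profile) on a fine background grid.
x_fine = np.linspace(0.0, 1.0, 2001)
u_fine = np.tanh((x_fine - 0.6) / 0.01)

x_mesh = equidistribute(x_fine, u_fine, n_nodes=41, alpha=200.0)
print("smallest cell:", np.diff(x_mesh).min(), " largest cell:", np.diff(x_mesh).max())
```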

  5. Mesh Generation via Local Bisection Refinement of Triangulated Grids

    Science.gov (United States)

    2015-06-01

    This report provides a comprehensive implementation of an unstructured mesh generation method ... their behaviour is critically linked to Maubach's method and the data structures N and T. The top-level mesh refinement algorithm is also presented

  6. Validation of a non-uniform meshing algorithm for the 3D-FDTD method by means of a two-wire crosstalk experimental set-up

    Directory of Open Access Journals (Sweden)

    Raúl Esteban Jiménez-Mejía

    2015-06-01

    Full Text Available This paper presents an algorithm used to automatically mesh a 3D computational domain in order to solve electromagnetic interaction scenarios by means of the Finite-Difference Time-Domain (FDTD) method. The proposed algorithm has been formulated in a general mathematical form, where convenient spacing functions can be defined for the discretization of the problem space, allowing the inclusion of small objects in the FDTD method and the calculation of detailed variations of the electromagnetic field in specified regions of the computational domain. The results obtained by using the FDTD method with the proposed algorithm have been contrasted not only with a typical uniform mesh algorithm, but also with experimental measurements for a two-wire crosstalk set-up, leading to excellent agreement between theoretical and experimental waveforms. A discussion of the advantages of the non-uniform mesh over the uniform one is also presented.
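
    A typical spacing function for such a non-uniform FDTD mesh is a geometric grading that clusters cells around a small feature (for example a thin wire) and grows them away from it, subject to a growth-ratio cap. The sketch below generates one axis of such a grid; it is a generic illustration with assumed parameters, not the authors' algorithm.

```python
# Sketch of a 1D non-uniform FDTD grid axis: geometric grading away from a
# feature located at x_feature, with a growth-ratio cap.  Generic illustration.
import numpy as np

def graded_axis(x_min, x_max, x_feature, d_min, ratio=1.2, d_max=None):
    """Cell sizes start at d_min next to x_feature and grow geometrically."""
    d_max = d_max or 50 * d_min
    def march(start, stop, direction):
        pts, d = [start], d_min
        while (stop - pts[-1]) * direction > 0:
            pts.append(pts[-1] + direction * d)
            d = min(d * ratio, d_max)
        pts[-1] = stop                      # snap the last point to the boundary
        return pts
    left = march(x_feature, x_min, -1.0)[1:][::-1]
    right = march(x_feature, x_max, +1.0)[1:]
    return np.array(left + [x_feature] + right)

x = graded_axis(0.0, 1.0, x_feature=0.3, d_min=5e-4)
dx = np.diff(x)
print(len(x), "nodes; dx from", dx.min(), "to", dx.max(),
      "; max neighbour ratio:", (dx[1:] / dx[:-1]).max())
```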

  7. Two-Level Iteration Penalty Methods for the Navier-Stokes Equations with Friction Boundary Conditions

    Directory of Open Access Journals (Sweden)

    Yuan Li

    2013-01-01

    Full Text Available This paper presents two-level iteration penalty finite element methods to approximate the solution of the Navier-Stokes equations with friction boundary conditions. The basic idea is to solve the Navier-Stokes type variational inequality problem on a coarse mesh with mesh size H, combined with solving a Stokes, Oseen, or linearized Navier-Stokes type variational inequality problem for the Stokes, Oseen, or Newton iteration on a fine mesh with mesh size h. The error estimate obtained in this paper shows that if H, h, and ε are chosen appropriately, then these two-level iteration penalty methods are of the same convergence orders as the usual one-level iteration penalty method.

  8. Turbulence Spreading into Linearly Stable Zone and Transport Scaling

    International Nuclear Information System (INIS)

    Hahm, T.S.; Diamond, P.H.; Lin, Z.; Itoh, K.; Itoh, S.-I.

    2003-01-01

    We study the simplest problem of turbulence spreading corresponding to the spatio-temporal propagation of a patch of turbulence from a region where it is locally excited to a region of weaker excitation, or even local damping. A single model equation for the local turbulence intensity I(x, t) includes the effects of local linear growth and damping, spatially local nonlinear coupling to dissipation and spatial scattering of turbulence energy induced by nonlinear coupling. In the absence of dissipation, the front propagation into the linearly stable zone occurs with the property of rapid progression at small t, followed by slower subdiffusive progression at late times. The turbulence radial spreading into the linearly stable zone reduces the turbulent intensity in the linearly unstable zone, and introduces an additional dependence on rho* = rho_i/a into the turbulent intensity and the transport scaling. These are in broad, semi-quantitative agreement with a number of global gyrokinetic simulation results with and without zonal flows. The front propagation stops when the radial flux of fluctuation energy from the linearly unstable region is balanced by local dissipation in the linearly stable region
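
    A one-field model with the ingredients listed above can be written, for example, as dI/dt = gamma(x)*I - sigma*I^2 + d/dx(chi0*I*dI/dx). The explicit sketch below integrates such an equation on a 1D grid to show a turbulence front invading the linearly stable zone; the particular form and all coefficients are illustrative assumptions, not the paper's model.

```python
# Minimal 1D sketch of a turbulence-intensity spreading model of the form
#   dI/dt = gamma(x) * I - sigma * I**2 + d/dx( chi0 * I * dI/dx ),
# integrated explicitly to show a front invading the linearly stable zone.
# Coefficients and profiles are illustrative, not those of the paper.
import numpy as np

nx, L = 400, 40.0
dx, dt = L / nx, 2e-4
x = np.linspace(0.0, L, nx)
gamma = np.where(x < 20.0, 1.0, -0.5)        # unstable zone / damped (stable) zone
sigma, chi0 = 1.0, 1.0

I = 1e-3 * np.exp(-((x - 15.0) / 2.0) ** 2)  # small initial patch of turbulence
for _ in range(100_000):                     # integrate to t = 20
    flux = chi0 * 0.5 * (I[1:] + I[:-1]) * np.diff(I) / dx   # I * dI/dx at cell faces
    dIdt = gamma * I - sigma * I**2
    dIdt[1:-1] += np.diff(flux) / dx
    I = np.maximum(I + dt * dIdt, 0.0)

print("front position (last point with I > 0.01):", x[I > 0.01].max())
```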

  9. Comparison of the deflated preconditioned conjugate gradient method and algebraic multigrid for composite materials

    NARCIS (Netherlands)

    Jönsthövel, T.B.; Van Gijzen, M.B.; MacLachlan, S.; Vuik, C.; Scarpas, A.

    2012-01-01

    Many applications in computational science and engineering concern composite materials, which are characterized by large discontinuities in the material properties. Such applications require fine-scale finite-element meshes, which lead to large linear systems that are challenging to solve with

  10. Runge-Kutta Methods for Linear Ordinary Differential Equations

    Science.gov (United States)

    Zingg, David W.; Chisholm, Todd T.

    1997-01-01

    Three new Runge-Kutta methods are presented for numerical integration of systems of linear inhomogeneous ordinary differential equations (ODES) with constant coefficients. Such ODEs arise in the numerical solution of the partial differential equations governing linear wave phenomena. The restriction to linear ODEs with constant coefficients reduces the number of conditions which the coefficients of the Runge-Kutta method must satisfy. This freedom is used to develop methods which are more efficient than conventional Runge-Kutta methods. A fourth-order method is presented which uses only two memory locations per dependent variable, while the classical fourth-order Runge-Kutta method uses three. This method is an excellent choice for simulations of linear wave phenomena if memory is a primary concern. In addition, fifth- and sixth-order methods are presented which require five and six stages, respectively, one fewer than their conventional counterparts, and are therefore more efficient. These methods are an excellent option for use with high-order spatial discretizations.
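
    For linear, constant-coefficient systems u' = A u + f(t), every Runge-Kutta stage reduces to applications of A, which is the structure these specialized schemes exploit. The snippet below applies the classical four-stage RK4 to a small semi-discrete wave problem as a baseline illustration; it is not one of the new reduced-storage or reduced-stage methods of the paper.

```python
# Classical RK4 applied to a linear constant-coefficient system u' = A u + f(t).
# Baseline illustration of RK time marching for linear ODEs; not the paper's
# new low-storage or reduced-stage schemes.
import numpy as np

def rk4_linear(A, f, u0, t0, t1, n_steps):
    u, t = u0.copy(), t0
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        k1 = A @ u + f(t)
        k2 = A @ (u + 0.5 * h * k1) + f(t + 0.5 * h)
        k3 = A @ (u + 0.5 * h * k2) + f(t + 0.5 * h)
        k4 = A @ (u + h * k3) + f(t + h)
        u = u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

# Semi-discrete 1D advection u_t = -c u_x (periodic, central differences).
n, c = 64, 1.0
dx = 1.0 / n
I = np.eye(n)
A = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) * (c / (2 * dx))  # A = -c * D
x = np.arange(n) * dx
u0 = np.sin(2 * np.pi * x)
u_end = rk4_linear(A, lambda t: np.zeros(n), u0, 0.0, 1.0, n_steps=400)
print("error after one period:", np.abs(u_end - u0).max())
```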

  11. Laparoscopic appendicectomy for suspected mesh-induced appendicitis after laparoscopic transabdominal preperitoneal polypropylene mesh inguinal herniorraphy

    Directory of Open Access Journals (Sweden)

    Jennings Jason

    2010-01-01

    Laparoscopic inguinal herniorraphy via a transabdominal preperitoneal (TAPP) approach using Polypropylene Mesh (Mesh) and staples is an accepted technique. Mesh induces a localised inflammatory response that may extend to, and involve, adjacent abdominal and pelvic viscera such as the appendix. We present an interesting case of suspected Mesh-induced appendicitis treated successfully with laparoscopic appendicectomy, without Mesh removal, in an elderly gentleman who presented with symptoms and signs of acute appendicitis 18 months after laparoscopic inguinal hernia repair. Possible mechanisms for Mesh-induced appendicitis are briefly discussed.

  12. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    Science.gov (United States)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus, accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time-domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  13. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    Science.gov (United States)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable for unsteady simulations and refinement and coarsening of the grid does not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth- and sixth-order are explored. Provided the criteria for refinement is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depend on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable

  14. Recently developed methods in neutral-particle transport calculations: overview

    International Nuclear Information System (INIS)

    Alcouffe, R.E.

    1982-01-01

    It has become increasingly apparent that successful, general methods for the solution of the neutral particle transport equation involve a close connection between the spatial-discretization method used and the source-acceleration method chosen. The first-order form of the transport equation is considered, with angular discretization by discrete ordinates and spatial discretization based upon a mesh arrangement. Characteristic methods are considered briefly in the context of future, desirable developments. The ideal spatial-discretization method is described as having the following attributes: (1) positivity, that is, positive boundary data yields a positive angular flux within the mesh, including its boundaries; (2) it satisfies the particle balance equation over the mesh, that is, the method is conservative; (3) it possesses the diffusion limit independent of spatial mesh size, that is, for a linearly isotropic flux assumption, the transport differencing reduces to a suitable diffusion equation differencing; (4) the method is unconditionally acceleratable, i.e., for each mesh size, the method is unconditionally convergent with source-iteration acceleration. It is doubtful that a single method possesses all these attributes for a general problem. Some commonly used methods are outlined and their computational performance and usefulness are compared; recommendations for future development are detailed, which include practical computational considerations.
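
    As a concrete illustration of the spatial-discretization/source-iteration pairing discussed above, the following is a minimal sketch of a one-group, isotropically scattering slab solved by discrete ordinates with diamond differencing and unaccelerated source iteration; it is generic illustration code, not any of the specific schemes surveyed in the paper.

```python
import numpy as np

# One-group, isotropically scattering slab with a uniform source and vacuum boundaries.
nx, width = 200, 10.0
dx = width / nx
sig_t, sig_s, q = 1.0, 0.5, 1.0               # total/scattering cross sections and source

mu, wgt = np.polynomial.legendre.leggauss(8)  # S8 angular quadrature on [-1, 1]

phi = np.zeros(nx)                            # scalar flux
for it in range(1000):                        # unaccelerated source iteration
    src = 0.5 * (sig_s * phi + q)             # isotropic angular emission density
    phi_new = np.zeros(nx)
    for m, mu_m in enumerate(mu):
        psi_in = 0.0                          # vacuum boundary on the incoming side
        cells = range(nx) if mu_m > 0 else range(nx - 1, -1, -1)
        a = abs(mu_m) / dx
        for i in cells:
            # Diamond difference: the cell-average flux is the mean of the face fluxes,
            # so the cell balance gives the outgoing face flux explicitly.
            psi_out = (src[i] + (a - 0.5 * sig_t) * psi_in) / (a + 0.5 * sig_t)
            phi_new[i] += wgt[m] * 0.5 * (psi_in + psi_out)
            psi_in = psi_out
    diff = np.max(np.abs(phi_new - phi))
    phi = phi_new
    if diff < 1e-8:
        break

print(f"converged in {it + 1} source iterations, midplane scalar flux = {phi[nx // 2]:.4f}")
```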

  15. ONETEP: linear-scaling density-functional theory with plane-waves

    International Nuclear Information System (INIS)

    Haynes, P D; Mostofi, A A; Skylaris, C-K; Payne, M C

    2006-01-01

    This paper provides a general overview of the methodology implemented in onetep (Order-N Electronic Total Energy Package), a parallel density-functional theory code for large-scale first-principles quantum-mechanical calculations. The distinctive features of onetep are linear-scaling in both computational effort and resources, obtained by making well-controlled approximations which enable simulations to be performed with plane-wave accuracy. Titanium dioxide clusters of increasing size designed to mimic surfaces are studied to demonstrate the accuracy and scaling of onetep.

  16. Selectivity of commercial, larger mesh and square mesh trawl codends for deep water rose shrimp Parapenaeus longirostris (Lucas, 1846) in the Aegean Sea

    Directory of Open Access Journals (Sweden)

    Hakan Kaykaç

    2009-09-01

    We investigated the differences between the size selectivity of a commercial codend (40 mm diamond mesh – 40D), a larger mesh codend (48 mm diamond mesh – 48D), and a square mesh codend (40 mm square mesh – 40S) for Parapenaeus longirostris in international waters of the Aegean Sea. Selectivity data were collected using the covered codend method and analysed taking between-haul variation into account. The results indicate significant increases in L50 values in relation to an increase in mesh size and when the square mesh is used in the commercial trawl codend. The results demonstrate that the commercially used codend (40D) is not selective enough for P. longirostris in terms of length at first maturity. Changing from a 40D to a 48D codend significantly improves selection, with an increase of about 15% in the L50 values (carapace length 14.5 mm for 40D and 16.6 mm for 48D). Similarly, the 40 mm square mesh, which has recently been legislated for EU Mediterranean waters, showed a 12.4% higher mean L50 value (16.3 mm) than the 40 mm diamond mesh for this species. However, despite these improvements, the 48D and 40S codends still need further improvement to obtain higher selectivity closer to the length at first maturity (20 mm carapace length).
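
    The size-selectivity quantities quoted above (L50 and the selection range) are typically obtained by fitting a logistic retention curve to covered-codend length data. A minimal sketch of such a fit is given below; the length-frequency counts are made up purely for illustration, and the fit ignores the binomial weighting and between-haul variation handled in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical carapace-length classes (mm) with counts retained in the codend and
# counts escaping into the cover -- illustrative numbers only, not data from the study.
length = np.arange(10, 24, dtype=float)
codend = np.array([1, 2, 5, 10, 18, 30, 45, 55, 60, 62, 63, 64, 64, 65], dtype=float)
cover = np.array([40, 38, 35, 30, 25, 20, 14, 9, 6, 4, 2, 1, 1, 0], dtype=float)
retained = codend / (codend + cover)

def retention(l, l50, sr):
    # Standard logistic selectivity curve; SR is the selection range L75 - L25.
    return 1.0 / (1.0 + np.exp(-2.0 * np.log(3.0) * (l - l50) / sr))

(l50, sr), _ = curve_fit(retention, length, retained, p0=[15.0, 3.0])
print(f"fitted L50 = {l50:.1f} mm carapace length, selection range = {sr:.1f} mm")
```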

  17. Development of finite volume methods for fluid dynamics

    International Nuclear Information System (INIS)

    Delcourte, S.

    2007-09-01

    We aim to develop a finite volume method which applies to a greater class of meshes than other finite volume methods, which are restricted by orthogonality constraints. We build discrete differential operators over the three staggered tessellations needed for the construction of the method. These operators verify properties analogous to those of the continuous operators. At first, the method is applied to the Div-Curl problem, which can be viewed as a building block of the Stokes problem. Then, the Stokes problem is treated with various boundary conditions. It is well known that when the computational domain is polygonal and non-convex, the order of convergence of numerical methods deteriorates. Consequently, we have studied how an appropriate local refinement is able to restore the optimal order of convergence for the Laplacian problem. Finally, we have discretized the non-linear Navier-Stokes problem, using the rotational formulation of the convection term, associated with the Bernoulli pressure. With an iterative algorithm, we are led to solve a saddle-point problem at each iteration. We pay particular attention to this linear problem by testing preconditioners derived from finite element methods, which we adapt to our method. Each problem is illustrated by numerical results on arbitrary meshes, such as strongly non-conforming meshes. (author)

  18. The discrete ordinate method in association with the finite-volume method in non-structured mesh; Methode des ordonnees discretes associee a la methode des volumes finis en maillage non structure

    Energy Technology Data Exchange (ETDEWEB)

    Le Dez, V; Lallemand, M [Ecole Nationale Superieure de Mecanique et d'Aerotechnique (ENSMA), 86 - Poitiers (France); Sakami, M; Charette, A [Quebec Univ., Chicoutimi, PQ (Canada). Dept. des Sciences Appliquees

    1997-12-31

    An efficient method is described for determining the radiant heat transfer field in a grey semi-transparent medium enclosed in a 2-D polygonal cavity whose surface boundaries reflect radiation in a purely diffusive manner, both at equilibrium and in a coupled radiation-conduction situation. The technique uses simultaneously the finite-volume method on a non-structured triangular mesh, the discrete ordinate method and the ray shooting method. The main mathematical developments and comparative results with the discrete ordinate method in orthogonal curvilinear coordinates are included. (J.S.) 10 refs.

  20. Persistent pelvic pain following transvaginal mesh surgery: a cause for mesh removal.

    Science.gov (United States)

    Marcus-Braun, Naama; Bourret, Antoine; von Theobald, Peter

    2012-06-01

    Persistent pelvic pain after vaginal mesh surgery is an uncommon but serious complication that greatly affects women's quality of life. Our aim was to evaluate various procedures for mesh removal performed at a tertiary referral center in cases of persistent pelvic pain, and to evaluate the ensuing complications and outcomes. A retrospective study was conducted at the University Hospital of Caen, France, including all patients treated for removal or section of vaginal mesh due to pelvic pain as a primary cause, between January 2004 and September 2009. Ten patients met the inclusion criteria. Patients were diagnosed between 10 months and 3 years after their primary operation. Eight cases followed suburethral sling procedures and two followed mesh surgery for pelvic organ prolapse. Patients presented with obturator neuralgia (6), pudendal neuralgia (2), dyspareunia (1), and non-specific pain (1). The surgical treatment to release the mesh included: three cases of extra-peritoneal laparoscopy, four cases of complete vaginal mesh removal, one case of partial mesh removal and two cases of section of the suburethral sling. In all patients with obturator neuralgia, symptoms were resolved or improved, whereas in both cases of pudendal neuralgia the symptoms continued. There were no intra-operative complications. Post-operative Retzius hematoma was observed in one patient after laparoscopy. Mesh removal in a tertiary center is a safe procedure, necessary in some cases of persistent pelvic pain. Obturator neuralgia seems to be easier to treat than pudendal neuralgia. Early diagnosis is the key to success in prevention of chronic disease. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Variational linear algebraic equations method

    International Nuclear Information System (INIS)

    Moiseiwitsch, B.L.

    1982-01-01

    A modification of the linear algebraic equations method is described which ensures a variational bound on the phase shifts for potentials having a definite sign at all points. The method is illustrated by the elastic scattering of s-wave electrons by the static field of atomic hydrogen. (author)

  2. 3D visualization and finite element mesh formation from wood anatomy samples, Part II – Algorithm approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2009-01-01

    This paper presents the new application WOOD3D in the form of assembled program code. The work extends the previous article “Part I – Theoretical approach” with a detailed description of the implemented C++ classes of the utilized projects Visualization Toolkit (VTK), Insight Toolkit (ITK), and MIMX. The code is written in CMake style and is available as a multiplatform application; GNU/Linux (32/64-bit) and MS Windows (32/64-bit) builds are currently released. The article discusses various filter classes for image filtering; mainly the Otsu and binary threshold filters are assessed for thresholding of wood anatomy samples. Registration of the image series is emphasized, and compensation for differences in colour spaces is included. The resulting image-analysis workflow is a new methodological approach to image processing through composition, visualization, filtering, registration, and finite element mesh formation. The application generates a script in the ANSYS Parametric Design Language (APDL) that is fully compatible with the ANSYS finite element solver and designer environment. The script includes the complete definition of the unstructured finite element mesh formed by individual elements and nodes. Owing to its simple notation, the same script can be used to generate geometrical entities at the element positions. The volumetric entities formed in this way are prepared for further geometry approximation (e.g. by Boolean or more advanced methods). Hexahedral and tetrahedral mesh elements are formed on user request with specified mesh options. Hexahedral meshes are formed both with uniform element size and with anisotropic character; a modified octree method for anisotropic hexahedral meshing is implemented in the application. Multicore CPUs are supported for fast image analysis. The image series and the resulting 3D image are visualized in the well-known, public VTK format and viewed in the GPL application ParaView. Future work based on mesh

  3. Ordering schemes for parallel processing of certain mesh problems

    International Nuclear Information System (INIS)

    O'Leary, D.

    1984-01-01

    In this work, some ordering schemes for mesh points are presented which enable algorithms such as the Gauss-Seidel or SOR iteration to be performed efficiently for the nine-point operator finite difference method on computers consisting of a two-dimensional grid of processors. Convergence results are presented for the discretization of u_xx + u_yy on a uniform mesh over a square, showing that the spectral radius of the iteration for these orderings is no worse than that for the standard row by row ordering of mesh points. Further applications of these mesh point orderings to network problems, more general finite difference operators, and picture processing problems are noted.
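
    A minimal sketch of the idea behind such orderings: for a nine-point stencil, colouring the grid points in a 2×2 pattern decouples all points of one colour from each other, so each colour class can be updated concurrently within a Gauss-Seidel/SOR sweep. The Poisson model problem and the particular four-colour ordering below are illustrative assumptions, not the orderings analysed in the paper.

```python
import numpy as np

n = 64                                   # interior grid is n x n on the unit square
h = 1.0 / (n + 1)
u = np.zeros((n + 2, n + 2))             # includes the boundary layer, u = 0 on the boundary
f = np.ones((n + 2, n + 2))              # right-hand side of -Laplace(u) = f

I, J = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
colours = [(I % 2 == a) & (J % 2 == b) for a in (0, 1) for b in (0, 1)]

omega = 1.8                              # SOR relaxation factor (illustrative choice)
for sweep in range(200):
    for mask in colours:                 # same-colour points never appear in each other's stencil
        i, j = I[mask], J[mask]
        # Compact 9-point operator: 20*u_ij = 4*(edge neighbours) + (corner neighbours) + 6*h^2*f
        gs = (4.0 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
              + u[i - 1, j - 1] + u[i - 1, j + 1] + u[i + 1, j - 1] + u[i + 1, j + 1]
              + 6.0 * h * h * f[i, j]) / 20.0
        u[i, j] = (1.0 - omega) * u[i, j] + omega * gs

print("value at the grid centre:", u[n // 2, n // 2])
```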

  4. Accurate halo-galaxy mocks from automatic bias estimation and particle mesh gravity solvers

    Science.gov (United States)

    Vakili, Mohammadjavad; Kitaura, Francisco-Shu; Feng, Yu; Yepes, Gustavo; Zhao, Cheng; Chuang, Chia-Hsun; Hahn, ChangHoon

    2017-12-01

    Reliable extraction of cosmological information from clustering measurements of galaxy surveys requires estimation of the error covariance matrices of observables. The accuracy of covariance matrices is limited by our ability to generate a sufficiently large number of independent mock catalogues that can describe the physics of galaxy clustering across a wide range of scales. Furthermore, galaxy mock catalogues are required to study systematics in galaxy surveys and to test analysis tools. In this investigation, we present a fast and accurate approach for generation of mock catalogues for the upcoming galaxy surveys. Our method relies on low-resolution approximate gravity solvers to simulate the large-scale dark matter field, which we then populate with haloes according to a flexible non-linear and stochastic bias model. In particular, we extend the PATCHY code with an efficient particle mesh algorithm to simulate the dark matter field (the FASTPM code), and with a robust MCMC method relying on the EMCEE code for constraining the parameters of the bias model. Using the haloes in the BigMultiDark high-resolution N-body simulation as a reference catalogue, we demonstrate that our technique can model the bivariate probability distribution function (counts-in-cells), power spectrum and bispectrum of haloes in the reference catalogue. Specifically, we show that the new ingredients permit us to reach percentage accuracy in the power spectrum up to k ∼ 0.4 h Mpc⁻¹ (within 5 per cent up to k ∼ 0.6 h Mpc⁻¹) with accurate bispectra, improving previous results based on Lagrangian perturbation theory.
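
    A toy sketch of the "populate the dark-matter field with haloes through a non-linear, stochastic bias model" step may clarify the idea: cell-wise halo counts are drawn from a Poisson law whose mean is a power of the local matter density. The power-law/Poisson form and all parameter values below are simplifying assumptions for illustration, not the bias model implemented in PATCHY.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a low-resolution dark-matter density contrast field delta(x) on a mesh
n = 64
delta = rng.normal(0.0, 0.8, size=(n, n, n))
rho = np.clip(1.0 + delta, 1e-3, None)        # non-negative density in units of the mean

# Deterministic part of the bias: a power law in the local density (illustrative)
alpha, nbar = 1.5, 0.05                        # bias exponent and target mean haloes per cell
lam = rho ** alpha
lam *= nbar / lam.mean()                       # normalise to the desired mean number density

# Stochastic part: Poisson sampling of the halo counts cell by cell
halo_counts = rng.poisson(lam)
print("haloes placed:", halo_counts.sum(), " mean per cell:", halo_counts.mean())
```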

  5. Convergence analysis of the rebalance methods in multiplying finite slab having periodic boundary conditions

    International Nuclear Information System (INIS)

    Hong, Ser Gi; Lee, Young Ouk; Song, Jae Seung

    2009-01-01

    This paper analyzes the convergence of the rebalance iteration methods for the discrete ordinates transport equation in a multiplying finite slab problem. The finite slab is assumed to be homogeneous and to have periodic boundary conditions. A general formulation is used to include three well-known rebalance methods of the linearized form in a unified way. The rebalance iteration methods considered in this paper are the CMR (Coarse-Mesh Rebalance), CMFD (Coarse-Mesh Finite Difference), and p-CMFD (Partial Current-Based Coarse-Mesh Finite Difference) methods, which have been popularly used in reactor physics. The convergence analysis is performed with the well-known Fourier analysis through a linearization. The analyses are applied to one-group problems. The theoretical analysis shows that there are one fundamental mode and N-1 eigenmodes which determine the convergence when the finite slab is divided into N uniform meshes. The numerical tests show that the Fourier convergence analysis provides a reasonable estimate of the numerical spectral radii for the model problems and that the spectral radius for the finite slab approaches the one for the infinite slab as the thickness of the slab increases. (author)

  6. Early experience with mesh excision for adverse outcomes after transvaginal mesh placement using prolapse kits.

    Science.gov (United States)

    Ridgeway, Beri; Walters, Mark D; Paraiso, Marie Fidela R; Barber, Matthew D; McAchran, Sarah E; Goldman, Howard B; Jelovsek, J Eric

    2008-12-01

    The purpose of this study was to determine the complications, treatments, and outcomes in patients choosing to undergo removal of mesh previously placed with a mesh procedural kit. This was a retrospective review of all patients who underwent surgical removal of transvaginal mesh for mesh-related complications during a 3-year period at Cleveland Clinic. At last follow-up, patients reported degree of pain, level of improvement, sexual activity, and continued symptoms. Nineteen patients underwent removal of mesh during the study period. Indications for removal included chronic pain (6/19), dyspareunia (6/19), recurrent pelvic organ prolapse (8/19), mesh erosion (12/19), and vesicovaginal fistula (3/19), with most patients (16/19) citing more than 1 reason. There were few complications related to the mesh removal. Most patients reported significant relief of symptoms. Mesh removal can be technically difficult but appears to be safe with few complications and high relief of symptoms, although some symptoms can persist.

  7. Mesh erosion after abdominal sacrocolpopexy.

    Science.gov (United States)

    Kohli, N; Walsh, P M; Roat, T W; Karram, M M

    1998-12-01

    To report our experience with erosion of permanent suture or mesh material after abdominal sacrocolpopexy. A retrospective chart review was performed to identify patients who underwent sacrocolpopexy by the same surgeon over 8 years. Demographic data, operative notes, hospital records, and office charts were reviewed after sacrocolpopexy. Patients with erosion of either suture or mesh were treated initially with conservative therapy followed by surgical intervention as required. Fifty-seven patients underwent sacrocolpopexy using synthetic mesh during the study period. The mean (range) postoperative follow-up was 19.9 (1.3-50) months. Seven patients (12%) had erosions after abdominal sacrocolpopexy with two suture erosions and five mesh erosions. Patients with suture erosion were asymptomatic compared with patients with mesh erosion, who presented with vaginal bleeding or discharge. The mean (+/-standard deviation) time to erosion was 14.0+/-7.7 (range 4-24) months. Both patients with suture erosion were treated conservatively with estrogen cream. All five patients with mesh erosion required transvaginal removal of the mesh. Mesh erosion can follow abdominal sacrocolpopexy over a long time, and usually presents as vaginal bleeding or discharge. Although patients with suture erosion can be managed successfully with conservative treatment, patients with mesh erosion require surgical intervention. Transvaginal removal of the mesh with vaginal advancement appears to be an effective treatment in patients failing conservative management.

  8. Surgical management of lower urinary mesh perforation after mid-urethral polypropylene mesh sling: mesh excision, urinary tract reconstruction and concomitant pubovaginal sling with autologous rectus fascia.

    Science.gov (United States)

    Shah, Ketul; Nikolavsky, Dmitriy; Gilsdorf, Daniel; Flynn, Brian J

    2013-12-01

    We present our management of lower urinary tract (LUT) mesh perforation after mid-urethral polypropylene mesh sling using a novel combination of surgical techniques including total or near total mesh excision, urinary tract reconstruction, and concomitant pubovaginal sling with autologous rectus fascia in a single operation. We retrospectively reviewed the medical records of 189 patients undergoing transvaginal removal of polypropylene mesh from the lower urinary tract or vagina. The focus of this study is 21 patients with LUT mesh perforation after mid-urethral polypropylene mesh sling. We excluded patients with LUT mesh perforation from prolapse kits (n = 4) or sutures (n = 11), or mesh that was removed because of isolated vaginal wall exposure without concomitant LUT perforation (n = 164). Twenty-one patients underwent surgical removal of mesh through a transvaginal approach or combined transvaginal/abdominal approaches. The location of the perforation was the urethra in 14 and the bladder in 7. The mean follow-up was 22 months. There were no major intraoperative complications. All patients had complete resolution of the mesh complication and the primary symptom. Of the patients with urethral perforation, continence was achieved in 10 out of 14 (71.5 %). Of the patients with bladder perforation, continence was achieved in all 7. Total or near total removal of lower urinary tract (LUT) mesh perforation after mid-urethral polypropylene mesh sling can completely resolve LUT mesh perforation in a single operation. A concomitant pubovaginal sling can be safely performed in efforts to treat existing SUI or avoid future surgery for SUI.

  9. Transvaginal mesh procedures for pelvic organ prolapse.

    Science.gov (United States)

    Walter, Jens-Erik

    2011-02-01

    To provide an update on transvaginal mesh procedures, newly available minimally invasive surgical techniques for pelvic floor repair. The discussion is limited to minimally invasive transvaginal mesh procedures. PubMed and Medline were searched for articles published in English, using the key words "pelvic organ prolapse," transvaginal mesh," and "minimally invasive surgery." Results were restricted to systematic reviews, randomized control trials/controlled clinical trials, and observational studies. Searches were updated on a regular basis, and articles were incorporated in the guideline to May 2010. Grey (unpublished) literature was identified through searching the websites of health technology assessment and health technology assessment-related agencies, clinical practice guideline collections, clinical trial registries, and national and international medical specialty societies. The quality of evidence was rated using the criteria described in the Report of the Canadian Task Force on the Preventive Health Care. Recommendations for practice were ranked according to the method described in that report (Table 1). Counselling for the surgical treatment of pelvic organ prolapse should consider all benefits, harms, and costs of the surgical procedure, with particular emphasis on the use of mesh. 1. Patients should be counselled that transvaginal mesh procedures are considered novel techniques for pelvic floor repair that demonstrate high rates of anatomical cure in uncontrolled short-term case series. (II-2B) 2. Patients should be informed of the range of success rates until stronger evidence of superiority is published. (II-2B) 3. Training specific to transvaginal mesh procedures should be undertaken before procedures are performed. (III-C) 4. Patients should undergo thorough preoperative counselling regarding (a) the potential serious adverse sequelae of transvaginal mesh repairs, including mesh exposure, pain, and dyspareunia; and (b) the limited data available

  10. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  11. Support Operators Method for the Diffusion Equation in Multiple Materials

    Energy Technology Data Exchange (ETDEWEB)

    Winters, Andrew R. [Los Alamos National Laboratory]; Shashkov, Mikhail J. [Los Alamos National Laboratory]

    2012-08-14

    A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
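
    A one-dimensional sketch of the construction described above may help: the discrete divergence maps face fluxes to cells, the discrete flux operator is obtained from its adjoint in mesh- and conductivity-weighted inner products (the discrete integration-by-parts identity), and eliminating the fluxes yields a symmetric positive-definite system. The mesh, the conductivity jump, the Dirichlet boundary treatment, and the sign conventions below are illustrative assumptions, not the paper's mixed-cell algorithm.

```python
import numpy as np

# Non-uniform 1D mesh with a discontinuous conductivity (two materials), T = 0 at both ends.
nc = 80                                        # number of cells
faces = np.sort(np.random.default_rng(1).uniform(0.0, 1.0, nc - 1))
faces = np.concatenate(([0.0], faces, [1.0]))  # nc + 1 face positions
h = np.diff(faces)                             # cell widths
xc = 0.5 * (faces[:-1] + faces[1:])            # cell centres
k = np.where(xc < 0.5, 1.0, 10.0)              # discontinuous heat conductivity
q = np.ones(nc)                                # unit volumetric source

# Discrete divergence D: faces -> cells, (D F)_i = (F_{i+1} - F_i) / h_i
D = np.zeros((nc, nc + 1))
rows = np.arange(nc)
D[rows, rows] = -1.0 / h
D[rows, rows + 1] = 1.0 / h

# Mesh/conductivity-weighted inner products: cell volumes and half-cell flux weights
Mc = np.diag(h)
wf = np.zeros(nc + 1)
wf[:-1] += 0.5 * h / k                         # each cell contributes h_i/(2 k_i) to its left face
wf[1:] += 0.5 * h / k                          # ... and to its right face
Mf = np.diag(wf)

# Flux operator from the adjoint of D in these inner products (discrete integration by parts);
# G @ T approximates the Fourier flux -k dT/dx, with T = 0 built in at the boundary faces.
G = np.linalg.solve(Mf, D.T @ Mc)

# Eliminating the face fluxes gives a symmetric positive-definite system for cell temperatures.
A = Mc @ D @ G                                 # equals Mc D Mf^{-1} D^T Mc, which is SPD
T = np.linalg.solve(A, Mc @ q)
print("maximum temperature:", T.max())
```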

  12. Two linearization methods for atmospheric remote sensing

    International Nuclear Information System (INIS)

    Doicu, A.; Trautmann, T.

    2009-01-01

    We present two linearization methods for a pseudo-spherical atmosphere and general viewing geometries. The first approach is based on an analytical linearization of the discrete ordinate method with matrix exponential and incorporates two models for matrix exponential calculation: the matrix eigenvalue method and the Pade approximation. The second method referred to as the forward-adjoint approach is based on the adjoint radiative transfer for a pseudo-spherical atmosphere. We provide a compact description of the proposed methods as well as a numerical analysis of their accuracy and efficiency.
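
    The two matrix-exponential routes mentioned above can be sketched directly: diagonalise the matrix and exponentiate its eigenvalues, or call a Padé-based routine. The small random matrix below is only a stand-in for the discretized layer operator of the radiative transfer model.

```python
import numpy as np
from scipy.linalg import expm   # Padé approximation with scaling and squaring

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))     # stand-in for a (diagonalisable) layer matrix

# Route 1: matrix eigenvalue method, exp(A) = V diag(exp(lambda)) V^{-1}
lam, V = np.linalg.eig(A)
expA_eig = (V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)).real

# Route 2: Padé approximation as implemented in SciPy
expA_pade = expm(A)

print("max |difference| between the two routes:", np.abs(expA_eig - expA_pade).max())
```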

  13. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    Energy Technology Data Exchange (ETDEWEB)

    Pointer, William David [ORNL]

    2017-08-01

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
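
    For reference, the Grid Convergence Index mentioned above reduces to a short calculation once a quantity of interest is available on three systematically refined meshes; the solution values below are placeholders, not results from the rod-bundle simulations.

```python
import math

# Quantity of interest from coarse, medium and fine meshes (placeholder values)
f3, f2, f1 = 2.38, 2.30, 2.27         # coarse -> fine
r = 2.0                               # constant grid refinement ratio
Fs = 1.25                             # safety factor recommended for three-grid studies

p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)           # observed order of accuracy
gci_fine = Fs * abs((f2 - f1) / f1) / (r**p - 1.0)                # relative uncertainty, fine mesh
gci_med = Fs * abs((f3 - f2) / f2) / (r**p - 1.0)

print(f"observed order p = {p:.2f}")
print(f"GCI (fine mesh) = {100 * gci_fine:.2f} %")
print(f"asymptotic-range check: {gci_med / (gci_fine * r**p):.3f} (should be near 1)")
```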

  14. Solving implicit multi-mesh flow and conjugate heat transfer problems with RELAP-7

    International Nuclear Information System (INIS)

    Zou, L.; Peterson, J.; Zhao, H.; Zhang, H.; Andrs, D.; Martineau, R.

    2013-01-01

    The fully implicit simulation capability of RELAP-7 to solve multi-mesh flow and conjugate heat transfer problems for reactor system safety analysis is presented. Compared to general single-mesh simulations, a reactor system safety analysis type of code has unique challenges due to its highly simplified, interconnected, one-dimensional, and zero-dimensional flow network describing multiple physics with significantly different time and length scales. To use a Jacobian-free Newton Krylov type of solver, preconditioning is generally required for the Krylov method. The uniqueness of the reactor safety analysis type of code in treating the interconnected flow network and conjugate heat transfer also introduces challenges in providing the preconditioning matrix. Typical flow and conjugate heat transfer problems involved in reactor safety analysis using RELAP-7, as well as the special treatment of the preconditioning matrix, are presented in detail. (authors)
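
    A minimal illustration of the Jacobian-free Newton-Krylov idea referred to above, using SciPy's matrix-free solver on a toy nonlinear residual; the coupled flow/heat-transfer residual and the physics-based preconditioning of RELAP-7 are not represented here.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Toy nonlinear residual: -u'' + exp(u) - 10 = 0 on a uniform mesh, u = 0 at both ends."""
    h = 1.0 / (u.size + 1)
    up = np.concatenate(([0.0], u, [0.0]))           # homogeneous Dirichlet boundaries
    return (2.0 * up[1:-1] - up[:-2] - up[2:]) / h**2 + np.exp(up[1:-1]) - 10.0

u0 = np.zeros(50)
# The Jacobian is never assembled: the Krylov solver only needs J*v products,
# which are approximated by finite differences of the residual.
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-9)
print("converged residual norm:", np.linalg.norm(residual(u)))
```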

  15. Immersed boundary method combined with a high order compact scheme on half-staggered meshes

    International Nuclear Information System (INIS)

    Księżyk, M; Tyliszczak, A

    2014-01-01

    This paper presents the results of computations of incompressible flows performed with a high-order compact scheme and the immersed boundary method. The solution algorithm is based on the projection method implemented using the half-staggered grid arrangement in which the velocity components are stored in the same locations while the pressure nodes are shifted half a cell size. The time discretization is performed using the predictor-corrector method in which the forcing terms used in the immersed boundary method acts in both steps. The solution algorithm is verified based on 2D flow problems (flow in a lid-driven skewed cavity, flow over a backward facing step) and turns out to be very accurate on computational meshes comparable with ones used in the classical approaches, i.e. not based on the immersed boundary method.

  16. Protection from radiation enteritis by an absorbable polyglycolic acid mesh sling

    International Nuclear Information System (INIS)

    Devereux, D.F.; Thompson, D.; Sandhaus, L.; Sweeney, W.; Haas, A.

    1987-01-01

    Patients with malignant tumors of the pelvis who cannot be cured surgically often are treated with radiation after surgery. A devastating side effect of this treatment is radiation-associated small bowel injury (RASBI). The purpose of this study was to test the hypothesis that removal of the small bowel from the radiation field would protect it against RASBI. Twenty cebus monkeys underwent low anterior resection. In 10 animals an absorbable polyglycolic acid (PGA) mesh was sewn circumferentially around the interior of the abdominal cavity as a supporting apron, which prevented the small bowel's descent into the pelvis. The other 10 monkeys did not receive the mesh. All animals received 2000 rads delivered by a linear accelerator in a single dose. Twenty-four-hour stool fat, serum vitamin B12, and other serum values were obtained during the study. Animals were sacrificed after 1, 2, 3, 6, and 12 months, and the small bowel and rectum were examined histologically in a blind manner. Two monkeys that underwent neither surgery nor radiation exposure served as controls. At all sacrifice periods, the animals with PGA mesh slings demonstrated normal small bowel function and histologic structure. Animals without mesh slings had abnormal stool and blood values at 1 month, and by 2 months all had died of small bowel necrosis. The animals that received the slings had no evidence of infection or obstruction, and by 6 months all evidence of the mesh was gone. Support of the small bowel out of the pelvis by an absorbable PGA mesh sling protects against RASBI and is without apparent complications.

  17. Offset linear scaling for H-mode confinement

    International Nuclear Information System (INIS)

    Miura, Yukitoshi; Tamai, Hiroshi; Suzuki, Norio; Mori, Masahiro; Matsuda, Toshiaki; Maeda, Hikosuke; Takizuka, Tomonori; Itoh, Sanae; Itoh, Kimitaka.

    1992-01-01

    An offset linear scaling for the H-mode confinement time is examined based on single parameter scans on the JFT-2M experiment. A regression study is done for various devices with open divertor configuration such as JET, DIII-D and JFT-2M. The scaling law of the thermal energy is given in MKSA units as W_th = 0.0046 R^1.9 I_P^1.1 B_T^0.91 √A + 2.9×10^-8 I_P^1.0 R^0.87 √A P, where R is the major radius, I_P is the plasma current, B_T is the toroidal magnetic field, A is the average mass number of plasma and neutral beam particles, and P is the heating power. This fitting has a similar root mean square error (RMSE) compared to the power law scaling. The result is also compared with the H-mode in other configurations. The W_th of closed divertor H-mode on ASDEX shows slightly better values than that of open divertor H-mode. (author)
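
    As a worked example of reading an offset linear scaling, the helper below evaluates the thermal stored energy as an offset term plus a term linear in the heating power, using the coefficients quoted above; the exponents come from the reconstruction of the garbled original expression and should be treated as indicative, and the input values are merely illustrative.

```python
def w_th_offset_linear(R, Ip, Bt, A, P):
    """Offset linear H-mode scaling (MKSA units): W_th = offset + incremental term * P.

    Coefficients and exponents follow the expression quoted in the abstract above
    (as reconstructed), so the numbers should be read as indicative only.
    """
    offset = 0.0046 * R**1.9 * Ip**1.1 * Bt**0.91 * A**0.5
    incremental = 2.9e-8 * Ip**1.0 * R**0.87 * A**0.5
    return offset + incremental * P

# Illustrative JFT-2M-like parameters: R in m, Ip in A, Bt in T, P in W
print(f"W_th = {w_th_offset_linear(R=1.3, Ip=2.3e5, Bt=1.3, A=2.0, P=1.0e6) / 1e3:.1f} kJ")
```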

  18. Simulations of a single vortex ring using an unbounded, regularized particle-mesh based vortex method

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore

    2014-01-01

    ... unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence and eventual destruction of a single vortex ring. From the simulation data a novel method for analyzing the dynamics of the enstrophy is presented, based on the alignment of the vorticity vector ... with the principal axis of the strain rate tensor. We find that the dynamics of the enstrophy density is dominated by the local flow deformation and axis of rotation, which is used to infer some concrete tendencies related to the topology of the vorticity field ...

  19. Oral, intestinal, and skin bacteria in ventral hernia mesh implants

    Directory of Open Access Journals (Sweden)

    Odd Langbach

    2016-07-01

    Background: In ventral hernia surgery, mesh implants are used to reduce recurrence. Infection after mesh implantation can be a problem and rates around 6–10% have been reported. Bacterial colonization of mesh implants in patients without clinical signs of infection has not been thoroughly investigated. Molecular techniques have proven effective in demonstrating bacterial diversity in various environments and are able to identify bacteria on a gene-specific level. Objective: The purpose of this study was to detect bacterial biofilm in mesh implants, analyze its bacterial diversity, and look for possible resemblance with bacterial biofilm from the periodontal pocket. Methods: Thirty patients referred to our hospital for recurrence after former ventral hernia mesh repair were examined for periodontitis in advance of new surgical hernia repair. Oral examination included periapical radiographs, periodontal probing, and subgingival plaque collection. A piece of mesh (1×1 cm) from the abdominal wall was harvested during the new surgical hernia repair and analyzed for bacteria by PCR and 16S rRNA gene sequencing. From patients with positive PCR mesh samples, subgingival plaque samples were analyzed with the same techniques. Results: A great variety of taxa were detected in 20 (66.7%) mesh samples, including typical oral commensals and periodontopathogens, enterics, and skin bacteria. Mesh and periodontal bacteria were further analyzed for similarity in 16S rRNA gene sequences. In 17 sequences, the level of resemblance between mesh and subgingival bacterial colonization was 98–100%, suggesting, but not proving, a transfer of oral bacteria to the mesh. Conclusion: The results show great bacterial diversity on mesh implants from the anterior abdominal wall, including oral commensals and periodontopathogens. Mesh can be reached by bacteria in several ways including hematogenous spread from an oral site. However, other sites such as gut and skin may also

  20. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

    Box-Cox regression with power transformations λj, for j = 1, 2, ..., k, can be used when the dependent variable and the error term of a linear regression model do not satisfy the continuity and normality assumptions. The case of obtaining the smallest mean square error with the optimum power transformation λj, for j = 1, 2, ..., k, of Y is discussed. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed for differentiation and differential analysis of the time scale concept.
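
    A minimal illustration of the transformation behind the method discussed above: SciPy's boxcox estimates the optimal power by maximum likelihood and returns the transformed response, which can then enter an ordinary linear regression. The data below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 200)
y = np.exp(0.3 * x + rng.normal(0.0, 0.2, x.size))   # positively skewed, heteroscedastic response

y_bc, lam = stats.boxcox(y)                          # transformed response and MLE of lambda
slope, intercept, r, p, se = stats.linregress(x, y_bc)

print(f"estimated Box-Cox lambda = {lam:.3f}")
print(f"linear fit on the transformed scale: slope = {slope:.3f}, R^2 = {r**2:.3f}")
```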

  1. Direct current linear measurement sub-assembly data and test methods. Nuclear electronic equipment for control and monitoring panel

    International Nuclear Information System (INIS)

    1977-12-01

    The M.C.H./M.E.N.T.3 document is concerned with sub-assemblies intended for measuring on a linear scale the neutron fluence rate or radiation dose rate when connected with nuclear detectors operating in current mode. The symbols used are described. Some definitions and a bibliography are given. The main characteristics of direct current linear measurement sub-assemblies are then described together with corresponding test methods. This type of instrument indicates on a linear scale the level of a direct current applied to its input. The document reviews linear sub-assemblies for general purpose applications, difference amplifiers for monitoring, and averaging amplifiers. The document is intended for electronics manufacturers, designers, persons participating in acceptance trials and plant operators. [fr]

  2. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either normal or logistic population or a fusion of both distributions and the estimates are computed ...

  3. Amorphous Ni(Fe)OxHy-coated nanocone arrays self-supported on stainless steel mesh as a promising oxygen-evolving anode for large scale water splitting

    Science.gov (United States)

    Shen, Junyu; Wang, Mei; Zhao, Liang; Zhang, Peili; Jiang, Jian; Liu, Jinxuan

    2018-06-01

    The development of highly efficient, robust, and cheap water oxidation electrodes is a major challenge in constructing industrially applicable electrolyzers for large-scale production of hydrogen from water. Herein we report a hierarchical stainless steel mesh electrode which features Ni(Fe)OxHy-coated self-supported nanocone arrays. Through a facile, mild, low-cost and readily scalable two-step fabrication procedure, the electrochemically active area of the optimized electrode is enlarged by a factor of 3.1 and the specific activity is enhanced by a factor of 250 at 265 mV overpotential compared with that of a corresponding pristine stainless steel mesh electrode. Moreover, the charge-transfer resistance is reduced from 4.47 Ω for the stainless steel mesh electrode to 0.13 Ω for the Ni(Fe)OxHy-coated nanocone array stainless steel mesh electrode. As a result, the cheap and easily fabricated electrode requires overpotentials of only 280 and 303 mV to achieve high current densities of 500 and 1000 mA cm_geo^-2, respectively, for the oxygen evolution reaction in 1 M KOH. More importantly, the electrode exhibits good stability over 340 h of chronopotentiometric testing at 50 mA cm_geo^-2 and only a slight attenuation (4.2%, ∼15 mV) in catalytic activity over 82 h of electrolysis at a constant current density of 500 mA cm_geo^-2.

  4. Surface meshing with curvature convergence

    KAUST Repository

    Li, Huibin; Zeng, Wei; Morvan, Jean-Marie; Chen, Liming; Gu, Xianfeng David

    2014-01-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.

  6. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus, preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.

  7. Reduced order modelling techniques for mesh movement strategies as applied to fluid structure interactions

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2010-01-01

    ... of Laplacian or Bi-harmonic equations [7], radial basis function (RBF) interpolation [3, 15] or through mesh optimization [1, 6]. Despite the successes of these algorithms in reducing the frequency and necessity for re-meshing, they still account for a... simulations of a real system. What makes POD remarkable is that the selected modes are not only appropriate but make up the optimal linear basis for describing any given system. POD has been applied in a wide range of disciplines including image processing...
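
    Since the passage above leans on POD supplying the optimal linear basis, a compact sketch of how such a basis is typically extracted from snapshot data via the singular value decomposition is given below; the snapshot matrix here is synthetic and merely stands in for stored mesh-displacement fields.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix: each column is one "mesh displacement" field (synthetic data here)
n_dof, n_snap = 3000, 40
t = np.linspace(0.0, 1.0, n_snap)
x = np.linspace(0.0, 1.0, n_dof)[:, None]
snapshots = (np.sin(np.pi * x) * np.cos(2 * np.pi * t)
             + 0.3 * np.sin(3 * np.pi * x) * np.sin(4 * np.pi * t)
             + 0.01 * rng.normal(size=(n_dof, n_snap)))

# POD modes are the left singular vectors; singular values rank their energy content
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1          # smallest basis capturing 99.9 % of the energy

# Reduced-order reconstruction of every snapshot from the first r modes
recon = U[:, :r] @ (U[:, :r].T @ snapshots)
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"{r} POD modes retain 99.9 % of the energy; relative reconstruction error = {err:.2e}")
```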

  8. Mesh size in Lichtenstein repair: a systematic review and meta-analysis to determine the importance of mesh size.

    Science.gov (United States)

    Seker, D; Oztuna, D; Kulacoglu, H; Genc, Y; Akcil, M

    2013-04-01

    Small mesh size has been recognized as one of the factors responsible for recurrence after Lichtenstein hernia repair due to insufficient coverage or mesh shrinkage. The Lichtenstein Hernia Institute recommends a 7 × 15 cm mesh that can be trimmed up to 2 cm from the lateral side. We performed a systematic review to determine surgeons' mesh size preference for the Lichtenstein hernia repair and made a meta-analysis to determine the effect of mesh size, mesh type, and length of follow-up time on recurrence. Two medical databases, PubMed and ISI Web of Science, were systematically searched using the key word "Lichtenstein repair." All full text papers were selected. Publications mentioning mesh size were brought for further analysis. A mesh surface area of 90 cm(2) was accepted as the threshold for defining the mesh as small or large. Also, a subgroup analysis for recurrence pooled proportion according to the mesh size, mesh type, and follow-up period was done. In total, 514 papers were obtained. There were no prospective or retrospective clinical studies comparing mesh size and clinical outcome. A total of 141 papers were duplicated in both databases. As a result, 373 papers were obtained. The full text was available in over 95 % of papers. Only 41 (11.2 %) papers discussed mesh size. In 29 studies, a mesh larger than 90 cm(2) was used. The most frequently preferred commercial mesh size was 7.5 × 15 cm. No papers mentioned the size of the mesh after trimming. There was no information about the relationship between mesh size and patient BMI. The pooled proportion in recurrence for small meshes was 0.0019 (95 % confidence interval: 0.007-0.0036), favoring large meshes to decrease the chance of recurrence. Recurrence becomes more marked when follow-up period is longer than 1 year (p < 0.001). Heavy meshes also decreased recurrence (p = 0.015). This systematic review demonstrates that the size of the mesh used in Lichtenstein hernia repair is rarely

  9. Parallel adaptation of general three-dimensional hybrid meshes

    International Nuclear Information System (INIS)

    Kavouklis, Christos; Kallinderis, Yannis

    2010-01-01

    A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements; tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. Additional complexity imposed by the presence of multiple types of elements affects especially data migration, updates of local data structures and interpartition data structures. Efficient partition of hybrid meshes has been accomplished by transforming them to suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced which yields savings in computation time and relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.

  10. Influence of cell shape on mechanical properties of Ti-6Al-4V meshes fabricated by electron beam melting method.

    Science.gov (United States)

    Li, S J; Xu, Q S; Wang, Z; Hou, W T; Hao, Y L; Yang, R; Murr, L E

    2014-10-01

    Ti-6Al-4V reticulated meshes with different elements (cubic, G7 and rhombic dodecahedron) in Materialise software were fabricated by additive manufacturing using the electron beam melting (EBM) method, and the effects of cell shape on the mechanical properties of these samples were studied. The results showed that these cellular structures with porosities of 88-58% had compressive strength and elastic modulus in the range 10-300 MPa and 0.5-15 GPa, respectively. The compressive strength and deformation behavior of these meshes were determined by the coupling of the buckling and bending deformation of struts. Meshes that were dominated by buckling deformation showed relatively high collapse strength and were prone to exhibit brittle characteristics in their stress-strain curves. For meshes dominated by bending deformation, the elastic deformation corresponded well to the Gibson-Ashby model. By enhancing the effect of bending deformation, the stress-strain curve characteristics can change from brittle to ductile (the smooth plateau area). Therefore, Ti-6Al-4V cellular solids with high strength, low modulus and desirable deformation behavior could be fabricated through the cell shape design using the EBM technique. Copyright © 2014 Acta Materialia Inc. All rights reserved.
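
    For reference, the Gibson-Ashby relations invoked above tie the relative modulus and strength of a bending-dominated cellular solid to its relative density. The sketch below uses textbook prefactors and exponents and rough Ti-6Al-4V solid properties as stated assumptions; in practice these constants are fitted to the measured data.

```python
def gibson_ashby(relative_density, E_solid=110.0e3, sigma_solid=900.0,
                 C1=1.0, C2=0.3, n_E=2.0, n_s=1.5):
    """Relative modulus and plastic collapse strength of a bending-dominated cellular solid.

    E*/Es = C1 * (rho*/rhos)^n_E,  sigma*/sigma_ys = C2 * (rho*/rhos)^n_s.
    Default solid properties are roughly those of Ti-6Al-4V (in MPa); the prefactors
    and exponents are textbook values, used here only as illustrative assumptions.
    """
    E_star = C1 * E_solid * relative_density ** n_E
    sigma_star = C2 * sigma_solid * relative_density ** n_s
    return E_star, sigma_star

for porosity in (0.88, 0.70, 0.58):
    E, s = gibson_ashby(1.0 - porosity)
    print(f"porosity {porosity:.2f}: E* ~ {E / 1e3:.1f} GPa, sigma* ~ {s:.0f} MPa")
```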

  11. Two-dimensional differential transform method for solving linear and non-linear Schroedinger equations

    International Nuclear Information System (INIS)

    Ravi Kanth, A.S.V.; Aruna, K.

    2009-01-01

    In this paper, we propose a reliable algorithm to develop exact and approximate solutions for the linear and nonlinear Schroedinger equations. The approach rests mainly on the two-dimensional differential transform method, which is one of the approximate methods. The method can easily be applied to many linear and nonlinear problems and is capable of reducing the size of computational work. Exact solutions can also be achieved by the known forms of the series solutions. Several illustrative examples are given to demonstrate the effectiveness of the present method.

  12. Unstructured Adaptive Meshes: Bad for Your Memory?

    Science.gov (United States)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  13. Vaginal native tissue repair versus transvaginal mesh repair for apical prolapse: how utilizing different methods of analysis affects the estimated trade-off between reoperation for mesh exposure/erosion and reoperation for recurrent prolapse.

    Science.gov (United States)

    Dieter, Alexis A; Willis-Gray, Marcella G; Weidner, Alison C; Visco, Anthony G; Myers, Evan R

    2015-05-01

    Informed decision-making about optimal surgical repair of apical prolapse with vaginal native tissue (NT) versus transvaginal mesh (TVM) requires understanding the balance between the potential "harm" of mesh-related complications and the potential "benefit" of reducing prolapse recurrence. Synthesis of data from observational studies is required and the current literature shows that the average follow-up for NT repair is significantly longer than for TVM repair. We examined this harm/benefit balance. We hypothesized that using different methods of analysis to incorporate follow-up time would affect the balance of outcomes. We used a Markov state transition model to estimate the cumulative 24-month probabilities of reoperation for mesh exposure/erosion or for recurrent prolapse after either NT or TVM repair. We used four different analytic approaches to estimate probability distributions ranging from simple pooled proportions to a random effects meta-analysis using study-specific events per patient-time. As variability in follow-up time was accounted for better, the balance of outcomes became more uncertain. For TVM repair, the incremental ratio of number of operations for mesh exposure/erosion per single reoperation for recurrent prolapse prevented increased progressively from 1.4 to over 100 with more rigorous analysis methods. The most rigorous analysis showed a 70% probability that TVM would result in more operations for recurrent prolapse repair than NT. Based on the best available evidence, there is considerable uncertainty about the harm/benefit trade-off between NT and TVM for apical prolapse repair. Future studies should incorporate time-to-event analyses, with greater standardization of reporting, in order to better inform decision-making.
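    To make the modeling approach concrete, the sketch below shows a toy discrete-time (monthly cycle) Markov model of the kind described, accumulating the 24-month probability of a first reoperation; the monthly transition probabilities are hypothetical placeholders, not the study's pooled estimates.

    ```python
    # Toy monthly-cycle Markov model for cumulative 24-month reoperation
    # probabilities.  Transition probabilities are hypothetical placeholders.
    import numpy as np

    states = ["no_reoperation", "reop_mesh_exposure", "reop_recurrence"]
    P = np.array([               # rows: from-state, columns: to-state
        [0.994, 0.004, 0.002],   # hypothetical monthly probabilities
        [0.0,   1.0,   0.0],     # reoperation states are absorbing
        [0.0,   0.0,   1.0],
    ])

    dist = np.array([1.0, 0.0, 0.0])   # everyone starts event-free
    for month in range(24):
        dist = dist @ P

    print(f"24-month cumulative P(reop. for mesh exposure) = {dist[1]:.3f}")
    print(f"24-month cumulative P(reop. for recurrence)    = {dist[2]:.3f}")
    ```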

  14. Clinical study for pancreatic fistula after distal pancreatectomy with mesh reinforcement

    Directory of Open Access Journals (Sweden)

    Akira Hayashibe

    2018-05-01

    Full Text Available Summary: Background: The purpose of this cohort study was to determine whether distal pancreatectomy with mesh reinforcement can reduce postoperative pancreatic fistula (POPF) rates compared with a bare stapler. Methods: In total, 51 patients underwent stapled distal pancreatectomy. Of these, 22 patients (no mesh group) underwent distal pancreatectomy with a bare stapler and 29 patients (mesh group) underwent distal pancreatectomy with a mesh-reinforced stapler. Risk factors for clinically relevant POPF (grades B and C) after distal pancreatectomy were also evaluated. Results: Clinical characteristics were similar in both groups. The mean days of hospital stay and drainage tube insertion in the mesh group were significantly fewer than those in the no mesh group. The mean level of amylase in the discharge fluid in the mesh group was also significantly lower than that in the no mesh group. The rate of clinically relevant POPF (grades B and C) in the mesh group was significantly lower than that in the no mesh group (p = 0.016). Univariate analyses of risk factors for POPF (grades B and C) revealed that only mesh reinforcement was associated with POPF (grades B and C). Moreover, on multivariate logistic regression analysis of POPF risk factors with p < 0.2 in the univariate analyses, mesh reinforcement remained a significant factor for POPF (grades B and C). Conclusions: Distal pancreatectomy with a mesh-reinforced stapler appears favorable for the prevention of clinically relevant POPF (grades B and C). Keywords: mesh reinforcement, pancreatic fistula, pancreatic surgery

  15. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Beckingsale, D. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Gaudin, W. P. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Hornung, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Herdman, J. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Jarvis, S. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.

  16. Development of a 3D non-linear implicit MHD code

    International Nuclear Information System (INIS)

    Nicolas, T.; Ichiguchi, K.

    2016-06-01

    This paper details the ongoing development of a 3D non-linear implicit MHD code, which aims to make possible large-scale simulations of the non-linear phase of the interchange mode. The goal of the paper is to explain the rationale behind the choices made during development, and the technical difficulties encountered. At the present stage, the development of the code has not yet been completed. Most of the discussion is concerned with the first approach, which utilizes Cartesian coordinates in the poloidal plane. This approach shows serious difficulties in writing the preconditioner, closely related to the choice of coordinates. A second approach, based on curvilinear coordinates, also faced significant difficulties, which are detailed. The third and last approach explored involves unstructured tetrahedral grids, and indicates that the problem can be solved. The issue of domain meshing is addressed. (author)

  17. Performance and scaling of locally-structured grid methods forpartial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Colella, Phillip; Bell, John; Keen, Noel; Ligocki, Terry; Lijewski, Michael; Van Straalen, Brian

    2007-07-19

    In this paper, we discuss some of the issues in obtaining high performance for block-structured adaptive mesh refinement software for partial differential equations. We show examples in which AMR scales to thousands of processors. We also discuss a number of metrics for performance and scalability that can provide a basis for understanding the advantages and disadvantages of this approach.

  18. Three-point phase correlations: A new measure of non-linear large-scale structure

    CERN Document Server

    Wolstenhulme, Richard; Obreschkow, Danail

    2015-01-01

    We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the non-linear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the non-linear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a very good agreement for separations r>20 Mpc/h. Fitting formulae for the power spectrum and the non-linear coupling kernel at small scales allow us to extend our prediction into the strongly non-linear regime. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the linear bias. Furtherm...
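    As a rough illustration of the quantity involved, the sketch below computes the unit-modulus (phase-only) Fourier field whose three-point correlations along a line define the line correlation function; it is only a toy illustration on a random field, not the paper's full estimator.

    ```python
    # Build the "phase field": whiten the Fourier amplitudes of the density
    # contrast, keeping only the phases.  Toy field, not a simulation box.
    import numpy as np

    rng = np.random.default_rng(0)
    delta = rng.standard_normal((64, 64, 64))       # toy density contrast field

    delta_k = np.fft.rfftn(delta)
    amp = np.abs(delta_k)
    amp[amp == 0] = 1.0                             # avoid division by zero
    eps_k = delta_k / amp                           # unit-modulus Fourier modes

    # Back in configuration space, eps(x) carries only phase information;
    # its three-point correlation along a line defines l(r).
    eps_x = np.fft.irfftn(eps_k, s=delta.shape)
    print(eps_x.shape, eps_x.std())
    ```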

  19. A discontinuous Galerkin method for solving transient Maxwell equations with nonlinear material properties

    KAUST Repository

    Sirenko, Kostyantyn

    2014-07-01

    Discontinuous Galerkin time-domain method (DGTD) has been used extensively in computational electromagnetics for analyzing transient electromagnetic wave interactions on structures described with linear constitutive relations. DGTD expands unknown fields independently on disconnected mesh elements and uses numerical flux to realize information exchange between fields on different elements (J. S. Hesthaven and T. Warburton, Nodal Discontinuous Galerkin Method, 2008). The numerical flux of choice for 'linear' Maxwell equations is the upwind flux, which mimics accurately the physical behavior of electromagnetic waves on discontinuous boundaries. It is obtained from the analytical solution of the Riemann problem defined on the boundary of two neighboring mesh elements.

  20. A discontinuous Galerkin method for solving transient Maxwell equations with nonlinear material properties

    KAUST Repository

    Sirenko, Kostyantyn; Asirim, Ozum Emre; Bagci, Hakan

    2014-01-01

    Discontinuous Galerkin time-domain method (DGTD) has been used extensively in computational electromagnetics for analyzing transient electromagnetic wave interactions on structures described with linear constitutive relations. DGTD expands unknown fields independently on disconnected mesh elements and uses numerical flux to realize information exchange between fields on different elements (J. S. Hesthaven and T. Warburton, Nodal Discontinuous Galerkin Method, 2008). The numerical flux of choice for 'linear' Maxwell equations is the upwind flux, which mimics accurately the physical behavior of electromagnetic waves on discontinuous boundaries. It is obtained from the analytical solution of the Riemann problem defined on the boundary of two neighboring mesh elements.

  1. Computational mesh generation for vascular structures with deformable surfaces

    International Nuclear Information System (INIS)

    Putter, S. de; Laffargue, F.; Breeuwer, M.; Vosse, F.N. van de; Gerritsen, F.A.; Philips Medical Systems, Best

    2006-01-01

    Computational blood flow and vessel wall mechanics simulations for vascular structures are becoming an important research tool for patient-specific surgical planning and intervention. An important step in the modelling process for patient-specific simulations is the creation of the computational mesh based on the segmented geometry. Most known solutions either require a large amount of manual processing or lead to a substantial difference between the segmented object and the actual computational domain. We have developed a chain of algorithms that lead to a closely related implementation of image segmentation with deformable models and 3D mesh generation. The resulting processing chain is very robust and leads both to an accurate geometrical representation of the vascular structure as well as high quality computational meshes. The chain of algorithms has been tested on a wide variety of shapes. A benchmark comparison of our mesh generation application with five other available meshing applications clearly indicates that the new approach outperforms the existing methods in the majority of cases. (orig.)

  2. Application of particle-mesh Ewald summation to ONIOM theory

    International Nuclear Information System (INIS)

    Kobayashi, Osamu; Nanbu, Shinkoh

    2015-01-01

    Highlights: • Particle-mesh Ewald sum is extended to ONIOM scheme. • Non-adiabatic MD simulation in solution is performed. • The behavior of excited (Z)-penta-2,4-dieniminium cation in methanol is simulated. • The difference between gas phase and solution is predicted. - Abstract: We extended a particle mesh Ewald (PME) summation method to the ONIOM (our Own N-layered Integrated molecular Orbitals and molecular Mechanics) scheme (PME-ONIOM) to validate the simulation in solution. This took the form of a nonadiabatic ab initio molecular dynamics (MD) simulation in which the Zhu-Nakamura trajectory surface hopping (ZN-TSH) method was performed for the photoisomerization of a (Z)-penta-2,4-dieniminium cation (protonated Schiff base, PSB3) electronically excited to the S1 state in a methanol solution. We also calculated a nonadiabatic ab initio MD simulation with only minimum image convention (MI-ONIOM). The lifetime determined by PME-ONIOM-MD was 3.483 ps. The MI-ONIOM-MD lifetime of 0.4642 ps was much shorter than those of PME-ONIOM-MD and the experimentally determined excited state lifetime. The difference eminently illustrated the accurate treatment of the long-range solvation effect, which destines the electronically excited PSB3 for staying in S1 at the pico-second or the femto-second time scale.

  3. Voltammetry at micro-mesh electrodes

    Directory of Open Access Journals (Sweden)

    Wadhawan Jay D.

    2003-01-01

    Full Text Available The voltammetry at three micro-mesh electrodes is explored. It is found that at sufficiently short experimental durations, the micro-mesh working electrode first behaves as an ensemble of microband electrodes, then follows the behaviour anticipated for an array of diffusion-independent micro-ring electrodes of the same perimeter as individual grid-squares within the mesh. During prolonged electrolysis, the micro-mesh electrode follows that behaviour anticipated theoretically for a cubically-packed partially-blocked electrode. Application of the micro-mesh electrode for the electrochemical determination of carbon dioxide in DMSO electrolyte solutions is further illustrated.

  4. Streaming simplification of tetrahedral meshes.

    Science.gov (United States)

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
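    The quadric-based step refers to the well-known quadric error metric; the sketch below shows that metric in isolation (in the spirit of Garland-Heckbert, with planar quadrics assumed), while the streaming, out-of-core machinery of the paper is not reproduced.

    ```python
    # Minimal sketch of the quadric error metric used to score edge collapses.
    import numpy as np

    def plane_quadric(a, b, c, d):
        """Fundamental error quadric K = p p^T for plane ax+by+cz+d = 0
        with (a, b, c) of unit length."""
        p = np.array([a, b, c, d], dtype=float)
        return np.outer(p, p)

    def vertex_error(Q, v):
        """Squared distance-to-planes error v^T Q v, v homogeneous (x,y,z,1)."""
        vh = np.append(np.asarray(v, dtype=float), 1.0)
        return float(vh @ Q @ vh)

    # A vertex quadric is the sum of the quadrics of its incident planes;
    # collapsing an edge (v1, v2) uses Q1 + Q2 to score candidate positions.
    Q = plane_quadric(0.0, 0.0, 1.0, 0.0) + plane_quadric(1.0, 0.0, 0.0, -1.0)
    print(vertex_error(Q, [0.5, 0.0, 0.25]))   # small error near both planes
    ```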

  5. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    Science.gov (United States)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability for large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although an obvious scalability issue if the mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh entity centric view of the particle mesh relationship, provides opportunities to address data locality needs of many core and GPU supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.

  6. Mesh Nanoelectronics: Seamless Integration of Electronics with Tissues.

    Science.gov (United States)

    Dai, Xiaochuan; Hong, Guosong; Gao, Teng; Lieber, Charles M

    2018-02-20

    nanoelectronics into rodent brains. First, we describe the design of ultraflexible mesh nanoelectronics with size features and mechanical properties similar to brain tissue and a novel syringe-injection methodology that allows the mesh nanoelectronics to be precisely delivered to targeted brain regions in a minimally invasive manner. Next, we discuss time-dependent histology studies showing seamless and stable integration of mesh nanoelectronics within brain tissue on at least one year scales without evidence of chronic immune response or glial scarring characteristic of conventional implants. Third, armed with facile input/output interfaces, we describe multiplexed single-unit recordings that demonstrate stable tracking of the same individual neurons and local neural circuits for at least 8 months, long-term monitoring and stimulation of the same groups of neurons, and following changes in individual neuron activity during brain aging. Moving forward, we foresee substantial opportunities for (1) continued development of mesh nanoelectronics through, for example, broadening nanodevice signal detection modalities and taking advantage of tissue-like properties for selective cell targeting and (2) exploiting the unique capabilities of mesh nanoelectronics for tackling critical scientific and medical challenges such as understanding and potentially ameliorating cell and circuit level changes associated with natural and pathological aging, as well as using mesh nanoelectronics as active tissue scaffolds for regenerative medicine and as neuroprosthetics for monitoring and treating neurological diseases.

  7. Scaling Optimization of the SIESTA MHD Code

    Science.gov (United States)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  8. MeshVoro: A Three-Dimensional Voronoi Mesh Building Tool for the TOUGH Family of Codes

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, C. M.; Boyle, K. L.; Reagan, M.; Johnson, J.; Rycroft, C.; Moridis, G. J.

    2013-09-30

    Few tools exist for creating and visualizing complex three-dimensional simulation meshes, and these have limitations that restrict their application to particular geometries and circumstances. Mesh generation needs to trend toward ever more general applications. To that end, we have developed MeshVoro, a tool that is based on the Voro (Rycroft 2009) library and is capable of generating complex three-dimensional Voronoi tessellation-based (unstructured) meshes for the solution of problems of flow and transport in subsurface geologic media that are addressed by the TOUGH (Pruess et al. 1999) family of codes. MeshVoro, which includes built-in data visualization routines, is a particularly useful tool because it extends the applicability of the TOUGH family of codes by enabling the scientifically robust and relatively easy discretization of systems with challenging 3D geometries. We describe several applications of MeshVoro. We illustrate the ability of the tool to straightforwardly transform a complex geological grid into a simulation mesh that conforms to the specifications of the TOUGH family of codes. We demonstrate how MeshVoro can describe complex system geometries with a relatively small number of grid blocks, and we construct meshes for geometries that would have been practically intractable with a standard Cartesian grid approach. We also discuss the limitations and appropriate applications of this new technology.
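    As a small illustration of the underlying idea, the sketch below builds a 2-D Voronoi tessellation from seed points with SciPy; MeshVoro itself builds on the Voro library and writes 3-D meshes in the TOUGH format, which is not reproduced here.

    ```python
    # Minimal 2-D Voronoi tessellation from random seed points (illustration
    # of the tessellation concept only; seeds and domain are hypothetical).
    import numpy as np
    from scipy.spatial import Voronoi

    rng = np.random.default_rng(42)
    seeds = rng.uniform(0.0, 100.0, size=(50, 2))

    vor = Voronoi(seeds)
    print(f"{len(vor.point_region)} cells, {len(vor.vertices)} Voronoi vertices")

    # Each finite region is a polygonal grid block; its vertices could be
    # written out as an unstructured mesh element for a flow simulator.
    region = vor.regions[vor.point_region[0]]
    if -1 not in region:                      # region is bounded
        print("cell 0 polygon:\n", vor.vertices[region])
    ```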

  9. 3D face analysis by using Mesh-LBP feature

    Science.gov (United States)

    Wang, Haoyu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    Objective: Face recognition is one of the most widespread applications of image processing. The limitations of two-dimensional approaches, such as sensitivity to pose and illumination changes, have to some extent restricted its accuracy and further development. Overcoming pose and illumination changes and the effects of self-occlusion remains a difficult and active research topic, attracting growing attention from researchers worldwide. 3D face recognition that fuses shape and texture descriptors has become a very promising research direction. Method: This paper builds a mesh local binary pattern (Mesh-LBP) representation from a 3D point cloud and then extracts features for 3D face recognition by fusing shape and texture descriptors. 3D Mesh-LBP not only retains the integrity of the 3D geometry but also reduces the need for normalization steps in the recognition process, because the triangular Mesh-LBP descriptor is computed directly on the 3D mesh. Moreover, exploiting the advantage of multi-modal consistency in face recognition, the LBP construction can fuse shape and texture information on the triangular mesh. Several operators are used to extract the Mesh-LBP, such as the normal vectors of each face and vertex, the Gaussian curvature, the mean curvature, and the Laplace operator. Conclusion: A Kinect device first acquires a 3D point cloud of the face; after preprocessing and normalization, the cloud is converted into a triangular mesh, and local binary pattern features are extracted from the key salient parts of the face. For each local region, the Mesh-LBP feature is computed with the Gaussian curvature, mean curvature, Laplace operator and so on. Experiments on our research database show that the method is robust and achieves high recognition accuracy.

  10. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
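    A minimal sketch of one forward block Gauss-Seidel sweep on a block tridiagonal system of the kind described above is given below; the blocks are random, diagonally dominant placeholders, and in practice such a sweep would serve as a preconditioner inside a Krylov method rather than being iterated to convergence.

    ```python
    # Forward block Gauss-Seidel sweeps on a block tridiagonal system
    # L_{i-1} x_{i-1} + D_i x_i + U_i x_{i+1} = b_i (blocks are placeholders).
    import numpy as np

    rng = np.random.default_rng(1)
    nb, m = 6, 4                                   # number of blocks, block size
    D = [np.eye(m) * 4 + 0.1 * rng.standard_normal((m, m)) for _ in range(nb)]
    L = [0.1 * rng.standard_normal((m, m)) for _ in range(nb - 1)]  # sub-diagonal
    U = [0.1 * rng.standard_normal((m, m)) for _ in range(nb - 1)]  # super-diagonal
    b = [rng.standard_normal(m) for _ in range(nb)]

    x = [np.zeros(m) for _ in range(nb)]
    for sweep in range(20):                        # block GS iterations
        for i in range(nb):
            r = b[i].copy()
            if i > 0:
                r -= L[i - 1] @ x[i - 1]           # uses already-updated block
            if i < nb - 1:
                r -= U[i] @ x[i + 1]               # uses previous-sweep block
            x[i] = np.linalg.solve(D[i], r)        # invert the diagonal block

    print("residual norm of first block row:",
          np.linalg.norm(D[0] @ x[0] + U[0] @ x[1] - b[0]))
    ```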

  11. A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes

    Science.gov (United States)

    Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.

    2018-06-01

    A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Comparing with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient by avoiding wide stencils on unstructured meshes. Unlike the traditional CPR method where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.

  12. Laparoscopic mesh explantation and drainage of sacral abscess remote from transvaginal excision of exposed sacral colpopexy mesh.

    Science.gov (United States)

    Roth, Ted M; Reight, Ian

    2012-07-01

    Sacral colpopexy may be complicated by mesh exposure, and the surgical treatment of mesh exposure typically results in minor postoperative morbidity and few delayed complications. A 75-year-old woman presented 7 years after a laparoscopic sacral colpopexy, with Mersilene mesh, with an apical mesh exposure. She underwent an uncomplicated transvaginal excision and was asymptomatic until 8 months later when she presented with vaginal drainage and a sacral abscess. This was successfully treated with laparoscopic enterolysis, drainage of the abscess, and explantation of the remaining mesh. Incomplete excision of exposed colpopexy mesh can lead to ascending infection and sacral abscess. Laparoscopic drainage and mesh removal may be considered in these patients.

  13. Conforming to interface structured adaptive mesh refinement: 3D algorithm and implementation

    Science.gov (United States)

    Nagarajan, Anand; Soghrati, Soheil

    2018-03-01

    A new non-iterative mesh generation algorithm named conforming to interface structured adaptive mesh refinement (CISAMR) is introduced for creating 3D finite element models of problems with complex geometries. CISAMR transforms a structured mesh composed of tetrahedral elements into a conforming mesh with low element aspect ratios. The construction of the mesh begins with the structured adaptive mesh refinement of elements in the vicinity of material interfaces. An r-adaptivity algorithm is then employed to relocate selected nodes of nonconforming elements, followed by face-swapping a small fraction of them to eliminate tetrahedrons with high aspect ratios. The final conforming mesh is constructed by sub-tetrahedralizing remaining nonconforming elements, as well as tetrahedrons with hanging nodes. In addition to studying the convergence and analyzing element-wise errors in meshes generated using CISAMR, several example problems are presented to show the ability of this method for modeling 3D problems with intricate morphologies.

  14. Mesh optimization for microbial fuel cell cathodes constructed around stainless steel mesh current collectors

    KAUST Repository

    Zhang, Fang; Merrill, Matthew D.; Tokash, Justin C.; Saito, Tomonori; Cheng, Shaoan; Hickner, Michael A.; Logan, Bruce E.

    2011-01-01

    that the mesh properties of these cathodes can significantly affect performance. Cathodes made from the coarsest mesh (30-mesh) achieved the highest maximum power of 1616 ± 25 mW m⁻² (normalized to cathode projected surface area; 47.1 ± 0.7 W m⁻³ based on liquid

  15. SUPERIMPOSED MESH PLOTTING IN MCNP

    Energy Technology Data Exchange (ETDEWEB)

    J. HENDRICKS

    2001-02-01

    The capability to plot superimposed meshes has been added to MCNP™. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.

  16. New software developments for quality mesh generation and optimization from biomedical imaging data.

    Science.gov (United States)

    Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko

    2014-01-01

    In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    International Nuclear Information System (INIS)

    Greenough, J.A.; Rider, W.J.

    2004-01-01

    A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the 'peak' shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see, for both methods, first-order self-convergence of error as compared to an exact solution or, when an analytic solution is not available, to a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error with a slight edge for the PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO by a factor of nearly 1.5 on the finer grids used. In all cases holding mesh resolution constant though, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is run times are

  18. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    Science.gov (United States)

    Greenough, J. A.; Rider, W. J.

    2004-05-01

    A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see, for both methods, first-order self-convergence of error as compared to an exact solution or, when an analytic solution is not available, to a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error with a slight edge for the PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO by a factor of nearly 1.5 on the finer grids used. In all cases holding mesh resolution constant though, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is run times are
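    Independent of either scheme, the self-convergence rates quoted above are obtained from error norms on successively refined grids; a minimal sketch with made-up error values follows.

    ```python
    # Observed order of convergence from error norms on refined grids.
    # The error values below are hypothetical placeholders, not paper data.
    import math

    h      = [1 / 50, 1 / 100, 1 / 200, 1 / 400]   # mesh spacings
    errors = [4.0e-2, 2.1e-2, 1.1e-2, 5.6e-3]      # hypothetical L1 density errors

    for k in range(1, len(h)):
        rate = math.log(errors[k - 1] / errors[k]) / math.log(h[k - 1] / h[k])
        print(f"h = {h[k]:.5f}: observed order ~ {rate:.2f}")
    ```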

  19. Surgical excision of eroded mesh after prior abdominal sacrocolpopexy.

    Science.gov (United States)

    South, Mary M T; Foster, Raymond T; Webster, George D; Weidner, Alison C; Amundsen, Cindy L

    2007-12-01

    We previously described an endoscopic-assisted transvaginal mesh excision technique. This study compares surgical outcomes after transvaginal mesh excision vs endoscopic-assisted transvaginal mesh excision. In addition, we reviewed our postoperative outcomes with excision via laparotomy. This was an inclusive retrospective analysis of patients presenting to our institution from 1997 to 2006 for surgical management of vaginal erosion of permanent mesh after sacrocolpopexy. Three techniques were utilized: transvaginal, endoscopic-assisted transvaginal, and laparotomy. For the patients undergoing transvaginal excision, data recorded included number and type of excisions performed, number of prior excisions performed at outside facilities, intraoperative and postoperative complications (including blood transfusions, pelvic abscess, or bowel complications), use of postoperative antibiotics, persistent symptoms of vaginal bleeding and discharge at follow-up, and demographic characteristics. The intraoperative and postoperative complications and the postoperative symptoms were recorded for the laparotomy cases. Thirty-one patients underwent transvaginal mesh excision during this time period: 17 endoscopic-assisted transvaginal and 14 transvaginal without endoscope assistance. In addition, a total of 7 patients underwent abdominal excision via laparotomy. Comparison of the 2 vaginal methods revealed no difference in the demographics or success rate, with success defined as no symptoms at follow-up. Endoscopic-assisted transvaginal excision was successful in 7 of 17 patients and transvaginal without endoscopic assistance in 9 of 13 patients (1 patient excluded for lack of follow-up data) for a total vaginal success rate of 53.3%. No intraoperative and only minor postoperative complications occurred with either vaginal method. Three patients underwent 3 vaginal attempts to achieve complete symptom resolution. The average follow-up time for the entire vaginal group was 14

  20. Preliminary Single-Phase Mixing Test using Wire Mesh System in a wire-wrapped 37-rod Bundle

    International Nuclear Information System (INIS)

    Bae, Hwang; Kim, Hyungmo; Lee, Dong Won; Choi, Hae Seob; Choi, Sun Rock; Chang, Seokkyu; Kim, Seok; Euh, Dongjin; Lee, Hyeongyeon

    2014-01-01

    In this paper, preliminary tests of the wire-mesh sensor are introduced ahead of measurements of the mixing coefficient in the wire-wrapped 37-pin fuel assembly of a sodium-cooled fast reactor. Through these preliminary tests, it was confirmed that city water can be used as a tracer with de-mineralized water as the base fluid. A simple test was performed to evaluate the characteristics of a wire-mesh sensor with a short pipe shape. The conductivities of de-mineralized water and city water increase linearly with temperature over the limited temperature range considered. The reliability of the wire-mesh sensor was estimated from the averages and standard deviations of the plane image at the crossing points. The wire-mesh sensor is therefore suitable for single-phase flow measurements of mixtures of de-mineralized water and city water. Wire-mesh sensors and systems have traditionally been used to measure the void fraction of two-phase gas-liquid flow fields. Recently, Ylonen et al. successfully designed and commissioned a measurement system for single-phase flow using a wire-mesh sensor.
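    A minimal sketch of the kind of linear conductivity-temperature calibration implied above is given below, using a least-squares fit; the sample data are hypothetical, not the measured values from the test.

    ```python
    # Linear least-squares calibration of conductivity vs. temperature.
    # The sample data are hypothetical placeholders.
    import numpy as np

    temperature  = np.array([20.0, 25.0, 30.0, 35.0, 40.0])    # deg C
    conductivity = np.array([610., 672., 735., 798., 860.])    # uS/cm (made up)

    slope, intercept = np.polyfit(temperature, conductivity, 1)
    print(f"sigma(T) ~ {slope:.1f} * T + {intercept:.1f}  [uS/cm]")

    # With separate fits for de-mineralized and city water, the local mixing
    # ratio at each wire-mesh crossing point can be inferred from the measured
    # conductance once the temperature is known.
    ```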

  1. Application of Linear Viscoelastic Properties in Semianalytical Finite Element Method with Recursive Time Integration to Analyze Asphalt Pavement Structure

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2018-01-01

    Full Text Available Traditionally, asphalt pavements are considered linear elastic materials in the finite element (FE) method to save computational time for engineering design. However, asphalt mixture exhibits linear viscoelasticity at small strain and low temperature. Therefore, the results derived from the elastic analysis will inevitably lead to discrepancies from reality. Currently, several FE programs have already adopted viscoelasticity, but the high hardware demands and long execution times render them suitable primarily for research purposes. The semianalytical finite element method (SAFEM) was proposed to solve the abovementioned problem. The SAFEM is a three-dimensional FE algorithm that only requires a two-dimensional mesh by incorporating the Fourier series in the third dimension, which can significantly reduce the computational time. This paper describes the development of SAFEM to capture the viscoelastic property of asphalt pavements by using a recursive formulation. The formulation is verified by comparison with the commercial FE software ABAQUS. An application example is presented for simulations of creep deformation of the asphalt pavement. The investigation shows that the SAFEM is an efficient tool for pavement engineers to quickly and reliably predict asphalt pavement responses; furthermore, the SAFEM provides a flexible, robust platform for future developments in the numerical simulation of asphalt pavements.
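    The recursive formulation refers to the kind of recursive time integration commonly used for linear viscoelastic (Prony-series) materials; the sketch below shows a generic one-term version of such a recursion under stated assumptions, not the specific SAFEM implementation, and all material parameters are made up.

    ```python
    # Generic recursive update for a one-term Prony-series viscoelastic law:
    # E(t) = E_inf + E1 * exp(-t / tau1).  Parameters are hypothetical.
    import math

    E_inf, E1, tau1 = 200.0, 800.0, 50.0   # moduli in MPa, relaxation time in s

    def step(h_prev, eps_new, eps_old, dt):
        """Advance the internal stress variable h over one time step dt."""
        a = math.exp(-dt / tau1)
        b = E1 * (1.0 - a) / (dt / tau1)   # semi-analytical hereditary-integral update
        h_new = a * h_prev + b * (eps_new - eps_old)
        sigma = E_inf * eps_new + h_new
        return h_new, sigma

    # Relaxation-type loading: strain ramps up over 10 s, then is held constant.
    h, eps_old, dt = 0.0, 0.0, 1.0
    for n in range(1, 201):
        eps_new = min(n * dt / 10.0, 1.0) * 1e-3
        h, sigma = step(h, eps_new, eps_old, dt)
        eps_old = eps_new
    print(f"stress after 200 s with the strain held constant: {sigma:.3f} MPa")
    ```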

  2. Influence of mesh non-orthogonality on numerical simulation of buoyant jet flows

    International Nuclear Information System (INIS)

    Ishigaki, Masahiro; Abe, Satoshi; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2017-01-01

    Highlights: • Influence of mesh non-orthogonality on numerical solution of buoyant jet flows. • Buoyant jet flows are simulated with hexahedral and prismatic meshes. • Jet instability with prismatic meshes may be overestimated compared to that with hexahedral meshes. • Modified solvers that can reduce the influence of mesh non-orthogonality and reduce computation time are proposed. - Abstract: In the present research, we discuss the influence of mesh non-orthogonality on numerical solution of a type of buoyant flow. Buoyant jet flows are simulated numerically with hexahedral and prismatic mesh elements in an open source Computational Fluid Dynamics (CFD) code called “OpenFOAM”. Buoyant jet instability obtained with the prismatic meshes may be overestimated compared to that obtained with the hexahedral meshes when non-orthogonal correction is not applied in the code. Although the non-orthogonal correction method can improve the instability generated by mesh non-orthogonality, it may increase computation time required to reach a convergent solution. Thus, we propose modified solvers that can reduce the influence of mesh non-orthogonality and reduce the computation time compared to the existing solvers in OpenFOAM. It is demonstrated that calculations for a buoyant jet with a large temperature difference are performed faster by the modified solver.

  3. Influence of mesh non-orthogonality on numerical simulation of buoyant jet flows

    Energy Technology Data Exchange (ETDEWEB)

    Ishigaki, Masahiro, E-mail: ishigaki.masahiro@jaea.go.jp; Abe, Satoshi; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2017-04-01

    Highlights: • Influence of mesh non-orthogonality on numerical solution of buoyant jet flows. • Buoyant jet flows are simulated with hexahedral and prismatic meshes. • Jet instability with prismatic meshes may be overestimated compared to that with hexahedral meshes. • Modified solvers that can reduce the influence of mesh non-orthogonality and reduce computation time are proposed. - Abstract: In the present research, we discuss the influence of mesh non-orthogonality on numerical solution of a type of buoyant flow. Buoyant jet flows are simulated numerically with hexahedral and prismatic mesh elements in an open source Computational Fluid Dynamics (CFD) code called “OpenFOAM”. Buoyant jet instability obtained with the prismatic meshes may be overestimated compared to that obtained with the hexahedral meshes when non-orthogonal correction is not applied in the code. Although the non-orthogonal correction method can improve the instability generated by mesh non-orthogonality, it may increase computation time required to reach a convergent solution. Thus, we propose modified solvers that can reduce the influence of mesh non-orthogonality and reduce the computation time compared to the existing solvers in OpenFOAM. It is demonstrated that calculations for a buoyant jet with a large temperature difference are performed faster by the modified solver.

  4. Mathematical foundation of the optimization-based fluid animation method

    DEFF Research Database (Denmark)

    Erleben, Kenny; Misztal, Marek Krzysztof; Bærentzen, Jakob Andreas

    2011-01-01

    We present the mathematical foundation of a fluid animation method for unstructured meshes. Key contributions not previously treated are the extension to include diffusion forces and higher order terms of non-linear force approximations. In our discretization we apply a fractional step method to ...

  5. User Manual for the PROTEUS Mesh Tools

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-06-01

    This report describes the various mesh tools that are provided with the PROTEUS code giving both descriptions of the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  6. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
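    As a concrete example of one of the separable linear methods discussed, a minimal bilinear interpolation routine is sketched below.

    ```python
    # Minimal separable bilinear interpolation on a 2-D array.
    import numpy as np

    def bilinear(img, x, y):
        """Sample `img` at real-valued coordinates (x, y), with (row, col)
        = (y, x) and clamping at the upper borders."""
        h, w = img.shape
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top    = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bottom

    img = np.arange(16, dtype=float).reshape(4, 4)
    print(bilinear(img, 1.5, 2.25))   # interpolated value between four neighbours
    ```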

  7. Anisotropic evaluation of synthetic surgical meshes.

    Science.gov (United States)

    Saberski, E R; Orenstein, S B; Novitsky, Y W

    2011-02-01

    The material properties of meshes used in hernia repair contribute to the overall mechanical behavior of the repair. The anisotropic potential of synthetic meshes, representing a difference in material properties (e.g., elasticity) in different material axes, is not well defined to date. Haphazard orientation of anisotropic mesh material can contribute to inconsistent surgical outcomes. We aimed to characterize and compare anisotropic properties of commonly used synthetic meshes. Six different polypropylene (Trelex(®), ProLite™, Ultrapro™), polyester (Parietex™), and PTFE-based (Dualmesh(®), Infinit) synthetic meshes were selected. Longitudinal and transverse axes were defined for each mesh, and samples were cut in each axis orientation. Samples underwent uniaxial tensile testing, from which the elastic modulus (E) in each axis was determined. The degree of anisotropy (λ) was calculated as a logarithmic expression of the ratio between the elastic modulus in each axis. Five of six meshes displayed significant anisotropic behavior. Ultrapro™ and Infinit exhibited approximately 12- and 20-fold differences between perpendicular axes, respectively. Trelex(®), ProLite™, and Parietex™ were 2.3-2.4 times. Dualmesh(®) was the least anisotropic mesh, without marked difference between the axes. Anisotropy of synthetic meshes has been underappreciated. In this study, we found striking differences between elastic properties of perpendicular axes for most commonly used synthetic meshes. Indiscriminate orientation of anisotropic mesh may adversely affect hernia repairs. Proper labeling of all implants by manufacturers should be mandatory. Understanding the specific anisotropic behavior of synthetic meshes should allow surgeons to employ rational implant orientation to maximize outcomes of hernia repair.
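    The degree of anisotropy is described only as a logarithmic expression of the modulus ratio; one plausible reading is sketched below, with the exact definition and the modulus values treated as assumptions rather than the paper's data.

    ```python
    # One plausible anisotropy measure: |log10| of the modulus ratio between
    # the two orthogonal mesh axes.  Definition and values are assumptions.
    import math

    def degree_of_anisotropy(E_longitudinal, E_transverse):
        """lambda = |log10(E_long / E_trans)|; 0 means isotropic behaviour."""
        return abs(math.log10(E_longitudinal / E_transverse))

    # Hypothetical moduli for a strongly anisotropic knitted mesh:
    print(degree_of_anisotropy(24.0, 2.0))   # ~1.08, roughly a 12-fold difference
    ```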

  8. Adaptive discontinuous Galerkin methods for non-linear reactive flows

    CERN Document Server

    Uzunca, Murat

    2016-01-01

    The focus of this monograph is the development of space-time adaptive methods to solve the convection/reaction-dominated non-stationary semi-linear advection diffusion reaction (ADR) equations with internal/boundary layers in an accurate and efficient way. After introducing the ADR equations and the discontinuous Galerkin discretization, robust residual-based a posteriori error estimators in space and time are derived. The elliptic reconstruction technique is then utilized to derive the a posteriori error bounds for the fully discrete system and to obtain optimal orders of convergence. As coupled surface and subsurface flow over large space and time scales is described by ADR equations, the methods described in this book are of high importance in many areas of the geosciences, including oil and gas recovery, groundwater contamination and sustainable use of groundwater resources, and storing greenhouse gases or radioactive waste in the subsurface.

  9. An immersed interface vortex particle-mesh solver

    Science.gov (United States)

    Marichal, Yves; Chatelain, Philippe; Winckelmans, Gregoire

    2014-11-01

    An immersed interface-enabled vortex particle-mesh (VPM) solver is presented for the simulation of 2-D incompressible viscous flows, in the framework of external aerodynamics. Considering the simulation of free vortical flows, such as wakes and jets, vortex particle-mesh methods already provide a valuable alternative to standard CFD methods, thanks to the interesting numerical properties arising from its Lagrangian nature. Yet, accounting for solid bodies remains challenging, despite the extensive research efforts that have been made for several decades. The present immersed interface approach aims at improving the consistency and the accuracy of one very common technique (based on Lighthill's model) for the enforcement of the no-slip condition at the wall in vortex methods. Targeting a sharp treatment of the wall calls for substantial modifications at all computational levels of the VPM solver. More specifically, the solution of the underlying Poisson equation, the computation of the diffusion term and the particle-mesh interpolation are adapted accordingly and the spatial accuracy is assessed. The immersed interface VPM solver is subsequently validated on the simulation of some challenging impulsively started flows, such as the flow past a cylinder and that past an airfoil. Research Fellow (PhD student) of the F.R.S.-FNRS of Belgium.

  10. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    Science.gov (United States)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken in account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitutes a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field; including the geometry of the tower, the nacelle and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  11. A software framework for the portable parallelization of particle-mesh simulations

    DEFF Research Database (Denmark)

    Sbalzarini, I.F.; Walther, Jens Honore; Polasek, B.

    2006-01-01

    Abstract: We present a software framework for the transparent and portable parallelization of simulations using particle-mesh methods. Particles are used to transport physical properties and a mesh is required in order to reinitialize the distorted particle locations, ensuring the convergence...

  12. To mesh or not to mesh: a review of pelvic organ reconstructive surgery

    Directory of Open Access Journals (Sweden)

    Dällenbach P

    2015-04-01

    Full Text Available Patrick Dällenbach Department of Gynecology and Obstetrics, Division of Gynecology, Urogynecology Unit, Geneva University Hospitals, Geneva, Switzerland Abstract: Pelvic organ prolapse (POP) is a major health issue with a lifetime risk of undergoing at least one surgical intervention estimated at close to 10%. In the 1990s, the risk of reoperation after primary standard vaginal procedure was estimated to be as high as 30% to 50%. In order to reduce the risk of relapse, gynecological surgeons started to use mesh implants in pelvic organ reconstructive surgery with the emergence of new complications. Recent studies have nevertheless shown that the risk of POP recurrence requiring reoperation is lower than previously estimated, being closer to 10% rather than 30%. The development of mesh surgery – actively promoted by the marketing industry – was tremendous during the past decade, and preceded any studies supporting its benefit for our patients. Randomized trials comparing the use of mesh to native tissue repair in POP surgery have now shown better anatomical but similar functional outcomes, and meshes are associated with more complications, in particular for transvaginal mesh implants. POP is not a life-threatening condition, but a functional problem that impairs quality of life for women. The old adage “primum non nocere” is particularly appropriate when dealing with this condition which requires no treatment when asymptomatic. It is currently admitted that a certain degree of POP is physiological with aging when situated above the landmark of the hymen. Treatment should be individualized and the use of mesh needs to be selective and appropriate. Mesh implants are probably an important tool in pelvic reconstructive surgery, but the ideal implant has yet to be found. The indications for its use still require caution and discernment. This review explores the reasons behind the introduction of mesh augmentation in POP surgery, and aims to

  13. Reconstructive laparoscopic prolapse surgery to avoid mesh erosions

    Directory of Open Access Journals (Sweden)

    Devassy, Rajesh

    2013-09-01

    Full Text Available Introduction: The objective of the study is to examine the efficacy of purely laparoscopic reconstructive management of cystocele and rectocele with mesh, in order to avoid the risk of erosion by the graft material, a well-known complication of vaginal mesh surgery. Material and methods: We performed a prospective, single-case, non-randomized study in 325 patients who received laparoscopic reconstructive management of pelvic organ prolapse with mesh. The study was conducted between January 2004 and December 2012 in a private clinic in India. The most common prolapse symptoms were a reducible vaginal lump, urinary stress incontinence, constipation and flatus incontinence, sexual dysfunction and dyspareunia. The degree of the prolapse was staged according to the POP-Q system. The approach was purely laparoscopic and involved the use of polypropylene (Prolene) or polyurethane with activated regenerated cellulose coating (Parietex) mesh. Results: The mean age was 55 (30–80) years and most of the patients were multiparous (272/325). The patients received a plastic correction of the rectocele only (138 cases) or of a cystocele and rectocele (187 cases) with mesh. 132 patients had a concomitant total hysterectomy; in 2 cases a laparoscopic supracervical hysterectomy was performed, and 190 patients had a laparoscopic colposuspension. The mean operation time was 82.2 (60–210) minutes. The mean follow-up was 3.4 (3–5) years. Urinary retention developed in 1 case, which required a new laparoscopic intervention. A bladder injury observed in the same case was closed in the same session with an absorbable suture. There were four recurrences of the rectocele, which were treated with a posterior vaginal colporrhaphy. Erosions of the mesh were not reported or documented. Conclusion: Purely laparoscopic reconstructive management of cystocele and rectocele with mesh appears to be a safe and effective surgical procedure that potentially avoids the risk of mesh erosion.

  14. 6th International Meshing Roundtable '97

    Energy Technology Data Exchange (ETDEWEB)

    White, D.

    1997-09-01

    The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.

  15. Model of Random Polygon Particles for Concrete and Mesh Automatic Subdivision

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to study the constitutive behavior of concrete at the mesoscopic level, a new method is proposed in this paper. This method uses random polygon particles to simulate the full-grading broken aggregates of concrete. Based on computational geometry, we carry out the automatic generation of the triangular finite element mesh for the model of random polygon particles of concrete. The finite element mesh generated in this paper is also applicable to many other numerical methods.
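
    As a rough sketch of the two ingredients named above (random polygon aggregates plus automatic triangulation), the code below places randomly perturbed convex polygons on a jittered grid and triangulates the collected points with a Delaunay routine. The paper's full-grading size distribution, overlap checks, and mesh-quality control are not reproduced; all names and parameters are illustrative.

```python
# Minimal sketch (not the paper's algorithm): random "aggregate" polygons placed on a
# jittered grid, then a triangular mesh built with a Delaunay routine.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

def random_polygon(center, r_mean, n_vertices=6):
    """Vertices on a randomly perturbed circle, sorted by angle -> convex-ish polygon."""
    angles = np.sort(rng.uniform(0.0, 2.0 * np.pi, n_vertices))
    radii = r_mean * rng.uniform(0.7, 1.0, n_vertices)
    return np.column_stack([center[0] + radii * np.cos(angles),
                            center[1] + radii * np.sin(angles)])

# One particle per cell of a coarse grid (a crude way to avoid overlap).
particles = []
for i in range(4):
    for j in range(4):
        center = np.array([i + 0.5, j + 0.5]) + rng.uniform(-0.1, 0.1, 2)
        particles.append(random_polygon(center, r_mean=0.3))

# Collect particle vertices plus specimen boundary points and triangulate everything.
boundary = np.array([[x, y] for x in np.linspace(0, 4, 9) for y in (0.0, 4.0)] +
                    [[x, y] for y in np.linspace(0, 4, 9) for x in (0.0, 4.0)])
points = np.unique(np.vstack(particles + [boundary]), axis=0)
tri = Delaunay(points)
print(f"{len(points)} nodes, {len(tri.simplices)} triangles")
```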

  16. Parameter Scaling in Non-Linear Microwave Tomography

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Talcoth, Oskar

    2012-01-01

    Non-linear microwave tomographic imaging of the breast is a challenging computational problem. The breast is heterogeneous and contains several high-contrast and lossy regions, resulting in large differences in the measured signal levels. This implies that special care must be taken when the imaging problem is formulated. Under such conditions, microwave imaging systems will most often be considerably more sensitive to changes in the electromagnetic properties in certain regions of the breast. The result is that the parameters might not be reconstructed correctly in the less sensitive regions. [...] introduced as a measure of the sensitivity. The scaling of the parameters is shown to improve performance of the microwave imaging system when applied to reconstruction of images from 2-D simulated data and measurement data.
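
    The abstract does not state which scaling is used; purely as an illustration of sensitivity-based parameter scaling in a non-linear reconstruction, the sketch below scales a Gauss-Newton update by the column norms of the Jacobian, a common way to equalize sensitivities across parameters. The function, damping, and test data are assumptions.

```python
# Illustrative only: one damped Gauss-Newton update with column-norm scaling of the
# Jacobian. The actual scaling used in the paper is not specified in the abstract.
import numpy as np

def scaled_gauss_newton_step(J, residual, damping=1e-3):
    """Solve the scaled normal equations for y with Js = J D^-1, D = diag of column
    norms of J, then map the step back to the original parameters x_step = D^-1 y."""
    col_norms = np.linalg.norm(J, axis=0)
    col_norms[col_norms == 0.0] = 1.0          # guard against dead parameters
    Js = J / col_norms                          # J D^-1
    A = Js.T @ Js + damping * np.eye(J.shape[1])
    y = np.linalg.solve(A, -Js.T @ residual)
    return y / col_norms                        # step in the original parameters

# Toy usage with a deliberately sensitivity-imbalanced Jacobian.
rng = np.random.default_rng(1)
J = rng.normal(size=(50, 8)) * np.array([1e3, 1, 1, 1e-3, 1, 1, 1, 1e-2])
r = rng.normal(size=50)
print(scaled_gauss_newton_step(J, r))
```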

  17. Management of complications of mesh surgery.

    Science.gov (United States)

    Lee, Dominic; Zimmern, Philippe E

    2015-07-01

    Transvaginal placements of synthetic mid-urethral slings and vaginal meshes have largely superseded traditional tissue repairs in the current era because of presumed efficacy and ease of implant with device 'kits'. The use of synthetic material has generated novel complications including mesh extrusion, pelvic and vaginal pain and mesh contraction. In this review, our aim is to discuss the management, surgical techniques and outcomes associated with mesh removal. Recent publications have seen an increase in presentation of these mesh-related complications, and reports from multiple tertiary centers have suggested that not all patients benefit from surgical intervention. Although the true incidence of mesh complications is unknown, recent publications can serve to guide physicians and inform patients of the surgical outcomes from mesh-related complications. In addition, the literature highlights the growing need for a registry to account for a more accurate reporting of these events and to counsel patients on the risk and benefits before proceeding with mesh surgeries.

  18. Seismic analysis of equipment system with non-linearities such as gap and friction using equivalent linearization method

    International Nuclear Information System (INIS)

    Murakami, H.; Hirai, T.; Nakata, M.; Kobori, T.; Mizukoshi, K.; Takenaka, Y.; Miyagawa, N.

    1989-01-01

    Many of the equipment systems of nuclear power plants contain a number of non-linearities, such as gap and friction, due to their mechanical functions. It is desirable to take such non-linearities into account appropriately for the evaluation of the aseismic soundness. However, in usual design work, a linear analysis method with rough assumptions is applied from an engineering point of view. An equivalent linearization method is considered to be one of the effective analytical techniques to evaluate non-linear responses, provided that errors to a certain extent are tolerated, because it offers greater simplicity in analysis and economy in computing time than non-linear analysis. The objective of this paper is to investigate the applicability of the equivalent linearization method to evaluate the maximum earthquake response of equipment systems such as the CANDU Fuelling Machine, which has multiple non-linearities.
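
    To make the technique concrete, the sketch below computes equivalent stiffness and damping coefficients for an assumed harmonic response by averaging a nonlinear gap-plus-friction force over one cycle (the standard harmonic-balance form of equivalent linearization). The force model and all parameter values are illustrative, not the CANDU fuelling machine model.

```python
# Equivalent linearization sketch: for an assumed harmonic response x(t) = A sin(w t),
# replace a nonlinear force f(x, xdot) by k_eq*x + c_eq*xdot with
#   k_eq = 1/(pi*A)   * integral of f(A sin s, A w cos s) sin s ds
#   c_eq = 1/(pi*A*w) * integral of f(A sin s, A w cos s) cos s ds   (s over one cycle).
import numpy as np

def nonlinear_force(x, xdot, gap=1.0e-3, k_contact=1.0e6, mu_n=50.0):
    """Illustrative gap + Coulomb-friction element (not the paper's model)."""
    f_gap = k_contact * np.where(np.abs(x) > gap, (np.abs(x) - gap) * np.sign(x), 0.0)
    f_friction = mu_n * np.sign(xdot)
    return f_gap + f_friction

def equivalent_linear_coeffs(A, w, n=4096):
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, xdot = A * np.sin(s), A * w * np.cos(s)
    f = nonlinear_force(x, xdot)
    k_eq = np.trapz(f * np.sin(s), s) / (np.pi * A)
    c_eq = np.trapz(f * np.cos(s), s) / (np.pi * A * w)
    return k_eq, c_eq

print(equivalent_linear_coeffs(A=2.0e-3, w=2.0 * np.pi * 5.0))
```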

  19. User Manual for the PROTEUS Mesh Tools

    International Nuclear Information System (INIS)

    Smith, Micheal A.; Shemon, Emily R.

    2015-01-01

    This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific to a given mesh tool (such as .axial or .merge) can be used as "mesh" input for any of the mesh tools discussed in this manual.

  20. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    International Nuclear Information System (INIS)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-01

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.
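
    As a rough illustration of the weighting idea described above, the sketch below builds a crude low-order level set from a cell-centred volume fraction and evaluates a weight that is large near the interface, of the kind a weighted relaxation could use to cluster cells there. The Gaussian weight form, its constants, and the test field are assumptions, not the paper's discontinuous Galerkin projection.

```python
# Sketch: turn a cell-centred volume fraction into a crude signed-distance-like level
# set, then define a weight that peaks on the zero level set so a weighted relaxation
# would cluster cells there. The weight form (Gaussian bump) is an assumption.
import numpy as np

def low_order_level_set(vof, dx):
    """phi ~ (vof - 0.5) scaled by the local interface thickness (one cell)."""
    return (vof - 0.5) * dx

def interface_weight(phi, alpha=9.0, width=2.5e-2):
    """w = 1 far from the interface, up to 1 + alpha right on it."""
    return 1.0 + alpha * np.exp(-(phi / width) ** 2)

# Toy field: a circular "bubble" represented by a volume fraction on a 64x64 grid.
n, dx = 64, 1.0 / 64
x = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X - 0.5, Y - 0.5)
vof = np.clip(0.5 + (0.25 - r) / dx, 0.0, 1.0)   # smeared Heaviside of radius 0.25

phi = low_order_level_set(vof, dx)
w = interface_weight(phi)
print("max weight (at interface):", w.max(), " min weight (far field):", w.min())
```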

  1. To mesh or not to mesh: a review of pelvic organ reconstructive surgery

    Science.gov (United States)

    Dällenbach, Patrick

    2015-01-01

    Pelvic organ prolapse (POP) is a major health issue with a lifetime risk of undergoing at least one surgical intervention estimated at close to 10%. In the 1990s, the risk of reoperation after primary standard vaginal procedure was estimated to be as high as 30% to 50%. In order to reduce the risk of relapse, gynecological surgeons started to use mesh implants in pelvic organ reconstructive surgery with the emergence of new complications. Recent studies have nevertheless shown that the risk of POP recurrence requiring reoperation is lower than previously estimated, being closer to 10% rather than 30%. The development of mesh surgery – actively promoted by the marketing industry – was tremendous during the past decade, and preceded any studies supporting its benefit for our patients. Randomized trials comparing the use of mesh to native tissue repair in POP surgery have now shown better anatomical but similar functional outcomes, and meshes are associated with more complications, in particular for transvaginal mesh implants. POP is not a life-threatening condition, but a functional problem that impairs quality of life for women. The old adage “primum non nocere” is particularly appropriate when dealing with this condition which requires no treatment when asymptomatic. It is currently admitted that a certain degree of POP is physiological with aging when situated above the landmark of the hymen. Treatment should be individualized and the use of mesh needs to be selective and appropriate. Mesh implants are probably an important tool in pelvic reconstructive surgery, but the ideal implant has yet to be found. The indications for its use still require caution and discernment. This review explores the reasons behind the introduction of mesh augmentation in POP surgery, and aims to clarify the risks, benefits, and the recognized indications for its use. PMID:25848324

  2. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
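
    As a concrete example of the simplest separable method mentioned above, the sketch below implements bilinear interpolation of a 2-D image at non-integer coordinates (nearest-neighbour would simply round instead of blending). The function name and test image are illustrative.

```python
# Minimal bilinear interpolation of a 2-D image at arbitrary (non-integer) positions.
import numpy as np

def bilinear(img, x, y):
    """Sample img (H x W) at real-valued coordinates (x = column, y = row)."""
    h, w = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    tx, ty = x - x0, y - y0
    return ((1 - ty) * (1 - tx) * img[y0, x0] + (1 - ty) * tx * img[y0, x0 + 1]
            + ty * (1 - tx) * img[y0 + 1, x0] + ty * tx * img[y0 + 1, x0 + 1])

img = np.arange(16.0).reshape(4, 4)
print(bilinear(img, np.array([1.5]), np.array([2.25])))   # blend of four neighbours
```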

  3. Local multigrid mesh refinement in view of nuclear fuel 3D modelling in pressurised water reactors

    International Nuclear Information System (INIS)

    Barbie, L.

    2013-01-01

    The aim of this study is to improve the performance, in terms of memory space and computational time, of the current modelling of the Pellet-Cladding mechanical Interaction (PCI), a complex phenomenon which may occur during high power rises in pressurised water reactors. Among mesh refinement methods - methods dedicated to the efficient treatment of local singularities - a local multi-grid approach was selected because it enables the use of a black-box solver while dealing with few degrees of freedom at each level. The Local Defect Correction (LDC) method, well suited to a finite element discretization, was first analysed and checked in linear elasticity, on configurations resulting from the PCI, since its use in solid mechanics is not widespread. Various strategies concerning the implementation of the multilevel algorithm were also compared. Coupling the LDC method with the Zienkiewicz-Zhu a posteriori error estimator, in order to automatically detect the zones to be refined, was then tested. The performance obtained on two-dimensional and three-dimensional cases is very satisfactory, since the proposed algorithm is more efficient than h-adaptive refinement methods. Lastly, the LDC algorithm was extended to nonlinear mechanics. Space/time refinement as well as transmission of the initial conditions during the re-meshing step were also examined. The first results obtained are encouraging and show the interest of using the LDC method for PCI modelling. (author) [fr]
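
    To make the Local Defect Correction idea concrete, the sketch below runs a schematic two-level LDC cycle on a 1-D Poisson problem: a black-box coarse solve on the whole domain, a refined solve on a local patch with Dirichlet data taken from the coarse solution, and a defect correction of the coarse right-hand side on the patch nodes. The problem, grids, and solver are illustrative, not the thesis's PCI model.

```python
# Schematic two-level Local Defect Correction (LDC) cycle for -u'' = f on (0, 1) with
# u(0) = u(1) = 0. Problem, grids and solver are illustrative only.
import numpy as np

def poisson_solve(rhs_values, h, left, right):
    """Black-box solver: 3-point Laplacian with Dirichlet boundary data."""
    n = len(rhs_values)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = rhs_values.copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

f = lambda x: 100.0 * np.exp(-200.0 * (x - 0.3) ** 2)      # sharply localized source
H, nH = 1.0 / 32, 31
xH = np.linspace(H, 1.0 - H, nH)                           # coarse interior nodes
uH = poisson_solve(f(xH), H, 0.0, 0.0)                     # initial coarse solve

patch = (xH > 0.15) & (xH < 0.45)                          # coarse nodes to refine
iL, iR = np.where(patch)[0][0] - 1, np.where(patch)[0][-1] + 1
for cycle in range(3):
    # Fine grid (spacing H/2) covering the patch, boundary data from the coarse solve.
    xh = np.arange(xH[iL] + H / 2, xH[iR], H / 2)
    uh = poisson_solve(f(xh), H / 2, uH[iL], uH[iR])
    # Defect correction: on patch nodes, replace the coarse right-hand side by the
    # coarse operator applied to the restricted (more accurate) fine solution.
    u_comp = uH.copy()
    u_comp[patch] = np.interp(xH[patch], xh, uh)
    rhsH = f(xH)
    lap = (-u_comp[:-2] + 2.0 * u_comp[1:-1] - u_comp[2:]) / H**2
    rhsH[1:-1] = np.where(patch[1:-1], lap, rhsH[1:-1])
    uH = poisson_solve(rhsH, H, 0.0, 0.0)                  # corrected coarse solve

print("coarse solution peak after 3 LDC cycles:", uH.max())
```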

  4. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    Energy Technology Data Exchange (ETDEWEB)

    Turner, J.A.; Kothe, D.B. [Los Alamos National Lab., NM (United States); Ferrell, R.C. [Cambridge Power Computing Associates, Ltd., Brookline, MA (United States)

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by the needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners has been implemented, driven primarily by application needs. They describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility, the parallelization approach using a new portable gather/scatter library (PGSLib), and current capabilities and future plans, and they present preliminary performance results on a variety of platforms.
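
    JTpack90 itself is a Fortran 90 library and its interfaces are not given in the abstract; purely as a generic illustration of the kind of preconditioned sparse Krylov kernel such a package provides, here is a Jacobi-preconditioned conjugate gradient sketch. The function names and test problem are mine, not the package's API.

```python
# Generic Jacobi-preconditioned conjugate gradient on a sparse matrix -- an
# illustration of a preconditioned Krylov kernel, not JTpack90's actual interface.
import numpy as np
import scipy.sparse as sp

def pcg(A, b, tol=1e-10, maxiter=500):
    x = np.zeros_like(b)
    Minv = 1.0 / A.diagonal()            # Jacobi (diagonal) preconditioner
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# 1-D Laplacian test problem in compressed sparse row storage.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, iters = pcg(A, b)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```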

  5. Properties of meshes used in hernia repair: a comprehensive review of synthetic and biologic meshes.

    Science.gov (United States)

    Ibrahim, Ahmed M S; Vargas, Christina R; Colakoglu, Salih; Nguyen, John T; Lin, Samuel J; Lee, Bernard T

    2015-02-01

    Data on the mechanical properties of the adult human abdominal wall have been difficult to obtain, rendering manufacture of the ideal mesh for ventral hernia repair a challenge. An ideal mesh would need to exhibit greater biomechanical strength and elasticity than that of the abdominal wall. The aim of this study is to quantitatively compare the biomechanical properties of the most commonly used synthetic and biologic meshes in ventral hernia repair and to present a comprehensive literature review. A narrative review of the literature was performed using the PubMed database spanning articles from 1982 to 2012, including a review of company Web sites, to identify all available information relating to the biomechanical properties of various synthetic and biologic meshes used in ventral hernia repair. There exist differences in the mechanical properties and the chemical nature of different meshes. In general, most synthetic materials have greater stiffness and elasticity than what is required for abdominal wall reconstruction; however, each exhibits unique properties that may be beneficial for clinical use. In contrast, biologic meshes are more elastic but less stiff, with a lower tensile strength than their synthetic counterparts. The current standard of practice for the treatment of ventral hernias is the use of permanent synthetic mesh material. Recently, biologic meshes have become more frequently used. Most meshes exhibit biomechanical properties over the known abdominal wall thresholds. Augmenting strength requires increasing amounts of material, contributing to more stiffness and foreign body reaction, which is not necessarily an advantage.

  6. Numerical simulation of deformation of dynamic mesh in the human vocal tract model

    Directory of Open Access Journals (Sweden)

    Řidký Václav

    2015-01-01

    Full Text Available Numerical simulation of the acoustic signal generation in the human vocal tract is a very complex problem. The computational mesh is not static; it is deformed due to the vibration of the vocal folds. The movement of the vocal folds is in this case prescribed as a function of translation and rotation. A new boundary condition for the 2DOF motion of the vocal folds was implemented in OpenFOAM, an open-source software package based on the finite volume method. The work is focused on the dynamic mesh and the deformation of structured meshes in the computational package OpenFOAM. These methods are compared with a focus on the quality of the mesh (non-orthogonality, aspect ratio and skewness).
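
    The abstract states only that the vocal-fold motion is prescribed as a function of translation and rotation; the sketch below shows one assumed sinusoidal form of such a 2-DOF prescription applied to boundary points about a hinge. The frequency, amplitudes, and hinge location are illustrative, not the paper's values.

```python
# Assumed-form sketch of a prescribed 2-DOF vocal-fold motion: each boundary point is
# moved by a sinusoidal translation plus a sinusoidal rotation about a hinge point.
import numpy as np

F0 = 100.0                      # fundamental frequency [Hz] (assumed)
A_TRANS = 0.3e-3                # translation amplitude [m] (assumed)
A_ROT = np.deg2rad(2.0)         # rotation amplitude [rad] (assumed)
HINGE = np.array([0.0, 0.0])    # hinge point of the fold [m] (assumed)

def fold_displacement(points, t):
    """Displacement of boundary points (N x 2) at time t: translation + rotation."""
    u_trans = np.array([0.0, A_TRANS * np.sin(2.0 * np.pi * F0 * t)])
    theta = A_ROT * np.sin(2.0 * np.pi * F0 * t + np.pi / 2.0)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    rotated = (points - HINGE) @ R.T + HINGE
    return (rotated - points) + u_trans

pts = np.array([[0.002, 0.000], [0.002, 0.001], [0.003, 0.001]])   # sample nodes
print(fold_displacement(pts, t=1.0e-3))
```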

  7. Linear programming using Matlab

    CERN Document Server

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book  are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus.  The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...

  8. Short-term outcomes of the transvaginal minimal mesh procedure for pelvic organ prolapse

    OpenAIRE

    Naoko Takazawa; Akiko Fujisaki; Yasukuni Yoshimura; Akira Tsujimura; Shigeo Horie

    2018-01-01

    Purpose: This study aimed to evaluate the clinical outcomes and complications of transvaginal minimal mesh repair without using commercially available kits for treatment of pelvic organ prolapse (POP). Materials and Methods: This retrospective cohort study involved 91 women who underwent surgical management of POP with originally designed small mesh between July 2014 and August 2015. This mesh is 56% smaller than the mesh widely used in Japan, and it has only two arms delivered into each righ...

  9. On the interaction of small-scale linear waves with nonlinear solitary waves

    Science.gov (United States)

    Xu, Chengzhu; Stastna, Marek

    2017-04-01

    In the study of environmental and geophysical fluid flows, linear wave theory is well developed and its application has been considered for phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large-amplitude waves tend to be solitary-like. In some cases, when the wave is highly nonlinear, even weakly nonlinear theories fail to predict the wave properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudo-spectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle, the longest waves may retain as much as 90% of the kinetic energy they had initially, while the shortest waves lose almost all of their initial energy. We also found that a head-on collision is more efficient in destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide some insight into wave-mean flow interaction.
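
    The study itself solves the full stratified equations with a high-accuracy pseudo-spectral code; as a far simpler stand-in that still puts a solitary wave and a small-amplitude wave train in one periodic domain, the sketch below integrates the KdV equation pseudo-spectrally. The equation, domain, and all parameter values are illustrative assumptions, not those of the study.

```python
# Pseudo-spectral integration of the KdV equation u_t + 6*u*u_x + u_xxx = 0,
# initialized with one solitary wave plus a small-amplitude wave train. No dealiasing
# is applied, which is adequate for this short, smooth run.
import numpy as np

L, N = 40.0, 256
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)          # spectral wavenumbers

def rhs(u):
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))
    uxxx = np.real(np.fft.ifft((1j * k) ** 3 * u_hat))
    return -6.0 * u * ux - uxxx

c, x0 = 4.0, 10.0                                     # solitary wave speed and position
soliton = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2
wave_train = 0.01 * np.sin(2.0 * np.pi * 16.0 * x / L) * np.exp(-((x - 20.0) / 5.0) ** 2)
u = soliton + wave_train

dt, steps = 2.0e-4, 10000                             # small dt: explicit RK4 on u_xxx
for _ in range(steps):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

print("solitary wave peak after the interaction:", u.max())
```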

  10. A split operator method for transient problems

    International Nuclear Information System (INIS)

    Belytschko, T.B.

    1983-01-01

    Numerous techniques have been developed for improving the computational efficiency of transient analysis: mesh partitioning, subcycling procedures and operator splitting methods. In mesh partitioning methods, the model is divided into subdomains which are integrated by different time integrators, typically implicit and explicit. Any stiff portions of the model are integrated by the implicit operator so that the size of the time step can be increased. In subcycling procedures, the stiff portions are integrated by smaller time steps, yielding similar benefits. However, in models for which the governing partial differential equations are basically of a parabolic character, explicit methods can become quite expensive for refined models because the size of the stable time step decreases with the square of the minimum element dimension. Thus explicit methods, whether employed alone or with partitioning or subcycling, have inherent limitations in these problems. A new procedure is here described for the element-by-element semi-implicit method of Hughes and coworkers, which requires the solution of only small systems of equations. This procedure is described for a family of uniform gradient or strain elements which are widely used in nonlinear transient analysis. The diffusion equation and the equations of motion for both shells and continua have been treated, but only the former is considered herein. Results are presented for several examples which show the potential of this method for improving the efficiency of large-scale linear and nonlinear computations. (orig./RW)
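
    The split-operator idea for the diffusion equation can be illustrated with a classical alternating-direction-implicit (ADI) step, in which each time step is split into an implicit sweep in x and an implicit sweep in y so that only tridiagonal systems are solved. This is a generic textbook illustration of operator splitting, not the element-by-element semi-implicit scheme of Hughes and coworkers discussed in the paper.

```python
# Split-operator (Peaceman-Rachford ADI) step for u_t = D*(u_xx + u_yy) on a square
# grid with zero Dirichlet boundaries: each half-step is implicit in one direction
# only, so it reduces to tridiagonal solves. Generic illustration only.
import numpy as np
from scipy.linalg import solve_banded

n, D, dt = 64, 1.0, 1.0e-3
h = 1.0 / (n + 1)
r = D * dt / (2.0 * h * h)

# Banded (tridiagonal) matrix for the implicit 1-D operator (I - r*Lap_1d).
ab_impl = np.zeros((3, n))
ab_impl[0, 1:] = -r
ab_impl[1, :] = 1.0 + 2.0 * r
ab_impl[2, :-1] = -r

def explicit_1d(u, axis):
    """(I + r*Lap_1d) applied along one axis, zero Dirichlet boundaries."""
    lap = -2.0 * u
    lap += np.roll(u, 1, axis=axis)
    lap += np.roll(u, -1, axis=axis)
    if axis == 0:   # undo the wrap-around contributions of np.roll at the boundaries
        lap[0, :] = -2.0 * u[0, :] + u[1, :]
        lap[-1, :] = -2.0 * u[-1, :] + u[-2, :]
    else:
        lap[:, 0] = -2.0 * u[:, 0] + u[:, 1]
        lap[:, -1] = -2.0 * u[:, -1] + u[:, -2]
    return u + r * lap

def adi_step(u):
    # Half-step 1: explicit in y, implicit in x (tridiagonal solves along axis 0).
    u_star = solve_banded((1, 1), ab_impl, explicit_1d(u, axis=1))
    # Half-step 2: explicit in x, implicit in y (tridiagonal solves along axis 1).
    return solve_banded((1, 1), ab_impl, explicit_1d(u_star, axis=0).T).T

# Gaussian initial condition, advance a few steps.
xg = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(xg, xg, indexing="ij")
u = np.exp(-100.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))
for _ in range(100):
    u = adi_step(u)
print("peak after diffusion:", u.max())
```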

  11. The numerical simulation study of hemodynamics of the new dense-mesh stent

    Science.gov (United States)

    Ma, Jiali; Yuan, Zhishan; Yu, Xuebao; Feng, Zhaowei; Miao, Weidong; Xu, Xueli; Li, Juntao

    2017-09-01

    The treatment of aortic aneurysm with the new dense-mesh stent is based on the principle of hemodynamic changes, but the mechanism is not yet very clear. This paper analyzes and calculates the hemodynamics before and after implantation of the new dense-mesh stent by means of numerical simulation. The results show that the dense-mesh stent changes the blood flow in the aortic aneurysm: blood velocity, pressure, and shear forces decrease significantly, while the blood supply to the branch vessels is maintained. This clarifies the hemodynamic mechanism of the new dense-mesh stent in the treatment of aortic aneurysm and is of significance for the development of new dense-mesh stents for this purpose.

  12. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Science.gov (United States)

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
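
    To illustrate the comparison on synthetic data, the sketch below fits the type 1 linear form t/q_t = 1/(k*qe^2) + t/qe by ordinary least squares and fits the model q_t = qe^2*k*t/(1 + qe*k*t) directly by non-linear least squares. The synthetic parameter values and noise level are assumptions.

```python
# Type 1 linearization versus a direct non-linear fit of the pseudo-second-order
# sorption model on noisy synthetic data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def pso(t, qe, k):
    return qe**2 * k * t / (1.0 + qe * k * t)

rng = np.random.default_rng(42)
qe_true, k_true = 10.0, 0.02                 # mg/g and g/(mg*min), illustrative
t = np.linspace(5.0, 300.0, 25)
qt = pso(t, qe_true, k_true) * (1.0 + 0.03 * rng.normal(size=t.size))

# Type 1 linear method: regress t/qt on t -> slope = 1/qe, intercept = 1/(k*qe^2).
res = linregress(t, t / qt)
qe_lin = 1.0 / res.slope
k_lin = 1.0 / (res.intercept * qe_lin**2)

# Non-linear method: fit the model directly.
(qe_nl, k_nl), _ = curve_fit(pso, t, qt, p0=[qt.max(), 0.01])

print(f"linear:     qe = {qe_lin:.2f}, k = {k_lin:.4f}, r^2 = {res.rvalue**2:.4f}")
print(f"non-linear: qe = {qe_nl:.2f}, k = {k_nl:.4f}")
```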

  13. Polyhedral meshing in numerical analysis of conjugate heat transfer

    Science.gov (United States)

    Sosnowski, Marcin; Krzywanski, Jaroslaw; Grabowska, Karolina; Gnatowska, Renata

    2018-06-01

    Computational methods have been widely applied in conjugate heat transfer analysis. The very first and crucial step in such research is the meshing process, which consists in dividing the analysed geometry into numerous small control volumes (cells). In Computational Fluid Dynamics (CFD) applications it is desirable to use hexahedral cells, as the resulting mesh is characterized by low numerical diffusion. Unfortunately, generating such a mesh can be a very time-consuming task and, in the case of complicated geometry, it may not be possible to generate cells of good quality. Therefore tetrahedral cells have been implemented into commercial pre-processors. Their advantage is the ease of their generation even in the case of very complex geometry. On the other hand, tetrahedrons cannot be stretched excessively without decreasing the mesh quality factor, so a significantly larger number of cells has to be used in comparison to a hexahedral mesh in order to achieve reasonable accuracy. Moreover, the numerical diffusion of tetrahedral elements is significantly higher. Therefore polyhedral cells are proposed within the paper in order to combine the advantages of hexahedrons (low numerical diffusion resulting in an accurate solution) and tetrahedrons (rapid semi-automatic generation) as well as to overcome the disadvantages of both the above-mentioned mesh types. The major benefit of a polyhedral mesh is that each individual cell has many neighbours, so gradients can be well approximated. Polyhedrons are also less sensitive to stretching than tetrahedrons, which results in better mesh quality leading to improved numerical stability of the model. In addition, numerical diffusion is reduced due to mass exchange over numerous faces. This leads to a more accurate solution achieved with a lower cell count. Therefore a detailed comparison of numerical modelling results concerning conjugate heat transfer using tetrahedral and polyhedral meshes is presented in the paper.

  14. Behavior of thin rectangular ANCF shell elements in various mesh configurations

    DEFF Research Database (Denmark)

    Hyldahl, Per; Mikkola, Aki M.; Balling, Ole

    2014-01-01

    Following a thorough review of three available formulations, they are used in three different convergence studies. Initially, a reference study is conducted to determine how the ANCF performs in a uniform and rectangular mesh. Subsequently, the ANCF method's sensitivity to irregular meshes is investigated and finally...

  15. Sparsity Prevention Pivoting Method for Linear Programming

    DEFF Research Database (Denmark)

    Li, Peiqiang; Li, Qiyuan; Li, Canbing

    2018-01-01

    When the simplex algorithm is used to solve a linear programming problem, a sparse constraint matrix can lead to many zero-length calculation steps, and even to iterative cycling. To deal with this problem, a new pivoting method is proposed in this paper. The principle of the method is to avoid choosing a row whose element in the b vector is zero as the row of the pivot element, which increases the density of the matrix in the linear program and ensures that most subsequent steps improve the value of the objective function. One step following this principle is inserted into the existing linear programming algorithm to reselect the pivot element. Both the conditions for inserting this step and the maximum number of allowed insertion steps are determined. In the case study, taking several linear programming problems as examples, the results...
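
    To make the stated principle concrete, the sketch below shows one way a pivot-row reselection of this flavour could look inside a tableau ratio test: if the minimum-ratio row has a zero right-hand side (a degenerate, zero-length step), a non-degenerate row is reselected when one exists. The admissibility conditions and insertion limits derived in the paper are not reproduced; the tableau layout and helper function are illustrative.

```python
# Ratio test with optional reselection of a non-degenerate leaving row. In a real
# simplex code this reselection is only admissible under the conditions derived in
# the paper (not reproduced here).
import numpy as np

def choose_leaving_row(tableau, pivot_col, allow_reselect=True, tol=1e-12):
    col = tableau[:-1, pivot_col]              # constraint rows (last row = objective)
    b = tableau[:-1, -1]
    eligible = col > tol                       # rows that bound the entering variable
    if not np.any(eligible):
        return None                            # unbounded direction
    ratios = np.full(b.shape, np.inf)
    ratios[eligible] = b[eligible] / col[eligible]
    row = int(np.argmin(ratios))               # ordinary ratio-test choice
    if allow_reselect and b[row] <= tol:
        nondeg = eligible & (b > tol)          # candidate rows giving a non-zero step
        if np.any(nondeg):
            row = int(np.argmin(np.where(nondeg, ratios, np.inf)))
    return row

tableau = np.array([[ 2.0, 1.0, 0.0],          # constraint row with b = 0 (degenerate)
                    [ 1.0, 1.0, 3.0],          # constraint row with b = 3
                    [-1.0, -2.0, 0.0]])        # objective row
print("ordinary ratio test:", choose_leaving_row(tableau, 0, allow_reselect=False))
print("with reselection   :", choose_leaving_row(tableau, 0, allow_reselect=True))
```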

  17. An efficient formulation for linear and geometric non-linear membrane elements

    Directory of Open Access Journals (Sweden)

    Mohammad Rezaiee-Pajand

    Full Text Available Utilizing the strain gradient notation process and the free formulation, an efficient way of constructing membrane elements is proposed. This strategy can be utilized for linear and geometric non-linear problems. In the suggested formulation, the optimization constraints of insensitivity to distortion, rotational invariance and freedom from parasitic shear error are employed. In addition, the equilibrium equations are established based on some constraints among the strain states. The authors' technique can easily separate the rigid body motions from the deformational motions. In this article, a novel triangular element, named SST10, is formulated. This element is used in several plane problems with irregular meshes and complicated geometry, exhibiting linear and geometrically nonlinear behavior. The numerical outcomes clearly demonstrate the efficiency of the new formulation.

  18. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Liu, Yang; Yang, Zhouwang

    2012-01-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
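
    As a rough illustration of the alternation the abstract describes (quadric fitting interleaved with partition updates), the sketch below fits general quadrics to point clusters by algebraic least squares and reassigns points to the best-fitting quadric. The paper's L2/L2,1 triangle-to-quadric energy, mesh connectivity handling, and feature/simplification terms are not reproduced; all data, seeding, and names are illustrative.

```python
# Lloyd-style alternation: fit a general quadric to each cluster of sample points
# (smallest singular vector of the monomial matrix), then reassign every sample to
# the quadric with the smallest algebraic error.
import numpy as np

def monomials(p):
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x, x, y, z, np.ones(len(p))])

def fit_quadric(points):
    """Algebraic least-squares fit of a general quadric (unit-norm coefficients)."""
    _, _, vt = np.linalg.svd(monomials(points), full_matrices=False)
    return vt[-1]

def algebraic_error(points, coeffs):
    return (monomials(points) @ coeffs) ** 2

# Toy data: noisy samples from a unit sphere and from the plane z = 2.
rng = np.random.default_rng(3)
sphere = rng.normal(size=(200, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
plane = np.column_stack([rng.uniform(-1.0, 1.0, (200, 2)), np.full(200, 2.0)])
pts = np.vstack([sphere, plane]) + 0.01 * rng.normal(size=(400, 3))

labels = np.full(len(pts), -1)
labels[:20], labels[-20:] = 0, 1                  # seed each cluster with a few samples
for _ in range(10):
    quadrics = [fit_quadric(pts[labels == k]) for k in range(2)]
    errors = np.column_stack([algebraic_error(pts, q) for q in quadrics])
    labels = errors.argmin(axis=1)                # reassignment (Lloyd-style step)

print("cluster sizes after 10 iterations:", np.bincount(labels))
```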

  20. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.