WorldWideScience

Sample records for volume mesh sampling

  1. Adaptive sampling for mesh spectrum editing

    Institute of Scientific and Technical Information of China (English)

    ZHAO Xiang-jun; ZHANG Hong-xin; BAO Hu-jun

    2006-01-01

    A mesh editing framework is presented in this paper, which integrates Free-Form Deformation (FFD) and geometry signal processing. By using a simplified model derived from the original mesh, the editing task can be accomplished with a few operations. We take the deformation of the proxy and the position coordinates of the mesh models as geometry signals. Wavelet analysis is employed to separate local detail information gracefully. The crucial innovation of this paper is a new adaptive regular sampling approach for our signal-analysis-based editing framework. In our approach, an original mesh is resampled and then refined iteratively, which reflects optimization of our proposed spectrum-preserving energy. As an extension of our spectrum editing scheme, the editing principle is applied to geometry detail transfer, which yields satisfying results.
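
    The analysis step in this abstract can be illustrated with a toy one-level Haar wavelet split (a stand-in for whatever wavelet basis the authors actually use; all function names here are illustrative): the coarse approximation is what an editor would deform, and the detail coefficients re-apply the local geometry on synthesis.

```python
def haar_analysis(signal):
    """One Haar level: split a 1-D geometry signal (e.g. one coordinate
    channel of the mesh vertices) into approximation and detail parts."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2.0 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2.0 for i in range(half)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert the split; editing `approx` deforms the broad shape while
    `detail` restores the local geometric detail."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

signal = [0.0, 0.1, 1.0, 1.1, 0.0, -0.1, -1.0, -0.9]
approx, detail = haar_analysis(signal)
rebuilt = haar_synthesis(approx, detail)   # matches `signal` up to rounding
```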

  2. Unbiased sampling and meshing of isosurfaces

    KAUST Repository

    Yan, Dongming

    2014-11-01

    In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x, y, z) = c, of a function F is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is to treat the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and to integrate and sample from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
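
    A minimal sketch of the height-function idea, under simplifying assumptions: the surface is given as an explicit graph z = g(x, y) over the unit square with a known slope bound, and unbiasedness comes from weighting acceptance by the area element sqrt(1 + |grad g|^2). The rejection scheme and function names are illustrative, not the paper's implementation.

```python
import math
import random

def sample_graph(g, grad_g, slope_max, n, seed=0):
    """Rejection-sample n points uniformly by surface area on the graph
    z = g(x, y) over the unit square, assuming |grad g| <= slope_max.
    Accepting with probability proportional to the area element
    sqrt(1 + |grad g|^2) removes the bias of an (x, y)-uniform sample."""
    rng = random.Random(seed)
    bound = math.sqrt(1.0 + slope_max * slope_max)  # max of the area element
    pts = []
    while len(pts) < n:
        x, y = rng.random(), rng.random()
        gx, gy = grad_g(x, y)
        w = math.sqrt(1.0 + gx * gx + gy * gy)
        if rng.random() * bound <= w:               # accept with prob w / bound
            pts.append((x, y, g(x, y)))
    return pts

# Tilted plane z = x / 2: the area element is constant, so every candidate
# is accepted with the same probability and the samples are uniform.
pts = sample_graph(lambda x, y: 0.5 * x, lambda x, y: (0.5, 0.0), 0.5, 200)
```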

  3. CUBIT mesh generation environment. Volume 1: Users manual

    Energy Technology Data Exchange (ETDEWEB)

    Blacker, T.D.; Bohnhoff, W.J.; Edwards, T.L. [and others]

    1994-05-01

    The CUBIT mesh generation environment is a two- and three-dimensional finite element mesh generation tool which is being developed to pursue the goal of robust and unattended mesh generation, effectively automating the generation of quadrilateral and hexahedral elements. It is a solid-modeler-based preprocessor that meshes volume and surface solid models for finite element analysis. A combination of techniques, including paving, mapping, sweeping, and various other algorithms under development, is available for discretizing the geometry into a finite element mesh. CUBIT also features boundary layer meshing specifically designed for fluid flow problems. Boundary conditions can be applied to the mesh through the geometry, and the appropriate files for analysis can be generated. CUBIT is specifically designed to reduce the time required to create all-quadrilateral and all-hexahedral meshes. This manual is designed to serve as a reference and guide to creating finite element models in the CUBIT environment.

  4. Streaming Compression of Tetrahedral Volume Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Lindstrom, P; Gumhold, S; Shewchuk, J

    2005-11-21

    Geometry processing algorithms have traditionally assumed that the input data is entirely in main memory and available for random access. This assumption does not scale to large data sets, as exhausting the physical memory typically leads to IO-inefficient thrashing. Recent works advocate processing geometry in a 'streaming' manner, where computation and output begin as soon as possible. Streaming is suitable for tasks that require only local neighbor information and batch process an entire data set. We describe a streaming compression scheme for tetrahedral volume meshes that encodes vertices and tetrahedra in the order they are written. To keep the memory footprint low, the compressor is informed when vertices are referenced for the last time (i.e. are finalized). The compression achieved depends on how coherent the input order is and how many tetrahedra are buffered for local reordering. For reasonably coherent orderings and a buffer of 10,000 tetrahedra, we achieve compression rates that are only 25 to 40 percent above the state-of-the-art, while requiring drastically less memory resources and less than half the processing time.
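
    The finalization idea can be sketched in a few lines (a toy model, not the actual compressor): stream tetrahedra in written order, keep a vertex resident only until its last reference, and observe that peak residency, not total mesh size, bounds the memory footprint.

```python
def stream_tets(vertices, tets):
    """Stream tetrahedra in written order; a vertex stays resident only
    until its last reference (its finalization), which is what bounds the
    compressor's memory footprint.  Returns the streamed tets and the
    peak number of resident vertices."""
    last_use = {}
    for i, tet in enumerate(tets):
        for v in tet:
            last_use[v] = i                 # overwritten until the final use
    resident, peak, out = {}, 0, []
    for i, tet in enumerate(tets):
        for v in tet:
            if v not in resident:           # vertex introduced at first use
                resident[v] = vertices[v]
        peak = max(peak, len(resident))
        out.append(tuple(tet))
        for v in tet:
            if last_use[v] == i:            # finalized: safe to evict
                del resident[v]
    return out, peak

verts = {v: (float(v), 0.0, 0.0) for v in range(6)}
stream, peak = stream_tets(verts, [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)])
# coherent ordering: only 4 of the 6 vertices are ever resident at once
```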

  5. Optimization-based mesh correction with volume and convexity constraints

    Science.gov (United States)

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail

    2016-05-01

    We consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
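
    In one spatial dimension the volume constraints x[i+1] - x[i] = v[i] are linear, so the closest-mesh problem collapses to a single equality-constrained quadratic program solved through its KKT system. The sketch below (hypothetical helper name; none of the paper's convexity constraints, mesh-validity constraints, or multigrid preconditioning) shows the correction at work.

```python
import numpy as np

def correct_volumes(x0, v):
    """Closest 1-D mesh to source nodes x0 whose cell volumes
    x[i+1] - x[i] equal the prescribed v: minimise ||x - x0||^2 subject
    to A x = v, and solve the (here linear) KKT system directly."""
    n, m = len(x0), len(v)
    A = np.zeros((m, n))
    for i in range(m):
        A[i, i], A[i, i + 1] = -1.0, 1.0    # volume of cell i
    # KKT system: [I  A^T][x     ] = [x0]
    #             [A   0 ][lambda]   [v ]
    K = np.block([[np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([x0, v])
    return np.linalg.solve(K, rhs)[:n]

x0 = np.array([0.0, 0.9, 2.1, 3.0])        # source mesh with wrong cell volumes
x = correct_volumes(x0, np.array([1.0, 1.0, 1.0]))
```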

  6. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei

    2016-02-15

    In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
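
    The disk-free property the tetrahedra are later extracted from can be illustrated with plain dart throwing in the unit square (the paper's MPS is maximal and operates on surfaces and volumes; this sketch only shows the minimum-distance guarantee):

```python
import random

def poisson_disk(r, trials=2000, seed=1):
    """Dart-throwing Poisson-disk sampling in the unit square: accept a
    candidate only if it keeps distance >= r to every accepted sample."""
    rng = random.Random(seed)
    r2 = r * r
    pts = []
    for _ in range(trials):
        p = (rng.random(), rng.random())
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r2 for q in pts):
            pts.append(p)
    return pts

pts = poisson_disk(0.1)   # well-distributed, no two samples closer than 0.1
```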

  7. Design of Finite Element Tools for Coupled Surface and Volume Meshes

    Institute of Scientific and Technical Information of China (English)

    Daniel Köster; Oliver Kriessl; Kunibert G. Siebert

    2008-01-01

    Many problems with underlying variational structure involve a coupling of volume with surface effects. A straightforward approach in a finite element discretization is to make use of the surface triangulation that is naturally induced by the volume triangulation. In an adaptive method one wants to facilitate "matching" local mesh modifications, i.e., local refinement and/or coarsening, of the volume and surface meshes with standard tools such that the surface grid is always induced by the volume grid. We describe the concepts behind this approach for bisectional refinement and introduce the new tools incorporated in the finite element toolbox ALBERTA. We also present several important applications of the mesh coupling.

  8. Dense mesh sampling for video-based facial animation

    Science.gov (United States)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    The paper describes an approach for selecting feature points on three-dimensional triangle meshes obtained using various techniques from several video footages. This approach has a dual purpose. First, it minimizes the data stored for facial animation: instead of storing the position of each vertex in each frame, one can store only a small subset of vertices per frame and calculate the positions of the others from that subset. The second purpose is to select feature points that can be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond what can be achieved using marker-based performance-capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured-light scanner, and models constructed from video footage using stereophotogrammetry.
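
    The first purpose can be sketched with a linear stand-in for the paper's scheme (the fitting strategy and shapes below are assumptions): fit, for each dense vertex, fixed weights over the feature points from a few training frames; later frames then need only the feature-point positions.

```python
import numpy as np

def fit_weights(feature_traj, dense_traj):
    """Least-squares weights expressing each dense vertex as a fixed linear
    combination of the feature points.  `feature_traj` is frames x features,
    `dense_traj` is frames x dense-vertices (1-D coordinates for brevity)."""
    W, *_ = np.linalg.lstsq(feature_traj, dense_traj, rcond=None)
    return W

rng = np.random.default_rng(0)
frames = rng.normal(size=(6, 3))        # 6 training frames, 3 feature points
W_true = rng.normal(size=(3, 8))        # 8 dense vertices driven by the features
dense = frames @ W_true                 # synthetic "ground truth" animation
W = fit_weights(frames, dense)

new_feat = rng.normal(size=(1, 3))      # an unseen frame: store only this
recon = new_feat @ W                    # ... and reconstruct the dense mesh
```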

  9. voFoam - A geometrical Volume of Fluid algorithm on arbitrary unstructured meshes with local dynamic adaptive mesh refinement using OpenFOAM

    CERN Document Server

    Maric, Tomislav; Bothe, Dieter

    2013-01-01

    A new parallelized unsplit geometrical Volume of Fluid (VoF) algorithm with support for arbitrary unstructured meshes and dynamic local Adaptive Mesh Refinement (AMR), as well as for two- and three-dimensional computations, is developed. The geometrical VoF algorithm supports arbitrary unstructured meshes in order to enable computations involving flow domains of arbitrary geometrical complexity. The implementation of the method is done within the framework of the OpenFOAM library for Computational Continuum Mechanics (CCM) using the C++ programming language with modern policy-based design for high program code modularity. The development of the geometrical VoF algorithm significantly extends the method base of the OpenFOAM library by geometrical volumetric flux computation for two-phase flow simulations. For the volume fraction advection, a novel unsplit geometrical algorithm is developed, which inherently sustains volume conservation utilizing unique Lagrangian discrete trajectories located in the mesh points. ...

  10. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    Science.gov (United States)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.
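
    Phases two and three rest on plane fitting and point-to-plane distances; a heavily simplified sketch (one planar group, SVD fit, a distance threshold standing in for the bare-bones mask; all names illustrative):

```python
import numpy as np

def fit_plane(points):
    """Phase two, simplified: least-squares plane through one point group,
    returned as (centroid, unit normal) via SVD."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]                        # smallest singular direction

def split_by_plane(points, c, n, tol):
    """Phase three, simplified: points within `tol` of the fitted plane are
    treated as enclosed by the volume model and removed; the remaining
    points go on to form the mesh patches."""
    d = np.abs((points - c) @ n)
    return points[d <= tol], points[d > tol]

wall = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
c, n = fit_plane(wall)
scan = np.vstack([wall, [[0.5, 0.5, 0.3]]])   # planar wall plus one detail point
flat, detail = split_by_plane(scan, c, n, tol=0.05)
```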

  11. Relativistic Vlasov-Maxwell modelling using finite volumes and adaptive mesh refinement

    CERN Document Server

    Wettervik, Benjamin Svedung; Siminos, Evangelos; Fülöp, Tünde

    2016-01-01

    The dynamics of collisionless plasmas can be modelled by the Vlasov-Maxwell system of equations. An Eulerian approach is needed to accurately describe processes that are governed by high energy tails in the distribution function, but is of limited efficiency for high dimensional problems. The use of an adaptive mesh can reduce the scaling of the computational cost with the dimension of the problem. Here, we present a relativistic Eulerian Vlasov-Maxwell solver with block-structured adaptive mesh refinement in one spatial and one momentum dimension. The discretization of the Vlasov equation is based on a high-order finite volume method. A flux corrected transport algorithm is applied to limit spurious oscillations and ensure the physical character of the distribution function. We demonstrate a speed-up by a factor of five, because of the use of an adaptive mesh, in a typical scenario involving laser-plasma interaction in the self-induced transparency regime.

  12. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    Science.gov (United States)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable for unsteady simulations, and refinement and coarsening of the grid does not impact the conservatism of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable ...

  13. TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh

    Energy Technology Data Exchange (ETDEWEB)

    Schnack, D.D.; Lottati, I.; Mikic, Z. [Science Applications International Corp., San Diego, CA (United States)] [and others

    1995-07-01

    The authors describe TRIM, an MHD code which uses a finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  14. Mesh locking effects in the finite volume solution of 2-D anisotropic diffusion equations

    Science.gov (United States)

    Manzini, Gianmarco; Putti, Mario

    2007-01-01

    Strongly anisotropic diffusion equations require special techniques to overcome or reduce the mesh locking phenomenon. We present a finite volume scheme that tries to approximate with the best possible accuracy the quantities that are of importance in discretizing anisotropic fluxes. In particular, we discuss the crucial role of accurate evaluations of the components of the gradient acting tangentially to the control volume boundaries, which are called into play by anisotropic diffusion tensors. To obtain the sought characteristics from the proposed finite volume method, we employ a second-order accurate reconstruction scheme which is used to evaluate both normal and tangential cell-interface gradients. The experimental results on a number of different meshes show that the scheme maintains optimal convergence rates in both the L2 and H1 norms, except for the benchmark test considering full Neumann boundary conditions on non-uniform grids. In such a case, a severe locking effect is experienced and documented. However, within the range of practical values of the anisotropy ratio, the scheme is robust and efficient. We postulate and verify experimentally the existence of a quadratic relationship between the anisotropy ratio and the mesh size parameter that guarantees optimal and sub-optimal convergence rates.
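
    The role of gradient reconstruction can be made concrete with a standard least-squares fit from neighbour cells (a generic sketch, not the authors' particular scheme): once the full gradient is available, both its normal and tangential components at a face can be read off.

```python
import numpy as np

def ls_gradient(center, u_center, nbr_centers, nbr_values):
    """Least-squares gradient of a cell-average field at one cell, fitted
    from differences to its neighbours; second-order accurate for smooth
    fields and exact for linear ones."""
    A = np.asarray(nbr_centers, float) - np.asarray(center, float)
    b = np.asarray(nbr_values, float) - u_center
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

# linear field u = 2x + 3y sampled at four neighbour cell centers
g = ls_gradient((0.0, 0.0), 0.0,
                [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)],
                [2.0, 3.0, -2.0, -3.0])
tangential = g @ np.array([0.0, 1.0])   # component along a face tangent
```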

  15. Comparative study on triangular and quadrilateral meshes by a finite-volume method with a central difference scheme

    KAUST Repository

    Yu, Guojun

    2012-10-01

    In this article, comparative studies on computational accuracies and convergence rates of triangular and quadrilateral meshes are carried out in the framework of the finite-volume method. By theoretical analysis, we conclude that the number of triangular cells needs to be 4/3 times that of quadrilateral cells to obtain similar accuracy. The conclusion is verified by a number of numerical examples. In addition, the convergence rates of the triangular meshes are found to be slower than those of the quadrilateral meshes when the same accuracy is obtained with these two mesh types. © 2012 Taylor and Francis Group, LLC.

  16. ADER-WENO finite volume schemes with space-time adaptive mesh refinement

    Science.gov (United States)

    Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.

    2013-09-01

    We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time have been confirmed via a numerical convergence study and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme that combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.

  17. ComPASS : a tool for distributed parallel finite volume discretizations on general unstructured polyhedral meshes

    Directory of Open Access Journals (Sweden)

    Dalissier E.

    2013-12-01

    The objective of the ComPASS project is to develop a parallel multiphase Darcy flow simulator adapted to general unstructured polyhedral meshes (in a general sense, with possibly non-planar faces) and to the parallelization of advanced finite volume discretizations with various choices of the degrees of freedom, such as cell centres, vertices, or face centres. The main targeted applications are the simulation of CO2 geological storage, nuclear waste repositories, and reservoir simulation. The CEMRACS 2012 summer school devoted to high performance computing has been an ideal framework to start this collaborative project. This paper describes what has been achieved during the four weeks of the CEMRACS project, which focused on the implementation of basic features of the code such as the distributed unstructured polyhedral mesh, the synchronization of the degrees of freedom, and the connection to scientific libraries including the partitioner METIS, the visualization tool ParaView, and the parallel linear solver library PETSc. The parallel efficiency of this first version of the ComPASS code has been validated on a toy parabolic problem using the Vertex Approximate Gradient finite volume spatial discretization with both cell and vertex degrees of freedom, combined with an Euler implicit time integration.

  18. ADER-WENO Finite Volume Schemes with Space-Time Adaptive Mesh Refinement

    CERN Document Server

    Dumbser, Michael; Hidalgo, Arturo; Balsara, Dinshaw S.

    2012-01-01

    We present the first high order one-step ADER-WENO finite volume scheme with Adaptive Mesh Refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the Message Passing Interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time have been confirmed via a numerical convergence ...

  19. Higher-order conservative interpolation between control-volume meshes: Application to advection and multiphase flow problems with dynamic mesh adaptivity

    Science.gov (United States)

    Adam, A.; Pavlidis, D.; Percival, J. R.; Salinas, P.; Xie, Z.; Fang, F.; Pain, C. C.; Muggeridge, A. H.; Jackson, M. D.

    2016-09-01

    A general, higher-order, conservative and bounded interpolation for the dynamic and adaptive meshing of control-volume fields dual to continuous and discontinuous finite element representations is presented. Existing techniques such as node-wise interpolation are not conservative and do not readily generalise to discontinuous fields, whilst conservative methods such as Grandy interpolation are often too diffusive. The new method uses control-volume Galerkin projection to interpolate between control-volume fields. Bounded solutions are ensured by using a post-interpolation diffusive correction. Example applications of the method to interface capturing during advection and also to the modelling of multiphase porous media flow are presented to demonstrate the generality and robustness of the approach.
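
    For piecewise-constant fields in one dimension, the control-volume Galerkin projection the paper builds on reduces to overlap-weighted averaging, which is exactly conservative; the higher-order, multi-dimensional cases and the post-interpolation diffusive correction are where the real work lies. A minimal sketch:

```python
def conservative_remap(src_edges, src_vals, dst_edges):
    """Remap cell averages from one 1-D control-volume mesh to another by
    weighting each source value with its overlap length; the integral of
    the field over the domain is conserved exactly."""
    dst_vals = []
    for j in range(len(dst_edges) - 1):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        acc = 0.0
        for i in range(len(src_edges) - 1):
            ov = min(hi, src_edges[i + 1]) - max(lo, src_edges[i])
            if ov > 0.0:                    # cells actually overlap
                acc += ov * src_vals[i]
        dst_vals.append(acc / (hi - lo))
    return dst_vals

src_e, src_v = [0.0, 0.5, 1.0], [1.0, 3.0]
dst_e = [0.0, 0.25, 0.8, 1.0]               # a different (adapted) mesh
dst_v = conservative_remap(src_e, src_v, dst_e)
```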

  20. ESECT/EMAP: mapping algorithm for computing intersection volumes of overlaid meshes in cylindrical geometry. [In FORTRAN for CDC 6600 and 7600 computers

    Energy Technology Data Exchange (ETDEWEB)

    Wienke, B.R.; O'Dell, R.D.

    1976-12-01

    ESECT and EMAP are subroutines which provide a computer algorithm for mapping arbitrary meshes onto rectangular meshes in cylindrical (r,z) geometry. Input consists of the lines defining the rectangular mesh and the coordinates of the arbitrary mesh, which are assumed to be joined by straight lines. Output consists of the intersection volumes with designation of common mesh zones. The ESECT and EMAP routines do not comprise a "free-standing" code but, instead, are intended for inclusion in existing codes for which one mesh structure (typically Lagrangian) needs to be mapped onto an Eulerian mesh. Such mappings are of interest in coupled hydrodynamic and neutronic calculations. Exact expressions for the volumes of rotation (about z-axis) generated by the planar mesh intersection areas are used. Intersection points of the two meshes are computed and mapped onto corresponding regions on the rectangular mesh. Intersection points with the same regional indices are recorded into multilaterals, and the multilaterals are triangulated to facilitate computation of the intersection volumes. Dimension statements within ESECT/EMAP presently allow for rectangular and arbitrary meshes of 10k and 3.6k grid points. Scaling of all arrays to suit individual applications is easily effected. Computations of intersection volumes generated by overlapping 10k rectangular and 2.2k radial meshes require an average of 18 s computer time, while computation times for the same meshes scaled by a factor of 1/4 in number of grid points average 3 s on the CDC 7600. Generally, cases of small cell rectangular meshes overlaid on large cell arbitrary meshes require the longer running times. 10 figures, 2 tables.
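
    The exact volume-of-rotation expressions such routines rely on follow Pappus's centroid theorem, V = 2*pi*rbar*A, with the polygon area A and centroid radius rbar obtained from shoelace sums. A sketch for one planar polygon in the (r, z) half-plane (illustrative Python, not the original FORTRAN):

```python
import math

def revolution_volume(poly):
    """Exact volume swept by revolving a simple polygon in the (r, z)
    half-plane (r >= 0) about the z-axis: V = 2*pi*rbar*A by Pappus's
    theorem, with signed area A and centroid radius rbar from shoelace sums."""
    area2 = 0.0          # twice the signed area
    moment = 0.0         # 6 * (signed area) * rbar
    n = len(poly)
    for i in range(n):
        r0, z0 = poly[i]
        r1, z1 = poly[(i + 1) % n]
        cross = r0 * z1 - r1 * z0
        area2 += cross
        moment += (r0 + r1) * cross
    area = 0.5 * area2
    rbar = moment / (6.0 * area)            # centroid radius of the polygon
    return 2.0 * math.pi * abs(area) * rbar

# annular square cross-section, r in [1, 2], z in [0, 1]; the shell
# formula gives pi * (2^2 - 1^2) * 1 = 3*pi for comparison
v = revolution_volume([(1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (1.0, 1.0)])
```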

  21. Spherical geodesic mesh generation

    Energy Technology Data Exchange (ETDEWEB)

    Fung, Jimmy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kenamond, Mark Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burton, Donald E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-27

    In ALE simulations with moving meshes, mesh topology has a direct influence on feature representation and code robustness. In three-dimensional simulations, modeling spherical volumes and features is particularly challenging for a hydrodynamics code. Calculations on traditional spherical meshes (such as spin meshes) often lead to errors and symmetry breaking. Although the underlying differencing scheme may be modified to rectify this, the differencing scheme may not be accessible. This work documents the use of spherical geodesic meshes to mitigate solution-mesh coupling. These meshes are generated notionally by connecting geodesic surface meshes to produce triangular-prismatic volume meshes. This mesh topology is fundamentally different from traditional mesh topologies and displays superior qualities such as topological symmetry. This work describes the geodesic mesh topology as well as motivating demonstrations with the FLAG hydrocode.

  22. WLS-ENO: Weighted-least-squares based essentially non-oscillatory schemes for finite volume methods on unstructured meshes

    Science.gov (United States)

    Liu, Hongxu; Jiao, Xiangmin

    2016-06-01

    ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes are widely used high-order schemes for solving partial differential equations (PDEs), especially hyperbolic conservation laws with piecewise smooth solutions. For structured meshes, these techniques can achieve high order accuracy for smooth functions while being non-oscillatory near discontinuities. For unstructured meshes, which are needed for complex geometries, similar schemes are required but they are much more challenging. We propose a new family of non-oscillatory schemes, called WLS-ENO, in the context of solving hyperbolic conservation laws using finite-volume methods over unstructured meshes. WLS-ENO is derived based on Taylor series expansion and solved using a weighted least squares formulation. Unlike other non-oscillatory schemes, the WLS-ENO does not require constructing sub-stencils, and hence it provides a more flexible framework and is less sensitive to mesh quality. We present rigorous analysis of the accuracy and stability of WLS-ENO, and present numerical results in 1-D, 2-D, and 3-D for a number of benchmark problems, and also report some comparisons against WENO.
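
    The single-stencil flavour can be illustrated in one dimension with a heavily simplified sketch: a weighted least-squares linear fit with inverse-distance weights, and none of the nonlinear ENO-style weighting that actually suppresses oscillations near discontinuities.

```python
import numpy as np

def wls_reconstruct(xc, uc, xs, us, x_eval):
    """Weighted least-squares linear reconstruction around the cell at xc:
    one stencil of neighbour cells (xs != xc), nearer neighbours weighted
    more strongly; evaluate the fitted profile at x_eval (e.g. a face)."""
    dx = np.asarray(xs, float) - xc
    du = np.asarray(us, float) - uc
    w = 1.0 / np.abs(dx)                    # inverse-distance weights
    slope = np.sum(w * dx * du) / np.sum(w * dx * dx)
    return uc + slope * (x_eval - xc)

# exact for linear data u = 2x + 1, whatever the weights
u_face = wls_reconstruct(0.0, 1.0, [-1.0, 0.5, 2.0], [-1.0, 2.0, 5.0], 0.25)
```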

  23. Arbitrary-Lagrangian-Eulerian Discontinuous Galerkin schemes with a posteriori subcell finite volume limiting on moving unstructured meshes

    Science.gov (United States)

    Boscheri, Walter; Dumbser, Michael

    2017-10-01

    We present a new family of high order accurate fully discrete one-step Discontinuous Galerkin (DG) finite element schemes on moving unstructured meshes for the solution of nonlinear hyperbolic PDE in multiple space dimensions, which may also include parabolic terms in order to model dissipative transport processes, like molecular viscosity or heat conduction. High order piecewise polynomials of degree N are adopted to represent the discrete solution at each time level and within each spatial control volume of the computational grid, while high order of accuracy in time is achieved by the ADER approach, making use of an element-local space-time Galerkin finite element predictor. A novel nodal solver algorithm based on the HLL flux is derived to compute the velocity for each nodal degree of freedom that describes the current mesh geometry. In our algorithm the spatial mesh configuration can be defined in two different ways: either by an isoparametric approach that generates curved control volumes, or by a piecewise linear decomposition of each spatial control volume into simplex sub-elements. Each technique generates a corresponding number of geometrical degrees of freedom needed to describe the current mesh configuration and which must be considered by the nodal solver for determining the grid velocity. The connection of the old mesh configuration at time tn with the new one at time t n + 1 provides the space-time control volumes on which the governing equations have to be integrated in order to obtain the time evolution of the discrete solution. Our numerical method belongs to the category of so-called direct Arbitrary-Lagrangian-Eulerian (ALE) schemes, where a space-time conservation formulation of the governing PDE system is considered and which already takes into account the new grid geometry (including a possible rezoning step) directly during the computation of the numerical fluxes. We emphasize that our method is a moving mesh method, as opposed to total

  24. A shock-fitting technique for cell-centered finite volume methods on unstructured dynamic meshes

    Science.gov (United States)

    Zou, Dongyang; Xu, Chunguang; Dong, Haibo; Liu, Jun

    2017-09-01

    In this work, the shock-fitting technique is further developed on unstructured dynamic meshes. The shock wave is fitted and regarded as a special boundary, whose boundary conditions and boundary speed (the shock speed) are determined by solving the Rankine-Hugoniot relations. The fitted shock splits the entire computational region into subregions, in which the flows are free from shocks and the flow states are solved by a shock-capturing code based on an arbitrary Lagrangian-Eulerian algorithm. Along with the motion of the fitted shock, an unstructured dynamic mesh algorithm is used to update the internal nodes' positions to maintain the high quality of the computational meshes. The successful applications demonstrate that the present shock-fitting is a valid technique.
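
    For a scalar conservation law the Rankine-Hugoniot relation that drives the fitted boundary reduces to s = [f(u)] / [u]; the paper solves the full system relations, but the scalar case shows what "boundary speed" means:

```python
def rankine_hugoniot_speed(uL, uR, flux):
    """Shock speed from the Rankine-Hugoniot relation s = [f(u)] / [u]
    for a scalar conservation law; a fitted shock boundary is advanced
    with exactly this speed."""
    return (flux(uR) - flux(uL)) / (uR - uL)

# Burgers' equation f(u) = u^2 / 2: a shock between uL = 2 and uR = 0
# moves at the average of the two states, s = (uL + uR) / 2 = 1
s = rankine_hugoniot_speed(2.0, 0.0, lambda u: 0.5 * u * u)
```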

  5. Second order finite volume scheme for Maxwell's equations with discontinuous electromagnetic properties on unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ismagilov, Timur Z., E-mail: ismagilov@academ.org

    2015-02-01

    This paper presents a second order finite volume scheme for the numerical solution of Maxwell's equations with discontinuous dielectric permittivity and magnetic permeability on unstructured meshes. The scheme is based on the Godunov scheme and employs the approaches of Van Leer and Lax–Wendroff to increase the order of approximation. To maintain second order approximation near discontinuities in dielectric permittivity and magnetic permeability, a novel technique for gradient calculation and limitation is applied there. Results of test computations for problems with linear and curvilinear discontinuities confirm the second order of approximation. The scheme was applied to modelling the propagation of electromagnetic waves inside photonic crystal waveguides with a bend.
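The gradient limitation step can be illustrated in one dimension. A hedged sketch of a limited second-order reconstruction of face values from cell averages, using the minmod limiter rather than the paper's exact limiting strategy:

```python
def minmod(a, b):
    """Minmod limiter: the smaller of two one-sided slopes, zero at extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_face_values(u_left, u_center, u_right):
    """Second-order limited reconstruction of the left/right face values
    of a cell from its own average and its two neighbours' averages."""
    slope = minmod(u_center - u_left, u_right - u_center)
    return u_center - 0.5 * slope, u_center + 0.5 * slope
```

In smooth regions the reconstruction is linear (`limited_face_values(0.0, 1.0, 2.0)` gives `(0.5, 1.5)`); at a local extremum the slope is limited to zero, which prevents new oscillations near discontinuities.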

  6. Adaptive Mesh Refinement for a Finite Volume Method for Flow and Transport of Radionuclides in Heterogeneous Porous Media

    Directory of Open Access Journals (Sweden)

    Amaziane Brahim

    2014-07-01

    Full Text Available In this paper, we consider adaptive numerical simulation of miscible displacement problems in porous media, which are modeled by single phase flow equations. A vertex-centred finite volume method is employed to discretize the coupled system: the Darcy flow equation and the diffusion-convection concentration equation. The convection term is approximated with a Godunov scheme over the dual finite volume mesh, whereas the diffusion-dispersion term is discretized by piecewise linear conforming finite elements. We introduce two kinds of indicators, both of residual type. The first is related to the time discretization and is local in time: at each time it provides the information needed to choose the next time step. The second is related to the space discretization and is local in both time and space: at each time it serves as an efficient tool for mesh adaptivity. An error estimation procedure evaluates where additional refinement is needed, and grid generation procedures dynamically create or remove fine-grid patches as resolution requirements change. The method was implemented in the software MELODIE, developed by the French Institute for Radiological Protection and Nuclear Safety (IRSN, Institut de Radioprotection et de Sûreté Nucléaire). The algorithm is then used to simulate the evolution of radionuclide migration from the waste packages through a heterogeneous disposal, demonstrating its capability to capture the complex behavior of the resulting flow.
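The spatial indicator drives where fine-grid patches are created or removed. A minimal, hypothetical flagging step (the threshold policy below — fractions of the maximum indicator — is an illustrative assumption, not MELODIE's actual criterion):

```python
def flag_cells(indicators, refine_frac=0.5, coarsen_frac=0.05):
    """Return (refine, coarsen) index lists from per-cell error indicators.
    Cells above refine_frac * max are refined; cells below
    coarsen_frac * max are candidates for coarsening."""
    eta_max = max(indicators)
    refine = [i for i, eta in enumerate(indicators) if eta > refine_frac * eta_max]
    coarsen = [i for i, eta in enumerate(indicators) if eta < coarsen_frac * eta_max]
    return refine, coarsen
```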

  7. Coupling of a 3-D vortex particle-mesh method with a finite volume near-wall solver

    Science.gov (United States)

    Marichal, Y.; Lonfils, T.; Duponcheel, M.; Chatelain, P.; Winckelmans, G.

    2011-11-01

    This coupling aims at improving the computational efficiency of high Reynolds number bluff body flow simulations by using two complementary methods and exploiting their respective advantages in distinct parts of the domain. Vortex particle methods are particularly well suited for free vortical flows such as wakes or jets (the computational domain -with non zero vorticity- is then compact and dispersion errors are negligible). Finite volume methods, however, can handle boundary layers much more easily due to anisotropic mesh refinement. In the present approach, the vortex method is used in the whole domain (overlapping domain technique) but its solution is highly underresolved in the vicinity of the wall. It thus has to be corrected by the near-wall finite volume solution at each time step. Conversely, the vortex method provides the outer boundary conditions for the near-wall solver. A parallel multi-resolution vortex particle-mesh approach is used here along with an Immersed Boundary method in order to take the walls into account. The near-wall flow is solved by OpenFOAM® using the PISO algorithm. We validate the methodology on the flow past a sphere at a moderate Reynolds number. F.R.S. - FNRS Research Fellow.

  8. New finite volume methods for approximating partial differential equations on arbitrary meshes; Nouvelles methodes de volumes finis pour approcher des equations aux derivees partielles sur des maillages quelconques

    Energy Technology Data Exchange (ETDEWEB)

    Hermeline, F

    2008-12-15

    This dissertation presents some new methods of finite volume type for approximating partial differential equations on arbitrary meshes. The main idea lies in solving the problem under consideration twice. The methods address elliptic equations with variable (anisotropic, antisymmetric, discontinuous) coefficients, linear and nonlinear parabolic equations (heat equation, radiative diffusion, magnetic diffusion with Hall effect), wave-type equations (Maxwell, acoustics), and the elasticity and Stokes equations. Numerous numerical experiments show the good behaviour of this type of method. (author)

  9. Using adaptive sampling and triangular meshes for the processing and inversion of potential field data

    Science.gov (United States)

    Foks, Nathan Leon

    The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis: boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grids and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data points in a data set. The approach selects data points according to a user-specified proxy for signal content. The approach is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility at representing oblique, or arbitrarily oriented structures. This flexibility makes unstructured meshes an ideal candidate for geometry based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, by using an unstructured representation of both the data and model parameters.
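The data-reduction idea — keep only the points that carry the most signal according to a user-specified proxy — can be sketched as follows (the proxy itself, e.g. a local gradient magnitude, is supplied by the user; the names here are illustrative):

```python
import numpy as np

def select_by_proxy(data, proxy, n_keep):
    """Keep the n_keep observations with the largest signal-content proxy.

    data  : 1-D array of measurements
    proxy : 1-D array, one proxy value per measurement
    Returns the reduced data and the (sorted) indices that were kept."""
    order = np.argsort(proxy)[::-1]     # indices by decreasing proxy
    keep = np.sort(order[:n_keep])      # restore original ordering
    return data[keep], keep
```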

  10. A new high-order finite volume method for 3D elastic wave simulation on unstructured meshes

    Science.gov (United States)

    Zhang, Wensheng; Zhuang, Yuan; Zhang, Lina

    2017-07-01

    In this paper, we propose a new efficient high-order finite volume method for 3D elastic wave simulation on unstructured tetrahedral meshes. Starting from relatively coarse tetrahedral meshes, we subdivide each tetrahedron to generate a stencil for the high-order polynomial reconstruction. The subdivision algorithm guarantees that the number of sub-elements is greater than the number of degrees of freedom of a complete polynomial. We perform the reconstruction on this stencil by using cell-averaged quantities based on the hierarchical orthonormal basis functions. Unlike the traditional high-order finite volume method, our new method has a very local property like DG and can be written as an inner-split computational scheme, which is beneficial to reducing the computational cost. Moreover, the stencil in our method is easy to generate for all tetrahedrons, especially in the three-dimensional case. The resulting reconstruction matrix is invertible and remains unchanged for all tetrahedrons, and thus it can be pre-computed and stored before time evolution. These special advantages facilitate parallelization and high-order computations. We show convergence results obtained with the proposed method up to fifth order accuracy in space. The high-order accuracy in time is obtained by the Runge-Kutta method. Comparisons between numerical and analytic solutions show the proposed method can provide accurate wavefield information. Numerical simulation for a realistic model with complex topography demonstrates the effectiveness and potential applications of our method. Though the method is proposed based on the 3D elastic wave equation, it can be extended to other linear hyperbolic systems.
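The stencil reconstruction can be illustrated in simplified form: a least-squares fit of a linear polynomial to stencil values (the paper reconstructs higher-degree polynomials from cell averages with hierarchical orthonormal bases; this sketch keeps only the over-determined least-squares core):

```python
import numpy as np

def reconstruct_gradient(centroids, values):
    """Least-squares linear reconstruction u(x) ~ u0 + g.(x - x0) on a
    stencil; centroids is (n, d), values is (n,), with entry 0 the target
    cell. The stencil has more cells than unknowns, so the system is
    solved in the least-squares sense."""
    A = centroids[1:] - centroids[0]
    b = values[1:] - values[0]
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

# For a linear field u = 2x + 3y the gradient is recovered exactly
cells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u = np.array([0.0, 2.0, 3.0, 5.0])
```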

  11. Improved Simulation of Subsurface Flow in Heterogeneous Reservoirs Using a Fully Discontinuous Control-Volume-Finite-Element Method, Implicit Timestepping and Dynamic Unstructured Mesh Optimization

    Science.gov (United States)

    Salinas, P.; Jackson, M.; Pavlidis, D.; Pain, C.; Adam, A.; Xie, Z.; Percival, J. R.

    2015-12-01

    We present a new, high-order, control-volume-finite-element (CVFE) method with discontinuous representation for pressure and velocity to simulate multiphase flow in heterogeneous porous media. Time is discretized using an adaptive, fully implicit method. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. A given model typically contains numerous such geologic domains. Our approach conserves mass and does not require the use of CVs that span domain boundaries. Computational efficiency is increased by use of dynamic mesh optimization, in which an unstructured mesh adapts in space and time to key solution fields, such as pressure, velocity or saturation, whilst preserving the geometry of the geologic domains. Up-, cross- or down-scaling of material properties during mesh optimization is not required, as the properties are uniform within each geologic domain. We demonstrate that the approach, amongst other features, accurately preserves sharp saturation changes associated with high aspect ratio geologic domains such as fractures and mudstones, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at significantly lower computational cost than an equivalent fine, fixed mesh and conventional CVFE methods. The use of implicit time integration allows the method to efficiently converge using highly anisotropic meshes without having to reduce the time-step. The work is significant for two key reasons. First, it resolves a long-standing problem associated with the use of classical CVFE methods to model flow in highly heterogeneous porous media, in which CVs span boundaries between domains of contrasting material properties. Second, it reduces computational cost/increases solution accuracy through the use of dynamic mesh optimization and time-stepping with large Courant number.

  12. A Study of the Transient Response of Duct Junctions: Measurements and Gas-Dynamic Modeling with a Staggered Mesh Finite Volume Approach

    Directory of Open Access Journals (Sweden)

    Antonio J. Torregrosa

    2017-05-01

    Full Text Available Duct junctions play a major role in the operation and design of most piping systems. The objective of this paper is to establish the potential of a staggered mesh finite volume model as a way to improve the description of the effect of simple duct junctions on an otherwise one-dimensional flow system, such as the intake or exhaust of an internal combustion engine. Specific experiments have been performed in which different junctions have been characterized as multi-ports, providing precise and reliable results on the propagation of pressure pulses across junctions. The results obtained have been compared to simulations performed with a staggered mesh finite volume method with different flux limiters and different meshes and, as a reference, have also been compared with the results of a more conventional pressure-loss-based model. The results indicate that the staggered mesh finite volume model provides a closer description of wave dynamics, even if further work is needed to establish the optimal calculation settings.
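The staggered mesh arrangement — pressures stored at cell centres, velocities at cell faces — can be sketched on the simplest possible model, 1D linear acoustics (an illustration of the discretization layout only, not the paper's nonlinear gas-dynamic solver):

```python
import numpy as np

def staggered_step(p, u, dt, dx, rho=1.0, c=1.0):
    """One explicit step of 1D linear acoustics on a staggered mesh.
    p holds pressures at cell centres, u holds velocities at the
    len(p) - 1 interior faces; boundary values are held fixed."""
    # face velocities advance from the pressure gradient across each face
    u = u - dt / (rho * dx) * (p[1:] - p[:-1])
    # interior pressures advance from the velocity divergence
    p = p.copy()
    p[1:-1] -= rho * c * c * dt / dx * (u[1:] - u[:-1])
    return p, u
```

A uniform pressure field with zero velocity is an exact steady state of this update, which is a quick sanity check of the staggered coupling.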

  13. Delaunay mesh generation

    CERN Document Server

    Cheng, Siu-Wing; Shewchuk, Jonathan

    2013-01-01

    Written by authors at the forefront of modern algorithms research, Delaunay Mesh Generation demonstrates the power and versatility of Delaunay meshers in tackling complex geometric domains ranging from polyhedra with internal boundaries to piecewise smooth surfaces. Covering both volume and surface meshes, the authors fully explain how and why these meshing algorithms work. The book is one of the first to integrate a vast amount of cutting-edge material on Delaunay triangulations. It begins by introducing the problem of mesh generation and describing algorithms for constructing Delaunay triangulations.
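The defining property of a Delaunay triangulation is that no vertex lies inside any triangle's circumcircle. A minimal sketch of the standard determinant form of that empty-circumcircle predicate (a robust mesher would use exact or adaptive-precision arithmetic here):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c), where a, b, c are given in counter-clockwise order."""
    rows = [(px - d[0], py - d[1], (px - d[0]) ** 2 + (py - d[1]) ** 2)
            for px, py in (a, b, c)]
    (ax, ay, aq), (bx, by, bq), (cx, cy, cq) = rows
    det = (ax * (by * cq - bq * cy)
           - ay * (bx * cq - bq * cx)
           + aq * (bx * cy - by * cx))
    return det > 0.0
```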

  14. Development of a higher-order finite volume method for simulation of thermal oil recovery process using moving mesh strategy

    Energy Technology Data Exchange (ETDEWEB)

    Ahmadi, M. [Heriot Watt Univ., Edinburgh (United Kingdom)

    2008-10-15

    This paper described a project in which a higher order up-winding scheme was used to solve mass/energy conservation equations for simulating steam flood processes in an oil reservoir. Thermal recovery processes are among the most complex because they require a detailed accounting of thermal energy and chemical reaction kinetics. The numerical simulation of thermal recovery processes involves localized phenomena such as saturation and temperature fronts due to the hyperbolic features of the governing conservation laws. A second order accurate FV method, improved by a moving mesh strategy, was used to adjust for moving coordinates on a finely gridded domain. The finite volume method was used, and the problem of steam injection was then tested using the derived solution frameworks on both mixed and moving coordinates. The benefits of using a higher-order Godunov solver instead of lower-order ones were quantified. This second order correction resulted in better resolution of moving features. The preference for higher-order solvers over lower-order ones in terms of shock capturing is under further investigation. It was concluded that although this simulation study was limited to steam flooding processes, the newly presented approach may be suitable for other enhanced oil recovery processes such as VAPEX, SAGD and in situ combustion. 23 refs., 28 figs.

  15. Opfront: mesh

    DEFF Research Database (Denmark)

    2015-01-01

    Mesh generation and visualization software based on the CGAL library. Folder content:
    - drawmesh: visualize slices of the mesh (surface/volumetric) as wireframe on top of an image (3D).
    - drawsurf: visualize surfaces of the mesh (surface/volumetric).
    - img2mesh: convert isosurface in image to volumetric...

  16. TRU Waste Sampling Program: Volume I. Waste characterization

    Energy Technology Data Exchange (ETDEWEB)

    Clements, T.L. Jr.; Kudera, D.E.

    1985-09-01

    Volume I of the TRU Waste Sampling Program report presents the waste characterization information obtained from sampling and characterizing various aged transuranic waste retrieved from storage at the Idaho National Engineering Laboratory and the Los Alamos National Laboratory. The data contained in this report include the results of gas sampling and gas generation, radiographic examinations, waste visual examination results, and waste compliance with the Waste Isolation Pilot Plant-Waste Acceptance Criteria (WIPP-WAC). A separate report, Volume II, contains data from the gas generation studies.

  17. Adaptive mesh generation for image registration and segmentation

    DEFF Research Database (Denmark)

    Fogtmann, Mads; Larsen, Rasmus

    2013-01-01

    This paper deals with the problem of generating quality tetrahedral meshes for image registration. From an initial coarse mesh the approach matches the mesh to the image volume by combining red-green subdivision and mesh evolution through mesh-to-image matching regularized with a mesh quality...

  18. Precise small volume sample handling for capillary electrophoresis.

    Science.gov (United States)

    Mozafari, Mona; Nachbar, Markus; Deeb, Sami El

    2015-09-03

    Capillary electrophoresis is one of the most important analytical techniques. Although the injected sample volume in capillary electrophoresis is only in the nanoliter range, most commercial CE instruments need approximately 50 μL of the sample in the injection vial to perform the analysis. Hence, in order to fully profit from the low injection volumes, smaller vial volumes are required. Thus, experiments were performed using silicone oil, which has a higher density than water (1.09 g/mL), to replace sample dead volume in the vial. The results were compared to those performed without the silicone oil in the sample vial. As an example, five standard proteins, namely beta-lactoglobulin, BSA, HSA, myoglobin and ovalbumin, and one of the proteins involved in the coagulation cascade, called vitronectin, were investigated using capillary electrophoresis. Mobility ratios and peak areas were compared, and no significant changes were observed (RSDs% for mobility ratios and peak areas were better than 0.9% and 5.8%, respectively). Afterwards, an affinity capillary electrophoresis method was used to investigate the interactions of two proteins, namely HSA and vitronectin, with three ligands, namely enoxaparin sodium, unfractionated heparin and pentosan polysulfate sodium (PPS). Mobility shift precision results showed that the employment of the filling has no noticeable effect on any of the protein-ligand interactions. Using a commercial PrinCE instrument and an autosampler, the required sample volume is reduced to 10 μL, and almost this complete volume can be subsequently injected during repeated experiments.

  19. Volume Ray Casting with Peak Finding and Differential Sampling

    KAUST Repository

    Knoll, A.

    2009-11-01

    Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions. © 2009 IEEE.
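Explicitly solving for an isovalue inside the volume rendering integral amounts to detecting where f - isovalue changes sign between successive samples along the ray and locating the root there. A hedged sketch using linear interpolation for the root (the paper's peak finding within the transfer function is more elaborate):

```python
def find_isosurface_hits(samples, dt, isovalue):
    """Ray parameters t at which the sampled scalar field crosses
    isovalue; samples[i] is the field value at t = i * dt, and each
    crossing is located by linear interpolation between samples."""
    hits = []
    for i in range(len(samples) - 1):
        f0, f1 = samples[i], samples[i + 1]
        if (f0 - isovalue) * (f1 - isovalue) < 0.0:
            hits.append((i + (isovalue - f0) / (f1 - f0)) * dt)
    return hits
```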

  20. Influence of volume of sample processed on detection of Chlamydia trachomatis in urogenital samples by PCR.

    OpenAIRE

    Goessens, Wil; Kluytmans, Jan; Toom, N.; van Rijsoort-Vos, T H; Stolz, Ernst; Verbrugh, Henri; Quint, Wim; Niesters, Bert

    1995-01-01

    textabstractIn the present study, it was demonstrated that the sensitivity of the PCR for the detection of Chlamydia trachomatis is influenced by the volume of the clinical sample which is processed in the PCR. An adequate sensitivity for PCR was established by processing at least 4%, i.e., 80 microliters, of the clinical sample volume per PCR. By using this preparation procedure, 1,110 clinical samples were evaluated by PCR and by cell culture, and results were compared. After discordant ana...

  1. Analysis of Orbital Volume Measurements Following Reduction and Internal Fixation Using Absorbable Mesh Plates and Screws for Patients With Orbital Floor Blowout Fractures.

    Science.gov (United States)

    Hwang, Won Joo; Lee, Do Heon; Choi, Won; Hwang, Jae Ha; Kim, Kwang Seog; Lee, Sam Yong

    2017-08-22

    Hinge-shaped fractures are a common type of orbital floor blowout fracture, for which reduction and internal fixation is ideal. Nonetheless, orbital floor reconstruction using alloplastic materials without reducing the number of bone fragments is the most frequently used procedure. Therefore, this study analyzed and compared the outcomes between open reduction and internal fixation using absorbable mesh plates and screws, and orbital floor reconstruction, by measuring the orbital volume before and after surgery. Among patients with orbital floor blowout fractures, this study included 28 patients who underwent open reduction and internal fixation, and 27 patients who underwent orbital floor reconstruction, from December 2008 to September 2015. The mechanism of injury, ophthalmic symptoms before and after surgery, and the degree of enophthalmos were examined; subsequently, the volumes of the affected and unaffected sides were measured before and after surgery based on computed tomography images. This study compared the degree of recovery between the 2 groups in terms of the correction rate of the orbital volume, ophthalmic symptoms, and enophthalmos. The patients who underwent open reduction and internal fixation and those who underwent orbital floor reconstruction showed average correction rates of 100.36% and 105.24%, respectively. Open reduction and internal fixation showed statistically significantly superior treatment outcomes compared with orbital floor reconstruction. The ophthalmic symptoms and enophthalmos completely resolved in both groups. For orbital floor blowout fractures, open reduction and internal fixation using absorbable mesh plates and screws was a feasible alternative to orbital floor reconstruction.

  2. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    Science.gov (United States)

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small volume samples. Pulsed-dc-ESI uses a constant high voltage to remotely induce the generation of single-polarity pulsed electrospray. This method significantly boosts sample economy, providing several minutes of MS signal duration from a sample of merely picoliter volume. The elongated MS signal duration enables abundant MS(2) information to be collected on components of interest in a small volume sample for systematic analysis. The method was successfully applied to single cell metabolomics analysis. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small volume samples.

  3. Comprehensive workflow for wireline fluid sampling in unconsolidated formations utilizing new large volume sampling equipment

    Energy Technology Data Exchange (ETDEWEB)

    Kvinnsland, S.; Brun, M. [TOTAL EandP Norge (Norway); Achourov, V.; Gisolf, A. [Schlumberger (Canada)

    2011-07-01

    Precise and accurate knowledge of fluid properties in unconsolidated formations is essential to the design of production facilities. Wireline formation testers (WFTs) have a wide range of applications, and the latest WFTs can be used to define fluid properties in wells drilled with oil based mud (OBM) by acquiring PVT and large volume samples. To use these technologies, a comprehensive workflow has to be implemented, and the aim of this paper is to present such a workflow. Sampling was conducted in highly unconsolidated sand saturated with biodegradable fluid in the Hild field in the North Sea. Results showed the use of this comprehensive workflow to be successful in obtaining large volume samples with a contamination level below 1%. These samples allowed the oil to be precisely characterized, making design updates to the project possible. This paper highlights that the use of the latest WFT technologies can help better characterize fluids in unconsolidated formations and thus optimize production facility design.

  7. Metabolic profiling of ultrasmall sample volumes with GC/MS: From microliter to nanoliter samples

    NARCIS (Netherlands)

    Koek, M.M.; Bakels, F.; Engel, W.; Maagdenberg, A. van den; Ferrari, M.D.; Coulier, L.; Hankemeier, T.

    2010-01-01

    Profiling of metabolites is increasingly used to study the functioning of biological systems. For some studies the volume of available samples is limited to only a few microliters or even less, for fluids such as cerebrospinal fluid (CSF) of small animals like mice or the analysis of individual oocytes.

  8. Automated force volume image processing for biological samples.

    Directory of Open Access Journals (Sweden)

    Pavel Polyakov

    Full Text Available Atomic force microscopy (AFM) has now become a powerful technique for investigating, on a molecular level, surface forces, nanomechanical properties of deformable particles, biomolecular interactions, kinetics, and dynamic processes. This paper specifically focuses on the analysis of AFM force curves collected on biological systems, in particular, bacteria. The goal is to provide fully automated tools to achieve theoretical interpretation of force curves on the basis of adequate, available physical models. In this respect, we propose two algorithms, one for the processing of approach force curves and another for the quantitative analysis of retraction force curves. In the former, electrostatic interactions prior to contact between AFM probe and bacterium are accounted for and mechanical interactions operating after contact are described in terms of Hertz-Hooke formalism. Retraction force curves are analyzed on the basis of the Freely Jointed Chain model. For both algorithms, the quantitative reconstruction of force curves is based on the robust detection of critical points (jumps, changes of slope or changes of curvature) which mark the transitions between the various relevant interactions taking place between the AFM tip and the studied sample during approach and retraction. Once the key regions of separation distance and indentation are detected, the physical parameters describing the relevant interactions operating in these regions are extracted making use of a regression procedure for fitting experiments to theory. The flexibility, accuracy and strength of the algorithms are illustrated with the processing of two force-volume images, which collect a large set of approach and retraction curves measured on a single biological surface. For each force-volume image, several maps are generated, representing the spatial distribution of the searched physical parameters as estimated for each pixel of the force-volume image.
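The post-contact (indentation) part of an approach curve is described above by a Hertz-Hooke formalism. A minimal sketch of the regression step for the Hertzian contact law F = k·δ^(3/2), fitting only the prefactor k (a full analysis would also detect the contact point among the critical points):

```python
import numpy as np

def fit_hertz_prefactor(indentation, force):
    """Least-squares fit (no intercept) of F = k * delta**1.5 to
    post-contact data; returns the prefactor k, which bundles the
    effective elastic modulus and the tip geometry."""
    x = np.asarray(indentation, dtype=float) ** 1.5
    f = np.asarray(force, dtype=float)
    return float(np.dot(x, f) / np.dot(x, x))
```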

  10. MULTISTEP FINITE VOLUME APPROXIMATIONS TO THE TRANSIENT BEHAVIOR OF A SEMICONDUCTOR DEVICE ON GENERAL 2D OR 3D MESHES

    Institute of Scientific and Technical Information of China (English)

    Min Yang

    2007-01-01

In this paper, we consider a hydrodynamic model of the semiconductor device. The approximate solutions are obtained by a mixed finite volume method for the potential equation and multistep upwind finite volume methods for the concentration equations. Error estimates in some discrete norms are derived under some regularity assumptions on the exact solutions.

  11. The discrete ordinate method in association with the finite-volume method in non-structured mesh; Methode des ordonnees discretes associee a la methode des volumes finis en maillage non structure

    Energy Technology Data Exchange (ETDEWEB)

Le Dez, V.; Lallemand, M. [Ecole Nationale Superieure de Mecanique et d'Aerotechnique (ENSMA), 86 - Poitiers (France); Sakami, M.; Charette, A. [Quebec Univ., Chicoutimi, PQ (Canada). Dept. des Sciences Appliquees

    1996-12-31

A description is proposed of an efficient method for determining the radiant heat transfer field in a grey semi-transparent medium enclosed in a 2-D polygonal cavity whose surface boundaries reflect radiation in a purely diffuse manner, both at equilibrium and in radiation-conduction coupling situations. The technique simultaneously uses the finite-volume method on a non-structured triangular mesh, the discrete ordinate method and the ray shooting method. The main mathematical developments and comparative results with the discrete ordinate method in orthogonal curvilinear coordinates are included. (J.S.) 10 refs.

  12. Effect of additional sample bias in Meshed Plasma Immersion Ion Deposition (MPIID) on microstructural, surface and mechanical properties of Si-DLC films

    Science.gov (United States)

    Wu, Mingzhong; Tian, Xiubo; Li, Muqin; Gong, Chunzhi; Wei, Ronghua

    2016-07-01

Meshed Plasma Immersion Ion Deposition (MPIID) using cage-like hollow cathode discharge is a modified process of conventional PIID, but it allows the deposition of thick diamond-like carbon (DLC) films (up to 50 μm) at a high deposition rate (up to 6.5 μm/h). To further improve the DLC film properties, a new approach to the MPIID process is proposed, in which the energy of ions incident to the sample surface can be independently controlled by an additional voltage applied between the samples and the metal meshed cage. In this study, the meshed cage was biased with a pulsed DC power supply at -1350 V peak voltage for the plasma generation, while the samples inside the cage were biased with a DC voltage from 0 V to -500 V with respect to the cage to study its effect. Si-DLC films were synthesized with a mixture of Ar, C2H2 and tetramethylsilane (TMS). After the depositions, scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy and nanoindentation were used to study the morphology, surface roughness, chemical bonding and structure, and the surface hardness as well as the modulus of elasticity of the Si-DLC films. It was observed that the intense ion bombardment significantly densified the films, reduced the surface roughness, reduced the H and Si contents, and increased the nanohardness (H) and modulus of elasticity (E), whereas the deposition rate decreased slightly. Using the H and E data, high values of H3/E2 and H/E were obtained on the biased films, indicating the potential excellent mechanical and tribological properties of the films. In this paper, the effects of the sample bias voltage on the film properties are discussed in detail and the optimal bias voltage is presented.

  13. Effect of additional sample bias in Meshed Plasma Immersion Ion Deposition (MPIID) on microstructural, surface and mechanical properties of Si-DLC films

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Mingzhong [State Key Laboratory of Advanced Welding & Joining, Harbin Institute of Technology, Harbin 150001 (China); School of Materials Science & Engineering, Jiamusi University, Jiamusi 154007 (China); Tian, Xiubo, E-mail: xiubotian@163.com [State Key Laboratory of Advanced Welding & Joining, Harbin Institute of Technology, Harbin 150001 (China); Li, Muqin [School of Materials Science & Engineering, Jiamusi University, Jiamusi 154007 (China); Gong, Chunzhi [State Key Laboratory of Advanced Welding & Joining, Harbin Institute of Technology, Harbin 150001 (China); Wei, Ronghua [Southwest Research Institute, San Antonio, TX 78238 (United States)

    2016-07-15

Highlights: • A novel Meshed Plasma Immersion Ion Deposition is proposed. • The deposited Si-DLC films possess denser structures and high deposition rate. • It is attributed to ion bombardment of the deposited films. • The ion energy can be independently controlled by an additional bias (novel set up). - Abstract: Meshed Plasma Immersion Ion Deposition (MPIID) using cage-like hollow cathode discharge is a modified process of conventional PIID, but it allows the deposition of thick diamond-like carbon (DLC) films (up to 50 μm) at a high deposition rate (up to 6.5 μm/h). To further improve the DLC film properties, a new approach to the MPIID process is proposed, in which the energy of ions incident to the sample surface can be independently controlled by an additional voltage applied between the samples and the metal meshed cage. In this study, the meshed cage was biased with a pulsed DC power supply at −1350 V peak voltage for the plasma generation, while the samples inside the cage were biased with a DC voltage from 0 V to −500 V with respect to the cage to study its effect. Si-DLC films were synthesized with a mixture of Ar, C2H2 and tetramethylsilane (TMS). After the depositions, scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy and nanoindentation were used to study the morphology, surface roughness, chemical bonding and structure, and the surface hardness as well as the modulus of elasticity of the Si-DLC films. It was observed that the intense ion bombardment significantly densified the films, reduced the surface roughness, reduced the H and Si contents, and increased the nanohardness (H) and modulus of elasticity (E), whereas the deposition rate decreased slightly. Using the H and E data, high values of H3/E2 and H/E were obtained on the biased films, indicating the potential excellent mechanical and tribological properties of the films. In this paper, the effects of the sample bias voltage on the film properties are discussed in detail and the optimal bias voltage is presented.

  14. Surface meshing with curvature convergence

    KAUST Repository

    Li, Huibin

    2014-06-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.

  15. Sampling-based motion planning with reachable volumes: Theoretical foundations

    KAUST Repository

    McMahon, Troy

    2014-05-01

    © 2014 IEEE. We introduce a new concept, reachable volumes, that denotes the set of points that the end effector of a chain or linkage can reach. We show that the reachable volume of a chain is equivalent to the Minkowski sum of the reachable volumes of its links, and give an efficient method for computing reachable volumes. We present a method for generating configurations using reachable volumes that is applicable to various types of robots including open and closed chain robots, tree-like robots, and complex robots including both loops and branches. We also describe how to apply constraints (both on end effectors and internal joints) using reachable volumes. Unlike previous methods, reachable volumes work for spherical and prismatic joints as well as planar joints. Visualizations of reachable volumes can allow an operator to see what positions the robot can reach and can guide robot design. We present visualizations of reachable volumes for representative robots including closed chains and graspers as well as for examples with joint and end effector constraints.
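
    The Minkowski-sum property above has a particularly compact special case: for a planar serial chain with unconstrained revolute joints, the reachable volume folds down to an annulus around the base, computable in O(n). A small sketch of that case (our own illustration; the paper's construction additionally covers spherical and prismatic joints and 3D linkages):

    ```python
    def chain_reachable_annulus(link_lengths):
        """Reachable volume of a planar chain with unconstrained revolute joints.

        Each link's tip sweeps a circle of radius l about the previous joint, so
        the chain's reachable set is the Minkowski sum of those circles: an
        annulus [inner, outer] about the base, folded link by link.
        """
        inner, outer = 0.0, 0.0
        for l in link_lengths:
            # Minkowski sum of the annulus [inner, outer] with a circle of radius l
            inner, outer = max(0.0, inner - l, l - outer), outer + l
        return inner, outer
    ```

    For example, links of lengths [3, 1] give the annulus [2, 4]: the short distal link cannot fold the chain back closer to the base than radius 2.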

  16. A hybrid vertex-centered finite volume/element method for viscous incompressible flows on non-staggered unstructured meshes

    Institute of Scientific and Technical Information of China (English)

    Wei Gao; Ru-Xun Liu; Hong Li

    2012-01-01

This paper proposes a hybrid vertex-centered finite volume/finite element method for solution of the two dimensional (2D) incompressible Navier-Stokes equations on unstructured grids. An incremental pressure fractional step method is adopted to handle the velocity-pressure coupling. The velocity and the pressure are collocated at the node of the vertex-centered control volume which is formed by joining the centroid of cells sharing the common vertex. For the temporal integration of the momentum equations, an implicit second-order scheme is utilized to enhance the computational stability and eliminate the time step limit due to the diffusion term. The momentum equations are discretized by the vertex-centered finite volume method (FVM) and the pressure Poisson equation is solved by the Galerkin finite element method (FEM). The momentum interpolation is used to damp out the spurious pressure wiggles. The test case with analytical solutions demonstrates second-order accuracy of the current hybrid scheme in time and space for both velocity and pressure. The classic test cases, the lid-driven cavity flow, the skew cavity flow and the backward-facing step flow, show that numerical results are in good agreement with the published benchmark solutions.
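
    As a much-simplified illustration of the velocity-pressure decoupling a fractional step method performs, here is a non-incremental Chorin-style projection step on a doubly periodic grid with spectral derivatives (entirely our own sketch; the paper's scheme is incremental, implicit second-order, and vertex-centered FV/FEM on unstructured grids):

    ```python
    import numpy as np

    n, dt, nu = 64, 1e-3, 1e-2
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = np.sin(X) * np.cos(Y)            # Taylor-Green initial velocity field
    v = -np.cos(X) * np.sin(Y)

    k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers on [0, 2*pi)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    K2 = KX**2 + KY**2

    def d(f, K):                         # spectral derivative along one axis
        return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

    def lap(f):                          # spectral Laplacian
        return np.real(np.fft.ifft2(-K2 * np.fft.fft2(f)))

    for _ in range(20):
        # 1) provisional velocity u*: explicit advection + diffusion, no pressure
        us = u + dt * (-u * d(u, KX) - v * d(u, KY) + nu * lap(u))
        vs = v + dt * (-u * d(v, KX) - v * d(v, KY) + nu * lap(v))
        # 2) pressure Poisson equation: lap(p) = div(u*) / dt
        rhs_hat = np.fft.fft2((d(us, KX) + d(vs, KY)) / dt)
        p_hat = np.where(K2 > 0, -rhs_hat / np.maximum(K2, 1e-30), 0.0)
        p = np.real(np.fft.ifft2(p_hat))
        # 3) projection: subtract the pressure gradient to restore div(u) = 0
        u = us - dt * d(p, KX)
        v = vs - dt * d(p, KY)

    divergence = np.max(np.abs(d(u, KX) + d(v, KY)))
    ```

    After step 3 the discrete divergence is zero to machine precision, which is exactly what the pressure solve in any fractional step scheme is for; the incremental variant of the paper solves for a pressure correction instead of the full pressure.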

  17. A method for rapid sampling and characterization of smokeless powder using sorbent-coated wire mesh and direct analysis in real time - mass spectrometry (DART-MS).

    Science.gov (United States)

    Li, Frederick; Tice, Joseph; Musselman, Brian D; Hall, Adam B

    2016-09-01

Improvised explosive devices (IEDs) are often used by terrorists and criminals to create public panic and destruction, necessitating rapid investigative information. However, backlogs in many forensic laboratories resulting in part from time-consuming GC-MS and LC-MS techniques delay prompt analytical results. Direct analysis in real time - mass spectrometry (DART-MS) is a promising analytical technique that can address this challenge in the forensic science community by permitting rapid trace analysis of energetic materials. Therefore, we have designed a qualitative analytical approach that utilizes novel sorbent-coated wire mesh and dynamic headspace concentration to permit the generation of information-rich chemical attribute signatures (CAS) for trace energetic materials in smokeless powder with DART-MS. Sorbent-coated wire mesh improves the overall efficiency of capturing trace energetic materials in comparison to swabbing or vacuuming. Hodgdon Lil' Gun smokeless powder was used to optimize the dynamic headspace parameters. This method was compared to traditional GC-MS methods and validated using the NIST RM 8107 smokeless powder reference standard. Additives and energetic materials, notably nitroglycerin, were rapidly and efficiently captured by the Carbopack X wire mesh, followed by detection and identification using DART-MS. This approach has demonstrated the capability of generating comparable results with significantly reduced analysis time in comparison to GC-MS. All targeted components that can be detected by GC-MS were detected by DART-MS in less than a minute. Furthermore, DART-MS offers the advantage of detecting targeted analytes that are not amenable to GC-MS. The speed and efficiency associated with both the sample collection technique and DART-MS demonstrate an attractive and viable potential alternative to conventional techniques.

  18. Determination of cell electroporation in small-volume samples.

    Science.gov (United States)

    Saulis, Gintautas; Praneviciŭte, Rita

    2007-01-01

Exposure of cells to electric field pulses increases the cell membrane permeability. Intracellular potassium ions leak out of the cells through aqueous pores created in the membrane. This release is used here for the determination of the fraction of electroporated cells. To determine cell membrane electroporation in small-volume samples (40-50 μl), miniature potassium ion-selective and reference electrodes, with tip diameters of 1-1.5 mm and minimum immersion depths of 1 mm, were utilized. The obtained calibration graph was linear within the concentration range 0.2-100 mM. The slope was 50-51 and 53-56 mV per concentration order at 10-11 and 19-21 degrees C, respectively. The detection limit of the electrode was determined to be 0.05-0.08 mM; however, it was possible to work down to concentrations in the range of 0.01 mM. Experiments have been carried out on human erythrocytes exposed to a square-wave electric pulse with a duration of 0.1-2 ms. The extracellular potassium concentrations were in the range between 0.04-0.08 mM (intact cells) and 3-5 mM (100% electroporation). The obtained dependences of the fraction of electroporated cells on the pulse intensity were of a sigmoid shape. The dependence of the pulse amplitude required to electroporate 50% of cells on the pulse duration, obtained from the release of intracellular potassium ions, coincided with the one determined from the extent of hemolysis after 24-h incubation at low temperature.
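
    Two of the reported numbers can be reused directly: the electrode slope of 53-56 mV per decade at 19-21 degrees C is close to the Nernstian value 2.303RT/F ≈ 58 mV, and the electroporated fraction follows from linear interpolation of the measured extracellular K+ between the intact-cell baseline and the 100% plateau. A sketch of the latter (the default calibration points are mid-range assumptions from the abstract, not the authors' exact values, which are calibrated per experiment):

    ```python
    def electroporated_fraction(c_meas_mM, c_intact_mM=0.06, c_full_mM=4.0):
        """Fraction of electroporated cells from the extracellular K+ level,
        interpolating linearly between the intact-cell baseline (~0.04-0.08 mM)
        and the 100%-electroporation plateau (~3-5 mM), clipped to [0, 1]."""
        f = (c_meas_mM - c_intact_mM) / (c_full_mM - c_intact_mM)
        return min(1.0, max(0.0, f))
    ```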

  19. A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Guzik, S; McCorquodale, P; Colella, P

    2011-12-16

    A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.
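
    The time integrator named above is the classical fourth-order Runge-Kutta method; one step for a generic right-hand side f(t, y) is the standard four-stage update (a textbook sketch, not the paper's mapped-grid machinery):

    ```python
    def rk4_step(f, t, y, h):
        """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    ```

    Pairing a fourth-order time stepper with the fourth-order finite-volume space discretization keeps the overall scheme fourth-order accurate in both space and time.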

  20. SyPRID sampler: A large-volume, high-resolution, autonomous, deep-ocean precision plankton sampling system

    Science.gov (United States)

    Billings, Andrew; Kaiser, Carl; Young, Craig M.; Hiebert, Laurel S.; Cole, Eli; Wagner, Jamie K. S.; Van Dover, Cindy Lee

    2017-03-01

The current standard for large-volume (thousands of cubic meters) zooplankton sampling in the deep sea is the MOCNESS, a system of multiple opening-closing nets, typically lowered to within 50 m of the seabed and towed obliquely to the surface to obtain low-spatial-resolution samples that integrate across tens of meters of water depth. The SyPRID (Sentry Precision Robotic Impeller Driven) sampler is an innovative, deep-rated (6000 m) plankton sampler that partners with the Sentry Autonomous Underwater Vehicle (AUV) to obtain paired, large-volume plankton samples at specified depths and survey lines to within 1.5 m of the seabed and with simultaneous collection of sensor data. SyPRID uses a perforated Ultra-High-Molecular-Weight (UHMW) plastic tube to support a fine mesh net within an outer carbon composite tube (tube-within-a-tube design), with an axial flow pump located aft of the capture filter. The pump facilitates flow through the system and reduces or possibly eliminates the bow wave at the mouth opening. The cod end, a hollow truncated cone, is also made of UHMW plastic and includes a collection volume designed to provide an area where zooplankton can collect, out of the high flow region. SyPRID attaches as a saddle-pack to the Sentry vehicle. Sentry itself is configured with a flight control system that enables autonomous survey paths to low altitudes. In its verification deployment at the Blake Ridge Seep (2160 m) on the US Atlantic Margin, SyPRID was operated for 6 h at an altitude of 5 m. It recovered plankton samples, including delicate living larvae, from the near-bottom stratum that is seldom sampled by a typical MOCNESS tow. The prototype SyPRID and its next generations will enable studies of plankton or other particulate distributions associated with localized physico-chemical strata in the water column or above patchy habitats on the seafloor.

  1. Fire performance of basalt FRP mesh reinforced HPC thin plates

    DEFF Research Database (Denmark)

    Hulin, Thomas; Hodicky, Kamil; Schmidt, Jacob Wittrup;

    2013-01-01

    An experimental program was carried out to investigate the influence of basalt FRP (BFRP) reinforcing mesh on the fire behaviour of thin high performance concrete (HPC) plates applied to sandwich elements. Samples with BFRP mesh were compared to samples with no mesh, samples with steel mesh...

  2. EVALUATION OF VAPOR EQUILIBRATION AND IMPACT OF PURGE VOLUME ON SOIL-GAS SAMPLING RESULTS

    Science.gov (United States)

    Sequential sampling was utilized at the Raymark Superfund site to evaluate attainment of vapor equilibration and the impact of purge volume on soil-gas sample results. A simple mass-balance equation indicates that removal of three to five internal volumes of a sample system shou...
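
    The mass balance behind the three-to-five-volume rule: if the internal volume of the sample train is treated as a single well-mixed reservoir, the fraction of formation soil gas in the extracted stream after purging n internal volumes is 1 − e^(−n), about 95% at n = 3 and over 99% at n = 5. A sketch of that reading of the abstract's mass-balance argument (the well-mixed assumption is ours; plug flow would equilibrate faster):

    ```python
    import math

    def fresh_gas_fraction(purged_volumes):
        """Fraction of formation soil gas reaching the sample after purging,
        modeling the sample-train internal volume as one well-mixed reservoir:
        C/C0 = 1 - exp(-V_purged / V_internal)."""
        return 1.0 - math.exp(-purged_volumes)
    ```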

  3. Saliva sampling in global clinical studies: the impact of low sampling volume on performance of DNA in downstream genotyping experiments

    Science.gov (United States)

    2013-01-01

    Background The collection of viable DNA samples is an essential element of any genetics research programme. Biological samples for DNA purification are now routinely collected in many studies with a variety of sampling methods available. Initial observation in this study suggested a reduced genotyping success rate of some saliva derived DNA samples when compared to blood derived DNA samples prompting further investigation. Methods Genotyping success rate was investigated to assess the suitability of using saliva samples in future safety and efficacy pharmacogenetics experiments. The Oragene® OG-300 DNA Self-Collection kit was used to collect and extract DNA from saliva from 1468 subjects enrolled in global clinical studies. Statistical analysis evaluated the impact of saliva sample volume of collection on the quality, yield, concentration and performance of saliva DNA in genotyping assays. Results Across 13 global clinical studies that utilized the Oragene® OG-300 DNA Self-Collection kit there was variability in the volume of saliva sample collection with ~31% of participants providing 0.5 mL of saliva, rather than the recommended 2 mL. While the majority of saliva DNA samples provided high quality genotype data, collection of 0.5 mL volumes of saliva contributed to DNA samples being significantly less likely to pass genotyping quality control standards. Assessment of DNA sample characteristics that may influence genotyping outcomes indicated that saliva sample volume, DNA purity and turbidity were independently associated with sample genotype pass rate, but that saliva collection volume had the greatest effect. Conclusion When employing saliva sampling to obtain DNA, it is important to encourage all study participants to provide sufficient sample to minimize potential loss of data in downstream genotyping experiments. PMID:23759220

  4. Federal Radiological Monitoring and Assessment Center Monitoring Manual Volume 2, Radiation Monitoring and Sampling

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Aerial Measurement Systems

    2012-07-31

    The FRMAC Monitoring and Sampling Manual, Volume 2 provides standard operating procedures (SOPs) for field radiation monitoring and sample collection activities that are performed by the Monitoring group during a FRMAC response to a radiological emergency.

  5. Critical length sampling: a method to estimate the volume of downed coarse woody debris

    Science.gov (United States)

Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey

    2010-01-01

    In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...

  6. Automatic Mesh Generation of Hybrid Mesh on Valves in Multiple Positions in Feedline Systems

    Science.gov (United States)

    Ross, Douglass H.; Ito, Yasushi; Dorothy, Fredric W.; Shih, Alan M.; Peugeot, John

    2010-01-01

Fluid flow simulations through a valve often require evaluation of the valve in multiple opening positions. A mesh has to be generated for the valve for each position, and compounding the problem is the fact that the valve is typically part of a larger feedline system. In this paper, we propose to develop a system to create meshes for feedline systems with parametrically controlled valve openings. Herein we outline two approaches to generate the meshes for a valve in a feedline system at multiple positions. There are two issues that must be addressed. The first is the creation of the mesh on the valve for multiple positions. The second is the generation of the mesh for the total feedline system including the valve. For generation of the mesh on the valve, we will describe the use of topology matching and mesh generation parameter transfer. For generation of the total feedline system, we will describe two solutions that we have implemented. In both cases the valve is treated as a component in the feedline system. In the first method the geometry of the valve in the feedline system is replaced with a valve at a different opening position. Geometry is created to connect the valve to the feedline system. Then topology for the valve is created and the portion of the topology for the valve is topology matched to the standard valve in a different position. The mesh generation parameters are transferred and then the volume mesh for the whole feedline system is generated. The second method enables the user to generate the volume mesh on the valve in multiple open positions external to the feedline system, to insert it into the volume mesh of the feedline system, and to reduce the amount of computer time required for mesh generation because only two small volume meshes connecting the valve to the feedline mesh need to be updated.

  7. Hernia Surgical Mesh Implants

    Science.gov (United States)

… The majority of tissue used to produce these mesh implants is from a pig (porcine) or cow (bovine) …

  8. Urogynecologic Surgical Mesh Implants

    Science.gov (United States)

… The majority of tissue used to produce these mesh implants is from a pig (porcine) or cow (bovine). …

  9. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    Science.gov (United States)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time tn+1 for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time tn is scattered onto small sub-cells (Ns = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time tn. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time tn to time tn+1. The new sub-grid data at time tn+1 are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
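
    The detection step described above amounts to a relaxed discrete maximum principle check of the candidate solution against the previous solution and its neighbors, plus a floating-point sanity check. A 1D sketch of such a detector (our own simplification; the paper works on unstructured simplex meshes, uses vertex neighborhoods, and also enforces physical positivity):

    ```python
    import numpy as np

    def troubled_cells(u_old, u_candidate, eps=1e-7):
        """Flag cells whose candidate solution leaves a relaxed discrete maximum
        principle built from the old cell averages and their neighbors (1D)."""
        u_old = np.asarray(u_old, dtype=float)
        u_candidate = np.asarray(u_candidate, dtype=float)
        n = len(u_old)
        flagged = np.zeros(n, dtype=bool)
        for i in range(n):
            nb = u_old[max(i - 1, 0):min(i + 2, n)]   # cell i and its neighbors
            lo, hi = nb.min(), nb.max()
            slack = eps * max(1.0, hi - lo)            # relaxation of the bounds
            ok = (lo - slack <= u_candidate[i] <= hi + slack) \
                and np.isfinite(u_candidate[i])
            flagged[i] = not ok
        return flagged
    ```

    Flagged cells would then be recomputed with the robust sub-cell TVD finite volume update instead of keeping the unlimited DG candidate.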

  10. Parallel Adaptive Mesh Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  11. Sampling based motion planning with reachable volumes: Application to manipulators and closed chain systems

    KAUST Repository

    McMahon, Troy

    2014-09-01

© 2014 IEEE. Reachable volumes are a geometric representation of the regions the joints of a robot can reach. They can be used to generate constraint satisfying samples for problems including complicated linkage robots (e.g. closed chains and graspers). They can also be used to assist robot operators and to help in robot design. We show that reachable volumes have an O(1) complexity in unconstrained problems as well as in many constrained problems. We also show that reachable volumes can be computed in linear time and that reachable volume samples can be generated in linear time in problems without constraints. We experimentally validate reachable volume sampling, both with and without constraints on end effectors and/or internal joints. We show that reachable volume samples are less likely to be invalid due to self-collisions, making reachable volume sampling significantly more efficient for higher dimensional problems. We also show that these samples are easier to connect than others, resulting in better connected roadmaps. We demonstrate that our method can be applied to 262-dof, multi-loop, and tree-like linkages including combinations of planar, prismatic and spherical joints. In contrast, existing methods either cannot be used for these problems or do not produce good quality solutions.

  12. Comparison of uncertainties related to standardization of urine samples with volume and creatinine concentration

    DEFF Research Database (Denmark)

    Garde, Anne Helene; Hansen, Ase Marie; Kristiansen, Jesper;

    2004-01-01

    When measuring biomarkers in urine, volume (and time) or concentration of creatinine are both accepted methods of standardization for diuresis. Both types of standardization contribute uncertainty to the final result. The aim of the present paper was to compare the uncertainty introduced when using...... that the uncertainty associated with creatinine standardization (19-35%) was higher than the uncertainty related to volume standardization (up to 10%, when not correcting for deviations from 24 h) for 24 h urine samples. However, volume standardization introduced an average bias of 4% due to missed volumes...... in population studies. When studying a single 24 h sample from one individual, there was a 15-20% risk that the sample was incomplete. In this case a bias of approximately 25% was introduced when using volume standardization, whereas the uncertainty related to creatinine standardization was independent...
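
    The two standardizations being compared are simple to state: creatinine standardization divides the biomarker concentration by the creatinine concentration of the same sample, while volume standardization multiplies concentration by the collected volume, optionally rescaled when the collection deviates from 24 h. A sketch (the helper names and units are our own, for illustration only):

    ```python
    def creatinine_standardized(analyte_conc, creatinine_conc):
        """Biomarker expressed per unit creatinine; no volume measurement needed."""
        return analyte_conc / creatinine_conc

    def volume_standardized(analyte_conc, urine_volume_l, collection_hours=24.0):
        """Total excretion over the collection, rescaled to a nominal 24 h."""
        return analyte_conc * urine_volume_l * (24.0 / collection_hours)
    ```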

  13. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    Science.gov (United States)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality due to different interpolation schemes with emphasis on ELF mesh optimization procedures and we argue that the optimal fitting should be based on preliminary log-log scaling data transforms by which the non-uniformity of sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and also that its computational time is short. Thus, it is feasible using this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
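
    Steffen's spline is short enough to state in full: each node slope is a weighted mean of the adjacent secant slopes, limited to at most twice the smaller secant magnitude and set to zero at local extrema, and the curve is then the piecewise cubic Hermite through those slopes. A sketch (in Python for brevity; the paper's implementation is C++, and the simple one-sided end conditions here are our choice):

    ```python
    import numpy as np

    def steffen_slopes(x, y):
        """Node slopes per Steffen (1990): limited so the Hermite cubic never
        overshoots and keeps the local monotonicity of the data."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        h = np.diff(x)                       # interval widths
        d = np.diff(y) / h                   # secant slopes
        s = np.empty_like(y)
        s[0], s[-1] = d[0], d[-1]            # simple one-sided end conditions
        for i in range(1, len(x) - 1):
            if d[i - 1] * d[i] <= 0.0:
                s[i] = 0.0                   # local extremum: force a flat node
            else:
                p = (d[i - 1] * h[i] + d[i] * h[i - 1]) / (h[i - 1] + h[i])
                s[i] = np.sign(d[i]) * min(2 * abs(d[i - 1]), 2 * abs(d[i]), abs(p))
        return s

    def steffen_eval(x, y, s, xq):
        """Evaluate the piecewise cubic Hermite defined by nodes (x, y, s) at xq."""
        x, y, s = (np.asarray(a, float) for a in (x, y, s))
        xq = np.atleast_1d(np.asarray(xq, float))
        i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
        h = x[i + 1] - x[i]
        t = xq - x[i]
        dd = (y[i + 1] - y[i]) / h
        a = (s[i] + s[i + 1] - 2.0 * dd) / h**2
        b = (3.0 * dd - 2.0 * s[i] - s[i + 1]) / h
        return ((a * t + b) * t + s[i]) * t + y[i]
    ```

    For ELF data, the paper's log-log trick amounts to interpolating (log10 E, log10 ELF) pairs with this routine and exponentiating the result back.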

  14. A technique for sampling low shrub vegetation, by crown volume classes

    Science.gov (United States)

    Jay R. Bentley; Donald W. Seegrist; David A. Blakeman

    1970-01-01

    The effects of herbicides or other cultural treatments of low shrubs can be sampled by a new technique using crown volume as the key variable. Low shrubs were grouped in 12 crown volume classes with index values based on height times surface area of crown. The number of plants, by species, in each class is counted on quadrats. Many quadrats are needed for highly...

  15. The effects of different syringe volume, needle size and sample volume on blood gas analysis in syringes washed with heparin

    Science.gov (United States)

    Küme, Tuncay; Şişman, Ali Rıza; Solak, Ahmet; Tuğlu, Birsen; Çinkooğlu, Burcu; Çoker, Canan

    2012-01-01

Introduction: We evaluated the effect of different syringe volumes, needle sizes and sample volumes on blood gas analysis in syringes washed with heparin. Materials and methods: In this multi-step experimental study, percent dilution ratios (PDRs) and final heparin concentrations (FHCs) were calculated by the gravimetric method to determine the effect of syringe volume (1, 2, 5 and 10 mL), needle size (20, 21, 22, 25 and 26 G) and sample volume (0.5, 1, 2, 5 and 10 mL). The effects of different PDRs and FHCs on blood gas and electrolyte parameters were determined. Erroneous results from nonstandardized sampling were evaluated against RiliBÄK's total allowable error (TEa). Results: PDRs and FHCs increased as syringe volume decreased, as needle size increased, and as sample volume decreased: from 2.0% and 100 IU/mL in the 10 mL syringe to 7.0% and 351 IU/mL in the 1 mL syringe; from 4.9% and 245 IU/mL with 26 G to 7.6% and 380 IU/mL with 20 G needles combined with the 1 mL syringe; and from 2.0% and 100 IU/mL in a completely filled sample to 34% and 1675 IU/mL when only 0.5 mL was drawn into the 10 mL syringe. There was no statistically significant difference in pH, but the percent decreases in pCO2, K+, iCa2+ and iMg2+ and the percent increases in pO2 and Na+ were statistically significant compared with completely filled syringes. All changes in pH and pO2 were acceptable, but the changes in pCO2, Na+, K+ and iCa2+ exceeded the TEa limits except in completely filled syringes. Conclusions: Changes in PDRs and FHCs due to nonstandardized sampling into syringes washed with liquid heparin give rise to erroneous test results for pCO2 and electrolytes. PMID:22838185
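The relationship between the reported PDRs and FHCs is simple arithmetic: residual heparin in the syringe dead space is diluted into the drawn sample. The sketch below assumes a 5000 IU/mL liquid-heparin stock, an assumption consistent with the reported pairs (e.g. PDR 2.0% ↔ FHC ≈ 100 IU/mL) but not stated in the abstract; the dead-space volume is likewise hypothetical.

```python
def heparin_dilution(dead_volume_ml, sample_volume_ml, stock_iu_per_ml=5000.0):
    """Percent dilution ratio (PDR) and final heparin concentration (FHC)
    for a heparin-washed syringe. stock_iu_per_ml = 5000 is an assumption
    consistent with the reported PDR/FHC pairs, not a value from the paper."""
    pdr = 100.0 * dead_volume_ml / sample_volume_ml
    fhc = (pdr / 100.0) * stock_iu_per_ml
    return pdr, fhc

# ~0.2 mL of heparin left in a 10 mL syringe that is then filled completely
# reproduces the reported full-fill figures:
print(heparin_dilution(0.2, 10.0))   # → (2.0, 100.0)
# The same dead space with only 0.5 mL of sample drawn shows why partial
# filling is the worst case:
print(heparin_dilution(0.2, 0.5))    # → (40.0, 2000.0)
```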

  16. Effect of sample volume size and sampling method on feline longitudinal myocardial velocity profiles from color tissue Doppler imaging.

    Science.gov (United States)

    Granström, Sara; Pipper, Christian Bressen; Møgelvang, Rasmus; Sogaard, Peter; Willesen, Jakob Lundgren; Koch, Jørgen

    2012-12-01

The aims of this study were to compare the effects of sample volume (SV) size settings and sampling method on measurement variability and on peak systolic (s'), early diastolic (e') and late diastolic (a') longitudinal myocardial velocities using color tissue Doppler imaging (cTDI) in cats. Twenty cats with normal echocardiograms and 20 cats with hypertrophic cardiomyopathy were studied. We quantified and compared the empirical variance and average absolute values of s', e' and a' for three cardiac cycles using eight different SV settings (length 1, 2, 3 and 5 mm; width 1 and 2 mm) and three methods of sampling (end-diastolic sampling with manual tracking of the SV, end-systolic sampling without tracking, and random-frame sampling without tracking). No significant difference in empirical variance could be demonstrated between most of the tested SVs; however, the two settings with a length of 1 mm resulted in a significantly higher variance than all settings where the SV length exceeded 2 mm. Sampling method also had a significant effect on the variability of measurements (p = 0.003), with manual tracking obtaining the lowest variance. No difference in the average values of s', e' or a' could be found between any of the SV settings or sampling methods. Within the tested range of SV settings, an SV length of 1 mm resulted in higher measurement variability than SV lengths of 3 and 5 mm and should therefore be avoided. Manual tracking of the sample volume is recommended. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Tensile Behaviour of Welded Wire Mesh and Hexagonal Metal Mesh for Ferrocement Application

    Science.gov (United States)

    Tanawade, A. G.; Modhera, C. D.

    2017-08-01

Tension tests were conducted on welded wire mesh and hexagonal metal mesh. Welded mesh is available in the market in different sizes; two types were analysed, Ø 2.3 mm and Ø 2.7 mm welded mesh, with opening sizes of 31.75 mm × 31.75 mm and 25.4 mm × 25.4 mm respectively. Tensile strength tests were performed on samples of welded mesh in three orientations, namely 0°, 30° and 45° to the loading axis, and on hexagonal metal mesh of Ø 0.7 mm with a 19.05 mm × 19.05 mm opening. The objective of this study was to investigate the behaviour of the welded mesh and the hexagonal metal mesh. The results show that the tensile load-carrying capacity of the Ø 2.7 mm welded mesh in the 0° orientation is good compared with the Ø 2.3 mm mesh, and that the hexagonal metal mesh behaves in a ductile manner.

  18. Repetitive Cyclic Potentiodynamic Polarization Scan Results for Reduced Sample Volume Testing

    Energy Technology Data Exchange (ETDEWEB)

    LaMothe, Margaret E. [Washington River Protection Solutions, Richland, WA (United States)

    2016-03-15

    This report is the compilation of data gathered after repetitively testing simulated tank waste and a radioactive tank waste sample using a cyclic potentiodynamic polarization (CPP) test method to determine corrosion resistance of metal samples. Electrochemistry testing of radioactive tank samples is often used to assess the corrosion susceptibility and material integrity of waste tank steel. Repetitive testing of radiological tank waste is occasionally requested at 222-S Laboratory due to the limited volume of radiological tank sample received for testing.

  19. New Approach to Purging Monitoring Wells: Lower Flow Rates Reduce Required Purging Volumes and Sample Turbidity

    Science.gov (United States)

    It is generally accepted that monitoring wells must be purged to access formation water to obtain “representative” ground water quality samples. Historically anywhere from 3 to 5 well casing volumes have been removed prior to sample collection to evacuate the standing well water...

  20. An immunomagnetic separator for concentration of pathogenic micro-organisms from large volume samples

    Energy Technology Data Exchange (ETDEWEB)

    Rotariu, Ovidiu [School of Biological Sciences, University of Aberdeen, Cruickshank Building, St. Machar Drive, Aberdeen (United Kingdom) and National Institute of R-D for Technical Physics I.F.T. Iasi, Mangeron 47 Blvd., Iasi (Romania)]. E-mail: o.rotariu@abdn.ac.uk; Ogden, Iain D. [Department of Medical Microbiology, University of Aberdeen, Aberdeen (United Kingdom); MacRae, Marion [Department of Medical Microbiology, University of Aberdeen, Aberdeen (United Kingdom); Badescu, Vasile [National Institute of R-D for Technical Physics I.F.T. Iasi, Mangeron 47 Blvd., Iasi (Romania); Strachan, Norval J.C. [School of Biological Sciences, University of Aberdeen, Cruickshank Building, St. Machar Drive, Aberdeen (United Kingdom)

    2005-05-15

The standard method of immunomagnetic separation of pathogenic bacteria from food and environmental matrices processes 1 mL volumes. Pathogens present at low levels (<1 pathogenic bacterium per mL) will not be consistently detected by this method. Here a flow-through immunomagnetic separator (FTIMS) has been designed and tested to process large volume samples (>50 mL). Preliminary results show that between 70 and 113 times more Escherichia coli O157 are recovered compared with the standard 1 mL method.

  1. Socioeconomic status and the cerebellar grey matter volume. Data from a well-characterised population sample.

    Science.gov (United States)

    Cavanagh, Jonathan; Krishnadas, Rajeev; Batty, G David; Burns, Harry; Deans, Kevin A; Ford, Ian; McConnachie, Alex; McGinty, Agnes; McLean, Jennifer S; Millar, Keith; Sattar, Naveed; Shiels, Paul G; Tannahill, Carol; Velupillai, Yoga N; Packard, Chris J; McLean, John

    2013-12-01

The cerebellum is highly sensitive to adverse environmental factors throughout the life span. Socioeconomic deprivation has been associated with greater inflammatory and cardiometabolic risk and poor neurocognitive function. Given the increasing awareness of the association between early-life adversities and cerebellar structure, we aimed to explore the relationship between early-life socioeconomic status (ESES), current socioeconomic status (CSES) and cerebellar volume. T1-weighted MRI was used to create models of cerebellar grey matter volumes in 42 adult neurologically healthy males selected from the Psychological, Social and Biological Determinants of Ill Health study. The relationships between potential risk factors, including ESES and CSES, and cerebellar grey matter volumes were examined using multiple regression techniques. We also examined whether a greater multisystem physiological risk index, derived from inflammatory and cardiometabolic risk markers, mediated the relationship between socioeconomic status (SES) and cerebellar grey matter volume. Both ESES and CSES explained the greatest variance in cerebellar grey matter volume, with age and alcohol use as covariates in the model. Low CSES explained significant additional variance in grey matter decrease beyond low ESES. The multisystem physiological risk index mediated the relationship between both early-life and current SES and grey matter volume in the cerebellum. In a randomly selected sample of neurologically healthy males, poorer socioeconomic status was associated with a smaller cerebellar volume. Early and current socioeconomic status and the multisystem physiological risk index also apparently influence cerebellar volume. These findings provide data on the relationship between socioeconomic deprivation and a brain region highly sensitive to environmental factors.

  2. SAMPLING INTENSITY WITH FIXED PRECISION WHEN ESTIMATING VOLUME OF HUMAN BRAIN COMPARTMENTS

    Directory of Open Access Journals (Sweden)

    Rhiannon Maudsley

    2011-05-01

Cavalieri sampling and point counting are frequently applied in combination with magnetic resonance (MR) imaging to estimate the volume of human brain compartments. Current practice involves arbitrarily choosing the number of sections and the sampling intensity within each section, and subsequently applying error prediction formulae to estimate the precision. The aim of this study is to derive a reference table for researchers who are interested in estimating the volume of brain regions, namely grey matter, white matter, and their union, to a given precision. In particular, this table, which is based on subsampling of a large brain data set obtained from coronal MR images, offers a recommendation for the minimum number of sections and mean number of points per section required to achieve a pre-defined coefficient of error of the volume estimator. Further analysis on MR data from a second human brain shows that the recommended sampling intensity is appropriate.
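The Cavalieri point-counting estimator underlying such tables is V = T · (a/p) · ΣPᵢ, with section spacing T, grid area-per-point a/p, and Pᵢ the points counted on section i. A minimal sketch with made-up numbers (not values from the study):

```python
def cavalieri_volume(spacing_mm, area_per_point_mm2, counts):
    """Cavalieri point-counting volume estimate: V = T * (a/p) * sum(P_i)."""
    return spacing_mm * area_per_point_mm2 * sum(counts)

# Illustrative only: 5 mm between coronal sections, a test grid with
# 25 mm^2 per point, and point counts over 10 sections.
counts = [12, 30, 44, 51, 55, 52, 46, 33, 18, 6]
print(cavalieri_volume(5.0, 25.0, counts))  # → 43375.0 (mm^3)
```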

  3. Comparison of uncertainties related to standardization of urine samples with volume and creatinine concentration

    DEFF Research Database (Denmark)

    Garde, Anne Helene; Hansen, Ase Marie; Kristiansen, Jesper

    2004-01-01

When measuring biomarkers in urine, volume (and time) or concentration of creatinine are both accepted methods of standardization for diuresis. Both types of standardization contribute uncertainty to the final result. The aim of the present paper was to compare the uncertainty introduced when using the two types of standardization on 24 h samples from healthy individuals. Estimates of uncertainties were based on results from the literature supplemented with data from our own studies. Only the difference in uncertainty related to the two standardization methods was evaluated. It was found ... increase in convenience for the participants when collecting small volumes rather than complete 24 h samples.

  4. 21st International Meshing Roundtable

    CERN Document Server

    Weill, Jean-Christophe

    2013-01-01

    This volume contains the articles presented at the 21st International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on October 7–10, 2012 in San Jose, CA, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics, and visualization.

  5. Automatic Scheme Selection for Toolkit Hex Meshing

    Energy Technology Data Exchange (ETDEWEB)

    TAUTGES,TIMOTHY J.; WHITE,DAVID R.

    1999-09-27

    Current hexahedral mesh generation techniques rely on a set of meshing tools, which when combined with geometry decomposition leads to an adequate mesh generation process. Of these tools, sweeping tends to be the workhorse algorithm, accounting for at least 50% of most meshing applications. Constraints which must be met for a volume to be sweepable are derived, and it is proven that these constraints are necessary but not sufficient conditions for sweepability. This paper also describes a new algorithm for detecting extruded or sweepable geometries. This algorithm, based on these constraints, uses topological and local geometric information, and is more robust than feature recognition-based algorithms. A method for computing sweep dependencies in volume assemblies is also given. The auto sweep detect and sweep grouping algorithms have been used to reduce interactive user time required to generate all-hexahedral meshes by filtering out non-sweepable volumes needing further decomposition and by allowing concurrent meshing of independent sweep groups. Parts of the auto sweep detect algorithm have also been used to identify independent sweep paths, for use in volume-based interval assignment.

  6. Mesh optimization for microbial fuel cell cathodes constructed around stainless steel mesh current collectors

    KAUST Repository

    Zhang, Fang

    2011-02-01

Mesh current collectors made of stainless steel (SS) can be integrated into microbial fuel cell (MFC) cathodes constructed of a reactive carbon black and Pt catalyst mixture and a poly(dimethylsiloxane) (PDMS) diffusion layer. It is shown here that the mesh properties of these cathodes can significantly affect performance. Cathodes made from the coarsest mesh (30-mesh) achieved the highest maximum power of 1616 ± 25 mW m⁻² (normalized to cathode projected surface area; 47.1 ± 0.7 W m⁻³ based on liquid volume), while the finest mesh (120-mesh) had the lowest power density (599 ± 57 mW m⁻²). Electrochemical impedance spectroscopy showed that charge transfer and diffusion resistances decreased with increasing mesh opening size. In MFC tests, the cathode performance was primarily limited by reaction kinetics, and not mass transfer. Oxygen permeability increased with mesh opening size, accounting for the decreased diffusion resistance. At higher current densities, diffusion became a limiting factor, especially for fine mesh with low oxygen transfer coefficients. These results demonstrate the critical nature of the mesh size used for constructing MFC cathodes. © 2010 Elsevier B.V. All rights reserved.

  7. Mesh network simulation

    OpenAIRE

    Pei Ping; YURY N. PETRENKO

    2015-01-01

A mesh network simulation framework that provides a powerful and concise modeling chain for a network structure is introduced in this report. Mesh networks have a special topological structure. The paper investigates message transfer in wireless mesh network simulation and how it compares with cellular network simulation. The experimental results show that mesh networks follow a different transmission principle from cellular networks, and multi...

  8. Comparison of the effects of two bongo net mesh sizes on the estimation of abundance and size of Engraulidae eggs

    Directory of Open Access Journals (Sweden)

    Jana Menegassi del Favero

    2015-06-01

Studies of ichthyoplankton retention by nets of different mesh sizes are important because they help in choosing a sampler when planning collection and in establishing correction factors. These factors make it possible to compare studies performed with nets of different mesh sizes. In most studies of mesh retention of fish eggs, taxonomic identification is done at the family level, resulting in the loss of detailed information. We separated Engraulidae eggs, obtained with 0.333 mm and 0.505 mm mesh bongo nets at 172 oceanographic stations in the southeastern Brazilian Bight, into four groups based on their morphometric characteristics. The difference in the abundance of eggs caught by the two nets was not significant for the groups with the highest volume, types A and B, but was significant for type C (Engraulis anchoita), the most eccentric, and for type D, of the smallest volume. However, no significant difference was observed in the size of eggs sampled by each net for E. anchoita and type D, which exhibited higher abundance in the 0.333 mm mesh net and a minor axis varying from 0.45 to 0.71 mm, smaller than both the 0.505 mm mesh aperture and its mesh diagonal.

  9. Mesh generation in archipelagos

    NARCIS (Netherlands)

    Terwisscha van Scheltinga, A.; Myers, P.G.; Pietrzak, J.D.

    2012-01-01

    A new mesh size field is presented that is specifically designed for efficient meshing of highly irregular oceanic domains: archipelagos. The new approach is based on the standard mesh size field that uses the proximity to the nearest coastline. Here, the proximities to the two nearest coastlines

  10. The electrical breakdown strength of pre-stretched elastomers, with and without sample volume conservation

    DEFF Research Database (Denmark)

    Zakaria, Shamsul Bin; Morshuis, Peter H. F.; Yahia, Benslimane Mohamed;

    2015-01-01

    strength of polydimethylsiloxane (PDMS) elastomers. Breakdown strength was determined for samples with and without volume conservation and was found to depend strongly on the stretch ratio and the thickness of thesamples. PDMS elastomers are shown to increase breakdown strength by a factor of ∼3 when...

  11. Finite element mesh generation

    CERN Document Server

    Lo, Daniel SH

    2014-01-01

    Highlights the Progression of Meshing Technologies and Their ApplicationsFinite Element Mesh Generation provides a concise and comprehensive guide to the application of finite element mesh generation over 2D domains, curved surfaces, and 3D space. Organised according to the geometry and dimension of the problem domains, it develops from the basic meshing algorithms to the most advanced schemes to deal with problems with specific requirements such as boundary conformity, adaptive and anisotropic elements, shape qualities, and mesh optimization. It sets out the fundamentals of popular techniques

  12. Determination of air-loop volume and radon partition coefficient for measuring radon in water sample.

    Science.gov (United States)

    Lee, Kil Yong; Burnett, William C

    A simple method for the direct determination of the air-loop volume in a RAD7 system as well as the radon partition coefficient was developed allowing for an accurate measurement of the radon activity in any type of water. The air-loop volume may be measured directly using an external radon source and an empty bottle with a precisely measured volume. The partition coefficient and activity of radon in the water sample may then be determined via the RAD7 using the determined air-loop volume. Activity ratios instead of absolute activities were used to measure the air-loop volume and the radon partition coefficient. In order to verify this approach, we measured the radon partition coefficient in deionized water in the temperature range of 10-30 °C and compared the values to those calculated from the well-known Weigel equation. The results were within 5 % variance throughout the temperature range. We also applied the approach for measurement of the radon partition coefficient in synthetic saline water (0-75 ppt salinity) as well as tap water. The radon activity of the tap water sample was determined by this method as well as the standard RAD-H2O and BigBottle RAD-H2O. The results have shown good agreement between this method and the standard methods.
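For reference, the Weigel equation used as the benchmark in this record gives the radon partition (Ostwald) coefficient of pure water as a function of temperature. A small sketch (the equation is standard; the temperatures shown are simply the range examined in the paper):

```python
import math

def weigel_partition_coefficient(temp_c):
    """Radon partition (Ostwald) coefficient of water after Weigel (1978):
    k = 0.105 + 0.405 * exp(-0.0502 * T), with T in degrees Celsius."""
    return 0.105 + 0.405 * math.exp(-0.0502 * temp_c)

# Over the 10-30 °C range examined in the paper the coefficient falls
# noticeably, which is why temperature must be known when converting the
# measured air-loop activity to radon-in-water activity:
for t in (10, 20, 30):
    print(t, round(weigel_partition_coefficient(t), 3))
```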

  13. Accurately measuring volume of soil samples using low cost Kinect 3D scanner

    Science.gov (United States)

    van der Sterre, Boy-Santhos; Hut, Rolf; van de Giesen, Nick

    2013-04-01

The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture from samples is determining the volume of the sample. Currently this is mostly done by the very time-consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the low-cost game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time-consuming and less error-prone than using a sand cone.
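The volume retrieval itself reduces to differencing two gridded surface scans. The sketch below assumes the before/after depth maps have already been registered and resampled to a common grid (the hard part of a real Kinect workflow); the toy grids and cell size are hypothetical.

```python
def excavated_volume(height_before, height_after, cell_area_cm2):
    """Volume removed between two gridded surface scans: sum the per-cell
    drop in surface height times the cell footprint. Cells where material
    was added (spill-over) are ignored via max(..., 0)."""
    vol = 0.0
    for row_b, row_a in zip(height_before, height_after):
        for hb, ha in zip(row_b, row_a):
            vol += max(hb - ha, 0.0) * cell_area_cm2
    return vol

# 3x3 toy grids, heights in cm, 1 cm^2 cells: a 2 cm deep, one-cell-wide
# hole dug in the middle of a flat surface.
before = [[5.0, 5.0, 5.0], [5.0, 5.0, 5.0], [5.0, 5.0, 5.0]]
after  = [[5.0, 5.0, 5.0], [5.0, 3.0, 5.0], [5.0, 5.0, 5.0]]
print(excavated_volume(before, after, 1.0))  # → 2.0 (cm^3)
```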

  14. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  15. Reliable measurements for an image-derived sample volume in an open-configuration MR system

    Science.gov (United States)

    Han, Bong Soo; Lee, Man Woo; Hong, Cheolpyo

    2016-11-01

Open-configuration magnetic resonance (MR) systems are becoming desirable for volume measurements of off-center samples due to their non-claustrophobic system configuration, excellent soft-tissue contrast, high efficiency, and low cost. However, geometric distortion is produced by the unwanted background magnetic field and hinders volume measurements. The present study describes the characteristics of geometric distortion in off-center samples, such as measurements of thigh muscle and adipose tissue, using an open-type MR system. The American Association of Physicists in Medicine uniformity and linearity phantom was used for the detection and evaluation of the geometric distortion. The geometric distortion decreased near the isocenter and increased toward the off-center. A cylindrical phantom image was acquired at the isocenter and used as the distortion-free reference image. Two cylindrical phantoms were scanned off-center at a position analogous to that of the human thigh. The differences between the two cylindrical phantom volumes and the reference volume were 1.62 ± 0.16 % and 5.18 ± 0.14 %. Off-center MR imaging therefore requires careful consideration during image interpretation and volumetric assessment of tissue.

  16. Local Environmental Dependence of Galaxy Properties in a Volume-Limited Sample of Main Galaxies

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Using a volume-limited sample of Main Galaxies from SDSS Data Release 5, we investigate the dependence of galaxy properties on local environment. For each galaxy, a local three-dimensional density is calculated. We find that the galaxy morphological type depends strongly on the local environment: galaxies in dense environments have predominantly early type morphologies. Galaxy colors have only a weak dependence on the environment. This puts an important constraint on the process of galaxy formation.

  17. ON MOBILE MESH NETWORKS

    OpenAIRE

    2015-01-01

With the advances in mobile computing technologies and the growth of the Net, mobile mesh networks are going through a set of important evolutionary steps. In this paper, we survey architectural aspects of mobile mesh networks and their use cases and deployment models. We also survey challenging areas of mobile mesh networks and describe our vision of promising mobile services. This paper presents basic introductory material for Masters students of the Open Information Technologies Lab interested in m...

  18. Hydrodynamic simulations on a moving Voronoi mesh

    CERN Document Server

    Springel, Volker

    2011-01-01

    At the heart of any method for computational fluid dynamics lies the question of how the simulated fluid should be discretized. Traditionally, a fixed Eulerian mesh is often employed for this purpose, which in modern schemes may also be adaptively refined during a calculation. Particle-based methods on the other hand discretize the mass instead of the volume, yielding an approximately Lagrangian approach. It is also possible to achieve Lagrangian behavior in mesh-based methods if the mesh is allowed to move with the flow. However, such approaches have often been fraught with substantial problems related to the development of irregularity in the mesh topology. Here we describe a novel scheme that eliminates these weaknesses. It is based on a moving unstructured mesh defined by the Voronoi tessellation of a set of discrete points. The mesh is used to solve the hyperbolic conservation laws of ideal hydrodynamics with a finite volume approach, based on a second-order Godunov scheme with an exact Riemann solver. A...

  19. Rotation-translation device for condensed-phase spectroscopy with small sample volumes

    Science.gov (United States)

    Nuernberger, Patrick; Krampert, Gerhard; Brixner, Tobias; Vogt, Gerhard

    2006-08-01

We present and characterize an experimental device for optical spectroscopy with small sample volumes contained in a thin film. Employing rotational and translational motion, the sample transport speeds are high enough to offer a new sample volume for each interaction in time-resolved spectroscopy experiments working at a 1 kHz repetition rate. This is especially suited for ultrafast femtosecond spectroscopy such as transient absorption spectroscopy or fluorescence upconversion. To reduce photodegradation and effects from local thermal heating, a large sample area is scanned, in contrast to conventional devices with rotation-only or translation-only movement. For characterization of the setup, transient absorption experiments were carried out using both the rotation-translation device and a conventional flow-cell setup, which exhibited similar signal-to-noise ratios. The effects of photodegradation and diffusion are also investigated, demonstrating the suitability of the device for time-resolved spectroscopic experiments. The transient absorption data show that the setup is well suited for biomolecular samples, which are often only available in small amounts and are very sensitive to thermal heating.

  20. Assignment of fields from particles to mesh

    CERN Document Server

    Duque, Daniel

    2016-01-01

    In Computational Fluid Dynamics there have been many attempts to combine the power of a fixed mesh on which to carry out spatial calculations with that of a set of particles that moves following the velocity field. These ideas indeed go back to Particle-in-Cell methods, proposed about 60 years ago. Of course, some procedure is needed to transfer field information between particles and mesh. There are many possible choices for this "assignment", or "projection". Several requirements may guide this choice. Two well-known ones are conservativity and stability, which apply to volume integrals of the fields. An additional one is here considered: preservation of information. This means that mesh interpolation, followed by mesh assignment, should leave the field values invariant. The resulting methods are termed "mass" assignments due to their strong similarities with the Finite Element Method. We test several procedures, including the well-known FLIP, on three scenarios: simple 1D convection, 2D convection of Zales...
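A concrete example of a conservative particle-to-mesh assignment is the classic cloud-in-cell (CIC) kernel from Particle-in-Cell methods, a simpler scheme than the "mass" assignments the paper proposes: each particle splits its mass linearly between the two nearest cell centres of a periodic 1D mesh, so the total mass on the mesh equals the total particle mass exactly.

```python
def cic_assign(positions, masses, n_cells, box_length):
    """1D cloud-in-cell (CIC) assignment of particle masses to a periodic
    mesh. Linear (tent) weights guarantee conservativity: the mesh field
    sums to the total particle mass."""
    dx = box_length / n_cells
    field = [0.0] * n_cells
    for x, m in zip(positions, masses):
        s = x / dx - 0.5              # position in cell-centre units
        i = int(s // 1)               # index of the left cell (may be -1)
        frac = s - i                  # linear weight of the right cell
        field[i % n_cells] += m * (1.0 - frac)
        field[(i + 1) % n_cells] += m * frac
    return field

# Three particles on an 8-cell periodic mesh of length 8:
parts_x = [0.6, 2.5, 7.9]
parts_m = [1.0, 2.0, 0.5]
rho = cic_assign(parts_x, parts_m, n_cells=8, box_length=8.0)
print(sum(rho))  # → 3.5, the total particle mass (conservativity)
```

The particle at x = 2.5 sits exactly on a cell centre and deposits all of its mass into that cell; the particle at x = 7.9 wraps around the periodic boundary, splitting its mass between the last and first cells.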

  1. Influence of Sample Volume and Solvent Evaporation on Absorbance Spectroscopy in a Microfluidic "Pillar-Cuvette".

    Science.gov (United States)

    Kriel, Frederik H; Priest, Craig

    2016-01-01

Spectroscopic analysis of solutions containing samples at high concentrations or molar absorptivity can present practical challenges. In absorbance spectroscopy, short optical path lengths or multiple dilutions are required to bring the measured absorbance into the range of the Beer's law calibration. We have previously reported an open "pillar-cuvette" with a micropillar array that fills spontaneously with a precise (nL or μL) volume to create a well-defined optical path of, for example, 10 to 20 μm. Evaporation should not be ignored for open cuvettes and, herein, the volume of loaded sample and the rate of evaporation from the cuvette are studied. It was found that the volume of loaded sample (between 1 and 10 μL) had no effect on the Beer's law calibration for methyl orange solutions (molar absorptivity of (2.42 ± 0.02) × 10⁴ L mol⁻¹ cm⁻¹) for cuvettes with a 14.2 ± 0.2 μm path length. Evaporation rates of water from methyl orange solutions were between 2 and 5 nL s⁻¹ (30-40% relative humidity; 23 °C), depending on the sample concentration and ambient conditions. Evaporation could be reduced by placing a cover slip several millimeters above the cuvette. Importantly, the results show that a "drop-and-measure" method (measurement within ∼3 s of cuvette loading) eliminates the need for extrapolation of the absorbance-time data for accurate analysis of samples.
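The micrometre-scale path length is what keeps concentrated samples on the Beer's law calibration: c = A / (ε·l). A quick check with the cuvette parameters reported above (the absorbance value itself is hypothetical):

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_um):
    """Beer's law, c = A / (eps * l), with the micrometre path length of a
    pillar-cuvette converted to centimetres to match eps in L mol^-1 cm^-1."""
    path_cm = path_length_um * 1e-4
    return absorbance / (molar_absorptivity * path_cm)

# Methyl orange in a 14.2 um cuvette with eps = 2.42e4 L mol^-1 cm^-1:
# a hypothetical absorbance of 0.5 corresponds to a ~15 mM solution --
# far above what a standard 1 cm cuvette could measure without dilution.
c = concentration_from_absorbance(0.5, 2.42e4, 14.2)
print(round(c * 1000, 2), "mM")  # → 14.55 mM
```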

  2. Comparison of estimates of hardwood bole volume using importance sampling, the centroid method, and some taper equations

    Science.gov (United States)

    Harry V., Jr. Wiant; Michael L. Spangler; John E. Baumgras

    2002-01-01

    Various taper systems and the centroid method were compared to unbiased volume estimates made by importance sampling for 720 hardwood trees selected throughout the state of West Virginia. Only the centroid method consistently gave volumes estimates that did not differ significantly from those made by importance sampling, although some taper equations did well for most...

  3. Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 2, Sample preparation methods

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    This volume contains the interim change notice for sample preparation methods. Covered are: acid digestion for metals analysis, fusion of Hanford tank waste solids, water leach of sludges/soils/other solids, extraction procedure toxicity (simulate leach in landfill), sample preparation for gamma spectroscopy, acid digestion for radiochemical analysis, leach preparation of solids for free cyanide analysis, aqueous leach of solids for anion analysis, microwave digestion of glasses and slurries for ICP/MS, toxicity characteristic leaching extraction for inorganics, leach/dissolution of activated metal for radiochemical analysis, extraction of single-shell tank (SST) samples for semi-VOC analysis, preparation and cleanup of hydrocarbon-containing samples for VOC and semi-VOC analysis, receiving of waste tank samples in onsite transfer cask, receipt and inspection of SST samples, receipt and extrusion of core samples at 325A shielded facility, cleaning and shipping of waste tank samplers, homogenization of solutions/slurries/sludges, and test sample preparation for bioassay quality control program.

  4. 2D Mesh Manipulation

    Science.gov (United States)

    2011-11-01

    triangles in two dimensions and tetrahedra (tets) in three dimensions. There are many other ways to discretize a region using unstructured meshes, but this... The boundary points associated with the airfoil surface were moved, but all of the interior points remained stationary, which resulted in a mesh

  5. An Adaptive Mesh Algorithm: Mapping the Mesh Variables

    Energy Technology Data Exchange (ETDEWEB)

    Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-25

    Both thermodynamic and kinematic variables must be mapped. The kinematic variables are defined on a separate kinematic mesh; it is the dual mesh to the thermodynamic mesh. The map of the kinematic variables is done by calculating the contributions of kinematic variables on the old thermodynamic mesh, mapping the kinematic variable contributions onto the new thermodynamic mesh and then synthesizing the mapped kinematic variables on the new kinematic mesh. In this document the map of the thermodynamic variables will be described.

  6. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user-selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices, and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high-dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality-reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied to simulation and complicated seismic datasets. © 2013 IEEE.
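The KDE step described above can be illustrated with a minimal sketch using synthetic two-variable data; this is not the authors' implementation, and the selection threshold is an assumed parameter.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for user-picked (value1, value2) samples of one feature.
rng = np.random.default_rng(0)
picked = rng.normal(loc=[0.3, 0.7], scale=0.05, size=(200, 2)).T  # shape (dims, n)

# A kernel density estimate over the picked samples defines the feature.
kde = gaussian_kde(picked)

# Classify candidate voxel values by their density under that estimate.
voxels = np.array([[0.3, 0.7], [0.9, 0.1]]).T   # one inside, one far away
density = kde(voxels)
selected = density > 0.1 * density.max()        # assumed threshold
print(selected)
```

Thresholding the estimated density is one simple way to turn the user-picked samples into a binary feature mask; the paper's HDTF construction is more elaborate.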

  7. Enrichment of diluted cell populations from large sample volumes using 3D carbon-electrode dielectrophoresis.

    Science.gov (United States)

    Islam, Monsur; Natu, Rucha; Larraga-Martinez, Maria Fernanda; Martinez-Duarte, Rodrigo

    2016-05-01

    Here, we report on an enrichment protocol using carbon electrode dielectrophoresis to isolate and purify a targeted cell population from sample volumes up to 4 ml. We aim at trapping, washing, and recovering an enriched cell fraction that will facilitate downstream analysis. We used an increasingly diluted sample of yeast, 10⁶–10² cells/ml, to demonstrate the isolation and enrichment of a few cells at increasing flow rates. A maximum average enrichment of 154.2 ± 23.7 times was achieved when the sample flow rate was 10 μl/min and yeast cells were suspended in low electrically conductive media that maximize dielectrophoretic trapping. A COMSOL Multiphysics model allowed for the comparison between experimental and simulation results. The discrepancies between these results, and how the model can be further improved, are discussed.

  8. Automated high-volume aerosol sampling station for environmental radiation monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Toivonen, H.; Honkamaa, T.; Ilander, T.; Leppaenen, A.; Nikkinen, M.; Poellaenen, R.; Ylaetalo, S

    1998-07-01

    An automated high-volume aerosol sampling station, known as CINDERELLA.STUK, for environmental radiation monitoring has been developed by the Radiation and Nuclear Safety Authority (STUK), Finland. The sample is collected on a glass fibre filter (attached to a cassette); the airflow through the filter is 800 m³/h at maximum. During the sampling, the filter is continuously monitored with NaI scintillation detectors. After the sampling, the large filter is automatically cut into 15 pieces that form a small sample and, after ageing, the pile of filter pieces is moved onto an HPGe detector. These actions are performed automatically by a robot. The system is operated at a duty cycle of 1 d sampling, 1 d decay and 1 d counting. Minimum detectable concentrations of radionuclides in air are typically 1–10 × 10⁻⁶ Bq/m³. The station is equipped with various sensors to reveal unauthorized admittance. These sensors can be monitored remotely in real time via the Internet or telephone lines. The processes and operation of the station are monitored and partly controlled by computer. The present approach fulfils the requirements of the CTBTO for aerosol monitoring. The concept is also well suited for nuclear material safeguards.

  9. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    Directory of Open Access Journals (Sweden)

    L. Renbaum-Wolff

    2012-10-01

    Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10⁻³ and 10³ Pascal seconds (Pa s) in samples that cover a range of chemical properties, with real-time relative humidity and temperature control; hence, the technique should be well suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.

  10. Calcium isolation from large-volume human urine samples for 41Ca analysis by accelerator mass spectrometry.

    Science.gov (United States)

    Miller, James J; Hui, Susanta K; Jackson, George S; Clark, Sara P; Einstein, Jane; Weaver, Connie M; Bhattacharyya, Maryka H

    2013-08-01

    Calcium oxalate precipitation is the first step in preparation of biological samples for 41Ca analysis by accelerator mass spectrometry. A simplified protocol for large-volume human urine samples was characterized, with statistically significant increases in ion current and decreases in interference. This large-volume assay minimizes cost and effort and maximizes the time after 41Ca administration during which human samples, collected over a lifetime, provide 41Ca:Ca ratios that are significantly above background.

  11. Calcium Isolation from Large-Volume Human Urine Samples for 41Ca Analysis by Accelerator Mass Spectrometry

    Science.gov (United States)

    Miller, James J; Hui, Susanta K; Jackson, George S; Clark, Sara P; Einstein, Jane; Weaver, Connie M; Bhattacharyya, Maryka H

    2013-01-01

    Calcium oxalate precipitation is the first step in preparation of biological samples for 41Ca analysis by accelerator mass spectrometry. A simplified protocol for large-volume human urine samples was characterized, with statistically significant increases in ion current and decreases in interference. This large-volume assay minimizes cost and effort and maximizes time after 41Ca administration during which human samples, collected over a lifetime, provide 41Ca:Ca ratios that are significantly above background. PMID:23672965

  12. Relationship between LIBS Ablation and Pit Volume for Geologic Samples: Applications for in situ Absolute Geochronology

    Science.gov (United States)

    Devismes, D.; Cohen, Barbara A.

    2014-01-01

    In planetary sciences, in situ absolute geochronology is a scientific and engineering challenge. Currently, the age of the Martian surface can only be determined by crater density counting. However, this method has significant uncertainties and needs to be calibrated with absolute ages. We are developing an instrument to acquire in situ absolute geochronology based on the K-Ar method. The protocol is based on the laser ablation of a rock by hundreds of laser pulses. Laser Induced Breakdown Spectroscopy (LIBS) gives the potassium content of the ablated material and a mass spectrometer (quadrupole or ion trap) measures the quantity of 40Ar released. In order to accurately measure the quantity of released 40Ar in cases where Ar is an atmospheric constituent (e.g., Mars), the sample is first put into a chamber under high vacuum. The 40Ar quantity, the concentration of K and the estimate of the ablated mass are the parameters needed to give the age of the rocks. The main uncertainties with this method are directly linked to the measurements of the mass (typically a few µg) and of the concentration of K by LIBS (up to 10%). Because the ablated mass is small compared to the mass of the sample, and because material is redeposited onto the sample after ablation, it is not possible to directly measure the ablated mass. Our current protocol measures the ablated volume and estimates the sample density to calculate the ablated mass. The precision and accuracy of this method may be improved by using knowledge of the sample's geologic properties to predict its response to laser ablation, i.e., understanding whether natural samples have a predictable relationship between laser energy deposited and resultant ablation volume. In contrast to most previous studies of laser ablation, theoretical equations are not highly applicable. 
The reasons are numerous, but the most important are: a) geologic rocks are complex, polymineralic materials; b) the conditions of ablation are unusual (for example
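The K-Ar age computation that these measurements feed can be sketched as follows; the decay constants and 40K abundance are standard values, while the function name and the sample numbers in the usage line are illustrative assumptions, not values from the abstract.

```python
import math

LAMBDA_TOTAL = 5.543e-10   # total 40K decay constant, 1/yr (standard value)
LAMBDA_EC = 0.581e-10      # branch producing 40Ar, 1/yr (standard value)
K40_FRACTION = 1.167e-4    # mole fraction of 40K in natural potassium

def k_ar_age(n_ar40_mol, ablated_mass_g, k_mass_fraction):
    """K-Ar age (years) from radiogenic 40Ar (mol), ablated mass and K content."""
    n_k40 = ablated_mass_g * k_mass_fraction / 39.0983 * K40_FRACTION
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * n_ar40_mol / n_k40)

# Illustrative: 10 ug ablated, 1 wt% K, 2.32e-14 mol of 40Ar -> roughly 1 Gyr
age = k_ar_age(2.32e-14, 1.0e-5, 0.01)
print(f"{age:.2e} yr")
```

The formula makes clear why the mass and K-concentration uncertainties dominate: both enter the age directly through the 40K mole count in the denominator.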

  13. Tracer techniques for urine volume determination and urine collection and sampling back-up system

    Science.gov (United States)

    Ramirez, R. V.

    1971-01-01

    The feasibility, functionality, and overall accuracy of using lithium as a chemical tracer in urine were investigated as a means of indirectly determining total urine volume by atomic absorption spectrophotometry. Experiments were conducted to investigate the parameters of instrumentation, tracer concentration, mixing times, and methods for incorporating the tracer material in the urine collection bag, and to refine and optimize the urine tracer technique to comply with the Skylab scheme and operational parameters of ± 2% volume error and ± 1% accuracy of the amount of tracer added to each container. In addition, a back-up method for the urine collection and sampling system was developed and evaluated. This back-up method incorporates the tracer technique for volume determination in the event of failure of the primary urine collection and preservation system. One chemical preservative was selected and evaluated as a contingency preservative for the storage of urine in the event of failure of the urine cooling system.

  14. Wireless mesh networks

    CERN Document Server

    Held, Gilbert

    2005-01-01

    Wireless mesh networking is a new technology that has the potential to revolutionize how we access the Internet and communicate with co-workers and friends. Wireless Mesh Networks examines the concept and explores its advantages over existing technologies. This book explores existing and future applications, and examines how some of the networking protocols operate.The text offers a detailed analysis of the significant problems affecting wireless mesh networking, including network scale issues, security, and radio frequency interference, and suggests actual and potential solutions for each pro

  15. Oral, intestinal, and skin bacteria in ventral hernia mesh implants

    Directory of Open Access Journals (Sweden)

    Odd Langbach

    2016-07-01

    Background: In ventral hernia surgery, mesh implants are used to reduce recurrence. Infection after mesh implantation can be a problem, and rates around 6–10% have been reported. Bacterial colonization of mesh implants in patients without clinical signs of infection has not been thoroughly investigated. Molecular techniques have proven effective in demonstrating bacterial diversity in various environments and are able to identify bacteria on a gene-specific level. Objective: The purpose of this study was to detect bacterial biofilm in mesh implants, analyze its bacterial diversity, and look for possible resemblance with bacterial biofilm from the periodontal pocket. Methods: Thirty patients referred to our hospital for recurrence after former ventral hernia mesh repair were examined for periodontitis in advance of new surgical hernia repair. Oral examination included periapical radiographs, periodontal probing, and subgingival plaque collection. A piece of mesh (1×1 cm) from the abdominal wall was harvested during the new surgical hernia repair and analyzed for bacteria by PCR and 16S rRNA gene sequencing. From patients with positive PCR mesh samples, subgingival plaque samples were analyzed with the same techniques. Results: A great variety of taxa were detected in 20 (66.7%) mesh samples, including typical oral commensals and periodontopathogens, enterics, and skin bacteria. Mesh and periodontal bacteria were further analyzed for similarity in 16S rRNA gene sequences. In 17 sequences, the level of resemblance between mesh and subgingival bacterial colonization was 98–100%, suggesting, but not proving, a transfer of oral bacteria to the mesh. Conclusion: The results show great bacterial diversity on mesh implants from the anterior abdominal wall, including oral commensals and periodontopathogens. Mesh can be reached by bacteria in several ways, including hematogenous spread from an oral site. However, other sites such as gut and skin may also

  16. Mesh implants: An overview of crucial mesh parameters

    Institute of Scientific and Technical Information of China (English)

    Lei-Ming Zhu; Philipp Schuster; Uwe Klinge

    2015-01-01

    Hernia repair is one of the most frequently performed surgical interventions that use mesh implants. This article evaluates crucial mesh parameters to facilitate selection of the most appropriate mesh implant, considering raw materials, mesh composition, structural parameters and mechanical parameters. A literature review was performed using the PubMed database. The most important mesh parameters in the selection of a mesh implant are the raw material, structural parameters and mechanical parameters, which should match the physiological conditions. The structural parameters, especially the porosity, are the most important predictors of the biocompatibility performance of synthetic meshes. Meshes with large pores exhibit less inflammatory infiltrate, connective tissue and scar bridging, which allows increased soft tissue ingrowth. The raw material and combination of raw materials of the used mesh, including potential coatings and textile design, strongly impact the inflammatory reaction to the mesh. Synthetic meshes made from innovative polymers combined with surface coating have been demonstrated to exhibit advantageous behavior in specialized fields. Monofilament, large-pore synthetic meshes exhibit advantages. The value of mesh classification based on mesh weight seems to be overestimated. Mechanical properties of meshes, such as anisotropy/isotropy, elasticity and tensile strength, are crucial parameters for predicting mesh performance after implantation.

  17. 28 Comparative Study of Open Mesh Repair and Desarda's No ...

    African Journals Online (AJOL)

    user

    2006-12-02

    Dec 2, 2006 ... East And Central African Journal of Surgery Volume 11 Number 2. ... new technique and the open mesh repair done in a district level general hospital set ... laparoscopic repairs or the patients given ..... Hernia repair (Open Vs.

  18. Application-specific mesh-based heterogeneous FPGA architectures

    CERN Document Server

    Parvez, Husain

    2011-01-01

    This volume presents a new exploration environment for mesh-based, heterogeneous FPGA architectures. Readers will find a description of state-of-the-art techniques for reducing area requirements, which both increase performance and enable power reduction.

  19. Image-Based Geometric Modeling and Mesh Generation

    CERN Document Server

    2013-01-01

    As a new interdisciplinary research area, “image-based geometric modeling and mesh generation” integrates image processing, geometric modeling and mesh generation with finite element method (FEM) to solve problems in computational biomedicine, materials sciences and engineering. It is well known that FEM is currently well-developed and efficient, but mesh generation for complex geometries (e.g., the human body) still takes about 80% of the total analysis time and is the major obstacle to reduce the total computation time. It is mainly because none of the traditional approaches is sufficient to effectively construct finite element meshes for arbitrarily complicated domains, and generally a great deal of manual interaction is involved in mesh generation. This contributed volume, the first for such an interdisciplinary topic, collects the latest research by experts in this area. These papers cover a broad range of topics, including medical imaging, image alignment and segmentation, image-to-mesh conversion,...

  20. Determination of the efficiency of a detector in gamma spectrometry of large-volume samples

    CERN Document Server

    Tertyshnik, E G

    2012-01-01

    An experimental-calculational method is proposed to determine the full energy peak efficiency (FEPE) of detectors, ε(E), when measuring large-volume samples. Water is used as a standard absorber for which the linear attenuation coefficient for photons, μ0(E), is well known. The value of μ(E) in the sample material (the matrix of the sample) is determined experimentally by means of the spectrometer. Formulas are given for calculating the ratio ε(E)/ε0(E), where ε0(E) is the FEPE of the detector for photons arising in the container filled with water (found by adding Reference radioactive solutions to the container). To prove the validity of the method, ethanol (density 0.8 g/cm³) and water solutions of salts (densities 1.2 and 1.5 g/cm³) were used to simulate samples with different attenuation coefficients. The standard deviation between experimental and calculated efficiencies was about 5%.

  1. Polygon mesh processing

    CERN Document Server

    Botsch, Mario; Pauly, Mark; Alliez, Pierre; Levy, Bruno

    2010-01-01

    Geometry processing, or mesh processing, is a fast-growing area of research that uses concepts from applied mathematics, computer science, and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment, and classical computer-aided design, to biomedical computing, reverse engineering, and scientific computing. Over the last several years, triangle meshes have become increasingly popular,

  2. Geometrically Consistent Mesh Modification

    KAUST Repository

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  3. High-Fidelity Geometric Modeling and Mesh Generation for Mechanics Characterization of Polycrystalline Materials

    Science.gov (United States)

    2015-01-07

    Image-Based Geometric Modeling and Mesh Generation. Lecture Notes in Computational Vision and Biomechanics, Volume 3. Springer Publisher. Editor: Yongjie (Jessica) Zhang. ISBN-10: 9400742541, ISBN-13: 978-9400742543. 2013. 3. Y. Zhang. Challenges

  4. Precise quantification of dialysis using continuous sampling of spent dialysate and total dialysate volume measurement.

    Science.gov (United States)

    Argilés, A; Ficheux, A; Thomas, M; Bosc, J Y; Kerr, P G; Lorho, R; Flavier, J L; Stec, F; Adelé, C; Leblanc, M; Garred, L J; Canaud, B; Mion, H; Mion, C M

    1997-08-01

    The "gold standard" method to evaluate the mass balances achieved during dialysis for a given solute remains total dialysate collection (TDC). However, since handling volumes of over 100 liters is unfeasible in our current dialysis units, alternative methods have been proposed, including urea kinetic modeling, partial dialysate collection (PDC) and, more recently, monitoring of dialysate urea by on-line devices. Concerned by the complexity and costs generated by these devices, we aimed to adapt the simple "gold standard" TDC method to clinical practice by diminishing the total volumes to be handled. We describe a new system based on partial dialysate collection, the continuous spent sampling of dialysate (CSSD), and present its technical validation. Further, and for the first time, we report a long-term assessment of dialysis dosage in a dialysis clinic using both the classical PDC and the new CSSD system in a group of six stable dialysis patients who were followed for a period of three years. For the CSSD technique, spent dialysate was continuously sampled by a reversed automatic infusion pump at a rate of 10 ml/hr. The piston was automatically driven by the dialysis machine: switched on when dialysis started, switched off when dialysis terminated and paused during the bypass periods. At the same time, the number of production cycles of dialysate was monitored and the total volume of dialysate was calculated by multiplying the volume of the production chamber by the number of cycles. Urea and creatinine concentrations were measured in the syringe and the masses were obtained by multiplying these concentrations by the total volume. CSSD and TDC were simultaneously performed in 20 dialysis sessions. The total mass of urea removed was calculated as 58038 and 60442 mmol/session (CSSD and TDC, respectively; 3.1 ± 1.2% variation; r = 0.99; y = 0.92x − 28.9; P urea removal: 510 ± 59 during the first year with PDC and 516 ± 46 mmol/dialysis session during the third year, using CSSD
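The volume and mass bookkeeping described above can be sketched as follows; the function names and the numbers in the example are illustrative, not values from the study.

```python
# Total dialysate volume = production-chamber volume x number of cycles;
# removed solute mass = measured concentration x total volume.
def total_dialysate_volume_l(chamber_volume_l, n_cycles):
    return chamber_volume_l * n_cycles

def removed_mass_mmol(conc_mmol_per_l, chamber_volume_l, n_cycles):
    return conc_mmol_per_l * total_dialysate_volume_l(chamber_volume_l, n_cycles)

# Illustrative: 4 mmol/l urea in the pooled syringe sample, a 0.5 l chamber and
# 240 production cycles give 120 l of dialysate and 480 mmol of urea removed.
print(removed_mass_mmol(4.0, 0.5, 240))  # 480.0
```

The continuous low-rate sampling makes the syringe concentration representative of the whole session, so this simple product recovers the full-collection mass balance without handling 100-liter volumes.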

  5. Hex-dominant mesh generation using 3D constrained triangulation

    Energy Technology Data Exchange (ETDEWEB)

    OWEN,STEVEN J.

    2000-05-30

    A method for decomposing a volume with a prescribed quadrilateral surface mesh into a hexahedral-dominated mesh is proposed. With this method, known as Hex-Morphing (H-Morph), an initial tetrahedral mesh is provided. Tetrahedra are transformed and combined, starting from the boundary and working towards the interior of the volume. The quadrilateral faces of the hexahedra are treated as internal surfaces, which can be recovered using constrained triangulation techniques. Implementation details of the edge and face recovery process are included. Examples and performance of the H-Morph algorithm are also presented.

  6. INFLUENCE OF SAMPLE THICKNESS ON ISOTHERMAL CRYSTALLIZATION KINETICS OF POLYMERS IN A CONFINED VOLUME

    Institute of Scientific and Technical Information of China (English)

    Hui Sun; Zhi-ying Zhang; Shi-zhen Wu; Bin Yu; Chang-fa Xiao

    2005-01-01

    The isothermal crystallization process of polymers in a confined volume was simulated for the case of instantaneous nucleation using the Monte Carlo method. The influence of sample thickness on some kinetic parameters of crystallization was quantitatively evaluated. It was found that there was a critical thickness value. An influence of thickness on the crystallization behavior was found only for samples with thickness near or less than the critical value. For thick samples, the Avrami plot showed straight lines with a turning point at the late stage of crystallization due to secondary crystallization. When the thickness was near or less than the critical value, a primary turning point appeared in the Avrami plot at the very beginning of the crystallization process. A model was proposed to explain the mechanism of this phenomenon. According to this model, the critical thickness value is related to the nucleation density or the average distance between adjacent nuclei, and the primary turning point is an indication of a transformation of crystal growth geometry from a three-dimensional mode to a two-dimensional one. Analysis of experimental results for PEO isothermally crystallized at 53.5 °C was consistent with the proposed model.
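The Avrami analysis referred to above can be sketched as follows; the rate constant and exponent are illustrative values, and the code simply shows how the growth-geometry exponent is read off the slope of the Avrami plot.

```python
import numpy as np

# Avrami equation: X(t) = 1 - exp(-k * t**n), so
# ln(-ln(1 - X)) = ln k + n * ln t  -- a straight line with slope n.
def avrami_crystallinity(t, k, n):
    return 1.0 - np.exp(-k * t**n)

t = np.linspace(0.1, 10.0, 50)
x = avrami_crystallinity(t, k=0.01, n=3.0)   # n = 3: three-dimensional growth
slope = np.polyfit(np.log(t), np.log(-np.log(1.0 - x)), 1)[0]
print(round(slope, 3))  # recovers the Avrami exponent n = 3.0
```

A drop in the fitted slope partway through the data is what the abstract calls a "turning point": for thin samples it signals the switch from three-dimensional (n ≈ 3) to two-dimensional (n ≈ 2) growth.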

  7. SAMPLING ARTIFACTS IN MEASUREMENT OF ELEMENTAL AND ORGANIC CARBON: LOW VOLUME SAMPLING IN INDOOR AND OUTDOOR ENVIRONMENTS

    Science.gov (United States)

    Experiments were completed to determine the extent of artifacts from sampling elemental carbon (EC) and organic carbon (OC) under sample conditions consistent with personal sampling. Two different types of experiments were completed; the first examined possible artifacts from oil...

  8. Liquid chromatography-mass spectrometric determination of rufinamide in low volume plasma samples.

    Science.gov (United States)

    Gáll, Zsolt; Vancea, Szende; Dogaru, Maria T; Szilágyi, Tibor

    2013-12-01

    Quantification of rufinamide in plasma was achieved using a selective and sensitive liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) method. The chromatographic separation was achieved on a reversed-phase column (Zorbax SB-C18, 100 mm × 3 mm, 3.5 μm) under isocratic conditions. The mobile phase consisted of a mixture of water containing 0.1% formic acid and methanol (50:50, v/v). The mass spectrometric detection of the analyte was in multiple reaction monitoring (MRM) mode using positive electrospray ionization (ESI positive). The monitored transitions were m/z 239 → 127 for rufinamide and m/z 251 → 108 for the internal standard (lacosamide). Protein precipitation with methanol was applied for sample preparation, using only 50 μl aliquots. The concentration range was 40–2000 ng/ml for rufinamide in plasma. The limit of detection was 1.25 ng/ml and the lower limit of quantification was established at a 5 ng/ml rufinamide concentration. Selectivity and matrix effects were verified using individual human, rat and rabbit plasma samples. Short-term, post-preparative and freeze-thaw stability was also investigated. The proposed method provides accuracy, precision and high throughput (short run time, 4.5 min) for quantitative determination of rufinamide in plasma. This is the first reported liquid chromatography-tandem mass spectrometric (LC-MS/MS) method for analysis of rufinamide from low-volume plasma samples. The LC-MS/MS method was validated according to the current official guidelines and can be applied to accurately measure rufinamide levels in large numbers of plasma samples from clinical studies or therapeutic drug monitoring.

  9. New methods to interpolate large volume of data from points or particles (Mesh-Free) methods application for its scientific visualization; Nuevos metodos de interpolacion de grandes volumenes de datos provenientes de la aplicacion de los metodos de particulas o puntos (libre de malla) para su visualizacion cientifica

    Energy Technology Data Exchange (ETDEWEB)

    Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.

    2009-07-01

    In this paper we develop a new method to interpolate large volumes of scattered data, focused mainly on the results of applying Mesh-free, Point and Particle Methods. Local radial basis functions are used as the interpolating functions. An over-tree is used as the data structure, which accelerates locating the data that influence the interpolated value at a new point, speeding up the application of scientific visualization techniques to generate images from the large data volumes produced by Mesh-free, Point and Particle Methods in the solution of diverse physical-mathematical models. As an example, the results obtained after applying this method using the local interpolation functions of Shepard are shown. (Author) 22 refs.

  10. Conservative interpolation between general spherical meshes

    Directory of Open Access Journals (Sweden)

    E. Kritsikis

    2015-06-01

    An efficient, local, explicit, second-order, conservative interpolation algorithm between spherical meshes is presented. The cells composing the source and target meshes may be either spherical polygons or longitude–latitude quadrilaterals. Second-order accuracy is obtained by piecewise-linear finite-volume reconstruction over the source mesh. Global conservation is achieved through the introduction of a supermesh, whose cells are all possible intersections of source and target cells. Areas and intersections are computed exactly to yield a geometrically exact method. The main efficiency bottleneck caused by the construction of the supermesh is overcome by adopting tree-based data structures and algorithms, from which the mesh connectivity can also be deduced efficiently. The theoretical second-order accuracy is verified using a smooth test function and pairs of meshes commonly used for atmospheric modelling. Experiments confirm that the most expensive operations, especially the supermesh construction, have O(N log N) computational cost. The method presented is meant to be incorporated in pre- or post-processing atmospheric modelling pipelines, or directly into models for flexible input/output. It could also serve as a basis for conservative coupling between model components, e.g. atmosphere and ocean.
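The supermesh idea can be illustrated in one dimension, where a supermesh cell is simply the overlap of a source and a target interval; this first-order sketch (the paper is second order and works on the sphere) conserves the integral by construction. All names and numbers are illustrative assumptions.

```python
import numpy as np

def conservative_remap(src_edges, src_vals, tgt_edges):
    """First-order conservative remap between 1-D meshes via interval overlaps."""
    tgt_vals = np.zeros(len(tgt_edges) - 1)
    for j in range(len(tgt_edges) - 1):
        acc = 0.0
        for i in range(len(src_edges) - 1):
            # Supermesh cell: intersection of source cell i and target cell j.
            overlap = (min(src_edges[i + 1], tgt_edges[j + 1])
                       - max(src_edges[i], tgt_edges[j]))
            if overlap > 0.0:
                acc += src_vals[i] * overlap
        tgt_vals[j] = acc / (tgt_edges[j + 1] - tgt_edges[j])
    return tgt_vals

src_edges = np.array([0.0, 1.0, 2.0])
src_vals = np.array([2.0, 4.0])            # integral = 2*1 + 4*1 = 6
tgt_edges = np.array([0.0, 0.5, 2.0])
out = conservative_remap(src_edges, src_vals, tgt_edges)
print(out, np.dot(out, np.diff(tgt_edges)))  # the total integral is still 6.0
```

The paper's tree-based acceleration replaces the inner loop's all-pairs overlap test, which is what brings the cost down to O(N log N).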

  11. Conservative interpolation between general spherical meshes

    Science.gov (United States)

    Kritsikis, Evaggelos; Aechtner, Matthias; Meurdesoif, Yann; Dubos, Thomas

    2017-01-01

    An efficient, local, explicit, second-order, conservative interpolation algorithm between spherical meshes is presented. The cells composing the source and target meshes may be either spherical polygons or latitude-longitude quadrilaterals. Second-order accuracy is obtained by piecewise-linear finite-volume reconstruction over the source mesh. Global conservation is achieved through the introduction of a supermesh, whose cells are all possible intersections of source and target cells. Areas and intersections are computed exactly to yield a geometrically exact method. The main efficiency bottleneck caused by the construction of the supermesh is overcome by adopting tree-based data structures and algorithms, from which the mesh connectivity can also be deduced efficiently. The theoretical second-order accuracy is verified using a smooth test function and pairs of meshes commonly used for atmospheric modelling. Experiments confirm that the most expensive operations, especially the supermesh construction, have O(N log N) computational cost. The method presented is meant to be incorporated in pre- or post-processing atmospheric modelling pipelines, or directly into models for flexible input/output. It could also serve as a basis for conservative coupling between model components, e.g., atmosphere and ocean.
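A much-simplified illustration of the supermesh idea, in one dimension and only first-order (the paper's method is second-order and spherical): every intersection of a source and a target interval acts as a supermesh cell, and weighting source cell means by intersection length conserves the integral exactly. The function below is a hypothetical sketch, not the authors' algorithm.

```python
def conservative_remap_1d(src_edges, src_means, tgt_edges):
    """First-order conservative remap of cell means between 1D meshes.

    The 'supermesh' is the set of all intersections of source and
    target intervals; each intersection carries the source cell mean
    weighted by its length, which guarantees exact conservation.
    """
    tgt_means = [0.0] * (len(tgt_edges) - 1)
    for j in range(len(tgt_edges) - 1):
        t0, t1 = tgt_edges[j], tgt_edges[j + 1]
        total = 0.0
        for i in range(len(src_edges) - 1):
            s0, s1 = src_edges[i], src_edges[i + 1]
            overlap = max(0.0, min(t1, s1) - max(t0, s0))  # supermesh cell
            total += overlap * src_means[i]
        tgt_means[j] = total / (t1 - t0)
    return tgt_means

src = [0.0, 1.0, 2.0]          # two source cells
tgt = [0.0, 0.5, 1.5, 2.0]     # three target cells
means = conservative_remap_1d(src, [1.0, 3.0], tgt)
# conservation: 1*1 + 3*1 equals 0.5*means[0] + 1.0*means[1] + 0.5*means[2]
```

The quadratic double loop is exactly the bottleneck the paper removes with tree-based intersection searches, and on the sphere the overlap computation becomes exact spherical-polygon clipping.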

  12. Petrophysical studies of north American carbonate rock samples and evaluation of pore-volume compressibility models

    Science.gov (United States)

    da Silva, Gilberto Peixoto; Franco, Daniel R.; Stael, Giovanni C.; da Costa de Oliveira Lima, Maira; Sant'Anna Martins, Ricardo; de Moraes França, Olívia; Azeredo, Rodrigo B. V.

    2015-12-01

    In this work, we evaluate two pore-volume compressibility models currently discussed in the literature (Horne, 1990; Jalalh, 2006b). Five groups of carbonate rock samples were selected for this study from the following three sedimentary basins in North America known for their association with hydrocarbon deposits: (i) the Guelph Formation of the Michigan Basin (Middle Silurian); (ii) the Edwards Formation of the Central Texas Platform (Middle Cretaceous); and (iii) the Burlington-Keokuk Formation of the Mississippian System (Lower Mississippian). In addition to the evaluation of the compressibility models, a petrophysical evaluation of these rock samples was conducted. Additional characterizations, such as grain density, effective porosity, absolute permeability, thin-section petrography, MICP and NMR, were performed to complement constant pore-pressure compressibility tests. Although both models gave an overall good representation of the compressibility behavior of the studied carbonate rocks, even considering their broad porosity range (~ 2-38%), the model proposed by Jalalh (2006b) performed better, with a confidence level of 95% and a prediction interval of 68%.

  13. Triangulated manifold meshing method preserving molecular surface topology.

    Science.gov (United States)

    Chen, Minxin; Tu, Bin; Lu, Benzhuo

    2012-09-01

    Generation of manifold meshes is an urgent issue in mathematical simulations of biomolecules using boundary element methods (BEM) or finite element methods (FEM). Defects such as unclosed meshes, intersecting elements and missing small structures exist in surface meshes generated by most current meshing methods. Usually the molecular surface meshes produced by existing methods must be revised carefully with third-party software to ensure that the surface represents a continuous manifold before being used in BEM and FEM calculations. Based on the trace technique proposed in our previous work, in this paper we present an improved meshing method that avoids intersections and preserves the topology of the molecular Gaussian surface. The new method divides the whole Gaussian surface into single-valued pieces along each of the x, y, z directions by tracing the extreme points along the fold curves on the surface. Numerical test results show that the surface meshes produced by the new method are manifolds and preserve surface topologies. The resulting surface mesh can also be used directly in surface-conforming volume mesh generation for FEM-type simulation. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Documentation for MeshKit - Reactor Geometry (&mesh) Generator

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-30

    This report gives documentation for using MeshKit’s Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in the modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne’s SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, so that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of the MeshKit source code, tools and other available algorithms is also presented for developers who wish to extend MeshKit and add new algorithms to it. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousand axially varying material properties of instrumentation pins, and other interstitial meshes.

  15. Desorption of Herbicides from Atmospheric Particulates During High-Volume Air Sampling

    Directory of Open Access Journals (Sweden)

    Dwight V. Quiring

    2011-11-01

    Full Text Available Pesticides can be present in the atmosphere either as vapours and/or in association with suspended particles. High-volume air sampling, in which air is aspirated first through a glass fibre filter to capture pesticides associated with atmospheric particulates and then through polyurethane foam (PUF), often in combination with an adsorbent resin such as XAD-2, to capture pesticides present as vapours, is generally employed during atmospheric monitoring for pesticides. However, the particulate fraction may be underestimated because some pesticides may be stripped or desorbed from captured particulates due to the pressure drop created by the high flow of air through the filter. This possibility was investigated with ten herbicide active ingredients commonly used on the Canadian prairies (dimethylamine salts of 2,4-D, MCPA and dicamba, 2,4-D 2-ethylhexyl ester, bromoxynil octanoate, diclofop methyl ester, fenoxaprop ethyl ester, trifluralin, triallate and ethalfluralin) and seven hydrolysis products (2,4-D, MCPA, dicamba, bromoxynil, diclofop, clopyralid and mecoprop). Finely ground heavy clay soil fortified with active ingredients/hydrolysis products was evenly distributed on the glass fibre filters of high-volume air samplers and air was aspirated through the samplers at a flow rate of 12.5 m3/h for a 7-day period. The proportion desorbed as vapour from the fortified soil was determined by analysis of the PUF/XAD-2 resin composite cartridges. The extent of desorption from the fortified soil applied to the filters varied from 0% for each of the dimethylamine salts of 2,4-D, MCPA and dicamba to approximately 50% for trifluralin, triallate and ethalfluralin.

  16. Cosmology on a Mesh

    CERN Document Server

    Gill, S P D; Gibson, B K; Flynn, C; Ibata, R A; Lewis, G F; Gill, Stuart P.D.; Knebe, Alexander; Gibson, Brad K.; Flynn, Chris; Ibata, Rodrigo A.; Lewis, Geraint F.

    2002-01-01

    An adaptive multigrid approach to simulating the formation of structure from collisionless dark matter is described. MLAPM (Multi-Level Adaptive Particle Mesh) is one of the most efficient serial codes available on the cosmological 'market' today. As part of Swinburne University's role in the development of the Square Kilometer Array, we are implementing hydrodynamics, feedback, and radiative transfer within the MLAPM adaptive mesh, in order to simulate baryonic processes relevant to the interstellar and intergalactic media at high redshift. We will outline our progress to date in applying the existing MLAPM to a study of the decay of satellite galaxies within massive host potentials.

  17. Technique for bone volume measurement from human femur head samples by classification of micro-CT image histograms

    Directory of Open Access Journals (Sweden)

    Franco Marinozzi

    2013-09-01

    Full Text Available INTRODUCTION: Micro-CT analysis is a powerful technique for non-invasive evaluation of the morphometric parameters of trabecular bone samples. This elaboration requires a prior binarization of the images. A problem arising from the binarization process is the partial-volume artifact: voxels at the external surface of the sample can contain both bone and air, so thresholding produces an incorrect estimate of the volume occupied by the two materials. AIM: The aim of this study is the extraction of bone volume information directly from the image histograms, by fitting them with a suitable set of functions. METHODS: Nineteen trabecular bone samples were extracted from the femoral heads of eight patients undergoing hip arthroplasty surgery. The trabecular bone samples were acquired using a micro-CT scanner. Histograms of the acquired images were computed and fitted with Gaussian-like functions accounting for: (a) gray levels produced by bone x-ray absorption, (b) the portions of the image occupied by air, and (c) voxels that contain a mixture of bone and air. This latter contribution can be considered an estimate of the partial-volume effect. RESULTS: Comparison of the proposed technique with bone volumes measured by a reference instrument, a helium pycnometer, shows that the method yields accurate bone volume calculations for trabecular bone samples.
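The histogram-fitting step can be sketched as follows, assuming (as a simplification of the paper's three-component model) just two Gaussian peaks, one for air and one for bone; the synthetic data, function names and parameters below are illustrative, not the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peaks (e.g. air and bone gray levels)."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

# Synthetic gray-level histogram: air peak near 50, bone peak near 200
rng = np.random.default_rng(0)
gray = np.concatenate([rng.normal(50, 10, 60000), rng.normal(200, 15, 40000)])
hist, edges = np.histogram(gray, bins=256, range=(0, 255))
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [hist.max(), 60, 10, hist.max() / 2, 190, 15]   # rough initial guesses
popt, _ = curve_fit(two_gaussians, centers, hist, p0=p0)

# Relative bone volume ~ area under the bone peak / total area
areas = [abs(popt[0] * popt[2]), abs(popt[3] * popt[5])]  # amplitude * sigma
bone_fraction = areas[1] / sum(areas)
print(bone_fraction)  # close to the true bone fraction of 0.4
```

The paper's key refinement is adding a third, mixed bone-air component between the two peaks, whose fitted area estimates the partial-volume contribution directly.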

  18. Integration of monolithic porous polymer with droplet-based microfluidics on a chip for nano/picoliter volume sample analysis

    OpenAIRE

    Kim, Jin-Young; Chang, Soo-Ik; Andrew J deMello; O’Hare, Danny

    2014-01-01

    In this paper, a porous polymer nanostructure has been integrated with droplet-based microfluidics in a single planar format. Monolithic porous polymer (MPP) was formed selectively within a microfluidic channel. The resulting analyte bands were sequentially compartmentalised into droplets. This device reduces band broadening and the effects of post-column dead volume through the combination of the two techniques. Moreover, it offers precise control of nano/picoliter-volume samples.

  19. Relationship Between LIBS Ablation and Pit Volume for Geologic Samples: Applications for the In Situ Absolute Geochronology

    Science.gov (United States)

    Devismes, Damien; Cohen, Barbara; Miller, J.-S.; Gillot, P.-Y.; Lefevre, J.-C.; Boukari, C.

    2014-01-01

    These first results demonstrate that LIBS spectra can be an interesting tool for estimating the ablated volume. When the ablated volume is larger than 9 × 10^6 cubic micrometers, the method has less than 10% uncertainty, accurate enough to be directly implemented in the KArLE experiment protocol. Nevertheless, depending on the samples and their mean grain size, the difficulty of obtaining homogeneous spectra will increase with the ablated volume. Several K-Ar dating studies based on this approach will be implemented; the results will then be shown and discussed.

  20. Isotopic Implicit Surface Meshing

    NARCIS (Netherlands)

    Boissonnat, Jean-Daniel; Cohen-Steiner, David; Vegter, Gert

    2004-01-01

    This paper addresses the problem of piecewise linear approximation of implicit surfaces. We first give a criterion ensuring that the zero-set of a smooth function and the one of a piecewise linear approximation of it are isotopic. Then, we deduce from this criterion an implicit surface meshing algorithm.

  1. Relationship between sample volumes and modulus of human vertebral trabecular bone in micro-finite element analysis.

    Science.gov (United States)

    Wen, Xin-Xin; Xu, Chao; Zong, Chun-Lin; Feng, Ya-Fei; Ma, Xiang-Yu; Wang, Fa-Qi; Yan, Ya-Bo; Lei, Wei

    2016-07-01

    Micro-finite element (μFE) models have been widely used to assess the biomechanical properties of trabecular bone. How to choose a proper sample volume of trabecular bone that predicts the real biomechanical properties while reducing calculation time is an interesting problem. Therefore, the purpose of this study was to investigate the relationship between different sample volumes and the apparent elastic modulus (E) calculated from μFE models. Five human lumbar vertebral bodies (L1-L5) were scanned by micro-CT. Cubic concentric samples of different lengths were constructed as the experimental groups, and the largest possible volumes of interest (VOI) were constructed as the control group. A direct voxel-to-element approach was used to generate μFE models, and steel layers were added to the superior and inferior surfaces to mimic axial compression tests. A 1% axial strain was prescribed at the top surface of the model to obtain the E values. ANOVA tests were performed to compare the E values from the different VOIs against those of the control group. Nonlinear function curve fitting was performed to study the relationship between volumes and E values. Larger cubic VOIs included more nodes and elements, and more CPU time was needed for the calculations. E values showed a descending tendency as the length of the cubic VOI decreased. When the volume of the VOI was smaller than 7.34 mm(3), E values were significantly different from those of the control group. The fitted function showed that E values approached an asymptotic value with increasing VOI length. Our study demonstrated that the apparent elastic modulus calculated from μFE models is affected by the sample volume. There was a descending tendency of E values as the length of the cubic VOI decreased. A sample volume not smaller than 7.34 mm(3) was efficient and time-saving for the calculation of E.
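The asymptotic behaviour of E with VOI size can be illustrated with a nonlinear curve fit; the saturating-exponential form and all numbers below are assumptions for demonstration, not the fit function or data used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic_modulus(L, E_inf, L0):
    """Hypothetical saturation law: E approaches E_inf as VOI length grows."""
    return E_inf * (1.0 - np.exp(-L / L0))

# Synthetic apparent moduli (MPa) for cubic VOIs of increasing side length (mm)
lengths = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
E_values = asymptotic_modulus(lengths, 250.0, 1.5)

popt, _ = curve_fit(asymptotic_modulus, lengths, E_values, p0=[200.0, 1.0])
print(popt[0])  # recovered asymptotic modulus, close to 250
```

A fit like this makes the study's trade-off explicit: once the VOI side length is several multiples of the fitted length scale, E is within a small tolerance of its asymptote and further enlargement only costs CPU time.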

  2. Sampling small volumes of ambient ammonia using a miniaturized gas sampler.

    Science.gov (United States)

    Timmer, Björn; Olthuis, Wouter; van den Berg, Albert

    2004-06-01

    The development of a gas sampler for a miniaturized ambient ammonia detector is described. A micromachined channel system is realized in glass and silicon using powder blasting and anodic bonding. The analyte gas is directly mixed with purified water, dissolving the ammonia, which then dissociates into ammonium ions. Carrier gas bubbles are subsequently removed from the liquid stream through a venting hole sealed with a microporous, water-repellent PTFE membrane. A flow restrictor is placed at the outlet of the sampler to create a small overpressure underneath the membrane, enabling the gas to leave through the membrane. Experiments have been carried out with a gas flow of 1 ml/min, containing ammonia concentrations ranging from 9.4 ppm down to 0.6 ppm in a nitrogen carrier flow, at a water flow of 20 microl/min. The ammonium concentration in the sample solution is measured with an electrolyte conductivity detector. The measured values correspond with the concentration calculated from the initial ammonia concentration in the analyte gas, the fifty-fold concentration enhancement due to the gas-liquid volume difference, and the theoretical dissociation equilibrium as a function of the resulting pH.
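The fifty-fold concentration enhancement quoted above follows directly from the ratio of the gas and liquid flow rates, assuming complete ammonia uptake into the water stream; the helper name is illustrative.

```python
def enrichment_factor(gas_flow_ul_min, liquid_flow_ul_min):
    """Concentration enhancement from dissolving a gas stream into a
    much smaller liquid stream (assumes complete ammonia uptake)."""
    return gas_flow_ul_min / liquid_flow_ul_min

# 1 ml/min of analyte gas sampled into 20 microl/min of water, as in the experiments
print(enrichment_factor(1000.0, 20.0))  # → 50.0
```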

  3. Advanced Automatic Hexahedral Mesh Generation from Surface Quad Meshes

    OpenAIRE

    Kremer, Michael; Bommes, David; Lim, Isaak; Kobbelt, Leif

    2013-01-01

    International audience; A purely topological approach for the generation of hexahedral meshes from quadrilateral surface meshes of genus zero has been proposed by M. Müller-Hannemann: in a first stage, the input surface mesh is reduced to a single hexahedron by successively eliminating loops from the dual graph of the quad mesh; in the second stage, the hexahedral mesh is constructed by extruding a layer of hexahedra for each dual loop from the first stage in reverse elimination order. In th...

  4. RESULTS FROM EPA FUNDED RESEARCH PROGRAMS ON THE IMPORTANCE OF PURGE VOLUME, SAMPLE VOLUME, SAMPLE FLOW RATE AND TEMPORAL VARIATIONS ON SOIL GAS CONCENTRATIONS

    Science.gov (United States)

    Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...

  5. Gulf of Mexico continental slope study annual report, year 2. Volume 2. Primary volume. Interim report 1985-1986. [Sampling for hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    1986-09-01

    This report, which was prepared in three volumes (Executive Summary, Primary Volume, and Appendix), details the findings of two years of sampling on the continental slope of the northern Gulf of Mexico at depths of 300-3000 m. Preliminary results from a third year of sampling are also presented. Physical and chemical measurements included: CTD casts at 35 stations; sediment characteristics, including hydrocarbons and bulk sediment parameters from 60 stations; tissue hydrocarbon levels of representative benthic organisms; and delta carbon-13 values from sediments and organisms, including comparison of areas of natural petroleum seepage to prevailing slope conditions. The biological oceanography section provides detailed enumeration of megafaunal specimens collected by trawling and of macro- and meiofaunal specimens collected with a 600 sq cm box core. Major megafaunal groups treated are Arthropoda, Echinodermata, and demersal fishes.

  6. Three-dimensional sensitivity distribution and sample volume of low-induction-number electromagnetic-induction instruments

    Science.gov (United States)

    Callegary, J.B.; Ferre, T. P. A.; Groom, R.W.

    2012-01-01

    There is an ongoing effort to improve the understanding of the correlation of soil properties with apparent soil electrical conductivity as measured by low-induction-number electromagnetic-induction (LIN FEM) instruments. At a minimum, the dimensions of LIN FEM instruments' sample volume, the spatial distribution of sensitivity within that volume, and implications for surveying and analyses must be clearly defined and discussed. Therefore, a series of numerical simulations was done in which a conductive perturbation was moved systematically through homogeneous soil to elucidate the three-dimensional sample volume of LIN FEM instruments. For a small perturbation with electrical conductivity similar to that of the soil, instrument response is a measure of local sensitivity (LS). Our results indicate that LS depends strongly on the orientation of the instrument's transmitter and receiver coils and includes regions of both positive and negative LS. Integration of the absolute value of LS from highest to lowest was used to contour cumulative sensitivity (CS). The 90% CS contour was used to define the sample volume. For both horizontal and vertical coplanar coil orientations, the longest dimension of the sample volume was at the surface along the main instrument axis with a length of about four times the intercoil spacing (s) with maximum thicknesses of about 1 and 0.3 s, respectively. The imaged distribution of spatial sensitivity within the sample volume is highly complex and should be considered in conjunction with the expected scale of heterogeneity before the use and interpretation of LIN FEM for mapping and profiling. © Soil Science Society of America.
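The cumulative-sensitivity construction described above, integrating |LS| from highest to lowest and keeping the cells that account for 90% of the total, can be sketched on a toy field; the function name and array values are hypothetical.

```python
import numpy as np

def cumulative_sensitivity_mask(local_sensitivity, level=0.9):
    """Return a boolean mask of the cells holding `level` of the total
    absolute sensitivity, integrating from highest |LS| to lowest."""
    ls = np.abs(local_sensitivity).ravel()
    order = np.argsort(ls)[::-1]                 # highest sensitivity first
    csum = np.cumsum(ls[order]) / ls.sum()
    keep = order[: np.searchsorted(csum, level) + 1]
    mask = np.zeros(ls.size, dtype=bool)
    mask[keep] = True
    return mask.reshape(local_sensitivity.shape)

# Toy 2 x 2 field: one dominant cell plus weak background
field = np.array([[8.0, 1.0], [0.5, 0.5]])
print(cumulative_sensitivity_mask(field, 0.9))   # only the two strongest cells kept
```

On a real simulated LS field the outline of this mask is the 90% CS contour that defines the instrument's sample volume.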

  7. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center

    Science.gov (United States)

    Dou, Chao

    2016-01-01

    The storage volume of internet data center is one of the classical time series. It is very valuable to predict the storage volume of a data center for the business value. However, the storage volume series from a data center is always “dirty,” which contains the noise, missing data, and outliers, so it is necessary to extract the main trend of storage volume series for the future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the “dirty” data; then the cubic spline interpolation and average method are used to reconstruct the main trend. The developed method is applied in the storage volume series of internet data center. The experiment results show that the developed method can estimate the main trend of storage volume series accurately and make great contribution to predict the future volume value. 
 PMID:28090205

  8. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center

    Directory of Open Access Journals (Sweden)

    Beibei Miao

    2016-01-01

    Full Text Available The storage volume of internet data center is one of the classical time series. It is very valuable to predict the storage volume of a data center for the business value. However, the storage volume series from a data center is always “dirty,” which contains the noise, missing data, and outliers, so it is necessary to extract the main trend of storage volume series for the future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the “dirty” data; then the cubic spline interpolation and average method are used to reconstruct the main trend. The developed method is applied in the storage volume series of internet data center. The experiment results show that the developed method can estimate the main trend of storage volume series accurately and make great contribution to predict the future volume value.
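A minimal sketch of the pipeline described above, assuming a scalar random-walk Kalman filter whose innovation gate discards "dirty" samples before a cubic spline reconstructs the main trend from the surviving irregular samples; the function name and all tuning constants are illustrative, not the paper's.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def extract_trend(t, y, q=0.1, r=1.0, gate=3.0):
    """Kalman-filter the series, dropping samples whose innovation
    exceeds `gate` standard deviations, then fit a cubic spline
    through the remaining (now irregular) filtered samples."""
    x, p = y[0], 1.0                            # state estimate and variance
    clean_t, clean_y = [t[0]], [y[0]]
    for ti, yi in zip(t[1:], y[1:]):
        p += q                                  # predict (random-walk model)
        innov, s = yi - x, p + r
        if abs(innov) <= gate * np.sqrt(s):     # keep only plausible samples
            k = p / s                           # Kalman gain
            x += k * innov
            p *= 1.0 - k
            clean_t.append(ti)
            clean_y.append(x)
    return CubicSpline(clean_t, clean_y)

# Synthetic storage-volume series: linear growth, noise, and one outlier
t = np.arange(0, 50, 1.0)
y = 0.1 * t + np.random.default_rng(1).normal(0, 0.05, t.size)
y[20] += 10.0                                   # a "dirty" sample
trend = extract_trend(t, y)                     # smooth main trend, outlier removed
```

Once the trend spline is in hand, prediction of future volume values can proceed on the clean curve rather than the raw series.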

  9. Laparoscopic sacrocolpopexy versus transvaginal mesh for recurrent pelvic organ prolapse.

    Science.gov (United States)

    Iglesia, Cheryl B; Hale, Douglass S; Lucente, Vincent R

    2013-03-01

    Both expert surgeons agree with the following: (1) Surgical mesh, whether placed laparoscopically or transvaginally, is indicated for pelvic floor reconstruction in cases involving recurrent advanced pelvic organ prolapse. (2) Procedural expertise and experience gained from performing a high volume of cases is fundamentally necessary. Knowledge of outcomes and complications from an individual surgeon's audit of cases is also needed when discussing the risks and benefits of procedures and alternatives. Yet controversy still exists on how best to teach new surgical techniques and optimal ways to efficiently track outcomes, including subjective and objective cure of prolapse as well as perioperative complications. A mesh registry will be useful in providing data needed for surgeons. Cost factors are also a consideration since laparoscopic and especially robotic surgical mesh procedures are generally more costly than transvaginal mesh kits when operative time, extra instrumentation and length of stay are included. Long-term outcomes, particularly for transvaginal mesh procedures, are lacking. In conclusion, all surgery poses risks; however, patients should be made aware of the pros and cons of various routes of surgery as well as the potential risks and benefits of using mesh. Surgeons should provide patients with honest information about their own experience implanting mesh and also their experience dealing with mesh-related complications.

  10. An unstructured-mesh atmospheric model for nonhydrostatic dynamics

    Science.gov (United States)

    Smolarkiewicz, Piotr K.; Szmelter, Joanna; Wyszogrodzki, Andrzej A.

    2013-12-01

    A three-dimensional semi-implicit edge-based unstructured-mesh model is developed that integrates nonhydrostatic anelastic equations, suitable for simulation of small-to-mesoscale atmospheric flows. The model builds on nonoscillatory forward-in-time MPDATA approach using finite-volume discretization and admitting unstructured meshes with arbitrarily shaped cells. The numerical advancements are evaluated with canonical simulations of convective planetary boundary layer and strongly (stably) stratified orographic flows, epitomizing diverse aspects of highly nonlinear nonhydrostatic dynamics. The unstructured-mesh solutions are compared to equivalent results generated with an established structured-grid model and observation.

  11. Layer-adapted meshes for reaction-convection-diffusion problems

    CERN Document Server

    Linß, Torsten

    2010-01-01

    This book on numerical methods for singular perturbation problems, in particular stationary reaction-convection-diffusion problems exhibiting layer behaviour, is devoted to the construction and analysis of the layer-adapted meshes underlying these numerical methods. A classification and a survey of layer-adapted meshes for reaction-convection-diffusion problems are included. This structured and comprehensive account of current ideas in the numerical analysis of various methods on layer-adapted meshes is addressed to researchers in finite element theory and perturbation problems. Finite differences, finite elements and finite volumes are all covered.

  12. Multiphase flow of immiscible fluids on unstructured moving meshes

    DEFF Research Database (Denmark)

    Misztal, Marek Krzysztof; Erleben, Kenny; Bargteil, Adam;

    2012-01-01

    In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization...... that the underlying discretization matches the physics and avoids the additional book-keeping required in grid-based methods where multiple fluids may occupy the same cell. Our Lagrangian approach naturally leads us to adopt a finite element approach to simulation, in contrast to the finite volume approaches adopted...

  13. Efficient Packet Forwarding in Mesh Network

    OpenAIRE

    Soumen Kanrar

    2012-01-01

    Wireless Mesh Network (WMN) is a multi-hop, low-cost, easily maintained and robust network providing reliable service coverage. WMNs consist of mesh routers and mesh clients. In this architecture, while static mesh routers form the wireless backbone, mesh clients access the network through mesh routers as well as by directly meshing with each other. Unlike traditional wireless networks, a WMN is dynamically self-organized and self-configured. In other words, the nodes in the mesh network au...

  14. Metal-mesh lithography.

    Science.gov (United States)

    Tang, Zhao; Wei, Qingshan; Wei, Alexander

    2011-12-01

    Metal-mesh lithography (MML) is a practical hybrid of microcontact printing and capillary force lithography that can be applied over millimeter-sized areas with a high level of uniformity. MML can be achieved by blotting various inks onto substrates through thin copper grids, relying on preferential wetting and capillary interactions between template and substrate for pattern replication. The resulting mesh patterns, which are inverted relative to those produced by stenciling or serigraphy, can be reproduced with low micrometer resolution. MML can be combined with other surface chemistry and lift-off methods to create functional microarrays for diverse applications, such as periodic islands of gold nanorods and patterned corrals for fibroblast cell cultures.

  15. Detection of triazole deicing additives in soil samples from airports with low, mid, and large volume aircraft deicing activities.

    Science.gov (United States)

    McNeill, K S; Cancilla, D A

    2009-03-01

    Soil samples from three USA airports representing low, mid, and large volume users of aircraft deicing fluids (ADAFs) were analyzed by LC/MS/MS for the presence of triazoles, a class of corrosion inhibitors historically used in ADAFs. Triazoles, specifically the 4-methyl-1H-benzotriazole and the 5-methyl-1H-benzotriazole, were detected in a majority of samples and ranged from 2.35 to 424.19 microg/kg. Previous studies have focused primarily on ground and surface water impacts of larger volume ADAF users. The detection of triazoles in soils at low volume ADAF use airports suggests that deicing activities may have a broader environmental impact than previously considered.

  16. Open Volumetric Mesh-An Efficient Data Structure for Tetrahedral and Hexa-hedral Meshes

    Institute of Scientific and Technical Information of China (English)

    XIAN Chu-hua; LI Gui-qing; GAO Shu-ming

    2013-01-01

    This work introduces a scalable and efficient topological structure for tetrahedral and hexahedral meshes. The design of the data structure aims at maximal flexibility and high performance. It provides high scalability by using hierarchical representations of topological elements. The proposed data structure is array-based, and it is a compact representation of the half-edge data structure for volume elements and the half-face data structure for volumetric meshes. This guarantees constant access time to the neighbors of the topological elements. In addition, an open-source implementation named Open Volumetric Mesh (OVM) of the proposed data structure is written in C++ using generic programming concepts.
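The half-face idea can be sketched in a few lines (in Python rather than OVM's C++, and without OVM's compact array-based layout): each tetrahedron contributes four oriented half-faces, and pairing the two oppositely oriented copies of every shared face gives constant-time access to the neighbouring cell. The class and field names below are hypothetical.

```python
class HalfFaceTetMesh:
    """Minimal half-face connectivity for a tetrahedral mesh: opposite
    half-faces (same vertex set, reversed orientation) are paired so
    the cell across any interior face is found in constant time."""

    def __init__(self, tets):
        self.tets = tets
        self.opposite = {}                       # half-face -> twin half-face
        owner = {}                               # canonical face -> half-face
        for c, (a, b, d, e) in enumerate(tets):
            # the four oriented faces of tet (a, b, d, e)
            for hf in [(b, d, e), (a, e, d), (a, b, e), (a, d, b)]:
                key = frozenset(hf)              # orientation-independent key
                if key in owner:                 # pair with the twin half-face
                    twin = owner.pop(key)
                    self.opposite[(c, hf)] = twin
                    self.opposite[twin] = (c, hf)
                else:
                    owner[key] = (c, hf)
        self.boundary = set(owner.values())      # unpaired = boundary faces

# Two tets glued along the shared face {1, 2, 3}
mesh = HalfFaceTetMesh([(0, 1, 2, 3), (1, 2, 3, 4)])
print(len(mesh.boundary))  # → 6 (8 half-faces, 2 paired across the shared face)
```

An array-based implementation like OVM replaces the dictionaries with indexed half-face arrays, which is what delivers the compactness and cache behaviour the abstract emphasizes.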

  17. Functionalized Nanofiber Meshes Enhance Immunosorbent Assays.

    Science.gov (United States)

    Hersey, Joseph S; Meller, Amit; Grinstaff, Mark W

    2015-12-01

    Three-dimensional substrates with high surface-to-volume ratios and subsequently large protein binding capacities are of interest for advanced immunosorbent assays utilizing integrated microfluidics and nanosensing elements. A library of bioactive and antifouling electrospun nanofiber substrates, which are composed of high-molecular-weight poly(oxanorbornene) derivatives, is described. Specifically, a set of copolymers are synthesized from three 7-oxanorbornene monomers to create a set of water insoluble copolymers with both biotin (bioactive) and triethylene glycol (TEG) (antifouling) functionality. Porous three-dimensional nanofiber meshes are electrospun from these copolymers with the ability to specifically bind streptavidin while minimizing the nonspecific binding of other proteins. Fluorescently labeled streptavidin is used to quantify the streptavidin binding capacity of each mesh type through confocal microscopy. A simplified enzyme-linked immunosorbent assay (ELISA) is presented to assess the protein binding capabilities and detection limits of these nanofiber meshes under both static conditions (26 h) and flow conditions (1 h) for a model target protein (i.e., mouse IgG) using a horseradish peroxidase (HRP) colorimetric assay. Bioactive and antifouling nanofiber meshes outperform traditional streptavidin-coated polystyrene plates under flow, validating their use in future advanced immunosorbent assays and their compatibility with microfluidic-based biosensors.

  18. Development of a novel high volume band compression injector for the analysis of complex samples like toxaphene pesticide.

    Science.gov (United States)

    Gagné, Jean-Pierre; Gouteux, Bruno; Bertrand, Michel J

    2009-01-16

    A new type of injector has been developed for gas chromatographic analysis. The injector has high volume and band compression (HVBC) capabilities useful for the analysis of complex samples. It consists essentially of a packed liner operated at room temperature, while a narrow heated zone axially scans the liner, selectively desorbing the compounds of interest. The scanning speed, distance and temperature of the zone are precisely controlled. The liner is connected to an interface that can vent the solvent or any undesirable compounds and transfer the analytes to an analytical column for separation and quantification. The injector is designed to be compatible with injection volumes from 1 to more than 250 µL. At a low sample volume of 1 µL, the injector performs competitively with the "on-column" and "split/splitless" injectors for the fatty acid methyl esters and toxaphene compounds tested. For higher volumes, the system produces a linear response according to the injected volume. In this exploratory study, the maximum volume injected appears to be limited by the saturation of the chromatographic system rather than by the design of the injector. The HVBC injector can also be used to conduct "in situ" pretreatment of the sample before its transfer to the analytical column. For instance, a toxaphene sample was successively fractionated, using the HVBC injector, into six sub-fractions characterized by simpler chromatograms than that of the original mixture. Finally, the ability of the HVBC injector to "freeze" the separation in time, allowing the analyst to complete the analysis at a later time, is also discussed.

  19. Improved method for collection of sputum for tuberculosis testing to ensure adequate sample volumes for molecular diagnostic testing.

    Science.gov (United States)

    Fisher, Mark; Dolby, Tania; Surtie, Shireen; Omar, Gaironesa; Hapeela, Nchimunya; Basu, Debby; DeWalt, Abby; Kelso, David; Nicol, Mark; McFall, Sally

    2017-04-01

    The quality and quantity of sputum collected has an important impact on the laboratory diagnosis of pulmonary TB. We conducted a pilot study to assess new collection cups for the collection of sputum for the diagnosis of pulmonary tuberculosis. The pilot study utilized the standard collection cup in South Africa, demonstrating a mean collection volume of 2.86 ± 2.36 (SD) ml for 198 samples; 19% of the specimens contained 5 ml. We designed and tested two novel sputum cups with a narrow bottom section and clear minimum and maximum markings to let patients and clinicians know whether sufficient sputum volume has been produced. The cups differed in their shape and manufacturing approach. The two options also support different mixing approaches being considered for a highly sensitive companion TB-screening assay under development at Northwestern University (XtracTB assay). Sputum was collected from 102 patients at Nolungile Youth Centre, Khayelitsha, Cape Town, South Africa, for a total of 204 samples. The mean volumes collected from the two cups were 2.70 ± 0.88 (SD) ml and 2.88 ± 0.89 (SD) ml. While the mean volumes of the current and novel cups are similar, the volume ranges collected with the novel cups were narrower, and 98% of the specimen volumes were within the target range. Only 4 samples contained >5 ml, but none were >6 ml, and none of the specimens contained <1 ml. The number of coughs that produced the samples, patient HIV and TB status, plus qualitative descriptions of the sputum specimens were also evaluated.

  20. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well-tested water model and a new united-atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues), but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
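
The partition coefficients discussed above follow from the difference in solvation free energies of the neutral solute in the two solvents; a minimal sketch of this standard thermodynamic relation (the solute free energies below are hypothetical values, not from the paper):

```python
# log10 P from solvation free energies (standard relation):
#   log10 P = (dG_water - dG_cyclohexane) / (2.303 * R * T)
# where dG are solvation free energies in kcal/mol and P is the
# water -> cyclohexane partition coefficient.

R_KCAL = 1.987204e-3  # gas constant, kcal/(mol K)

def log_p(dg_water, dg_cyclohexane, temperature=298.15):
    """Base-10 log of the water -> cyclohexane partition coefficient."""
    rt = R_KCAL * temperature
    return (dg_water - dg_cyclohexane) / (2.303 * rt)

# Hypothetical solute solvated 2 kcal/mol more favorably in cyclohexane:
print(round(log_p(-5.0, -7.0), 2))  # positive log P: prefers cyclohexane
```

A solute better solvated in water gives a negative log P by the same formula, which is the sign convention the SAMPL5 reference calculations use for transfer toward the organic phase.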

  1. Mesh Algorithms for PDE with Sieve I: Mesh Distribution

    Directory of Open Access Journals (Sweden)

    Matthew G. Knepley

    2009-01-01

    We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
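
The arrow-centric design can be illustrated with a toy incidence container (a drastic simplification of the Sieve C++ interface, with hypothetical method names): arrows record which points cover which, and cone/support/closure queries traverse that single relation.

```python
# Toy Sieve-like incidence container (illustrative simplification).
# An arrow (q, p) records that point q covers p, i.e. q is part of the
# boundary of p. cone and support are the two directions of the relation.
from collections import defaultdict

class Sieve:
    def __init__(self):
        self._cone = defaultdict(set)     # p -> points that bound p
        self._support = defaultdict(set)  # q -> points that q bounds

    def add_arrow(self, q, p):
        self._cone[p].add(q)
        self._support[q].add(p)

    def cone(self, p):
        return self._cone[p]

    def support(self, q):
        return self._support[q]

    def closure(self, p):
        """Transitive closure of cone: the full boundary of p."""
        seen, stack = set(), [p]
        while stack:
            for q in self._cone[stack.pop()]:
                if q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

# One triangle: cell 't' covered by edges, edges covered by vertices.
s = Sieve()
for e in ('e0', 'e1', 'e2'):
    s.add_arrow(e, 't')
s.add_arrow('v0', 'e0'); s.add_arrow('v1', 'e0')
s.add_arrow('v1', 'e1'); s.add_arrow('v2', 'e1')
s.add_arrow('v2', 'e2'); s.add_arrow('v0', 'e2')
assert s.closure('t') == {'e0', 'e1', 'e2', 'v0', 'v1', 'v2'}
assert s.support('v0') == {'e0', 'e2'}
```

Because meshes, partitions, and multigrid hierarchies are all just sets of arrows in this view, the same generic traversal code serves every one of them, which is the reuse benefit the abstract claims.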

  2. Near-infrared diffuse reflectance spectroscopy with sample spots and chemometrics for fast determination of bovine serum albumin in micro-volume samples

    Institute of Scientific and Technical Information of China (English)

    Cai-Jing Cui; Wen-Sheng Cai; Xue-Guang Shao

    2013-01-01

    Near-infrared diffuse reflectance spectroscopy (NIRDRS) has attracted increasing attention for analyzing the components of samples with complex matrices. However, to apply this technique to micro-analysis, some obstacles remain to be overcome, such as low sensitivity and spectral overlapping. A method for fast determination of bovine serum albumin (BSA) in micro-volume samples was studied using NIRDRS with sample spots and chemometric techniques. A 10 μL sample spotted on a filter paper substrate was used for the spectral measurements. Quantitative analysis was obtained by partial least squares (PLS) regression with signal processing and variable selection. The results show that the correlation coefficient (R) between the predicted and the reference concentrations is 0.9897, and the recoveries are in the range of 87.4%-114.4% for the validation samples in the concentration range of 0.61-8.10 mg/mL. These results suggest that the method has the potential to quickly measure proteins in micro-volume solutions.
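
The PLS calibration step can be sketched with a minimal single-response NIPALS implementation on synthetic "spectra" (illustrative only; this is not the authors' preprocessing or variable-selection pipeline, and the data are simulated):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 via NIPALS deflation; returns (coefficients, means)."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                    # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                       # scores
        tt = t @ t
        p = Xc.T @ t / tt                # X loadings
        q = (yc @ t) / tt                # y loading
        Xc = Xc - np.outer(t, p)         # deflate
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # regression coefficients
    return B, X.mean(0), y.mean()

def pls1_predict(X, model):
    B, x_mean, y_mean = model
    return (X - x_mean) @ B + y_mean

# Synthetic data: 40 samples x 50 wavelengths, one Gaussian "band"
# whose amplitude tracks concentration, plus measurement noise.
rng = np.random.default_rng(0)
conc = rng.uniform(0.5, 8.0, 40)
band = np.exp(-0.5 * ((np.arange(50) - 25) / 4.0) ** 2)
X = np.outer(conc, band) + 0.01 * rng.standard_normal((40, 50))
model = pls1_fit(X, conc, 2)
pred = pls1_predict(X, model)
r = np.corrcoef(pred, conc)[0, 1]
assert r > 0.99  # near-perfect calibration on clean synthetic data
```

Real NIR work adds the steps the abstract mentions (signal processing and variable selection) before the PLS fit, and validates on held-out spectra rather than the training set.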

  3. Remedial investigation sampling and analysis plan for J-Field, Aberdeen Proving Ground, Maryland. Volume 1: Field Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    Benioff, P.; Biang, R.; Dolak, D.; Dunn, C.; Martino, L.; Patton, T.; Wang, Y.; Yuen, C.

    1995-03-01

    The Environmental Management Division (EMD) of Aberdeen Proving Ground (APG), Maryland, is conducting a remedial investigation and feasibility study (RI/FS) of the J-Field area at APG pursuant to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), as amended. J-Field is within the Edgewood Area of APG in Harford County, Maryland (Figure 1.1). Since World War II, activities in the Edgewood Area have included the development, manufacture, testing, and destruction of chemical agents and munitions. These materials were destroyed at J-Field by open burning and open detonation (OB/OD). Considerable archival information about J-Field exists as a result of efforts by APG staff to characterize the hazards associated with the site. Contamination of J-Field was first detected during an environmental survey of the Edgewood Area conducted in 1977 and 1978 by the US Army Toxic and Hazardous Materials Agency (USATHAMA) (predecessor to the US Army Environmental Center [AEC]). As part of a subsequent USATHAMA environmental survey, 11 wells were installed and sampled at J-Field. Contamination at J-Field was also detected during a munitions disposal survey conducted by Princeton Aqua Science in 1983. The Princeton Aqua Science investigation involved the installation and sampling of nine wells and the collection and analysis of surficial and deep composite soil samples. In 1986, a Resource Conservation and Recovery Act (RCRA) permit (MD3-21-002-1355) requiring a basewide RCRA Facility Assessment (RFA) and a hydrogeologic assessment of J-Field was issued by the US Environmental Protection Agency (EPA). In 1987, the US Geological Survey (USGS) began a two-phased hydrogeologic assessment in which data were collected to model groundwater flow at J-Field. Soil gas investigations were conducted, several well clusters were installed, a groundwater flow model was developed, and groundwater and surface water monitoring programs were established that continue today.

  4. Rapid determination of benzene derivatives in water samples by trace volume solvent DLLME prior to GC-FID

    Energy Technology Data Exchange (ETDEWEB)

    Diao, Chun Peng; Wei, Chao Hai; Feng, Chun Hua [South China Univ. of Technology, Guangzhou Higher Education Mega Center (China). College of Environmental Science and Engineering; Guangdong Regular Higher Education Institutions, Guangzhou (China). Key Lab. of Environmental Protection and Eco-Remediation

    2012-05-15

    An inexpensive, simple and environmentally friendly method based on dispersive liquid-liquid microextraction (DLLME) for rapid determination of benzene derivatives in water samples was proposed. A significant improvement of the DLLME procedure was achieved. A trace volume of ethyl acetate (60 µL) was exploited as the dispersion solvent instead of common ones such as methanol and acetone, which require volumes of more than 0.5 mL, so the organic solvent required in DLLME was greatly reduced. Only 83 µL of organic solvent was consumed in the whole analytical process, and the preconcentration procedure took less than 10 min. The approach, coupled with a gas chromatograph-flame ionization detector, was proposed for the rapid determination of benzene, toluene, ethylbenzene and xylene isomers in water samples. Results showed that the proposed approach is an efficient method for the rapid determination of benzene derivatives in aqueous samples. (orig.)

  5. Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution

    Energy Technology Data Exchange (ETDEWEB)

    Delzanno, G L [Los Alamos National Laboratory; Finn, J M [Los Alamos National Laboratory

    2009-01-01

    Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to widespread deployment of the technique has been the formulation of robust, reliable mesh-motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1, 2], the advantages of this approach with regard to these points were demonstrated for the time-independent case. Here we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh-motion approaches, without resorting to ad hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3, 4]). We explore two distinct r-refinement implementations of MK: direct, where the current mesh relates to an initial, unchanging mesh, and sequential, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a paradigmatic hyperbolic PDE, the advection of a passive scalar. Imposed velocity flow fields of varying vorticity levels and flow shears are considered.
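
The volume equidistribution principle has a simple one-dimensional analogue (a sketch of equidistribution only, not of the Monge-Kantorovich solver itself): place nodes so that every cell carries an equal share of a monitor function's integral, which concentrates resolution where the monitor is large.

```python
import numpy as np

def equidistribute(monitor, a, b, n_nodes, resolution=10001):
    """Place n_nodes in [a, b] so each cell carries an equal share of the
    monitor function's integral (the 1D equidistribution principle)."""
    x = np.linspace(a, b, resolution)
    m = monitor(x)
    # Cumulative integral of the monitor (trapezoid rule), normalized to 1.
    cdf = np.concatenate([[0.0],
                          np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
    cdf /= cdf[-1]
    # Invert the cumulative integral at equally spaced target fractions.
    targets = np.linspace(0.0, 1.0, n_nodes)
    return np.interp(targets, cdf, x)

# A monitor peaked at x = 0.5 pulls nodes toward the peak.
nodes = equidistribute(lambda x: 1.0 + 50.0 * np.exp(-200 * (x - 0.5) ** 2),
                       0.0, 1.0, 21)
widths = np.diff(nodes)
assert widths.min() < widths.max() / 3  # cells shrink near the peak
```

In multiple dimensions this inversion is no longer trivial, which is exactly where the Monge-Kantorovich formulation of the paper comes in: it picks, among all volume-equidistributing maps, the one closest to the identity, avoiding tangling.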

  6. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling

    DEFF Research Database (Denmark)

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R

    2015-01-01

    OBJECTIVE: To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. DESIGN: Prospec...

  7. [Meta-Mesh: metagenomic data analysis system].

    Science.gov (United States)

    Su, Xiaoquan; Song, Baoxing; Wang, Xuetao; Ma, Xinle; Xu, Jian; Ning, Kang

    2014-01-01

    With the current accumulation of metagenome data, it is possible to build an integrated platform for processing rigorously selected metagenomic samples (also referred to as "metagenomic communities" here) of interest. Any metagenomic sample could then be searched against this database to find the most similar sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories rather than well-annotated databases, and offer few functions for analysis. On the other hand, the few available methods to measure the similarity of metagenomic data can only compare a few pre-defined sets of metagenomes. It has long intrigued scientists to effectively calculate similarities between microbial communities in a large repository, to examine how similar these samples are and to find correlations with the meta-information of these samples. In this work we propose a novel system, Meta-Mesh, which includes a metagenomic database and its companion analysis platform that can systematically and efficiently analyze, compare and search for similar metagenomic samples. In the database part, we have collected more than 7 000 high-quality and well-annotated metagenomic samples from the public domain and in-house facilities. The analysis platform supplies a list of online tools that accept metagenomic samples, build taxonomical annotations, compare samples from multiple angles, and then search for similar samples against its database using a fast indexing strategy and scoring function. We also used case studies of "database search for identification" and "samples clustering based on similarity matrix" using human-associated habitat samples to demonstrate the performance of Meta-Mesh in metagenomic analysis. Therefore, Meta-Mesh would serve as a database and data analysis system to quickly parse and identify similar

  8. Sierra Toolkit computational mesh conceptual model.

    Energy Technology Data Exchange (ETDEWEB)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  9. Development of a Solid Phase Extraction Method for Agricultural Pesticides in Large-Volume Water Samples

    Science.gov (United States)

    An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water sa...

  10. Integrating Girl Child Issues into Population Education: Strategies and Sample Curriculum and Instructional Materials. Volume 2.

    Science.gov (United States)

    United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand).

    One of the most important vehicles for promoting the concerns of the "girl child" and the elimination of gender bias is through education, and since programs in population education are being funded all over the world, population education is a suitable and effective medium for integrating messages on the girl child. This two-volume publication…

  11. Kinetic Solvers with Adaptive Mesh in Phase Space

    CERN Document Server

    Arslanbekov, Robert R; Frolova, Anna A

    2013-01-01

    An Adaptive Mesh in Phase Space (AMPS) methodology has been developed for solving multi-dimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a tree-of-trees data structure. The mesh in r-space is automatically generated around embedded boundaries and dynamically adapted to local solution properties. The mesh in v-space is created on-the-fly for each cell in r-space. Mappings between neighboring v-space trees are implemented for the advection operator in configuration space. We have developed new algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the full Boltzmann collision integral with a dynamically adaptive mesh in velocity space: importance sampling, a multi-point projection method, and the variance reduction method. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic...

  12. Reference Equations for Static Lung Volumes and TLCO from a Population Sample in Northern Greece.

    Science.gov (United States)

    Michailopoulos, Pavlos; Kontakiotis, Theodoros; Spyratos, Dionisios; Argyropoulou-Pataka, Paraskevi; Sichletidis, Lazaros

    2015-02-14

    Background: The most commonly used reference equations for the measurement of static lung volumes/capacities and transfer factor of the lung for CO (TLCO) are based on studies that are around 30-40 years old and have significant limitations. Objectives: Our aim was to (1) develop reference equations for static lung volumes and TLCO using the current American Thoracic Society/European Respiratory Society guidelines, and (2) compare the derived equations with those most commonly used. Methods: Healthy Caucasian subjects (234 males and 233 females) aged 18-91 years were recruited. All of them were healthy never-smokers with a normal chest X-ray. Static lung volumes and TLCO were measured with a single-breath technique according to the latest guidelines. Results: Curvilinear regression prediction equations derived from the present study were compared with those that are most commonly used. Our reference equations, in accordance with the latest studies, show lower values for all static lung volume parameters and TLCO, as well as a different pattern of deviation of those parameters (i.e., total lung capacity declining with age, TLCO declining with age in both sexes, and functional residual capacity rising with age in males). Conclusions: We suggest that old reference values of static lung volumes and TLCO should be updated, and our perception of the deviation of some spirometric parameters should be revised. Our new curvilinear reference equations, derived according to the latest guidelines, could contribute to the updating by respiratory societies of old existing reference values and result in a better estimation of the lung function of contemporary populations with similar Caucasian characteristics. © 2015 S. Karger AG, Basel.

  13. Gossiping on meshes and tori

    OpenAIRE

    Sibeyn, J.; Rao, P; Juurlink, B.

    1996-01-01

    Algorithms for performing gossiping on one- and higher-dimensional meshes are presented. As a routing model, we assume the practically important wormhole routing. For one-dimensional arrays and rings, we give a novel lower bound and an asymptotically optimal gossiping algorithm for all choices of the parameters involved. For two-dimensional meshes and tori, several simple algorithms composed of one-dimensional phases are presented. For an important range of packet and mesh sizes it gives cle...
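
The one-dimensional case can be illustrated with a toy store-and-forward gossip on a unidirectional ring (ignoring the wormhole-routing cost model the paper analyzes): in each round every node forwards the item it most recently received to its right neighbor, and all-to-all dissemination completes in n - 1 rounds.

```python
def ring_gossip(n):
    """Simulate gossiping on a unidirectional ring of n nodes.

    Each node starts with one unique item; per round, every node passes
    the item it received in the previous round (initially its own) to its
    right neighbor. Returns the number of rounds until every node knows
    all n items, which is n - 1."""
    known = [{i} for i in range(n)]   # items known at each node
    in_transit = list(range(n))       # item each node forwards next
    rounds = 0
    while any(len(k) < n for k in known):
        received = [None] * n
        for i in range(n):
            received[(i + 1) % n] = in_transit[i]  # send to the right
        for i in range(n):
            known[i].add(received[i])
        in_transit = received
        rounds += 1
    return rounds

assert ring_gossip(8) == 7  # matches the n - 1 round count for a ring
```

Bidirectional links roughly halve the round count, and the two-dimensional algorithms in the paper compose such one-dimensional phases along rows and then columns.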

  14. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    Energy Technology Data Exchange (ETDEWEB)

    Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally

  15. Synthesized Optimization of Triangular Mesh

    Institute of Scientific and Technical Information of China (English)

    HU Wenqiang; YANG Wenyu

    2006-01-01

    Triangular meshes are often used to describe geometric objects as computational models in digital manufacturing, so a mesh model with both uniform triangular shape and excellent geometric shape is expected. In practice, however, the optimization of triangular shape often conflicts with that of geometric shape. In this paper, a synthesized optimizing algorithm is presented that subdivides triangles to achieve a trade-off between the geometric and triangular shape optimization of the mesh model. A resulting mesh with uniform triangular shape and excellent topology is obtained.

  16. Form-finding with polyhedral meshes made simple

    KAUST Repository

    Tang, Chengcheng

    2014-07-27

    We solve the form-finding problem for polyhedral meshes in a way which combines form, function and fabrication; taking care of user-specified constraints like boundary interpolation, planarity of faces, statics, panel size and shape, enclosed volume, and last, but not least, cost. Our main application is the interactive modeling of meshes for architectural and industrial design. Our approach can be described as guided exploration of the constraint space whose algebraic structure is simplified by introducing auxiliary variables and ensuring that constraints are at most quadratic. Computationally, we perform a projection onto the constraint space which is biased towards low values of an energy which expresses desirable "soft" properties like fairness. We have created a tool which elegantly handles difficult tasks, such as taking boundary-alignment of polyhedral meshes into account, planarization, fairing under planarity side conditions, handling hybrid meshes, and extending the treatment of static equilibrium to shapes which possess overhanging parts.

  17. Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation.

    Science.gov (United States)

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2012-12-01

    Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing "bad" triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method.

  18. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
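
The variables-technique decision rule behind such calculators can be sketched for the one-sided, standard-deviation-unknown case (the k value below is an illustrative placeholder; real plans read k from tables for a chosen sample size and acceptable quality level):

```python
import statistics

def accept_lot(measurements, upper_spec_limit, k):
    """One-sided acceptance sampling by variables (sigma unknown):
    accept the lot if (USL - mean) / s >= k, where s is the sample
    standard deviation and k is the acceptability constant of the plan."""
    mean = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return (upper_spec_limit - mean) / s >= k

# Illustrative plan with k = 1.5 against an upper spec limit of 12.0.
good_lot = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.0, 9.9]
bad_lot = [11.6, 11.9, 12.1, 11.8, 12.0, 11.7, 12.2, 11.9]
assert accept_lot(good_lot, 12.0, k=1.5) is True   # mean well below USL
assert accept_lot(bad_lot, 12.0, k=1.5) is False   # mean crowds the USL
```

The attraction over attributes sampling, and the reason the NESC assessment recommends it, is that the same statistical protection is achieved with far smaller sample sizes because each measurement carries magnitude information, not just pass/fail.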

  19. Mesh Algorithms for PDE with Sieve I: Mesh Distribution

    CERN Document Server

    Knepley, Matthew G

    2009-01-01

    We have developed a new programming framework, called Sieve, to support parallel numerical PDE algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete des...

  20. Quantitative pH assessment of small-volume samples using a universal pH indicator.

    Science.gov (United States)

    Brown, Jeffrey D; Bell, Nathaniel; Li, Victoria; Cantrell, Kevin

    2014-10-01

    We developed a hue-based pH determination method to analyze digital images of samples in a 384-well plate after the addition of a universal pH indicator. The standard error of calibration for 69 pH standards was 0.078 pH units, and no sample gave an error greater than 0.23 units. We then used in-solution isoelectric focusing to determine the isoelectric point of Wnt3A protein in conditioned medium and after purification and applied the described method to assess the pH of these small-volume samples. End users may access our standard to assay the pH of their own samples with no additional calibration. Copyright © 2014 Elsevier Inc. All rights reserved.
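
The hue-based determination can be sketched as hue extraction from the well image plus interpolation against a calibration curve (the calibration points below are hypothetical placeholders, not the paper's 69-standard fit):

```python
import colorsys

# Hue-based pH estimation: convert a well's RGB color to hue, then
# interpolate against a monotone hue -> pH calibration curve built
# from imaged pH standards. These calibration pairs are hypothetical.
CALIBRATION = [  # (hue in [0, 1], pH)
    (0.00, 2.0),   # red
    (0.10, 4.0),   # orange
    (0.17, 6.0),   # yellow
    (0.33, 8.0),   # green
    (0.55, 10.0),  # blue
    (0.75, 12.0),  # violet
]

def rgb_to_ph(r, g, b):
    """Estimate pH from an 8-bit RGB sample by hue interpolation."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if hue <= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    for (h0, p0), (h1, p1) in zip(CALIBRATION, CALIBRATION[1:]):
        if hue <= h1:
            return p0 + (p1 - p0) * (hue - h0) / (h1 - h0)
    return CALIBRATION[-1][1]

# A pure green well (hue = 1/3) lands near the pH 8.0 calibration point.
assert abs(rgb_to_ph(0, 255, 0) - 8.0) < 0.1
```

Using hue rather than raw RGB channels is what makes the reading robust to brightness variation between images, which is why a published calibration can transfer to end users' plates without refitting.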

  1. Recovery of Cryptosporidium oocysts from small and large volume water samples using a compressed foam filter system.

    Science.gov (United States)

    Sartory, D P; Parton, A; Parton, A C; Roberts, J; Bergmann, K

    1998-12-01

    A novel filter system comprising open cell reticulated foam rings compressed between retaining plates and fitted into a filtration housing was evaluated for the recovery of oocysts of Cryptosporidium from water. Mean recoveries of 90.2% from seeded small and large volume (100-2000 l) tap water samples, and 88.8% from 10-20 l river water samples, were achieved. Following a simple potassium citrate flotation concentrate clean-up procedure, mean recoveries were 56.7% for the tap water samples and 60.9% for river water samples. This represents a marked improvement in capture and recovery of Cryptosporidium oocysts from water compared with conventional polypropylene wound cartridge filters and membrane filters.

  2. Prevention of Adhesion to Prosthetic Mesh

    Science.gov (United States)

    van ’t Riet, Martijne; de Vos van Steenwijk, Peggy J.; Bonthuis, Fred; Marquet, Richard L.; Steyerberg, Ewout W.; Jeekel, Johannes; Bonjer, H. Jaap

    2003-01-01

    Objective To assess whether use of antiadhesive liquids or coatings could prevent adhesion formation to prosthetic mesh. Summary Background Data Incisional hernia repair frequently involves the use of prosthetic mesh. However, concern exists about development of adhesions between viscera and the mesh, predisposing to intestinal obstruction or enterocutaneous fistulas. Methods In 91 rats, a defect in the muscular abdominal wall was created, and mesh was fixed intraperitoneally to cover the defect. Rats were divided in five groups: polypropylene mesh only (control group), addition of Sepracoat or Icodextrin solution to polypropylene mesh, Sepramesh (polypropylene mesh with Seprafilm coating), and Parietex composite mesh (polyester mesh with collagen coating). Seven and 30 days postoperatively, adhesions were assessed and wound healing was studied by microscopy. Results Intraperitoneal placement of polypropylene mesh was followed by bowel adhesions to the mesh in 50% of the cases. A mean of 74% of the mesh surface was covered by adhesions after 7 days, and 48% after 30 days. Administration of Sepracoat or Icodextrin solution had no influence on adhesion formation. Coated meshes (Sepramesh and Parietex composite mesh) had no bowel adhesions. Sepramesh was associated with a significant reduction of the mesh surface covered by adhesions after 7 and 30 days. Infection was more prevalent with Parietex composite mesh, with concurrent increased mesh surface covered by adhesions after 30 days (78%). Conclusions Sepramesh significantly reduced mesh surface covered by adhesions and prevented bowel adhesion to the mesh. Parietex composite mesh prevented bowel adhesions as well but increased infection rates in the current model. PMID:12496539

  3. Testing of high-volume sampler inlets for the sampling of atmospheric radionuclides.

    Science.gov (United States)

    Irshad, Hammad; Su, Wei-Chung; Cheng, Yung S; Medici, Fausto

    2006-09-01

    Sampling of air for radioactive particles is one of the most important techniques used to determine the nuclear debris from a nuclear weapon test in the Earth's atmosphere or those particles vented from underground or underwater tests. Massive-flow air samplers are used to sample air for any indication of radionuclides that are a signature of nuclear tests. The International Monitoring System of the Comprehensive Nuclear Test Ban Treaty Organization includes seismic, hydroacoustic, infrasound, and gaseous xenon isotopes sampling technologies, in addition to radionuclide sampling, to monitor for any violation of the treaty. Lovelace Respiratory Research Institute has developed a large wind tunnel to test the outdoor radionuclide samplers for the International Monitoring System. The inlets for these samplers are tested for their collection efficiencies for different particle sizes at various wind speeds. This paper describes the results from the testing of two radionuclide sampling units used in the International Monitoring System. The possible areas of depositional wall losses are identified and the losses in these areas are determined. Sampling inlet type 1 was tested at 2.2 m s(-1) wind speed for 5, 10, and 20-microm aerodynamic diameter particles. The global collection efficiency was about 87.6% for 10-microm particles for sampling inlet type 1. Sampling inlet type 2 was tested at three wind speeds of 0.56, 2.2, and 6.6 m s(-1) for 5, 10, and 20-microm aerodynamic diameter particles in two different configurations (sampling head lowered and raised). The global collection efficiencies for these configurations for 10-microm particles at 2.2 m s(-1) wind speed were 77.4% and 82.5%, respectively. The sampling flow rate was 600 m(3) h(-1) for both sampling inlets.

  4. Risk Factors for Mesh Exposure after Transvaginal Mesh Surgery

    Institute of Scientific and Technical Information of China (English)

    Ke Niu; Yong-Xian Lu; Wen-Jie Shen; Ying-Hui Zhang; Wen-Ying Wang

    2016-01-01

    Background: Mesh exposure after surgery continues to be a clinical challenge for urogynecological surgeons. The purpose of this study was to explore the risk factors for polypropylene (PP) mesh exposure after transvaginal mesh (TVM) surgery. Methods: This study included 195 patients with advanced pelvic organ prolapse (POP), who underwent TVM from January 2004 to December 2012 at the First Affiliated Hospital of Chinese PLA General Hospital. Clinical data were evaluated including patient demography, TVM type, concomitant procedures, operation time, blood loss, postoperative morbidity, and mesh exposure. Mesh exposure was identified through postoperative vaginal examination. Statistical analysis was performed to identify risk factors for mesh exposure. Results: Two hundred and nine transvaginal PP meshes were placed, including 194 in the anterior wall and 15 in the posterior wall. Concomitant tension-free vaginal tape was performed in 61 cases. The mean follow-up time was 35.1 ± 23.6 months. PP mesh exposure was identified in 32 cases (16.4%), with 31 in the anterior wall and 1 in the posterior wall. A significant difference was found in operating time and concomitant procedures between the exposed and nonexposed groups (F = 7.443, P = 0.007; F = 4.307, P = 0.039, respectively). Binary logistic regression revealed that the number of concomitant procedures and operation time were risk factors for mesh exposure (P = 0.001, P = 0.043). Conclusion: Concomitant procedures and increased operating time increase the risk for postoperative mesh exposure in patients undergoing TVM surgery for POP.

  5. Parameterization for fitting triangular mesh

    Institute of Scientific and Technical Information of China (English)

    LIN Hongwei; WANG Guojin; LIU Ligang; BAO Hujun

    2006-01-01

    In recent years, with the development of 3D data acquisition equipment, the study of reverse engineering has become more and more important. However, existing parameterization methods can hardly ensure that the parametric domain is rectangular and that the parametric curve grid is regular. To overcome these limitations, we present a novel method for the parameterization of triangular meshes. The basic idea is twofold: first, because the isotherms of a steady temperature field do not intersect each other and are distributed uniformly, no singularity (fold-over) exists in the parameterization; second, a 3D harmonic equation is solved by the finite element method to obtain the steady temperature field on a 2D triangular mesh surface with four boundaries. Our method therefore avoids the difficulty that the 2D quasi-harmonic equation cannot be solved on a 2D triangular mesh without parametric values at the mesh vertices. Furthermore, the isotherms of the temperature field are taken as one set of iso-parametric curves on the triangular mesh surface. The other set of iso-parametric curves is obtained by sequentially connecting the points with the same chord length on the isotherms. The resulting parametric curve grid is regular and distributed uniformly, and maps the triangular mesh surface to the unit square domain, with the boundaries of the mesh surface mapped to the boundaries of the parametric domain, which ensures that the triangular mesh surface or point cloud can be fitted with a NURBS surface.
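
    The role of the steady temperature field can be seen in a minimal finite-difference sketch. The authors solve a 3D harmonic equation by FEM on the mesh surface itself; this square-grid version is illustrative only.

```python
import numpy as np

# Jacobi relaxation of the Laplace equation on a square grid with
# Dirichlet boundary data, standing in for the paper's FEM solve.
n = 21
x = np.linspace(0.0, 1.0, n)
T = np.tile(x, (n, 1))  # boundary rows/columns carry the Dirichlet data
for _ in range(500):    # interior update; boundaries stay fixed
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                            + T[1:-1, :-2] + T[1:-1, 2:])

# Isotherms T = const give one family of iso-parametric curves; because the
# field is harmonic they are nested and never cross (no fold-over).
row = T[n // 2]
j = int(np.argmin(np.abs(row - 0.5)))  # column crossed by the T = 0.5 isotherm
print(j)
```

    Here the boundary data are a linear ramp, so the isotherms come out evenly spaced; on a curved mesh surface the isotherms bend but remain non-crossing.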

  6. Guaranteed-Quality Triangular Meshes

    Science.gov (United States)

    1989-04-01

    Guaranteed-Quality Triangular Meshes, L. Paul Chew, TR 89-983. Supported by the Defense Advanced Research Projects Agency; the views expressed are not necessarily those of DARPA or the U.S. Government. Cited work includes: ... Wittchen, M. S. Shephard, K. R. Grice, and M. A. Yerry, Robust, geometrically based, automatic two-dimensional mesh generation, International Journal for

  7. An Improved Moving Mesh Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We consider an iterative algorithm of mesh optimization for finite element solutions, and give an improved moving mesh strategy that rapidly reduces the complexity and cost of solving variational problems. A numerical result is presented for a 2-dimensional problem solved by the improved algorithm.

  8. Adaptive and Unstructured Mesh Cleaving

    Science.gov (United States)

    Bronson, Jonathan R.; Sastry, Shankar P.; Levine, Joshua A.; Whitaker, Ross T.

    2015-01-01

    We propose a new strategy for boundary conforming meshing that decouples the problem of building tetrahedra of proper size and shape from the problem of conforming to complex, non-manifold boundaries. This approach is motivated by the observation that while several methods exist for adaptive tetrahedral meshing, they typically have difficulty at geometric boundaries. The proposed strategy avoids this conflict by extracting the boundary conforming constraint into a secondary step. We first build a background mesh having a desired set of tetrahedral properties, and then use a generalized stenciling method to divide, or “cleave”, these elements to get a set of conforming tetrahedra, while limiting the impacts cleaving has on element quality. In developing this new framework, we make several technical contributions including a new method for building graded tetrahedral meshes as well as a generalization of the isosurface stuffing and lattice cleaving algorithms to unstructured background meshes. PMID:26137171

  9. Nuclear waste calorimeter for very large drums with 385 litres sample volume

    Energy Technology Data Exchange (ETDEWEB)

    Jossens, G.; Mathonat, C. [SETARAM Instrumentation, Caluire (France); Bachelet, F. [CEA Valduc, Is sur Tille (France)

    2015-03-15

    Calorimetry is a very precise and well-adapted tool for classifying drums containing nuclear waste material according to their level of activity (low, medium, high). A new calorimeter has been developed by SETARAM Instrumentation and the CEA Valduc in France. This new calorimeter is designed for drums with a volume larger than 100 litres. It guarantees high operator safety by optimizing drum handling and air circulation for cooling, and uses optimized software for direct measurement of the quantity of nuclear material. The LVC1380 calorimeter makes it possible to work over the range 10 to 3000 mW, which corresponds to approximately 0.03 to 10 g of tritium or 3 to 955 g of {sup 241}Pu in a volume up to 385 litres. This calorimeter is based on heat flow measurement using Peltier elements that surround the drum in all 3 dimensions and therefore capture all the heat coming from the radioactive material, whatever its position inside the drum. The calorimeter's insulating layers constitute a thermal barrier designed to filter disturbances until they represent less than 0.001 °C and to eliminate long-term disturbances associated, for example, with laboratory temperature variations between day and night. A calibration device based on the Joule effect has also been designed. Measurement time has been optimized but remains long compared with other measurement methods such as gamma spectrometry; its main asset is good accuracy for low-level activities.
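
    Converting a measured heat flow to a material mass is a division by the nuclide's specific power. The factors below are assumptions for illustration: ~324 mW/g is a nominal literature value for tritium, and ~3.14 mW/g for {sup 241}Pu is back-computed from the abstract's own endpoints (3000 mW for 955 g).

```python
# Specific powers in mW per gram (assumed values; see lead-in).
SPECIFIC_POWER_MW_PER_G = {"H-3": 324.0, "Pu-241": 3.14}

def mass_grams(power_mw, nuclide):
    """Mass of heat-producing material implied by a heat-flow reading."""
    return power_mw / SPECIFIC_POWER_MW_PER_G[nuclide]

print(round(mass_grams(3000.0, "Pu-241")))  # full-scale 3000 mW reading
```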

  10. On-chip polarimetry for high-throughput screening of nanoliter and smaller sample volumes

    Science.gov (United States)

    Bornhop, Darryl J. (Inventor); Dotson, Stephen (Inventor); Bachmann, Brian O. (Inventor)

    2012-01-01

    A polarimetry technique for measuring optical activity that is particularly suited for high throughput screening employs a chip or substrate (22) having one or more microfluidic channels (26) formed therein. A polarized laser beam (14) is directed onto optically active samples that are disposed in the channels. The incident laser beam interacts with the optically active molecules in the sample, which slightly alter the polarization of the laser beam as it passes multiple times through the sample. Interference fringe patterns (28) are generated by the interaction of the laser beam with the sample and the channel walls. A photodetector (34) is positioned to receive the interference fringe patterns and generate an output signal that is input to a computer or other analyzer (38) for analyzing the signal and determining the rotation of plane polarized light by optically active material in the channel from polarization rotation calculations.

  11. Watermarking on 3D mesh based on spherical wavelet transform

    Institute of Scientific and Technical Information of China (English)

    金剑秋; 戴敏雅; 鲍虎军; 彭群生

    2004-01-01

    In this paper we propose a robust watermarking algorithm for 3D mesh. The algorithm is based on spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, embedding watermark, spherical wavelet inverse transform, and at last resampling the mesh watermarked to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.

  12. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schofield, Samuel P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.

  13. Cellulose Nanofibre Mesh for Use in Dental Materials

    Directory of Open Access Journals (Sweden)

    Anthony J. Ireland

    2012-07-01

    The aim of this study was to produce a 3D mesh of defect-free electrospun cellulose acetate nanofibres and to use this to produce a prototype composite resin containing nanofibre fillers. This might find use as an aesthetic orthodontic bracket material or a composite veneer for restorative dentistry. In this laboratory-based study, cellulose acetate was dissolved in an acetone and dimethylacetamide solvent solution and electrospun. The spinning parameters were optimised and lithium chloride was added to the solution to produce a self-supporting nanofibre mesh. This mesh was then silane coated and infiltrated with either epoxy resin or an unfilled Bis-GMA resin. The flexural strength of the produced samples was measured and compared to that of unfilled resin samples. Using this method, cellulose acetate nanofibres were successfully electrospun in the 286 nm range. However, resin infiltration of the mesh resulted in samples with a flexural strength less than that of the unfilled control samples. Air inclusion during preparation and incomplete wetting of the nanofibre mesh were thought to cause this reduction in flexural strength. Further work is required to reduce the air inclusions before the true effect of resin reinforcement with a 3D mesh of cellulose acetate nanofibres can be determined.

  14. MCNP ESTIMATE OF THE SAMPLED VOLUME IN A NON-DESTRUCTIVE IN SITU SOIL CARBON ANALYSIS.

    Energy Technology Data Exchange (ETDEWEB)

    WIELOPOLSKI, L.; DIOSZEGI, I.; MITRA, S.

    2004-05-03

    Global warming, promoted by anthropogenic CO{sub 2} emission into the atmosphere, is partially mitigated by the photosynthesis processes of terrestrial ecosystems, which act as atmospheric CO{sub 2} scrubbers and sequester carbon in soil. Switching from till to no-till soil management practices in agriculture further augments this process. Carbon sequestration is also advanced by putting forward a carbon "credit" system whereby credits can be traded between CO{sub 2} producers and sequesters. Implementation of carbon "credit" trade will be further promulgated by the recent development of a non-destructive in situ carbon monitoring system based on inelastic neutron scattering (INS). Volumes and depth distributions defined by the 0.1, 1.0, 10, 50, and 90 percent neutron isofluxes, from a point source located either 5 or 30 cm above the surface, were estimated using Monte Carlo calculations.

  15. Density-viscosity product of small-volume ionic liquid samples using quartz crystal impedance analysis.

    Science.gov (United States)

    McHale, Glen; Hardacre, Chris; Ge, Rile; Doy, Nicola; Allen, Ray W K; MacInnes, Jordan M; Bown, Mark R; Newton, Michael I

    2008-08-01

    Quartz crystal impedance analysis has been developed as a technique to assess whether room-temperature ionic liquids are Newtonian fluids and as a small-volume method for determining the values of their viscosity-density product, rho eta. Changes in the impedance spectrum of a 5-MHz fundamental frequency quartz crystal induced by a water-miscible room-temperature ionic liquid, 1-butyl-3-methylimidazolium trifluoromethylsulfonate ([C4mim][OTf]), were measured. From coupled frequency shift and bandwidth changes as the concentration was varied from 0 to 100% ionic liquid, it was determined that this liquid provided a Newtonian response. A second water-immiscible ionic liquid, 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [C4mim][NTf2], with concentration varied using methanol, was tested and also found to provide a Newtonian response. In both cases, the values of the square root of the viscosity-density product deduced from the small-volume quartz crystal technique were consistent with those measured using a viscometer and density meter. The third harmonic of the crystal was found to provide the closest agreement between the two measurement methods; the pure ionic liquids had the largest difference of approximately 10%. In addition, 18 pure ionic liquids were tested, and for 11 of these, good-quality frequency shift and bandwidth data were obtained; these 11 all had a Newtonian response. The frequency shift of the third harmonic was found to vary linearly with the square root of the viscosity-density product of the pure ionic liquids up to a value of square root(rho eta) approximately 18 kg m(-2) s(-1/2), but with a slope 10% smaller than that predicted by the Kanazawa and Gordon equation. It is envisaged that the quartz crystal technique could be used in a high-throughput microfluidic system for characterizing ionic liquids.
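
    For reference, the Kanazawa and Gordon equation mentioned above predicts the liquid-loading frequency shift from the density-viscosity product; a quick check for a 5 MHz crystal in water recovers the textbook value of roughly -714 Hz. The quartz constants are standard literature values.

```python
import math

RHO_Q = 2648.0   # density of quartz, kg m^-3
MU_Q = 2.947e10  # shear modulus of AT-cut quartz, Pa

def kanazawa_shift(f0_hz, rho_eta):
    """Kanazawa-Gordon: df = -f0^(3/2) * sqrt(rho*eta / (pi * rho_q * mu_q))."""
    return -f0_hz ** 1.5 * math.sqrt(rho_eta / (math.pi * RHO_Q * MU_Q))

# Water: rho*eta = 1000 kg m^-3 * 0.001 Pa s = 1.0 kg^2 m^-4 s^-1
print(round(kanazawa_shift(5e6, 1.0)))
```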

  16. Gradient Domain Mesh Deformation - A Survey

    Institute of Scientific and Technical Information of China (English)

    Wei-Wei Xu; Kun Zhou

    2009-01-01

    This survey reviews the recent development of gradient domain mesh deformation methods. Unlike other deformation methods, the gradient domain deformation method is a surface-based, variational optimization method. It directly encodes the geometric details in differential coordinates, also called Laplacian coordinates in the literature. By preserving the Laplacian coordinates, the mesh details can be well preserved during deformation. Due to the locality of the Laplacian coordinates, the variational optimization problem can be cast into a sparse linear system. A fast sparse linear solver can be adopted to generate the deformation result interactively, or even in real time. The nonlinear nature of gradient domain mesh deformation has led to two categories of deformation methods: linearization methods and nonlinear optimization methods. Basically, the linearization methods only need to solve the linear least-squares system once. They are fast and easy to understand and control, although the deformation result might be suboptimal. Nonlinear optimization methods can reach the optimal solution of the deformation energy function by iterative updating. Since the computation of nonlinear methods is expensive, reduced deformable models should be adopted to achieve interactive performance. The nonlinear optimization methods avoid burdening the user with inputting transformations at deformation handles, and they can be extended to incorporate various nonlinear constraints, such as volume constraints, skeleton constraints, and so on. We review representative methods and related approaches of each category comparatively and hope to help the reader understand the motivation behind the algorithms. Finally, we discuss the relation between physical simulation and gradient domain mesh deformation to reveal why it can achieve physically plausible deformation results.
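
    The linearization branch reduces to a single sparse least-squares solve. A minimal dense sketch on a five-vertex polyline (illustrative only; real systems use cotangent weights and sparse solvers) preserves uniform Laplacian coordinates while a handle is dragged:

```python
import numpy as np

V = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], float)
n = len(V)

# Uniform Laplacian rows: delta_i = v_i - (v_{i-1} + v_{i+1}) / 2
L = np.zeros((n - 2, n))
for i in range(1, n - 1):
    L[i - 1, i - 1], L[i - 1, i], L[i - 1, i + 1] = -0.5, 1.0, -0.5
delta = L @ V  # detail (Laplacian) coordinates to preserve

# Soft positional constraints: pin v0, drag v4 upward by 2.
w = 100.0
C = np.zeros((2, n))
C[0, 0] = C[1, n - 1] = 1.0
targets = np.array([[0, 0], [4, 2]], float)

A = np.vstack([L, w * C])
b = np.vstack([delta, w * targets])
Vnew, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(Vnew, 2))
```

    Because the original Laplacian coordinates here are zero (a straight, evenly spaced polyline), the least-squares solution is exactly the straight line from (0, 0) to (4, 2).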

  17. In situ sampling of small volumes of soil solution using modified micro-suction cups

    NARCIS (Netherlands)

    Shen, Jianbo; Hoffland, E.

    2007-01-01

    Two modified designs of micro-pore-water samplers were tested for their capacity to collect unbiased soil solution samples containing zinc and citrate. The samplers had either ceramic or polyethersulfone (PES) suction cups. Laboratory tests of the micro-samplers were conducted using (a) standard sol

  18. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 2: Mission payloads subsystem description

    Science.gov (United States)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: The mission payloads program; and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  19. Sampling of high amounts of bioaerosols using a high-volume electrostatic field sampler

    DEFF Research Database (Denmark)

    Madsen, A. M.; Sharma, Anoop Kumar

    2008-01-01

    by the electrostatic field sampler and 11.8 mg m(-3) when measured by the GSP inhalable dust sampler. The quantity (amount per mg dust) of total fungi, Aspergillus fumigatus, total bacteria, endotoxin and mesophilic actinomycetes sampled by the electrostatic field samplers and the Gravikon samplers varied within...

  20. The properties of a large volume-limited sample of face-on low surface brightness disk galaxies

    Institute of Scientific and Technical Information of China (English)

    Guo-Hu Zhong; Yan-Chun Liang; Feng-Shan Liu; Francois Hammer; Karen Disseau; Li-Cai Deng

    2012-01-01

    We select a large volume-limited sample of low surface brightness galaxies (LSBGs; 2021 galaxies) to investigate in detail their statistical properties and their differences from high surface brightness galaxies (HSBGs; 3639 galaxies). The distributions of stellar masses of LSBGs and HSBGs are nearly the same, and they have the same median values. Thus this volume-limited sample has good completeness and removes the effect of stellar mass on the other properties when we compare LSBGs to HSBGs. We found that LSBGs tend to have lower stellar metallicities and lower effective dust attenuations, indicating that they contain less dust than HSBGs. The LSBGs have relatively higher stellar mass-to-light ratios, higher gas fractions, lower star formation rates (SFRs), and lower specific SFRs than HSBGs. Moreover, with decreasing surface brightness, the gas fraction increases, but the SFRs and specific SFRs decrease rapidly for the sample galaxies. This could mean that the star formation histories of LSBGs and HSBGs are different, and that HSBGs may have stronger star-forming activity than LSBGs.

  1. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing by about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  2. Modelling axisymmetric cod-ends made of different mesh types

    DEFF Research Database (Denmark)

    Priour, D.; Herrmann, Bent; O'Neill, F.G.

    2009-01-01

    Cod-ends are the rearmost part of trawl fishing gears. They collect the catch, and for many important species it is where fish selection takes place. Generally speaking they are axisymmetric, and their shape is influenced by the catch volume, the mesh shape, and the material characteristics. The ...

  3. Application of the VOF method based on unstructured quadrilateral mesh

    Institute of Scientific and Technical Information of China (English)

    JI Chun-ning; SHI Ying

    2008-01-01

    To simulate two-dimensional free-surface flows with complex boundaries directly and accurately, a novel VOF (Volume-of-fluid) method based on unstructured quadrilateral mesh is presented. Without introducing any complicated boundary treatment or artificial diffusion, this method treated curved boundaries directly by utilizing the inherent merit of unstructured mesh in fitting curves. The PLIC (Piecewise Linear Interface Calculation) method was adopted to obtain a second-order accurate linearized reconstruction approximation and the MLER (Modified Lagrangian-Eulerian Re-map) method was introduced to advect fluid volumes on unstructured mesh. Moreover, an analytical relation for the interface's line constant vs. the volume clipped by the interface was developed so as to improve the method's efficiency. To validate this method, a comprehensive series of large straining advection tests were performed. Numerical results provide convincing evidences for the method's high volume conservative accuracy and second-order shape error convergence rate. Also, a dramatic improvement on computational accuracy over its unstructured triangular mesh counterpart is checked.
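
    The line-constant/clipped-volume relation that the paper derives analytically can be checked numerically: for a unit cell and an interface normal (nx, ny >= 0, with ny > 0 here), invert the volume fraction by bisection. This is an illustrative sketch; the paper's closed form exists precisely to avoid such iteration.

```python
import numpy as np

def area_below(nx, ny, c, samples=20000):
    """Fraction of the unit square where nx*x + ny*y <= c (midpoint rule)."""
    x = (np.arange(samples) + 0.5) / samples
    y = np.clip((c - nx * x) / ny, 0.0, 1.0)
    return float(y.mean())

def line_constant(nx, ny, frac, tol=1e-6):
    """Invert volume fraction -> line constant c by bisection; the clipped
    area is monotone in c, so bisection on [0, nx + ny] always converges."""
    lo, hi = 0.0, nx + ny
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if area_below(nx, ny, mid) < frac:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 45-degree interface, half-full cell: by symmetry c = (nx + ny) / 2
print(round(line_constant(1.0, 1.0, 0.5), 3))
```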

  4. Mesh Adaptation and Shape Optimization on Unstructured Meshes Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR CRM proposes to implement the entropy adjoint method for solution adaptive mesh refinement into the Loci/CHEM unstructured flow solver. The scheme will...

  5. Clustering properties of a type-selected volume-limited sample of galaxies in the CFHTLS

    CERN Document Server

    McCracken, H J; Mellier, Y; Bertin, E; Guzzo, L; Arnouts, S; Le Fèvre, O; Zamorani, G

    2007-01-01

    (abridged) We present an investigation of the clustering of i'AB<24.5 galaxies in the redshift interval 0.2 < z < 1.2 using volume-limited galaxy catalogues. We study the dependence of the amplitude and slope of the galaxy correlation function on absolute B-band rest-frame luminosity, redshift and best-fitting spectral type. We find: 1. The comoving correlation length for all galaxies decreases steadily from z~0.3 to z~1. 2. At all redshifts and luminosities, galaxies with redder rest-frame colours have clustering amplitudes between two and three times higher than bluer ones. 3. For bright red and blue galaxies, the clustering amplitude is invariant with redshift. 4. At z~0.5, less luminous galaxies have higher clustering amplitudes of around 6 h-1 Mpc. 5. The relative bias between galaxies with red and blue rest-frame colours increases gradually towards fainter absolute magnitud...

  6. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton’s method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
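
    One common Hessian modification (an assumed variant for illustration, not necessarily one of the authors') adds tau*I until a Cholesky factorization succeeds, so the Newton step is always computed from a positive definite model:

```python
import numpy as np

def newton_modified(grad, hess, x0, iters=50, tol=1e-10):
    """Newton's method with an add-tau*I Hessian modification."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        tau = 0.0
        while True:
            try:
                np.linalg.cholesky(H + tau * np.eye(len(x)))
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, 1e-3)  # inflate until positive definite
        x = x + np.linalg.solve(H + tau * np.eye(len(x)), -g)
    return x

# Toy objective f(v) = (v0^2 - 1)^2 + v1^2 standing in for a mesh-quality
# measure; its Hessian is indefinite near v0 = 0, so the modification fires.
g = lambda v: np.array([4 * v[0] * (v[0] ** 2 - 1), 2 * v[1]])
H = lambda v: np.array([[12 * v[0] ** 2 - 4, 0.0], [0.0, 2.0]])
print(np.round(newton_modified(g, H, [0.5, 1.0]), 6))
```

    From the start point (0.5, 1.0) the iteration converges to the minimizer (1, 0); a production solver would also add a line search or trust region.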

  7. Nanowire mesh solar fuels generator

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Peidong; Chan, Candace; Sun, Jianwei; Liu, Bin

    2016-05-24

    This disclosure provides systems, methods, and apparatus related to a nanowire mesh solar fuels generator. In one aspect, a nanowire mesh solar fuels generator includes (1) a photoanode configured to perform water oxidation and (2) a photocathode configured to perform water reduction. The photocathode is in electrical contact with the photoanode. The photoanode may include a high surface area network of photoanode nanowires. The photocathode may include a high surface area network of photocathode nanowires. In some embodiments, the nanowire mesh solar fuels generator may include an ion conductive polymer infiltrating the photoanode and the photocathode in the region where the photocathode is in electrical contact with the photoanode.

  8. Ultimate detectability of volatile organic compounds: how much further can we reduce their ambient air sample volumes for analysis?

    Science.gov (United States)

    Kim, Yong-Hyun; Kim, Ki-Hyun

    2012-10-02

    To understand the ultimately lowest detection range of volatile organic compounds (VOCs) in air, application of a high sensitivity analytical system was investigated by coupling thermal desorption (TD) technique with gas chromatography (GC) and time-of-flight (TOF) mass spectrometry (MS). The performance of the TD-GC/TOF MS system was evaluated using liquid standards of 19 target VOCs prepared in the range of 35 pg to 2.79 ng per μL. Studies were carried out using both total ion chromatogram (TIC) and extracted ion chromatogram (EIC) modes. EIC mode was used for calibration to reduce background and to improve signal-to-noise. The detectability of the 19 target VOCs, assessed in terms of method detection limit (MDL, per US EPA definition) and limit of detection (LOD), averaged 5.90 pg and 0.122 pg, respectively, with a mean coefficient of correlation (R(2)) of 0.9975. The minimum quantifiable mass of target analytes, when determined using real air samples by the TD-GC/TOF MS, is highly comparable to the detection limits determined experimentally from standards. In fact, volumes for the actual detection of the major aromatic VOCs like benzene, toluene, and xylene (BTX) in ambient air samples were as low as 1.0 mL in the 0.11-2.25 ppb range. It was thus possible to demonstrate that most target compounds including those in low abundance could be reliably quantified at concentrations down to 0.1 ppb at sample volumes of less than 10 mL. The unique sensitivity of this advanced analytical system can ultimately lead to a shift in field sampling strategy, with smaller air sample volumes facilitating faster, simpler air sampling (e.g., use of gas syringes rather than the relative complexity of pumps or bags/canisters), with greatly reduced risk of analyte breakthrough and minimal interference, e.g., from atmospheric humidity. The improved detection limits offered by this system can also enhance accuracy and measurement precision.
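
    The US EPA method detection limit cited above is computed from replicate low-level measurements as MDL = t(n-1, 0.99) × s; a sketch with hypothetical replicate masses follows (3.143 is the one-tailed 99% Student-t value for the customary seven replicates, i.e. 6 degrees of freedom):

```python
import statistics

T99 = {6: 3.143}  # one-tailed 99% Student-t, keyed by degrees of freedom

def mdl(replicates):
    """EPA-style method detection limit from replicate measurements."""
    return T99[len(replicates) - 1] * statistics.stdev(replicates)

# Hypothetical seven replicate benzene masses (pg) near the detection limit.
reps = [5.1, 4.8, 5.4, 5.0, 4.7, 5.3, 4.9]
print(round(mdl(reps), 2))
```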

  9. Numerical modelling of the laser cladding process using a dynamic mesh approach

    Directory of Open Access Journals (Sweden)

    E.H. Amara

    2006-02-01

    Full Text Available Purpose: In this paper, a three-dimensional model of laser cladding by powder injection is developed.Design/methodology/approach: In our approach, the task consists in the numerical resolution of the governing equations, including heat transfer and flow dynamics, assuming an unsteady state. The related differential equations are discretized using the finite volume method, yielding an algebraic set of equations. The clad formation is simulated by considering the finite volume mesh deformation.Findings: The shape of the deposited layer is determined as a function of the operating parameters related to the laser beam, the powder, the sample, and the surrounding atmosphere.Research limitations/implications: By including as many terms describing physical mechanisms as possible in the general form of the equations, one can model the cladding process more accurately. Afterwards, a validation against experimental results must be done.Practical implications: Understanding the physical processes at work would allow the quality of the products to be enhanced; the process can then be optimized, since predictions of the results can be made for given operating parameters.Originality/value: In our contribution, the introduction of the dynamic mesh method, involving the use of user defined functions (UDFs) in the calculation procedure, has allowed us to follow the variation of the cell volumes and thus to obtain the clad profiles as a function of the operating parameters.

  10. Wire-Mesh-Based Sorber for Removing Contaminants from Air

    Science.gov (United States)

    Perry, Jay; Roychoudhury, Subir; Walsh, Dennis

    2006-01-01

    A paper discusses an experimental regenerable sorber for removing CO2 and trace components (principally volatile organic compounds, halocarbons, and NH3) from spacecraft cabin air. This regenerable sorber is a prototype of what is intended to be a lightweight alternative to the activated-carbon and zeolite-pellet sorbent beds now in use. The regenerable sorber consists mainly of an assembly of commercially available meshes that have been coated with a specially formulated washcoat containing zeolites. The zeolites act as the sorbents, while the meshes support the zeolite-containing washcoat in a configuration that affords a highly effective surface area for exposing the sorbents to flowing air. The meshes also define flow paths characterized by short channel lengths to prevent excessive buildup of flow boundary layers. Flow boundary layer resistance is undesirable because it can impede mass and heat transfer. The total weight and volume comparison versus the atmosphere revitalization equipment used onboard the International Space Station for CO2 and trace-component removal will depend upon the design details of the final embodiment. However, the integrated mesh-based CO2 and trace-contaminant removal system is expected to provide overall weight and volume savings by eliminating most of the trace-contaminant control equipment presently used in the parallel processing schemes traditional for spacecraft. The mesh-based sorbent media enable integrating the two processes within a compact package. For regeneration, the sorber can be heated by passing electric currents through the metallic meshes, combined with exposure to space vacuum. The minimal thermal mass of the meshes offers the potential for reduced regeneration power requirements and regeneration cycle time compared to regenerable sorption processes now in use.

  11. An unstructured-mesh atmospheric model for nonhydrostatic dynamics: Towards optimal mesh resolution

    Science.gov (United States)

    Szmelter, Joanna; Zhang, Zhao; Smolarkiewicz, Piotr K.

    2015-08-01

    The paper advances the limited-area anelastic model (Smolarkiewicz et al. (2013) [45]) for investigation of nonhydrostatic dynamics in mesoscale atmospheric flows. New developments include the extension to a tetrahedral-based median-dual option for unstructured meshes and a static mesh adaptivity technique using an error indicator based on inherent properties of the Multidimensional Positive Definite Advection Transport Algorithm (MPDATA). The model employs semi-implicit nonoscillatory forward-in-time integrators for soundproof PDEs, built on MPDATA and a robust non-symmetric Krylov-subspace elliptic solver. Finite-volume spatial discretisation adopts an edge-based data structure. Simulations of stratified orographic flows and the associated gravity-wave phenomena in media with uniform and variable dispersive properties verify the advancement and demonstrate the potential of heterogeneous anisotropic discretisation with large variation in spatial resolution for study of complex stratified flows that can be computationally unattainable with regular grids.
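    MPDATA, whose inherent properties drive the error indicator mentioned above, combines a donor-cell (upwind) pass with an antidiffusive corrective pass. The following is a minimal 1-D, constant-Courant-number sketch of that two-pass idea, not the model's edge-based unstructured-mesh implementation:

    ```python
    def mpdata_step(psi, c, eps=1e-15):
        """One 1-D MPDATA step on a periodic grid: a donor-cell upwind pass
        followed by a single antidiffusive corrective pass."""
        n = len(psi)

        def flux(pl, pr, cc):
            # donor-cell (upwind) flux through an interface with
            # Courant number cc, left state pl, right state pr
            return max(cc, 0.0) * pl + min(cc, 0.0) * pr

        # first pass: plain upwind advection
        f = [flux(psi[i], psi[(i + 1) % n], c) for i in range(n)]
        tmp = [psi[i] - (f[i] - f[i - 1]) for i in range(n)]

        # antidiffusive pseudo-Courant numbers at interfaces i+1/2
        cd = [(abs(c) - c * c) * (tmp[(i + 1) % n] - tmp[i])
              / (tmp[(i + 1) % n] + tmp[i] + eps) for i in range(n)]

        # second pass: upwind advection with the antidiffusive velocities
        g = [flux(tmp[i], tmp[(i + 1) % n], cd[i]) for i in range(n)]
        return [tmp[i] - (g[i] - g[i - 1]) for i in range(n)]
    ```

    For |c| ≤ 1 and non-negative ψ the scheme remains conservative and positive-definite, which is exactly the kind of inherent property the static mesh adaptivity technique exploits.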

  12. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 3: The GREEDY algorithm

    Science.gov (United States)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The functional specifications, functional design and flow, and the program logic of the GREEDY computer program are described. The GREEDY program is a submodule of the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) program and has been designed as a continuation of the shuttle Mission Payloads (MPLS) program. The MPLS uses input payload data to form a set of feasible payload combinations; from these, GREEDY selects a subset of combinations (a traffic model) so that all payloads can be included without redundancy. The program also provides the user a tutorial option for choosing an alternate traffic model in case a particular traffic model is unacceptable.
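    The report itself specifies the GREEDY program logic in detail; as a rough illustration of the selection idea (include every payload while avoiding redundant combinations), here is a generic greedy set-cover sketch with hypothetical payload identifiers, not the actual GREEDY algorithm:

    ```python
    def greedy_traffic_model(payloads, combinations):
        """Pick payload combinations until every payload is covered,
        always taking the combination that adds the most not-yet-covered
        payloads (classic greedy set cover)."""
        uncovered = set(payloads)
        chosen = []
        while uncovered:
            best = max(combinations, key=lambda c: len(uncovered & set(c)))
            if not uncovered & set(best):
                break  # remaining payloads appear in no combination
            chosen.append(best)
            uncovered -= set(best)
        return chosen
    ```

    For example, with payloads {1..5} and feasible combinations [(1, 2), (2, 3, 4), (4, 5), (1, 5)], the sketch first takes (2, 3, 4) and then (1, 5), covering all payloads with two combinations.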

  13. Brief communication: Endocranial volumes in an ontogenetic sample of chimpanzees from the Taï Forest National Park, Ivory Coast.

    Science.gov (United States)

    Neubauer, Simon; Gunz, Philipp; Schwarz, Uta; Hublin, Jean-Jacques; Boesch, Christophe

    2012-02-01

    Ontogenetic samples of endocranial volumes (EVs) from great apes and humans are critical for understanding the evolution of the brain growth pattern in the hominin lineage. However, high quality ontogenetic data are scarce, especially for nonhuman primates. Here, we provide original data derived from an osteological collection of a wild population of Pan troglodytes verus from the Taï Forest National Park, Ivory Coast. This sample is unique, because age, sex, and pedigree information are available for many specimens from behavioral observations in the wild. We scanned crania of all 30 immature specimens and 13 adult individuals using high-resolution computed tomography. We then created virtual casts of the bony braincase (endocasts) to measure EVs. We also measured cranial length, width, and height and attempted to relate cranial distances to EV via regression analysis. Our data are consistent with previous studies. The only neonate in the sample has an EV of 127 cm³ or 34% of the adult mean. EV increases rapidly during early ontogeny. The average adult EV in this sample is 378.7 ± 30.1 cm³. We found sexual dimorphism in adults; males seem to be already larger than females before adult EV is attained. Regressions on cranial width and multiple regression provide better estimates for EV than regressions on cranial length or height. Increasing the sample size and compiling more high quality ontogenetic data of EV will help to reconcile ongoing discussions about the evolution of hominin brain growth. Copyright © 2011 Wiley Periodicals, Inc.
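    Estimating EV from an external cranial measurement, as in the regression analysis above, amounts to a simple least-squares fit. A minimal sketch; the widths and volumes below are illustrative placeholders, not the Taï Forest measurements (requires Python ≥ 3.10 for `statistics.linear_regression`):

    ```python
    import statistics

    # Hypothetical cranial widths (mm) and endocranial volumes (cm³);
    # illustrative values only, not data from the study.
    widths = [110.0, 115.0, 120.0, 125.0, 130.0]
    volumes = [300.0, 330.0, 360.0, 390.0, 420.0]

    # Ordinary least-squares fit of EV on cranial width.
    slope, intercept = statistics.linear_regression(widths, volumes)

    # Estimate EV for a specimen whose braincase is too damaged to cast.
    estimate = slope * 118.0 + intercept
    print(round(estimate, 1))  # → 348.0
    ```

    In practice one would report the fit's confidence intervals as well, since the study's point is precisely which cranial distance yields the tightest EV estimates.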

  14. Revisiting the Lick Observatory Supernova Search Volume-Limited Sample: Updated Classifications and Revised Stripped-envelope Supernova Fractions

    CERN Document Server

    Shivvers, Isaac; Zheng, Weikang; Filippenko, Alexei V; Silverman, Jeffrey M; Liu, Yuqian; Matheson, Thomas; Pastorello, Andrea; Graur, Or; Foley, Ryan J; Chornock, Ryan; Smith, Nathan; Leaman, Jesse; Benetti, Stefano

    2016-01-01

    We re-examine the classifications of supernovae (SNe) presented in the Lick Observatory Supernova Search (LOSS) volume-limited sample with a focus on the stripped-envelope SNe. The LOSS volumetric sample, presented by Leaman et al. (2011) and Li et al. (2011b), was calibrated to provide meaningful measurements of SN rates in the local universe; the results presented therein continue to be used for comparisons to theoretical and modeling efforts. Many of the objects from the LOSS sample were originally classified based upon only a small subset of the data now available, and recent studies have both updated some subtype distinctions and improved our ability to perform robust classifications, especially for stripped-envelope SNe. We re-examine the spectroscopic classifications of all events in the LOSS volumetric sample (180 SNe and SN impostors) and update them if necessary. We discuss the populations of rare objects in our sample including broad-lined Type Ic SNe, Ca-rich SNe, SN 1987A-like events (we identify...

  15. High resolution triple resonance micro magic angle spinning NMR spectroscopy of nanoliter sample volumes.

    Science.gov (United States)

    Brauckmann, J Ole; Janssen, J W G Hans; Kentgens, Arno P M

    2016-02-14

    To be able to study mass-limited samples and small single crystals, a triple resonance micro-magic angle spinning (μMAS) probehead for the application of high-resolution solid-state NMR of nanoliter samples was developed. Due to its excellent rf performance this allows us to explore the limits of proton NMR resolution in strongly coupled solids. Using homonuclear decoupling we obtain unprecedented ¹H linewidths for a single crystal of glycine (Δν(CH2) = 0.14 ppm) at high field (20 T) in a directly detected spectrum. The triple channel design allowed the recording of high-resolution μMAS ¹³C-¹⁵N correlations of [U-¹³C,¹⁵N] arginine HCl and shows that the superior ¹H resolution opens the way for high-sensitivity inverse detection of heteronuclei even at moderate spinning speeds and rf-fields. Efficient decoupling leads to long coherence times which can be exploited in many correlation experiments.

  16. Mersiline mesh in premaxillary augmentation.

    Science.gov (United States)

    Foda, Hossam M T

    2005-01-01

    Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in which Mersiline mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersiline mesh were used: the Mersiline roll technique, for the cases with central symmetric deficiencies, and the Mersiline packing technique, for the cases with asymmetric deficiencies. Premaxillary augmentation with Mersiline mesh proved to be simple technically, easy to perform, and not associated with any complications. Periodic follow-up evaluation for a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained with no clinically detectable resorption of the mesh implant.

  17. E pur si muove: Galilean-invariant cosmological hydrodynamical simulations on a moving mesh

    CERN Document Server

    Springel, Volker

    2009-01-01

    Hydrodynamic cosmological simulations at present usually employ either the Lagrangian SPH technique, or Eulerian hydrodynamics on a Cartesian mesh with adaptive mesh refinement. Both of these methods have disadvantages that negatively impact their accuracy in certain situations. We here propose a novel scheme which largely eliminates these weaknesses. It is based on a moving unstructured mesh defined by the Voronoi tessellation of a set of discrete points. The mesh is used to solve the hyperbolic conservation laws of ideal hydrodynamics with a finite volume approach, based on a second-order unsplit Godunov scheme with an exact Riemann solver. The mesh-generating points can in principle be moved arbitrarily. If they are chosen to be stationary, the scheme is equivalent to an ordinary Eulerian method with second order accuracy. If they instead move with the velocity of the local flow, one obtains a Lagrangian formulation of hydrodynamics that does not suffer from the mesh distortion limitations inherent in othe...
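    The Lagrangian limit described above, mesh-generating points moving with the local flow, can be caricatured in one dimension, where moving the cell edges with the flow conserves each cell's mass exactly and the density follows from the new cell widths. This sketch is purely illustrative and bears no relation to the actual moving Voronoi scheme's Riemann-solver machinery:

    ```python
    def lagrangian_step(edges, masses, velocity, dt):
        """One step of a 1-D Lagrangian finite-volume caricature: cell edges
        (the analogue of mesh-generating points) move with the local flow
        velocity, so each cell's mass is conserved exactly and its density
        is recovered from the new cell width."""
        new_edges = [x + velocity(x) * dt for x in edges]
        densities = [m / (r - l)
                     for m, l, r in zip(masses, new_edges, new_edges[1:])]
        return new_edges, densities
    ```

    With a uniform velocity field the whole mesh translates and the densities are unchanged, illustrating the Galilean invariance the title advertises; a converging velocity field would instead shrink cells and raise their densities.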

  18. ARPA-E Impacts: A Sampling of Project Outcomes, Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Rohlfing, Eric [Dept. of Energy (DOE), Washington DC (United States). Advanced Research Projects Agency-Energy (ARPA-E)

    2017-02-27

    The Advanced Research Projects Agency-Energy (ARPA-E) is demonstrating that a collaborative model has the power to deliver real value. The Agency’s first compilation booklet of impact sheets, published in 2016, began to tell the story of how ARPA-E has already made an impact in just seven years—funding a diverse and sophisticated research portfolio on advanced energy technologies that enable the United States to tackle our most pressing energy challenges. One year later our research investments continue to pay off, with a number of current and alumni project teams successfully commercializing their technologies and advancing the state of the art in transformative areas of energy science and engineering. There is no single measure that can fully illustrate ARPA-E’s success to date, but several statistics viewed collectively begin to reveal the Agency’s impact. Since 2009, ARPA-E has provided more than $1.5 billion in funding for 36 focused programs and three open funding solicitations, totaling over 580 projects. Of those, 263 are now alumni projects. Many teams have successfully leveraged ARPA-E’s investment: 56 have formed new companies, 68 have partnered with other government agencies to continue their technology development, and 74 teams have together raised more than $1.8 billion in reported funding from the private sector to bring their technologies to market. However, even when viewed together, those measures do not capture ARPA-E’s full impact. To best understand the Agency’s success, the specific scientific and engineering challenges that ARPA-E project teams have overcome must be understood. This booklet provides concrete examples of those successes, ranging from innovations that will bear fruit in the future to ones that are beginning to penetrate the market as products today. Importantly, half of the projects highlighted in this volume stem from OPEN solicitations, which the agency has run in 2009, 2012, and 2015. ARPA-E’s OPEN programs

  19. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  20. Natural air ventilation in underground galleries as a tool to increase radon sampling volumes for geologic monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Eff-Darwich, Antonio [Departamento de Edafologia y Geologia, Universidad de La Laguna, Av. Astrofisico Francisco, Sanchez s/n, 38206 La Laguna, Tenerife (Spain); Instituto de Astrofisica de Canarias, c/Via Lactea s/n, 38205 La Laguna, Tenerife (Spain)], E-mail: adarwich@ull.es; Vinas, Ronaldo [Departamento de Edafologia y Geologia, Universidad de La Laguna, Av. Astrofisico Francisco, Sanchez s/n, 38206 La Laguna, Tenerife (Spain); Soler, Vicente [Estacion Volcanologica de Canarias, IPNA-CSIC, Av. Astrofisico Francisco Sanchez s/n, 38206 La Laguna, Tenerife (Spain); Nuez, Julio de la; Quesada, Maria L. [Departamento de Edafologia y Geologia, Universidad de La Laguna, Av. Astrofisico Francisco, Sanchez s/n, 38206 La Laguna, Tenerife (Spain)

    2008-09-15

    A simple numerical model was implemented to infer airflow (natural ventilation) in underground tunnels from the differences in the temporal patterns of radon (²²²Rn) concentration time-series that were measured at two distant points in the interior of the tunnels. The main purpose of this work was to demonstrate that the installation of radon monitoring stations closer to the entrance of the tunnels was sufficient to remotely analyse the distribution of radon concentration in their interiors. This could ease the monitoring of radon, since the effective sampling volume of a single monitoring station located closer to the entrance of a tunnel is approximately 30,000 times larger than the sampling volume of a sub-soil radon sensor. This methodology was applied to an underground gallery located in the volcanic island of Tenerife, Canary Islands. This island constitutes an ideal laboratory to study the geo-dynamical behaviour of radon because of the existence of a vast network of galleries that conforms the main water supply of the island.

  1. Highly sensitive SERS detection of cancer proteins in low sample volume using hollow core photonic crystal fiber.

    Science.gov (United States)

    U S, Dinish; Fu, Chit Yaw; Soh, Kiat Seng; Ramaswamy, Bhuvaneswari; Kumar, Anil; Olivo, Malini

    2012-03-15

    Enzyme-linked immunosorbent assays (ELISA) are commonly used for detecting cancer proteins at concentrations in the range of about ng-μg/mL. Hence they often fail to detect tumor markers at the early stages of cancer and other diseases, when the amount of protein is extremely low. Herein, we report a novel photonic crystal fiber (PCF) based surface-enhanced Raman scattering (SERS) sensing platform for the ultrasensitive detection of cancer proteins in an extremely low sample volume. As a proof of concept, epidermal growth factor receptors (EGFRs) in a lysate solution from human epithelial carcinoma cells were immobilized in the hollow-core PCF. Highly sensitive detection of the protein was achieved using an anti-EGFR antibody-conjugated SERS nanotag. This SERS nanotag probe was realized by anchoring highly active Raman molecules onto gold nanoparticles followed by bioconjugation. The proposed sensing method can detect protein amounts as low as ∼100 pg in a sample volume of ∼10 nL. Our approach may lead to a highly sensitive protein sensing methodology for the early detection of diseases.

  2. A method to generate conformal finite-element meshes from 3D measurements of microstructurally small fatigue-crack propagation: 3D Meshes of Microstructurally Small Crack Growth

    Energy Technology Data Exchange (ETDEWEB)

    Spear, A. D. [Department of Mechanical Engineering, University of Utah, Salt Lake City UT USA; Hochhalter, J. D. [NASA Langley Research Center, Hampton VA USA; Cerrone, A. R. [GE Global Research Center, Niskayuna NY USA; Li, S. F. [Lawrence Livermore National Laboratory, Livermore CA USA; Lind, J. F. [Lawrence Livermore National Laboratory, Livermore CA USA; Suter, R. M. [Department of Physics, Carnegie Mellon University, Pittsburgh PA USA; Ingraffea, A. R. [School of Civil & Environmental Engineering, Cornell University, Ithaca NY USA

    2016-04-27

    In an effort to reproduce computationally the observed evolution of microstructurally small fatigue cracks (MSFCs), a method is presented for generating conformal, finite-element (FE), volume meshes from 3D measurements of MSFC propagation. The resulting volume meshes contain traction-free surfaces that conform to incrementally measured 3D crack shapes. Grain morphologies measured using near-field high-energy X-ray diffraction microscopy are also represented within the FE volume meshes. Proof-of-concept simulations are performed to demonstrate the utility of the mesh-generation method. The proof-of-concept simulations employ a crystal-plasticity constitutive model and are performed using the conformal FE meshes corresponding to successive crack-growth increments. Although the simulations for each crack increment are currently independent of one another, they need not be, and transfer of material-state information among successive crack-increment meshes is discussed. The mesh-generation method was developed using post-mortem measurements, yet it is general enough that it can be applied to in-situ measurements of 3D MSFC propagation.

  3. GENERATION OF IRREGULAR HEXAGONAL MESHES

    Directory of Open Access Journals (Sweden)

    Vlasov Aleksandr Nikolaevich

    2012-07-01

    Decomposition is performed in a constructive way and, as an option, it involves a meshless representation. Further, this mapping method is used to generate the calculation mesh. In this paper, the authors analyze different cases of mapping onto simply connected and bi-connected canonical domains. They present forward and backward mapping techniques. Their potential application to the generation of nonuniform meshes within the framework of the asymptotic homogenization theory is also demonstrated, to assess and predict effective characteristics of heterogeneous materials (composites).

  4. Image-driven mesh optimization

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Turk, G

    2001-01-05

    We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method--the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.

  5. Robust Generation of Signed Distance Fields from Triangle Meshes

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    is then used to convert the binary volume into a distance field. The method is robust and handles holes, spurious triangles and ambiguities. Moreover, the method lends itself to Boolean operations between solids. Since a point cloud as well as a signed distance is generated, it is possible to extract an iso-surface of the distance field and fit it to the point set. Using this method, one may recover sharp edge information. Examples are given where the method for generating distance fields coupled with mesh fitting is used to perform Boolean and morphological operations on triangle meshes.
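    The sign convention of the resulting field (negative inside the solid, positive outside) can be illustrated with a naive per-cell computation on a tiny binary grid. The paper's actual pipeline (scan conversion of the triangle mesh, then a proper distance transform) is far more efficient than this brute-force sketch:

    ```python
    import math

    def signed_distance_field(solid):
        """Brute-force signed distance on a small binary grid: for each cell,
        the distance to the nearest cell of opposite occupancy, negated
        inside the solid (cell-center metric)."""
        h, w = len(solid), len(solid[0])
        cells = [(i, j) for i in range(h) for j in range(w)]
        out = [[0.0] * w for _ in range(h)]
        for i, j in cells:
            opposite = [(a, b) for a, b in cells if solid[a][b] != solid[i][j]]
            d = min(math.hypot(a - i, b - j) for a, b in opposite)
            out[i][j] = -d if solid[i][j] else d
        return out
    ```

    Because the zero level set separates negative from positive cells, Boolean operations on solids reduce to min/max combinations of their distance fields, which is the property the abstract alludes to.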

  6. Form-finding with polyhedral meshes made simple

    KAUST Repository

    Tang, Chengcheng

    2015-08-09

    We solve the form-finding problem for polyhedral meshes in a way which combines form, function and fabrication; taking care of user-specified constraints like boundary interpolation, planarity of faces, statics, panel size and shape, enclosed volume, and cost. Our main application is the interactive modeling of meshes for architectural and industrial design. Our approach can be described as guided exploration of the constraint space whose algebraic structure is simplified by introducing auxiliary variables and ensuring that constraints are at most quadratic.
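    One of the constraints mentioned above, planarity of faces, is commonly measured in architectural geometry as the distance between a quad's two diagonals (zero exactly when the face is planar). The sketch below shows that measure; it is a standard diagnostic, not necessarily the paper's exact constraint formulation:

    ```python
    import math

    def quad_planarity(p0, p1, p2, p3):
        """Distance between the two diagonals (p0-p2 and p1-p3) of a quad
        face in 3-D; zero iff the four vertices are coplanar."""
        sub = lambda a, b: (a[0] - b[0], a[1] - b[1], a[2] - b[2])
        dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
        cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                              a[2] * b[0] - a[0] * b[2],
                              a[0] * b[1] - a[1] * b[0])
        u, v = sub(p2, p0), sub(p3, p1)   # the two diagonals
        n = cross(u, v)
        norm = math.sqrt(dot(n, n))
        if norm == 0.0:
            return 0.0                    # degenerate quad: treat as planar here
        return abs(dot(sub(p1, p0), n)) / norm
    ```

    During guided exploration, such per-face measures would be driven toward zero alongside the other (at most quadratic) constraints on the auxiliary variables.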

  7. Method and system for mesh network embedded devices

    Science.gov (United States)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  8. Extended two-photon microscopy in live samples with Bessel beams: steadier focus, faster volume scans, and simpler stereoscopic imaging

    Science.gov (United States)

    Thériault, Gabrielle; Cottet, Martin; Castonguay, Annie; McCarthy, Nathalie; De Koninck, Yves

    2014-01-01

    Two-photon microscopy has revolutionized functional cellular imaging in tissue, but although the highly confined depth of field (DOF) of standard set-ups yields great optical sectioning, it also limits imaging speed in volume samples and ease of use. For this reason, we recently presented a simple and retrofittable modification to the two-photon laser-scanning microscope which extends the DOF through the use of an axicon (conical lens). Here we demonstrate three significant benefits of this technique using biological samples commonly employed in the field of neuroscience. First, we use a sample of neurons grown in culture and move it along the z-axis, showing that a more stable focus is achieved without compromise on transverse resolution. Second, we monitor 3D population dynamics in an acute slice of live mouse cortex, demonstrating that faster volumetric scans can be conducted. Third, we acquire a stereoscopic image of neurons and their dendrites in a fixed sample of mouse cortex, using only two scans instead of the complete stack and calculations required by standard systems. Taken together, these advantages, combined with the ease of integration into pre-existing systems, make the extended depth-of-field imaging based on Bessel beams a strong asset for the field of microscopy and life sciences in general. PMID:24904284

  9. Analysis of Three Compounds in Flos Farfarae by Capillary Electrophoresis with Large-Volume Sample Stacking

    Directory of Open Access Journals (Sweden)

    Hai-xia Yu

    2017-01-01

    Full Text Available The aim of this study was to develop a method combining online concentration with high-efficiency capillary electrophoresis separation to analyze and detect three compounds (rutin, hyperoside, and chlorogenic acid) in Flos Farfarae. To obtain good resolution and enrichment, several parameters, such as the choice of running buffer, the pH and concentration of the running buffer, the organic modifier, the temperature, and the separation voltage, were investigated. The optimized conditions were as follows: a buffer of 40 mM NaH2PO4-40 mM borax-30% v/v methanol (pH 9.0); hydrodynamic sample injection for up to 4 s at 0.5 psi; and an applied voltage of 20 kV. A diode-array detector was used, with the detection wavelength set at 364 nm. Based on peak area, notable improvements in selectivity and sensitivity were observed, and about 14-, 26-, and 5-fold enrichments of rutin, hyperoside, and chlorogenic acid were achieved, respectively. This method was successfully applied to determine the three compounds in Flos Farfarae. The linear ranges of peak response versus concentration were 20-400 µg/mL, 16.5-330 µg/mL, and 25-500 µg/mL, respectively, with regression coefficients of 0.9998, 0.9999, and 0.9991.

  10. Method for generating a mesh representation of a region characterized by a trunk and a branch thereon

    Science.gov (United States)

    Shepherd, Jason; Mitchell, Scott A.; Jankovich, Steven R.; Benzley, Steven E.

    2007-05-15

    The present invention provides a meshing method, called grafting, that lifts the prior-art constraint on abutting surfaces, including surfaces that are linking, source/target, or other types of surfaces of the trunk volume. The grafting method locally modifies the structured mesh of the linking surfaces, allowing the mesh to conform to additional surface features. Thus, the grafting method can provide a transition between multiple sweep directions, extending sweeping algorithms to 2¾-D solids. The method is also suitable for use with non-sweepable volumes; it provides a transition between meshes generated by methods other than sweeping as well.

  11. User Manual for the PROTEUS Mesh Tools

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-06-01

    This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radially structured grid. Note that the mesh generation process is recursive in nature and that each input specification for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  12. Mesh refinement strategy for optimal control problems

    OpenAIRE

    Paiva, Luis Tiago; Fontes, Fernando,

    2013-01-01

    International audience; Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement. In this case, the mesh nodes have non-equidistant spacing, which allows a non-uniform node...

  13. Association between waist circumference and gray matter volume in 2344 individuals from two adult community-based samples.

    Science.gov (United States)

    Janowitz, Deborah; Wittfeld, Katharina; Terock, Jan; Freyberger, Harald Jürgen; Hegenscheid, Katrin; Völzke, Henry; Habes, Mohamad; Hosten, Norbert; Friedrich, Nele; Nauck, Matthias; Domanska, Grazyna; Grabe, Hans Jörgen

    2015-11-15

    We analyzed the putative association between abdominal obesity (measured in waist circumference) and gray matter volume (Study of Health in Pomerania: SHIP-2, N=758) adjusted for age and gender by applying volumetric analysis and voxel-based morphometry (VBM) with VBM8 to brain magnetic resonance (MR) imaging. We sought replication in a second, independent population sample (SHIP-TREND, N=1586). In a combined analysis (SHIP-2 and SHIP-TREND) we investigated the impact of hypertension, type II diabetes and blood lipids on the association between waist circumference and gray matter. Volumetric analysis revealed a significant inverse association between waist circumference and gray matter volume. VBM in SHIP-2 indicated distinct inverse associations in the following structures for both hemispheres: frontal lobe, temporal lobes, pre- and postcentral gyrus, supplementary motor area, supramarginal gyrus, insula, cingulate gyrus, caudate nucleus, olfactory sulcus, para-/hippocampus, gyrus rectus, amygdala, globus pallidus, putamen, cerebellum, fusiform and lingual gyrus, (pre-) cuneus and thalamus. These areas were replicated in SHIP-TREND. More than 76% of the voxels with significant gray matter volume reduction in SHIP-2 were also distinct in TREND. These brain areas are involved in cognition, attention to interoceptive signals as satiety or reward and control food intake. Due to our cross-sectional design we cannot clarify the causal direction of the association. However, previous studies described an association between subjects with higher waist circumference and future cognitive decline suggesting a progressive brain alteration in obese subjects. Pathomechanisms may involve chronic inflammation, increased oxidative stress or cellular autophagy associated with obesity.

  14. GOTPM: A Parallel Hybrid Particle-Mesh Treecode

    CERN Document Server

Dubinski, John; Kim, Juhan; Park, Changbom; Humble, Robin

    2004-01-01

We describe a parallel, cosmological N-body code based on a hybrid scheme using the particle-mesh (PM) and Barnes-Hut (BH) oct-tree algorithm. We call the algorithm GOTPM for Grid-of-Oct-Trees-Particle-Mesh. The code is parallelized using the Message Passing Interface (MPI) library and is optimized to run on Beowulf clusters as well as symmetric multi-processors. The gravitational potential is determined on a mesh using a standard PM method with particle forces determined through interpolation. The softened PM force is corrected for short range interactions using a grid of localized BH trees throughout the entire simulation volume in a completely analogous way to P³M methods. This method makes no assumptions about the local density for short range force corrections and so is consistent with the results of the P³M method in the limit that the treecode opening angle parameter, θ → 0. (abridged)
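
The force split that P³M-style schemes rely on can be illustrated with an Ewald-type decomposition of the inverse-square pair force: a smooth long-range part resolvable on the mesh, plus a rapidly decaying short-range correction handled by the local trees. The sketch below is not the GOTPM code, just a minimal self-contained illustration of why the tree correction only needs to act locally and why, as the opening angle θ → 0, the corrected force recovers the exact Newtonian value.

```python
import math

def f_exact(r):
    """Unsoftened inverse-square pair force (units with G*m1*m2 = 1)."""
    return 1.0 / r**2

def f_short(r, a):
    """Short-range part: handled by the local tree/direct sum; decays
    super-exponentially beyond a few splitting lengths a."""
    return math.erfc(r / (2.0 * a)) / r**2 \
        + math.exp(-r**2 / (4.0 * a**2)) / (a * math.sqrt(math.pi) * r)

def f_long(r, a):
    """Smooth long-range part: well resolved by the particle-mesh solver."""
    return math.erf(r / (2.0 * a)) / r**2 \
        - math.exp(-r**2 / (4.0 * a**2)) / (a * math.sqrt(math.pi) * r)
```

The two pieces derive from the potential decomposition 1/r = erfc(r/2a)/r + erf(r/2a)/r, so they sum to the exact force identically; at r = 10a the short-range term is already ~10 orders of magnitude below the total, which is what keeps the tree correction strictly local.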

  15. An edge-based unstructured mesh discretisation in geospherical framework

    Science.gov (United States)

    Szmelter, Joanna; Smolarkiewicz, Piotr K.

    2010-07-01

    An arbitrary finite-volume approach is developed for discretising partial differential equations governing fluid flows on the sphere. Unconventionally for unstructured-mesh global models, the governing equations are cast in the anholonomic geospherical framework established in computational meteorology. The resulting discretisation retains proven properties of the geospherical formulation, while it offers the flexibility of unstructured meshes in enabling irregular spatial resolution. The latter allows for a global enhancement of the spatial resolution away from the polar regions as well as for a local mesh refinement. A class of non-oscillatory forward-in-time edge-based solvers is developed and applied to numerical examples of three-dimensional hydrostatic flows, including shallow-water benchmarks, on a rotating sphere.
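
The conservative bookkeeping of an edge-based finite-volume discretisation can be sketched in a few lines: the flux through each face is computed once per edge and added to one adjacent cell and subtracted from the other, so global conservation holds by construction regardless of mesh connectivity. The snippet below is a schematic 1-D periodic upwind-advection example under assumed unit face areas, not the authors' geospherical solver.

```python
def edge_based_step(u, edges, vel, dt, vol):
    """One forward-Euler step of upwind advection using an edge-based loop.
    u: cell averages; edges: (i, j) pairs of cells sharing a face, oriented
    so the face normal points from i to j; vel: advection speed; vol: cell
    volume (uniform here for simplicity); unit face areas assumed."""
    flux_div = [0.0] * len(u)
    for i, j in edges:
        # upwind flux through the shared face, computed once per edge
        f = vel * (u[i] if vel > 0 else u[j])
        flux_div[i] += f   # leaves cell i
        flux_div[j] -= f   # enters cell j
    return [u[k] - dt / vol * flux_div[k] for k in range(len(u))]

# periodic 1-D "unstructured" mesh expressed purely as an edge list
n = 10
edges = [(k, (k + 1) % n) for k in range(n)]
u0 = [1.0 if k < 3 else 0.0 for k in range(n)]
u1 = edge_based_step(u0, edges, vel=1.0, dt=0.5, vol=1.0)
```

Because every flux appears once with each sign, the cell-average sum is conserved to machine precision, and with a CFL-stable time step the upwind update is also non-oscillatory.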

  16. Soundproof simulations of stratospheric gravity waves on unstructured meshes

    Science.gov (United States)

    Smolarkiewicz, P.; Szmelter, J.

    2012-04-01

    An edge-based unstructured-mesh semi-implicit model is presented that integrates nonhydrostatic soundproof equations, inclusive of anelastic and pseudo-incompressible systems of partial differential equations. The model numerics employ nonoscillatory forward-in-time MPDATA methods [Smolarkiewicz, 2006, Int. J. Numer. Meth. Fl., 50, 1123-1144] using finite-volume spatial discretization and unstructured meshes with arbitrarily shaped cells. Implicit treatment of gravity waves benefits both accuracy and stability of the model. The unstructured-mesh solutions are compared to equivalent structured-grid results for intricate, multiscale internal-wave phenomenon of a non-Boussinesq amplification and breaking of deep stratospheric gravity waves. The departures of the anelastic and pseudo-incompressible results are quantified in reference to a recent asymptotic theory [Achatz et al., 2010, J. Fluid Mech., 663, 120-147].

  17. Biomechanics and biocompatibility of woven spider silk meshes during remodeling in a rodent fascia replacement model.

    Science.gov (United States)

    Schäfer-Nolte, Franziska; Hennecke, Kathleen; Reimers, Kerstin; Schnabel, Reinhild; Allmeling, Christina; Vogt, Peter M; Kuhbier, Joern W; Mirastschijski, Ursula

    2014-04-01

    The aim of this study was to investigate biomechanical and immunogenic properties of spider silk meshes implanted as fascia replacement in a rat in vivo model. Meshes for hernia repair require optimal characteristics with regard to strength, elasticity, and cytocompatibility. Spider silk as a biomaterial with outstanding mechanical properties is potentially suitable for this application. Commercially available meshes used for hernia repair (Surgisis and Ultrapro) were compared with handwoven meshes manufactured from native dragline silk of Nephila spp. All meshes were tied onto the paravertebral fascia, whereas sham-operated rats were sutured without mesh implantation. After 4 or 14 days, 4 weeks, and 4 or 8 months, tissue samples were analyzed concerning inflammation and biointegration both by histological and biochemical methods and by biomechanical stability tests. Histological sections revealed rapid cell migration into the spider silk meshes with increased numbers of giant cells compared with controls with initial decomposition of silk fibers after 4 weeks. Four months postoperatively, spider silk was completely degraded with the formation of a stable scar verified by constant tensile strength values. Surgisis elicited excessive stability loss from day 4 to day 14 (P spider silk samples had the highest relative elongation (P spider silk meshes with good biocompatibility and beneficial mechanical properties seem superior to standard biological and synthetic meshes, implying an innovative alternative to currently used meshes for hernia repair.

  18. Connectivity editing for quadrilateral meshes

    KAUST Repository

    Peng, Chihan

    2011-12-12

We propose new connectivity editing operations for quadrilateral meshes with the unique ability to explicitly control the location, orientation, type, and number of the irregular vertices (valence not equal to four) in the mesh while preserving sharp edges. We provide theoretical analysis on what editing operations are possible and impossible and introduce three fundamental operations to move and re-orient a pair of irregular vertices. We argue that our editing operations are fundamental, because they only change the quad mesh in the smallest possible region and involve the fewest irregular vertices (i.e., two). The irregular vertex movement operations are supplemented by operations for the splitting, merging, canceling, and aligning of irregular vertices. We explain how the proposed high-level operations are realized through graph-level editing operations such as quad collapses, edge flips, and edge splits. The utility of these mesh editing operations is demonstrated by improving the connectivity of quad meshes generated from state-of-the-art quadrangulation techniques. © 2011 ACM.

  20. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    Science.gov (United States)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control-volume finite-element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
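
The idea of conservative Galerkin projection for mapping control-volume fields between meshes can be made concrete in the simplest setting: for piecewise-constant fields on 1-D meshes, the L2 projection reduces to overlap-weighted averaging of the source cells, which conserves the field's integral exactly. This is a hypothetical minimal sketch of that special case, not the general framework of the paper.

```python
def project_cellwise(src_edges, src_vals, dst_edges):
    """Conservative (Galerkin/L2) projection of a piecewise-constant field
    between 1-D meshes covering the same interval, via overlap integrals.
    src_edges/dst_edges: sorted cell-boundary coordinates;
    src_vals: one cell-average value per source cell."""
    dst_vals = []
    for a, b in zip(dst_edges[:-1], dst_edges[1:]):
        total = 0.0
        for (c, d), v in zip(zip(src_edges[:-1], src_edges[1:]), src_vals):
            overlap = max(0.0, min(b, d) - max(a, c))  # shared length
            total += v * overlap
        dst_vals.append(total / (b - a))  # new cell average
    return dst_vals

# project three unit cells onto a non-matching two-cell mesh
dst = project_cellwise([0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [0.0, 0.5, 3.0])
```

Unlike pointwise ("consistent") interpolation, this transfer preserves the total integral by construction, which is the property the abstract highlights for discontinuous CV fields.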

  1. Mesh Router Nodes placement in Rural Wireless Mesh Networks

    OpenAIRE

    Ebongue, Jean Louis Fendji Kedieng; Thron, Christopher; Nlong, Jean Michel

    2015-01-01

The problem of placement of mesh router nodes in Wireless Mesh Networks is known to be an NP-hard problem. In this paper, the problem is addressed under a network model tied to rural regions, where we usually observe low-density, sparse populations. We consider the area to cover as decomposed into a set of elementary areas which can be required or optional in terms of coverage and where a node can be placed or not. We propose an effective algorithm to ensure the coverage. This a...

  2. Comparing the Effects of Mesh Size on Benthic Macroinvertebrate Performance Characteristics in Montana streams

    Science.gov (United States)

    Laidlaw, T. L.; Jessup, B.; Stagliano, D.; Stribling, J.; Feldman, D. L.; Bollman, W.

    2005-05-01

Montana's Department of Environmental Quality (DEQ) has collected macroinvertebrate data for twenty years. During this time, sampling methods and mesh sizes have been modified, though the effects of the modifications on the samples collected have not been studied. DEQ has used and continues to use both 500 and 1200 µm mesh sizes. The purpose of this study is to evaluate the effects of the different mesh sizes on taxonomic diversity and metric values. Field crews followed DEQ's traveling kick sampling methods and collected samples at each site using both mesh sizes. Sixteen sampling locations were distributed throughout two ecoregions (the Mountains and the Mountain and Valley Foothills) with replicate samples collected at seven locations. We developed a suite of both quantitative and qualitative performance characteristics (precision, accuracy, bias) and directly compared them for each mesh size. Preliminary ordination results showed no significant differences between the community level performance measures. Preliminary metric analysis showed that the 1200 µm mesh captured a greater abundance and diversity of caddisflies (Trichoptera) than the 500 µm mesh. This study will determine if data collected using different mesh sizes can be aggregated for development of bioassessment tools and will help DEQ implement consistent statewide sampling protocols.

  3. Design methodology of the strength properties of medical knitted meshes

    Science.gov (United States)

    Mikołajczyk, Z.; Walkowska, A.

    2016-07-01

One of the most important utility properties of medical knitted meshes intended for hernia and urological treatment is their bidirectional strength along the courses and wales. The value of this parameter, expected by the manufacturers and surgeons, is estimated at 100 N per 5 cm of the sample width. Most frequently, these meshes are produced on the basis of single- or double-guide stitches. They are made of polypropylene and polyester monofilament yarns with diameters in the range from 0.6 to 1.2 mm, characterized by a high medical purity. The aim of the study was to develop a design methodology for mesh strength based on the geometrical construction of the stitch and the strength of the yarn. In the environment of the ProCAD warpknit 5 software the simulated stretching process of meshes together with an analysis of their geometry changes was carried out. Simulations were made for four selected representative stitches. Both on a purpose-built, unique measuring stand and on the tensile testing machine the real parameters of the loop geometry of meshes were measured. A model of the mechanical stretching of warp-knitted meshes along the courses and wales was developed. The thesis was put forward that the force that breaks the loop of a warp-knitted fabric is the lowest of the breaking forces of the loop link yarns or the yarns that create the straight sections of the loop. This thesis was associated with the strength theory based on the "weakest link" concept. Experimental verification of the model was carried out for the basic structure of the single-guide mesh. It has been shown that the real, relative strength of the mesh related to one course is equal to the strength of the yarn breakage in a loop, while the strength along the wales is close to the breaking strength of a single yarn. In relation to the specific construction of the medical mesh, based on the knowledge of the density of the loops structure, the a-jour mesh geometry and the yarns strength, it is possible, with high

  4. Efficient Packet Forwarding in Mesh Network

    CERN Document Server

    Kanrar, Soumen

    2012-01-01

Wireless Mesh Network (WMN) is a multi-hop, low-cost, easily maintained and robust network providing reliable service coverage. WMNs consist of mesh routers and mesh clients. In this architecture, while static mesh routers form the wireless backbone, mesh clients access the network through mesh routers as well as by directly meshing with each other. Different from traditional wireless networks, WMN is dynamically self-organized and self-configured. In other words, the nodes in the mesh network automatically establish and maintain network connectivity. To provide reliable service coverage, the source node needs to broadcast or flood control packets; over the years, researchers have worked to reduce the redundancy of such broadcasts in the wireless mesh domain. Redundant control packets consume the bandwidth of the wireless medium, significantly reduce the average throughput and consequently degrade overall system performance. In this paper I study the optimization problem in...

  5. On Linear Spaces of Polyhedral Meshes.

    Science.gov (United States)

    Poranne, Roi; Chen, Renjie; Gotsman, Craig

    2015-05-01

    Polyhedral meshes (PM)-meshes having planar faces-have enjoyed a rise in popularity in recent years due to their importance in architectural and industrial design. However, they are also notoriously difficult to generate and manipulate. Previous methods start with a smooth surface and then apply elaborate meshing schemes to create polyhedral meshes approximating the surface. In this paper, we describe a reverse approach: given the topology of a mesh, we explore the space of possible planar meshes having that topology. Our approach is based on a complete characterization of the maximal linear spaces of polyhedral meshes contained in the curved manifold of polyhedral meshes with a given topology. We show that these linear spaces can be described as nullspaces of differential operators, much like harmonic functions are nullspaces of the Laplacian operator. An analysis of this operator provides tools for global and local design of a polyhedral mesh, which fully expose the geometric possibilities and limitations of the given topology.

  6. Finding Regions of Interest on Toroidal Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Sinha, Rishi R; Jones, Chad; Ethier, Stephane; Klasky, Scott; Ma, Kwan-Liu; Shoshani, Arie; Winslett, Marianne

    2011-02-09

Fusion promises to provide clean and safe energy, and a considerable amount of research effort is underway to turn this aspiration into reality. This work focuses on a building block for analyzing data produced from the simulation of microturbulence in magnetic confinement fusion devices: the task of efficiently extracting regions of interest. Like many other simulations where a large amount of data are produced, the careful study of "interesting" parts of the data is critical to gain understanding. In this paper, we present an efficient approach for finding these regions of interest. Our approach takes full advantage of the underlying mesh structure in magnetic coordinates to produce a compact representation of the mesh points inside the regions and an efficient connected component labeling algorithm for constructing regions from points. This approach scales linearly with the surface area of the regions of interest instead of the volume, as shown with both computational complexity analysis and experimental measurements. Furthermore, this new approach is hundreds of times faster than a recently published method based on Cartesian coordinates.
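
The connected-component-labeling step described above can be illustrated with a standard union-find pass over flagged cells. The sketch below is a generic 2-D Cartesian illustration, not the paper's mesh-in-magnetic-coordinates implementation, but it shows the core mechanism: mark the cells of interest, merge neighbouring marked cells, then read regions off the root labels.

```python
def label_regions(flags):
    """Group 4-connected 'interesting' cells of a 2-D grid into regions
    using union-find with path halving. flags: list of rows of 0/1."""
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    rows, cols = len(flags), len(flags[0])
    for r in range(rows):
        for c in range(cols):
            if flags[r][c]:
                parent[(r, c)] = (r, c)  # each marked cell starts alone
    for r in range(rows):
        for c in range(cols):
            if flags[r][c]:
                if r > 0 and flags[r - 1][c]:
                    union((r, c), (r - 1, c))
                if c > 0 and flags[r][c - 1]:
                    union((r, c), (r, c - 1))
    regions = {}
    for cell in parent:
        regions.setdefault(find(cell), []).append(cell)
    return list(regions.values())

# two separate regions of interest in a small grid
regions = label_regions([[1, 1, 0], [0, 0, 0], [0, 1, 1]])
```

The cost is near-linear in the number of marked cells, which is what makes restricting the pass to region boundaries (as the paper does) pay off.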

  7. Wang-Landau algorithm for entropic sampling of arch-based microstates in the volume ensemble of static granular packings

    Directory of Open Access Journals (Sweden)

    D. Slobinsky

    2015-03-01

We implement the Wang-Landau algorithm to sample with equal probabilities the static configurations of a model granular system. The "non-interacting rigid arch model" used is based on the description of static configurations by means of splitting the assembly of grains into sets of stable arches. This technique allows us to build the entropy as a function of the volume of the packing for large systems. We make a special note of the details that have to be considered when defining the microstates and proposing the moves for the correct sampling in these unusual models. We compare our results with previous exact calculations of the model made at moderate system sizes. The technique opens a new opportunity to calculate the entropy of more complex granular models. Received: 19 January 2015, Accepted: 25 February 2015; Reviewed by: M. Pica Ciamarra, Nanyang Technological University, Singapore; Edited by: C. S. O'Hern; DOI: http://dx.doi.org/10.4279/PIP.070001. Cite as: D. Slobinsky, L. A. Pugnaloni, Papers in Physics 7, 070001 (2015).
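
The Wang-Landau idea of accepting moves with probability g(E_old)/g(E_new), so that all energy (or volume) levels are visited equally often while the density-of-states estimate g is built up, can be demonstrated on a toy system. The sketch below is not the arch-model code: it estimates the density of states of n independent two-state "grains" with E = number of excited grains, whose exact answer is the binomial coefficient C(n, E).

```python
import math
import random

def wang_landau(n=8, log_f_final=1e-6, flatness=0.8, seed=1):
    """Wang-Landau estimate of log g(E) for n independent two-state spins,
    E = number of 'up' spins (exact g(E) = C(n, E))."""
    random.seed(seed)
    spins = [0] * n
    log_g = [0.0] * (n + 1)    # running log density-of-states estimate
    hist = [0] * (n + 1)       # visit histogram at the current f level
    log_f = 1.0                # modification factor, halved on flatness
    E = 0
    while log_f > log_f_final:
        for _ in range(10000):
            k = random.randrange(n)
            E_new = E + (1 if spins[k] == 0 else -1)
            # accept with min(1, g(E)/g(E_new)) -> drives a flat histogram
            if math.log(random.random()) < log_g[E] - log_g[E_new]:
                spins[k] ^= 1
                E = E_new
            log_g[E] += log_f
            hist[E] += 1
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (n + 1)
            log_f /= 2.0
    return log_g

log_g = wang_landau()
```

Only differences of log g are meaningful; e.g. log g(4) - log g(0) should approach ln C(8,4) = ln 70 ≈ 4.25.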

  8. Deformable mesh registration for the validation of automatic target localization algorithms

    Science.gov (United States)

    Robertson, Scott; Weiss, Elisabeth; Hugo, Geoffrey D.

    2013-01-01

Purpose: To evaluate deformable mesh registration (DMR) as a tool for validating automatic target registration algorithms used during image-guided radiation therapy. Methods: DMR was implemented in a hierarchical model, with rigid, affine, and B-spline transforms optimized in succession to register a pair of surface meshes. The gross tumor volumes (primary tumor and involved lymph nodes) were contoured by a physician on weekly CT scans in a cohort of lung cancer patients and converted to surface meshes. The meshes from weekly CT images were registered to the mesh from the planning CT, and the resulting registered meshes were compared with the delineated surfaces. Known deformations were also applied to the meshes, followed by mesh registration to recover the known deformation. Mesh registration accuracy was assessed at the mesh surface by computing the symmetric surface distance (SSD) between vertices of each registered mesh pair. Mesh registration quality in regions within 5 mm of the mesh surface was evaluated with respect to a high quality deformable image registration. Results: For 18 patients presenting with a total of 19 primary lung tumors and 24 lymph node targets, the SSD averaged 1.3 ± 0.5 and 0.8 ± 0.2 mm, respectively. Vertex registration errors (VRE) relative to the applied known deformation were 0.8 ± 0.7 and 0.2 ± 0.3 mm for the primary tumor and lymph nodes, respectively. Inside the mesh surface, corresponding average VRE ranged from 0.6 to 0.9 and 0.2 to 0.9 mm, respectively. Outside the mesh surface, average VRE ranged from 0.7 to 1.8 and 0.2 to 1.4 mm. The magnitude of errors generally increased with increasing distance away from the mesh. Conclusions: Provided that delineated surfaces are available, deformable mesh registration is an accurate and reliable method for obtaining a reference registration to validate automatic target registration algorithms for image-guided radiation therapy, specifically in regions on or near the target surfaces.

  9. Respiratory Allergy and Inflammation Due to Ambient Particles (RAIAP) Collection of Particulate Matter samples from 5 European sites with High Volume Cascade Impactors

    NARCIS (Netherlands)

    Cassee FR; Fokkens PHB; Leseman DLAC; Bloemen HJTh; Boere AJF; MGO

    2003-01-01

The aim of this deliverable was to perform a European-wide collection of particulate samples. With the aid of two high-volume cascade impactors (HVCI), coarse (2.5-10 µm) and fine (0.1-2.5 µm) particulate samples were collected in Amsterdam, Lodz, Oslo, Rome and the Dutch sea-side (De Zilk).

  11. Biomechanical and histological evaluation of abdominal wall compliance with intraperitoneal onlay mesh implants in rabbits: a comparison of six different state-of-the-art meshes.

    Science.gov (United States)

    Konerding, M A; Chantereau, P; Delventhal, V; Holste, J-L; Ackermann, M

    2012-09-01

An ideal prosthetic mesh for incisional hernia repair should mimic the anisotropic compliance of the abdominal wall, and at lower loads should exhibit higher distensibility without impairment of safety at higher loads. This study evaluated the biomechanical properties of six meshes in a rabbit model. New Zealand white rabbits were used for this study. Two meshes of the same brand (Ethicon Physiomesh™, Bard Composix® L/P, Gore Dualmesh®, Bard Sepramesh®, Ethicon Proceed® or Parietex™ Composite) were implanted into each animal for assessment of intra-abdominal hernia repair, with a total of ten meshes per group. Twelve weeks after implantation, the abdominal walls with ingrown meshes were harvested and examined biomechanically with a plunger test. The mesh-tissue compliance was evaluated by the forces exerted at given displacements and also described through a simple mathematical approximation. Abdominal wall samples were collected for histopathology, cell turnover and morphometry. No mesh-related complications were seen. The adhesion score was significantly higher in Bard Composix® L/P and Ethicon Proceed® meshes. Significant shrinkage was seen in Gore Dualmesh® and Parietex™ Composite meshes. Physiomesh™ exhibited the highest compliance during plunger testing, characterized by lower, more physiological reaction forces against tissue displacement than the competitor meshes. In contrast, the safety modulus was comparable in all groups. Histology showed less collagen and less foreign body reaction in the Physiomesh™ samples contributing to patient's comfort. In terms of safety, this study showed no superiority of any single mesh. The comfort modulus however differed, being lowest in the newly developed Physiomesh™. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. The Evolution of P-wave Velocity in Fault Gouge: Initial Results for Samples from the SAFOD Volume.

    Science.gov (United States)

    Knuth, M. W.; Tobin, H. J.; Marone, C.

    2008-12-01

    We present initial results from a new technique for observing the evolution of elastic properties in sheared fault zone materials via acoustic wave velocity. The relationship between the mechanical strength of fault gouge and acoustic velocity during active deformation has important implications not only for a physical understanding of elasticity in deforming granular media, but also for the interpretation of the seismic velocity at the field scale. Experiments are conducted at atmospheric temperature and saturation state in a double-direct-shear testing apparatus, with normal stress stepped from 1 to 19 MPa to interrogate behavior during compaction, and sheared at a rate of 10 microns/second to observe changes in velocity with increasing strain. Tests are divided between those involving continuous shear to a displacement of 22.5 mm, and those with intervals of 3.75 mm shear separated by unloading and reloading sequences in normal stress. Velocity is measured by time-of-flight between two piezoelectric P-wave transducers set into the sample configuration on either side of the shearing layers. Samples tested include common laboratory standards for simulated fault gouge and field samples taken from representative localities in the 3D rock volume containing the San Andreas Fault Observatory at Depth experiment in Parkfield, California. The velocities of sand and clay end-member gouges are observed to behave differently under shear, and mixtures of quartz sand and montmorillonite behave differently from both end-member materials. Initial results suggest that particle sorting exerts a strong influence on both the absolute velocity and the evolution of velocity in response to increasing shear strain where the elastic properties of the grains are similar. We also observe a first-order relationship between the coefficient of friction and P-wave velocity that appears to be related to grain reorganization at the onset of shear following initial compaction.

  13. High-resolution liquid- and solid-state nuclear magnetic resonance of nanoliter sample volumes using microcoil detectors

    Science.gov (United States)

    Kentgens, A. P. M.; Bart, J.; van Bentum, P. J. M.; Brinkmann, A.; van Eck, E. R. H.; Gardeniers, J. G. E.; Janssen, J. W. G.; Knijn, P.; Vasa, S.; Verkuijlen, M. H. W.

    2008-02-01

    The predominant means to detect nuclear magnetic resonance (NMR) is to monitor the voltage induced in a radiofrequency coil by the precessing magnetization. To address the sensitivity of NMR for mass-limited samples it is worthwhile to miniaturize this detector coil. Although making smaller coils seems a trivial step, the challenges in the design of microcoil probeheads are to get the highest possible sensitivity while maintaining high resolution and keeping the versatility to apply all known NMR experiments. This means that the coils have to be optimized for a given sample geometry, circuit losses should be avoided, susceptibility broadening due to probe materials has to be minimized, and finally the B1-fields generated by the rf coils should be homogeneous over the sample volume. This contribution compares three designs that have been miniaturized for NMR detection: solenoid coils, flat helical coils, and the novel stripline and microslot designs. So far most emphasis in microcoil research was in liquid-state NMR. This contribution gives an overview of the state of the art of microcoil solid-state NMR by reviewing literature data and showing the latest results in the development of static and micro magic angle spinning (microMAS) solenoid-based probeheads. Besides their mass sensitivity, microcoils can also generate tremendously high rf fields which are very useful in various solid-state NMR experiments. The benefits of the stripline geometry for studying thin films are shown. This geometry also proves to be a superior solution for microfluidic NMR implementations in terms of sensitivity and resolution.

  14. Implicit compressible flow solvers on unstructured meshes

    Science.gov (United States)

    Nagaoka, Makoto; Horinouchi, Nariaki

    1993-09-01

    An implicit solver for compressible flows using Bi-CGSTAB method is proposed. The Euler equations are discretized with the delta-form by the finite volume method on the cell-centered triangular unstructured meshes. The numerical flux is calculated by Roe's upwind scheme. The linearized simultaneous equations with the irregular nonsymmetric sparse matrix are solved by the Bi-CGSTAB method with the preconditioner of incomplete LU factorization. This method is also vectorized by the multi-colored ordering. Although the solver requires more computational memory, it shows faster and more robust convergence than the other conventional methods: three-stage Runge-Kutta method, point Gauss-Seidel method, and Jacobi method for two-dimensional inviscid steady flows.
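
The Bi-CGSTAB iteration at the core of such a solver can be sketched compactly. The snippet below is the textbook van der Vorst recurrence without the ILU preconditioner and on a small dense system, purely for illustration; it is not the authors' vectorized implementation for sparse finite-volume matrices.

```python
def bicgstab(A, b, tol=1e-10, max_iter=200):
    """Unpreconditioned Bi-CGSTAB (van der Vorst) for a dense system Ax = b,
    suitable for nonsymmetric matrices where plain CG does not apply."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    r_hat = r[:]                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = [0.0] * n
    for _ in range(max_iter):
        rho_new = dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:    # early exit on the half-step residual
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t) # stabilization step
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            break
    return x

# small nonsymmetric test system (hypothetical values, diagonally dominant)
A = [[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]
b = [1.0, 2.0, 3.0]
x = bicgstab(A, b)
```

In a production solver the matrix-vector product would act on the sparse delta-form Jacobian and each residual update would be preconditioned (e.g. by incomplete LU), which is where most of the convergence gain reported in the abstract comes from.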

  15. A compact time-of-flight SANS instrument optimised for measurements of small sample volumes at the European Spallation Source

    Science.gov (United States)

    Kynde, Søren; Hewitt Klenø, Kaspar; Nagy, Gergely; Mortensen, Kell; Lefmann, Kim; Kohlbrecher, Joachim; Arleth, Lise

    2014-11-01

The high flux at European Spallation Source (ESS) will allow for performing experiments with relatively small beam-sizes while maintaining a high intensity of the incoming beam. The pulsed nature of the source makes the facility optimal for time-of-flight small-angle neutron scattering (ToF-SANS). We find that a relatively compact SANS instrument becomes the optimal choice in order to obtain the widest possible q-range in a single setting and the best possible exploitation of the neutrons in each pulse and hence obtaining the highest possible flux at the sample position. The instrument proposed in the present article is optimised for performing fast measurements of small scattering volumes, typically down to 2×2×2 mm³, while covering a broad q-range from about 0.005 1/Å to 0.5 1/Å in a single instrument setting. This q-range corresponds to that available at a typical good BioSAXS instrument and is relevant for a wide set of biomacromolecular samples. A central advantage of covering the whole q-range in a single setting is that each sample has to be loaded only once. This makes it convenient to use the fully automated high-throughput flow-through sample changers commonly applied at modern synchrotron BioSAXS-facilities. The central drawback of choosing a very compact instrument is that the resolution in terms of δλ/λ obtained with the short wavelength neutrons becomes worse than what is usually the standard at state-of-the-art SANS instruments. Our McStas based simulations of the instrument performance for a set of characteristic biomacromolecular samples show that the resulting smearing effects still have relatively minor effects on the obtained data and can be compensated for in the data analysis. However, in cases where a better resolution is required in combination with the large simultaneous q-range characteristic of the instrument, we show that this can be obtained by inserting a set of choppers.
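
A single-setting q-range like the one quoted follows from the elastic-scattering relation q = (4π/λ)·sin(θ/2) combined with the wavelength band and the angular coverage of the detector. The sketch below uses illustrative, hypothetical geometry values (they are not the ESS design parameters), yet lands in roughly the quoted range.

```python
import math

def q_value(wavelength, scattering_angle):
    """Momentum transfer q = 4*pi*sin(theta/2)/lambda,
    wavelength in Angstrom, theta = full scattering angle in radians."""
    return 4.0 * math.pi * math.sin(scattering_angle / 2.0) / wavelength

def q_range(lam_min, lam_max, r_min, r_max, detector_distance):
    """Accessible (q_min, q_max) for a wavelength band [lam_min, lam_max]
    and detector radii [r_min, r_max] at a given sample-detector distance.
    q_min: longest wavelength at the smallest angle; q_max: the opposite."""
    theta_min = math.atan(r_min / detector_distance)
    theta_max = math.atan(r_max / detector_distance)
    return q_value(lam_max, theta_min), q_value(lam_min, theta_max)

# hypothetical compact geometry: 3-10 Angstrom band, 3 cm inner / 0.5 m outer
# detector radius, 5 m sample-detector distance
qmin, qmax = q_range(3.0, 10.0, 0.03, 0.5, 5.0)
```

The example makes the design trade-off visible: a wide wavelength band (cheap at a pulsed source, at the price of δλ/λ resolution) stretches the q-range far more than the modest angular coverage of a compact detector does.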

  16. A compact time-of-flight SANS instrument optimised for measurements of small sample volumes at the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Kynde, Søren, E-mail: kynde@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark); Hewitt Klenø, Kaspar [Niels Bohr Institute, University of Copenhagen (Denmark); Nagy, Gergely [SINQ, Paul Scherrer Institute (Switzerland); Mortensen, Kell; Lefmann, Kim [Niels Bohr Institute, University of Copenhagen (Denmark); Kohlbrecher, Joachim, E-mail: Joachim.kohlbrecher@psi.ch [SINQ, Paul Scherrer Institute (Switzerland); Arleth, Lise, E-mail: arleth@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark)

    2014-11-11

The high flux at the European Spallation Source (ESS) will allow for performing experiments with relatively small beam sizes while maintaining a high intensity of the incoming beam. The pulsed nature of the source makes the facility optimal for time-of-flight small-angle neutron scattering (ToF-SANS). We find that a relatively compact SANS instrument becomes the optimal choice in order to obtain the widest possible q-range in a single setting and the best possible exploitation of the neutrons in each pulse, and hence the highest possible flux at the sample position. The instrument proposed in the present article is optimised for performing fast measurements of small scattering volumes, typically down to 2×2×2 mm³, while covering a broad q-range from about 0.005 1/Å to 0.5 1/Å in a single instrument setting. This q-range corresponds to that available at a typical good BioSAXS instrument and is relevant for a wide set of biomacromolecular samples. A central advantage of covering the whole q-range in a single setting is that each sample has to be loaded only once. This makes it convenient to use the fully automated high-throughput flow-through sample changers commonly applied at modern synchrotron BioSAXS facilities. The central drawback of choosing a very compact instrument is that the resolution in terms of δλ/λ obtained with the short-wavelength neutrons becomes worse than what is usually the standard at state-of-the-art SANS instruments. Our McStas-based simulations of the instrument performance for a set of characteristic biomacromolecular samples show that the resulting smearing effects still have relatively minor effects on the obtained data and can be compensated for in the data analysis. However, in cases where a better resolution is required in combination with the large simultaneous q-range characteristic of the instrument, we show that this can be obtained by inserting a set of choppers.
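The quoted q-range follows from the elastic-scattering relation q = (4π/λ)·sin(θ), with 2θ the scattering angle. A minimal sketch of how a wavelength band and angular coverage translate into a single-setting q-range; the band edges and angles below are illustrative assumptions, not the instrument's actual design values:

```python
import math

def q_value(wavelength_A, two_theta_rad):
    """Momentum transfer q = (4*pi/lambda) * sin(theta), in 1/Angstrom."""
    return (4.0 * math.pi / wavelength_A) * math.sin(two_theta_rad / 2.0)

# Illustrative ToF wavelength band and detector angular coverage (assumed).
lam_min, lam_max = 2.5, 14.0              # Angstrom
two_theta_min, two_theta_max = 0.01, 0.2  # rad
q_min = q_value(lam_max, two_theta_min)   # longest wavelength, smallest angle
q_max = q_value(lam_min, two_theta_max)   # shortest wavelength, largest angle
print(f"single-setting q-range: {q_min:.4f} to {q_max:.2f} 1/A")
```

With these assumed values the single setting spans roughly 0.005 to 0.5 1/Å, the same order as the range quoted in the abstract.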

  17. The Tree-Particle-Mesh N-body Gravity Solver

    CERN Document Server

    Bode, P; Xu, G; Bode, Paul; Ostriker, Jeremiah P.; Xu, Guohong

    2000-01-01

The Tree-Particle-Mesh (TPM) N-body algorithm couples the tree algorithm for directly computing forces on particles in a hierarchical grouping scheme with the extremely efficient mesh-based structured PM approach. The combined TPM algorithm takes advantage of the fact that gravitational forces are linear functions of the density field. Thus one can use domain decomposition to break down the density field into many separate high-density regions containing a significant fraction of the mass but residing in a very small fraction of the total volume. In each of these high-density regions the gravitational potential is computed via the tree algorithm, supplemented by tidal forces from the external density distribution. For the bulk of the volume, forces are computed via the PM algorithm; timesteps in this PM component are large compared to individually determined timesteps in the tree regions. Since each tree region can be treated independently, the algorithm lends itself to very efficient parallelization using me...
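The domain decomposition described above, isolating high-density regions that hold much of the mass in little of the volume, can be illustrated with a toy density threshold on a particle-count mesh. This is a NumPy sketch with made-up particle data; the 5× mean-density cut is an arbitrary illustrative threshold, not the TPM paper's criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy particle set: a uniform background plus one dense clump.
background = rng.uniform(0.0, 1.0, size=(5000, 3))
clump = 0.5 + 0.02 * rng.standard_normal((2000, 3))
pos = np.vstack([background, clump])

# Deposit particles on a coarse mesh (nearest grid point) to estimate density.
n = 16
cells = np.clip((pos * n).astype(int), 0, n - 1)
counts = np.zeros((n, n, n), dtype=int)
np.add.at(counts, (cells[:, 0], cells[:, 1], cells[:, 2]), 1)

# Cells far above the mean density get tree treatment; the rest stay on the PM grid.
threshold = 5.0 * counts.mean()
tree_cells = counts > threshold
frac_volume = tree_cells.mean()
frac_mass = counts[tree_cells].sum() / counts.sum()
print(f"tree cells: {frac_volume:.1%} of volume, {frac_mass:.1%} of mass")
```

The printout shows the characteristic imbalance the algorithm exploits: a small fraction of cells (volume) captures a large fraction of the mass.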

  18. Inching toward 'push-button' meshing

    National Research Council Canada - National Science Library

    James Masters

    2015-01-01

      While "push-button" meshing remains an elusive goal, advances in 2015 have brought the technology to the point where meshes can be constructed with relative ease when appropriate surfaces are available...

  19. Capillary ion chromatography with on-column focusing for ultra-trace analysis of methanesulfonate and inorganic anions in limited volume Antarctic ice core samples.

    Science.gov (United States)

    Rodriguez, Estrella Sanz; Poynter, Sam; Curran, Mark; Haddad, Paul R; Shellie, Robert A; Nesterenko, Pavel N; Paull, Brett

    2015-08-28

Preservation of ionic species within Antarctic ice yields a unique proxy record of the Earth's climate history. Studies have until now focused on two proxies: the ionic components of sea salt aerosol and methanesulfonic acid. Measurement of all of the major ionic species in ice core samples is typically carried out by ion chromatography. Former methods, whilst providing suitable detection limits, have been based upon off-column preconcentration techniques, requiring larger sample volumes, with potential for sample contamination and/or carryover. Here, a new capillary ion chromatography based analytical method has been developed for quantitative analysis of limited-volume Antarctic ice core samples. The developed analytical protocol applies capillary ion chromatography (with suppressed conductivity detection) and direct on-column sample injection and focusing, thus eliminating the requirement for off-column sample preconcentration. This limits the total sample volume needed to 300 μL per analysis, allowing for triplicate analysis of anions including fluoride, methanesulfonate, chloride, sulfate and nitrate. Application to composite ice-core samples is demonstrated, with coupling of the capillary ion chromatograph to high resolution mass spectrometry used to confirm the presence and purity of the observed methanesulfonate peak.

  20. Particle Collection Efficiency for Nylon Mesh Screens

    OpenAIRE

    Cena, Lorenzo G.; Ku, Bon-Ki; Peters, Thomas M.

    2011-01-01

    Mesh screens composed of nylon fibers leave minimal residual ash and produce no significant spectral interference when ashed for spectrometric examination. These characteristics make nylon mesh screens attractive as a collection substrate for nanoparticles. A theoretical single-fiber efficiency expression developed for wire-mesh screens was evaluated for estimating the collection efficiency of submicrometer particles for nylon mesh screens. Pressure drop across the screens, the effect of part...

  1. 6th International Meshing Roundtable '97

    Energy Technology Data Exchange (ETDEWEB)

    White, D.

    1997-09-01

The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; technical presentations from industrial researchers.

  2. Adaptive Meshing of Ship Air-Wake Flowfields

    Science.gov (United States)

    2014-10-21

Table 1. Coefficients for BDF schemes:

    Order    Φ(n+1)    Φ(n)    Φ(n-1)
    1st      1         -1      0
    2nd      3/2       -2      1/2

The control volume surrounding each node is the median... Biedron, Robert T., Vatsa, Veer N., and Atkins, Harold L., "Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications", AIAA-2009-1360.
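The BDF coefficients in the table define implicit time-stepping formulas, e.g. BDF2: (3/2·y(n+1) − 2·y(n) + 1/2·y(n−1))/Δt = f(y(n+1)). A minimal sketch applying them to the linear test problem dy/dt = −y (an illustration of the scheme, not taken from the report itself):

```python
import math

# BDF time stepping for dy/dt = -y, y(0) = 1, using the table's coefficients:
# BDF1 (1, -1, 0) and BDF2 (3/2, -2, 1/2).
dt, steps = 0.01, 100                # integrate to t = 1
y_prev, y = 1.0, 1.0 / (1.0 + dt)    # one BDF1 (backward Euler) startup step

for _ in range(steps - 1):
    # (3/2*y_new - 2*y + 1/2*y_prev)/dt = -y_new  =>  solve for y_new
    y_new = (2.0 * y - 0.5 * y_prev) / (1.5 + dt)
    y_prev, y = y, y_new

print(y, math.exp(-1.0))  # BDF2 is second order: error is O(dt^2)
```

Note the single lower-order startup step: BDF2 needs two back values, so the first step is taken with BDF1.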

  3. Efficient and exact mesh deformation using multiscale RBF interpolation

    Science.gov (United States)

    Kedward, L.; Allen, C. B.; Rendall, T. C. S.

    2017-09-01

Radial basis function (RBF) interpolation is popular for mesh deformation due to robustness and generality, but the cost scales with the number of surface points sourcing the deformation as O(Ns³). Hence, there have been numerous works investigating efficient methods using reduced datasets. However, although reduced-data methods are efficient, they require a secondary method to treat an error vector field to ensure surface points not included in the primary deformation are moved to the correct location, and the volume mesh moved accordingly. A new method is presented which captures global and local motions at multiple scales using all the surface points, and so no correction stage is required; all surface points are used and a single interpolation built, but the cost and conditioning issues associated with RBF methods are eliminated. Moreover, the sparsity introduced is exploited using a wall distance function, to further reduce the cost. The method is compared to an efficient greedy method, and it is shown mesh quality is always comparable with or better than with the greedy method, and cost is comparable or cheaper at all stages. Surface mesh preprocessing is the dominant cost for reduced-data methods and this cost is reduced significantly here: greedy methods select points to minimise interpolation error, requiring repeated system solution and cost O(Nred⁴) to select Nred points; the multiscale method has no error, and the problem is transferred to a geometric search, with cost O(Ns log Ns), resulting in an eight orders of magnitude cost reduction for three-dimensional meshes. Furthermore, since the method is dependent on geometry, not deformation, it only needs to be applied once, prior to simulation, as the mesh deformation is decoupled from the point selection process.
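As a sketch of the underlying mechanics (not the paper's multiscale algorithm): an RBF interpolant built from all surface displacements is exact at the surface points, so no secondary error-correction field is needed; the price is the dense O(Ns³) solve that the multiscale method avoids. The Wendland C2 basis, support radius, and all point data below are illustrative assumptions:

```python
import numpy as np

def rbf(r):
    # Wendland C2 compactly supported basis (support radius 1), an example choice.
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(1)
surf = rng.uniform(-1, 1, size=(50, 2))   # surface (source) points
d_surf = 0.1 * np.sin(surf)               # prescribed surface displacements

# Dense RBF system: factoring it costs O(Ns^3) -- the scaling the paper improves on.
r = np.linalg.norm(surf[:, None, :] - surf[None, :, :], axis=-1)
support = 2.0
coeffs = np.linalg.solve(rbf(r / support), d_surf)

vol = rng.uniform(-1, 1, size=(500, 2))   # volume mesh points to deform
rv = np.linalg.norm(vol[:, None, :] - surf[None, :, :], axis=-1)
vol_def = vol + rbf(rv / support) @ coeffs

# Interpolation is exact at the surface points: no residual error field.
check = surf + rbf(r / support) @ coeffs - (surf + d_surf)
print(np.abs(check).max())
```

Greedy reduced-data methods would build this system from a subset of the 50 surface points and then correct the leftover surface error; using all points makes that correction unnecessary.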

  4. Fast pyrolysis in a novel wire-mesh reactor: design and initial results

    NARCIS (Netherlands)

    Hoekstra, E.; Swaaij, van W.P.M.; Kersten, S.R.A.; Hogendoorn, J.A.

    2012-01-01

    Pyrolysis is known to occur by decomposition processes followed by vapour phase reactions. The goal of this research is to develop a novel device to study the initial decomposition processes. For this, a novel wire-mesh reactor was constructed. A small sample (<0.1 g) was clamped between two meshes

  5. Fast pyrolysis in a novel wire-mesh reactor: design and initial results

    NARCIS (Netherlands)

    Hoekstra, E.; van Swaaij, Willibrordus Petrus Maria; Kersten, Sascha R.A.; Hogendoorn, Kees

    2012-01-01

    Pyrolysis is known to occur by decomposition processes followed by vapour phase reactions. The goal of this research is to develop a novel device to study the initial decomposition processes. For this, a novel wire-mesh reactor was constructed. A small sample (<0.1 g) was clamped between two meshes

  6. Mg II Absorption Characteristics of a Volume-Limited Sample of Galaxies at z ~ 0.1

    Science.gov (United States)

    Barton, Elizabeth J.; Cooke, Jeff

    2009-12-01

We present an initial survey of Mg II absorption characteristics in the halos of a carefully constructed, volume-limited subsample of galaxies embedded in the spectroscopic part of the Sloan Digital Sky Survey (SDSS). We observed quasars near sightlines to 20 low-redshift (z ~ 0.1), luminous (in M_r + 5 log h) galaxies, each with a background quasar within a projected 75 h⁻¹ kpc of its center, although we preferentially sample galaxies with lower impact parameters and slightly more star formation within this range. Of the observed systems, six exhibit strong (W_eq(2796) >= 0.3 Å) Mg II absorption at the galaxy's redshift, six systems have upper limits which preclude strong Mg II absorption, while the remaining observations rule out very strong (W_eq(2796) >= 1-2 Å) absorption. The absorbers fall at higher impact parameters than many non-absorber sightlines, indicating a covering fraction f_c <~ 0.4 for >=0.3 Å absorbers at z ~ 0.1, even at small impact parameters. The observations were made at the W. M. Keck Observatory, operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.

  7. A volume-limited sample of X-ray galaxy groups and clusters - I. Radial entropy and cooling time profiles

    CERN Document Server

    Panagoulia, Electra; Sanders, Jeremy

    2013-01-01

    We present the first results of our study of a sample of 101 X-ray galaxy groups and clusters, which is volume-limited in each of three X-ray luminosity bins. The aim of this work is to study the properties of the innermost ICM in the cores of our groups and clusters, and to determine the effect of non-gravitational processes, such as active galactic nucleus (AGN) feedback, on the ICM. The entropy of the ICM is of special interest, as it bears the imprint of the thermal history of a cluster, and it also determines a cluster's global properties. Entropy profiles can therefore be used to examine any deviations from cluster self-similarity, as well as the effects of feedback on the ICM. We find that the entropy profiles are well-fitted by a simple powerlaw model, of the form $K(r) = \\alpha\\times(r/100 \\rm{kpc})^{\\beta}$, where $\\alpha$ and $\\beta$ are constants. We do not find evidence for the existence of an "entropy floor", i.e. our entropy profiles do not flatten out at small radii, as suggested by some previ...
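The quoted power-law form K(r) = α·(r/100 kpc)^β is linear in log space, so it can be fitted by ordinary least squares; a sketch with synthetic data (the α, β, and scatter values are invented for illustration, not taken from the paper's sample):

```python
import numpy as np

# Synthetic entropy profile with the paper's power-law form
# K(r) = alpha * (r / 100 kpc)^beta, plus a little log-normal scatter.
rng = np.random.default_rng(2)
r_kpc = np.logspace(0.5, 2.5, 20)          # ~3 to ~300 kpc
alpha_true, beta_true = 120.0, 0.7         # hypothetical values
K = alpha_true * (r_kpc / 100.0) ** beta_true \
    * np.exp(0.05 * rng.standard_normal(20))

# In log space the model is linear: log K = log alpha + beta * log(r/100)
A = np.vstack([np.ones_like(r_kpc), np.log(r_kpc / 100.0)]).T
log_alpha, beta = np.linalg.lstsq(A, np.log(K), rcond=None)[0]
print(np.exp(log_alpha), beta)
```

An "entropy floor" would show up as a systematic flattening of the residuals at small r relative to this straight-line fit in log-log space.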

  8. 50 CFR 300.110 - Mesh size.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Mesh size. 300.110 Section 300.110... Antarctic Marine Living Resources § 300.110 Mesh size. (a) The use of pelagic and bottom trawls having the mesh size in any part of a trawl less than indicated is prohibited for any directed fishing for the...

  9. Markov Random Fields on Triangle Meshes

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas

    2010-01-01

    mesh edges according to a feature detecting prior. Since we should not smooth across a sharp feature, we use edge labels to control the vertex process. In a Bayesian framework, MRF priors are combined with the likelihood function related to the mesh formation method. The output of our algorithm...... is a piecewise smooth mesh with explicit labelling of edges belonging to the sharp features....

  10. Mesh network achieve its fuction on Linux

    OpenAIRE

    Pei Ping; PETRENKO Y.N.

    2015-01-01

In this paper, we introduce mesh network protocol evaluation and development on Linux. We explain the Linux operating principles used in a mesh network and present a graph showing the packet routing path. Finally, our tests show that the AODV mesh protocol satisfies the performance requirements of the Linux platform.

  11. The mesh network protocol evaluation and development

    OpenAIRE

    Pei Ping; PETRENKO Y.N.

    2015-01-01

In this paper, we introduce mesh network protocol evaluation and development, explaining how different protocols are used in a mesh network. In addition, multi-hop routing protocols can provide robustness and load balancing to communication in wireless mesh networks.

  12. Method of sequential mesh on Koopman-Darmois distributions

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

For costly and/or destructive tests, the sequential method with a proper maximum sample size is needed. Based on Koopman-Darmois distributions, this paper proposes the method of sequential mesh, which has an acceptable maximum sample size. In comparison with the popular truncated sequential probability ratio test, our method has the advantage of a smaller maximum sample size and is especially applicable for costly and/or destructive tests.
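For contrast, the truncated sequential probability ratio test mentioned above can be sketched for a Bernoulli test: accumulate the log-likelihood ratio after each observation, stop at Wald's thresholds, and force a decision at the maximum sample size. The parameter values are illustrative, and this is the classical truncated SPRT, not the paper's sequential-mesh method:

```python
import math

def truncated_sprt(samples, p0=0.1, p1=0.3, alpha=0.05, beta=0.05, n_max=50):
    """Wald SPRT for H0: p = p0 vs H1: p = p1, with a forced decision at n_max."""
    lo = math.log(beta / (1.0 - alpha))   # accept-H0 threshold
    hi = math.log((1.0 - beta) / alpha)   # reject-H0 threshold
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1.0 - p1) / (1.0 - p0))
        if llr >= hi:
            return "reject H0", n
        if llr <= lo:
            return "accept H0", n
        if n >= n_max:
            break
    return ("reject H0" if llr > 0 else "accept H0"), n

# A run of 50 conforming items (no failures) accepts H0 well before n_max.
print(truncated_sprt([0] * 50))  # -> ('accept H0', 12)
```

The drawback the paper targets is visible in `n_max`: the truncation point of the SPRT must be set large enough to guarantee the error rates, whereas the sequential-mesh method accepts a smaller maximum sample size.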

  13. User Manual for the PROTEUS Mesh Tools

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-09-19

PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into place to create multiple codes that help assist in the mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. The NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable for PROTEUS, we have constructed several tools which allow further mesh and geometry construction (e.g., mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving both descriptions of the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid.
Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial

  14. A tetrahedral mesh generation approach for 3D marine controlled-source electromagnetic modeling

    Science.gov (United States)

    Um, Evan Schankee; Kim, Seung-Sep; Fu, Haohuan

    2017-03-01

    3D finite-element (FE) mesh generation is a major hurdle for marine controlled-source electromagnetic (CSEM) modeling. In this paper, we present a FE discretization operator (FEDO) that automatically converts a 3D finite-difference (FD) model into reliable and efficient tetrahedral FE meshes for CSEM modeling. FEDO sets up wireframes of a background seabed model that precisely honors the seafloor topography. The wireframes are then partitioned into multiple regions. Outer regions of the wireframes are discretized with coarse tetrahedral elements whose maximum size is as large as a skin depth of the regions. We demonstrate that such coarse meshes can produce accurate FE solutions because numerical dispersion errors of tetrahedral meshes do not accumulate but oscillates. In contrast, central regions of the wireframes are discretized with fine tetrahedral elements to describe complex geology in detail. The conductivity distribution is mapped from FD to FE meshes in a volume-averaged sense. To avoid excessive mesh refinement around receivers, we introduce an effective receiver size. Major advantages of FEDO are summarized as follow. First, FEDO automatically generates reliable and economic tetrahedral FE meshes without adaptive meshing or interactive CAD workflows. Second, FEDO produces FE meshes that precisely honor the boundaries of the seafloor topography. Third, FEDO derives multiple sets of FE meshes from a given FD model. Each FE mesh is optimized for a different set of sources and receivers and is fed to a subgroup of processors on a parallel computer. This divide and conquer approach improves the parallel scalability of the FE solution. Both accuracy and effectiveness of FEDO are demonstrated with various CSEM examples.
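The skin-depth-based element sizing mentioned above can be made concrete: for a conductive medium, δ = sqrt(2ρ/(μ₀ω)) ≈ 503·sqrt(ρ/f) metres, which caps the tetrahedron size in the coarse outer regions. A sketch with illustrative marine-CSEM values; the frequency and resistivities are assumptions, not values from the paper:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """EM skin depth: delta = sqrt(2*rho / (mu0 * omega)) ~ 503*sqrt(rho/f) m."""
    return math.sqrt(2.0 * resistivity_ohm_m / (MU0 * 2.0 * math.pi * freq_hz))

# Hypothetical marine CSEM setting: 0.3125 Hz source, seawater and sediment.
for name, rho in [("seawater", 0.3), ("sediment", 1.0)]:
    d = skin_depth_m(rho, 0.3125)
    print(f"{name}: skin depth ~{d:.0f} m -> max element size in outer regions")
```

Tying the maximum element size to the local skin depth is what lets the outer regions stay coarse without accumulating dispersion error.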

  15. Connectivity-Based Segmentation for GPU-Accelerated Mesh Decompression

    Institute of Scientific and Technical Information of China (English)

    Jie-Yi Zhao; Min Tang; Ruo-Feng Tong

    2012-01-01

We present a novel algorithm to partition large 3D meshes for GPU-accelerated decompression. Our formulation focuses on minimizing the replicated vertices between patches, and balancing the numbers of faces of patches for efficient parallel computing. First we generate a topology model of the original mesh and remove vertex positions. Then we assign the centers of patches using geodesic farthest point sampling and cluster the faces according to the geodesic distance to the centers. After the segmentation we swap boundary faces to fix jagged boundaries and store the boundary vertices for whole-mesh preservation. The decompression of each patch runs on a thread of the GPU, and we evaluate its performance on various large benchmarks. In practice, the GPU-based decompression algorithm runs more than 48x faster on an NVIDIA GeForce GTX 580 GPU compared with that on a single CPU core.
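The center-assignment step, geodesic farthest point sampling followed by nearest-center clustering, can be sketched on a graph, with Dijkstra distances standing in for geodesic distances on the face-adjacency graph. The grid graph below is a toy stand-in for a real mesh:

```python
import heapq
from collections import defaultdict

def dijkstra(adj, source):
    """Shortest-path distances from source over a weighted adjacency map."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def farthest_point_centers(adj, nodes, k):
    """Pick k centers: each new center is the node farthest (in graph distance)
    from all centers chosen so far."""
    centers = [nodes[0]]
    best = dijkstra(adj, centers[0])          # distance to nearest center
    for _ in range(k - 1):
        nxt = max(nodes, key=lambda n: best[n])
        centers.append(nxt)
        for n, d in dijkstra(adj, nxt).items():
            best[n] = min(best[n], d)
    return centers

# Toy "mesh": a 6x6 grid graph standing in for a face-adjacency graph.
adj = defaultdict(list)
for i in range(6):
    for j in range(6):
        for di, dj in ((1, 0), (0, 1)):
            if i + di < 6 and j + dj < 6:
                adj[(i, j)].append(((i + di, j + dj), 1.0))
                adj[(i + di, j + dj)].append(((i, j), 1.0))
nodes = [(i, j) for i in range(6) for j in range(6)]
centers = farthest_point_centers(adj, nodes, 4)
print(centers)
```

Clustering each face to its nearest center (the `best` map already holds those distances) then yields the balanced patches that are decompressed one per GPU thread.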

  16. Correlation between shrinkage and infection of implanted synthetic meshes using an animal model of mesh infection.

    OpenAIRE

    Mamy, Laurent; Letouzey, Vincent; Lavigne, Jean-Philippe; Garric, Xavier; Gondry, Jean; Mares, Pierre; De Tayrac, Renaud

    2010-01-01

International audience; INTRODUCTION AND HYPOTHESIS: The aim of this study was to evaluate a link between mesh infection and shrinkage. METHODS: Twenty-eight Wistar rats were implanted with synthetic meshes that were either non-absorbable (polypropylene (PP), n = 14) or absorbable (poly(D,L-lactic acid) (PLA94), n = 14). A validated animal incisional abdominal hernia model of mesh infection was used. Fourteen meshes (n = 7 PLA94 and n = 7 PP meshes) were infected intraoperatively with 1...

  17. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupling with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, including a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the LVCC sampling technique conveniently adapted to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied for rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method. The minor concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were proved to be …; recoveries for real samples were achieved in the range of 95.0-101% and 97.0-104% respectively. It is expected that the portable LVCC sampling technique would pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  18. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupling with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, including a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4(+) strategies for ethylene and SO2 respectively, which made the LVCC sampling technique conveniently adapted to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied for rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method. The minor concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were proved to be …; recoveries for real samples were achieved in the range of 95.0-101% and 97.0-104% respectively. It is expected that the portable LVCC sampling technique would pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Evaluation of the programmed temperature vaporiser for large-volume injection of biological samples in gas chromatography

    NARCIS (Netherlands)

    van Hout, M.W J; de Zeeuw, R.A; Franke, J.P.; de Jong, G.J.

    1999-01-01

The use of a programmed temperature vaporiser (PTV) with a packed liner was evaluated for the injection of large volumes (up to 100 μl) of plasma extracts in a gas chromatograph. Solvent purity, which is essential when large volumes are injected into the GC system, was determined. Special attentio

  20. Confined helium on Lagrange meshes

    CERN Document Server

    Baye, Daniel

    2015-01-01

    The Lagrange-mesh method has the simplicity of a calculation on a mesh and can have the accuracy of a variational method. It is applied to the study of a confined helium atom. Two types of confinement are considered. Soft confinements by potentials are studied in perimetric coordinates. Hard confinement in impenetrable spherical cavities is studied in a system of rescaled perimetric coordinates varying in [0,1] intervals. Energies and mean values of the distances between electrons and between an electron and the helium nucleus are calculated. A high accuracy of 11 to 15 significant figures is obtained with small computing times. Pressures acting on the confined atom are also computed. For sphere radii smaller than 1, their relative accuracies are better than $10^{-10}$. For larger radii up to 10, they progressively decrease to $10^{-3}$, still improving the best literature results.
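The core ingredient of the Lagrange-mesh method, cardinal functions that are 1 at their own mesh point and 0 at every other, can be sketched on a Gauss–Legendre mesh. This is a generic illustration of the mesh and cardinal-function idea, not the paper's perimetric-coordinate calculation:

```python
import numpy as np

# Lagrange-mesh ingredients: Gauss-Legendre points x_i and cardinal functions
# f_i with f_i(x_j) = delta_ij, so a function is represented by its values at
# the mesh points alone.
n = 12
x, _ = np.polynomial.legendre.leggauss(n)  # mesh points (weights unused here)

def cardinal(i, t):
    """Lagrange cardinal function f_i(t) = prod_{j != i} (t - x_j)/(x_i - x_j)."""
    others = np.delete(x, i)
    return np.prod((t - others) / (x[i] - others))

# Cardinal property at the mesh points...
assert abs(cardinal(3, x[3]) - 1.0) < 1e-10
assert abs(cardinal(3, x[7])) < 1e-10

# ...and mesh-point values alone reconstruct a smooth function accurately.
t = 0.37
approx = sum(np.cos(x[i]) * cardinal(i, t) for i in range(n))
print(approx - np.cos(t))  # interpolation error, tiny for smooth functions
```

This is what gives the method "the simplicity of a calculation on a mesh": matrix elements reduce to function values at the points, while smooth functions are still represented to near-variational accuracy.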

  1. The moving mesh code Shadowfax

    CERN Document Server

    Vandenbroucke, Bert

    2016-01-01

    We introduce the moving mesh code Shadowfax, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public License. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare Shadowfax with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  2. The moving mesh code SHADOWFAX

    Science.gov (United States)

    Vandenbroucke, B.; De Rijcke, S.

    2016-07-01

    We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  3. On the flexibility of Kokotsakis meshes

    OpenAIRE

    Karpenkov, Oleg

    2008-01-01

    In this paper we study geometric, algebraic, and computational aspects of flexibility and infinitesimal flexibility of Kokotsakis meshes. A Kokotsakis mesh is a mesh that consists of a face in the middle and a certain band of faces attached to the middle face by its perimeter. In particular any 3x3-mesh made of quadrangles is a Kokotsakis mesh. We express the infinitesimal flexibility condition in terms of Ceva and Menelaus theorems. Further we study semi-algebraic properties of the set of fl...

  4. Image meshing via hierarchical optimization

    Institute of Scientific and Technical Information of China (English)

    Hao XIE; Ruo-feng TONG‡

    2016-01-01

Vector graphic, as a kind of geometric representation of raster images, has many advantages, e.g., definition independence and editing facility. A popular way to convert raster images into vector graphics is image meshing, the aim of which is to find a mesh to represent an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm solving the problem from coarser levels to finer ones, providing initialization for each level with its coarser ascent. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes the problem much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly. Then an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.

  6. Grating droplets with a mesh

    Science.gov (United States)

    Soto, Dan; Le Helloco, Antoine; Clanet, Christophe; Quere, David; Varanasi, Kripa

    2016-11-01

    A drop thrown against a mesh can pass through its holes if it impacts with enough inertia. As a result, although part of the droplet may remain on one side of the sieve, the rest ends up grated through to the other side. This inexpensive method of breaking millimetric droplets into micrometric ones may be of particular interest in a wide variety of applications: enhancing evaporation of droplets launched from the top of an evaporative cooling tower, or preventing drift of pesticides sprayed above crops by increasing their initial size and atomizing them at the very last moment with a mesh. To understand how much liquid is grated, we first study a simpler situation: a drop impacting a plate pierced with a single off-centered hole. Studying the role of natural parameters such as the drop radius and speed, or the hole position, size, and thickness, then allows us to discuss the more general situation of a plate pierced with multiple holes: the mesh.

  7. Dynamic mesh for TCAD modeling with ECORCE

    Science.gov (United States)

    Michez, A.; Boch, J.; Touboul, A.; Saigné, F.

    2016-08-01

    Mesh generation for TCAD modeling is challenging. Because carrier densities can change by several orders of magnitude across thin regions, a significant change in the solution can be observed for two very similar meshes. The mesh must therefore be chosen to minimize this change. To address this issue, a criterion based on polynomial interpolation over adjacent nodes is proposed that accurately adapts the mesh to the gradients of the degrees of freedom (DF). Furthermore, a dynamic mesh that follows changes in the DF in DC and transient mode is a powerful tool for TCAD users. However, in transient modeling, adding nodes to a mesh induces oscillations in the solution that appear as spikes in the current collected at the contacts. This paper proposes two schemes that solve this problem. Examples show that, using these techniques, the dynamic mesh generator of the TCAD tool ECORCE handles semiconductor devices in DC and transient mode.
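    The interpolation-based refinement criterion can be illustrated with a minimal 1D sketch (the function, tolerance, and helper names are hypothetical; this is not ECORCE's actual algorithm): a cell is split wherever the value interpolated from its adjacent nodes deviates too much from the underlying profile, which concentrates nodes where gradients are strong.

```python
import numpy as np

# Minimal 1D sketch of an interpolation-error refinement criterion
# (illustrative only -- not the actual ECORCE algorithm).
def refine(x, f, tol):
    """Insert a midpoint wherever linear interpolation between two
    adjacent nodes misrepresents f, i.e. where gradients are strong."""
    out = [x[0]]
    for a, b in zip(x[:-1], x[1:]):
        mid = 0.5 * (a + b)
        if abs(f(mid) - 0.5 * (f(a) + f(b))) > tol:
            out.append(mid)
        out.append(b)
    return np.array(out)

# A carrier-like profile varying over several orders of magnitude
# within a thin layer around x = 0.5.
f = lambda x: np.exp(-((x - 0.5) / 0.02) ** 2)
x = np.linspace(0.0, 1.0, 41)
for _ in range(4):          # iterate until the mesh settles
    x = refine(x, f, tol=1e-3)
```

    After a few passes the node spacing is finest inside the thin layer and unchanged where the profile is flat, which is the behavior the criterion is meant to produce.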

  8. SHARP/PRONGHORN Interoperability: Mesh Generation

    Energy Technology Data Exchange (ETDEWEB)

    Avery Bingham; Javier Ortensi

    2012-09-01

    Progress toward collaboration between the SHARP and MOOSE computational frameworks has been demonstrated through sharing of mesh generation and ensuring mesh compatibility of both tools with MeshKit. MeshKit was used to build a three-dimensional, full-core very high temperature reactor (VHTR) geometry with 120-degree symmetry, which was used to solve a neutron diffusion critical eigenvalue problem in PRONGHORN. PRONGHORN is an application of MOOSE that is capable of solving coupled neutron diffusion, heat conduction, and homogenized flow problems. The results were compared to a solution found on a 120-degree, reflected, three-dimensional VHTR mesh geometry generated by PRONGHORN. The ability to exchange compatible mesh geometries between the two codes is instrumental for future collaboration and interoperability. The results were found to be in good agreement between the two meshes, thus demonstrating the compatibility of the SHARP and MOOSE frameworks. This outcome makes future collaboration possible.

  9. Cluster parallel rendering based on encoded mesh

    Institute of Scientific and Technical Information of China (English)

    QIN Ai-hong; XIONG Hua; PENG Hao-yu; LIU Zhen; SHI Jiao-ying

    2006-01-01

    Use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), supplying encoded meshes that can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then compress the submeshes and Runs piecewise. With the help of several auxiliary index tables, compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Experimental results showed that, compared with an uncompressed representation, PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement.

  10. Liquid Metering Centrifuge Sticks (LMCS): A Centrifugal Approach to Metering Known Sample Volumes for Colorimetric Solid Phase Extraction (C-SPE)

    Science.gov (United States)

    Gazda, Daniel B.; Schultz, John R.; Clarke, Mark S.

    2007-01-01

    Phase separation is one of the most significant obstacles encountered during the development of analytical methods for water quality monitoring in spacecraft environments. Removing air bubbles from water samples prior to analysis is a routine task on earth; however, in the absence of gravity, this routine task becomes extremely difficult. This paper details the development and initial ground testing of liquid metering centrifuge sticks (LMCS), devices designed to collect and meter a known volume of bubble-free water in microgravity. The LMCS uses centrifugal force to eliminate entrapped air and reproducibly meter liquid sample volumes for analysis with Colorimetric Solid Phase Extraction (C-SPE). C-SPE is a sorption-spectrophotometric platform that is being developed as a potential spacecraft water quality monitoring system. C-SPE utilizes solid phase extraction membranes impregnated with analyte-specific colorimetric reagents to concentrate and complex target analytes in spacecraft water samples. The mass of analyte extracted from the water sample is determined using diffuse reflectance (DR) data collected from the membrane surface and an analyte-specific calibration curve. The analyte concentration can then be calculated from the mass of extracted analyte and the volume of the sample analyzed. Previous flight experiments conducted in microgravity conditions aboard the NASA KC-135 aircraft demonstrated that the inability to collect and meter a known volume of water using a syringe was a limiting factor in the accuracy of C-SPE measurements. Herein, results obtained from ground based C-SPE experiments using ionic silver as a test analyte and either the LMCS or syringes for sample metering are compared to evaluate the performance of the LMCS. These results indicate very good agreement between the two sample metering methods and clearly illustrate the potential of utilizing centrifugal forces to achieve phase separation and metering of water samples in microgravity.
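    The concentration arithmetic described above (mass from a calibration curve, divided by the metered volume) can be sketched directly; the linear calibration constants and readings below are invented for illustration, not C-SPE data:

```python
# Sketch of the C-SPE concentration calculation described above.
# The linear calibration and the readings are hypothetical values.
def cspe_concentration(dr_signal, slope, intercept, volume_ml):
    """Invert a linear diffuse-reflectance calibration to get the
    extracted analyte mass, then divide by the metered sample volume."""
    mass_ug = (dr_signal - intercept) / slope   # micrograms of analyte
    return mass_ug / volume_ml                  # concentration in ug/mL

# Example with made-up silver calibration constants:
conc = cspe_concentration(dr_signal=0.42, slope=0.8, intercept=0.02,
                          volume_ml=1.0)
```

    The volume term in the denominator is exactly why accurate metering (the LMCS's job) limits the accuracy of the whole measurement.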

  11. Prostate specific antigen in a community-based sample of men without prostate cancer: Correlations with prostate volume, age, body mass index, and symptoms of prostatism

    NARCIS (Netherlands)

    J.L.H.R. Bosch (Ruud); W.C.J. Hop (Wim); C.H. Bangma (Chris); W.J. Kirkels (Wim); F.H. Schröder (Fritz)

    1995-01-01

    The correlations of both prostate specific antigen (PSA) levels and prostate specific antigen density (PSAD) with age, prostate volume parameters, body mass index, and the International Prostate Symptom Score (IPSS) were studied in a community‐based population. A sample of 502 men age

  12. A study of toxic emissions from a coal-fired power plant: Niles Station Boiler No. 2. Volume 1, Sampling/results/special topics: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1994-06-01

    This study was one of a group of assessments of toxic emissions from coal-fired power plants, conducted for US Department of Energy, Pittsburgh Energy Technology Center (DOE-PETC) during 1993. The motivation for those assessments was the mandate in the 1990 Clean Air Act Amendments that a study be made of emissions of hazardous air pollutants (HAPs) from electrical utilities. The results of this study will be used by the US Environmental Protection Agency to evaluate whether regulation of HAPs emissions from utilities is warranted. This report is organized in two volumes. Volume 1: Sampling/Results/Special Topics describes the sampling effort conducted as the basis for this study, presents the concentration data on toxic chemicals in the several power plant streams, and reports the results of evaluations and calculations conducted with those data. The Special Topics section of Volume 1 reports on issues such as comparison of sampling methods and vapor/particle distributions of toxic chemicals. Volume 2: Appendices include field sampling data sheets, quality assurance results, and uncertainty calculations. The chemicals measured at Niles Boiler No. 2 were the following: five major and 16 trace elements, including mercury, chromium, cadmium, lead, selenium, arsenic, beryllium, and nickel; acids and corresponding anions (HCl, HF, chloride, fluoride, phosphate, sulfate); ammonia and cyanide; elemental carbon; radionuclides; volatile organic compounds (VOC); semivolatile compounds (SVOC) including polynuclear aromatic hydrocarbons (PAH), and polychlorinated dioxins and furans; and aldehydes.

  13. Effects of pore-scale dispersion, degree of heterogeneity, sampling size, and source volume on the concentration moments of conservative solutes in heterogeneous formations

    Science.gov (United States)

    Daniele Tonina; Alberto Bellin

    2008-01-01

    Pore-scale dispersion (PSD), aquifer heterogeneity, sampling volume, and source size influence solute concentrations of conservative tracers transported in heterogeneous porous formations. In this work, we developed a new set of analytical solutions for the concentration ensemble mean, variance, and coefficient of variation (CV), which consider the effects of all these...

  14. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Directory of Open Access Journals (Sweden)

    Bjoern B. Burckhardt

    2015-01-01

    In the USA and Europe, medicines agencies are driving the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods able to deal with small sample volumes, as trial-related blood loss is strictly limited in children. The widely used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects, which hinder precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the demands of high throughput in a clinical setting. The challenges faced, advances made, and experience gained with solid-phase extraction are presented using the bioanalytical method development and validation for low-volume samples (50 μL serum) as an example. Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method, comprising sample extraction by solid-phase extraction, was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers.
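    Matrix effect and recovery of the kind quoted above are conventionally computed from peak areas of neat standards, post-extraction spiked samples, and pre-extraction spiked samples (the Matuszewski-style scheme); the peak areas below are invented for illustration:

```python
# Hypothetical peak areas illustrating the usual matrix-effect and
# recovery calculations from bioanalytical method validation.
def matrix_effect_pct(post_extraction_spike, neat_standard):
    # < 100% indicates ion suppression, > 100% signal enhancement
    return 100.0 * post_extraction_spike / neat_standard

def recovery_pct(pre_extraction_spike, post_extraction_spike):
    # extraction efficiency of the solid-phase extraction step
    return 100.0 * pre_extraction_spike / post_extraction_spike

me = matrix_effect_pct(9500.0, 10000.0)   # invented peak areas
re = recovery_pct(8550.0, 9500.0)
```

    Guideline compliance is then a matter of checking that these percentages fall within the accepted ranges across concentration levels and matrix lots.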

  15. Development of an Unstructured Mesh Code for Flows About Complete Vehicles

    Science.gov (United States)

    Peraire, Jaime; Gupta, K. K. (Technical Monitor)

    2001-01-01

    This report describes the research work undertaken at the Massachusetts Institute of Technology under NASA Research Grant NAG4-157. The aim of this research is to identify effective algorithms and methodologies for the efficient and routine solution of flow simulations about complete vehicle configurations. For over ten years we have received support from NASA to develop unstructured mesh methods for Computational Fluid Dynamics. As a result of this effort, a methodology based on the use of unstructured adapted meshes of tetrahedra and finite volume flow solvers has been developed. A number of gridding algorithms, flow solvers, and adaptive strategies have been proposed. The most successful algorithms form the basis of the unstructured mesh system FELISA. The FELISA system has been used extensively for the analysis of transonic and hypersonic flows about complete vehicle configurations. The system is highly automatic and allows for the routine aerodynamic analysis of complex configurations starting from CAD data. The code has been parallelized and utilizes efficient solution algorithms. For hypersonic flows, a version of the code that incorporates real gas effects has been produced. The FELISA system is also a component of the STARS aeroservoelastic system developed at NASA Dryden. One of the latest developments before the start of this grant was to extend the system to include viscous effects. This required the development of viscous mesh generators, capable of generating the anisotropic grids required to represent boundary layers, and of viscous flow solvers. We show some sample hypersonic viscous computations using the developed viscous mesh generators and solvers. Although these initial results were encouraging, it became apparent that, in order to develop a fully functional capability for viscous flows, several advances in solution accuracy, robustness, and efficiency were required. In this grant we set out to investigate some novel methodologies that could lead to the

  16. Synthetic Versus Biological Mesh-Related Erosion After Laparoscopic Ventral Mesh Rectopexy: A Systematic Review.

    Science.gov (United States)

    Balla, Andrea; Quaresima, Silvia; Smolarek, Sebastian; Shalaby, Mostafa; Missori, Giulia; Sileri, Pierpaolo

    2017-04-01

    This review reports the incidence of mesh-related erosion after ventral mesh rectopexy to determine whether any difference exists in the erosion rate between synthetic and biological mesh. A systematic search of the MEDLINE and the Ovid databases was conducted to identify suitable articles published between 2004 and 2015. The search strategy capture terms were laparoscopic ventral mesh rectopexy, laparoscopic anterior rectopexy, robotic ventral rectopexy, and robotic anterior rectopexy. Eight studies (3,956 patients) were included in this review. Of those patients, 3,517 underwent laparoscopic ventral rectopexy (LVR) using synthetic mesh and 439 using biological mesh. Sixty-six erosions were observed with synthetic mesh (26 rectal, 32 vaginal, 8 recto-vaginal fistulae) and one (perineal erosion) with biological mesh. The synthetic and the biological mesh-related erosion rates were 1.87% and 0.22%, respectively. The time between rectopexy and diagnosis of mesh erosion ranged from 1.7 to 124 months. No mesh-related mortalities were reported. The incidence of mesh-related erosion after LVR is low and is more common after the placement of synthetic mesh. The use of biological mesh for LVR seems to be a safer option; however, large, multicenter, randomized controlled trials with long follow-up are required if a definitive answer is to be obtained.
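    The quoted erosion rates follow directly from the counts above; a quick arithmetic check (consistent with the reported 1.87% and 0.22% up to rounding):

```python
# Erosion rates implied by the counts reported above.
synthetic_rate = 100.0 * 66 / 3517   # erosions per LVR, synthetic mesh
biologic_rate = 100.0 * 1 / 439      # erosions per LVR, biological mesh
```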

  17. A high order special relativistic hydrodynamic code with space-time adaptive mesh refinement

    CERN Document Server

    Zanotti, Olindo

    2013-01-01

    We present a high order one-step ADER-WENO finite volume scheme with space-time adaptive mesh refinement (AMR) for the solution of the special relativistic hydrodynamics equations. By adopting a local discontinuous Galerkin predictor method, a high order one-step time discretization is obtained, with no need for Runge-Kutta sub-steps. This turns out to be particularly advantageous in combination with space-time adaptive mesh refinement, which has been implemented following a "cell-by-cell" approach. As in existing second order AMR methods, also the present higher order AMR algorithm features time-accurate local time stepping (LTS), where grids on different spatial refinement levels are allowed to use different time steps. We also compare two different Riemann solvers for the computation of the numerical fluxes at the cell interfaces. The new scheme has been validated over a sample of numerical test problems in one, two and three spatial dimensions, exploring its ability in resolving the propagation of relativ...

  18. Development and Verification of Unstructured Adaptive Mesh Technique with Edge Compatibility

    Science.gov (United States)

    Ito, Kei; Kunugi, Tomoaki; Ohshima, Hiroyuki

    In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is the suppression of gas entrainment (GE) phenomena at the gas-liquid interface. The authors have therefore developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of the GE phenomena in JSFR: an unstructured adaptive mesh technique which can dynamically apply fine cells to the region where GE occurs. In this paper, as part of that development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the two-dimensional unstructured adaptive mesh technique succeeds in providing a high-precision solution, even when a poor-quality, distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on a structured mesh with a larger number of cells.
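    Isotropic refinement of the kind described (each flagged cell split into congruent children to limit distortion) can be sketched for axis-aligned quadrilaterals; the data layout is hypothetical, and the connection cells that restore edge compatibility are omitted here:

```python
# Minimal sketch of isotropic quadrilateral refinement: each flagged
# cell (lower-left corner x, y and size h) splits into four congruent
# children, which keeps child cells undistorted.  Connection cells for
# edge compatibility between refined/non-refined cells are not shown.
def refine_isotropic(cells, flagged):
    out = []
    for i, (x, y, h) in enumerate(cells):
        if i in flagged:
            h2 = h / 2.0
            out += [(x, y, h2), (x + h2, y, h2),
                    (x, y + h2, h2), (x + h2, y + h2, h2)]
        else:
            out.append((x, y, h))
    return out

cells = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
refined = refine_isotropic(cells, flagged={0})
```

    Splitting into four equal children (rather than bisecting anisotropically) is what keeps aspect ratios bounded as refinement deepens.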

  19. A longitudinal study of alterations of hippocampal volumes and serum BDNF levels in association to atypical antipsychotics in a sample of first-episode patients with schizophrenia.

    Directory of Open Access Journals (Sweden)

    Emmanouil Rizos

    BACKGROUND: Schizophrenia is associated with structural and functional abnormalities of the hippocampus, which have been suggested to play an important role in the formation and emergence of the schizophrenia syndrome. Patients with schizophrenia exhibit significant bilateral hippocampal volume reduction, and progressive hippocampal volume decrease in first-episode patients with schizophrenia has been shown in many neuroimaging studies. Dysfunction of the neurotrophic system has been implicated in the pathophysiology of schizophrenia, and the initiation of antipsychotic medication alters serum levels of Brain-Derived Neurotrophic Factor (BDNF). However, it is unclear whether treatment with antipsychotics is associated with alterations of hippocampal volume and BDNF levels. METHODS: In the present longitudinal study we investigated the association between serum BDNF levels and hippocampal volumes in a sample of fourteen first-episode drug-naïve patients with schizophrenia (FEP). MRI scans, BDNF and clinical measurements were performed twice: at baseline, before the initiation of antipsychotic treatment, and 8 months later, while the patients were receiving monotherapy with second-generation antipsychotics (SGAs). RESULTS: We found that left hippocampal volume was decreased at follow-up (corrected left HV: t = 2.977, df = 13, p = .011); we also found that larger changes in BDNF levels were associated with larger differences in corrected left hippocampal volume after 8 months of treatment with atypical antipsychotics (Pearson r = 0.597, p = 0.024). CONCLUSIONS: The association of BDNF with hippocampal volume alterations in schizophrenia merits further investigation and replication in larger longitudinal studies.
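    The reported association is a Pearson correlation between two per-patient change scores; the sketch below reproduces that analysis shape on synthetic data (the values are invented, only the sample size matches the study):

```python
import numpy as np

# Sketch of the correlation analysis: Pearson's r between the change
# in serum BDNF and the change in corrected left hippocampal volume.
# Data are synthetic -- only n = 14 mirrors the study.
rng = np.random.default_rng(3)
n = 14
bdnf_change = rng.normal(0.0, 1.0, n)
hv_change = 0.6 * bdnf_change + rng.normal(0.0, 0.5, n)  # built-in link
r = np.corrcoef(bdnf_change, hv_change)[0, 1]
```

    With only 14 patients the confidence interval on r is wide, which is one reason the abstract calls for replication in larger samples.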

  20. Mesh-Based Fourier Imaging for Biological and Security Applications

    Science.gov (United States)

    Hayden, Danielle

    Traditional x-ray imaging provides only low contrast from low atomic number materials, like soft tissue, because small attenuation variations produce very small intensity changes. Higher contrast can be achieved through phase information. The phase change is obtained from the x-ray refracting in a sample, or phase object, due to differences in refractive index, which causes a small angular deviation from the original path. Phase contrast imaging has not been realized in everyday practice due to the requirement for a large spatial coherence width of the x-ray beam, which typically requires sources on the order of 10-50 μm, the use of a grating technique, or synchrotron sources. The grating-based phase imaging method depends upon multiple fine-pitched, expensive gratings and extremely precise alignment. An alternative procedure, based on a technique recently demonstrated by Bennett, is mesh-based phase imaging, which utilizes a single, inexpensive mesh with a coarse pitch. This considerably eases the small source-spot-size requirement, allowing the use of a 150 micron, micro-focus, tungsten anode source. The mesh-based phase imaging setup used to study biomedical and security screening applications consisted of a 123x123 μm stainless steel mesh and a 1200x1600 CCD detector with a pixel size of 22 microns. This mesh-based approach allows for near-real-time phase extraction from the first harmonics in the Fourier domain. With the phase information and the absorption information (collected at the zeroth harmonic), edge-enhanced images of a mouse's skull were optimized and several potentially dangerous liquids and powders were discriminated from water. The mesh-based phase setup resulted in high contrast, good signal-to-noise ratios, and good resolution, verifying the potential utility of this technique for future biomedical imaging and airport security screening.
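    The Fourier-domain phase extraction from the first harmonic can be sketched on synthetic data (all parameters below are illustrative, not the experimental setup): modulate an image with a mesh-period carrier, window the +1st harmonic in the Fourier domain, demodulate the carrier, and take the argument of the result.

```python
import numpy as np

# Sketch of first-harmonic Fourier phase extraction from a
# mesh-modulated image (synthetic data; parameters are illustrative).
N, P = 256, 16                          # image size, mesh period (pixels)
y, x = np.mgrid[0:N, 0:N].astype(float)
phi = 0.3 * np.exp(-((x - 128)**2 + (y - 128)**2) / (2 * 40.0**2))
img = 1.0 + 0.5 * np.cos(2 * np.pi * x / P + phi)   # carrier + phase object

F = np.fft.fft2(img)
k = N // P                              # column index of the +1st harmonic
w = k // 2                              # half-width of the spectral window
H = np.zeros_like(F)
H[:, k - w:k + w + 1] = F[:, k - w:k + w + 1]       # isolate the harmonic
phase = np.angle(np.fft.ifft2(H) * np.exp(-2j * np.pi * k * x / N))
```

    Because the windowing, inverse FFT, and demodulation are all cheap array operations, the phase map is recovered in near real time, which is the property the abstract highlights.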

  1. Moving Mesh Cosmology: Properties of Gas Disks

    CERN Document Server

    Torrey, Paul; Sijacki, Debora; Springel, Volker; Hernquist, Lars

    2011-01-01

    We compare the structural properties of galaxies formed in cosmological simulations using the smoothed particle hydrodynamics (SPH) code GADGET with those using the moving-mesh code AREPO. Both codes employ identical gravity solvers and the same sub-resolution physics but use very different methods to track the hydrodynamic evolution of gas. This permits us to isolate the effects of the hydro solver on the formation and evolution of galactic disks. In a matching sample of GADGET and AREPO haloes we fit simulated gas disks with exponential profiles. We find that the cold gas disks formed using AREPO have systematically larger disk scale lengths and higher specific angular momenta than their GADGET counterparts. The reason for these differences is rooted in the inaccuracies of the SPH solver and calls for a reassessment of commonly adopted feedback prescriptions in cosmological simulations.
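    Fitting simulated gas disks with exponential profiles, as described, reduces to a linear least-squares problem in log space; a small synthetic sketch (the profile data and "true" parameters are invented, not simulation output):

```python
import numpy as np

# Illustrative exponential-disk fit: Sigma(r) = Sigma0 * exp(-r / Rd).
# Taking the log gives a linear least-squares problem for
# (ln Sigma0, 1/Rd), the disk scale length comparison quantity.
rng = np.random.default_rng(1)
r = np.linspace(0.5, 10.0, 40)
Sigma0, Rd = 100.0, 2.5                       # invented "true" parameters
sigma = Sigma0 * np.exp(-r / Rd) * rng.normal(1.0, 0.02, r.size)

A = np.column_stack([np.ones_like(r), -r])    # model: ln S = ln S0 - r/Rd
coef, *_ = np.linalg.lstsq(A, np.log(sigma), rcond=None)
Sigma0_fit, Rd_fit = np.exp(coef[0]), 1.0 / coef[1]
```

    Comparing the fitted `Rd` between matched haloes is the kind of measurement behind the claim that AREPO disks have systematically larger scale lengths.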

  2. Laparoscopic appendicectomy for suspected mesh-induced appendicitis after laparoscopic transabdominal preperitoneal polypropylene mesh inguinal herniorraphy

    Directory of Open Access Journals (Sweden)

    Jennings Jason

    2010-01-01

    Full Text Available Laparoscopic inguinal herniorraphy via a transabdominal preperitoneal (TAPP approach using Polypropylene Mesh (Mesh and staples is an accepted technique. Mesh induces a localised inflammatory response that may extend to, and involve, adjacent abdominal and pelvic viscera such as the appendix. We present an interesting case of suspected Mesh-induced appendicitis treated successfully with laparoscopic appendicectomy, without Mesh removal, in an elderly gentleman who presented with symptoms and signs of acute appendicitis 18 months after laparoscopic inguinal hernia repair. Possible mechanisms for Mesh-induced appendicitis are briefly discussed.

  3. Mechanical behavior and numerical analysis of corrugated wire mesh laminates

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jeong Ho; Shankar, Krishna; Tahtali, Murat [UNSW, ADFA, Canberra (Australia)

    2012-01-15

    The objective is to show a possibility of corrugated wire mesh laminate (CWML) structure for bone application. CWML is a part of open-cell structures with low density and high strength built with bonded mesh layers. Specimens of CWML made of 316 stainless steel woven meshes with 0.22 mm wire diameter and 0.95 mm mesh aperture, bonded by transit liquid phase (TLP) at low temperatures, were fabricated and tested under quasi-static conditions to determine their compressive behavior with varying numbers of layers of the sample. The finite element software was used to model the CWML and studied their response to mechanical loading. Then, the numerical model was confirmed by the tested sample. Consequently, CWML specimens were reasonably matched with the human tibia bone ranged over apparent density from 0.05 to 0.08 g/cm{sup 3} in Young's modulus and from 0.05 to 0.11 g/cm{sup 3} in compressive yield strength. The CWML model can have the potential for bone application.

  4. An effective quadrilateral mesh adaptation

    Institute of Scientific and Technical Information of China (English)

    KHATTRI Sanjay Kumar

    2006-01-01

    Accuracy of a simulation strongly depends on the grid quality. Here, quality means orthogonality at the boundaries and quasi-orthogonality within the critical regions, smoothness, bounded aspect ratios and solution adaptive behaviour. It is not recommended to refine the parts of the domain where the solution shows little variation. It is desired to concentrate grid points and cells in the part of the domain where the solution shows strong gradients or variations. We present a simple, effective and computationally efficient approach for quadrilateral mesh adaptation. Several numerical examples are presented for supporting our claim.

  5. Anisotropic Diffusion in Mesh-Free Numerical Magnetohydrodynamics

    CERN Document Server

    Hopkins, Philip F

    2016-01-01

    We extend recently-developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect), and turbulent 'eddy diffusion.' We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV) as well as smoothed-particle hydrodynamics (SPH). We show the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behavior even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators ...
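    The anisotropic diffusion equation referred to above, ∂c/∂t = ∇·(K∇c) with a tensor K, can be sketched with a simple periodic finite-difference update (a toy scheme for intuition only; it is not GIZMO's MFM/MFV or SPH discretization):

```python
import numpy as np

# Toy periodic finite-difference anisotropic diffusion,
#   dc/dt = div( K grad c ),  K = [[Kxx, Kxy], [Kxy, Kyy]].
def d(f, axis, h):                       # centered difference, periodic
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

N, h, dt = 64, 1.0, 0.1
Kxx, Kyy, Kxy = 1.0, 0.2, 0.0            # diffuse faster along x
yy, xx = np.mgrid[0:N, 0:N]
c = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 20.0)   # initial blob
m0 = c.sum()

for _ in range(50):
    gx, gy = d(c, 1, h), d(c, 0, h)
    Fx = -(Kxx * gx + Kxy * gy)          # anisotropic flux components
    Fy = -(Kxy * gx + Kyy * gy)
    c -= dt * (d(Fx, 1, h) + d(Fy, 0, h))   # conservative flux form
```

    Because the divergence is taken of an explicit flux, the total of `c` is conserved to rounding error, and the blob spreads faster along x than y; capturing exactly this flux anisotropy on irregular particle arrangements is what the meshless schemes above are designed for.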

  6. Proceedings of the 20th International Meshing Roundtable

    CERN Document Server

    2012-01-01

    This volume contains the articles presented at the 20th International Meshing Roundtable (IMR), organized in part by Sandia National Laboratories and held in Paris, France on Oct 23-26, 2011. This was the first year the IMR was held outside United States territory. Other sponsors of the 20th IMR are Systematic Paris Region Systems & ICT Cluster, AIAA, NAFEMS, CEA, and NSF. Sandia National Laboratories started the first IMR in 1992, and the conference has been held annually since. Each year the IMR brings together researchers, developers, and application experts from a variety of disciplines to present and discuss ideas on mesh generation and related topics. The topics covered by the IMR have applications in numerical analysis, computational geometry, and computer graphics, as well as other areas, and the presentations describe novel work ranging from theory to application.

  7. A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics

    CERN Document Server

    Mocz, Philip; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars

    2016-01-01

    We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine-precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code Arepo. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this co...
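    The divergence-preserving property of constrained transport is easiest to see on a classic staggered Cartesian grid (a toy illustration; the paper's scheme is unstaggered and works on a moving mesh): advancing face-centered B from corner-centered EMFs leaves the discrete divergence unchanged to machine precision.

```python
import numpy as np

# Toy 2D staggered-grid CT update on a periodic grid.  Bx[i, j] lives
# at x-faces (i+1/2, j), By[i, j] at y-faces (i, j+1/2), and the EMF
# Ez[i, j] at cell corners (i+1/2, j+1/2).
rng = np.random.default_rng(2)
N, dx, dy, dt = 32, 1.0, 1.0, 0.1
Bx = np.zeros((N, N))
By = np.zeros((N, N))

def div(Bx, By):
    """Discrete divergence at cell centers."""
    return ((Bx - np.roll(Bx, 1, 0)) / dx +
            (By - np.roll(By, 1, 1)) / dy)

for _ in range(10):
    Ez = rng.normal(size=(N, N))               # arbitrary EMF field
    Bx -= dt * (Ez - np.roll(Ez, 1, 1)) / dy   # dBx/dt = -dEz/dy
    By += dt * (Ez - np.roll(Ez, 1, 0)) / dx   # dBy/dt = +dEz/dx
```

    The per-cell divergence update telescopes to zero for any EMF, which is the defining feature of CT that the divergence-cleaning (Powell) approach only approximates.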

  8. Kinetic mesh-free method for flutter prediction in turbomachines

    Indian Academy of Sciences (India)

    V Ramesh; S M Deshpande

    2014-02-01

    The present paper deals with the development and application of a kinetic theory-based mesh-free method for unsteady flows. The method has the capability to compute on any arbitrary distribution of moving nodes. In general, computation of unsteady flow past multiple moving boundaries using conventional finite volume solvers is quite involved. Such solvers invariably require repeated grid generation or an efficient grid movement strategy, and this becomes more difficult when there are many moving boundaries. In the present work, we propose a simple and effective node movement strategy for the mesh-free solver, which tackles unsteady problems with moving boundaries in a much easier way. Using the present method we have computed the unsteady flow in oscillating turbomachinery blades. A simple energy method has been used to predict flutter from the unsteady computations. The results compare well with the available experiments and other computations.
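    The energy method mentioned can be sketched directly: integrate the work done by the unsteady aerodynamic force on the blade over one oscillation cycle; positive net work feeds the motion (flutter), negative work damps it. All values below are illustrative, not from the paper.

```python
import numpy as np

# Energy-method sketch: aerodynamic work on a plunging blade over one
# cycle.  Amplitude, frequency, force, and phase lag are invented.
A, omega, F0, lag = 0.01, 2 * np.pi * 50.0, 3.0, 0.3
t = np.linspace(0.0, 2 * np.pi / omega, 20001)
h = A * np.sin(omega * t)                 # plunge displacement
F = F0 * np.sin(omega * t + lag)          # force leading the motion
fv = F * np.gradient(h, t)                # instantaneous power F * dh/dt
W = float(((fv[:-1] + fv[1:]) * 0.5).sum() * (t[1] - t[0]))  # trapezoid
unstable = W > 0.0                        # positive work per cycle => flutter
```

    For this sinusoidal model the integral has the closed form W = pi * F0 * A * sin(lag), so any phase lead of the force over the motion (0 < lag < pi) pumps energy into the blade.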

  9. Smooth Rotation Enhanced As-Rigid-As-Possible Mesh Animation.

    Science.gov (United States)

    Levi, Zohar; Gotsman, Craig

    2015-02-01

    In recent years, the As-Rigid-As-Possible (ARAP) shape deformation and shape interpolation techniques gained popularity, and the ARAP energy was successfully used in other applications as well. We improve the ARAP animation technique in two aspects. First, we introduce a new ARAP-type energy, named SR-ARAP, which has a consistent discretization for surfaces (triangle meshes). The quality of our new surface deformation scheme competes with the quality of the volumetric ARAP deformation (for tetrahedral meshes). Second, we propose a new ARAP shape interpolation method that is superior to prior art that is also based on the ARAP energy. This method is compatible with our new SR-ARAP energy, as well as with the ARAP volume energy.
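
At the heart of any ARAP-type energy is a per-cell local step: find the best-fit rotation between rest-pose and deformed edge vectors, then measure the residual. A minimal sketch in Python (plain ARAP via the Kabsch/Procrustes solution; our own illustration, not the paper's SR-ARAP discretization):

```python
import numpy as np

def best_fit_rotation(P, Q):
    """Best-fit rotation R minimizing sum ||R p_i - q_i||^2 over cell edge
    vectors (Kabsch/Procrustes).  P, Q: (n, d) rest and deformed edge vectors."""
    H = P.T @ Q                                  # d x d covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    return Vt.T @ D @ U.T

def arap_cell_energy(P, Q):
    """ARAP energy of one cell: squared deviation of the deformed edges
    from a rigid rotation of the rest edges."""
    R = best_fit_rotation(P, Q)
    return float(np.sum((Q - P @ R.T) ** 2))
```

For a rigidly rotated cell the energy vanishes; SR-ARAP augments this per-cell term with a penalty that couples neighboring rotations so they vary smoothly over the surface.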

  10. Block-Structured Adaptive Mesh Refinement Algorithms for Vlasov Simulation

    CERN Document Server

    Hittinger, J A F

    2012-01-01

    Direct discretization of continuum kinetic equations, like the Vlasov equation, is under-utilized because the distribution function generally exists in a high-dimensional (>3D) space and computational cost increases geometrically with dimension. We propose to use high-order finite-volume techniques with block-structured adaptive mesh refinement (AMR) to reduce the computational cost. The primary complication comes from a solution state comprised of variables of different dimensions. We develop the algorithms required to extend standard single-dimension block structured AMR to the multi-dimension case. Specifically, algorithms for reduction and injection operations that transfer data between mesh hierarchies of different dimensions are explained in detail. In addition, modifications to the basic AMR algorithm that enable the use of high-order spatial and temporal discretizations are discussed. Preliminary results for a standard 1D+1V Vlasov-Poisson test problem are presented. Results indicate that there is po...

  11. Bluetooth Low Energy Mesh Networks: A Survey.

    Science.gov (United States)

    Darroudi, Seyed Mahdi; Gomez, Carles

    2017-06-22

    Bluetooth Low Energy (BLE) has gained significant momentum. However, the original design of BLE focused on star topology networking, which limits network coverage range and precludes end-to-end path diversity. In contrast, other competing technologies overcome such constraints by supporting the mesh network topology. For these reasons, academia, industry, and standards development organizations have been designing solutions to enable BLE mesh networks. Nevertheless, the literature lacks a consolidated view on this emerging area. This paper comprehensively surveys state-of-the-art BLE mesh networking. We first provide a taxonomy of BLE mesh network solutions. We then review the solutions, describing the variety of approaches that leverage existing BLE functionality to enable BLE mesh networks. We identify crucial aspects of BLE mesh network solutions and discuss their advantages and drawbacks. Finally, we highlight currently open issues.

  12. Mesh networking optimized for robotic teleoperation

    Science.gov (United States)

    Hart, Abraham; Pezeshkian, Narek; Nguyen, Hoa

    2012-06-01

    Mesh networks for robot teleoperation pose different challenges than those associated with traditional mesh networks. Unmanned ground vehicles (UGVs) are mobile and operate in constantly changing and uncontrollable environments. Building a mesh network to work well under these harsh conditions presents a unique challenge. The Manually Deployed Communication Relay (MDCR) mesh networking system extends the range of and provides non-line-of-sight (NLOS) communications for tactical and explosive ordnance disposal (EOD) robots currently in theater. It supports multiple mesh nodes, robots acting as nodes, and works with all Internet Protocol (IP)-based robotic systems. Under MDCR, the performance of different routing protocols and route selection metrics were compared resulting in a modified version of the Babel mesh networking protocol. This paper discusses this and other topics encountered during development and testing of the MDCR system.

  13. MPDATA error estimator for mesh adaptivity

    Science.gov (United States)

    Szmelter, Joanna; Smolarkiewicz, Piotr K.

    2006-04-01

    In multidimensional positive definite advection transport algorithm (MPDATA) the leading error as well as the first- and second-order solutions are known explicitly by design. This property is employed to construct refinement indicators for mesh adaptivity. Recent progress with the edge-based formulation of MPDATA facilitates the use of the method in an unstructured-mesh environment. In particular, the edge-based data structure allows for flow solvers to operate on arbitrary hybrid meshes, thereby lending itself to implementations of various mesh adaptivity techniques. A novel unstructured-mesh nonoscillatory forward-in-time (NFT) solver for compressible Euler equations is used to illustrate the benefits of adaptive remeshing as well as mesh movement and enrichment for the efficacy of MPDATA-based flow solvers. Validation against benchmark test cases demonstrates robustness and accuracy of the approach.
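
MPDATA's defining feature, exploited above to build refinement indicators, is that the leading truncation error of the first-order upwind pass is known in closed form, so a second pass can subtract it via an antidiffusive pseudo-velocity. A minimal 1D, constant-velocity, periodic sketch (our own illustration, not the paper's edge-based unstructured formulation):

```python
import numpy as np

def donor_cell(psi, C):
    """One first-order upwind (donor-cell) step; C holds Courant numbers at
    cell faces i+1/2 (scalar or array), periodic boundaries."""
    flux = np.maximum(C, 0.0) * psi + np.minimum(C, 0.0) * np.roll(psi, -1)
    return psi - (flux - np.roll(flux, 1))

def mpdata_step(psi, C, eps=1e-15):
    """Two-pass MPDATA: upwind pass, then a corrective upwind pass driven by
    the antidiffusive pseudo-velocity derived from the known leading error."""
    psi1 = donor_cell(psi, C)
    num = np.roll(psi1, -1) - psi1                 # gradient across face i+1/2
    den = np.roll(psi1, -1) + psi1 + eps
    V = (np.abs(C) - C ** 2) * num / den           # antidiffusive pseudo-velocity
    return donor_cell(psi1, V)
```

Because both passes are flux-form donor-cell updates, the scheme stays conservative and, for 0 <= C <= 1 and nonnegative fields, sign-preserving.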

  14. Bluetooth Low Energy Mesh Networks: A Survey

    Science.gov (United States)

    Darroudi, Seyed Mahdi; Gomez, Carles

    2017-01-01

    Bluetooth Low Energy (BLE) has gained significant momentum. However, the original design of BLE focused on star topology networking, which limits network coverage range and precludes end-to-end path diversity. In contrast, other competing technologies overcome such constraints by supporting the mesh network topology. For these reasons, academia, industry, and standards development organizations have been designing solutions to enable BLE mesh networks. Nevertheless, the literature lacks a consolidated view on this emerging area. This paper comprehensively surveys state-of-the-art BLE mesh networking. We first provide a taxonomy of BLE mesh network solutions. We then review the solutions, describing the variety of approaches that leverage existing BLE functionality to enable BLE mesh networks. We identify crucial aspects of BLE mesh network solutions and discuss their advantages and drawbacks. Finally, we highlight currently open issues. PMID:28640183

  15. Development and acceleration of unstructured mesh-based cfd solver

    Science.gov (United States)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low Mach number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of the solution on GPUs with respect to solution on central processing units (CPUs) is compared using different meshes and different methods of distributing the input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
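
The explicit three-step Runge-Kutta marching mentioned above reduces to a few lines for a generic residual operator. The coefficients below are the common SSP-RK3 set, which is an assumption, since the abstract does not state the exact variant used:

```python
import math

def rk3_step(u, dt, residual):
    """One explicit three-stage Runge-Kutta step for du/dt = residual(u),
    using SSP-RK3 coefficients (assumed; the paper's exact variant is not
    given in the abstract)."""
    u1 = u + dt * residual(u)                               # stage 1: Euler predictor
    u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1))         # stage 2
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * residual(u2))   # stage 3
```

Driving du/dt = -u for ten steps of dt = 0.1 reproduces exp(-1) to within 1e-4, consistent with third-order accuracy.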

  16. Aeroallergen analyses and their clinical relevance. II. Sampling by high-volume airsampler with immunochemical quantification versus Burkard pollen trap sampling with morphologic quantification

    DEFF Research Database (Denmark)

    Johnsen, C R; Weeke, E R; Nielsen, J

    1992-01-01

    … was analysed, and close correlations between the two sampling techniques were found (rs 0.5-0.8, p …). Pollen counts and immunochemical estimation were compared with the symptom score recordings of allergic persons for the birch season 1989 and for the grass seasons 1986, 1988, and 1989. A close correlation was found for both sampling techniques for the grass seasons in 1986 and 1989 (rs 0.51-0.61, p …)

  17. Unstructured Polyhedral Mesh Thermal Radiation Diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, T.S.; Zika, M.R.; Madsen, N.K.

    2000-07-27

    Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.

  18. Delaunay triangulation and computational fluid dynamics meshes

    Science.gov (United States)

    Posenau, Mary-Anne K.; Mount, David M.

    1992-01-01

    In aerospace computational fluid dynamics (CFD) calculations, the Delaunay triangulation of suitable quadrilateral meshes can lead to unsuitable triangulated meshes. Here, we present case studies that illustrate the limitations of using structured grid generation methods, which produce points in a curvilinear coordinate system, for subsequent triangulation in CFD applications. We discuss conditions under which meshes of quadrilateral elements may not produce a Delaunay triangulation suitable for CFD calculations, particularly with regard to high aspect ratio, skewed quadrilateral elements.
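
The suitability test behind Delaunay triangulation is the in-circle predicate: an edge is locally Delaunay when the opposite vertex of the neighboring triangle falls outside the circumcircle. On high-aspect-ratio quadrilaterals this criterion can force the diagonal that yields skinny triangles, which is the failure mode discussed above. A small sketch of the predicate (illustrative only):

```python
import numpy as np

def incircle(a, b, c, d):
    """Returns > 0 iff d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c); < 0 outside; ~0 on the circle."""
    rows = []
    for p in (a, b, c):
        dx, dy = p[0] - d[0], p[1] - d[1]
        rows.append([dx, dy, dx * dx + dy * dy])   # lifted-paraboloid row
    return float(np.linalg.det(np.array(rows)))
```

For a convex quadrilateral abcd, choosing diagonal ac exactly when incircle(a, b, c, d) <= 0 yields the Delaunay (max-min-angle) triangulation of the quad, regardless of how skewed the resulting triangles are.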

  19. [Determination of trace and ultra-trace level bromate in water by large volume sample injection with enrichment column for on-line preconcentration coupled with ion chromatography].

    Science.gov (United States)

    Liu, Jing; He, Qingqing; Yang, Lili; Hu, Enyu; Wang, Meifei

    2015-10-01

    A method for the determination of trace and ultra-trace level bromate in water by ion chromatography with large volume sample injection for on-line preconcentration was established. A high capacity Dionex IonPac AG23 guard column was simply used as the enrichment column instead of the loop for the preconcentration of bromate. High purity KOH solution used as eluent for gradient elution was produced on-line by an eluent generator automatically. The results showed a good linear relationship for bromate in the range of 0.05-51.2 μg/L (r ≥ 0.999 5), and the method detection limit was 0.01 μg/L. Compared with conventional sample injection, the injection volume was up to 5 mL, and the enrichment factor of this method was about 240 times. The method was successfully applied to several real samples of pure water purchased in the supermarket, and the recoveries of bromate were between 90%-100% with RSDs (n = 6) of 2.1%-6.4% at two spiked levels. The method is simple, requires no pretreatment, and offers high accuracy and precision; the preconcentration is achieved by large volume sample injection. It is suitable for the analysis of trace and ultra-trace level bromate.
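
The reported figures of merit (linearity with r ≥ 0.999 5 and spike recoveries of 90-100%) follow from routine calibration arithmetic, sketched here on fabricated numbers (the data below are illustrative, not the paper's measurements):

```python
import numpy as np

def fit_calibration(conc, signal):
    """Least-squares calibration line signal = slope*conc + intercept;
    returns (slope, intercept, correlation coefficient r)."""
    slope, intercept = np.polyfit(conc, signal, 1)
    r = np.corrcoef(conc, signal)[0, 1]
    return slope, intercept, r

def spike_recovery(measured, baseline, spiked):
    """Percent recovery: fraction of the spiked concentration actually found."""
    return 100.0 * (measured - baseline) / spiked
```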

  20. Design of electrospinning mesh devices

    Science.gov (United States)

    Russo, Giuseppina; Peters, Gerrit W. M.; Solberg, Ramon H. M.; Vittoria, Vittoria

    2012-07-01

    This paper describes the features of new membranes that can act as local biomedical devices owing to their peculiar shape in the form of a mesh structure. These materials are designed to reduce local inflammation significantly and improve tissue regeneration. Lamellar hydrotalcite loaded with diclofenac sodium (HTLc-DIK) was homogeneously dispersed inside a polymeric matrix of poly-caprolactone (PCL) to manufacture membranes by the electrospinning technique. The experimental procedure and the criteria employed have proved extremely effective at broadening the potential applications. The technique is well suited to manufacturing polymeric fibers with diameters in the nano-to-micro range. In this work a dedicated collector based on a proprietary technology of IME Technologies and Eindhoven University of Technology (TU/e) was used, which made it possible to obtain devices with the macro shape of a 3D mesh. Atomic Force Microscopy (AFM) highlights a very interesting texture of the electrospun fibers: they show a lamellar morphology that is only slightly modified by the inclusion of the clay embedded in the devices to control drug release.

  1. Conformal refinement of unstructured quadrilateral meshes

    Energy Technology Data Exchange (ETDEWEB)

    Garimella, Rao [Los Alamos National Laboratory

    2009-01-01

    We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.

  2. MOAB : a mesh-oriented database.

    Energy Technology Data Exchange (ETDEWEB)

    Tautges, Timothy James; Ernst, Corey; Stimpson, Clint; Meyers, Ray J.; Merkley, Karl

    2004-04-01

    A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed-graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets. Tags are a mechanism for attaching data to individual entities and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application
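
The MOAB data model described above (opaque handles, entity sets, named tags) can be mimicked in a few lines to make the concepts concrete; this is an illustrative sketch only, not the real MOAB API:

```python
class MeshDB:
    """Minimal sketch of a MOAB-style mesh database: entities addressed by
    opaque integer handles, sets grouping entities, and named tags attaching
    data to handles.  (Illustrative; not the actual MOAB interface.)"""

    def __init__(self):
        self._next = 1
        self.entities = {}   # handle -> ('vertex', coords) or ('element', connectivity)
        self.sets = {}       # handle -> set of member handles
        self.tags = {}       # tag name -> {handle: value}

    def _new_handle(self):
        h = self._next
        self._next += 1
        return h

    def create_vertex(self, xyz):
        h = self._new_handle()
        self.entities[h] = ('vertex', tuple(xyz))
        return h

    def create_element(self, connectivity):
        h = self._new_handle()
        self.entities[h] = ('element', tuple(connectivity))
        return h

    def create_set(self, members=()):
        h = self._new_handle()
        self.sets[h] = set(members)
        return h

    def tag_set(self, name, handle, value):
        self.tags.setdefault(name, {})[handle] = value

    def tag_get(self, name, handle):
        return self.tags[name][handle]
```

Because callers hold only handles, the underlying storage of an entity can change (e.g. to chunked arrays, as MOAB does for efficiency) without invalidating references.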

  3. Preparation of cylindrical Bi-2223 sintered bulk composed with nickel meshes for current lead

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizawa, S [Department of Environmental Systems, Meisei University, 2-1-1, Hodokubo, Hino, Tokyo 191-8506 (Japan); Sakamoto, M [Department of Electrical Engineering, Kogakuin University, 2665-1, Nakano, Hachioji, Tokyo 192-0015 (Japan); Hishinuma, Y [Fusion Engineering Research Center, National Institute for Fusion Science, 322-6, Oroshi, Toki, Gifu 509-5202 (Japan); Nishimura, A [Fusion Engineering Research Center, National Institute for Fusion Science, 322-6, Oroshi, Toki, Gifu 509-5202 (Japan); Yamazaki, S [Department of Electrical Engineering, Kogakuin University, 2665-1, Nakano, Hachioji, Tokyo 192-0015 (Japan); Kojima, S [Department of Electrical Engineering, Kogakuin University, 2665-1, Nakano, Hachioji, Tokyo 192-0015 (Japan)

    2006-06-01

    In order to improve the superconducting and mechanical properties of Bi-2223 sintered bulk, Ni wire meshes were added to the bulk. The mesh density was 18 x 18 meshes/cm{sup 2}, using Ni wires of 0.25 mm in diameter plated with Ag to a thickness of 0.03 mm. We prepared the cylindrical sintered bulk, 27 mm in outer diameter, 2 mm in thickness, and 50 mm in length, using a cold isostatic pressing (CIP) method. The samples were sintered at 845 deg. C for 50 h. The critical current density (J{sub c}) of the samples was estimated at 77 K under self-field. Compositing with the Ni meshes improved the J{sub c} property compared with the samples without the mesh. The J{sub c} increase of the Bi-2223 bulk upon adding the Ni meshes is attributed to the formation of Bi-2223 plate-like grains in the vicinity of the Ag-plated Ni wires.

  4. An arbitrary boundary triangle mesh generation method for multi-modality imaging

    Science.gov (United States)

    Zhang, Xuanxuan; Deng, Yong; Gong, Hui; Meng, Yuanzheng; Yang, Xiaoquan; Luo, Qingming

    2012-03-01

    Low resolution and ill-posedness are the major challenges in diffuse optical tomography (DOT) and fluorescence molecular tomography (FMT). Recently, multi-modality imaging technology that combines micro-computed tomography (micro-CT) with DOT/FMT has been developed to improve resolution and mitigate ill-posedness. To take advantage of the fine a priori anatomical maps obtained from micro-CT, we present an arbitrary-boundary triangle mesh generation method for FMT/DOT/micro-CT multi-modality imaging. A planar straight line graph (PSLG) based on the micro-CT image is obtained by an adaptive boundary sampling algorithm. The subregions of the mesh are accurately matched with anatomical structures by a two-step solution: first, the triangles and nodes are labeled during mesh refinement, and then a revising algorithm modifies the mesh of each subregion. Triangle meshes based on a regular model and on a micro-CT image are generated respectively. The results show that the subregions of the triangle meshes match the anatomical structures accurately and that the meshes are of good quality. This provides an arbitrary-boundary triangle mesh generation method able to incorporate fine a priori anatomical information into DOT/FMT reconstructions.

  5. A Full Automatic Device for Sampling Small Solution Volumes in Photometric Titration Procedure Based on Multicommuted Flow System

    Science.gov (United States)

    Borges, Sivanildo S.; Vieira, Gláucia P.; Reis, Boaventura F.

    2007-01-01

    In this work, an automatic device to deliver titrant solution into a titration chamber, with the ability to determine the dispensed volume of solution with good precision independently of both elapsed time and flow rate, is proposed. A glass tube maintained in the vertical position was employed as a container for the titrant solution. Electronic devices were coupled to the glass tube in order to control its filling with titrant solution, as well as the stepwise delivery of solution into the titration chamber. Detection of the titration end point was performed with a photometer designed using a green LED (λ=545 nm) and a phototransistor. The titration flow system comprised three-way solenoid valves, assembled so that the solution container loading and the titration run were carried out automatically. The device for solution volume determination was designed using an infrared LED (λ=930 nm) and a photodiode. When the solution volume delivered from the proposed device was within the range of 5 to 105 μl, a linear relationship (R = 0.999) between the delivered volumes and the generated potential difference was achieved. The usefulness of the proposed device was demonstrated by performing photometric titration of hydrochloric acid solution with a standardized sodium hydroxide solution, using phenolphthalein as an external indicator. The results showed a relative standard deviation of 1.5%. PMID:18317510

  6. Evaluation of sampling plans for in-service inspection of steam generator tubes. Volume 2, Comprehensive analytical and Monte Carlo simulation results for several sampling plans

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, R.J.; Heasler, P.G.; Baird, D.B. [Pacific Northwest Lab., Richland, WA (United States)

    1994-02-01

    This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions.
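
The Monte Carlo side of such an evaluation amounts to repeatedly drawing a sample of tubes and applying an imperfect probability of detection (POD) to each inspected defective tube. A bare-bones sketch (the binary POD model and all numbers are our assumptions; the report uses flaw-size-dependent detection and sizing models):

```python
import random

def detection_probability(n_tubes, n_defective, sample_size, pod,
                          trials=20000, seed=1):
    """Monte Carlo estimate of the chance that inspecting a random sample of
    tubes flags at least one defective tube, with a per-tube probability of
    detection (POD).  Tubes 0..n_defective-1 are taken to be the defective ones."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(range(n_tubes), sample_size)
        # each defective tube in the sample is detected with probability `pod`
        if any(t < n_defective and rng.random() < pod for t in sample):
            hits += 1
    return hits / trials
```

Sweeping `sample_size` and `pod` over plausible ranges reproduces the qualitative trade-off studied in the report: larger samples compensate for imperfect inspection reliability.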

  7. Determination of Atto- to Femtogram Levels of Americium and Curium Isotopes in Large-Volume Urine Samples by Compact Accelerator Mass Spectrometry.

    Science.gov (United States)

    Dai, Xiongxin; Christl, Marcus; Kramer-Tremblay, Sheila; Synal, Hans-Arno

    2016-03-01

    Ultralow level analysis of actinides in urine samples may be required for dose assessment in the event of internal exposures to these radionuclides at nuclear facilities and nuclear power plants. A new bioassay method for analysis of sub-femtogram levels of Am and Cm in large-volume urine samples was developed. Americium and curium were co-precipitated with hydrous titanium oxide from the urine matrix and purified by column chromatography separation. After target preparation using mixed titanium/iron oxides, the final sample was measured by compact accelerator mass spectrometry. Urine samples spiked with known quantities of Am and Cm isotopes in the range of attogram to femtogram levels were measured for method evaluation. The results are in good agreement with the expected values, demonstrating the feasibility of compact accelerator mass spectrometry (AMS) for the determination of minor actinides at the levels of attogram/liter in urine samples to meet stringent sensitivity requirements for internal dosimetry assessment.

  8. Changes in brain volume and cognition in a randomized trial of exercise and social interaction in a community-based sample of non-demented Chinese elders.

    Science.gov (United States)

    Mortimer, James A; Ding, Ding; Borenstein, Amy R; DeCarli, Charles; Guo, Qihao; Wu, Yougui; Zhao, Qianhua; Chu, Shugang

    2012-01-01

    Physical exercise has been shown to increase brain volume and improve cognition in randomized trials of non-demented elderly. Although greater social engagement was found to reduce dementia risk in observational studies, randomized trials of social interventions have not been reported. A representative sample of 120 elderly from Shanghai, China was randomized to four groups (Tai Chi, Walking, Social Interaction, No Intervention) for 40 weeks. Two MRIs were obtained, one before the intervention period, the other after. A neuropsychological battery was administered at baseline, 20 weeks, and 40 weeks. Comparisons of changes in brain volume in the intervention groups with the No Intervention group were assessed by t-tests. Time-intervention group interactions for neuropsychological measures were evaluated with repeated-measures mixed models. Compared to the No Intervention group, significant increases in brain volume were seen in the Tai Chi and Social Interaction groups (p …), with increases in brain volume and improvements in cognition from a largely non-aerobic exercise (Tai Chi). In addition, intellectual stimulation through social interaction was associated with increases in brain volume as well as with some cognitive improvements.

  9. Total and regional brain volumes in a population-based normative sample from 4 to 18 years: the NIH MRI Study of Normal Brain Development.

    Science.gov (United States)

    2012-01-01

    Using a population-based sampling strategy, the National Institutes of Health (NIH) Magnetic Resonance Imaging Study of Normal Brain Development compiled a longitudinal normative reference database of neuroimaging and correlated clinical/behavioral data from a demographically representative sample of healthy children and adolescents aged newborn through early adulthood. The present paper reports brain volume data for 325 children, ages 4.5-18 years, from the first cross-sectional time point. Measures included volumes of whole-brain gray matter (GM) and white matter (WM), left and right lateral ventricles, frontal, temporal, parietal and occipital lobe GM and WM, subcortical GM (thalamus, caudate, putamen, and globus pallidus), cerebellum, and brainstem. Associations with cross-sectional age, sex, family income, parental education, and body mass index (BMI) were evaluated. Key observations are: 1) age-related decreases in lobar GM most prominent in parietal and occipital cortex; 2) age-related increases in lobar WM, greatest in occipital, followed by the temporal lobe; 3) age-related trajectories predominantly curvilinear in females, but linear in males; and 4) small systematic associations of brain tissue volumes with BMI but not with IQ, family income, or parental education. These findings constitute a normative reference on regional brain volumes in children and adolescents.

  10. Generation of hybrid meshes for the simulation of petroleum reservoirs; Generation de maillages hybrides pour la simulation de reservoirs petroliers

    Energy Technology Data Exchange (ETDEWEB)

    Balaven-Clermidy, S.

    2001-12-01

    Oil reservoir simulations study multiphase flows in porous media. These flows are described and evaluated through numerical schemes on a discretization of the reservoir domain. In this thesis, we were interested in this spatial discretization, and a new kind of hybrid mesh is proposed in which the radial nature of flows in the vicinity of wells is directly taken into account in the geometry. Our modular approach describes wells and their drainage areas through radial circular meshes. These well meshes are inserted into a structured reservoir mesh (a Corner Point Geometry mesh) made up of hexahedral cells. Finally, in order to generate a globally conforming mesh, proper connections are realized between the different kinds of meshes through unstructured transition meshes. To compute these transition meshes, which must be acceptable in terms of finite volume methods, an automatic method based on power diagrams has been developed. Our approach can deal with a homogeneous anisotropic medium and allows the user to insert vertical or horizontal wells, as well as secondary faults, into the reservoir mesh. Our work has been implemented, tested and validated in 2D and 2.5D. It can also be extended to 3D when the geometrical constraints are simplicial ones: points, segments and triangles. (author)

  11. Mesh Exposure and Associated Risk Factors in Women Undergoing Transvaginal Prolapse Repair with Mesh

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Frankman

    2013-01-01

    Objective. To determine the frequency, rate, and risk factors associated with mesh exposure in women undergoing transvaginal prolapse repair with polypropylene mesh. Methods. A retrospective chart review was performed for all women who underwent the Prolift Pelvic Floor Repair System (Gynecare, Somerville, NJ) between September 2005 and September 2008. Multivariable logistic regression was performed to identify risk factors for mesh exposure. Results. 201 women underwent Prolift. Mesh exposure occurred in 12% (24/201). Median time to mesh exposure was 62 days (range: 10–372). When mesh was placed in the anterior compartment, the frequency of mesh exposure was higher than when mesh was placed in the posterior compartment (8.7% versus 2.9%, P=0.04). Independent risk factors for mesh exposure were diabetes (AOR = 7.7, 95% CI 1.6–37.6; P=0.01) and surgeon (AOR = 7.3, 95% CI 1.9–28.6; P=0.004). Conclusion. Women with diabetes have a 7-fold increased risk for mesh exposure after transvaginal prolapse repair using Prolift. The variable rate of mesh exposure amongst surgeons may be related to technique. The anterior vaginal wall may be at higher risk of mesh exposure as compared to the posterior vaginal wall.

  12. [An evaluation of sampling design for estimating an epidemiologic volume of diabetes and for assessing present status of its control in Korea].

    Science.gov (United States)

    Lee, Ji-Sung; Kim, Jaiyong; Baik, Sei-Hyun; Park, Ie-Byung; Lee, Juneyoung

    2009-03-01

    An appropriate sampling strategy for estimating the epidemiologic volume of diabetes has been evaluated through a simulation. We analyzed about 250 million medical insurance claims submitted to the Health Insurance Review & Assessment Service with diabetes as a principal or subsequent diagnosis, at least once per year, in 2003. The database was reconstructed into a 'patient-hospital profile' that had 3,676,164 cases, and then into a 'patient profile' that consisted of 2,412,082 observations. The patient profile data were then used to test the validity of a proposed sampling frame and methods of sampling to develop diabetes-related epidemiologic indices. The simulation study showed that use of a stratified two-stage cluster sampling design with a total sample size of 4,000 will provide an estimate of 57.04% (95% prediction range, 49.83-64.24%) for the treatment prescription rate of diabetes. The proposed sampling design consists of first stratifying the areas of the nation into "metropolitan/city/county" and the types of hospital into "tertiary/secondary/primary/clinic" with a proportion of 5:10:10:75. Hospitals are then randomly selected within the strata as the primary sampling unit, followed by a random selection of patients within the hospitals as the secondary sampling unit. The difference between the estimate and the parameter value was projected to be less than 0.3%. The proposed sampling scheme will be applied to a subsequent nationwide field survey not only for estimating the epidemiologic volume of diabetes but also for assessing the present status of nationwide diabetes control.
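
The proposed design, stratify, draw hospitals as primary sampling units, then draw patients within them as secondary units, can be sketched directly (the tiny sampling frame below is fabricated for illustration):

```python
import random

def two_stage_sample(frame, n_hospitals_per_stratum, n_patients_per_hospital,
                     seed=0):
    """Stratified two-stage cluster sample: within each stratum draw hospitals
    (primary sampling units), then draw patients (secondary sampling units)
    within each selected hospital.  `frame` maps stratum -> {hospital: [patients]}."""
    rng = random.Random(seed)
    chosen = []
    for stratum, hospitals in frame.items():
        picked = rng.sample(sorted(hospitals),
                            min(n_hospitals_per_stratum, len(hospitals)))
        for h in picked:
            patients = hospitals[h]
            for p in rng.sample(patients,
                                min(n_patients_per_hospital, len(patients))):
                chosen.append((stratum, h, p))
    return chosen
```

Unbiased estimation from such a sample would additionally weight each patient by the inverse of their inclusion probability; that step is omitted here for brevity.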

  13. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. It reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a manner similar to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure a well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods to solve a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each non-zero entry corresponds to an edge, to allocate groups of vertices to processors in such a way that many of
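
    The k-way partitioning objective stated above (groups of roughly equal size, minimized edge cut) can be made concrete with a short check; the function names here are illustrative only:

```python
from collections import Counter

def edge_cut(edges, part):
    """Count edges whose endpoints fall into different groups."""
    return sum(1 for u, v in edges if part[u] != part[v])

def is_balanced(part, k, tol=1):
    """True if the k groups contain roughly equal numbers of vertices."""
    sizes = Counter(part.values())
    return len(sizes) <= k and max(sizes.values()) - min(sizes.values()) <= tol

# A 6-vertex ring split into two halves: only the two "seam" edges are cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

    In a mesh-based computation, each cut edge corresponds to data that must be communicated between processors, which is why partitioners minimize exactly this metric.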

  14. A study of toxic emissions from a coal-fired power plant utilizing an ESP/Wet FGD system. Volume 1, Sampling, results, and special topics: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    This was one of a group of assessments of toxic emissions from coal-fired power plants, conducted for DOE-PETC in 1993 as mandated by the 1990 Clean Air Act. It is organized into 2 volumes; Volume 1 describes the sampling effort, presents the concentration data on toxic chemicals in several power plant streams, and reports the results of evaluations and calculations. The study involved solid, liquid, and gaseous samples from input, output, and process streams at Coal Creek Station Unit No. 1, Underwood, North Dakota (1100 MW mine-mouth plant burning lignite from the Falkirk mine located adjacent to the plant). This plant had an electrostatic precipitator and a wet scrubber flue gas desulfurization unit. Measurements were conducted on June 21–24, 26, and 27, 1993; chemicals measured were 6 major and 16 trace elements (including Hg, Cr, Cd, Pb, Se, As, Be, Ni), acids and corresponding anions (HCl, HF, chloride, fluoride, phosphate, sulfate), ammonia and cyanide, elemental C, radionuclides, VOCs, semivolatiles (incl. PAH, polychlorinated dioxins, furans), and aldehydes. Volume 2: Appendices includes process data log sheets, field sampling data sheets, uncertainty calculations, and quality assurance results.

  15. Adaptive mesh refinement in titanium

    Energy Technology Data Exchange (ETDEWEB)

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a production-level software package that applies the Adaptive Mesh Refinement methodology to numerical partial differential equations. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation on two grid configurations from a real application. Counts of lines of code for both implementations are also provided.

  16. Mechanical properties of the samples produced by volume powder cladding of stainless steel using a continuous fiber laser

    Science.gov (United States)

    Bykovskiy, D. P.; Petrovskiy, V. N.; Mironov, V. D.; Osintsev, A. V.; Ochkov, K. Yu

    2016-09-01

    Samples for tensile tests were manufactured using one of the additive technologies, direct laser material deposition. Investigations were carried out on a Huffman HC-205 facility equipped with a fiber laser with a power of up to 3.5 kW. Various strategies for layering metallic powder of stainless steel 316L were considered to optimize the build parameters for the samples. We measured the stress-strain state of the produced samples by the method of digital image correlation. The nominal tensile strength of the samples produced by direct deposition of 316L steel powder was found to be high: 767 MPa.

  17. Mesh refinement strategy for optimal control problems

    Science.gov (United States)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes with equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error falls below a user-specified threshold. The technique is applied to solve a car-like vehicle problem aiming at minimum fuel consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time compared to using a time mesh with equidistant spacing.
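
    The refine-until-tolerance loop described above can be sketched in 1-D as follows; this is a hypothetical illustration, with `local_error` standing in for whatever local-error estimator the method actually uses, and simple bisection as the refinement rule:

```python
def refine_mesh(nodes, local_error, tol, max_iter=10):
    """Iteratively bisect time subintervals whose estimated local error
    exceeds tol, yielding a non-equidistant (non-uniform) mesh."""
    for _ in range(max_iter):
        new_nodes, refined = [nodes[0]], False
        for a, b in zip(nodes, nodes[1:]):
            if local_error(a, b) > tol:
                new_nodes.append(0.5 * (a + b))  # insert the midpoint
                refined = True
            new_nodes.append(b)
        nodes = new_nodes
        if not refined:  # every subinterval meets the tolerance
            break
    return nodes

# With a width-squared stand-in estimator, [0, 1] is refined twice.
mesh = refine_mesh([0.0, 1.0], lambda a, b: (b - a) ** 2, 0.1)
```

    With an estimator that varies over the time domain, only the offending subintervals get bisected, concentrating nodes where the dynamics are hardest to resolve.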

  18. 7th International Meshing Roundtable '98

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, T.J.

    1998-10-01

    The goal of the 7th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries.

  19. Laparoscopic Pelvic Floor Repair Using Polypropylene Mesh

    Directory of Open Access Journals (Sweden)

    Shih-Shien Weng

    2008-09-01

    Conclusion: Laparoscopic pelvic floor repair using a single piece of polypropylene mesh combined with uterosacral ligament suspension appears to be a feasible procedure for the treatment of advanced vaginal vault prolapse and enterocele. Fewer mesh erosions and postoperative pain syndromes were seen in patients who had no previous pelvic floor reconstructive surgery.

  20. A comparison of tetrahedral mesh improvement techniques

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.A.; Ollivier-Gooch, C. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1996-12-01

    Automatic mesh generation and adaptive refinement methods for complex three-dimensional domains have proven to be very successful tools for the efficient solution of complex applications problems. These methods can, however, produce poorly shaped elements that cause the numerical solution to be less accurate and more difficult to compute. Fortunately, the shape of the elements can be improved through several mechanisms, including face-swapping techniques that change local connectivity and optimization-based mesh smoothing methods that adjust grid point location. The authors consider several criteria for each of these two methods and compare the quality of several meshes obtained by using different combinations of swapping and smoothing. Computational experiments show that swapping is critical to the improvement of general mesh quality and that optimization-based smoothing is highly effective in eliminating very small and very large angles. The highest quality meshes are obtained by using a combination of swapping and smoothing techniques.
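
    As a hedged illustration of the optimization-based smoothing idea (not the authors' implementation), one can move a free vertex to whichever candidate position maximizes the minimum angle over its incident elements. The sketch below is 2-D for brevity, and all names and the candidate-search strategy are assumptions:

```python
import math

def tri_angles(p, q, r):
    """Interior angles (radians) of triangle pqr via the law of cosines."""
    a, b, c = math.dist(q, r), math.dist(p, r), math.dist(p, q)
    ang_p = math.acos(max(-1.0, min(1.0, (b * b + c * c - a * a) / (2 * b * c))))
    ang_q = math.acos(max(-1.0, min(1.0, (a * a + c * c - b * b) / (2 * a * c))))
    return ang_p, ang_q, math.pi - ang_p - ang_q

def smooth_vertex(incident_edges, candidates):
    """Pick the candidate position for the free vertex that maximizes the
    minimum angle over its incident triangles (optimization-based smoothing)."""
    def worst_angle(pos):
        return min(min(tri_angles(pos, q, r)) for q, r in incident_edges)
    return max(candidates, key=worst_angle)

# Free vertex inside a unit square: the centroid beats a skewed position
# because it avoids the near-degenerate sliver triangles.
ring = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
best = smooth_vertex(ring, [(0.9, 0.9), (0.5, 0.5)])
```

    Maximizing the minimum angle is one of the quality criteria such smoothers optimize; it directly targets the very small and very large angles the abstract says smoothing eliminates.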

  1. Generation of high order geometry representations in Octree meshes

    Directory of Open Access Journals (Sweden)

    Harald G. Klimach

    2015-11-01

    We propose a robust method to convert triangulated surface data into polynomial volume data. Such polynomial representations are required for high-order partial differential equation solvers, as low-order surface representations would diminish the accuracy of their solution. Our proposed method deploys a first-order spatial bisection algorithm to robustly find an approximation of given geometries. The resulting voxelization is then used to generate Legendre polynomials of arbitrary degree. By embedding the locally defined polynomials in cubical elements of a coarser mesh, this method can reliably approximate even complex structures, like porous media. It thereby makes it possible to provide appropriate material definitions for high-order discontinuous Galerkin schemes. We describe the method used to construct the polynomials and how it fits into the overall mesh generation. Our discussion includes numerical properties of the method, and we show some results from applying it to various geometries. We have implemented the described method in our mesh generator Seeder, which is publicly available under a permissive open-source license.

  2. Anisotropic diffusion in mesh-free numerical magnetohydrodynamics

    Science.gov (United States)

    Hopkins, Philip F.

    2017-04-01

    We extend recently developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect) and turbulent 'eddy diffusion'. We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV). We show that the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behaviour even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators and non-linear flux limiters, which is trivially generalized to AMR/moving-mesh codes. We also present applications of some of these improvements for SPH, in the form of a new integral-Godunov SPH formulation that adopts a moving-least squares gradient estimator and introduces a flux-limited Riemann problem between particles.

  3. Distribution of dead wood volume and mass in mediterranean Fagus sylvatica L. forests in Northern Iberian Peninsula. Implications for field sampling inventory

    Directory of Open Access Journals (Sweden)

    Celia Herrero

    2016-12-01

    Aim of study: The aim of this study was to (1) estimate the amount of dead wood in managed beech (Fagus sylvatica L.) stands in the northern Iberian Peninsula and (2) evaluate the most appropriate volume equation and the optimal transect length for sampling downed wood. Area of study: The study area is the Aralar Forest in Navarra (northern Iberian Peninsula). Material and methods: The amount of dead wood by component (downed logs, snags, stumps and fine woody debris) was inventoried in 51 plots across a chronosequence of stand ages (0-120 years old). Main results: The average volume and biomass of dead wood were 24.43 m3 ha-1 and 7.65 Mg ha-1, respectively. This amount changed with stand development stage (17.14 m3 ha-1 in the seedling stage, 34.09 m3 ha-1 in the pole stage, 22.54 m3 ha-1 in the mature stage, and 24.27 m3 ha-1 in regular stands in the regeneration stage), although the differences were not statistically significant for coarse woody debris. However, forest management influenced the amount of dead wood, because the proportion of mass in the different components and the decay stage depended on the time since the last thinning. The formula based on intersection diameter resulted in the smallest coefficient of variation out of seven log-volume formulae. Thus, the intersection diameter is the preferred method because it gives unbiased estimates, has the greatest precision, and is the easiest to implement in the field. Research highlights: The amount of dead wood, and in particular snags, was significantly lower than that in reserved forests. Results of this study showed that sampling effort should be directed towards increasing the number of transects, instead of increasing transect length or collecting additional piece diameters that do not increase the accuracy or precision of DWM volume estimation. Keywords: snags; downed logs; stumps; fine woody debris; beech; line intersect sampling.
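
    In standard line-intersect sampling, the intersection-diameter formula the study prefers is the Van Wagner estimator; a minimal sketch, assuming diameters in cm and transect length in m (which yields m3 ha-1):

```python
import math

def lis_volume_m3_per_ha(diameters_cm, transect_m):
    """Van Wagner line-intersect estimator of downed-wood volume:
    V = pi^2 * sum(d_i^2) / (8 * L), with d_i in cm and L in m."""
    return math.pi ** 2 * sum(d * d for d in diameters_cm) / (8.0 * transect_m)

# A single 10 cm piece crossing a 100 m transect contributes pi^2/8 m3/ha.
v = lis_volume_m3_per_ha([10.0], 100.0)
```

    Only the diameter at the crossing point and the total transect length enter the estimate, which is why the study finds that recording extra piece diameters adds no precision.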

  4. Characteristics of Mesh Wave Impedance in FDTD Non-Uniform Mesh

    Institute of Scientific and Technical Information of China (English)

    REN Wu; LIU Bo; GAO Ben-qing

    2005-01-01

    In order to increase the evaluation precision of mesh-reflected waves, the mesh wave impedance (MWI) is extended to non-uniform meshes in the 1-D and 2-D cases for the first time, on the basis of Yee's positional relation for electromagnetic field components. Characteristics are obtained for different mesh sizes and frequencies. The reflection coefficient caused by the non-uniform mesh can then be calculated according to the theory of equivalent transmission lines. By comparing it with that calculated by MWI on a uniform mesh, it is found that the evaluation error can be largely reduced and is in good agreement with that directly computed by the FDTD method. This extension of MWI can be used in the error analysis of complex meshes.
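
    Treating the two mesh regions as transmission-line sections whose mesh wave impedances play the role of line impedances, the reflection at the size discontinuity follows the usual transmission-line formula; a minimal sketch (variable names are illustrative):

```python
def reflection_coefficient(z1, z2):
    """Reflection at the junction of two line impedances:
    Gamma = (Z2 - Z1) / (Z2 + Z1)."""
    return (z2 - z1) / (z2 + z1)

# Matched impedances reflect nothing; a mismatch reflects part of the wave.
gamma_matched = reflection_coefficient(50.0, 50.0)
gamma_mismatch = reflection_coefficient(50.0, 75.0)
```

    The same formula explains why an abrupt change in FDTD cell size produces a spurious numerical reflection: the two regions present different mesh wave impedances to the propagating wave.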

  5. Update on Development of Mesh Generation Algorithms in MeshKit

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Vanderzee, Evan [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-30

    MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.

  6. Preparation and characterization of mesh membranes using electrospinning technique

    Science.gov (United States)

    Russo, Giuseppina; Peters, Gerrit W. M.; Solberg, Ramon H. M.

    2012-07-01

    This paper is focused on the formulation and characterization of membranes with a mesh structure that can act as biomedical devices to reduce local inflammation and improve tissue regeneration. These systems were realized by homogeneously dispersing lamellar hydrotalcite loaded with diclofenac sodium (HTLc-DIK) in a polymeric matrix of poly-caprolactone (PCL). Membranes were obtained through the electrospinning technique, which has shown many advantages with respect to other techniques. Experiments carried out on the manufactured samples highlight the non-toxicity of the samples and very good interactions between cells and the device.

  7. Evaluation of selected-ion flow-tube mass spectrometry for the measurement of ethanol, methanol and isopropanol in physiological fluids: effect of osmolality and sample volume.

    Science.gov (United States)

    Rowbottom, Lynn; Workman, Clive; Roberts, Norman B

    2009-09-01

    Selected-ion flow-tube mass spectrometry (SIFT-MS) is particularly suited for the analysis of volatile low molecular weight compounds. We have evaluated this technique for the assay of different alcohols in aqueous solutions, including blood plasma, and in particular whether the osmolality or sample volume affected vapourisation. Solutions of three different alcohols (methanol, ethanol and isopropanol) ranging from 0.005 to 50 mmol/L were prepared in deionised water (0 milliosmol), phosphate-buffered saline (690 mOsm), isotonic saline (294 mOsm) and plasma (296 mOsm). The vapour above the sample (50 to 1000 microL) contained in air-tight tubes at 37 degrees C was aspirated into the instrument. The outputs for ethanol, methanol and isopropanol were linear over the concentration range and independent of the sample volume and relatively independent of the osmolar concentration. SIFT-MS can reliably and accurately measure common alcohols in the headspace above aqueous solutions, including serum/plasma. This novel application of SIFT-MS is easy to follow, requires no sample preparation and the wide dynamic range will facilitate measurement of alcohols present from normal metabolism as well as when taken in excess or in accidental poisoning.

  8. A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics

    Science.gov (United States)

    Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars

    2016-11-01

    We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine-precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such difference has been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques, and, when coupled to a moving mesh can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.

  9. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Derivations and Verification of Plans. Volume 1

    Science.gov (United States)

    Johnson, Kenneth L.; White, K, Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.

  10. Large volume of water samples introduced in dispersive liquid-liquid microextraction for the determination of 15 triazole fungicides by gas chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Nie, Jing; Chen, Fujiang; Song, Zhiyu; Sun, Caixia; Li, Zuguang; Liu, Wenhan; Lee, Mawrong

    2016-10-01

    A novel method in which large volumes of water samples are directly introduced into dispersive liquid-liquid microextraction was developed, based on ultrasound/manual-shaking-synergy-assisted emulsification and self-generated carbon dioxide gas (CO2) breaking down the emulsion, for the determination of 15 triazole fungicides by gas chromatography-tandem mass spectrometry. This technique disperses the low-density extraction solvent toluene (180 μL) into 200 mL of sample containing 0.05 mol L(-1) of HCl and 5% of NaCl (w/v) to form a stable emulsion by the synergy of ultrasound and manual shaking, and injects NaHCO3 solution (1.0 mol L(-1)) to generate CO2, achieving phase separation with the assistance of ultrasound. The entire process is accomplished within 8 min. The injection of NaHCO3 to generate CO2 achieves phase separation without the centrifugation step that otherwise limits large-volume aqueous samples. In addition, the device can be easily cleaned, and this kind of vessel can be reconfigured for any sample volume. Under optimal conditions, low limits of detection ranging from 0.7 to 51.7 ng L(-1) and wide linearity were achieved, and the enrichment factors obtained were in the range 924-3669 for the different triazole fungicides. Water from the southern end of the Beijing-Hangzhou Grand Canal (Hangzhou, China) was used to verify the applicability of the developed method. Graphical Abstract: Flow chart of ultrasound/manual-shaking-synergy-assisted emulsification and self-generated carbon dioxide gas breaking down the emulsion.

  11. Quadratically consistent projection from particles to mesh

    CERN Document Server

    Duque, Daniel

    2016-01-01

    The advantage of particle Lagrangian methods in computational fluid dynamics is that advection is accurately modeled. However, this complicates the calculation of space derivatives. If a mesh is employed, it must be updated at each time step. On the other hand, fixed mesh, Eulerian, formulations benefit from the mesh being defined at the beginning of the simulation, but feature non-linear advection terms. It therefore seems natural to combine the two approaches, using a fixed mesh to perform calculations related to space derivatives, and using the particles to advect the information with time. The idea of combining Lagrangian particles and a fixed mesh goes back to Particle-in-Cell methods, and is here considered within the context of the finite element method (FEM) for the fixed mesh, and the particle FEM (pFEM) for the particles. Our results, in agreement with recent works, show that interpolation ("projection") errors, especially from particles to mesh, are the culprits of slow convergence of the method if...

  12. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    Science.gov (United States)

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  13. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    Directory of Open Access Journals (Sweden)

    Zichun Zhong

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either a uniform orthogonal grid or uniform tetrahedral meshes.

  14. Multi-Dimensional, Compressible Viscous Flow on a Moving Voronoi Mesh

    CERN Document Server

    Muñoz, Diego; Marcus, Robert; Vogelsberger, Mark; Hernquist, Lars

    2012-01-01

    Numerous formulations of finite volume schemes for the Euler and Navier-Stokes equations exist, but in the majority of cases they have been developed for structured and stationary meshes. In many applications, more flexible mesh geometries that can dynamically adjust to the problem at hand and move with the flow in a (quasi) Lagrangian fashion would, however, be highly desirable, as this can allow a significant reduction of advection errors and an accurate realization of curved and moving boundary conditions. Here we describe a novel formulation of viscous continuum hydrodynamics that solves the equations of motion on a Voronoi mesh created by a set of mesh-generating points. The points can move in an arbitrary manner, but the most natural motion is that given by the fluid velocity itself, such that the mesh dynamically adjusts to the flow. Owing to the mathematical properties of the Voronoi tessellation, pathological mesh-twisting effects are avoided. Our implementation considers the full Navier-Stokes equat...

  15. H(curl) Auxiliary Mesh Preconditioning

    Energy Technology Data Exchange (ETDEWEB)

    Kolev, T V; Pasciak, J E; Vassilevski, P S

    2006-08-31

    This paper analyzes a two-level preconditioning scheme for H(curl) bilinear forms. The scheme utilizes an auxiliary problem on a related mesh that is more amenable for constructing optimal order multigrid methods. More specifically, we analyze the case when the auxiliary mesh only approximately covers the original domain. The latter assumption is important since it allows for easy construction of nested multilevel spaces on regular auxiliary meshes. Numerical experiments in both two and three space dimensions illustrate the optimal performance of the method.

  16. Engagement of Metal Debris into Gear Mesh

    Science.gov (United States)

    handschuh, Robert F.; Krantz, Timothy L.

    2010-01-01

    A series of bench-top experiments was conducted to determine the effects of metallic debris being dragged through meshing gear teeth. A test rig that is typically used to conduct contact fatigue experiments was used for these tests. Several sizes of drill material, shim stock and pieces of gear teeth were introduced and then driven through the meshing region. The level of torque required to drive the "chip" through the gear mesh was measured. From the data gathered, chip size sufficient to jam the mechanism can be determined.

  17. Application of mesh network radios to UGS

    Science.gov (United States)

    Calcutt, Wade; Jones, Barry; Roeder, Brent

    2008-04-01

    During the past five years McQ has been actively pursuing integrating and applying wireless mesh network radios as a communications solution for unattended ground sensor (UGS) systems. This effort has been rewarded with limited levels of success and has ultimately resulted in a corporate position regarding the use of mesh network radios for UGS systems. A discussion into the background of the effort, the challenges of implementing commercial off-the-shelf (COTS) mesh radios with UGSs, the tradeoffs involved, and an overview of the future direction is presented.

  18. Mesh Optimization for Ground Vehicle Aerodynamics

    OpenAIRE

    Adrian Gaylard; Essam F Abo-Serie; Nor Elyana Ahmad

    2010-01-01

    A mesh optimization strategy for estimating the drag of a ground vehicle accurately is proposed, based on examining the effect of different mesh parameters. The optimized mesh parameters were selected using a design of experiments (DOE) method to be able to work in a...

  19. Rigidity Constraints for Large Mesh Deformation

    Institute of Scientific and Technical Information of China (English)

    Yong Zhao; Xin-Guo Liu; Qun-Sheng Peng; Hu-Jun Bao

    2009-01-01

    Avoiding apparent volumetric distortions around largely deformed areas is a challenging problem for surface-based deformation. In this paper, we propose a new rigidity constraint for gradient-domain mesh deformation to address this problem. Intuitively, the proposed constraint can be regarded as several small cubes defined by the mesh vertices through mean value coordinates. The user interactively specifies the cubes in the regions that are prone to volumetric distortions, and the rigidity constraints make the mesh behave like a solid object during deformation. The experimental results demonstrate that our constraint is intuitive, easy to use and very effective.

  20. SURFACE MESH PARAMETERIZATION WITH NATURAL BOUNDARY

    Institute of Scientific and Technical Information of China (English)

    Ye Ming; Zhu Xiaofeng; Wang Chengtao

    2003-01-01

Using the projected curve of the surface mesh boundary as the parameter domain border, linear mapping parameterization with a natural boundary is realized. A fast algorithm for the least squares fitting plane of the mesh boundary vertices is proposed. After the mesh boundary is projected onto the fitting plane, low-pass filtering is adopted to eliminate crossovers, sharp corners and cavities in the projected curve and convert it into an eligible convex parameter domain boundary. To facilitate quantitative evaluation of parameterization schemes, three distortion-measuring formulae are presented.
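
The plane-fitting step can be sketched with a standard PCA/SVD least-squares fit, a generic approach that may differ in detail from the paper's fast algorithm; `fit_plane` and `project_to_plane` are illustrative names:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points.

    Returns (centroid, unit normal). The normal is the right singular
    vector for the smallest singular value of the centered point cloud,
    i.e. the direction of least variance.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)  # rows of vt: principal directions
    normal = vt[-1]
    return centroid, normal

def project_to_plane(points, centroid, normal):
    """Orthogonally project points onto the fitted plane."""
    pts = np.asarray(points, dtype=float)
    d = (pts - centroid) @ normal            # signed distances to the plane
    return pts - np.outer(d, normal)
```

After this projection, the boundary curve lives in the fitting plane and can be filtered into a convex parameter domain boundary as the abstract describes.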

  1. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
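
The background-mesh idea, element size as a function of wall distance, can be illustrated with a minimal sizing function. The linear blend below is an assumption for illustration only; the authors' actual formula ties the wall size to local flow rate and airway diameter:

```python
def element_size(wall_distance, radius, size_wall, size_center):
    """Target edge length at a point in the airway lumen.

    Blends from size_wall at the wall (wall_distance = 0) to size_center
    on the centerline (wall_distance = radius), giving fine
    boundary-layer elements and coarser core elements.
    """
    t = min(max(wall_distance / radius, 0.0), 1.0)  # normalized wall distance
    return size_wall + (size_center - size_wall) * t
```

A size field like this is what Gmsh and TetGen consume as a background mesh to drive non-uniform refinement.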

  2. High-resolution liquid- and solid-state nuclear magnetic resonance of nanoliter sample volumes using microcoil detectors

    NARCIS (Netherlands)

    Kentgens, A.P.M.; Bart, J.; Bentum, van P.J.M.; Brinkmann, A.; Eck, van E.R.H.; Gardeniers, J.G.E.; Janssen, J.W.G.; Knijn, P.J.; Vasa, S.; Verkuijlen, M.H.W.

    2008-01-01

The predominant means to detect nuclear magnetic resonance (NMR) is to monitor the voltage induced in a radiofrequency coil by the precessing magnetization. To address the sensitivity of NMR for mass-limited samples it is worthwhile to miniaturize this detector coil. Although making smaller coils see...

  3. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    Science.gov (United States)

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas' volume image to the subject's, and establishes a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas' mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject-specific lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery.
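
The core warping step, deforming the atlas mesh by sampling a dense displacement field at each vertex, can be sketched as follows. Trilinear interpolation on a regular grid is assumed, and `deform_vertices` is a hypothetical helper, not the paper's implementation:

```python
import numpy as np

def deform_vertices(vertices, disp_field, spacing=1.0):
    """Deform mesh vertices by a dense displacement field.

    disp_field has shape (nx, ny, nz, 3): a 3D displacement vector at
    each grid node. Each vertex's displacement is sampled by trilinear
    interpolation (clamped at the grid border).
    """
    v = np.asarray(vertices, float) / spacing
    out = np.array(vertices, float)
    nx, ny, nz, _ = disp_field.shape
    for k, p in enumerate(v):
        i0 = np.clip(np.floor(p).astype(int), 0, [nx - 2, ny - 2, nz - 2])
        f = p - i0                            # fractional cell coordinates
        d = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((f[0] if dx else 1 - f[0]) *
                         (f[1] if dy else 1 - f[1]) *
                         (f[2] if dz else 1 - f[2]))
                    d = d + w * disp_field[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        out[k] += d
    return out
```

In the atlas pipeline the displacement field would come from the diffeomorphic B-spline registration, whose non-folding guarantee is what keeps the deformed mesh valid.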

  4. Green Ocean Amazon 2014/15 High-Volume Filter Sampling: Atmospheric Particulate Matter of an Amazon Tropical City and its Relationship to Population Health Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Machado, C. M. [Federal Univ. of Amazonas (Brazil); Santos, Erickson O. [Federal Univ. of Amazonas (Brazil); Fernandes, Karenn S. [Federal Univ. of Amazonas (Brazil); Neto, J. L. [Federal Univ. of Amazonas (Brazil); Souza, Rodrigo A. [Univ. of the State of Amazonas (Brazil)

    2016-08-01

Manaus, the capital of the Brazilian state of Amazonas, is developing very rapidly. Its pollution plume contains aerosols from fossil fuel combustion, mainly due to vehicular emission, industrial activity, and a thermal power plant. Soil resuspension is probably a secondary source of atmospheric particles. The plume transports urban pollutants from Manaus to the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility ARM site at Manacapuru, as well as pollutants from pottery factories along the route of the plume. Considering the effects of particulate matter on health, atmospheric particulate matter was evaluated at this site as part of the ARM Facility’s Green Ocean Amazon 2014/15 (GoAmazon 2014/15) field campaign. Aerosol or particulate matter (PM) is typically defined by size, with the smaller particles having more health impact. Total suspended particulate (TSP) comprises particles smaller than 100 μm; particles smaller than 2.5 μm are called PM2.5. In this work, PM2.5 levels were obtained from March to December of 2015, totaling 34 samples, and TSP levels from October to December of 2015, totaling 17 samples. Sampling was conducted with PM2.5 and TSP high-volume samplers using quartz filters (Figure 1). Filters were stored for 24 hours in a room with temperature (21.1 °C) and humidity (44.3%) control, in order to perform gravimetric analyses by weighing before and after sampling. This procedure followed the recommendations of the Brazilian Association for Technical Standards local norm (NBR 9547:1997). Mass concentrations of particulate matter were obtained from the ratio between the weighed sample mass and the volume of air collected. Defining a relationship between particulate matter (PM2.5 and TSP) and respiratory diseases of the local population is an important goal of this project, since no information exists on that topic.
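
The gravimetric arithmetic described above, concentration as collected mass over sampled air volume, is simple enough to state directly. Flow rate and duration are hypothetical inputs here; real high-volume samplers typically log an integrated air volume:

```python
def pm_concentration(mass_before_mg, mass_after_mg, flow_m3_min, minutes):
    """PM mass concentration in ug/m^3 from a high-volume filter sample.

    concentration = (filter mass gain) / (sampled air volume).
    """
    mass_ug = (mass_after_mg - mass_before_mg) * 1000.0  # mg -> ug
    volume_m3 = flow_m3_min * minutes
    return mass_ug / volume_m3
```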

  5. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming

    2013-04-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
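
In 2D with a convex domain, a clipped Voronoi cell is the intersection of the domain with the bisector half-planes against the other sites. The brute-force sketch below (Sutherland-Hodgman clipping against every site pair) illustrates the definition only; the paper's contribution is computing this efficiently without testing all pairs:

```python
def clip_halfplane(poly, a, b, c):
    """Keep the part of convex polygon `poly` where a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        in1 = a * x1 + b * y1 <= c
        in2 = a * x2 + b * y2 <= c
        if in1:
            out.append((x1, y1))
        if in1 != in2:  # edge crosses the half-plane boundary
            t = (c - a * x1 - b * y1) / (a * (x2 - x1) + b * (y2 - y1))
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def clipped_voronoi_cell(sites, i, box):
    """Voronoi cell of sites[i] clipped to a convex polygon `box`.

    The cell is the intersection of the box with the half-planes nearer
    to site i than to each other site (perpendicular bisectors).
    """
    px, py = sites[i]
    cell = list(box)
    for j, (qx, qy) in enumerate(sites):
        if j == i or not cell:
            continue
        # |p - s_i|^2 <= |p - s_j|^2  <=>  a*x + b*y <= c
        a, b = 2 * (qx - px), 2 * (qy - py)
        c = qx * qx + qy * qy - px * px - py * py
        cell = clip_halfplane(cell, a, b, c)
    return cell
```

For two sites in the unit square the two clipped cells partition the square, which is exactly the property the centroidal Voronoi tessellation iteration relies on.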

  6. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-01-01

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc.

  7. LR: Compact connectivity representation for triangle meshes

    Energy Technology Data Exchange (ETDEWEB)

    Gurung, T; Luffel, M; Lindstrom, P; Rossignac, J

    2011-01-28

    We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.
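
LR itself compresses connectivity by ordering vertices along a nearly-Hamiltonian cycle; the sketch below is not LR's encoding but the classic corner table it competes with, which supports the same constant-time traversal queries at a higher storage cost:

```python
def build_opposites(triangles):
    """Corner-table 'opposite' array for a manifold triangle mesh.

    Corner c = 3*t + k is vertex k of triangle t; opp[c] is the corner
    opposite the same edge in the neighboring triangle, or -1 on a
    boundary edge. Built in linear time with an edge hash.
    """
    edge_to_corner = {}
    opp = [-1] * (3 * len(triangles))
    for t, tri in enumerate(triangles):
        for k in range(3):
            # corner k is opposite the edge (v_{k+1}, v_{k+2})
            u, v = tri[(k + 1) % 3], tri[(k + 2) % 3]
            key = (min(u, v), max(u, v))
            c = 3 * t + k
            if key in edge_to_corner:
                c2 = edge_to_corner.pop(key)
                opp[c], opp[c2] = c2, c
            else:
                edge_to_corner[key] = c
    return opp
```

Like LR, this structure is cheap to build from any mesh with adjacency queries but must be rebuilt whenever the connectivity changes.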

  8. Shape space exploration of constrained meshes

    KAUST Repository

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  9. Spacetime Meshing for Discontinuous Galerkin Methods

    CERN Document Server

    Thite, Shripad Vidyadhar

    2008-01-01

Spacetime discontinuous Galerkin (SDG) finite element methods are used to solve PDEs in space and time variables arising from wave propagation phenomena in important applications in science and engineering. To support an accurate and efficient solution procedure using SDG methods and to exploit the flexibility of these methods, we give a meshing algorithm to construct an unstructured simplicial spacetime mesh over an arbitrary simplicial space domain. Our algorithm is the first spacetime meshing algorithm suitable for efficient solution of nonlinear phenomena in anisotropic media using novel discontinuous Galerkin finite element methods for implicit solutions directly in spacetime. Given a triangulated d-dimensional Euclidean space domain M (a simplicial complex) and initial conditions of the underlying hyperbolic spacetime PDE, we construct an unstructured simplicial mesh of the (d+1)-dimensional spacetime domain M x [0,infinity). Our algorithm uses a near-optimal number of spacetime elements, ea...

  10. Metal Mesh Filters for Terahertz Receivers Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The technical objective of this SBIR program is to develop and demonstrate metal mesh filters for use in NASA's low noise receivers for terahertz astronomy and...

  11. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  12. Obtuse triangle suppression in anisotropic meshes

    KAUST Repository

    Sun, Feng

    2011-12-01

    Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with much fewer obtuse triangles than using existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.
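
The CVT iteration underlying the method can be sketched in its plainest form: 1D Lloyd relaxation with the ordinary Euclidean metric and uniform density. The paper's contribution, the hexagonal Minkowski metric, changes only how distances (and hence cells and centroids) are measured, which this sketch does not implement:

```python
def lloyd_cvt_1d(sites, a=0.0, b=1.0, iters=200):
    """1D Lloyd relaxation toward a centroidal Voronoi tessellation.

    Cell boundaries are midpoints between consecutive sites; each site
    then moves to its cell centroid (the cell midpoint, for uniform
    density). On [a, b] the sites converge to uniform spacing.
    """
    s = sorted(sites)
    for _ in range(iters):
        bounds = [a] + [(s[i] + s[i + 1]) / 2 for i in range(len(s) - 1)] + [b]
        s = [(bounds[i] + bounds[i + 1]) / 2 for i in range(len(s))]
    return s
```

For three sites on [0, 1] the fixed point is (1/6, 1/2, 5/6), i.e. three equal cells, regardless of the initial placement.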

  13. Removal of line artifacts on mesh boundary in computer generated hologram by mesh phase matching.

    Science.gov (United States)

    Park, Jae-Hyeung; Yeom, Han-Ju; Kim, Hee-Jae; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo

    2015-03-23

Mesh-based computer-generated holograms enable realistic and efficient representation of three-dimensional scenes. However, dark line artifacts on the boundary between neighboring meshes are frequently observed, degrading the quality of the reconstruction. In this paper, we propose a simple technique to remove the dark line artifacts by matching the phase on the boundary of neighboring meshes. The feasibility of the proposed method is confirmed by numerical and optical reconstruction of the generated hologram.

  14. Nanoprobe NAPPA Arrays for the Nanoconductimetric Analysis of Ultra-Low-Volume Protein Samples Using Piezoelectric Liquid Dispensing Technology

    Directory of Open Access Journals (Sweden)

    Eugenia Pechkova

    2015-03-01

In recent years, the evolution and advances of nanobiotechnologies applied to the systematic study of proteins, namely structural and functional proteomics, and specifically the development of more sophisticated and large-scale protein arrays, have enabled scientists to investigate protein interactions and functions with unforeseen precision and wealth of detail. Here, we present a further advancement of our previously introduced and described Nucleic Acid Programmable Protein Array (NAPPA)-based nanoconductometric sensor. We coupled Quartz Crystal Microbalance with Dissipation factor Monitoring (QCM_D) with piezoelectric inkjet printing technology (namely, the newly developed ActivePipette), which significantly reduces the volume of probe required for gene/protein arrays. We performed a negative control (with master mix, or MM) and a positive control (MM_p53 plus MDM2). We performed this experiment both in static conditions and in flow, computing the apparent dissociation constant of the p53-MDM2 complex (130 nM), in excellent agreement with the published literature. We compared the results obtained with the ActivePipette printing and dispensing technology vs. pin spotting. Without the ActivePipette, after MDM2 addition the shift in frequency (Δf) was 7575 Hz and the corresponding adsorbed mass was 32.9 μg. With the ActivePipette technology, after MDM2 addition Δf was 7740 Hz and the corresponding adsorbed mass was 33.6 μg. With this experiment, we confirmed the sensing potential of our device, which is able to discriminate each gene and protein as well as their interactions, showing for each of them a unique conductance curve. Moreover, we obtained a better yield with the ActivePipette technology.

  15. MHD simulations on an unstructured mesh

    Energy Technology Data Exchange (ETDEWEB)

    Strauss, H.R. [New York Univ., NY (United States); Park, W.; Belova, E.; Fu, G.Y. [Princeton Univ., NJ (United States). Plasma Physics Lab.; Longcope, D.W. [Univ. of Montana, Missoula, MT (United States); Sugiyama, L.E. [Massachusetts Inst. of Tech., Cambridge, MA (United States)

    1998-12-31

    Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.

  16. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation.

  17. Vector field processing on triangle meshes

    OpenAIRE

    De Goes, Fernando; Desbrun, Mathieu; Tong, Yiying

    2015-01-01

    While scalar fields on surfaces have been staples of geometry processing, the use of tangent vector fields has steadily grown in geometry processing over the last two decades: they are crucial to encoding directions and sizing on surfaces as commonly required in tasks such as texture synthesis, non-photorealistic rendering, digital grooming, and meshing. There are, however, a variety of discrete representations of tangent vector fields on triangle meshes, and each approach offers different tr...

  18. Determination of sample size for a multi-class classifier based on single-nucleotide polymorphisms: a volume under the surface approach.

    Science.gov (United States)

    Liu, Xinyu; Wang, Yupeng; Sriram, T N

    2014-06-14

Data on single-nucleotide polymorphisms (SNPs) have been found to be useful in predicting phenotypes ranging from an individual's class membership to his/her risk of developing a disease. In multi-class classification scenarios, clinical samples are often limited due to cost constraints, making it necessary to determine the sample size needed to build an accurate classifier based on SNPs. The performance of such classifiers can be assessed using the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) for two classes and the Volume Under the ROC hyper-Surface (VUS) for three or more classes. Sample size determination based on AUC or VUS would not only guarantee an overall correct classification rate, but also make studies more cost-effective. For coded SNP data from D(≥2) classes, we derive an optimal Bayes classifier and a linear classifier, and obtain a normal approximation to the probability of correct classification for each classifier. These approximations are then used to evaluate the associated AUCs or VUSs, whose accuracies are validated using Monte Carlo simulations. We give a sample size determination method, which ensures that the difference between the two approximate AUCs (or VUSs) is below a pre-specified threshold. The performance of our sample size determination method is then illustrated via simulations. For the HapMap data with three and four populations, a linear classifier is built using 92 independent SNPs and the required total sample sizes are determined for a continuum of threshold values. In all, four different sample size determination studies are conducted with the HapMap data, covering cases involving well-separated populations to poorly-separated ones. For multiple classes, we have developed a sample size determination methodology and illustrated its usefulness in obtaining a required sample size from the estimated learning curve. For classification scenarios, this methodology will help scientists determine whether a sample...
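
The normal-approximation idea can be illustrated in the two-class case: if the classifier's score is normally distributed in each class with a common variance, the AUC has a closed form. This is a generic textbook sketch, not the paper's D-class derivation; `auc_two_normals` and `min_separation_for_auc` are illustrative names:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def auc_two_normals(mu0, mu1, sigma):
    """AUC of a score distributed N(mu0, sigma^2) in class 0 and
    N(mu1, sigma^2) in class 1: AUC = Phi((mu1 - mu0) / (sigma*sqrt(2)))."""
    return normal_cdf((mu1 - mu0) / (sigma * math.sqrt(2.0)))

def min_separation_for_auc(target_auc, sigma=1.0):
    """Class-mean separation needed to reach a target AUC, found by
    bisection on the monotone formula above."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if auc_two_normals(0.0, mid, sigma) < target_auc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Sample size determination then reduces to asking how many training samples make the estimated separation (and hence the approximate AUC) reliable to within the pre-specified threshold.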

  19. Unstructured Mesh Movement and Viscous Mesh Generation for CFD-Based Design Optimization Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovations proposed are twofold: 1) a robust unstructured mesh movement method able to handle isotropic (Euler), anisotropic (viscous), mixed element (hybrid)...

  20. Mesh geometry impact on Micromegas performance with an Exchangeable Mesh prototype

    Energy Technology Data Exchange (ETDEWEB)

    Kuger, F., E-mail: fabian.kuger@cern.ch [CERN, Geneva (Switzerland); Julius-Maximilians-Universität, Würzburg (Germany); Bianco, M.; Iengo, P. [CERN, Geneva (Switzerland); Sekhniaidze, G. [CERN, Geneva (Switzerland); Universita e INFN, Napoli (Italy); Veenhof, R. [Uludağ University, Bursa (Turkey); Wotschack, J. [CERN, Geneva (Switzerland)

    2016-07-11

The reconstruction precision of gaseous detectors is limited by losses of primary electrons during signal formation. In addition to common gas-related losses, such as attachment, Micromegas suffer from electron absorption during the transition through the micro mesh. This study aims at a deeper understanding of electron losses and their dependence on the mesh geometry. It combines experimental results obtained with a newly designed Exchangeable Mesh Micromegas (ExMe) and advanced microscopic-tracking simulations (ANSYS and Garfield++) of electron drift and mesh transition.

  1. Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems

    Science.gov (United States)

    Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine

    2016-12-01

A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt the meshes of moving boundaries problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of the initial meshes. In most situations, MMAs distribute mesh deformation while preserving good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has proven efficient for adapting meshes to various realistic aerodynamic motions: a symmetric wing subjected to large tip bending and twisting, and the high-lift components of a swept wing moving through different flight stages. Finally, the fluid flow problem was solved on the moved meshes, producing results close to experimental ones. However, for situations where moving boundaries are too close to each other, further improvements are needed, or other approaches should be taken, such as an overset grid method.
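
The first stage of the pipeline, propagating prescribed boundary displacements to interior nodes by IDW, can be sketched as follows; the exponent and the exact weighting are assumptions, as practical IDW mesh movers use tuned blends of rotation and translation weights:

```python
def idw_displacement(point, boundary_pts, boundary_disp, power=3.0):
    """Inverse Distance Weighting of prescribed boundary displacements.

    An interior node's displacement is the weighted average of boundary
    node displacements with weights ~ 1/d^power: nodes near the moving
    boundary follow it almost rigidly, far nodes barely move.
    """
    wsum = 0.0
    acc = [0.0] * len(boundary_disp[0])
    for bp, bd in zip(boundary_pts, boundary_disp):
        d2 = sum((a - b) ** 2 for a, b in zip(point, bp))
        if d2 == 0.0:
            return list(bd)  # node lies on the boundary: exact displacement
        w = d2 ** (-power / 2.0)
        wsum += w
        acc = [a + w * c for a, c in zip(acc, bd)]
    return [a / wsum for a in acc]
```

In the full method this field would then be post-processed by GETMe smoothing and an untangler before the flow solve.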

  2. TVD differencing on three-dimensional unstructured meshes with monotonicity-preserving correction of mesh skewness

    Science.gov (United States)

    Denner, Fabian; van Wachem, Berend G. M.

    2015-10-01

    Total variation diminishing (TVD) schemes are a widely applied group of monotonicity-preserving advection differencing schemes for partial differential equations in numerical heat transfer and computational fluid dynamics. These schemes are typically designed for one-dimensional problems or multidimensional problems on structured equidistant quadrilateral meshes. Practical applications, however, often involve complex geometries that cannot be represented by Cartesian meshes and, therefore, necessitate the application of unstructured meshes, which require a more sophisticated discretisation to account for their additional topological complexity. In principle, TVD schemes are applicable to unstructured meshes, however, not all the data required for TVD differencing is readily available on unstructured meshes, and the solution suffers from considerable numerical diffusion as a result of mesh skewness. In this article we analyse TVD differencing on unstructured three-dimensional meshes, focusing on the non-linearity of TVD differencing and the extrapolation of the virtual upwind node. Furthermore, we propose a novel monotonicity-preserving correction method for TVD schemes that significantly reduces numerical diffusion caused by mesh skewness. The presented numerical experiments demonstrate the importance of accounting for the non-linearity introduced by TVD differencing and of imposing carefully chosen limits on the extrapolated virtual upwind node, as well as the efficacy of the proposed method to correct mesh skewness.
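
For reference, the structured 1D case that TVD schemes are designed around can be sketched with a minmod-limited MUSCL step for linear advection; the article's subject is precisely what this sketch avoids, namely the missing upwind data and skewness corrections needed on unstructured 3D meshes:

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smaller slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_advection_step(u, c):
    """One step of 1D linear advection (periodic grid) with a
    MUSCL/minmod TVD scheme; c is the Courant number, 0 < c <= 1."""
    n = len(u)
    # limited cell slopes from upwind and downwind differences
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # reconstructed value at face i+1/2 (upwind side)
    face = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    return [u[i] - c * (face[i] - face[i - 1]) for i in range(n)]
```

The defining property is that the total variation of the solution never grows, so a step profile is advected without spurious oscillations.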

  3. Discrete differential geometry: the nonplanar quadrilateral mesh.

    Science.gov (United States)

    Twining, Carole J; Marsland, Stephen

    2012-06-01

    We consider the problem of constructing a discrete differential geometry defined on nonplanar quadrilateral meshes. Physical models on discrete nonflat spaces are of inherent interest, as well as being used in applications such as computation for electromagnetism, fluid mechanics, and image analysis. However, the majority of analysis has focused on triangulated meshes. We consider two approaches: discretizing the tensor calculus, and a discrete mesh version of differential forms. While these two approaches are equivalent in the continuum, we show that this is not true in the discrete case. Nevertheless, we show that it is possible to construct mesh versions of the Levi-Civita connection (and hence the tensorial covariant derivative and the associated covariant exterior derivative), the torsion, and the curvature. We show how discrete analogs of the usual vector integral theorems are constructed in such a way that the appropriate conservation laws hold exactly on the mesh, rather than only as approximations to the continuum limit. We demonstrate the success of our method by constructing a mesh version of classical electromagnetism and discuss how our formalism could be used to deal with other physical models, such as fluids.
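
The claim that conservation laws hold exactly on the mesh rests on the identity d∘d = 0 for the discrete exterior derivative, which signed incidence matrices satisfy by construction. The following is a generic discrete-exterior-calculus sketch on a quadrilateral face, not the authors' exact formalism:

```python
def d0_matrix(edges, n_vertices):
    """Discrete exterior derivative on 0-forms: rows = edges, cols = vertices."""
    m = [[0] * n_vertices for _ in edges]
    for r, (a, b) in enumerate(edges):
        m[r][a], m[r][b] = -1, 1          # edge oriented a -> b
    return m

def d1_matrix(faces, edges):
    """Discrete exterior derivative on 1-forms: rows = faces, cols = edges."""
    index = {e: i for i, e in enumerate(edges)}
    m = [[0] * len(edges) for _ in faces]
    for r, face in enumerate(faces):
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            if (a, b) in index:
                m[r][index[(a, b)]] = 1   # edge agrees with face orientation
            else:
                m[r][index[(b, a)]] = -1  # edge is traversed against its orientation
    return m
```

Because the composite d1·d0 vanishes identically, mesh versions of the Stokes/divergence theorems hold exactly rather than only in the continuum limit.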

  4. Hybrid Surface Mesh Adaptation for Climate Modeling

    Institute of Scientific and Technical Information of China (English)

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-01-01

Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  5. Determination of selected polycyclic aromatic compounds in particulate matter: a validation study of an agitation extraction method for samples with low mass loadings using reduced volumes

    Science.gov (United States)

    García-Alonso, S.; Pérez-Pastor, R. M.; Archilla-Prat, V.; Rodríguez-Maroto, J.; Izquierdo-Díaz, M.; Rojas, E.; Sanz, D.

    2015-12-01

A simple analytical method using low volumes of solvent for determining selected PAHs and NPAHs in PM samples is presented. The proposed extraction method was compared with pressurized fluid (PFE) and microwave (MC) extraction techniques, and the intermediate precision associated with the analytical measurements was estimated. Extraction by agitation with 8 mL of dichloromethane yielded recoveries above 80% relative to those obtained from PFE extraction. Intermediate precision values between 10 and 20% were obtained, with greater dispersion for compounds with high volatility and low concentration levels. Within the framework of the INTA/CIEMAT research agreement for PM characterization in gas turbine exhaust, the method was applied to the analysis of aluminum foil substrates and quartz filters with mass loadings ranging from 0.02 to 2 mg per sample.

  6. Large-sized cylinder of Bi-2223/Ni meshes composite bulk for current lead

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, M. [Department of Electrical Engineering, Kogakuin University, 2665-1, Nakano, Hachioji, Tokyo 192-0015 (Japan); Yoshizawa, S. [Department of Environmental Systems, Meisei University, 2-1-1, Hodokubo, Hino, Tokyo 191-8506 (Japan)]. E-mail: yoshizaw@es.meisei-u.ac.jp; Hishinuma, Y. [Fusion Engineering Research Center, National Institute for Fusion Science, 322-6, Oroshi, Toki, Gifu 509-5202 (Japan); Nishimura, A. [Fusion Engineering Research Center, National Institute for Fusion Science, 322-6, Oroshi, Toki, Gifu 509-5202 (Japan); Yamazaki, S. [Department of Electrical Engineering, Kogakuin University, 2665-1, Nakano, Hachioji, Tokyo 192-0015 (Japan); Kojima, S. [Department of Electrical Engineering, Kogakuin University, 2665-1, Nakano, Hachioji, Tokyo 192-0015 (Japan)

    2006-10-01

In order to improve the critical current density (J {sub c}) and mechanical properties of Bi-2223 sintered bulk, Ni wire meshes were added to the bulk. For fabricating a large-sized cylindrical Bi-2223/Ni mesh composite, meshes are easier to incorporate than a large number of individual wires. The mesh concentration was 18 x 18 meshes/cm{sup 2}, using Ni wires of 0.25 mm in diameter. The Ni meshes were plated with Ag to a thickness of 0.03 mm. We prepared a cylindrical sintered bulk for a current lead, 32 mm in outer diameter, 2 mm in thickness and 110 mm in length, using a cold isostatic pressing (CIP) method. The samples were sintered at 845 deg. C for 50 h. After treatment again with CIP as an intermediate pressing, the samples were re-sintered. Small specimens were cut from the cylinder for measurement of critical current density (J {sub c}) at 77 K under self-field. The composite cylinder contained both high-J {sub c} and low-J {sub c} portions. Scanning electron microscope (SEM) observation showed that highly c-axis oriented and densely structured Bi-2223 plate-like grains could be formed around the interfacial region between the superconducting oxide and the Ag-plated Ni wires. Structural dislocations were also observed, which lead to the low-J {sub c} portions in the cylinder.

  7. Solution of the two-dimensional compressible Navier-Stokes equations on embedded structured multiblock meshes

    Science.gov (United States)

    Szmelter, J.; Marchant, M. J.; Evans, A.; Weatherill, N. P.

    A cell-vertex finite volume Jameson scheme is used to solve the 2D compressible, laminar, viscous fluid flow equations on locally embedded multiblock meshes. The proposed algorithm is applicable to both the Euler and Navier-Stokes equations. The adaptivity method proves very successful in efficiently improving the accuracy of the solution, and both the mesh generator and the flow equation solver, which are based on a quadtree data structure, offer good flexibility in the treatment of interfaces. It is concluded that the methods under consideration lead to accurate flow solutions.

  8. A propidium monoazide-quantitative PCR method for the detection and quantification of viable Enterococcus faecalis in large-volume samples of marine waters.

    Science.gov (United States)

    Salam, Khaled W; El-Fadel, Mutasem; Barbour, Elie K; Saikaly, Pascal E

    2014-10-01

    The development of rapid detection assays of cell viability is essential for monitoring the microbiological quality of water systems. Coupling propidium monoazide with quantitative PCR (PMA-qPCR) has been successfully applied in different studies for the detection and quantification of viable cells in small-volume samples (0.25-1.00 mL), but it has not been evaluated sufficiently in marine environments or in large-volume samples. In this study, we successfully integrated blue light-emitting diodes for photoactivating PMA and membrane filtration into the PMA-qPCR assay for the rapid detection and quantification of viable Enterococcus faecalis cells in 10-mL samples of marine waters. The assay was optimized in phosphate-buffered saline and seawater, reducing the qPCR signal of heat-killed E. faecalis cells by 4 log10 and 3 log10 units, respectively. Results suggest that the high total dissolved solids concentration (32 g/L) in seawater can reduce PMA activity. Optimal PMA-qPCR standard curves with a 6-log dynamic range and a detection limit of 10^2 cells/mL were generated for quantifying viable E. faecalis cells in marine waters. The developed assay was compared with the standard membrane filter (MF) method by quantifying viable E. faecalis cells in seawater samples exposed to solar radiation. The results of the developed PMA-qPCR assay did not match those of the standard MF method; this difference reflects the different physiological states of E. faecalis cells in seawater. In conclusion, the developed assay is a rapid (∼5 h) method for the quantification of viable E. faecalis cells in marine recreational waters, which should be further improved and tested in different seawater settings.
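
    The 4-log and 3-log signal reductions quoted above are simply base-10 logarithms of the ratio between the untreated and PMA-treated qPCR signals; a minimal sketch of that arithmetic (the function name and example numbers are illustrative, not taken from the study):

```python
import math

def log10_reduction(untreated_signal, treated_signal):
    """Log10 reduction of the qPCR signal after PMA treatment: a 4-log
    reduction means the signal from membrane-compromised (e.g.
    heat-killed) cells is 10^4 times lower than the untreated signal."""
    return math.log10(untreated_signal / treated_signal)

# 1e6 copies untreated vs 1e2 copies after PMA treatment -> 4-log reduction
reduction = log10_reduction(1e6, 1e2)
```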

  9. A propidium monoazide–quantitative PCR method for the detection and quantification of viable Enterococcus faecalis in large-volume samples of marine waters

    KAUST Repository

    Salam, Khaled W.

    2014-08-23

    The development of rapid detection assays of cell viability is essential for monitoring the microbiological quality of water systems. Coupling propidium monoazide with quantitative PCR (PMA-qPCR) has been successfully applied in different studies for the detection and quantification of viable cells in small-volume samples (0.25-1.00 mL), but it has not been evaluated sufficiently in marine environments or in large-volume samples. In this study, we successfully integrated blue light-emitting diodes for photoactivating PMA and membrane filtration into the PMA-qPCR assay for the rapid detection and quantification of viable Enterococcus faecalis cells in 10-mL samples of marine waters. The assay was optimized in phosphate-buffered saline and seawater, reducing the qPCR signal of heat-killed E. faecalis cells by 4 log10 and 3 log10 units, respectively. Results suggest that high total dissolved solid concentration (32 g/L) in seawater can reduce PMA activity. Optimal PMA-qPCR standard curves with a 6-log dynamic range and detection limit of 10^2 cells/mL were generated for quantifying viable E. faecalis cells in marine waters. The developed assay was compared with the standard membrane filter (MF) method by quantifying viable E. faecalis cells in seawater samples exposed to solar radiation. The results of the developed PMA-qPCR assay did not match those of the standard MF method. This difference in the results reflects the different physiological states of E. faecalis cells in seawater. In conclusion, the developed assay is a rapid (∼5 h) method for the quantification of viable E. faecalis cells in marine recreational waters, which should be further improved and tested in different seawater settings. © 2014 Springer-Verlag Berlin Heidelberg.

  10. SeaWiFS technical report series. Volume 4: An analysis of GAC sampling algorithms. A case study

    Science.gov (United States)

    Yeh, Eueng-Nan (Editor); Hooker, Stanford B. (Editor); Mccain, Charles R. (Editor); Fu, Gary (Editor)

    1992-01-01

    The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument will sample at approximately 1 km resolution at nadir, which will be broadcast for reception by real-time ground stations. However, the global data set will be composed of coarser 4 km data, which will be recorded and broadcast to the SeaWiFS Project for processing. Several algorithms for degrading the 1 km data to 4 km data are examined using imagery from the Coastal Zone Color Scanner (CZCS), in an effort to determine which algorithm best preserves the statistical characteristics of the derived products generated from the 1 km data. Of the algorithms tested, subsampling based on a fixed pixel within a 4 x 4 pixel array is judged to yield the most consistent results when compared to the 1 km data products.
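
    The subsampling scheme the report favours — keeping one fixed pixel from every 4 x 4 block — can be sketched in a few lines (the array layout and function name are illustrative assumptions, not from the report):

```python
import numpy as np

def subsample_fixed_pixel(lac, row=0, col=0):
    """Degrade 1 km (LAC-like) imagery to 4 km (GAC-like) imagery by
    keeping the pixel at a fixed position (row, col) of each 4 x 4 block."""
    return lac[row::4, col::4]

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8 x 8 "1 km" scene
gac = subsample_fixed_pixel(img)                # 2 x 2 "4 km" scene
```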

  11. Conversion of a Continuous Flow Cavity Ring-Down Spectrometer to Measure 13C in CO2 Using Static Analyses of Small Volume Grab Samples (Invited)

    Science.gov (United States)

    Rahn, T.; Jordanova, K.; Berryman, E.; van Pelt, A. D.; Marshall, J. D.

    2010-12-01

    Laser-based analyses of concentration and isotopic content allow unprecedented temporal resolution for a number of important atmospheric constituents. Perhaps overlooked is the potential for these tools to also provide analyses in a more traditional "mass spectrometric" mode that is readily deployable in a field setting. Certain sampling regimes (e.g. soil profiles) are not appropriate for continuous sampling because they change slowly and because frequent or continuous sampling disturbs their gradients. We have modified the inlet and plumbing system of a commercial continuous flow cavity ring-down spectrometer in a manner that allows for 13C analyses of CO2 from syringe samples at volumes as small as 25 ml of air with ambient concentrations of CO2. The method employs an external high vacuum pump and a series of programmable micro-valves that allow for evacuation of the long-pass ring-down cell followed by controlled filling, via syringe, of the cavity to optimal sampling pressure and subsequent static analysis of CO2 concentration and 13C/12C ratios. The set-up is field deployable with modest power requirements and allows for near real time analysis in a variety of sampling environments and on-the-fly determination of sampling protocol. In its current configuration, the system provides precision of 1% for CO2 concentration and 0.3 permil for δ13C in replicate analyses of reference air. We have deployed the system to a field laboratory in central New Mexico near a controlled tree mortality research site where post-mortality ecosystem CO2 evolution is being studied. Results from the first field season will be presented and discussed.
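
    For reference, the δ13C values reported here follow the usual per-mil delta notation relative to a standard; a minimal sketch (the VPDB ratio below is a commonly quoted value assumed for illustration, not taken from the abstract):

```python
R_VPDB = 0.0111802  # 13C/12C ratio of the VPDB standard (commonly quoted value)

def delta13C(r_sample, r_standard=R_VPDB):
    """Per-mil delta notation: delta13C = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose 13C/12C ratio exceeds the standard's by 0.1%
# has delta13C = +1 permil
d = delta13C(R_VPDB * 1.001)
```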

  12. Heteronuclear Micro-Helmholtz Coil Facilitates µm-Range Spatial and Sub-Hz Spectral Resolution NMR of nL-Volume Samples on Customisable Microfluidic Chips.

    Science.gov (United States)

    Spengler, Nils; Höfflin, Jens; Moazenzadeh, Ali; Mager, Dario; MacKinnon, Neil; Badilita, Vlad; Wallrabe, Ulrike; Korvink, Jan G

    2016-01-01

    We present a completely revised generation of a modular micro-NMR detector, featuring an active sample volume of ∼ 100 nL, and an improvement of 87% in probe efficiency. The detector is capable of rapidly screening different samples using exchangeable, application-specific, MEMS-fabricated, microfluidic sample containers. In contrast to our previous design, the sample holder chips can be simply sealed with adhesive tape, with excellent adhesion due to the smooth surfaces surrounding the fluidic ports, and so withstand pressures of ∼2.5 bar, while simultaneously enabling high spectral resolution up to 0.62 Hz for H2O, due to its optimised geometry. We have additionally reworked the coil design and fabrication processes, replacing liquid photoresists by dry film stock, whose final thickness does not depend on accurate volume dispensing or precise levelling during curing. We further introduced mechanical alignment structures to avoid time-intensive optical alignment of the chip stacks during assembly, while we exchanged the laser-cut, PMMA spacers by diced glass spacers, which are not susceptible to melting during cutting. Doing so led to an overall simplification of the entire fabrication chain, while simultaneously increasing the yield, due to an improved uniformity of thickness of the individual layers, and in addition, due to more accurate vertical positioning of the wirebonded coils, now delimited by a post base plateau. We demonstrate the capability of the design by acquiring a 1H spectrum of ∼ 11 nmol sucrose dissolved in D2O, where we achieved a linewidth of 1.25 Hz for the TSP reference peak. Chemical shift imaging experiments were further recorded from voxel volumes of only ∼ 1.5 nL, which corresponded to amounts of just 1.5 nmol per voxel for a 1 M concentration. To extend the micro-detector to other nuclei of interest, we have implemented a trap circuit, enabling heteronuclear spectroscopy, demonstrated by two 1H/13C 2D HSQC experiments.

  13. Heteronuclear Micro-Helmholtz Coil Facilitates µm-Range Spatial and Sub-Hz Spectral Resolution NMR of nL-Volume Samples on Customisable Microfluidic Chips.

    Directory of Open Access Journals (Sweden)

    Nils Spengler

    Full Text Available We present a completely revised generation of a modular micro-NMR detector, featuring an active sample volume of ∼ 100 nL, and an improvement of 87% in probe efficiency. The detector is capable of rapidly screening different samples using exchangeable, application-specific, MEMS-fabricated, microfluidic sample containers. In contrast to our previous design, the sample holder chips can be simply sealed with adhesive tape, with excellent adhesion due to the smooth surfaces surrounding the fluidic ports, and so withstand pressures of ∼2.5 bar, while simultaneously enabling high spectral resolution up to 0.62 Hz for H2O, due to its optimised geometry. We have additionally reworked the coil design and fabrication processes, replacing liquid photoresists by dry film stock, whose final thickness does not depend on accurate volume dispensing or precise levelling during curing. We further introduced mechanical alignment structures to avoid time-intensive optical alignment of the chip stacks during assembly, while we exchanged the laser-cut, PMMA spacers by diced glass spacers, which are not susceptible to melting during cutting. Doing so led to an overall simplification of the entire fabrication chain, while simultaneously increasing the yield, due to an improved uniformity of thickness of the individual layers, and in addition, due to more accurate vertical positioning of the wirebonded coils, now delimited by a post base plateau. We demonstrate the capability of the design by acquiring a 1H spectrum of ∼ 11 nmol sucrose dissolved in D2O, where we achieved a linewidth of 1.25 Hz for the TSP reference peak. Chemical shift imaging experiments were further recorded from voxel volumes of only ∼ 1.5 nL, which corresponded to amounts of just 1.5 nmol per voxel for a 1 M concentration. To extend the micro-detector to other nuclei of interest, we have implemented a trap circuit, enabling heteronuclear spectroscopy, demonstrated by two 1H/13C 2D HSQC experiments.

  14. Development of a high-volume air sampler for nanoparticles.

    Science.gov (United States)

    Hata, M; Thongyen, T; Bao, L; Hoshino, A; Otani, Y; Ikeda, T; Furuuchi, M

    2013-02-01

    As a tool to evaluate the characteristics of aerosol nanoparticles, a high-volume air sampler for the collection of nanoparticles was developed based on inertial filter technology. Instead of the webbed fiber geometry of the existing inertial filter, wire mesh screens were newly devised, alternately layered using spacing sheets with circular holes aligned to provide multi-circular nozzles, and the separation performance of the filter was investigated experimentally. The separation performance was evaluated for a single-nozzle inertial filter at different filtration velocities. A webbed stainless steel fiber mat attached to the inlet surface of the developed inertial filter was discussed as a pre-separator suppressing the bouncing of particles on the meshes. The separation performance of a triple-nozzle inertial filter was also discussed to investigate the influence of scale-up on the separation performance of a multi-nozzle inertial filter. The influence of particle loading on the pressure drop and separation performance was discussed. A supplemental inlet for nanoparticle collection applied to an existing portable high-volume air sampler was devised, and its consistency with other types of existing samplers was discussed based on the sampling of ambient particles. The layered-mesh inertial filter with a webbed stainless steel fiber mat as a pre-separator showed good performance in the separation of particles, with a d_p50 ranging from 150 to 190 nm, while keeping the influence of loaded particles small. The developed layered-mesh inertial filter was successfully applied to the collection of particles at a d_p50 ∼ 190 nm, consistent with the results from existing samplers.

  15. The Atlas3D project -- I. A volume-limited sample of 260 nearby early-type galaxies: science goals and selection criteria

    CERN Document Server

    Cappellari, Michele; Krajnovic, Davor; McDermid, Richard M; Scott, Nicholas; Kleijn, G A Verdoes; Young, Lisa M; Alatalo, Katherine; Bacon, R; Blitz, Leo; Bois, Maxime; Bournaud, Frederic; Bureau, M; Davies, Roger L; Davis, Timothy A; de Zeeuw, P T; Duc, Pierre-Alain; Khochfar, Sadegh; Kuntschner, Harald; Lablanche, Pierre-Yves; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Sarzi, Marc; Serra, Paolo; Weijmans, Anne-Marie

    2010-01-01

    The Atlas3D project is a multi-wavelength survey combined with a theoretical modeling effort. The observations span from the radio to the millimeter and optical, and provide multi-colour imaging, two-dimensional kinematics of the atomic (HI), molecular (CO) and ionized gas (Hbeta, [OIII] and [NI]), together with the kinematics and population of the stars (Hbeta, Fe5015 and Mgb), for a carefully selected, volume-limited (1.16*10^5 Mpc^3) sample of 260 early-type (elliptical E and lenticular S0) galaxies (ETGs). The models include semi-analytic, N-body binary mergers and cosmological simulations of galaxy formation. Here we present the science goals for the project and introduce the galaxy sample and the selection criteria. The sample consists of nearby ETGs with stellar mass M > 6*10^9 M_Sun. We analyze possible selection biases and we conclude that the parent sample is essentially complete and statistically representative of the nearby galaxy population. We present the size-luminosity relation for the spirals and ETGs and show that ...

  16. Remedial investigation sampling and analysis plan for J-Field, Aberdeen Proving Ground, Maryland: Volume 2, Quality Assurance Project Plan

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, S.; Martino, L.; Patton, T.

    1995-03-01

    J-Field encompasses about 460 acres at the southern end of the Gunpowder Neck Peninsula in the Edgewood Area of APG (Figure 2.1). Since World War II, the Edgewood Area of APG has been used to develop, manufacture, test, and destroy chemical agents and munitions. These materials were destroyed at J-Field by open burning and open detonation (OB/OD). For the purposes of this project, J-Field has been divided into eight geographic areas or facilities that are designated as areas of concern (AOCs): the Toxic Burning Pits (TBP), the White Phosphorus Burning Pits (WPP), the Riot Control Burning Pit (RCP), the Robins Point Demolition Ground (RPDG), the Robins Point Tower Site (RPTS), the South Beach Demolition Ground (SBDG), the South Beach Trench (SBT), and the Prototype Building (PB). The scope of this project is to conduct a remedial investigation/feasibility study (RI/FS) and ecological risk assessment to evaluate the impacts of past disposal activities at the J-Field site. Sampling for the RI will be carried out in three stages (I, II, and III) as detailed in the FSP. A phased approach will be used for the J-Field ecological risk assessment (ERA).

  17. Ultra-trace plutonium determination in small volume seawater by sector field inductively coupled plasma mass spectrometry with application to Fukushima seawater samples.

    Science.gov (United States)

    Bu, Wenting; Zheng, Jian; Guo, Qiuju; Aono, Tatsuo; Tagami, Keiko; Uchida, Shigeo; Tazoe, Hirofumi; Yamada, Masatoshi

    2014-04-11

    Long-term monitoring of Pu isotopes in seawater is required for assessing Pu contamination in the marine environment from the Fukushima Dai-ichi Nuclear Power Plant accident. In this study, we established an accurate and precise analytical method based on anion-exchange chromatography and SF-ICP-MS. This method was able to determine Pu isotopes in seawater samples with small volumes (20-60 L). The U decontamination factor was 3×10^7-1×10^8, which provided sufficient removal of interfering U from the seawater samples. The estimated limits of detection for 239Pu and 240Pu were 0.11 fg mL^-1 and 0.08 fg mL^-1, respectively, which corresponded to 0.01 mBq m^-3 for 239Pu and 0.03 mBq m^-3 for 240Pu when a 20 L volume of seawater was measured. We achieved good precision (2.9%) and accuracy (0.8%) for measurement of the 240Pu/239Pu atom ratio in the standard Pu solution with a 239Pu concentration of 11 fg mL^-1 and a 240Pu concentration of 2.7 fg mL^-1. Seawater reference materials were used for the method validation, and both the 239+240Pu activities and 240Pu/239Pu atom ratios agreed well with the expected values. Surface and bottom seawater samples collected off Fukushima in the western North Pacific since March 2011 were analyzed. Our results suggested that there was no significant variation of the Pu distribution in seawater in the investigated areas compared to the distribution before the accident.

  18. HULK - Simple and fast generation of structured hexahedral meshes for improved subsurface simulations

    Science.gov (United States)

    Jansen, Gunnar; Sohrabi, Reza; Miller, Stephen A.

    2017-02-01

    Short for Hexahedra from Unique Location in (K)convex Polyhedra - HULK is a simple and efficient algorithm to generate hexahedral meshes from generic STL files describing a geological model to be used in simulation tools based on the finite element, finite volume or finite difference methods. Using binary space partitioning of the input geometry and octree refinement on the grid, a successive increase in accuracy of the mesh is achieved. We present the theoretical basis as well as the implementation procedure with three geological models with varying complexity, providing the basis on which the algorithm is evaluated. HULK generates high accuracy discretizations with cell counts suitable for state-of-the-art subsurface simulators and provides a new method for hexahedral mesh generation in geological settings.
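
    The octree refinement step HULK applies to its grid can be illustrated with a toy uniform-to-adaptive subdivision; the cell encoding, predicate, and depth limit below are illustrative assumptions, not HULK's actual data structures:

```python
def refine(cell, intersects, depth, max_depth):
    """Octree refinement: recursively split any cell that intersects the
    input geometry until max_depth, leaving coarse cells elsewhere.
    A cell is (x0, y0, z0, size); `intersects` is a user-supplied predicate."""
    if depth == max_depth or not intersects(cell):
        return [cell]
    x0, y0, z0, size = cell
    h = size / 2.0
    leaves = []
    for dx in (0.0, h):
        for dy in (0.0, h):
            for dz in (0.0, h):
                leaves += refine((x0 + dx, y0 + dy, z0 + dz, h),
                                 intersects, depth + 1, max_depth)
    return leaves

# Toy geometry: refine around the origin inside a [-1, 1]^3 root cell
contains_origin = lambda c: all(v <= 0.0 < v + c[3] for v in c[:3])
cells = refine((-1.0, -1.0, -1.0, 2.0), contains_origin, 0, 3)
```

    Only the cells touching the geometry are subdivided, so the leaf count grows with surface area rather than volume — the property that keeps cell counts suitable for subsurface simulators.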

  19. A nonhydrostatic unstructured-mesh soundproof model for simulation of internal gravity waves

    Science.gov (United States)

    Smolarkiewicz, Piotr; Szmelter, Joanna

    2011-12-01

    A semi-implicit edge-based unstructured-mesh model is developed that integrates nonhydrostatic soundproof equations, inclusive of anelastic and pseudo-incompressible systems of partial differential equations. The model builds on the nonoscillatory forward-in-time MPDATA approach using finite-volume discretization and unstructured meshes with arbitrarily shaped cells. Implicit treatment of gravity waves benefits both accuracy and stability of the model. The unstructured-mesh solutions are compared to equivalent structured-grid results for the intricate, multiscale internal-wave phenomenon of non-Boussinesq amplification and breaking of deep stratospheric gravity waves. The departures of the anelastic and pseudo-incompressible results are quantified in reference to a recent asymptotic theory (Achatz et al. 2010, J. Fluid Mech., 663, 120-147).

  20. Multiple Staggered Mesh Ewald: Boosting the Accuracy of the Smooth Particle Mesh Ewald Method

    CERN Document Server

    Wang, Han; Fang, Jun

    2016-01-01

    The smooth particle mesh Ewald (SPME) method is the standard method for computing the electrostatic interactions in molecular simulations. In this work, the multiple staggered mesh Ewald (MSME) method is proposed to boost the accuracy of the SPME method. Unlike the SPME, which achieves higher accuracy by refining the mesh, the MSME improves the accuracy by averaging the standard SPME forces computed on, e.g. $M$, staggered meshes. We prove, from a theoretical perspective, that the MSME is as accurate as the SPME, but uses $M^2$ times fewer mesh points in a certain parameter range. In the complementary parameter range, the MSME is as accurate as the SPME with twice the interpolation order. The theoretical conclusions are numerically validated both by a uniform, uncorrelated charge system and by a three-point-charge water system that is widely used as a solvent for bio-macromolecules.
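
    The staggering idea can be demonstrated with a one-dimensional toy stand-in for the mesh step (this illustrates only the averaging principle, not an SPME implementation; all names are illustrative):

```python
import numpy as np

def mesh_value(f, x, shift, h):
    """Evaluate f at x snapped to a mesh of spacing h shifted by `shift`
    (a crude stand-in for the mesh interpolation inside SPME)."""
    return f(np.round((x - shift) / h) * h + shift)

def staggered_average(f, x, h, M):
    """Average the mesh result over M meshes staggered by h/M — the MSME
    idea that staggering cancels the leading mesh-discretization error."""
    return np.mean([mesh_value(f, x, m * h / M, h) for m in range(M)])

# For f(x) = x at x = 0.3 on a unit mesh, a single mesh snaps to 0.0
# (error 0.3), while averaging 4 staggered meshes gives 0.375 (error 0.075).
single = mesh_value(lambda v: v, 0.3, 0.0, 1.0)
averaged = staggered_average(lambda v: v, 0.3, 1.0, 4)
```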

  1. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Full Text Available Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires the use of multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time consuming and error prone due to the tortuous geometry and weak signal in small blood vessels. To overcome errors and accelerate the image processing time, we introduce an automatic image processing pipeline for constructing subject specific computational meshes for the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to partial volume effect and finite resolution, the computational meshes intersect with each other at respective interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational mesh were found to be in physiological ranges. In conclusion, we present an automatic image processing pipeline to automate the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis and for surgery planning with haptics.

  2. Immediate and perioperative outcomes of polypropylene mesh in pelvic floor repair in a predominantly obese population.

    Science.gov (United States)

    Adedipe, T O; Vine, S J

    2010-01-01

    This retrospective study was conducted to identify perioperative and postoperative complications associated with the use of polypropylene mesh for pelvic floor repair in a UK district general hospital, in a predominantly obese population. The sample size was 27 women, with data retrieved from records. Total mesh was used in 37.1%, an isolated anterior mesh in 44.4%, and an isolated posterior mesh in 18.5%. There was a high incidence of obese (BMI ≥ 30.0 kg/m^2) women (66.67%), the highest recorded thus far. A high proportion of the women (44.4%) were also over the age of 65 years, with attendant comorbidities. The age range was 45-77 years. Complications included mesh exposure (7.4%), catheterization at discharge (7.4%), bladder injury during dissection (3.7%) and recurrent prolapse (7.4%). In carefully selected individuals, polypropylene mesh for prolapse repair appears to be a safe technique to correct pelvic organ prolapse. However, long-term follow-up is needed, with further research.

  3. PROSPECTIVE STUDY ON DARNING AND LICHTENSTEIN MESH HERNIOPLASTY (LMH) IN INGUINAL HERNIA REPAIR

    Directory of Open Access Journals (Sweden)

    Affin

    2016-01-01

    Full Text Available INTRODUCTION: A prospective study on darning and Lichtenstein mesh hernioplasty in inguinal hernia repair, covering 61 cases of inguinal hernia treated by either open inguinal hernia mesh repair (Lichtenstein) or darning repair. The study was conducted with the objective of comparing the effectiveness of these procedures and their complications, if any. 61 cases of inguinal hernia admitted to Yenepoya Medical College Hospital, Mangalore were selected on the basis of a non-probability (prospective) sampling method. All patients with uncomplicated direct and indirect hernias treated by darning or mesh repair were included. After preoperative preparation they were randomly assigned to darning or mesh repair. The age/sex incidence, mode of presentation, precipitating factors, surgical treatment and postoperative complications were all evaluated and compared with standard published literature. Postoperative complications were reported in 13.9% of patients; the complication rate was higher after mesh repair than after darning. Seroma was the most common complication, followed by funiculitis and wound infection. There has been one recurrence in each of the two groups to date. Darn repair is an equally effective and much less costly treatment for inguinal hernia than mesh repair, which carried a higher risk of infection.

  4. Novel system using microliter order sample volume for measuring arterial radioactivity concentrations in whole blood and plasma for mouse PET dynamic study.

    Science.gov (United States)

    Kimura, Yuichi; Seki, Chie; Hashizume, Nobuya; Yamada, Takashi; Wakizaka, Hidekatsu; Nishimoto, Takahiro; Hatano, Kentaro; Kitamura, Keishi; Toyama, Hiroshi; Kanno, Iwao

    2013-11-21

    This study aimed to develop a new system, named CD-Well, for mouse PET dynamic study. CD-Well allows the determination of time-activity curves (TACs) for arterial whole blood and plasma using 2-3 µL of blood per sample; the minute sample size is ideal for studies in small animals. The system has the following merits: (1) it measures the volume and radioactivity of whole blood and plasma separately; (2) it allows measurements at 10 s intervals to capture initial rapid changes in the TAC; and (3) it is compact and easy to handle, minimizing blood loss from sampling as well as delay and dispersion of the TAC. CD-Well has 36 U-shaped channels. A drop of blood is sampled into the opening of the channel and stored there. After serial sampling is completed, CD-Well is centrifuged and scanned using a flatbed scanner to define the regions of plasma and blood cells. The length measured is converted to volume because the channels have a precise and uniform cross section. Then, CD-Well is exposed to an imaging plate to measure radioactivity. Finally, radioactivity concentrations are computed. We evaluated the performance of CD-Well in in vitro measurement and in vivo 18F-fluorodeoxyglucose and [11C]2-carbomethoxy-3β-(4-fluorophenyl)tropane studies. In the in vitro evaluation, per cent differences (mean ± SE) from manual measurement were 4.4±3.6% for whole blood and 4.0±3.5% for plasma across the typical range of radioactivity measured in mouse dynamic study. In the in vivo studies, reasonable TACs were obtained. The peaks were captured well, and the time courses coincided well with the TAC derived from PET imaging of the heart chamber. The total blood loss was less than 200 µL, which had no physiological effect on the mice. CD-Well demonstrates satisfactory performance, and is useful for mouse PET dynamic study.
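
    The length-to-volume conversion CD-Well relies on — a uniform channel cross section makes the measured plasma or whole-blood segment length proportional to volume — reduces to simple arithmetic; the cross-section value and function names below are hypothetical:

```python
CROSS_SECTION_MM2 = 0.5  # hypothetical uniform channel cross section, in mm^2

def activity_concentration(length_mm, counts, efficiency=1.0):
    """Radioactivity concentration in a CD-Well-style channel segment:
    volume follows directly from segment length because the channel
    cross section is uniform (1 mm^3 == 1 uL)."""
    volume_ul = length_mm * CROSS_SECTION_MM2
    return counts / efficiency / volume_ul  # counts per uL

# A 4 mm plasma column holding 100 counts -> 2 uL -> 50 counts/uL
c = activity_concentration(4.0, 100.0)
```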

  5. Laparoscopic-assisted Ventral Hernia Repair: Primary Fascial Repair with Polyester Mesh versus Polyester Mesh Alone.

    Science.gov (United States)

    Karipineni, Farah; Joshi, Priya; Parsikia, Afshin; Dhir, Teena; Joshi, Amit R T

    2016-03-01

    Laparoscopic-assisted ventral hernia repair (LAVHR) with mesh is well established as the preferred technique for hernia repair. We sought to determine whether primary fascial closure and/or overlap of the mesh reduced recurrence and/or complications. We conducted a retrospective review of 57 LAVHR patients using polyester composite mesh between August 2010 and July 2013. They were divided into mesh-only (nonclosure) and primary fascial closure with mesh (closure) groups. Patient demographics, prior surgical history, mesh overlap, complications, and recurrence rates were compared. Thirty-nine (68%) of 57 patients were in the closure group and 18 (32%) in the nonclosure group. Mean defect sizes were 15.5 and 22.5 cm^2, respectively. Participants were followed for a mean of 1.3 years [standard deviation (SD) = 0.7]. Recurrence rates were 2/39 (5.1%) in the closure group and 1/18 (5.6%) in the nonclosure group (P = 0.947). There were no major postoperative complications in the nonclosure group. The closure group experienced four (10.3%) complications. This was not a statistically significant difference (P = 0.159). The median mesh-to-hernia ratio for all repairs was 15.2 (surface area) and 3.9 (diameter). Median length of stay was 14.5 hours (1.7-99.3) for patients with nonclosure and 11.9 hours (6.9-90.3 hours) for patients with closure (P = 0.625). In conclusion, this is one of the largest series of LAVHR exclusively using polyester dual-sided mesh. Our recurrence rate was about 5 per cent. Significant mesh overlap is needed to achieve such low recurrence rates. Primary closure of hernias seems less important than adequate mesh overlap in preventing recurrence after LAVHR.

  6. A generalized finite difference method for modeling cardiac electrical activation on arbitrary, irregular computational meshes.

    Science.gov (United States)

    Trew, Mark L; Smaill, Bruce H; Bullivant, David P; Hunter, Peter J; Pullan, Andrew J

    2005-12-01

    A generalized finite difference (GFD) method is presented that can be used to solve the bi-domain equations modeling cardiac electrical activity. Classical finite difference methods have been applied by many researchers to the bi-domain equations. However, these methods suffer from the limitation of requiring computational meshes that are structured and orthogonal. Finite element or finite volume methods enable the bi-domain equations to be solved on unstructured meshes, although implementations of such methods do not always cater for meshes with varying element topology. The GFD method solves the bi-domain equations on arbitrary and irregular computational meshes without any need to specify element basis functions. The method is useful as it can be easily applied to activation problems using existing meshes that have originally been created for use by finite element or finite difference methods. In addition, the GFD method employs an innovative approach to enforcing nodal and non-nodal boundary conditions. The GFD method performs effectively for a range of two and three-dimensional test problems and when computing bi-domain electrical activation moving through a fully anisotropic three-dimensional model of canine ventricles.

  7. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    Science.gov (United States)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-04-01

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.
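    The clustering mechanism can be illustrated with a much simpler stand-in for the paper's weighted condition number optimization: a 1D damped relaxation whose weight function, computed from a level set, concentrates nodes near the interface. All names, the weight profile, and the iteration counts are illustrative choices, not the authors' scheme.

```python
import numpy as np

def relax(x, weight, iters=4000):
    """Damped Jacobi relaxation of node positions: each interior node moves
    toward the weighted average of its neighbours, so edges with a large
    weight shrink (weighted equidistribution: h_i ~ 1/w_i at steady state)."""
    for _ in range(iters):
        w = weight(0.5 * (x[:-1] + x[1:]))   # weight at each edge midpoint
        wl, wr = w[:-1], w[1:]               # left/right edge weight per node
        x[1:-1] = 0.5 * x[1:-1] + 0.5 * (wl * x[:-2] + wr * x[2:]) / (wl + wr)
    return x

phi = lambda x: x - 0.5                       # level set: interface at x = 0.5
weight = lambda x: 1.0 + 9.0 * np.exp(-(phi(x) / 0.05) ** 2)

x = relax(np.linspace(0.0, 1.0, 41), weight)
h = np.diff(x)
print(h.min() < h.max() / 3)  # cells near the interface are much smaller → True
```

    At steady state the weighted edge lengths equalize, so a weight that peaks at the zero level set yields exactly the interface clustering described above, here with roughly a 10:1 cell-size contrast.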

  8. Coupling codes including deformation exchange suitable for non conforming and unstructured large meshes

    Energy Technology Data Exchange (ETDEWEB)

    Duplex, B., E-mail: benjamin.duplex@gmail.fr [CEA, DEN, DANS/DM2S/STMF, Cadarache, F-13108 Saint-Paul-lez-Durance (France); Grandotto, M. [CEA, DEN, DANS/DM2S/STMF, Cadarache, F-13108 Saint-Paul-lez-Durance (France); Perdu, F. [CEA, DEN, DANS/DM2S/STMF, 17 rue des Martyrs, F-38054 Grenoble (France); Daniel, M.; Gesquiere, G. [Aix-Marseille University, CNRS, LSIS, UMR 7296, case postale 925, 163 Avenue de Luminy, F-13288 Marseille cedex 09 (France)

    2012-12-15

    Highlights: ► A function of deformation transfer on meshes is proposed. ► Large meshes sharing a common geometry or common borders are treated. ► We show the deformation transfer impact on simulation results. - Abstract: The paper proposes a method to couple computation codes and focuses on the transfer of mesh deformations between these codes. The deformations can concern a single object or different objects in contact along common boundaries. The method is designed to allow a wide range of mesh types and to manage large volumes of data. To reach these objectives, a mesh simplification step is first achieved and is followed by the deformation characterisation through a continuous function defined by a network of compact support radial basis functions (RBFs). A test case featuring adjacent geometries in a material testing reactor (MTR) is presented to assess the method. Two solids close together are subject to a deformation by a thermal dilatation, and are cooled by a liquid flowing between them. The results demonstrate the effectiveness of the method and show how the deformation transfer modifies the thermal-hydraulic solution.
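    A minimal sketch of the core ingredient, under stated assumptions (the paper's RBF network, support selection, and simplification step are more elaborate): a deformation sampled at coarse, simplified-mesh nodes is interpolated with compactly supported Wendland C2 radial basis functions and then evaluated at fine-mesh nodes. The point sets and the linear test field are made up for illustration.

```python
import numpy as np

def wendland_c2(r, support):
    """Wendland C2 compactly supported RBF: (1 - q)^4 (4q + 1) for q = r/s < 1."""
    q = np.clip(r / support, 0.0, 1.0)
    return (1.0 - q)**4 * (4.0 * q + 1.0)

def fit_rbf(centers, displacements, support):
    # Solve Phi @ alpha = displacements for the RBF coefficients.
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(wendland_c2(r, support), displacements)

def eval_rbf(points, centers, alpha, support):
    r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return wendland_c2(r, support) @ alpha

rng = np.random.default_rng(1)
coarse = rng.random((30, 2))                    # simplified-mesh nodes
disp = np.column_stack([0.01 * coarse[:, 0], 0.02 * coarse[:, 1]])  # sampled field
alpha = fit_rbf(coarse, disp, support=0.5)
fine = rng.random((200, 2))                     # fine-mesh nodes
transferred = eval_rbf(fine, coarse, alpha, support=0.5)
# Interpolation is exact at the coarse nodes:
print(np.allclose(eval_rbf(coarse, coarse, alpha, 0.5), disp))  # → True
```

    Compact support keeps the interpolation matrix sparse in principle (here it is built densely for brevity), which is what makes this representation viable for the large meshes the paper targets.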

  9. Extension of a dynamic headspace multi-volatile method to milliliter injection volumes with full sample evaporation: Application to green tea.

    Science.gov (United States)

    Ochiai, Nobuo; Sasamoto, Kikuo; Tsunokawa, Jun; Hoffmann, Andreas; Okanoya, Kazunori; MacNamara, Kevin

    2015-11-20

    An extension of multi-volatile method (MVM) technology using the combination of a standard dynamic headspace (DHS) configuration, and a modified DHS configuration incorporating an additional vacuum module, was developed for milliliter injection volumes of aqueous sample with full sample evaporation. A prior step involved investigation of water management by weighing of the water residue in the adsorbent trap. The extended MVM for 1 mL aqueous sample consists of five different DHS method parameter sets, including choice of the replaceable adsorbent trap. An initial two DHS sampling sets at 25°C with the standard DHS configuration using a carbon-based adsorbent trap target very volatile solutes with high vapor pressure (>10 kPa) and volatile solutes with moderate vapor pressure (1-10 kPa). Three subsequent DHS sampling sets at 80°C with the modified DHS configuration using a Tenax TA trap target solutes with low vapor pressure (<1 kPa). The method gave high recoveries (>88%) for 17 test aroma compounds and moderate recoveries (44-71%) for 4 test compounds. The method showed good linearity (r² > 0.9913) and high sensitivity (limit of detection: 0.1-0.5 ng/mL) even with MS scan mode. The improved sensitivity of the method was demonstrated with analysis of a wide variety of aroma compounds in brewed green tea. Compared to the original 100 μL MVM procedure, this extension to 1 mL MVM allowed detection of nearly twice the number of aroma compounds, including 18 potent aroma compounds from top-note to base-note (e.g. 2,3-butanedione, coumarin, furaneol, guaiacol, cis-3-hexenol, linalool, maltol, methional, 3-methyl butanal, 2,3,5-trimethyl pyrazine, and vanillin). Sensitivity for 23 compounds improved by a factor of 3.4-15 under 1 mL MVM conditions.

  10. The Cool ISM in Elliptical Galaxies. II. Gas Content in the Volume - Limited Sample and Results from the Combined Elliptical and Lenticular Surveys

    CERN Document Server

    Welch, Gary A; Young, Lisa M

    2010-01-01

    We report new observations of atomic and molecular gas in a volume-limited sample of elliptical galaxies. Combining the elliptical sample with an earlier and similar lenticular one, we show that cool gas detection rates are very similar among low luminosity E and S0 galaxies but are much higher among luminous S0s. Using the combined sample we revisit the correlation between cool gas mass and blue luminosity which emerged from our lenticular survey, finding strong support for previous claims that the molecular gas in ellipticals and lenticulars has different origins. Unexpectedly, however, and contrary to earlier claims, the same is not true for atomic gas. We speculate that both the AGN feedback and merger paradigms might offer explanations for differences in detection rates, and might also point towards an understanding of why the two gas phases could follow different evolutionary paths in Es and S0s. Finally we present a new and puzzling discovery concerning the global mix of atomic and molecular gas in ear...

  11. Integrated Spectroscopy of the Herschel Reference Survey. The spectral line properties of a volume-limited, K-band selected sample of nearby galaxies

    CERN Document Server

    Boselli, A; Cortese, L; Gavazzi, G; Buat, V

    2012-01-01

    We present long-slit integrated spectroscopy of 238 late-type galaxies belonging to the Herschel Reference Survey, a volume-limited sample representative of the nearby universe. This sample has a unique legacy value since it is ideally defined for any statistical study of the multifrequency properties of galaxies spanning a large range in morphological type and luminosity. The spectroscopic observations cover the spectral range 3600-6900 A at a resolution R ~ 1000 and are thus suitable for separating the underlying absorption from the emission of the Hbeta line as well as the two [NII] lines from the Halpha emission. We measure the fluxes and the equivalent widths of the strongest emission lines ([OII], Hbeta, [OIII], [NII], Halpha, and [SII]). The data are used to study the distribution of the equivalent width of all the emission lines, of the Balmer decrement C(Hbeta) and of the observed underlying Balmer absorption under Hbeta in this sample. Combining these new spectroscopic data with those available at other f...

  12. Versatile, ultra-low sample volume gas analyzer using a rapid, broad-tuning ECQCL and a hollow fiber gas cell

    Energy Technology Data Exchange (ETDEWEB)

    Kriesel, Jason M.; Makarem, Camille N.; Phillips, Mark C.; Moran, James J.; Coleman, Max; Christensen, Lance; Kelly, James F.

    2017-05-05

    We describe a versatile mid-infrared (Mid-IR) spectroscopy system developed to measure the concentration of a wide range of gases with an ultra-low sample size. The system combines a rapidly-swept external cavity quantum cascade laser (ECQCL) with a hollow fiber gas cell. The ECQCL has sufficient spectral resolution and reproducibility to measure gases with narrow features (e.g., water, methane, ammonia, etc.), and also the spectral tuning range needed to measure volatile organic compounds (VOCs), (e.g., aldehydes, ketones, hydrocarbons), sulfur compounds, chlorine compounds, etc. The hollow fiber is a capillary tube having an internal reflective coating optimized for transmitting the Mid-IR laser beam to a detector. Sample gas introduced into the fiber (e.g., internal volume = 0.6 ml) interacts strongly with the laser beam, and despite relatively modest path lengths (e.g., L ~ 3 m), the requisite quantity of sample needed for sensitive measurements can be significantly less than what is required using conventional IR laser spectroscopy systems. Example measurements are presented including quantification of VOCs relevant for human breath analysis with a sensitivity of ~2 picomoles at a 1 Hz data rate.

  13. Monte Carlo simulations incorporating Mie calculations of light transport in tissue phantoms: Examination of photon sampling volumes for endoscopically compatible fiber optic probes

    Energy Technology Data Exchange (ETDEWEB)

    Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.

    1996-04-01

    Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two, nearly-adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
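    The basic sampling machinery behind such simulations can be sketched generically (this is not the authors' code, which models the fiber geometry, numerical aperture, refraction, and anisotropic scattering): free paths are drawn from an exponential distribution with the total interaction coefficient, and each interaction is split into scattering or absorption. The optical coefficients below are illustrative values.

```python
import math
import random

def mean_path_to_absorption(mu_s, mu_a, n_photons, rng):
    """Isotropic random walk: step lengths ~ Exp(mu_t) with mu_t = mu_s + mu_a;
    at each interaction the photon is absorbed with probability mu_a/mu_t.
    Returns the mean total pathlength before absorption (theory: 1/mu_a)."""
    mu_t = mu_s + mu_a
    total = 0.0
    for _ in range(n_photons):
        path = 0.0
        while True:
            # Free path sampled from the exponential attenuation law.
            path += -math.log(1.0 - rng.random()) / mu_t
            if rng.random() < mu_a / mu_t:   # absorption terminates the walk
                break
        total += path
    return total / n_photons

rng = random.Random(42)
mean_path = mean_path_to_absorption(mu_s=10.0, mu_a=0.5, n_photons=20000, rng=rng)
print(abs(mean_path - 1.0 / 0.5) < 0.1)  # close to 1/mu_a = 2.0 → True
```

    Checking the simulated mean pathlength against the analytic value 1/μa is a standard sanity test before adding the geometric details (fibers, interfaces, phase functions) that the paper validates against microsphere suspensions.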

  14. Connectivity editing for quad-dominant meshes

    KAUST Repository

    Peng, Chihan

    2013-08-01

    We propose a connectivity editing framework for quad-dominant meshes. In our framework, the user can edit the mesh connectivity to control the location, type, and number of irregular vertices (with more or fewer than four neighbors) and irregular faces (non-quads). We provide a theoretical analysis of the problem, discuss what edits are possible and impossible, and describe how to implement an editing framework that realizes all possible editing operations. In the results, we show example edits and illustrate the advantages and disadvantages of different strategies for quad-dominant mesh design. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
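    The quantities such a framework controls are easy to compute. A minimal sketch (assuming a hypothetical representation of quads as 4-tuples of vertex ids; pure quads only, no non-quad faces) that finds irregular vertices by edge valence:

```python
from collections import defaultdict

def irregular_vertices(quads):
    """Return {vertex: valence} for vertices whose edge valence is not 4.
    Valence = number of distinct mesh edges incident to the vertex.
    Note: boundary vertices naturally have valence < 4; practical tools
    usually restrict the check to interior vertices."""
    edges = set()
    for a, b, c, d in quads:
        for u, v in ((a, b), (b, c), (c, d), (d, a)):
            edges.add((min(u, v), max(u, v)))
    valence = defaultdict(int)
    for u, v in edges:
        valence[u] += 1
        valence[v] += 1
    return {v: k for v, k in valence.items() if k != 4}

# 2x2 grid of quads over a 3x3 vertex lattice (vertex ids 0..8, row-major):
quads = [(0, 1, 4, 3), (1, 2, 5, 4), (3, 4, 7, 6), (4, 5, 8, 7)]
irr = irregular_vertices(quads)
print(4 in irr)   # centre vertex has valence 4, so it is regular → False
print(len(irr))   # the 8 boundary vertices all have valence < 4 → 8
```

    Connectivity edits of the kind the paper describes move, merge, or cancel exactly these irregular vertices while keeping the rest of the mesh quadrilateral.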

  15. GRChombo: Numerical relativity with adaptive mesh refinement

    Science.gov (United States)

    Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

    2015-12-01

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  16. Retrofitting Masonry Walls with Carbon Mesh

    Directory of Open Access Journals (Sweden)

    Patrick Bischof

    2014-01-01

    Static-cyclic shear load tests and tensile tests on retrofitted masonry walls were conducted at UAS Fribourg for an evaluation of the newly developed retrofitting system, the S&P ARMO-System. This retrofitting system consists of a composite of carbon mesh embedded in a specially adapted high quality spray mortar. It can be applied with established construction techniques using traditional construction materials. The experimental study has shown that masonry walls reinforced by this retrofitting system reach a similar strength and a higher ductility than retrofits by means of bonded carbon fiber reinforced polymer sheets. Hence, the retrofitting system using carbon fiber meshes embedded in a high quality mortar constitutes a good option for static or seismic retrofits or reinforcements for masonry walls. However, the experimental studies also revealed that the mechanical anchorage of carbon mesh may be delicate depending on its design.

  17. NASA Lewis Meshed VSAT Workshop meeting summary

    Science.gov (United States)

    Ivancic, William

    1993-11-01

    NASA Lewis Research Center's Space Electronics Division (SED) hosted a workshop to address specific topics related to future meshed very small-aperture terminal (VSAT) satellite communications networks. The ideas generated by this workshop will help to identify potential markets and focus technology development within the commercial satellite communications industry and NASA. The workshop resulted in recommendations concerning these principal points of interest: the window of opportunity for a meshed VSAT system; system availability; ground terminal antenna sizes; a recommended multifrequency time division multiple access (TDMA) uplink; a packet switch design concept for narrowband; and fault tolerance design concepts. This report presents a summary of group presentations and discussion associated with the technological, economic, and operational issues of meshed VSAT architectures that utilize processing satellites.

  18. Mesh saliency with adaptive local patches

    Science.gov (United States)

    Nouri, Anass; Charrier, Christophe; Lézoray, Olivier

    2015-03-01

    3D object shapes (represented by meshes) include both areas that attract the visual attention of human observers and others that are less attractive, or not attractive at all. This visual attention depends on the degree of saliency exhibited by these areas. In this paper, we propose a technique for detecting salient regions in meshes. To do so, we define a local surface descriptor based on local patches of adaptive size, filled with a local height field. The saliency of a mesh vertex is then defined as its degree measure, with edge weights computed from adaptive patch similarities. Our approach is compared to the state-of-the-art and presents competitive results. A study evaluating the influence of the parameters of this approach is also carried out. The strength and stability of our approach with respect to noise and simplification are also studied.
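    A toy version of the degree-based idea, with made-up one-dimensional descriptors standing in for the paper's adaptive patch height fields: since similarity weights give distinctive vertices a low weighted degree, this sketch reports the complement so that high values mean salient. This is a simplification of, not a reproduction of, the paper's measure.

```python
import numpy as np

def saliency_from_descriptors(edges, desc, sigma=1.0):
    """Degree-based saliency: w_ij = exp(-||d_i - d_j|| / sigma) over mesh
    edges (i, j). A vertex whose descriptor differs strongly from its
    neighbours gets a low weighted degree, so we return one minus the mean
    incident-edge similarity (high = distinctive)."""
    n = len(desc)
    degree = np.zeros(n)
    counts = np.zeros(n)
    for i, j in edges:
        w = np.exp(-np.linalg.norm(desc[i] - desc[j]) / sigma)
        degree[i] += w
        degree[j] += w
        counts[i] += 1
        counts[j] += 1
    return 1.0 - degree / counts

# 5-vertex path graph; vertex 2 carries an outlier descriptor (a bump):
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
desc = np.array([[0.0], [0.0], [5.0], [0.0], [0.0]])
s = saliency_from_descriptors(edges, desc)
print(int(np.argmax(s)))  # → 2 (the distinctive vertex scores highest)
```

    On a real mesh the edges come from the triangulation and the descriptors from local surface patches, but the graph computation is the same.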

  19. Performance of a streaming mesh refinement algorithm.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David C.; Pebay, Philippe Pierre

    2004-08-01

    In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. But if you want to understand the traits of the algorithm, you have to read the report!

  20. The generation of hexahedral meshes for assembly geometries: A survey

    Energy Technology Data Exchange (ETDEWEB)

    TAUTGES,TIMOTHY J.

    2000-02-14

    The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem.

  1. MESH FREE ESTIMATION OF THE STRUCTURE MODEL INDEX

    Directory of Open Access Journals (Sweden)

    Joachim Ohser

    2011-05-01

    The structure model index (SMI) is a means of subsuming the topology of a homogeneous random closed set under just one number, similar to the isoperimetric shape factors used for compact sets. Originally, the SMI is defined as a function of volume fraction, specific surface area and the first derivative of the specific surface area, where the derivative is defined and computed using a surface meshing. The generalised Steiner formula yields however a derivative of the specific surface area that is – up to a constant – the density of the integral of mean curvature. Consequently, an SMI can be defined without referring to a discretisation, and it can be estimated from 3D image data without any need to mesh the surface, using only the numbers of occurrences of 2×2×2 pixel configurations. Obviously, it is impossible to completely describe a random closed set by one number. In this paper, Boolean models of balls and infinite straight cylinders serve as cautionary examples pointing out the limitations of the SMI. Nevertheless, shape factors like the SMI can be valuable tools for comparing similar structures. This is illustrated on real microstructures of ice, foams, and paper.
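    The classical definition can be checked against the ideal reference shapes: SMI = 6·V·S′/S², where S′ is the derivative of surface area under infinitesimal dilation, gives 4 for balls, 3 for cylinders, and 0 for plates (whose surface area does not change under dilation). A short worked derivation in code:

```python
import math

def smi(volume, surface, d_surface):
    """Structure model index: SMI = 6 * V * S' / S^2, with S' the derivative
    of the surface area with respect to an infinitesimal dilation radius."""
    return 6.0 * volume * d_surface / surface**2

r = 1.7  # arbitrary radius; the SMI is scale-invariant

# Ball of radius r: V = 4/3 pi r^3, S = 4 pi r^2, dS/dr = 8 pi r
ball = smi(4.0 / 3.0 * math.pi * r**3, 4.0 * math.pi * r**2, 8.0 * math.pi * r)

# Infinite cylinder, per unit length: V = pi r^2, S = 2 pi r, dS/dr = 2 pi
cyl = smi(math.pi * r**2, 2.0 * math.pi * r, 2.0 * math.pi)

print(round(ball, 6), round(cyl, 6))  # → 4.0 3.0
```

    These are exactly the reference values the Boolean-model counterexamples in the paper are measured against.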

  2. Open preperitoneal groin hernia repair with mesh

    DEFF Research Database (Denmark)

    Andresen, Kristoffer; Rosenberg, Jacob

    2017-01-01

    Background: For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. Data sources: A systematic review was conducted and reported according to the PRISMA statement. PubMed, the Cochrane Library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days of follow-up following repair of an inguinal hernia with an open preperitoneal mesh.

  3. Adaptive mesh refinement for storm surge

    KAUST Repository

    Mandli, Kyle T.

    2014-03-01

    An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run. © 2014 Elsevier Ltd.

  4. Relativistic MHD with Adaptive Mesh Refinement

    CERN Document Server

    Anderson, Matthew; Hirschmann, Eric; Liebling, Steven L.; Neilsen, David

    2006-01-01

    We solve the relativistic magnetohydrodynamics (MHD) equations using a finite difference Convex ENO method (CENO) in 3+1 dimensions within a distributed parallel adaptive mesh refinement (AMR) infrastructure. In flat space we examine a Balsara blast wave problem along with a spherical blast wave and a relativistic rotor test both with unigrid and AMR simulations. The AMR simulations substantially improve performance while reproducing the resolution equivalent unigrid simulation results. We also investigate the impact of hyperbolic divergence cleaning for the spherical blast wave and relativistic rotor. We include unigrid and mesh refinement parallel performance measurements for the spherical blast wave.

  5. Laparoscopic rectocele repair using polyglactin mesh.

    Science.gov (United States)

    Lyons, T L; Winer, W K

    1997-05-01

    We assessed the efficacy of laparoscopic treatment of rectocele defect using a polyglactin mesh graft. From May 1, 1995, through September 30, 1995, we prospectively evaluated 20 women (age 38-74 yrs) undergoing pelvic floor reconstruction for symptomatic pelvic floor prolapse, with or without hysterectomy. Morbidity of the procedure was extremely low compared with standard transvaginal and transrectal approaches. Patients were followed at 3-month intervals for 1 year. Sixteen had resolution of symptoms. Laparoscopic application of polyglactin mesh for the repair of the rectocele defect is a viable option, although long-term follow-up is necessary.

  6. Adaptive Mesh Refinement for Storm Surge

    CERN Document Server

    Mandli, Kyle T

    2014-01-01

    An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run.

  7. Block-structured adaptive meshes and reduced grids for atmospheric general circulation models.

    Science.gov (United States)

    Jablonowski, Christiane; Oehmke, Robert C; Stout, Quentin F

    2009-11-28

    Adaptive mesh refinement techniques offer a flexible framework for future variable-resolution climate and weather models since they can focus their computational mesh on certain geographical areas or atmospheric events. Adaptive meshes can also be used to coarsen a latitude-longitude grid in polar regions. This allows for the so-called reduced grid setups. A spherical, block-structured adaptive grid technique is applied to the Lin-Rood finite-volume dynamical core for weather and climate research. This hydrostatic dynamics package is based on a conservative and monotonic finite-volume discretization in flux form with vertically floating Lagrangian layers. The adaptive dynamical core is built upon a flexible latitude-longitude computational grid and tested in two- and three-dimensional model configurations. The discussion is focused on static mesh adaptations and reduced grids. The two-dimensional shallow water setup serves as an ideal testbed and allows the use of shallow water test cases like the advection of a cosine bell, moving vortices, a steady-state flow, the Rossby-Haurwitz wave or cross-polar flows. It is shown that reduced grid configurations are viable candidates for pure advection applications but should be used moderately in nonlinear simulations. In addition, static grid adaptations can be successfully used to resolve three-dimensional baroclinic waves in the storm-track region.

  8. Parallel octree-based hexahedral mesh generation for eulerian to lagrangian conversion.

    Energy Technology Data Exchange (ETDEWEB)

    Staten, Matthew L.; Owen, Steven James

    2010-09-01

    Computational simulation must often be performed on domains where materials are represented as scalar quantities or volume fractions at cell centers of an octree-based grid. Common examples include bio-medical, geotechnical or shock physics calculations where interface boundaries are represented only as discrete statistical approximations. In this work, we introduce new methods for generating Lagrangian computational meshes from Eulerian-based data. We focus specifically on shock physics problems that are relevant to ASC codes such as CTH and Alegra. New procedures for generating all-hexahedral finite element meshes from volume fraction data are introduced. A new primal-contouring approach is introduced for defining a geometric domain. New methods for refinement, node smoothing, resolving non-manifold conditions and defining geometry are also introduced as well as an extension of the algorithm to handle tetrahedral meshes. We also describe new scalable MPI-based implementations of these procedures. We describe a new software module, Sculptor, which has been developed for use as an embedded component of CTH. We also describe its interface and its use within the mesh generation code, CUBIT. Several examples are shown to illustrate the capabilities of Sculptor.

  9. Shear Alignment of Diblock Copolymers for Patterning Nanowire Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, Kyle T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-08

    Metallic nanowire meshes are useful as cheap, flexible alternatives to indium tin oxide – an expensive, brittle material used in transparent conductive electrodes. We have fabricated nanowire meshes over areas up to 2.5 cm² by: 1) mechanically aligning parallel rows of diblock copolymer (diBCP) microdomains; 2) selectively infiltrating those domains with metallic ions; 3) etching away the diBCP template; 4) sintering to reduce ions to metal nanowires; and, 5) repeating steps 1 – 4 on the same sample at a 90° offset. We aligned parallel rows of polystyrene-b-poly(2-vinylpyridine) [PS(48.5 kDa)-b-P2VP(14.5 kDa)] microdomains by heating above its glass transition temperature (Tg ≈ 100°C), applying mechanical shear pressure (33 kPa) and normal force (13.7 N), and cooling below Tg. DiBCP samples were submerged in aqueous solutions of metallic ions (15 – 40 mM ions; 0.1 – 0.5 M HCl) for 30 – 90 minutes, which coordinate to nitrogen in P2VP. Subsequent ozone-etching and sintering steps yielded parallel nanowires. We aimed to optimize alignment parameters (e.g. shear and normal pressures, alignment duration, and PDMS thickness) to improve the quality, reproducibility, and scalability of meshes. We also investigated metals other than Pt and Au that may be patterned using this technique (Cu, Ag).

  10. MeshEZW: an image coder using mesh and finite elements

    Science.gov (United States)

    Landais, Thomas; Bonnaud, Laurent; Chassery, Jean-Marc

    2003-08-01

    In this paper, we present a new method to compress the information in an image, called MeshEZW. The proposed approach is based on the finite elements method, a mesh construction and a zerotree method. The zerotree method is an adaptation of the EZW algorithm with two new symbols for increased performance. These steps allow a progressive representation of the image through the automatic construction of a bitstream. The mesh structure is adapted to the image compression domain and is defined to allow video compression. The coder is described and some preliminary results are discussed.

  11. Euler Flow Computations on Non-Matching Unstructured Meshes

    Science.gov (United States)

    Gumaste, Udayan

    1999-01-01

    Advanced fluid solvers for predicting aerodynamic performance, with coupled treatment of multiple fields, are described. The interaction between the fluid and structural components in the bladed regions of the engine is investigated with respect to known blade failures caused by either flutter or forced vibrations. Methods are developed to describe aeroelastic phenomena for internal flows in turbomachinery by accounting for the increased geometric complexity, the mutual interaction between adjacent structural components and the presence of thermal and geometric loading. The computer code developed solves the full three-dimensional aeroelastic problem of a stage. The results obtained show that flow computations can be performed on non-matching finite-volume unstructured meshes with second-order spatial accuracy.

  12. Block-structured Adaptive Mesh Refinement - Theory, Implementation and Application

    Directory of Open Access Journals (Sweden)

    Deiterding Ralf

    2011-12-01

    Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included, and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.

  13. Analysis of Six β-Lactam Residues in Milk and Egg by Micellar Electrokinetic Chromatography with Large-Volume Sample Stacking and Polarity Switching.

    Science.gov (United States)

    Shao, Yu-Xiu; Chen, Guan-Hua; Fang, Rou; Zhang, Li; Yi, Ling-Xiao; Meng, Hong-Lian

    2016-05-04

    A new micellar electrokinetic chromatography method with large-volume sample stacking and polarity switching was developed to analyze amoxicillin, cephalexin, oxacillin, penicillin G, cefazolin, and cefoperazone in milk and egg. The important parameters influencing separation and enrichment factors were optimized. The optimized running buffer consisted of 10 mM phosphate and 22 mM SDS at pH 6.7. The sample size was 1.47 kPa × 690 s, the reverse voltage was 20 kV, and the electric current recovery was 95%. Under these optimum conditions, the enrichment factors of the six β-lactams were 193-601. Their LODs were <0.26 ng/g, and LOQs were all 2 ng/g, which is only 1/50-1/2 of the maximum residue limits demanded by U.S. and Japanese regulations. The intraday and interday RSDs of the method were lower than 3.70 and 3.91%, respectively. The method can be applied to determine these six antibiotic residues in egg and milk.

  14. Large volume sample stacking with EOF and sweeping in CE for determination of common preservatives in cosmetic products by chemometric experimental design.

    Science.gov (United States)

    Cheng, Yi-Cian; Wang, Chun-Chi; Chen, Yen-Ling; Wu, Shou-Mei

    2012-05-01

    This study proposes a capillary electrophoresis method incorporating large volume sample stacking, EOF and sweeping for detection of common preservatives used in cosmetic products. The method was developed using chemometric experimental design (fractional factorial design and central composite design) to determine multiple separation variables by efficient steps. The samples were loaded by hydrodynamic injection (10 psi, 90 s), and separated by phosphate buffer (50 mM, pH 3) containing 30% methanol and 80 mM SDS at -20 kV. During method validation, calibration curves were found to be linear over a range of 5-100 μg/mL for butyl paraben and isobutyl paraben; 0.05-10 μg/mL for ethyl paraben; 0.2-50 μg/mL for dehydroacetic acid; 0.5-70 μg/mL for methyl paraben; 5-350 μg/mL for sorbic acid; 0.02-450 μg/mL for p-hydroxybenzoic acid and 0.05-10 μg/mL for salicylic acid and benzoic acid. The analytes were analysed simultaneously and their detection limits (S/N = 3) were down to 0.005-2 μg/mL. The analysis method was successfully used for detection of preservatives used in commercial cosmetics.

  15. CHARACTERISTICS OF SLUDGE BOTTOM MESH

    Directory of Open Access Journals (Sweden)

    Kamil Szydłowski

    2016-05-01

    Full Text Available The main aim of the study was to assess selected heavy metal pollution of the bottom sediments of small water bodies with different catchment management. Two ponds located in Mostkowo village were chosen for investigation. The first small water reservoir is surrounded by cereal fields cultivated without the use of organic and mineral fertilizers (NPK). The second reservoir is located in a park near rural buildings. Sediment samples were collected with a KC Denmark sediment core probe from 4 layers of sediment, at depths of 0–5, 5–10, 10–20 and 20–30 cm. Sampling was made once, at three points, during the winter period (2014) when ice occurred on the surface of the small water bodies. The material was prepared for further analysis according to procedures used in soil science. The contents of heavy metals (Cd, Cr, Cu, Ni, Pb and Zn) were determined by atomic absorption spectrometry (ASA ICE 3000, Thermo Scientific) after digestion in a 5:1 mixture of concentrated acids (HNO3 and HClO4). Higher pH values were characteristic of sediments from the pond located in the park than from the pond located within the agricultural fields. In both small water bodies the highest heavy metal concentrations occurred at the deepest sampling points. In the sediments of the pond within the crop fields, the highest concentrations of cadmium, copper, lead and zinc were observed in the 0–5 cm layer, and of nickel and chromium in the 20–30 cm layer. In the sediments of the pond located in the park, the highest values occurred at the deepest sampling point in the 10–20 cm layer. Sediments from the second reservoir were characterized by the largest average concentrations of heavy metals, except for the lead content in sediment from the 10–20 cm layer. According to the geochemical evaluation of sediments proposed by Bojakowska and Sokołowska [1998], the majority of samples belongs to Ist

  16. Constrained and joint inversion on unstructured meshes

    Science.gov (United States)

    Doetsch, J.; Jordi, C.; Rieckh, V.; Guenther, T.; Schmelzbach, C.

    2015-12-01

    Unstructured meshes allow for inclusion of arbitrary surface topography, complex acquisition geometry and undulating geological interfaces in the inversion of geophysical data. This flexibility opens new opportunities for coupling different geophysical and hydrological data sets in constrained and joint inversions. For example, incorporating geological interfaces that have been derived from high-resolution geophysical data (e.g., ground penetrating radar) can add geological constraints to inversions of electrical resistivity data. These constraints can be critical for a hydrogeological interpretation of the inversion results. For time-lapse inversions of geophysical data, constraints can be derived from hydrological point measurements in boreholes, but it is difficult to include these hard constraints in the inversion of electrical resistivity monitoring data. Especially mesh density and the regularization footprint around the hydrological point measurements are important for an improved inversion compared to the unconstrained case. With the help of synthetic and field examples, we analyze how regularization and coupling operators should be chosen for time-lapse inversions constrained by point measurements and for joint inversions of geophysical data in order to take full advantage of the flexibility of unstructured meshes. For the case of constraining to point measurements, it is important to choose a regularization operator that extends beyond the neighboring cells and the uncertainty in the point measurements needs to be accounted for. For joint inversion, the choice of the regularization depends on the expected subsurface heterogeneity and the cell size of the parameter mesh.

  17. Hash functions and triangular mesh reconstruction

    Science.gov (United States)

    Hrádek, Jan; Kuchař, Martin; Skala, Václav

    2003-07-01

    Some applications use data formats (e.g. the STL file format) where a set of triangles is used to represent the surface of a 3D object, and it is necessary to reconstruct the triangular mesh with adjacency information. It is a lengthy process for large data sets, as the time complexity of this process is O(N log N), where N is the number of triangles. Triangular mesh reconstruction is a general problem and relevant algorithms can be used in GIS and DTM systems as well as in CAD/CAM systems. Many algorithms rely on space subdivision techniques, while hash functions offer a more effective solution to the reconstruction problem. Hash data structures are widely used throughout the field of computer science. The hash table can be used to speed up the process of triangular mesh reconstruction, but the speed strongly depends on hash function properties. Nevertheless, the design or selection of the hash function for data sets with unknown properties is a serious problem. This paper describes a new hash function, presents the properties obtained for large data sets, and discusses validity of the reconstructed surface. Experimental results confirmed the theoretical considerations and the advantages of hash function use for mesh reconstruction.
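    The hash-table idea can be sketched in a few lines of Python: a dict keyed on undirected edges recovers triangle adjacency from an STL-like triangle soup in O(N) average time. The sorted-vertex-pair key is an illustrative choice, not the hash function proposed in the paper.

```python
# Sketch: rebuild triangle adjacency from a triangle soup using a hash table
# (Python dict) keyed on undirected edges. Average cost is O(N) for N
# triangles, versus O(N log N) for sort-based approaches.

def build_adjacency(triangles):
    """triangles: list of 3-tuples of vertex indices.
    Returns adjacency[t] = triangle indices sharing an edge with t."""
    edge_map = {}               # undirected edge -> incident triangle indices
    for t, (a, b, c) in enumerate(triangles):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_map.setdefault((min(u, v), max(u, v)), []).append(t)
    adjacency = {t: [] for t in range(len(triangles))}
    for tris in edge_map.values():
        for t in tris:
            adjacency[t].extend(x for x in tris if x != t)
    return adjacency

# Two triangles sharing the edge (1, 2):
tris = [(0, 1, 2), (1, 3, 2)]
print(build_adjacency(tris))    # {0: [1], 1: [0]}
```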

  18. Particle Collection Efficiency for Nylon Mesh Screens.

    Science.gov (United States)

    Cena, Lorenzo G; Ku, Bon-Ki; Peters, Thomas M

    Mesh screens composed of nylon fibers leave minimal residual ash and produce no significant spectral interference when ashed for spectrometric examination. These characteristics make nylon mesh screens attractive as a collection substrate for nanoparticles. A theoretical single-fiber efficiency expression developed for wire-mesh screens was evaluated for estimating the collection efficiency of submicrometer particles for nylon mesh screens. Pressure drop across the screens, the effect of particle morphology (spherical and highly fractal) on collection efficiency and single-fiber efficiency were evaluated experimentally for three pore sizes (60, 100 and 180 μm) at three flow rates (2.5, 4 and 6 Lpm). The pressure drop across the screens was found to increase linearly with superficial velocity. The collection efficiency of the screens was found to vary by less than 4% regardless of particle morphology. Single-fiber efficiency calculated from experimental data was in good agreement with that estimated from theory for particles between 40 and 150 nm but deviated from theory for particles outside this size range. New coefficients for the single-fiber efficiency model were identified that minimized the sum of square error (SSE) between the values estimated with the model and those determined experimentally. Compared to the original theory, the SSE calculated using the modified theory was at least one order of magnitude lower for all screens and flow rates with the exception of the 60-μm pore screens at 2.5 Lpm, where the decrease was threefold.

  19. Mesh Optimization for Ground Vehicle Aerodynamics

    Directory of Open Access Journals (Sweden)

    Adrian Gaylard

    2010-04-01

    Full Text Available

    Mesh optimization strategy for estimating accurate drag of a ground vehicle is proposed based on examining the effect of different mesh parameters. The optimized mesh parameters were selected using the design of experiments (DOE) method to be able to work in a limited memory environment and in a reasonable amount of time, but without compromising the accuracy of results. The study was further extended to take into account the car model size effect. Three car model sizes have been investigated and compared with MIRA scale wind tunnel results. Parameters that lead to a drag value closer to experiment with less memory and computational time have been identified. Scaling the optimized mesh size with the length of the car model was successfully used to predict the drag of the other car sizes with reasonable accuracy. This investigation was carried out using the STAR-CCM+ commercial software package; however, the findings can be applied to any other CFD package.
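    As a rough illustration of the DOE idea, a full-factorial design over mesh parameters can be enumerated and scored. The parameter names, levels, and scoring function below are assumptions for illustration only, not the actual parameters or responses from the study.

```python
# Sketch: a full-factorial design of experiments (DOE) over hypothetical mesh
# parameters, picking the run that best trades drag error against cell count.
from itertools import product

base_size_mm = [4, 8, 16]      # surface mesh base size (assumed levels)
prism_layers = [2, 4]          # boundary-layer prism layers (assumed)
wake_refine  = [1, 2]          # wake refinement level (assumed)

def score(size, layers, refine):
    # Toy trade-off: finer meshes cost more cells but reduce drag error.
    cells = (32 / size) ** 2 * layers * refine * 1e6
    drag_error = size / (layers * refine)
    return drag_error + cells / 1e8   # weighted sum of error and cost

runs = list(product(base_size_mm, prism_layers, wake_refine))
best = min(runs, key=lambda r: score(*r))
print(f"{len(runs)} runs, best parameters: {best}")
```

    A fractional factorial design, as used in the paper alongside central composite design, would sample only a structured subset of these runs to reduce the experimental cost.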

  20. Details of tetrahedral anisotropic mesh adaptation

    Science.gov (United States)

    Jensen, Kristian Ejlebjerg; Gorman, Gerard

    2016-04-01

    We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without the use of any for-loops, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality, 2. face to edge swapping is unnecessary, and 3. optimising for the Vassilevski functional tends to give a slightly higher value for the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.
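    Observation 1 (a single edge split per element) lends itself well to vectorisation. A minimal NumPy sketch, with an assumed data layout and using the longest edge as the split criterion (the paper's metric-based criterion would replace the Euclidean length), selects one edge per tetrahedron without explicit loops:

```python
# Sketch: loop-free selection of one edge to split per tetrahedron (here the
# longest), in the spirit of "single edge split per element". Data layout is
# an illustrative assumption.
import numpy as np

pts  = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 3], [2, 2, 2]], float)
tets = np.array([[0, 1, 2, 3], [0, 1, 2, 4]])      # vertex indices per tet

# The 6 local edges of a tetrahedron.
local = np.array([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
edges  = tets[:, local]                             # shape (ntet, 6, 2)
vec    = pts[edges[..., 1]] - pts[edges[..., 0]]    # edge vectors
length = np.linalg.norm(vec, axis=-1)               # (ntet, 6)
pick   = length.argmax(axis=1)                      # longest edge per tet
split_edges = edges[np.arange(len(tets)), pick]     # (ntet, 2) vertex pairs
print(split_edges)
```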

  1. Drag reduction properties of superhydrophobic mesh pipes

    Science.gov (United States)

    Geraldi, Nicasio R.; Dodd, Linzi E.; Xu, Ben B.; Wells, Gary G.; Wood, David; Newton, Michael I.; McHale, Glen

    2017-09-01

    Even with the recent extensive study into superhydrophobic surfaces, the fabrication of such surfaces on the inside walls of a pipe remains challenging. In this work we report a convenient bi-layered pipe design using a thin superhydrophobic metallic mesh formed into a tube, supported inside another pipe. A flow system was constructed to test the fabricated bi-layer pipeline, which allowed different constant flow rates of water to be passed through the pipe whilst the differential pressure was measured, from which the drag coefficient (ƒ) and Reynolds number (Re) were calculated. Expected values of ƒ were found for smooth glass pipes over the range 750 < Re < 10 000, in both the laminar and part of the turbulent regimes. Flow through plain meshes without the superhydrophobic coating was also measured over a similar range. With the superhydrophobic coating, a reduction in ƒ was found for Re > 4000, showing that the superhydrophobic mesh can support a plastron and provide a drag reduction compared to a plain mesh; however, the plastron is progressively destroyed with use, in particular at higher flow rates.
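    The quantities reported above follow from standard pipe-flow relations. A minimal sketch, with assumed pipe dimensions, flow rate, and water properties (not those of the actual rig), shows the calculation from a constant volumetric flow rate and a measured pressure drop:

```python
# Sketch: Reynolds number and Darcy friction factor f from a measured flow
# rate and differential pressure. All numerical values are illustrative
# assumptions, not the paper's experimental conditions.
import math

rho, mu = 998.0, 1.0e-3        # water density [kg/m^3], viscosity [Pa s]
D, L    = 0.010, 0.50          # pipe bore [m], test-section length [m]
Q       = 5.0e-5               # volumetric flow rate [m^3/s]
dp      = 2000.0               # measured pressure drop [Pa]

v  = Q / (math.pi * D**2 / 4)            # mean velocity from flow rate
Re = rho * v * D / mu                    # Reynolds number
f  = dp * D / (0.5 * rho * v**2 * L)     # Darcy friction factor
print(f"v = {v:.3f} m/s, Re = {Re:.0f}, f = {f:.4f}")
```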

  2. Performance Evaluation of Coded Meshed Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Pedersen, Morten Videbæk;

    2013-01-01

    of the former to enhance the gains of the latter. We first motivate our work through measurements in WiFi mesh networks. Later, we compare state-of-the-art approaches, e.g., COPE, RLNC, to CORE. Our measurements show the higher reliability and throughput of CORE over other schemes, especially, for asymmetric...

  3. Mesh Currents and Josephson Junction Arrays

    OpenAIRE

    1995-01-01

    A simple but accurate mesh current analysis is performed on an XY model and on a SIMF model to derive the equations for a Josephson junction array. The equations obtained here turn out to be different from other equations already existing in the literature. Moreover, it is shown that the two models come from a unique hidden structure.

  4. Automatic finite elements mesh generation from planar contours of the brain: an image driven 'blobby' approach

    CERN Document Server

    Bucki, M; Bucki, Marek; Payan, Yohan

    2005-01-01

    In this paper, we address the problem of automatic mesh generation for finite elements modeling of anatomical organs for which a volumetric data set is available. In the first step a set of characteristic outlines of the organ is defined manually or automatically within the volume. The outlines define the "key frames" that will guide the procedure of surface reconstruction. Then, based on this information, and along with organ surface curvature information extracted from the volume data, a 3D scalar field is generated. This field allows a 3D reconstruction of the organ: as an iso-surface model, using a marching cubes algorithm; or as a 3D mesh, using a grid "immersion" technique, the field value being used as the outside/inside test. The final reconstruction respects the various topological changes that occur within the organ, such as holes and branching elements.

  5. Oxidation and degradation of polypropylene transvaginal mesh.

    Science.gov (United States)

    Talley, Anne D; Rogers, Bridget R; Iakovlev, Vladimir; Dunn, Russell F; Guelcher, Scott A

    2017-04-01

    Polypropylene (PP) transvaginal mesh (TVM) repair for stress urinary incontinence (SUI) has shown promising short-term objective cure rates. However, life-altering complications have been associated with the placement of PP mesh for SUI repair. PP degradation as a result of the foreign body reaction (FBR) has been proposed as a contributing factor to mesh complications. We hypothesized that PP oxidizes under in vitro conditions simulating the FBR, resulting in degradation of the PP. Three PP mid-urethral slings from two commercial manufacturers were evaluated. Test specimens (n = 6) were incubated in oxidative medium for up to 5 weeks. Oxidation was assessed by Fourier Transform Infrared Spectroscopy (FTIR), and degradation was evaluated by scanning electron microscopy (SEM). FTIR spectra of the slings revealed evidence of carbonyl and hydroxyl peaks after 5 weeks of incubation time, providing evidence of oxidation of PP. SEM images at 5 weeks showed evidence of surface degradation, including pitting and flaking. Thus, oxidation and degradation of PP pelvic mesh were evidenced by chemical and physical changes under simulated in vivo conditions. To assess changes in PP surface chemistry in vivo, fibers were recovered from PP mesh explanted from a single patient without formalin fixation, untreated (n = 5) or scraped (n = 5) to remove tissue, and analyzed by X-ray photoelectron spectroscopy. Mechanical scraping removed adherent tissue, revealing an underlying layer of oxidized PP. These findings underscore the need for further research into the relative contribution of oxidative degradation to complications associated with PP-based TVM devices in larger cohorts of patients.

  6. Wireless Mesh Network Routing Under Uncertain Demands

    Science.gov (United States)

    Wellons, Jonathan; Dai, Liang; Chang, Bin; Xue, Yuan

    Traffic routing plays a critical role in determining the performance of a wireless mesh network. Recent research results usually fall into two ends of the spectrum. On one end are the heuristic routing algorithms, which are highly adaptive to the dynamic environments of wireless networks yet lack the analytical properties of how well the network performs globally. On the other end are the optimal routing algorithms that are derived from the optimization problem formulation of mesh network routing. They can usually claim analytical properties such as resource use optimality and throughput fairness. However, traffic demand is usually implicitly assumed as static and known a priori in these problem formulations. In contrast, recent studies of wireless network traces show that the traffic demand, even when aggregated at access points, is highly dynamic and hard to estimate. Thus, to apply the optimization-based routing solution in practice, one must take into account the dynamic and uncertain nature of wireless traffic demand. There are two basic approaches to address the traffic uncertainty in optimal mesh network routing: (1) predictive routing, which infers the most likely traffic demand based on its history and optimizes the routing strategy for the predicted traffic demand, and (2) oblivious routing, which considers all the possible traffic demands and selects the routing strategy whose worst-case network performance is optimized. This chapter provides an overview of the optimal routing strategies for wireless mesh networks with a focus on the above two strategies that explicitly consider the traffic uncertainty. It also identifies the key factors that affect the performance of each routing strategy and provides guidelines towards the strategy selection in mesh network routing under uncertain traffic demands.
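    The difference between the two strategies can be seen on a toy two-path instance. The capacities, demand scenarios, and congestion model below are assumptions for illustration only; real formulations solve this as a linear program rather than by grid search.

```python
# Toy sketch: predictive vs oblivious routing. A fraction x of demand d1
# shares link 1 (capacity cap1) with demand d2; the rest takes link 2.
cap1, cap2 = 10.0, 10.0
scenarios = [(8.0, 2.0), (4.0, 8.0)]       # possible (d1, d2) demands
predicted = scenarios[0]                    # demand the predictor expects

def congestion(x, d1, d2):
    return max((x * d1 + d2) / cap1, ((1 - x) * d1) / cap2)

def worst(x):
    return max(congestion(x, d1, d2) for d1, d2 in scenarios)

xs = [i / 1000 for i in range(1001)]
# Predictive: best split for the predicted demand only.
x_pred = min(xs, key=lambda x: congestion(x, *predicted))
# Oblivious: best split for the worst case over all scenarios.
x_obl = min(xs, key=worst)

print(f"predictive split {x_pred:.3f}: worst-case congestion {worst(x_pred):.2f}")
print(f"oblivious  split {x_obl:.3f}: worst-case congestion {worst(x_obl):.2f}")
```

    On this instance the predictive split is ideal if the prediction holds but degrades badly under the other scenario, while the oblivious split bounds the worst case, which is exactly the trade-off the chapter discusses.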

  7. The mesh-matching algorithm: an automatic 3D mesh generator for Finite element structures

    CERN Document Server

    Couteau, B; Payan, Yohan; Lavallée, Stéphane

    2000-01-01

    Several authors have employed Finite Element Analysis (FEA) for stress and strain analysis in orthopaedic biomechanics. Unfortunately, the use of three-dimensional models is time consuming and consequently the number of analyses that can be performed is limited. The authors have investigated a new method allowing automatic 3D mesh generation for structures as complex as bone. This method, called the Mesh-Matching (M-M) algorithm, automatically generates customized 3D meshes of bones from an already existing model. The M-M algorithm has been used to generate FE models of ten proximal human femora from an initial one which had been experimentally validated. The new meshes demonstrated satisfactory results.

  8. Randomized clinical trial of self-gripping mesh versus sutured mesh for Lichtenstein hernia repair

    DEFF Research Database (Denmark)

    Jorgensen, L N; Sommer, T; Assaadzadeh, S;

    2012-01-01

    BACKGROUND: Many patients develop discomfort after open repair of a groin hernia. It was hypothesized that suture fixation of the mesh is a cause of these symptoms. METHODS: This patient- and assessor-blinded randomized multicentre clinical trial compared a self-gripping mesh (Parietene Progrip®) and sutured mesh for open primary repair of uncomplicated inguinal hernia by the Lichtenstein technique. Patients were assessed before surgery, on the day of operation, and at 1 and 12 months after surgery. The primary endpoint was moderate or severe symptoms after 12 months, including a combination … RESULTS: There was no difference between the groups in postoperative complications (33·7 versus 40·4 per cent; P = 0·215), rate of recurrent hernia within 1 year (1·2 per cent in both groups) or quality of life. CONCLUSION: The avoidance of suture fixation using a self-gripping mesh was not accompanied by a reduction in chronic symptoms.

  9. Moving mesh generation with a sequential approach for solving PDEs

    DEFF Research Database (Denmark)

    In moving mesh methods, physical PDEs and a mesh equation derived from equidistribution of an error metric (the so-called monitor function) are simultaneously solved and meshes are dynamically concentrated on steep regions (Lim et al., 2001). However, the simultaneous solution procedure of physical and mesh equations typically suffers from long computation times due to the highly nonlinear coupling between the two equations. Moreover, the extended system (physical and mesh equations) may be sensitive to tuning parameters such as a temporal relaxation factor. It is therefore useful to design a sequential solution approach … applicable to the adaptive grid method (local refinement by adding/deleting meshes at a discrete time level) as well as to the dynamic adaptive grid method (or moving mesh method), where the number of meshes is not changed. For illustration, a phase change problem is solved with the decomposition algorithm.

  10. On combining Laplacian and optimization-based mesh smoothing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.A.

    1997-07-01

    Local mesh smoothing algorithms have been shown to be effective in repairing distorted elements in automatically generated meshes. The simplest such algorithm is Laplacian smoothing, which moves grid points to the geometric center of incident vertices. Unfortunately, this method operates heuristically and can create invalid meshes or elements of worse quality than those contained in the original mesh. In contrast, optimization-based methods are designed to maximize some measure of mesh quality and are very effective at eliminating extremal angles in the mesh. These improvements come at a higher computational cost, however. In this article the author proposes three smoothing techniques that combine a smart variant of Laplacian smoothing with an optimization-based approach. Several numerical experiments are performed that compare the mesh quality and computational cost for each of the methods in two and three dimensions. The author finds that the combined approaches are very cost effective and yield high-quality meshes.
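    The "smart" variant of Laplacian smoothing (accept the centroid move only if local quality does not degrade) might be sketched as follows in 2D, using the minimum incident angle as an assumed quality measure; the optimization-based fallback described in the article is omitted, and the geometry is illustrative:

```python
# Sketch of "smart" Laplacian smoothing: the move to the centroid of the
# adjacent vertices is accepted only if the minimum angle over the incident
# triangles does not get worse. Quality measure and geometry are assumptions.
import math

def min_angle(tri):
    a, b, c = tri
    def ang(p, q, r):          # interior angle at vertex p
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / n)))
    return min(ang(a, b, c), ang(b, c, a), ang(c, a, b))

def smart_laplacian(free, neighbors, triangles):
    """Move `free` to the centroid of `neighbors` only if the minimum angle
    over the incident triangles (given by `triangles(p)`) does not worsen."""
    centroid = (sum(p[0] for p in neighbors) / len(neighbors),
                sum(p[1] for p in neighbors) / len(neighbors))
    before = min(min_angle(t) for t in triangles(free))
    after  = min(min_angle(t) for t in triangles(centroid))
    return centroid if after >= before else free

# A distorted patch: the free vertex sits far off-centre of its 4 neighbours.
ring = [(0, 0), (2, 0), (2, 2), (0, 2)]
tris = lambda p: [(ring[i], ring[(i + 1) % 4], p) for i in range(4)]
print(smart_laplacian((1.7, 1.8), ring, tris))   # -> (1.0, 1.0)
```

    Plain Laplacian smoothing would make this move unconditionally, which is exactly how it can produce inverted or lower-quality elements; the quality gate is what makes the variant "smart".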

  11. Comparing the reinforcement capacity of welded steel mesh and a thin spray-on liner using large scale laboratory tests

    Institute of Scientific and Technical Information of China (English)

    Zhenjun Shan; Ian Porter; Jan Nemcik; Ernest Baafi

    2014-01-01

    Steel mesh is used as a passive skin confinement medium to supplement the active support provided by rock bolts for roof and rib control in underground coal mines. Thin spray-on liners (TSL) are believed to have the potential to take the place of steel mesh as the skin confinement medium in underground mines. To confirm this belief, large scale laboratory experiments were conducted to compare the behaviour of welded steel mesh and a TSL, when used in conjunction with rock bolts, in reinforcing strata with weak bedding planes and strata prone to guttering, two common rock conditions which exist in coal mines. It was found that while the peak load taken by the simulated rock mass with weak bedding planes acting as the control sample (no skin confinement) was 2494 kN, the corresponding value of the sample with 5 mm thick TSL reinforcement reached 2856 kN. The peak load of the steel mesh reinforced sample was only 2321 kN, but this was attributed to the fact that one of the rock bolts broke during the test. The TSL reinforced sample had a similar post-yield behaviour to the steel mesh reinforced one. The results of the large scale guttering test indicated that a TSL is better than steel mesh in restricting rock movement and thus inhibiting the formation of gutters in the roof.

  12. INCISIONAL HERNIA - ONLAY VS SUBLAY MESH HERNIOPLASTY

    OpenAIRE

    Ravi Kamal Kumar; Chandrakumar; Vijayalaxmi; Thokala; Venkat Ramana

    2015-01-01

    BACKGROUND: Incisional hernia is a common surgical problem. Anatomical repair of hernia is now out of vogue. Polypropylene mesh repair has now become accepted. In open mesh repair of incisional hernia cases the site of placement of the mesh is still debated. Some surgeons favour the onlay repair and others use the sublay or retro-rectus plane for deployment of the mesh. AIM: The aim of the study is to examine the pros and cons of both the techniques and find the bett...

  13. Explicit inverse distance weighting mesh motion for coupled problems

    OpenAIRE

    Witteveen, J.A.S.; Bijl, H.

    2009-01-01

    An explicit mesh motion algorithm based on inverse distance weighting interpolation is presented. The explicit formulation leads to a fast mesh motion algorithm and an easy implementation. In addition, the proposed point-by-point method is robust and flexible in case of large deformations, hanging nodes, and parallelization. Mesh quality results and CPU time comparisons are presented for triangular and hexahedral unstructured meshes in an airfoil flutter fluid-structure interaction problem.
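    A point-by-point IDW mesh motion step can be sketched as follows. Every interior node displacement is a weighted average of the prescribed boundary displacements, with no system of equations to solve, which is what makes the method explicit and easy to parallelize. The weight exponent and geometry are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch: explicit inverse-distance-weighting (IDW) mesh motion. Interior
# nodes must not coincide with boundary nodes (distance would be zero).
import numpy as np

def idw_motion(interior, boundary, boundary_disp, power=3.0):
    # Pairwise distances: (n_interior, n_boundary)
    d = np.linalg.norm(interior[:, None, :] - boundary[None, :, :], axis=-1)
    w = 1.0 / d**power                      # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)       # normalise weights per node
    return w @ boundary_disp                # interpolated displacements

boundary = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
disp     = np.array([[0.0, 0.0], [0.0, 0.0], [0.1, 0.0], [0.1, 0.0]])  # top edge shears
interior = np.array([[0.5, 0.5], [0.5, 0.9]])
print(idw_motion(interior, boundary, disp))
```

    The centre node, equidistant from all four boundary nodes, receives the mean displacement, while the node near the moving top edge follows it more closely; a larger `power` makes the interpolation more local.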

  14. Two-scale meshes in quasilinear discretized problems of computational mechanics

    Science.gov (United States)

    Jarošová, P.; Vala, J.

    2016-06-01

    Some problems of continuum mechanics, as the analysis of crack formation in the cohesive zone modelling, require (at least) two-scale numerical approach to finite element (or volume, difference, etc.) computations: i) at the macro-scale for a whole (nearly elastic, partially damaged) body and ii) at the micro-scale near the crack (a new interior surface). The paper presents an always convergent procedure handling overlapping two-scale meshes for such model problems, open to generalizations in many directions.

  15. On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach

    Science.gov (United States)

    Liu, Zheng; Xue, Kaiping; Hong, Peilin

    The peer-assisted streaming paradigm has been widely employed to distribute live video data on the internet recently. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol brings about longer streaming delay, which is caused by the handshaking process of advertising buffer map messages, sending request messages and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation, the results of which show that mesh-push outperforms pull scheduling in streaming delay and achieves a comparable delivery ratio at the same time.

  16. A testing procedure for the evaluation of directional mesh bias

    NARCIS (Netherlands)

    Slobbe, A.T.; Hendriks, M.A.N.; Rots, J.G.

    2013-01-01

    This paper presents a dedicated numerical test that enables assessment of the directional mesh bias of constitutive models in a systematic way. The test makes use of periodic boundary conditions, by which strain localization can be analyzed for different mesh alignments with preservation of mesh uniformity.

  17. Multiphase flow of immiscible fluids on unstructured moving meshes

    DEFF Research Database (Denmark)

    Misztal, Marek Krzysztof; Erleben, Kenny; Bargteil, Adam

    2012-01-01

    In this paper, we present a method for animating multiphase flow of immiscible fluids using unstructured moving meshes. Our underlying discretization is an unstructured tetrahedral mesh, the deformable simplicial complex (DSC), that moves with the flow in a Lagrangian manner. Mesh optimization op...

  18. 21 CFR 870.3650 - Pacemaker polymeric mesh bag.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Pacemaker polymeric mesh bag. 870.3650 Section 870...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Prosthetic Devices § 870.3650 Pacemaker polymeric mesh bag. (a) Identification. A pacemaker polymeric mesh bag is an implanted device used to hold a...

  19. A new class of accurate, mesh-free hydrodynamic simulation methods

    Science.gov (United States)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO: this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.

  20. Halo Gas and Galaxy Disk Kinematics of a Volume-Limited Sample of MgII Absorption-Selected Galaxies at z~0.1

    CERN Document Server

    Kacprzak, G G; Barton, E J; Cooke, J

    2011-01-01

    We have directly compared MgII halo gas kinematics to the rotation velocities derived from emission/absorption lines of the associated host galaxies. Our volume-limited sample (z > 0.096) comprises 13 ~L* galaxies, with impact parameters of 12-90 kpc from background quasar sight-lines, associated with 11 MgII absorption systems with MgII equivalent widths 0.3 < W_r(2796) < 2.3 Å. For only 5/13 galaxies, the absorption resides to one side of the galaxy systemic velocity and tends to align with one side of the galaxy rotation curve. The remainder have absorption that spans both sides of the galaxy systemic velocity. These results differ from those at z~0.5, where 74% of the galaxies have absorption residing to one side of the galaxy systemic velocity. For all the z~0.1 systems, simple extended disk-like rotation models fail to reproduce the full MgII velocity spread, implying other dynamical processes contribute to the MgII kinematics. In fact 55% of the galaxies are "counter-rotating" with respect ...

  1. To mesh or not to mesh: a review of pelvic organ reconstructive surgery

    Directory of Open Access Journals (Sweden)

    Dällenbach P

    2015-04-01

    Full Text Available Patrick Dällenbach Department of Gynecology and Obstetrics, Division of Gynecology, Urogynecology Unit, Geneva University Hospitals, Geneva, Switzerland Abstract: Pelvic organ prolapse (POP is a major health issue with a lifetime risk of undergoing at least one surgical intervention estimated at close to 10%. In the 1990s, the risk of reoperation after primary standard vaginal procedure was estimated to be as high as 30% to 50%. In order to reduce the risk of relapse, gynecological surgeons started to use mesh implants in pelvic organ reconstructive surgery with the emergence of new complications. Recent studies have nevertheless shown that the risk of POP recurrence requiring reoperation is lower than previously estimated, being closer to 10% rather than 30%. The development of mesh surgery – actively promoted by the marketing industry – was tremendous during the past decade, and preceded any studies supporting its benefit for our patients. Randomized trials comparing the use of mesh to native tissue repair in POP surgery have now shown better anatomical but similar functional outcomes, and meshes are associated with more complications, in particular for transvaginal mesh implants. POP is not a life-threatening condition, but a functional problem that impairs quality of life for women. The old adage “primum non nocere” is particularly appropriate when dealing with this condition which requires no treatment when asymptomatic. It is currently admitted that a certain degree of POP is physiological with aging when situated above the landmark of the hymen. Treatment should be individualized and the use of mesh needs to be selective and appropriate. Mesh implants are probably an important tool in pelvic reconstructive surgery, but the ideal implant has yet to be found. The indications for its use still require caution and discernment. This review explores the reasons behind the introduction of mesh augmentation in POP surgery, and aims to

  2. Facial expression reconstruction on the basis of selected vertices of triangle mesh

    Science.gov (United States)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    Facial expression reconstruction is an important issue in the field of computer graphics. While it is relatively easy to create an animation based on meshes constructed through video recordings, this kind of high-quality data is often not transferred to another model because of lack of intermediary, anthropometry-based way to do so. However, if a high-quality mesh is sampled with sufficient density, it is possible to use obtained feature points to encode the shape of surrounding vertices in a way that can be easily transferred to another mesh with corresponding feature points. In this paper we present a method used for obtaining information for the purpose of reconstructing changes in facial surface on the basis of selected feature points.

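    One simple way to encode a vertex relative to nearby feature points, in the spirit of the record above, is an inverse-distance-weighted combination of its k nearest feature points plus a residual offset; applying the same weights to corresponding feature points on another mesh transfers the shape. This is an illustrative sketch, not the authors' method, and all names and values are hypothetical:

```python
import numpy as np

def encode(vertex, feats, k=3, eps=1e-9):
    # Encode a vertex by inverse-distance weights over its k nearest
    # feature points plus a residual offset in the global frame.
    d = np.linalg.norm(feats - vertex, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()
    residual = vertex - w @ feats[idx]
    return idx, w, residual

def decode(idx, w, residual, feats):
    # Reconstruct the vertex from (possibly deformed) feature points.
    return w @ feats[idx] + residual

feats = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
v = np.array([0.3, 0.2, 0.1])
code = encode(v, feats)
v_rec = decode(*code, feats)
print(np.allclose(v_rec, v))   # exact reconstruction on the source mesh
```
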
  3. Laparoscopic Total Extraperitoneal (TEP) Inguinal Hernia Repair Using 3-dimensional Mesh Without Mesh Fixation.

    Science.gov (United States)

    Aliyazicioglu, Tolga; Yalti, Tunc; Kabaoglu, Burcak

    2017-08-01

    Approximately one fifth of patients suffer from inguinal pain after laparoscopic total extraperitoneal (TEP) inguinal hernia repair. There is existing literature suggesting that the staples used to fix the mesh can cause postoperative inguinal pain. In this study, we describe our experience with laparoscopic TEP inguinal hernia surgery using 3-dimensional mesh without mesh fixation, in our institution. A total of 300 patients who had undergone laparoscopic TEP inguinal hernia repair with 3-dimensional mesh in VKV American Hospital, Istanbul from November 2006 to November 2015 were studied retrospectively. Using the hospital's electronic archive, we studied patients' selected parameters, which are demographic features (age, sex), body mass index, hernia locations and types, duration of operations, preoperative and postoperative complications, duration of hospital stays, cost of surgery, need for analgesics, time elapsed until returning to daily activities and work. A total of 300 patients underwent laparoscopic TEP hernia repair of 437 inguinal hernias from November 2006 to November 2015. Of the 185 patients, 140 were symptomatic. Mean duration of follow-up was 48 months (range, 6 to 104 mo). The mean duration of surgery was 55 minutes for bilateral hernia repair, and 38 minutes for unilateral hernia repair. The mean duration of hospital stay was 0.9 day. There was no conversion to open surgery. In none of the cases the mesh was fixated with either staples or fibrin glue. Six patients (2%) developed seroma that were treated conservatively. One patient had inguinal hernia recurrence. One patient had preperitoneal hematoma. One patient operated due to indirect right-sided hernia developed right-sided hydrocele. One patient had wound dehiscence at the umbilical port entry site. Chronic pain developed postoperatively in 1 patient. Ileus developed in 1 patient. Laparoscopic TEP inguinal repair with 3-dimensional mesh without mesh fixation can be performed as safe as

  4. Meshed split skin graft for extensive vitiligo

    Directory of Open Access Journals (Sweden)

    Srinivas C

    2004-05-01

    Full Text Available A 30-year-old female presented with generalized stable vitiligo involving large areas of the body. Since large areas were to be treated, it was decided to perform a meshed split skin graft. A phototoxic blister over the recipient site was induced by applying 8-MOP solution followed by exposure to UVA. The split skin graft was harvested from the donor area with a Padgett dermatome and meshed with an ampligreffe to increase the size of the graft 4 times. Significant pigmentation of the depigmented skin was seen after 5 months. This procedure helps to cover large recipient areas when pigmented donor skin is limited, with minimal risk of scarring. The phototoxic blister enables easy separation of the epidermis, thus saving the time required for dermabrasion of the recipient site.

  5. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming

    2012-11-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including a plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.

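    The quadric-fitting step described above can be sketched as a linear least-squares problem: stack the ten quadric monomials for each sample point and take the right singular vector with the smallest singular value as the coefficient vector. A minimal sketch using algebraic distance only (the paper's energy combines L2 and L2,1 metrics, which this does not reproduce):

```python
import numpy as np

def fit_quadric(P):
    # Least-squares fit of a general quadric q(x,y,z)=0: build the design
    # matrix of the ten monomials and take the right singular vector with
    # the smallest singular value (unit-norm coefficient constraint).
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[-1]

def quadric_residual(q, P):
    # Algebraic distance |q(P)| of each point to the fitted quadric.
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    return np.abs(M @ q)

# Points on the unit sphere x^2+y^2+z^2-1=0 should fit almost exactly.
rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
q = fit_quadric(P)
print(quadric_residual(q, P).max())   # near machine precision
```
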
  6. Capacity estimation of wireless mesh networks

    OpenAIRE

    2005-01-01

    Resumo (translated from Portuguese): This work presents an estimation of the capacity of wireless mesh networks. Networks of this type have unique topologies and traffic patterns that distinguish them from conventional wireless networks. In wireless mesh networks, nodes act both as clients and as servers, and traffic is forwarded to one or more gateways in a multi-hop fashion. The capacity estimation is based on studies of the Physical and MAC layers. Channel propagation effects are evaluated. Abstract: This work add...

  7. Nondispersive optical activity of meshed helical metamaterials.

    Science.gov (United States)

    Park, Hyun Sung; Kim, Teun-Teun; Kim, Hyeon-Don; Kim, Kyungjin; Min, Bumki

    2014-11-17

    Extreme optical properties can be realized by the strong resonant response of metamaterials consisting of subwavelength-scale metallic resonators. However, highly dispersive optical properties resulting from strong resonances have impeded the broadband operation required for frequency-independent optical components or devices. Here we demonstrate that strong, flat broadband optical activity with high transparency can be obtained with meshed helical metamaterials in which metallic helical structures are networked and arranged to have fourfold rotational symmetry around the propagation axis. This nondispersive optical activity originates from the Drude-like response as well as the fourfold rotational symmetry of the meshed helical metamaterials. The theoretical concept is validated in a microwave experiment in which flat broadband optical activity with a designed magnitude of 45° per layer of metamaterial is measured. The broadband capabilities of chiral metamaterials may provide opportunities in the design of various broadband optical systems and applications.

  8. Energy-efficient wireless mesh networks

    CSIR Research Space (South Africa)

    Ntlatlapa, N

    2007-06-01

    Full Text Available ...deficient areas such as rural areas in Africa. Index Terms—Energy-efficient design, Wireless mesh networks, Network protocols. I. INTRODUCTION The objectives of this research group are the application and adaptation of existing wireless local area networks, especially those based on the 802.11 standard, for energy-efficient wireless mesh network (EE-WMN) architectures, protocols and controls. In addition to the WMN regular features of self...

  9. Adaptive upscaling with the dual mesh method

    Energy Technology Data Exchange (ETDEWEB)

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous medium and to an actual field case in South America.

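    The direction-dependent averages mentioned above can be illustrated with the classic layered-medium bounds: arithmetic averaging for flow parallel to the layers and harmonic averaging for flow across them, which generally disagree even for the same average saturation. A minimal sketch with hypothetical permeability values:

```python
import numpy as np

def upscaled_permeability(k_layers, thickness):
    # Layered medium: arithmetic average for flow parallel to the layers,
    # harmonic average for flow perpendicular to them -- the two
    # directional effective values an a priori upscaling would assign.
    w = thickness / thickness.sum()
    k_parallel = np.sum(w * k_layers)
    k_perp = 1.0 / np.sum(w / k_layers)
    return k_parallel, k_perp

k = np.array([100.0, 1.0])   # hypothetical layer permeabilities (mD)
t = np.array([1.0, 1.0])     # equal layer thicknesses
kp, kn = upscaled_permeability(k, t)
print(kp, kn)   # 50.5 parallel, ~1.98 perpendicular
```

The large gap between the two averages is exactly why a single scalar upscaled value cannot serve both flow directions.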
  10. Gamra: Simple Meshes for Complex Earthquakes

    CERN Document Server

    Landry, Walter

    2016-01-01

    The static offsets caused by earthquakes are well described by elastostatic models with a discontinuity in the displacement along the fault. A traditional approach to model this discontinuity is to align the numerical mesh with the fault and solve the equations using finite elements. However, this distorted mesh can be difficult to generate and update. We present a new numerical method, inspired by the Immersed Interface Method, for solving the elastostatic equations with embedded discontinuities. This method has been carefully designed so that it can be used on parallel machines on an adapted finite difference grid. We have implemented this method in Gamra, a new code for earth modelling. We demonstrate the correctness of the method with analytic tests, and we demonstrate its practical performance by solving a realistic earthquake model to extremely high precision.

  11. Electrostatic PIC with adaptive Cartesian mesh

    CERN Document Server

    Kolobov, Vladimir I

    2016-01-01

    We describe an initial implementation of an electrostatic Particle-in-Cell (ES-PIC) module with adaptive Cartesian mesh in our Unified Flow Solver framework. Challenges of PIC method with cell-based adaptive mesh refinement (AMR) are related to a decrease of the particle-per-cell number in the refined cells with a corresponding increase of the numerical noise. The developed ES-PIC solver is validated for capacitively coupled plasma, its AMR capabilities are demonstrated for simulations of streamer development during high-pressure gas breakdown. It is shown that cell-based AMR provides a convenient particle management algorithm for exponential multiplications of electrons and ions in the ionization events.

  12. Mesh convergence study for hydraulic turbine draft-tube

    Science.gov (United States)

    Devals, C.; Vu, T. C.; Zhang, Y.; Dompierre, J.; Guibault, F.

    2016-11-01

    Computational flow analysis is an essential tool for hydraulic turbine designers. Grid generation is the first step in the flow analysis process. Grid quality and solution accuracy are strongly linked. Even though many studies have addressed the issue of mesh independence, there is still no definitive consensus on mesh best practices, and research on that topic is still needed. This paper presents a mesh convergence study for turbulent flow in hydraulic turbine draft-tubes, which represent the most challenging turbine component for CFD predictions. The findings from this parametric study will be incorporated as mesh control rules in an in-house automatic mesh generator for turbine components.

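    A mesh convergence study of this kind typically estimates an observed order of convergence and a mesh-independent value from solutions on three systematically refined grids (Richardson extrapolation). A sketch on a manufactured second-order example; the specific draft-tube quantities from the paper are not reproduced here:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    # Observed order of convergence p from three solutions on meshes
    # refined by a constant ratio r (standard Richardson-based estimate).
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    # Estimate of the mesh-independent value from the two finest meshes.
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Manufactured example: f(h) = 1 + 2*h^2 on meshes h = 0.4, 0.2, 0.1.
f = [1 + 2 * h**2 for h in (0.4, 0.2, 0.1)]
p = observed_order(f[0], f[1], f[2], r=2.0)
print(p)                                           # recovers order 2
print(richardson_extrapolate(f[1], f[2], 2.0, p))  # recovers the limit 1.0
```
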
  13. Overlay Share Mesh for Interactive Group Communication with High Dynamic

    Institute of Scientific and Technical Information of China (English)

    WU Yan-hua; CAI Yun-ze; XU Xiao-ming

    2007-01-01

    An overlay share mesh infrastructure is presented for highly dynamic group communication systems, such as distributed interactive simulation (DIS) and distributed virtual environments (DVE). By sharing links among different groups, the overlay share mesh can adapt to highly dynamic groups better than the traditional multi-tree multicast infrastructure. The mechanism of the overlay share mesh based on area of interest (AOI) is discussed in detail in this paper. A large number of simulation experiments were conducted to study the performance of the mesh infrastructure. Experimental results show that the overlay mesh infrastructure adapts better than the traditional multi-tree infrastructure for highly dynamic group communication systems.

  14. Diffusive mesh relaxation in ALE finite element numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dube, E.I.

    1996-06-01

    The theory for a diffusive mesh relaxation algorithm is developed for use in three-dimensional Arbitary Lagrange/Eulerian (ALE) finite element simulation techniques. This mesh relaxer is derived by a variational principle for an unstructured 3D grid using finite elements, and incorporates hourglass controls in the numerical implementation. The diffusive coefficients are based on the geometric properties of the existing mesh, and are chosen so as to allow for a smooth grid that retains the general shape of the original mesh. The diffusive mesh relaxation algorithm is then applied to an ALE code system, and results from several test cases are discussed.

  15. Effects of mesh resolution on hypersonic heating prediction

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Aeroheating prediction is a challenging and critical problem for the design and optimization of hypersonic vehicles. One challenge is that the solution of the Navier-Stokes equations strongly depends on the computational mesh. In this letter, the effect of mesh resolution on heat flux prediction is studied. It is found that mesh-independent solutions can be obtained using fine mesh, whose accuracy is confirmed by results from kinetic particle simulation. It is analyzed that mesh-induced numerical error comes m...

  16. Household vacuum cleaners vs. the high-volume surface sampler for collection of carpet dust samples in epidemiologic studies of children

    Directory of Open Access Journals (Sweden)

    Buffler Patricia A

    2008-02-01

    Full Text Available Abstract Background Levels of pesticides and other compounds in carpet dust can be useful indicators of exposure in epidemiologic studies, particularly for young children who are in frequent contact with carpets. The high-volume surface sampler (HVS3) is often used to collect dust samples in the room in which the child had spent the most time. This method can be expensive and cumbersome, and it has been suggested that an easier method would be to remove dust that had already been collected with the household vacuum cleaner. However, the household vacuum integrates exposures over multiple rooms, some of which are not relevant to the child's exposure, and differences in vacuuming equipment and practices could affect the chemical concentration data. Here, we compare levels of pesticides and other compounds in dust from household vacuums to that collected using the HVS3. Methods Both methods were used in 45 homes in California. HVS3 samples were collected in one room, while the household vacuum had typically been used throughout the home. The samples were analyzed for 64 organic compounds, including pesticides, polycyclic aromatic hydrocarbons, and polychlorinated biphenyls (PCBs), using GC/MS in multiple ion monitoring mode; and for nine metals using conventional microwave-assisted acid digestion combined with ICP/MS. Results The methods agreed in detecting the presence of the compounds 77% to 100% of the time (median 95%). For compounds with less than 100% agreement, neither method was consistently more sensitive than the other. Median concentrations were similar for most analytes, and Spearman correlation coefficients were 0.60 or higher except for allethrin (0.15) and malathion (0.24), which were detected infrequently, and benzo(k)fluoranthene (0.55), benzo(a)pyrene (0.55), PCB 105 (0.54), PCB 118 (0.54), and PCB 138 (0.58). Assuming that the HVS3 method is the "gold standard," the extent to which the household vacuum cleaner method yields relative risk

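    The agreement statistic quoted above (Spearman correlation between paired dust concentrations) can be reproduced in a few lines: rank both series and take the Pearson correlation of the ranks. The concentrations below are hypothetical, and this simple ranking assumes no ties:

```python
import numpy as np

def spearman(a, b):
    # Spearman rank correlation: Pearson correlation of the ranks.
    # (argsort-of-argsort ranking; assumes no tied values.)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical paired concentrations (ng/g) from the two samplers.
hvs3 = np.array([12.0, 45.0, 3.0, 88.0, 20.0, 51.0])
vac  = np.array([10.0, 50.0, 5.0, 70.0, 18.0, 40.0])
print(spearman(hvs3, vac))   # high but imperfect rank agreement
```
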
  17. Wireless experiments on a Motorola mesh testbed.

    Energy Technology Data Exchange (ETDEWEB)

    Riblett, Loren E., Jr.; Wiseman, James M.; Witzke, Edward L.

    2010-06-01

    Motomesh is a Motorola product that performs mesh networking at both the client and access point levels and allows broadband mobile data connections with or between clients moving at vehicular speeds. Sandia National Laboratories has extensive experience with this product and its predecessors in infrastructure-less mobile environments. This report documents experiments which characterize certain aspects of how the Motomesh network performs when mobile units are added to a fixed network infrastructure.

  18. Airbag Mapped Mesh Auto-Flattening Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jinhuan; MA Chunsheng; BAI Yuanli; HUANG Shilin

    2005-01-01

    Current software cannot easily model an airbag to be flattened without wrinkles. This paper improves the modeling efficiency using the initial metric method to design a mapped mesh auto-flattening algorithm. The element geometric transformation matrix was obtained using the theory of computer graphics. The algorithm proved to be practical for modeling a passenger-side airbag model. The efficiency and precision of modeling airbags are greatly improved by this method.

  19. Solid Mesh Registration for Radiotherapy Treatment Planning

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Sørensen, Thomas Sangild

    2010-01-01

    We present an algorithm for solid organ registration of pre-segmented data represented as tetrahedral meshes. Registration of the organ surface is driven by force terms based on a distance field representation of the source and reference shapes. Registration of internal morphology is achieved usi...... to complete. The proposed method has many potential uses in image guided radiotherapy (IGRT) which relies on registration to account for organ deformation between treatment sessions....

  20. Performance Evaluation of Coded Meshed Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Pedersen, Morten Videbæk

    2013-01-01

    of the former to enhance the gains of the latter. We first motivate our work through measurements in WiFi mesh networks. Later, we compare state-of-the-art approaches, e.g., COPE, RLNC, to CORE. Our measurements show the higher reliability and throughput of CORE over other schemes, especially, for asymmetric...... and/or high loss probabilities. We show that a store and forward scheme outperforms COPE under some channel conditions, while CORE yields 3dB gains....

  1. Incremental approach for radial basis functions mesh deformation with greedy algorithm

    Science.gov (United States)

    Selim, Mohamed M.; Koomullil, Roy P.; Shehata, Ahmed S.

    2017-07-01

    Mesh deformation is an important element of any fluid-structure interaction simulation. In this article, a new methodology is presented for the deformation of volume meshes using incremental radial basis function (RBF) based interpolation. A greedy algorithm is used to select a small subset of the surface nodes iteratively. Two incremental approaches are introduced to solve the RBF system of equations: 1) a block matrix inversion based approach and 2) a modified LU decomposition approach. The incremental approach decreased the computational complexity of solving the system of equations within each greedy algorithm iteration from O(n^3) to O(n^2). Results are presented from an accuracy study using specified deformations on a 2D surface. Mesh deformations for bending and twisting of a 3D rectangular supercritical wing have been demonstrated. Outcomes showed that the incremental approaches reduce the CPU time by up to 67% as compared to a traditional RBF matrix solver. Finally, the proposed mesh deformation approach was integrated within a fluid-structure interaction solver for investigating flow-induced cantilever beam vibration.

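    The core of RBF-based mesh deformation is interpolating known surface-node displacements to volume nodes: solve a kernel system at the surface points, then evaluate it at the volume points. A minimal 2-D sketch with a Gaussian kernel; the greedy point selection and incremental solvers that are the article's contributions are omitted:

```python
import numpy as np

def rbf_deform(surf_pts, surf_disp, vol_pts, r0=1.0):
    # Interpolate prescribed surface displacements to volume nodes with
    # a Gaussian RBF (no polynomial term, no greedy subset selection).
    phi = lambda d: np.exp(-(d / r0) ** 2)
    A = phi(np.linalg.norm(surf_pts[:, None] - surf_pts[None, :], axis=2))
    coeffs = np.linalg.solve(A, surf_disp)       # RBF coefficients
    B = phi(np.linalg.norm(vol_pts[:, None] - surf_pts[None, :], axis=2))
    return B @ coeffs                            # displacements at vol_pts

surf = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
disp = np.array([[0.1, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
vol = np.array([[0.0, 0.0], [0.5, 0.5]])
d = rbf_deform(surf, disp, vol)
print(d[0])   # reproduces the prescribed corner displacement [0.1, 0.0]
```

Because the kernel matrix is positive definite for distinct points, the interpolant matches the prescribed displacements exactly at the surface nodes and decays smoothly into the volume.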
  2. Earth As An Unstructured Mesh and Its Recovery from Seismic Waveform Data

    Science.gov (United States)

    De Hoop, M. V.

    2015-12-01

    We consider multi-scale representations of Earth's interior from the point of view of their possible recovery from multi- and high-frequency seismic waveform data. These representations are intrinsically connected to (geologic, tectonic) structures, that is, geometric parametrizations of Earth's interior. Indeed, we address the construction and recovery of such parametrizations using local iterative methods with appropriately designed data misfits and guaranteed convergence. The geometric parametrizations contain interior boundaries (defining, for example, faults, salt bodies, tectonic blocks, slabs) which can, in principle, be obtained from successive segmentation. We make use of unstructured meshes. For the adaptation and recovery of an unstructured mesh we introduce an energy functional which is derived from the Hausdorff distance. Via an augmented Lagrangian method, we incorporate the mentioned data misfit. The recovery is constrained by shape optimization of the interior boundaries, and is reminiscent of Hausdorff warping. We use elastic deformation via finite elements as a regularization while following a two-step procedure. The first step is an update determined by the energy functional; in the second step, we modify the outcome of the first step where necessary to ensure that the new mesh is regular. This modification entails an array of techniques including topology correction involving interior boundary contacting and breakup, edge warping and edge removal. We implement this as a feed-back mechanism from volume to interior boundary mesh optimization. We invoke and apply a criterion of mesh quality control for coarsening, and for dynamical local multi-scale refinement. We present a novel (fluid-solid) numerical framework based on the Discontinuous Galerkin method.

  3. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    Science.gov (United States)

    Greene, Patrick; Schofield, Sam; Nourgaliev, Robert

    2016-11-01

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin (DG) projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. The method retains the excellent smoothing capabilities of condition number relaxation, while providing a method for clustering mesh cells near regions of interest. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

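    The idea of a level-set-derived weight that clusters cells near an interface can be shown in one dimension by equidistributing a weight function of the signed distance phi. This is an illustrative analogue only; the paper's method optimizes a weighted condition number on an unstructured mesh, which this sketch does not attempt:

```python
import numpy as np

def cluster_mesh(n, phi, amp=5.0, width=0.1):
    # 1-D analogue of interface-weighted smoothing: equidistribute the
    # weight w = 1 + amp * exp(-(phi/width)^2) so that cells concentrate
    # where phi (signed distance to the interface) is small.
    x = np.linspace(0.0, 1.0, 2001)
    w = 1.0 + amp * np.exp(-(phi(x) / width) ** 2)
    # Cumulative weight via the trapezoid rule, then invert it.
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    return np.interp(np.linspace(0.0, W[-1], n + 1), W, x)

phi = lambda x: x - 0.5            # interface at x = 0.5
nodes = cluster_mesh(20, phi)
dx = np.diff(nodes)
print(dx.min(), dx.max())          # smallest cells sit near the interface
```
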
  4. Acoustic performance of mesh compression paddles for a multimodality breast imaging system.

    Science.gov (United States)

    LeCarpentier, Gerald L; Goodsitt, Mitchell M; Verweij, Sacha; Li, Jie; Padilla, Frederic R; Carson, Paul L

    2014-07-01

    A system incorporating automated 3-D ultrasound and digital X-ray tomosynthesis is being developed for improved breast lesion detection and characterization. The goal of this work is to develop and test candidates for a dual-modality mesh compression paddle. A Computerized Imaging Reference Systems (Norfolk, VA, USA) ultrasound phantom with tilted low-contrast cylindrical objects was used. Polyester mesh fabrics (1- and 2-mm spacing), a high-density polyethylene filament grid (Dyneema, DSM Dyneema, Stanley, NC, USA) and a solid polymethylpentene (TPX; Mitsui Plastics, Inc., White Plains, NY) paddle were compared with no overlying structures using a GE Logic 9 with M12L transducer. A viscous gel provided coupling. The phantom was scanned 10 times over 9 cm for each configuration. Image volumes were analyzed for signal strength, contrast and contrast-to-noise ratio. X-ray tests confirmed X-ray transparency for all materials. By all measures, both mesh fabrics outperformed TPX and Dyneema, and there were essentially no differences between 2-mm mesh and unobstructed configurations.

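    The contrast-to-noise ratio used to compare the paddle configurations is a simple image statistic: the absolute difference of the ROI and background means divided by the background standard deviation. A sketch with hypothetical pixel samples:

```python
import numpy as np

def contrast_to_noise(roi, background):
    # CNR = |mean(ROI) - mean(background)| / std(background): a common
    # definition; the paper may use a variant.
    return abs(roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(1)
lesion = rng.normal(0.5, 0.1, 1000)   # hypothetical lesion pixel values
tissue = rng.normal(1.0, 0.1, 1000)   # hypothetical background pixels
print(contrast_to_noise(lesion, tissue))   # near 0.5 / 0.1 = 5
```
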
  5. Particle Mesh Hydrodynamics for Astrophysics Simulations

    Science.gov (United States)

    Chatelain, Philippe; Cottet, Georges-Henri; Koumoutsakos, Petros

    We present a particle method for the simulation of three dimensional compressible hydrodynamics based on a hybrid Particle-Mesh discretization of the governing equations. The method is rooted on the regularization of particle locations as in remeshed Smoothed Particle Hydrodynamics (rSPH). The rSPH method was recently introduced to remedy problems associated with the distortion of computational elements in SPH, by periodically re-initializing the particle positions and by using high order interpolation kernels. In the PMH formulation, the particles solely handle the convective part of the compressible Euler equations. The particle quantities are then interpolated onto a mesh, where the pressure terms are computed. PMH, like SPH, is free of the convection CFL condition while at the same time it is more efficient as derivatives are computed on a mesh rather than particle-particle interactions. PMH does not detract from the adaptive character of SPH and allows for control of its accuracy. We present simulations of a benchmark astrophysics problem demonstrating the capabilities of this approach.

  6. Mesh Learning for Classifying Cognitive Processes

    CERN Document Server

    Ozay, Mete; Öztekin, Uygar; Vural, Fatos T Yarman

    2012-01-01

    The major goal of this study is to model the encoding and retrieval operations of the brain during memory processing, using statistical learning tools. The suggested method assumes that the memory encoding and retrieval processes can be represented by a supervised learning system, which is trained by the brain data collected from the functional Magnetic Resonance (fMRI) measurements, during the encoding stage. Then, the system outputs the same class labels as that of the fMRI data collected during the retrieval stage. The most challenging problem of modeling such a learning system is the design of the interactions among the voxels to extract the information about the underlying patterns of brain activity. In this study, we suggest a new method called Mesh Learning, which represents each voxel by a mesh of voxels in a neighborhood system. The nodes of the mesh are a set of neighboring voxels, whereas the arc weights are estimated by a linear regression model. The estimated arc weights are used to form Local Re...

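    The arc-weight estimation described above reduces to an ordinary least-squares problem per voxel: regress the center voxel's measurements on those of its neighborhood. A noise-free sketch with hypothetical data:

```python
import numpy as np

def mesh_arc_weights(center, neighbors):
    # Estimate arc weights by linear regression: express the center
    # voxel's time series as a linear combination of its neighbors'
    # series (least-squares solution, as in the mesh formulation above).
    w, *_ = np.linalg.lstsq(neighbors.T, center, rcond=None)
    return w

rng = np.random.default_rng(2)
neighbors = rng.normal(size=(3, 50))       # 3 neighbor voxels, 50 scans
true_w = np.array([0.5, -0.2, 0.8])
center = true_w @ neighbors                # noise-free for the sketch
print(mesh_arc_weights(center, neighbors)) # recovers [0.5, -0.2, 0.8]
```

The recovered weights would then serve as features ("local meshes") for the downstream classifier.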
  7. Parallel object-oriented adaptive mesh refinement

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, D.; Quinlan, D.J.

    1997-04-01

    In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the of Fast Adaptive Composite Grid Method (FAC) as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing both the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.

  8. Mesh deployable antenna mechanics testing method

    Science.gov (United States)

    Jiang, Li

    Rapid development in space technologies and the continuous expansion of astronautics applications require stricter and stricter standards for spatial structures. Deployable space structures, a newly developed structural form, are being extensively adopted because of their deployability. The deployable mesh reflector antenna is a common kind of deployable antenna. Its reflector consists of a metal mesh, and its electrical properties are highly dependent on its mechanical parameters (including surface accuracy, angle, and position). Therefore, these mechanical parameters have to be calibrated. This paper presents a mesh antenna mechanics testing method that employs both an electronic theodolite and a laser tracker. The laser tracker is first used to measure the shape of the radial rib deployable antenna. The measurement data are then fitted to a paraboloid by means of error compensation. Accordingly, the focus and the focal axis of the paraboloid are obtained. The following step is to synchronize the coordinate systems of the electronic theodolite and the measured antenna. Finally, in a microwave anechoic chamber environment, the electromechanical axis is calibrated. Testing results verify the effectiveness of the presented method.

  9. MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.

    Science.gov (United States)

    Mao, Yuqing; Lu, Zhiyong

    2017-04-17

    MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are often not indexed until 2 or 3 months after publication) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall, and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing, capable of processing PubMed-scale document sets within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/
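    The candidate-ranking and selection stages described above can be sketched as a generic score-and-filter pipeline. The scorer below is a hypothetical frequency-based stand-in for MeSH Now's learned ranking model, which uses far richer features; the function names and thresholds are illustrative:

```python
from collections import Counter

def rank_mesh_terms(candidates, score_fn, top_k=15, threshold=0.5):
    """Rank candidate MeSH terms by a relevance score and keep the best.

    `score_fn` stands in for the learned ranking model: it maps a term
    to a relevance score. Duplicate candidates (produced by multiple
    generation strategies) are collapsed before ranking."""
    scored = sorted(((score_fn(t), t) for t in set(candidates)), reverse=True)
    return [t for s, t in scored[:top_k] if s >= threshold]

# Toy usage: candidates pooled from several strategies, scored by how
# often the strategies agreed on each term
cands = ["Humans", "Neoplasms", "Humans", "Mice", "Neoplasms", "Humans"]
freq = Counter(cands)
scores = {t: freq[t] / len(cands) for t in freq}
print(rank_mesh_terms(cands, scores.get, top_k=2, threshold=0.2))
# → ['Humans', 'Neoplasms']
```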

  10. Data-Parallel Mesh Connected Components Labeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
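    The core of the merge stage is the classic union-find (disjoint-set) structure. A minimal single-process sketch of labeling connected sub-meshes from cell adjacency follows; the function names and toy mesh are illustrative, and the paper's algorithm additionally merges these labels across processors via its spatial partitioning scheme:

```python
def find(parent, x):
    # Path-halving find: flatten the tree while walking to the root
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def label_components(num_cells, adjacency):
    """Union-find labeling of connected sub-meshes.

    `adjacency` is an iterable of (cell_a, cell_b) face-neighbor
    pairs. Returns one root label per cell; cells sharing a label
    belong to the same connected sub-mesh."""
    parent = list(range(num_cells))
    for a, b in adjacency:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # union by smaller id
    # Flatten so every cell points at its component's root label
    return [find(parent, c) for c in range(num_cells)]

# Two sub-meshes: cells {0, 1, 2} and {3, 4}
print(label_components(5, [(0, 1), (1, 2), (3, 4)]))  # → [0, 0, 0, 3, 3]
```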

  12. Alternatives to Acellular Dermal Matrix: Utilization of a Gore DualMesh Sling as a Cost-Conscious Adjunct for Breast Reconstruction.

    Science.gov (United States)

    Grow, Jacob N; Butterworth, James; Petty, Paul

    2017-01-01

    Objective: This study seeks an alternative to acellular dermal matrix in 2-staged breast reconstruction while minimizing cost. It was hypothesized that use of a Gore DualMesh would allow for similar intraoperative tissue expander fill volumes, time to second-stage reconstruction, and number of postoperative fills compared with acellular dermal matrix at only a fraction of the expense. Methods: Retrospective analysis comparing Gore DualMesh (59 breasts, 34 patients), acellular dermal matrix (13 breasts, 8 patients), and total muscle coverage (25 breasts, 14 patients) for postmastectomy breast reconstruction was performed. Time to second-stage reconstruction, number of expansions, and relative initial fill volumes were compared between the 3 groups. Secondarily, complication rates were also considered, including seroma, infection, expander/implant explantation, removal of mesh, and capsular contracture. Statistical analysis was performed utilizing the Fisher exact test and the χ² test for categorical variables and the Mann-Whitney U test for continuous variables. Results: Relative initial fill volumes, number of expansions, and time to second-stage reconstruction showed no statistical difference between the acellular dermal matrix and Gore DualMesh groups (P = .494, P = .146, and P = .539, respectively). Furthermore, the Gore DualMesh group underwent significantly fewer fills. Conclusion: Gore DualMesh represents a safe alternative to acellular dermal matrix for breast reconstruction with similar aesthetic results in certain patients at a fraction of the cost.

  13. Use of Lagrangian transport models and Sterilized High Volume Sampling to pinpoint the source region of Kawasaki disease and determine the etiologic agent

    Science.gov (United States)

    Curcoll Masanes, Roger; Rodó, Xavier; Anton, Jordi; Ballester, Joan; Jornet, Albert; Nofuentes, Manel; Sanchez-Manubens, Judith; Morguí, Josep-Anton

    2015-04-01

    Kawasaki disease (KD) is an acute coronary-artery vasculitis of young children, and still a medical mystery after more than 40 years. A former study [Rodó et al. 2011] demonstrated that certain patterns of winds in the troposphere flowing from Asia were associated with the times of the annual peak in KD cases and with days having anomalously high numbers of KD patients. In a later study [Rodó et al. 2014], we used residence times from an air transport model to pinpoint the source region for KD. Simulations were generated from locations spanning Japan for days with either high or low KD incidence. To cope with the stationarity of synoptic situations, only trajectories for the winter months, when KD cases peak, were considered. Trajectories traced back in time 10 days for each dataset and location were generated using the FLEXPART Lagrangian particle dispersion model (Version 8.23 [Stohl et al. 2005]) run in backward mode. The particles modeled were air tracers, with 10,000 particles used in each model run. The model output used was residence time, on an output grid of 0.5° latitude × longitude with a time resolution of 3 h. The data input for the FLEXPART model was gridded atmospheric wind velocity from the European Centre for Medium-Range Weather Forecasts Re-Analysis (ERA-Interim at 1°). Aggregates of winter-period back-trajectories were calculated for three different regions of Japan, and a common source of air masses was located for periods with high KD incidence. Knowing the wind trajectories from the air transport models, a sampling methodology was developed to capture the possible etiological agent or other tracers that could have been released together with it. This methodology is based on sterilized filtering of high volumes of the transported air at mid-tropospheric levels by aircraft sampling, followed by analysis of these filters with adequate techniques. High purity

  14. Bubble reconstruction method for wire-mesh sensors measurements

    Science.gov (United States)

    Mukin, Roman V.

    2016-08-01

    A new algorithm is presented for post-processing of void fraction measurements with wire-mesh sensors, particularly for identifying and reconstructing bubble surfaces in a two-phase flow. The method combines the bubble recognition algorithm presented in Prasser (Nuclear Eng Des 237(15):1608, 2007) with the Poisson surface reconstruction algorithm developed in Kazhdan et al. (Poisson surface reconstruction. In: Proceedings of the fourth Eurographics symposium on geometry processing, 2006). To verify the proposed technique, the reconstructed individual bubble shapes were compared with those obtained numerically in Sato and Ničeno (Int J Numer Methods Fluids 70(4):441, 2012), and the difference between reconstructed and reference bubble shapes was used to estimate the accuracy of the proposed algorithm. Next, the algorithm was applied to void fraction measurements performed by means of wire-mesh sensors in a rod bundle geometry in Ylönen (High-resolution flow structure measurements in a rod bundle. Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 20961, 2013). The reconstructed bubble shape yields bubble surface area and volume, and hence the Sauter diameter d_{32} as well. The Sauter diameter proved more suitable for bubble size characterization than the volumetric diameter d_{30}, and proved capable of capturing the bi-disperse bubble size distribution in the flow. The effect of a spacer grid was studied as well: for the given spacer grid and the flow rates considered, the bubble size frequency distribution peaked at almost the same position in all cases, approximately at d_{32} = 3.5 mm. This finding may be related to the specific geometry of the spacer grid or the air injection device used in the experiments, or to more fundamental properties of the bubble breakup and coagulation processes. In addition, an application of the new algorithm for reconstruction of a large air-water interface in a tube bundle is
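    The Sauter diameter above can be computed directly from the reconstructed surface area and volume of each bubble, since d_32 = 6·ΣV/ΣA. A minimal sketch (the function name and test values are illustrative, not from the paper):

```python
import math

def sauter_diameter(volumes, areas):
    """Sauter mean diameter d_32 from reconstructed bubble volumes and
    surface areas: d_32 = 6 * sum(V) / sum(A).

    For a sphere V = pi d^3 / 6 and A = pi d^2, so this reduces to the
    familiar moment ratio sum(d^3) / sum(d^2)."""
    return 6.0 * sum(volumes) / sum(areas)

# Sanity check with two spherical bubbles of diameter 2 mm and 4 mm:
# sum(d^3)/sum(d^2) = (8 + 64) / (4 + 16) = 3.6 mm
ds = [2.0, 4.0]
vols = [math.pi * d**3 / 6 for d in ds]
ars = [math.pi * d**2 for d in ds]
print(round(sauter_diameter(vols, ars), 3))  # → 3.6
```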

  15. Nonhydrostatic adaptive mesh dynamics for multiscale climate models (Invited)

    Science.gov (United States)

    Collins, W.; Johansen, H.; McCorquodale, P.; Colella, P.; Ullrich, P. A.

    2013-12-01

    Many of the atmospheric phenomena with the greatest potential impact in future warmer climates are inherently multiscale. Such meteorological systems include hurricanes and tropical cyclones, atmospheric rivers, and other types of hydrometeorological extremes. These phenomena are challenging to simulate in conventional climate models because the uniform model resolutions are coarse relative to the native nonhydrostatic scales of the phenomenological dynamics. To enable studies of these systems with sufficient local resolution for the multiscale dynamics, yet with sufficient speed for climate-change studies, we have adapted existing adaptive mesh dynamics for the DOE-NSF Community Atmosphere Model (CAM). In this talk, we present an adaptive, conservative finite volume approach for moist nonhydrostatic atmospheric dynamics. The approach is based on the compressible Euler equations on 3D thin spherical shells, where the radial direction is treated implicitly (using a fourth-order Runge-Kutta IMEX scheme) to eliminate time step constraints from vertical acoustic waves. Refinement is performed only in the horizontal directions. The spatial discretization is the equiangular cubed-sphere mapping, with a fourth-order accurate discretization to compute flux averages on faces. By using both space- and time-adaptive mesh refinement, the solver allocates computational effort only where greater accuracy is needed. The resulting method is demonstrated to be fourth-order accurate for model problems, robust at solution discontinuities, and stable for large aspect ratios. We present comparisons using a simplified physics package for dycore comparisons of moist physics, including a Hadley cell lifting an advected tracer into the upper atmosphere with horizontal adaptivity.

  16. Deep SSI after mesh-mediated groin hernia repair: management and outcome in an Emergency Surgery Department.

    Science.gov (United States)

    Salamone, G; Licari, L; Augello, G; Campanella, S; Falco, N; Tutino, R; Cocorullo, G; Gullo, R; Raspanti, C; De Marco, P; Porrello, C; Profita, G; Gulotta, G

    2017-01-01

    Mesh-mediated groin hernia repair is considered the gold-standard procedure, with a low recurrence rate. A deep surgical site infection (SSI) is rarely seen when a synthetic prosthesis is used. We describe a rare case of bilateral deep SSI after mesh-mediated groin hernia repair. Diagnosis was made through physical examination and radiological exams. Microbiological samples identified a methicillin-resistant Staphylococcus aureus responsible for the infection. Targeted therapy was given, and re-operation was performed to remove the infected prosthesis and to apply a biological one to create the fibrous scaffold. During follow-up, right-side recurrence was observed, and a tru-cut biopsy of the fascia was obtained to identify the cause of the recurrence. The combination of antibiotic therapy and surgical re-operation seems to be the correct way to approach deep SSI after mesh-mediated groin hernia repair. The use of a biological mesh after synthetic mesh removal seems to improve the final outcome.

  17. Droplet sampling of an oil-based and two water-based antievaporant ultra-low volume insecticide formulations using Teflon- and magnesium oxide-coated slides.

    Science.gov (United States)

    Chaskopoulou, Alexandra; Latham, Mark D; Pereira, Roberto M; Koehler, Philip G

    2013-06-01

    We estimated the diameters below which 50% and 90% of the droplet volume exist (Dv50 and Dv90, respectively) for 1 oil-based (Permanone 30-30) and 2 water-based (AquaReslin, Aqua-K-Othrine) antievaporant aerosols (with the Film Forming Aqueous Spray Technology [FFAST]) using Teflon- and magnesium oxide (MgO)-coated slides, and determined whether aging of the droplets on the slides (up to 60 min) had any significant effect on the Dv50 and Dv90 calculations. There were no significant differences in either Dv50 or Dv90 estimates on MgO-coated slides between 0 min and 60 min for all 3 products tested. On Teflon-coated slides, the only product that showed a significant difference between 0 min and 60 min in both Dv50 and Dv90 estimates was Aqua-K-Othrine, perhaps due to a difference in formulation components. Specifically, both the Dv50 and Dv90 values at 60 min decreased by approximately 50% compared with the values at 0 min. For the other 2 products, AquaReslin and Permanone, aging of droplets on Teflon up to 60 min had no significant effect on Dv50 and Dv90 values. To further investigate the behavior of Aqua-K-Othrine droplets on Teflon-coated slides, we observed the droplets immediately after spraying and at 10-min intervals under different conditions of temperature and humidity. The majority of the shrinkage occurred within the first 10 min after impaction on the slides under all conditions tested, so in most field situations, where slides are read several hours or days after collection, this shrinkage would not be observed. The MgO-coated slides should be the preferred field method for sampling droplets of Aqua-K-Othrine with the FFAST antievaporant technology.
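    Dv50 and Dv90 are volume-weighted percentile diameters: droplet volume scales with the cube of the diameter, so one scans the cumulative cubed diameters of the sorted sample until the target volume fraction is reached. A minimal sketch (the sample diameters are illustrative, not measured data from the study):

```python
def dv_percentile(diameters, pct):
    """Diameter below which fraction `pct` of total spray volume lies
    (Dv50 for pct=0.5, Dv90 for pct=0.9). Droplet volume scales with
    d^3, so we accumulate cubed diameters of the sorted sample."""
    ds = sorted(diameters)
    total = sum(d**3 for d in ds)
    cum = 0.0
    for d in ds:
        cum += d**3
        if cum >= pct * total:
            return d
    return ds[-1]

# Illustrative sample: many small droplets, a few large ones
drops = [10] * 20 + [20] * 10 + [30]  # diameters, e.g. in micrometers
print(dv_percentile(drops, 0.5), dv_percentile(drops, 0.9))  # → 20 30
```

    Note how the large droplets dominate: a single 30 µm droplet holds as much liquid as twenty-seven 10 µm droplets, which is why Dv50 and Dv90 sit well above the count-median diameter.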

  18. An experience in mesh generation for three-dimensional calculation of potential flow around a rotating propeller

    Science.gov (United States)

    Jou, W.-H.

    1982-01-01

    An attempt is made to develop a three-dimensional, finite volume computational code for highly swept, twisted, small aspect ratio propeller blades with supersonic tip speeds, in a way that accounts for cascade effects, hub-induced flow, and nonlinear transonic effects. Attention is presently given to the generation of a computational mesh for such a complex propeller configuration, with the aim of sharing developmental process experience. The problem treated is unique, in that blade chord, blade length, hub length and blade-to-blade distance represent several characteristic length scales among which there is considerable disparity. An ad hoc mesh-generation scheme is accordingly developed.

  19. Computed tomographic measurements of mesh shrinkage after laparoscopic ventral incisional hernia repair with an expanded polytetrafluoroethylene mesh.

    Science.gov (United States)

    Schoenmaeckers, Ernst J P; van der Valk, Steef B A; van den Hout, Huib W; Raymakers, Johan F T J; Rakic, Srdjan

    2009-07-01

    The potential for shrinkage of intraperitoneally implanted meshes for laparoscopic repair of ventral and incisional hernia (LRVIH) remains a concern. Numerous experimental studies on this issue have reported very inconsistent results. Expanded polytetrafluoroethylene (ePTFE) mesh has the unique property of being visible on computed tomography (CT). We therefore conducted an analysis of CT findings in patients who had previously undergone LRVIH with an ePTFE mesh (DualMesh, WL Gore, Flagstaff, AZ, USA) in order to evaluate the shrinkage of implanted meshes. Of 656 LRVIH patients with DualMesh, all patients who subsequently underwent CT scanning were identified, and only those with a precisely known transverse diameter of the implanted mesh and with CT scans made more than 3 months postoperatively were selected (n = 40). Two radiologists who were blinded to the size of the implanted mesh measured in consensus the maximal transverse diameter of the meshes using the AquariusNET program (TeraRecon Inc., San Mateo, CA, USA). Mesh shrinkage was defined as the relative loss of transverse diameter compared with the original transverse diameter of the mesh. The mean time from LRVIH to CT scan was 17.9 months (range 3-59 months). The mean shrinkage of the mesh was 7.5% (range 0-23.7%). For 11 patients (28%) there was no shrinkage at all. Shrinkage of 1-10% was found in 16 patients (40%), of 10-20% in 10 patients (25%), and of 20-24% in 3 patients (7.5%). No correlation was found between the time elapsed from LRVIH to CT and shrinkage. There were two recurrences, one possibly related to shrinkage. Our observations indicate that shrinkage of DualMesh is remarkably lower than has been reported in experimental studies (8-51%). This study is the first to address the problem of shrinkage after intraperitoneal implantation of synthetic mesh in a clinical setting.
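    The study's shrinkage metric is a simple relative loss of transverse diameter. As a worked example (the diameters below are illustrative, not patient data):

```python
def mesh_shrinkage_pct(original_mm, measured_mm):
    """Mesh shrinkage as defined in the study: the relative loss of
    transverse diameter, in percent, compared with the original
    transverse diameter of the implanted mesh."""
    return (original_mm - measured_mm) / original_mm * 100.0

# A mesh implanted at 200 mm that measures 185 mm on CT has shrunk by
# 7.5%, matching the study's reported mean shrinkage
print(mesh_shrinkage_pct(200.0, 185.0))  # → 7.5
```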

  20. A comparative study of postoperative complications of lightweight mesh and conventional prolene mesh in Lichtenstein hernia repair

    Directory of Open Access Journals (Sweden)

    Gugri Mukthinath

    2016-06-01

    Background: Inguinal hernia repair is the most frequently performed operation in any general surgical unit. The complications of using mesh have been the rationale for examining the role of mesh in hernia repair in detail, investigating the biocompatibility of different mesh modifications, and challenging old mesh concepts. The present study therefore compares a lightweight mesh (Ultrapro) with conventional prolene mesh in Lichtenstein hernia repair. Methods: Thirty-one patients with primary unilateral inguinal hernia underwent either lightweight mesh Lichtenstein hernioplasty or standard prolene mesh Lichtenstein hernioplasty. The patients were followed in the surgical OPD at 1 month, 6 months, and 1 year for time taken to return to normal activities, chronic groin pain, foreign body sensation, seroma formation, and recurrence. Results: Chronic pain among patients in the standard prolene mesh group at 1 month, 6 months, and 1 year of follow-up was seen in 45.2%, 16%, and 3.2% of the patients, respectively; in the lightweight mesh group it was seen in 32.2% and 6.4% at 1 month and 6 months, and in none at one year. Foreign body sensation in the lightweight mesh group was significantly less than in the standard prolene mesh group. Time taken to return to work was relatively shorter among patients in the lightweight mesh group. There was no recurrence in either group. Conclusion: Lightweight mesh is an ideal choice in Lichtenstein hernioplasty whenever feasible. [Int J Res Med Sci 2016;4(6):2130-2134]