WorldWideScience

Sample records for meshes computational analysis

  1. Sierra toolkit computational mesh conceptual model

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
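
    The report is conceptual, but the domain model it describes (mesh entities with topological ranks, globally unique identifiers, parallel ownership, and relations between entities) can be sketched compactly. The Python sketch below is purely illustrative; the class and field names are hypothetical and are not the Sierra Toolkit API.

      from dataclasses import dataclass, field

      # Entity ranks of an unstructured mesh: nodes, edges, faces, elements.
      NODE, EDGE, FACE, ELEMENT = range(4)

      @dataclass
      class Entity:
          rank: int                # topological rank (NODE .. ELEMENT)
          gid: int                 # globally unique identifier
          owner: int               # owning processor in a distributed mesh
          relations: list = field(default_factory=list)  # e.g. element -> nodes

      # A quadrilateral element owned by processor 0, related to four nodes:
      nodes = [Entity(NODE, g, owner=0) for g in (1, 2, 3, 4)]
      elem = Entity(ELEMENT, 7, owner=0, relations=nodes)
      print([n.gid for n in elem.relations])   # -> [1, 2, 3, 4]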

  2. Mesh Processing in Medical Image Analysis

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  3. Enhanced Computer Aided Simulation of Meshing and Contact With Application for Spiral Bevel Gear Drives

    Litvin, F

    1999-01-01

    An integrated tooth contact analysis (TCA) computer program for the simulation of meshing and contact of gear drives, which calculates transmission errors and the shift of bearing contact for misaligned gear drives, has been developed...

  4. Polyhedral meshing in numerical analysis of conjugate heat transfer

    Sosnowski, Marcin; Krzywanski, Jaroslaw; Grabowska, Karolina; Gnatowska, Renata

    2018-06-01

    Computational methods have been widely applied in conjugate heat transfer analysis. The first and crucial step in such research is the meshing process, which consists in dividing the analysed geometry into numerous small control volumes (cells). In Computational Fluid Dynamics (CFD) applications it is desirable to use hexahedral cells, as the resulting mesh is characterized by low numerical diffusion. Unfortunately, generating such a mesh can be very time-consuming, and in the case of complicated geometry it may not be possible to generate cells of good quality. Therefore tetrahedral cells have been implemented in commercial pre-processors. Their advantage is the ease of generation, even for very complex geometry. On the other hand, tetrahedrons cannot be stretched excessively without degrading the mesh quality factor, so a significantly larger number of cells has to be used in comparison with a hexahedral mesh in order to achieve reasonable accuracy. Moreover, the numerical diffusion of tetrahedral elements is significantly higher. Therefore polyhedral cells are proposed within the paper in order to combine the advantages of hexahedrons (low numerical diffusion resulting in an accurate solution) and tetrahedrons (rapid semi-automatic generation), and to overcome the disadvantages of both of the above-mentioned mesh types. The major benefit of a polyhedral mesh is that each individual cell has many neighbours, so gradients can be well approximated. Polyhedrons are also less sensitive to stretching than tetrahedrons, which results in better mesh quality and improved numerical stability of the model. In addition, numerical diffusion is reduced due to mass exchange over numerous faces, leading to a more accurate solution achieved with a lower cell count. Therefore a detailed comparison of numerical modelling results concerning conjugate heat transfer using tetrahedral and polyhedral meshes is presented in the paper.
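
    The claim that many neighbours give well-approximated gradients can be made concrete with a standard least-squares gradient reconstruction, a common CFD practice. The sketch below (illustrative, not the paper's code) shows that a polyhedral cell with six face neighbours recovers the exact gradient of a linear field, while a tetrahedron contributes at most four neighbour equations.

      import numpy as np

      def lsq_gradient(xc, phic, nbr_x, nbr_phi):
          """Least-squares gradient of phi at a cell centroid xc from
          neighbouring cell centroids and values."""
          d = np.asarray(nbr_x) - xc           # centroid offsets, shape (n, 3)
          dphi = np.asarray(nbr_phi) - phic    # value differences, shape (n,)
          grad, *_ = np.linalg.lstsq(d, dphi, rcond=None)  # d @ grad ~ dphi
          return grad

      rng = np.random.default_rng(0)
      nbr = rng.normal(size=(6, 3))            # six neighbours (polyhedral cell)
      true_grad = np.array([1.0, -2.0, 0.5])
      print(lsq_gradient(np.zeros(3), 0.0, nbr, nbr @ true_grad))  # ~ true_grad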

  5. Capacity Analysis of Wireless Mesh Networks

    M. I. Gumel

    2012-06-01

    The next-generation wireless networks experienced great development with the emergence of wireless mesh networks (WMNs), which can be regarded as a realistic solution for providing wireless broadband access. The limited available bandwidth makes capacity analysis of the network essential. While the network offers broadband wireless access to community and enterprise users, the problems that limit the network capacity must be addressed to exploit the optimum network performance. The wireless mesh network capacity analysis shows that the throughput of each mesh node degrades in the order of 1/n with an increasing number of nodes (n) in a linear topology. The degradation is found to be higher in a fully meshed network as a result of the increase in interference and MAC-layer contention in the network.
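
    A quick numeric illustration (assumed link rate, not a figure from the paper) of the 1/n throughput degradation in a linear chain of n nodes:

      # Per-node throughput bound W/n for an n-node chain, where W is the
      # nominal link capacity (the 54 Mbit/s value is an assumption).
      W = 54.0
      for n in (2, 4, 8, 16, 32):
          print(f"n = {n:2d} nodes -> per-node throughput <= {W / n:6.2f} Mbit/s")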

  6. Computational mesh generation for vascular structures with deformable surfaces

    Putter, S. de; Laffargue, F.; Breeuwer, M.; Vosse, F.N. van de; Gerritsen, F.A.; Philips Medical Systems, Best

    2006-01-01

    Computational blood flow and vessel wall mechanics simulations for vascular structures are becoming an important research tool for patient-specific surgical planning and intervention. An important step in the modelling process for patient-specific simulations is the creation of the computational mesh based on the segmented geometry. Most known solutions either require a large amount of manual processing or lead to a substantial difference between the segmented object and the actual computational domain. We have developed a chain of algorithms that leads to a closely coupled implementation of image segmentation with deformable models and 3D mesh generation. The resulting processing chain is very robust and leads both to an accurate geometrical representation of the vascular structure and to high-quality computational meshes. The chain of algorithms has been tested on a wide variety of shapes. A benchmark comparison of our mesh generation application with five other available meshing applications clearly indicates that the new approach outperforms the existing methods in the majority of cases. (orig.)

  7. Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes

    Baiges Aznar, Joan; Bayona Roa, Camilo Andrés

    2017-01-01

    In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions where the nodes of the mesh belong to a single processor, whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper a...

  8. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    Schwing, Alan Michael

    ... comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are lightweight enough not to require significant computational time, and they yield significant reductions in grid size.

  9. Efficient computation of clipped Voronoi diagram for mesh generation

    Yan, Dongming; Wang, Wen Ping; Lévy, Bruno; Liu, Yang

    2013-01-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
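
    The paper computes the clipped Voronoi diagram exactly; as a hedged, self-contained illustration of the concept, the sketch below discretizes a compact 2D domain, labels each sample by its nearest site (a discrete clipped Voronoi diagram), and takes one Lloyd step toward the centroidal Voronoi tessellation used for mesh generation.

      import numpy as np

      rng = np.random.default_rng(1)
      sites = rng.random((8, 2))                  # sites in the unit square

      # Sample the compact domain [0,1]^2; a clipped Voronoi cell is the set
      # of domain samples nearest to a site (anything outside the domain is
      # excluded by construction).
      xs = np.linspace(0.0, 1.0, 200)
      X, Y = np.meshgrid(xs, xs)
      P = np.column_stack([X.ravel(), Y.ravel()])
      label = np.argmin(((P[:, None, :] - sites[None, :, :]) ** 2).sum(-1), axis=1)

      # One Lloyd iteration: move each site to the centroid of its clipped cell.
      centroids = np.array([P[label == i].mean(axis=0) for i in range(len(sites))])
      print(np.linalg.norm(centroids - sites, axis=1))   # per-site movement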

  10. Data-Parallel Mesh Connected Components Labeling and Analysis

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
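
    The serial core of the identification task is plain union-find over cells that share faces; the multi-stage merging of labels across processors is the paper's contribution and is not shown in this minimal sketch.

      def find(parent, i):
          while parent[i] != i:                # path halving
              parent[i] = parent[parent[i]]
              i = parent[i]
          return i

      def label_submeshes(n_cells, shared_faces):
          parent = list(range(n_cells))
          for a, b in shared_faces:            # cells a and b share a face
              ra, rb = find(parent, a), find(parent, b)
              if ra != rb:
                  parent[rb] = ra              # union
          return [find(parent, i) for i in range(n_cells)]

      # Two connected sub-meshes: cells {0, 1, 2} and cells {3, 4}.
      print(label_submeshes(5, [(0, 1), (1, 2), (3, 4)]))   # -> [0, 0, 0, 3, 3]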

  11. Mesh size in Lichtenstein repair: a systematic review and meta-analysis to determine the importance of mesh size.

    Seker, D; Oztuna, D; Kulacoglu, H; Genc, Y; Akcil, M

    2013-04-01

    Small mesh size has been recognized as one of the factors responsible for recurrence after Lichtenstein hernia repair, due to insufficient coverage or mesh shrinkage. The Lichtenstein Hernia Institute recommends a 7 × 15 cm mesh that can be trimmed up to 2 cm from the lateral side. We performed a systematic review to determine surgeons' mesh size preference for the Lichtenstein hernia repair and made a meta-analysis to determine the effect of mesh size, mesh type, and length of follow-up time on recurrence. Two medical databases, PubMed and ISI Web of Science, were systematically searched using the key word "Lichtenstein repair." All full-text papers were selected. Publications mentioning mesh size were brought for further analysis. A mesh surface area of 90 cm² was accepted as the threshold for defining the mesh as small or large. Also, a subgroup analysis for recurrence pooled proportion according to the mesh size, mesh type, and follow-up period was done. In total, 514 papers were obtained. There were no prospective or retrospective clinical studies comparing mesh size and clinical outcome. A total of 141 papers were duplicated in both databases. As a result, 373 papers were obtained. The full text was available for over 95 % of papers. Only 41 (11.2 %) papers discussed mesh size. In 29 studies, a mesh larger than 90 cm² was used. The most frequently preferred commercial mesh size was 7.5 × 15 cm. No papers mentioned the size of the mesh after trimming. There was no information about the relationship between mesh size and patient BMI. The pooled proportion of recurrence for small meshes was 0.0019 (95 % confidence interval: 0.0007-0.0036), favoring large meshes to decrease the chance of recurrence. Recurrence becomes more marked when the follow-up period is longer than 1 year (p < 0.001). Heavy meshes also decreased recurrence (p = 0.015). This systematic review demonstrates that the size of the mesh used in Lichtenstein hernia repair is rarely reported.

  12. Capacity analysis of wireless mesh networks | Gumel | Nigerian ...

    ... number of nodes (n) in a linear topology. The degradation is found to be higher in a fully mesh network as a result of increase in interference and MAC layer contention in the network. Key words: Wireless mesh network (WMN), Adhoc network, Network capacity analysis, Bottleneck collision domain, Medium access control ...

  13. Mesh Processing in Medical Image Analysis - a Tutorial

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging...

  14. Development of a multimaterial, two-dimensional, arbitrary Lagrangian-Eulerian mesh computer program

    Barton, R.T.

    1982-01-01

    We have developed a large, multimaterial, two-dimensional Arbitrary Lagrangian-Eulerian (ALE) computer program. The special feature of an ALE mesh is that it can be either an embedded Lagrangian mesh, a fixed Eulerian mesh, or a partially embedded, partially remapped mesh. Remapping is used to remove Lagrangian mesh distortion. This general purpose program has been used for astrophysical modeling, under the guidance of James R. Wilson. The rationale behind the development of this program will be used to highlight several important issues in program design

  15. Second-order particle-in-cell (PIC) computational method in the one-dimensional variable Eulerian mesh system

    Pyun, J.J.

    1981-01-01

    As part of an effort to incorporate the variable Eulerian mesh into the second-order PIC computational method, a truncation error analysis was performed to calculate the second-order error terms for the variable Eulerian mesh system. The results show that the maximum mesh size increment/decrement is limited to α(Δr_i)^2, where Δr_i is the non-dimensional mesh size of the ith cell and α is a constant of order one. The numerical solutions of Burgers' equation by the second-order PIC method in the variable Eulerian mesh system were compared with its exact solution. It was found that the second-order accuracy of the PIC method was maintained under the above condition. Additional problems were analyzed using the second-order PIC method in both variable and uniform Eulerian mesh systems. The results indicate that the second-order PIC method in the variable Eulerian mesh system can provide substantial computational time savings with no loss in accuracy.

  16. r-Adaptive mesh generation for shell finite element analysis

    Cho, Maenghyo; Jun, Seongki

    2004-01-01

    An r-adaptive method, or moving grid technique, relocates a grid so that it becomes concentrated in a desired region. This concentration improves the accuracy and efficiency of finite element solutions. We apply the r-adaptive method to the computational mesh of shell surfaces, which is initially regular and uniform. The r-adaptive method, given by Liao and Anderson [Appl. Anal. 44 (1992) 285], aggregates the grid in regions with a relatively high weight function without any grid tangling. The stress error estimator is calculated on the initial uniform mesh to serve as a weight function. However, since the r-adaptive method moves the grid, shell surface geometry errors such as curvature error and mesh distortion error will increase. Therefore, to represent the exact geometry of a shell surface and to prevent surface geometric errors, we use Naghdi's shell theory and express the shell surface by a B-spline patch. In addition, by using a nine-node element, which is relatively insensitive to mesh distortion, we try to diminish the mesh distortion error introduced by the application of an r-adaptive method. In the numerical examples, it is shown that the values of the error estimator for a cylinder, hemisphere, and torus over the whole domain can be reduced effectively by using the mesh generated by the r-adaptive method. The reductions of the estimated relative errors are also demonstrated in the numerical examples. In particular, a new functional is proposed to construct an adjusted mesh configuration by considering a mesh distortion measure as well as the stress error function. The proposed weight function provides a reliable mesh adaptation method once a parameter value in the weight function is properly chosen.
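
    A hedged 1D analogue illustrates the r-adaptive idea: keeping the number of nodes fixed, relocate them so that a weight function (standing in here for the stress error estimator) is equidistributed between neighbouring nodes.

      import numpy as np

      def r_adapt_1d(weight, n_nodes, a=0.0, b=1.0, samples=2000):
          """Relocate n_nodes in [a, b] so the integral of the weight is
          equal between consecutive nodes (a 1D moving mesh)."""
          x = np.linspace(a, b, samples)
          cdf = np.cumsum(weight(x))
          cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalized, monotone
          return np.interp(np.linspace(0.0, 1.0, n_nodes), cdf, x)

      # Nodes concentrate where the weight (error indicator) is large.
      nodes = r_adapt_1d(lambda x: 1 + 50 * np.exp(-200 * (x - 0.5) ** 2), 11)
      print(np.round(nodes, 3))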

  17. Thermal Analysis of Concrete Storage Cask with Bird Screen Meshes

    Lee, Ju-Chan; Bang, K.S.; Yu, S.H.; Cho, S.S.; Choi, W.S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this study, a thermal analysis of the cask with bird screen meshes has been performed using a porous media model. The overpack consists of a structural material, a concrete shielding, and a ventilation system. Heat is removed from the cask to the environment by passive means only. Air inlet and outlet ducts are installed at the bottom and top of the cask for ventilation. Bird screen meshes are installed at the air inlet and outlet ducts to inhibit intrusion of debris from the external environment. The presence of these screens introduces an additional resistance to air flow through the ducts. Five types of bird screen meshes were considered in this study. The bird screen meshes at the inlet and outlet vents reduce the open area for flow by about 44-79 %. Flow resistance coefficients for the porous media model were deduced from a fluid flow analysis of the bird screen meshes. Thermal analyses of the concrete cask were then carried out using the porous media model. The analysis results agreed well with the test results, showing that the porous media model for the screen mesh is adequate for estimating the cask temperatures.

  18. Towards a real time computation of the dose in a phantom segmented into homogeneous meshes

    Blanpain, B.

    2009-10-01

    Automatic radiation therapy treatment planning necessitates a very fast computation of the dose delivered to the patient. We propose to compute the dose by segmenting the patient's phantom into homogeneous meshes and by associating, with each mesh, projections onto dose distributions pre-computed in homogeneous phantoms, along with weights that manage heterogeneities. The dose computation is divided into two steps. The first step concerns the meshes: projections and weights are set according to physical and geometrical criteria. The second step concerns the voxels: the dose is computed by evaluating the functions previously associated with their mesh. This method is very fast, in particular when there are few points of interest (several hundred); in this case, results are obtained in less than one second. With such performance, automatic treatment planning becomes practically feasible. (author)

  1. Improved mesh based photon sampling techniques for neutron activation analysis

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and of the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes, where the uniform sampling approach cannot be applied. (authors)
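
    For reference, a minimal Python version of the alias method (Vose's variant) is sketched below: O(n) table construction and O(1) work per sample. This illustrates the technique only; it is not the MCNP subroutine described in the paper.

      import random

      def build_alias(probs):
          """Vose's alias method: O(n) setup for O(1) discrete sampling."""
          n = len(probs)
          scaled = [p * n for p in probs]
          small = [i for i, s in enumerate(scaled) if s < 1.0]
          large = [i for i, s in enumerate(scaled) if s >= 1.0]
          prob, alias = [0.0] * n, [0] * n
          while small and large:
              s, l = small.pop(), large.pop()
              prob[s], alias[s] = scaled[s], l
              scaled[l] -= 1.0 - scaled[s]     # give the leftover mass to l
              (small if scaled[l] < 1.0 else large).append(l)
          for i in small + large:
              prob[i] = 1.0
          return prob, alias

      def sample(prob, alias):
          i = random.randrange(len(prob))      # pick a column uniformly
          return i if random.random() < prob[i] else alias[i]

      # Sample voxel indices proportional to hypothetical source strengths.
      prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
      counts = [0, 0, 0, 0]
      for _ in range(100000):
          counts[sample(prob, alias)] += 1
      print(counts)                            # roughly 1 : 2 : 3 : 4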

  2. Mesh influence on the fire computer modeling in nuclear power plants

    D. Lázaro

    2018-04-01

    Fire computer models allow the consequences of real fire scenarios to be studied. Their use in nuclear power plants has increased with the new regulations that apply risk-informed, performance-based methods for the analysis and design of fire safety solutions. The selection of the cell size is very important in these kinds of models. The mesh must establish a compromise between the geometry adjustment, the resolution of the equations, and the computation times. This paper studies the impact of several cell sizes, using the fire computer model FDS, to evaluate their relative effect on the final simulation results. For this purpose, we have employed several scenarios of interest for nuclear power plants. The conclusions offer relevant data for users and identify cell sizes that can be selected to guarantee the quality of the simulations and reduce the uncertainty of the results.

  3. Mesh Partitioning Algorithm Based on Parallel Finite Element Analysis and Its Actualization

    Lei Zhang

    2013-01-01

    In parallel computing based on finite element analysis, domain decomposition is a key preprocessing technique. Generally, a domain decomposition of a mesh can be realized through partitioning of a graph converted from the finite element mesh. This paper discusses methods for graph partitioning and ways to realize mesh partitioning. Relevant software is introduced, including the data structures and key functions of Metis and ParMetis. The writing, compiling, and testing of a mesh partitioning interface program based on these key functions are described. The results indicate some objective laws and characteristics that guide users of graph partitioning algorithms and software in writing parallel finite element method (PFEM) programs, and ideal partitioning effects can be achieved by realizing mesh partitioning through the program. The interface program can also be used directly by engineering researchers as a module of PFEM software, lowering the barrier to applying graph partitioning algorithms, improving calculation efficiency, and promoting the application of graph theory and parallel computing.
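
    The idea can be illustrated end to end: build the dual graph of the mesh (one vertex per element, an edge wherever two elements share a face) and hand that graph to a partitioner. The sketch below substitutes networkx's Kernighan-Lin bisection for Metis so that it stays self-contained; the toy grid graph stands in for a real mesh dual.

      import networkx as nx
      from networkx.algorithms.community import kernighan_lin_bisection

      # Dual graph of a toy 2x4 quad mesh: nodes are elements, edges join
      # face-adjacent elements. Metis/ParMetis would partition the same graph.
      G = nx.grid_2d_graph(2, 4)

      part_a, part_b = kernighan_lin_bisection(G, seed=0)
      # Edge cut = element interfaces between subdomains in the parallel FEA.
      cut = sum(1 for u, v in G.edges if (u in part_a) != (v in part_a))
      print(sorted(part_a), sorted(part_b), "edge cut:", cut)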

  4. Mesh construction for the 2-dimensional computational fracture mechanics using the I-DEAS

    Kim, Jong Wook; Kim, Tae Wan; Park, Keun Bae

    2000-09-01

    Recently, research activities have been reported regarding the generation of input data for crack problems with a minimum of effort, utilizing the general characteristics of the finite element modeling technique. Several automatic FE mesh generation methods for cracked structures of particular geometries and boundary conditions have been proposed, either using commercial codes or by developing in-house programs. In general, development of software to deal with a special crack problem can maximize efficiency and accuracy for a specific environment. However, the applicable range of such a scheme is usually very restricted, and a new program must be written for each case. On the other hand, commercial codes can be used for automatic mesh generation for a variety of geometries, but with additional effort to accommodate the singular elements needed for cracked-body analysis. In the present study, a procedure for the generation of input data for optimized computational fracture mechanics is developed as part of a series of efforts to establish the structural integrity evaluation procedure for the SMART reactor vessel assembly. Input data for the finite element analysis are prepared using the commercial code I-DEAS. The midside nodes near the crack front are shifted to the quarter points. The complete finite element model is then passed to the commercial finite element code ABAQUS for the stress analysis. The stress intensity factors are calculated using the J-integral method. To demonstrate the validity of the present procedure, a double-edge crack in a plate subjected to uniform tension is solved, and the effects of mesh construction are discussed in detail. The structural integrity evaluation procedure based on 2-D crack modeling is thereby established.
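
    The quarter-point shift mentioned above is a one-line computation: moving the midside node of a quadratic edge to one quarter of the edge length from the crack tip reproduces the 1/sqrt(r) crack-tip strain singularity. A small illustrative sketch:

      import numpy as np

      def quarter_point(crack_tip, far_node):
          """Midside-node position for a quarter-point (singular) element:
          one quarter of the way along the edge, measured from the crack tip."""
          crack_tip, far_node = map(np.asarray, (crack_tip, far_node))
          return crack_tip + 0.25 * (far_node - crack_tip)

      # Edge from crack tip (0,0) to corner node (8,0): the midside node moves
      # from the regular midpoint (4,0) to the quarter point (2,0).
      print(quarter_point((0.0, 0.0), (8.0, 0.0)))   # -> [2. 0.]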

  5. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    Pointer, William David [ORNL]

    2017-08-01

    The objective of this effort is to establish a strategy and process for the generation of suitable computational meshes for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate, and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods: the Grid Convergence Index (GCI) method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters, such as vector velocity components at a point in the domain, or surface-averaged quantities, such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current-generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
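
    As a worked illustration of the Grid Convergence Index evaluation (Roache's formulation, with made-up solution values rather than the report's data):

      import math

      def gci_fine(f1, f2, f3, r=2.0, Fs=1.25):
          """GCI for fine (f1), medium (f2) and coarse (f3) solutions with a
          constant grid refinement ratio r and safety factor Fs."""
          p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
          return p, Fs * abs((f1 - f2) / f1) / (r ** p - 1.0)

      # Hypothetical outlet-velocity magnitudes from three grids:
      p, gci = gci_fine(10.02, 10.10, 10.42)
      print(f"observed order p = {p:.2f}, GCI_fine = {100 * gci:.2f} %")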

  6. Combined in vivo and ex vivo analysis of mesh mechanics in a porcine hernia model.

    Kahan, Lindsey G; Lake, Spencer P; McAllister, Jared M; Tan, Wen Hui; Yu, Jennifer; Thompson, Dominic; Brunt, L Michael; Blatnik, Jeffrey A

    2018-02-01

    Hernia meshes exhibit variability in mechanical properties, and their mechanical match to tissue has not been comprehensively studied. We used an innovative imaging model of in vivo strain tracking and ex vivo mechanical analysis to assess effects of mesh properties on repaired abdominal walls in a porcine model. We hypothesized that meshes with dissimilar mechanical properties compared to native tissue would alter abdominal wall mechanics more than better-matched meshes. Seven mini-pigs underwent ventral hernia creation and subsequent open repair with one of two heavyweight polypropylene meshes. Following mesh implantation with attached radio-opaque beads, fluoroscopic images were taken at insufflation pressures from 5 to 30 mmHg on postoperative days 0, 7, and 28. At 28 days, animals were euthanized and ex vivo mechanical testing performed on full-thickness samples across repaired abdominal walls. Testing was conducted on 13 mini-pig controls, and on meshes separately. Stiffness and anisotropy (the ratio of stiffness in the transverse versus craniocaudal directions) were assessed. 3D reconstructions of repaired abdominal walls showed stretch patterns. As pressure increased, both meshes expanded, with no differences between groups. Over time, meshes contracted 17.65% (Mesh A) and 0.12% (Mesh B; p = 0.06). Mesh mechanics showed that Mesh A deviated from anisotropic native tissue more than Mesh B. Compared to native tissue, Mesh A was stiffer both transversely and craniocaudally. Explanted repaired abdominal walls of both treatment groups were stiffer than native tissue. Repaired tissue became less anisotropic over time, as mesh properties prevailed over native abdominal wall properties. This technique assessed 3D stretch at the mesh level in vivo in a porcine model. While the abdominal wall expanded, mesh-ingrown areas contracted, potentially indicating stresses at mesh edges. Ex vivo mechanics demonstrate that repaired tissue adopts mesh properties, suggesting...

  7. Computational performance of Free Mesh Method applied to continuum mechanics problems

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation; equivalently, it is a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions in fluid and solid mechanics obtained by employing FMM as well as the Enriched Free Mesh Method (EFMM), a newer version of FMM. Applications to fluid mechanics include compressible flow and the sounding mechanism in air-reed instruments; applications to solid mechanics include automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigenfrequency analysis of an engine block. PMID:21558753

  8. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or by subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by a Krylov subspace method to reduce computing time, and numerical test results are given.
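
    The replacement can be sketched generically: source iteration solves phi = K*phi + q by repeated sweeps, while a Krylov solver attacks (I - K)*phi = q directly, usually in far fewer sweeps. Below, a random contraction stands in for the scattering/sweep operator K; this is an illustration, not the MUST implementation.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      n = 200
      rng = np.random.default_rng(2)
      K = 0.9 * rng.random((n, n)) / n          # stand-in operator, ||K|| < 1
      q = rng.random(n)

      # Source iteration: phi <- K phi + q (slow when scattering dominates).
      phi = np.zeros(n)
      for it in range(500):
          phi_new = K @ phi + q
          if np.linalg.norm(phi_new - phi) < 1e-10:
              break
          phi = phi_new

      # Krylov alternative: solve (I - K) phi = q matrix-free with GMRES.
      A = LinearOperator((n, n), matvec=lambda v: v - K @ v)
      phi_gmres, info = gmres(A, q)
      print(it, info, np.linalg.norm(phi_gmres - phi))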

  9. Computing discrete signed distance fields from triangle meshes

    Bærentzen, Jakob Andreas; Aanæs, Henrik

    2002-01-01

    A method for generating a discrete, signed 3D distance field is proposed. Distance fields are used in a number of contexts. In particular the popular level set method is usually initialized by a distance field. The main focus of our work is on simplifying the computation of the sign when generating...

  10. Numerical methods and analysis of the nonlinear Vlasov equation on unstructured meshes of phase space

    Besse, Nicolas

    2003-01-01

    This work is dedicated to the mathematical and numerical studies of the Vlasov equation on phase-space unstructured meshes. In the first part, new semi-Lagrangian methods are developed to solve the Vlasov equation on unstructured meshes of phase space. As the Vlasov equation describes multi-scale phenomena, we also propose original methods based on a wavelet multi-resolution analysis. The resulting algorithm leads to an adaptive mesh-refinement strategy. The new massively-parallel computers allow to use these methods with several phase-space dimensions. Particularly, these numerical schemes are applied to plasma physics and charged particle beams in the case of two-, three-, and four-dimensional Vlasov-Poisson systems. In the second part we prove the convergence and give error estimates for several numerical schemes applied to the Vlasov-Poisson system when strong and classical solutions are considered. First we show the convergence of a semi-Lagrangian scheme on an unstructured mesh of phase space, when the regularity hypotheses for the initial data are minimal. Then we demonstrate the convergence of classes of high-order semi-Lagrangian schemes in the framework of the regular classical solution. In order to reconstruct the distribution function, we consider symmetrical Lagrange polynomials, B-Splines and wavelets bases. Finally we prove the convergence of a semi-Lagrangian scheme with propagation of gradients yielding a high-order and stable reconstruction of the solution. (author)
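
    A hedged 1D sketch of the semi-Lagrangian idea underlying the schemes analyzed: trace each grid point back along the characteristic and interpolate the previous solution at the foot of the characteristic (here with simple linear interpolation on a periodic grid).

      import numpy as np

      def semi_lagrangian_step(f, x, v, dt):
          """One step for f_t + v f_x = 0 on a periodic grid: follow the
          characteristic back and interpolate f at the departure points."""
          L = x[-1] - x[0] + (x[1] - x[0])       # period of the domain
          feet = (x - v * dt - x[0]) % L + x[0]  # departure points
          return np.interp(feet, x, f, period=L)

      x = np.linspace(0.0, 1.0, 128, endpoint=False)
      f = np.exp(-200 * (x - 0.3) ** 2)          # initial profile
      for _ in range(100):
          f = semi_lagrangian_step(f, x, v=1.0, dt=0.01)
      # After t = 1 the profile has advected once around the periodic domain,
      # so its maximum returns to x = 0.3 (up to interpolation error).
      print(x[np.argmax(f)])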

  11. A Novel Capacity Analysis for Wireless Backhaul Mesh Networks

    Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih

    This paper derives a closed-form expression for the inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe the bottleneck collision area of a WMN and calculate the upper bound of inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between transmission range and network radius. Simulations and numerical analysis show that our analytic solution estimates the inter-flow capacity of WMNs better than previous approaches.

  12. Combination of ray-tracing and the method of moments for electromagnetic radiation analysis using reduced meshes

    Delgado, Carlos; Cátedra, Manuel Felipe

    2018-05-01

    This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.

  13. A shape and mesh adaptive computational methodology for gamma ray dose from volumetric sources

    Mirza, N.M.; Ali, B.; Mirza, S.M.; Tufail, M.; Ahmad, N.

    1991-01-01

    Indoor external exposure of the population is dominated by gamma rays emitted from the walls and the floor of a room. A shape and mesh size adaptive flux calculation approach has been developed for a typical wall source. Parametric studies of the effect of mesh size on flux calculations have been performed. The optimum value of the mesh size is found to depend strongly on the distance from the source, the permissible limits on uncertainty in flux predictions, and the computer CPU time. To test the computations, a typical wall source was reduced to a point, a line, and an infinite volume source of finite thickness, and the computed flux values were compared with values from the corresponding analytical expressions for these sources. Results indicate that, under optimum conditions, the errors remain less than 6 % for the fluxes calculated with this approach when compared with the analytical values for the point and line source approximations. When the wall is simulated as an infinite volume source of finite thickness, the errors in the computed-to-analytical flux ratios remain large for smaller wall dimensions, but become less than 10 % when the wall dimensions are greater than ten mean free paths for 3 MeV gamma rays. Specific dose rates from this methodology also remain within 15 % of the values obtained by the Monte Carlo method. (author)

  14. Prophylactic mesh to prevent parastomal hernia after end colostomy: a meta-analysis and trial sequential analysis.

    López-Cano, M; Brandsma, H-T; Bury, K; Hansson, B; Kyle-Leinhase, I; Alamino, J G; Muysoms, F

    2017-04-01

    Prevention of parastomal hernia (PSH) formation is crucial, given the high prevalence and the difficulties in the surgical repair of PSH. To investigate the effect of a preventive mesh on PSH formation after an end colostomy, we aimed to meta-analyze all relevant randomized controlled trials (RCTs). We searched five databases. For each trial, we extracted risk ratios (RRs) of the effects of mesh or no mesh. The primary outcome was the incidence of PSH, with a minimum follow-up of 12 months and a clinical and/or computed tomography diagnosis. RRs were combined using the random-effects model (Mantel-Haenszel). To control the risk of type I error, we performed a trial sequential analysis (TSA). Seven RCTs with low risk of bias (451 patients) were included. The meta-analysis for the primary outcome showed a significant reduction of the incidence of PSH using a mesh (RR 0.43, 95% CI 0.26-0.71; P = 0.0009). Regarding the TSA calculation for the primary outcome, the accrued information size (451) was 187.1% of the estimated required information size (RIS) (241). Wound infection showed no statistical difference between groups (RR 0.77, 95% CI 0.39-1.54; P = 0.46). The PSH repair rate showed a significant reduction in the mesh group (RR 0.28, 95% CI 0.10-0.78; P = 0.01). PSH prevention with mesh when creating an end colostomy reduces the incidence of PSH and the risk of subsequent PSH repair, and does not increase wound infections. TSA shows that the RIS is reached for the primary outcome. Additional RCTs in this setting are not needed.
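
    As a worked illustration of Mantel-Haenszel pooling of risk ratios (the fixed-effect form, for brevity; the paper uses a random-effects model, and the 2x2 counts below are invented, not the trial data):

      # Each trial: (events_mesh, n_mesh, events_ctrl, n_ctrl) -- invented counts.
      trials = [(2, 50, 8, 48), (1, 30, 5, 33), (3, 60, 10, 57)]

      # Mantel-Haenszel pooled risk ratio.
      num = sum(a * n_c / (n_m + n_c) for a, n_m, c, n_c in trials)
      den = sum(c * n_m / (n_m + n_c) for a, n_m, c, n_c in trials)
      print(f"pooled RR = {num / den:.2f}")   # < 1 favours prophylactic mesh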

  15. Horizontal Air-Water Flow Analysis with Wire Mesh Sensor

    De Salve, M; Monni, G; Panella, B

    2012-01-01

    A Wire Mesh Sensor, based on the measurement of the local instantaneous conductivity of the two-phase mixture, has been used to characterize the fluid dynamics of the gas-liquid interface in a horizontal pipe flow. Experiments with a pipe of nominal diameter 19.5 mm and total length 6 m have been performed with air/water mixtures at ambient conditions. The flow quality ranges from 0.00016 to 0.22, and the superficial velocities range from 0.1 to 10.5 m/s for air and from 0.02 to 1.7 m/s for water; the flow patterns are stratified, slug/plug, and annular. A sensor (WMS200) with an inner diameter of 19.5 mm and a measuring matrix of 16×16 points equally distributed over the cross-section has been chosen for the measurements. From the analysis of the Wire Mesh Sensor digital signals, the average and local void fractions are evaluated and the flow patterns are identified with reference to space, time, and flow rate boundary conditions.

  16. The Analysis of the Usefulness of Welded Meshes to Embankment Reinforcement

    Ćwirko Marcin

    2017-09-01

    The aim of this paper was to answer the question of whether steel welded mesh can be used in building retaining walls of gabion baskets. In light of the currently used gabion structure solutions, among which double-woven mesh is much more popular, the focus was put on the possibility of using welded mesh. A numerical analysis was conducted to examine the behavior of welded and woven meshes subjected to various loads, and the results obtained for both types of mesh were directly compared. The maximal displacement of the mesh nodes was taken as the measure of the system behavior (for both undamaged and damaged mesh).

  17. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is becoming constrained by material resource usage and the overall computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this algorithm is to compute the WT with minimal memory usage by processing data as they are acquired. However, with large data, this technique is computationally expensive. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.

  18. MCR2S unstructured mesh capabilities for use in shutdown dose rate analysis

    Eade, T.; Stonell, D.; Turner, A.

    2015-01-01

    Highlights: • Advancements in shutdown dose rate calculations will be needed as fusion moves from experimental reactors to full-scale demonstration reactors, in order to ensure the safety of personnel. • The MCR2S shutdown dose rate tool has been modified to allow shutdown dose rate calculations using an unstructured mesh. • The unstructured mesh capability of MCR2S was used on three shutdown dose rate models: a simple sphere, the ITER computational benchmark, and the DEMO computational benchmark. • The results showed reasonable agreement between the unstructured mesh approach and the CSG approach and highlighted the need to choose the unstructured mesh resolution carefully. - Abstract: As nuclear fusion progresses towards a sustainable energy source and the power of tokamak devices increases, a greater understanding of the radiation fields will be required. As well as on-load radiation fields, off-load or shutdown radiation fields are an important consideration for the safety and economic viability of a commercial fusion reactor. Previously, codes such as MCR2S have been written to predict the shutdown dose rates within, and in regions surrounding, a fusion reactor. MCR2S utilises a constructive solid geometry (CSG) model and a superimposed structured mesh to calculate 3-D maps of the shutdown dose rate. A new approach to MCR2S calculations is proposed and implemented, using a single unstructured mesh to replace both the CSG model and the superimposed structured mesh. This new MCR2S approach has been demonstrated on three models of increasing complexity: a sphere, the ITER computational shutdown dose rate benchmark, and the DEMO computational shutdown dose rate benchmark. In each case the results were compared to MCR2S calculations performed using CSG geometry with a superimposed structured mesh. It was concluded that the results from the unstructured mesh implementation of MCR2S compared well to the CSG structured mesh approach.

  19. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied to the computation of geometric moments of homogeneous objects. This advantage and restriction are shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, whereas the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
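
    The zeroth- and first-order cases of the boundary-based idea can be written compactly: for a closed, outward-oriented triangle mesh of a homogeneous object, signed tetrahedra against the origin give the volume and centroid by the divergence theorem. The sketch below covers only these low-order moments; the paper's algorithms extend to arbitrary order and to Zernike moments.

      import numpy as np

      def volume_and_centroid(verts, tris):
          """Zeroth and first geometric moments of a closed, outward-oriented
          triangle mesh via signed tetrahedra against the origin."""
          v0, v1, v2 = (verts[tris[:, k]] for k in range(3))
          sv = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0  # signed volumes
          volume = sv.sum()
          centroid = ((v0 + v1 + v2) / 4.0 * sv[:, None]).sum(axis=0) / volume
          return volume, centroid

      # Unit cube as 12 outward-oriented triangles.
      verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                       dtype=float)
      tris = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
                       [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
                       [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
      print(volume_and_centroid(verts, tris))   # -> (1.0, [0.5, 0.5, 0.5])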

  20. An efficient Adaptive Mesh Refinement (AMR) algorithm for the Discontinuous Galerkin method: Applications for the computation of compressible two-phase flows

    Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky

    2018-06-01

    We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct-, quad- and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of points of interest such as interfaces, geometrical features, or flow discontinuities. A local increase in the expansion order (p-refinement) at areas of high strain rate or vorticity magnitude results in an increase of the order of accuracy in regions of shear layers and vortices. A graph of unitarian trees, representing hexahedral, prismatic and tetrahedral elements, is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements, allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates the on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across the non-conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. The AMR methodology is also used for the modelling of the interaction between droplets and the carrier phase in a two-phase flow; this approach is applied to the analysis of a spray injected into a chamber of quiescent air, using the Eulerian...

  1. Streaming simplification of tetrahedral meshes.

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.

  2. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations to their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well...

  3. 3D face analysis by using Mesh-LBP feature

    Wang, Haoyu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    Objective: Face recognition is one of the widest applications of image processing. Its two-dimensional limitations, such as pose and illumination changes, restrict its accuracy and further development to a certain extent. Overcoming pose and illumination changes and the effects of self-occlusion is a research hotspot and difficulty, attracting more and more domestic and foreign experts and scholars. 3D face recognition fusing shape and texture descriptors has become a very promising research direction. Method: Our paper presents a 3D point cloud based mesh local binary pattern (Mesh-LBP) and performs feature extraction for 3D face recognition by fusing shape and texture descriptors. 3D Mesh-LBP not only retains the integrity of the 3D geometry but also reduces the need for normalization steps in the recognition process, because the triangle Mesh-LBP descriptor is calculated on the 3D grid. On the other hand, in view of the advantage of multi-modal consistency in face recognition, the LBP construction can fuse shape and texture information on the triangular mesh. In this paper, several operators are used to extract Mesh-LBP, such as the normal vectors of each triangle face and vertex, the Gaussian curvature, the mean curvature, the Laplace operator, and so on. Conclusion: First, Kinect devices acquire the 3D point cloud of the face; after pretreatment and normalization, it is transformed into a triangular grid, and Mesh-LBP features are extracted from the key salient parts of the face. For each local face region, the Mesh-LBP feature is calculated with the Gaussian curvature, mean curvature, Laplace operator, and so on. Experiments on our research database show that the method is robust and achieves high recognition accuracy.

  4. A Nonlinear Dynamic Model and Free Vibration Analysis of Deployable Mesh Reflectors

    Shi, H.; Yang, B.; Thomson, M.; Fang, H.

    2011-01-01

    This paper presents a dynamic model of deployable mesh reflectors, in which geometric and material nonlinearities of such a space structure are fully described. Then, by linearization around an equilibrium configuration of the reflector structure, a linearized model is obtained. With this linearized model, the natural frequencies and mode shapes of a reflector can be computed. The nonlinear dynamic model of deployable mesh reflectors is verified by using commercial finite element software in numerical simulation. As shall be seen, the proposed nonlinear model is useful for shape (surface) control of deployable mesh reflectors under thermal loads.
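
    The linearization step maps to the standard generalized eigenproblem K*phi = omega^2 M*phi for natural frequencies and mode shapes. A hedged sketch with a toy 3-DOF spring-mass chain standing in for the linearized reflector model:

      import numpy as np
      from scipy.linalg import eigh

      # Linearized model: M x'' + K x = 0  ->  K phi = omega^2 M phi.
      k, m = 1000.0, 2.0                       # toy stiffness and mass values
      K = k * np.array([[ 2.0, -1.0,  0.0],
                        [-1.0,  2.0, -1.0],
                        [ 0.0, -1.0,  1.0]])
      M = m * np.eye(3)

      w2, modes = eigh(K, M)                   # generalized symmetric eigenproblem
      print(np.sqrt(w2) / (2 * np.pi))         # natural frequencies in Hz;
                                               # mode shapes are columns of `modes`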

  5. Coarse mesh code development

    Lieberoth, J.

    1975-06-15

    The numerical solution of the neutron diffusion equation plays a very important role in the analysis of nuclear reactors. A wide variety of numerical procedures has been proposed, most of the frequently used methods being fundamentally based on the finite-difference approximation, where the partial derivatives are approximated by finite differences. For the complex geometries typical of practical reactor problems, the computational accuracy of the finite-difference method is seriously affected by the size of the mesh width relative to the neutron diffusion length and by the heterogeneity of the medium. Thus, a very large number of mesh points is generally required to obtain a reasonably accurate approximate solution of the multi-dimensional diffusion equation. Since the computation time is approximately proportional to the number of mesh points, a detailed multidimensional analysis based on the conventional finite-difference method is still expensive even with modern large-scale computers. Accordingly, there is a strong incentive to develop alternatives that can reduce the number of mesh points and still retain accuracy. One of the promising alternatives is the finite element method, which consists of the expansion of the neutron flux in piecewise polynomials. One of the advantages of this procedure is its flexibility in selecting the locations of the mesh points and the degree of the expansion polynomial. The small number of mesh points of the coarse grid makes it possible to store the results of several of the last outer iterations and to calculate well-extrapolated values from them by convenient formalisms. This holds especially if only one energy distribution of fission neutrons is assumed for all fission processes in the reactor, because the whole information of an outer iteration is contained in a field of fission rates whose size is the number of mesh points of the coarse grid.

  6. ANALYSIS OF COMBINED POLYSURFACES TO MESH SURFACES MATCHING

    Marek WYLEŻOŁ

    2014-06-01

    This article presents an example of the process of quantitatively evaluating the fit of a combined polysurface (NURBS class) to a surface mesh. The fitting of the polysurface and the evaluation of the obtained results were carried out in the environment of the CATIA v5 system. The obtained quantitative evaluations are shown graphically in the form of three-dimensional graphs and histograms. A pelvic bone STL model was used as the base surface mesh (the model was created by digitizing a didactic physical model).
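
    A minimal sketch of one way such a quantitative fit evaluation could be computed outside CATIA, assuming the polysurface has already been densely sampled into points upstream; the resulting deviation statistics are what the histograms above summarize.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_deviation(mesh_points, surface_samples):
    """Quantify polysurface-to-mesh fit by nearest-point deviations.

    mesh_points: (n, 3) vertices of the target STL mesh.
    surface_samples: (m, 3) points densely sampled on the fitted
        NURBS polysurface (sampling assumed done upstream).
    """
    dist, _ = cKDTree(surface_samples).query(mesh_points)
    counts, edges = np.histogram(dist, bins=50)   # data for the histogram
    return {"mean": dist.mean(), "max": dist.max(),
            "rms": np.sqrt((dist ** 2).mean())}, counts, edges
```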

  7. Efficient computation of the elastography inverse problem by combining variational mesh adaption and a clustering technique

    Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern

    2010-01-01

    This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue from the measured displacement field while considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted into a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by numerical examples.
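
    The clustering step can be sketched as simple interval binning of the per-element stiffness field; the equally spaced interval edges below are an assumption, not necessarily the authors' choice.

```python
import numpy as np

def cluster_parameters(E, n_intervals):
    """Sort per-element stiffness values into a fixed number of intervals,
    in the spirit of the clustering step described above (minimal sketch).

    Returns the clustered stiffness field and the cluster index per element;
    the number of inverse-problem unknowns drops to n_intervals.
    """
    edges = np.linspace(E.min(), E.max(), n_intervals + 1)
    labels = np.clip(np.digitize(E, edges) - 1, 0, n_intervals - 1)
    # each cluster is represented by the mean stiffness of its members
    centers = np.array([E[labels == c].mean() if (labels == c).any()
                        else 0.5 * (edges[c] + edges[c + 1])
                        for c in range(n_intervals)])
    return centers[labels], labels
```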

  8. Development of a mesh-type computer tomography for the two-phase flow

    Lee, Jae Young; Lee, In Wook

    1998-01-01

    This paper describes the development of a mesh-type computed tomography system for two-phase flow. The sensor consists of many parallel wires in orthogonal orientations. A demultiplexer circuit was developed to supply the driving voltage to the electrodes and to let the data acquisition system read the output voltage from the electrode unit. For image reconstruction, a direct inversion algorithm is adopted. Full automation is provided from data sensing to image construction. Through careful calibration and field tests in horizontal and vertical two-phase loops, the present sensor realistically detected images of solitary waves and slugs. This sensor could be a useful tool in laboratory experiments
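
    A simple linear-normalization model (not the paper's direct inversion algorithm) conveys how raw crossing-point voltages become a local void-fraction image; v_liquid and v_gas are hypothetical all-liquid and all-gas calibration frames.

```python
import numpy as np

def void_fraction_map(v, v_liquid, v_gas):
    """Map raw wire-mesh crossing-point voltages to a local void-fraction
    image by linear normalization between calibration frames (a common
    simple model; the paper's reconstruction is not reproduced here).

    v, v_liquid, v_gas: (n_wires, n_wires) arrays per acquisition frame;
    stacking frames over time yields the sequence of cross-section images.
    """
    alpha = (v_liquid - v) / (v_liquid - v_gas)
    return np.clip(alpha, 0.0, 1.0)
```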

  9. Analysis of dynamic meshing characteristic of planetary gear transmission in wind power increasing gearbox

    Wang Jungang

    2017-01-01

    The dynamic behavior of a planetary gear's tooth contact surface at different locations can better reflect operating conditions than that of a general gear pair. A nonlinear finite element algorithm was derived from the basic governing equation of contact dynamics. A finite element model of the planetary gear transmission in a wind-power speed-increasing gearbox was proposed that considers different meshing locations, based on a nonlinear finite element solution. The characteristics of the stress distribution at different meshing positions were analyzed, and the meshing process was simulated using finite element analysis. It was shown that the node stresses of the externally meshing planetary gear varied significantly with position. The analysis provides some useful insights into the performance of the planetary gear's tooth contact surface.

  10. Outcomes of Orbital Floor Reconstruction After Extensive Maxillectomy Using the Computer-Assisted Fabricated Individual Titanium Mesh Technique.

    Zhang, Wen-Bo; Mao, Chi; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin

    2015-10-01

    Orbital floor defects after extensive maxillectomy can cause severe esthetic and functional deformities. Orbital floor reconstruction using the computer-assisted fabricated individual titanium mesh technique is a promising method. This study evaluated the application and clinical outcomes of this technique. This retrospective study included 10 patients with orbital floor defects after maxillectomy performed from 2012 through 2014. A 3-dimensional individual stereo model based on mirror images of the unaffected orbit was obtained to fabricate an anatomically adapted titanium mesh using computer-assisted design and manufacturing. The titanium mesh was inserted into the defect using computer navigation. The postoperative globe projection and orbital volume were measured and the incidence of postoperative complications was evaluated. The average postoperative globe projection was 15.91 ± 1.80 mm on the affected side and 16.24 ± 2.24 mm on the unaffected side (P = .505), and the average postoperative orbital volume was 26.01 ± 1.28 and 25.57 ± 1.89 mL, respectively (P = .312). The mean mesh depth was 25.11 ± 2.13 mm. The mean follow-up period was 23.4 ± 7.7 months (12 to 34 months). Of the 10 patients, 9 did not develop diplopia or a decrease in visual acuity and ocular motility. Titanium mesh exposure was not observed in any patient. All patients were satisfied with their postoperative facial symmetry. Orbital floor reconstruction after extensive maxillectomy with an individual titanium mesh fabricated using computer-assisted techniques can preserve globe projection and orbital volume, resulting in successful clinical outcomes. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  11. Fast precalculated triangular mesh algorithm for 3D binary computer-generated holograms.

    Yang, Fan; Kaczorowski, Andrzej; Wilkinson, Tim D

    2014-12-10

    A new method for constructing computer-generated holograms using a precalculated triangular mesh is presented. The speed of calculation can be increased dramatically by exploiting both the precalculated base triangle and GPU parallel computing. Unlike algorithms using point-based sources, this method can reconstruct a more vivid 3D object instead of a "hollow image." In addition, there is no need to do a fast Fourier transform for each 3D element every time. A ferroelectric liquid crystal spatial light modulator is used to display the binary hologram within our experiment and the hologram of a base right triangle is produced by utilizing just a one-step Fourier transform in the 2D case, which can be expanded to the 3D case by multiplying by a suitable Fresnel phase plane. All 3D holograms generated in this paper are based on Fresnel propagation; thus, the Fresnel plane is treated as a vital element in producing the hologram. A GeForce GTX 770 graphics card with 2 GB memory is used to achieve parallel computing.
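
    The one-step-Fourier idea with a Fresnel phase plane can be sketched as follows; the grid size, pitch, and propagation distance are hypothetical, and binarization by the sign of the phase is one simple choice for a binary device, not necessarily the authors' exact encoding.

```python
import numpy as np

def binary_fresnel_hologram(field, wavelength, z, dx):
    """Binary Fresnel hologram of a 2-D complex object field (sketch).

    Multiplies the field by a Fresnel phase plane for propagation
    distance z, takes one FFT, and binarizes the resulting phase,
    mirroring the one-step-Fourier idea described above.
    """
    n = field.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    fresnel = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))
    u = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * fresnel)))
    return (np.angle(u) > 0).astype(np.uint8)   # binary phase hologram

# example: hologram of a point-like object at 633 nm, 10 cm away
obj = np.zeros((512, 512), dtype=complex); obj[256, 256] = 1.0
holo = binary_fresnel_hologram(obj, 633e-9, 0.10, 8e-6)
```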

  12. Polyhedral meshing as an innovative approach to computational domain discretization of a cyclone in a fluidized bed CLC unit

    Sosnowski Marcin

    2017-01-01

    Chemical Looping Combustion (CLC) is a technology that allows the separation of CO2, which is generated by the combustion of fossil fuels. The majority of process designs currently under investigation are systems of coupled fluidized beds. Advances in the development of power generation systems using CLC cannot be made without numerical modelling as a research tool. The primary and critical activity in numerical modelling is the computational domain discretization. It influences the numerical diffusion as well as the convergence of the model and therefore the overall accuracy of the obtained results. Hence an innovative approach to computational domain discretization using polyhedral (POLY) meshes is proposed in the paper. This method reduces both the numerical diffusion of the mesh and the time cost of preparing the model for subsequent calculation. The major advantage of a POLY mesh is that each individual cell has many neighbours, so gradients can be much better approximated in comparison to the commonly used tetrahedral (TET) mesh. POLYs are also less sensitive to stretching than TETs, which results in better numerical stability of the model. Therefore a detailed comparison of numerical modelling results concerning a subsection of a CLC system using tetrahedral and polyhedral meshes is covered in the paper.

  13. Adjoint-based Mesh Optimization Method: The Development and Application for Nuclear Fuel Analysis

    Son, Seongmin; Lee, Jeong Ik

    2016-01-01

    In this research, a method for optimizing the mesh distribution is proposed. The proposed method uses an adjoint-based optimization method (adjoint method). The optimized result is obtained by applying this meshing technique to the existing code input deck and is compared to the results produced with the uniform meshing method. Numerical solutions are calculated with an in-house 1D finite difference method code, neglecting axial conduction. The fuel radial node optimization was first performed to match the Fuel Centerline Temperature (FCT) best. This was followed by optimization of the axial nodes so that the Peak Cladding Temperature (PCT) is matched best. After obtaining the optimized radial and axial nodes, the nodalization was implemented into the system analysis code and transient analyses were performed to observe the performance of the optimized nodalization. The developed adjoint-based mesh optimization method is applied to MARS-KS, a nuclear system analysis code. Results show that the newly established method yields better results than the uniform meshing method from the numerical point of view. It is again stressed that the mesh optimized for the steady state can also give better numerical results even during a transient analysis
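
    The abstract's premise, that the node distribution alone changes the accuracy of the computed peak temperature, can be illustrated with a generic non-uniform 1-D finite-difference solver; this is not the adjoint method itself, and the grids and source term are purely illustrative.

```python
import numpy as np

def solve_conduction(x, rhs):
    """Steady 1-D conduction  T'' = -q(x)/k  with T = 0 at both ends,
    using the 3-point finite-difference formula on an arbitrary
    (non-uniform) node distribution x.  rhs: q(x)/k at the nodes."""
    n = len(x)
    A, b = np.zeros((n, n)), -rhs.astype(float)
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    for i in range(1, n - 1):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1] = 2.0 / (hl * (hl + hr))
        A[i, i] = -2.0 / (hl * hr)
        A[i, i + 1] = 2.0 / (hr * (hl + hr))
    return np.linalg.solve(A, b)

n = 9
uniform = np.linspace(0.0, 1.0, n)
t = np.linspace(-1.0, 1.0, n)
clustered = 0.5 + 0.5 * t * np.abs(t)     # same node count, packed near x = 0.5
for grid in (uniform, clustered):          # peaked source; exact peak = 1/pi^2
    T = solve_conduction(grid, np.sin(np.pi * grid))
    print(abs(T[n // 2] - 1.0 / np.pi**2))  # error in the peak temperature
```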

  14. Slug to churn transition analysis using wire-mesh sensor

    H. F. Velasco, P.; Ortiz-Vidal, L. E.; Rocha, D. M.; Rodriguez, O. M. H.

    2016-06-01

    A comparison between several theoretical slug-to-churn flow-pattern transition models and experimental data is performed. The flow-pattern database considers vertical upward air-water flow at standard temperature and pressure in 50 mm and 32 mm ID pipes. A brief description of the models and their phenomenology is presented. In general, the performance of the transition models is poor. We found that new experimental studies objectively describing both stable and unstable slug flow patterns are required. In this sense, the wire-mesh sensor (WMS) can assist in that aim. The potential of the WMS is outlined.

  15. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  16. Impact on Dose Coefficients Calculated with ICRP Adult Mesh-type Reference Computational Phantoms

    Yeom, Yeon Soo; Nguyen, Thang Tat; Choi, Chan Soo; Lee, Han Jin; Han, Hae Gin; Han, Min Cheol; Shin, Bang Ho; Kim, Chan Hyeong [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)

    2017-04-15

    In 2016, the International Commission on Radiological Protection (ICRP) formed a new Task Group (TG 103) within Committee 2. The ultimate aim of TG 103 is to develop mesh-type reference computational phantoms (MRCPs) that can address the dosimetric limitations of the currently used voxel-type reference computational phantoms (VRCPs) due to their limited voxel resolutions. The objective of the present study is to investigate the dosimetric impact of the adult MRCPs by comparing dose coefficients (DCs) calculated with the MRCPs for some external and internal exposure cases against the reference DCs in ICRP Publications 116 and 133, which were produced with the adult VRCPs. The comparison shows that the MRCPs generally provide very similar DCs for uncharged particles, but significantly different DCs for charged particles, owing to the improvements of the MRCPs.

  17. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  18. Mesh Generation and Adaption for High Reynolds Number RANS Computations, Phase I

    National Aeronautics and Space Administration — This proposal offers to provide NASA with an automatic mesh generator for the simulation of aerodynamic flows using Reynolds-Averaged Navier-Stokes (RANS) models....

  19. Mesh Generation and Adaption for High Reynolds Number RANS Computations, Phase II

    National Aeronautics and Space Administration — This proposal offers to provide NASA with an automatic mesh generator for the simulation of aerodynamic flows using Reynolds-Averaged Navier-Stokes (RANS) models....

  20. Software abstractions and computational issues in parallel structure adaptive mesh methods for electronic structure calculations

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1997-05-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  1. Finite element modeling of the human kidney for probabilistic occupant models: Statistical shape analysis and mesh morphing.

    Yates, Keegan M; Untaroiu, Costin D

    2018-04-16

    Statistical shape analysis was conducted on 15 pairs (left and right) of human kidneys. It was shown that the left and right kidneys differed significantly in size and shape. In addition, several common modes of kidney variation were identified using statistical shape analysis. Semi-automatic mesh morphing techniques were developed to efficiently create subject-specific meshes from a template mesh with a similar geometry. Subject-specific meshes as well as probabilistic kidney meshes were created from a template mesh. Mesh quality remained about the same as for the template mesh, while taking only a fraction of the time needed to create a mesh from scratch or to morph with manually identified landmarks. This technique can help enhance the quality of information gathered from experimental testing with subject-specific meshes, and can help predict injury more efficiently by creating models with the mean shape as well as models at the extremes of each principal component. Copyright © 2018 Elsevier Ltd. All rights reserved.
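
    The statistical-shape-analysis step can be sketched with plain PCA over corresponding vertex coordinates, assuming the kidney meshes have already been brought into point correspondence and alignment upstream (e.g., by Procrustes registration).

```python
import numpy as np

def shape_modes(meshes):
    """Statistical shape analysis by PCA (sketch).

    meshes: (n_subjects, n_points, 3) corresponding, aligned vertex sets.
    Returns the mean shape, the principal modes, and the variance per mode.
    """
    X = meshes.reshape(len(meshes), -1)            # one row per subject
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / (len(meshes) - 1)                 # variance per mode
    return mean.reshape(-1, 3), Vt.reshape(len(s), -1, 3), var

# a shape "at the extreme" of the first mode, as used for probabilistic models:
# extreme = mean_shape + 2.0 * np.sqrt(var[0]) * modes[0]
```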

  2. Analysis of Mesh Distribution Systems Considering Load Models and Load Growth Impact with Loops on System Performance

    Kumar Sharma, A.; Murty, V. V. S. N.

    2014-12-01

    The distribution system is the final link between the bulk power system and the consumer end. A distinctive load-flow solution method, based on Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL), is used for load-flow analysis of radial and weakly meshed networks. This method has excellent convergence characteristics for both radial and weakly meshed structures and is based on the bus-injection to branch-current and branch-current to bus-voltage matrices. The main contributions of the paper are: (i) an analysis of a weakly meshed network considering the number of loops added and their impact on the losses, the kW and kVAr requirements from the system, and the voltage profile; (ii) the impact of different load models, a realistic ZIP load model, and load growth on losses, voltage profile, and kVA and kVAr requirements; (iii) the impact of the addition of loops on losses, voltage profile, and kVA and kVAr requirements from the substation; and (iv) a comparison of system performance with the radial distribution system. Voltage stability is a major concern in the planning and operation of power systems. This paper also identifies the critical bus, the bus most sensitive to voltage collapse, in radial distribution networks: the node having the minimum value of the voltage stability index is the most sensitive node. Voltage stability index values are computed for the meshed network as loops are added to the system. Results have been obtained for the IEEE 33- and 69-bus test systems, and also for the radial distribution system for comparison.
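
    A sketch of the voltage-stability-index screening, using the widely cited radial-feeder index of Chakravorty and Das; the paper's exact index definition may differ, and all quantities are assumed to be in per unit.

```python
import numpy as np

def voltage_stability_index(v_send, r, x, p_recv, q_recv):
    """Voltage stability index at the receiving node of a branch, after the
    common radial-feeder index of Chakravorty and Das (an assumption here;
    the paper's index may be defined differently).  All values in per unit.
    The node with the minimum VSI is the most sensitive to collapse."""
    return (v_send**4
            - 4.0 * (p_recv * x - q_recv * r) ** 2
            - 4.0 * (p_recv * r + q_recv * x) * v_send**2)

# evaluate over every branch's receiving end after a load-flow solve,
# then take argmin to locate the critical bus
vsi = voltage_stability_index(np.array([1.0]), 0.05, 0.04, 0.3, 0.2)
print(vsi)
```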

  3. Calculation of local skin doses with ICRP adult mesh-type reference computational phantoms

    Yeom, Yeon Soo; Han, Haegin; Choi, Chansoo; Nguyen, Thang Tat; Lee, Hanjin; Shin, Bangho; Kim, Chan Hyeong; Han, Min Cheol

    2018-01-01

    Recently, Task Group 103 of the International Commission on Radiological Protection (ICRP) developed new mesh-type reference computational phantoms (MRCPs) for adult males and females in order to address the limitations of the current voxel-type reference phantoms described in ICRP Publication 110 due to their limited voxel resolutions and the nature of the voxel geometry. One of the substantial advantages of the MRCPs over the ICRP-110 reference phantoms is the inclusion of a 50-μm-thick radiosensitive skin basal-cell layer; however, a methodology for calculating the local skin dose (LSD), i.e., the maximum dose to the basal layer averaged over a 1-cm2 area, has yet to be developed. In the present study, a dedicated program for the LSD calculation with the MRCPs was developed based on the mean shift algorithm and the Geant4 Monte Carlo code. The developed program was used to calculate local skin dose coefficients (LSDCs) for electrons and alpha particles, which were then compared with the values given in ICRP Publication 116 that were produced with a simple tissue-equivalent cube model. The results of the present study show that the LSDCs of the MRCPs are generally in good agreement with the ICRP-116 values for alpha particles, but for electrons, significant differences are found at energies higher than 0.15 MeV. The LSDCs of the MRCPs are greater than the ICRP-116 values by as much as 2.7 times at 10 MeV, which is due mainly to the different curvature between the realistic MRCPs (i.e., curved) and the simple cube model (i.e., flat).
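
    The "average over 1 cm², then take the maximum" operation can be sketched on a flattened dose map; this is a simplification of the actual mean-shift search on the curved phantom skin, and the pixel size is a hypothetical parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_skin_dose(basal_dose, pixel_mm=1.0):
    """Maximum dose to the basal layer averaged over 1 cm^2 (sketch).

    basal_dose: 2-D map of dose scored in an unfolded basal-cell layer
    (a simplification; the program described above works on the curved
    phantom skin with a mean-shift search rather than a flat grid).
    """
    side = max(int(round(10.0 / pixel_mm)), 1)   # 1 cm x 1 cm window
    return uniform_filter(basal_dose, size=side, mode="nearest").max()
```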

  4. Computational movement analysis

    Laube, Patrick

    2014-01-01

    This SpringerBrief discusses the characteristics of spatiotemporal movement data, including uncertainty and scale. It investigates three core aspects of Computational Movement Analysis: conceptual modeling of movement and movement spaces, spatiotemporal analysis methods aiming at a better understanding of movement processes (with a focus on data mining for movement patterns), and the use of decentralized spatial computing methods in movement analysis. The author presents Computational Movement Analysis as an interdisciplinary umbrella for analyzing movement processes with methods from a range of fields.

  5. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

    Trobec, Roman

    2015-01-01

    This book is concentrated on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interested in these approaches.

  6. Opfront: mesh

    2015-01-01

    Mesh generation and visualization software based on the CGAL library. Folder content: drawmesh visualizes slices of the mesh (surface/volumetric) as wireframe on top of an image (3D); drawsurf visualizes surfaces of the mesh (surface/volumetric); img2mesh converts an isosurface in an image to a volumetric mesh (medit format); img2off converts an isosurface in an image to a surface mesh (off format); off2mesh converts a surface mesh (off format) to a volumetric mesh (medit format); reduce crops and resizes 3D images and stacks of images; data contains example data to test the library on.

  7. Implicit flux-split Euler schemes for unsteady aerodynamic analysis involving unstructured dynamic meshes

    Batina, John T.

    1990-01-01

    Improved algorithms for the solution of the time-dependent Euler equations are presented for unsteady aerodynamic analysis involving unstructured dynamic meshes. The improvements are recent developments in the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach which is naturally dissipative and captures shock waves sharply, with at most one grid point within the shock structure. The temporal discretization involves an implicit time-integration scheme using a Gauss-Seidel relaxation procedure which is computationally efficient for either steady or unsteady flow problems. For example, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady and unsteady flow results are presented for the NACA 0012 airfoil to demonstrate applications of the new Euler solvers. The unsteady results were obtained for the airfoil pitching harmonically about the quarter chord. The resulting instantaneous pressure distributions and lift and moment coefficients during a cycle of motion compare well with experimental data. A description of the Euler solvers is presented, along with results and comparisons which assess the capability.

  8. Analysis of the Numerical Diffusion in Anisotropic Mediums: Benchmarks for Magnetic Field Aligned Meshes in Space Propulsion Simulations

    Daniel Pérez-Grande

    2016-11-01

    This manuscript explores numerical errors in highly anisotropic diffusion problems. First, the paper addresses the use of regular structured meshes in numerical solutions versus meshes aligned with the preferential directions of the problem. Numerical diffusion in structured meshes is quantified by solving the classical anisotropic diffusion problem; the analysis is exemplified with an application to a numerical model of conducting fluids under magnetic confinement, where the rates of transport in directions parallel and perpendicular to a magnetic field are quite different. Numerical diffusion errors in this problem motivate the use of magnetic-field-aligned meshes (MFAM). The generation of this type of mesh presents some challenges; several meshing strategies are implemented and analyzed in order to provide insight into achieving acceptable mesh regularity. Second, gradient reconstruction methods for magnetically aligned meshes are addressed and numerical errors are compared for the structured and magnetically aligned meshes. It is concluded that using the latter provides a more correct and straightforward approach to solving problems where anisotropy is present, especially if the anisotropy level is high or difficult to quantify. The conclusions of the study may be extrapolated to the study of anisotropic flows different from conducting fluids.

  9. Comprehensive adaptive mesh refinement in wrinkling prediction analysis

    Selman, A.; Meinders, Vincent T.; Huetink, Han; van den Boogaard, Antonius H.

    2002-01-01

    A discretisation error indicator and indicators for contact-free wrinkling and for wrinkling with contact are, in a challenging task, brought together and used in a comprehensive approach to wrinkling prediction analysis in thin sheet metal forming processes.

  10. Development of three-dimensional ENRICHED FREE MESH METHOD and its application to crack analysis

    Suzuki, Hayato; Matsubara, Hitoshi; Ezawa, Yoshitaka; Yagawa, Genki

    2010-01-01

    In this paper, we describe a method for highly accurate three-dimensional analysis of a crack included in a large-scale structure. The Enriched Free Mesh Method (EFMM) is a method for improving the accuracy of the Free Mesh Method (FMM), which is a kind of meshless method. First, we developed an algorithm for the three-dimensional EFMM. An elastic problem was analyzed using the EFMM, and we found that its accuracy compares advantageously with the FMM and that the number of CG iterations is smaller. Next, we developed a method for calculating the stress intensity factor by employing the EFMM. A structure with a crack was analyzed using the EFMM, and the stress intensity factor was calculated by the developed method. The analysis results agreed very well with the reference solution. It was shown that the proposed method is very effective in the analysis of cracks included in large-scale structures. (author)

  11. Analysis of the Two-Regime Method on Square Meshes

    Flegg, Mark B.

    2014-01-01

    The two-regime method (TRM) has been recently developed for optimizing stochastic reaction-diffusion simulations [M. Flegg, J. Chapman, and R. Erban, J. Roy. Soc. Interface, 9 (2012), pp. 859-868]. It is a multiscale (hybrid) algorithm which uses stochastic reaction-diffusion models with different levels of detail in different parts of the computational domain. The coupling condition on the interface between different modeling regimes of the TRM was previously derived for onedimensional models. In this paper, the TRM is generalized to higher dimensional reaction-diffusion systems. Coupling Brownian dynamics models with compartment-based models on regular (square) two-dimensional lattices is studied in detail. In this case, the interface between different modeling regimes contains either flat parts or right-angle corners. Both cases are studied in the paper. For flat interfaces, it is shown that the one-dimensional theory can be used along the line perpendicular to the TRM interface. In the direction tangential to the interface, two choices of the TRM parameters are presented. Their applicability depends on the compartment size and the time step used in the molecular-based regime. The two-dimensional generalization of the TRM is also discussed in the case of corners. © 2014 Society for Industrial and Applied Mathematics.

  12. Efficacy of Prophylactic Mesh in End-Colostomy Construction: A Systematic Review and Meta-analysis of Randomized Controlled Trials.

    Wang, Shuanhu; Wang, Wenbin; Zhu, Bing; Song, Guolei; Jiang, Congqiao

    2016-10-01

    Parastomal hernia is a very common complication after colostomy, especially end-colostomy. It is unclear whether prophylactic placement of mesh at the time of stoma formation could prevent parastomal hernia formation after surgery for rectal cancer. A systematic review and meta-analysis were conducted to evaluate the efficacy of prophylactic mesh in end-colostomy construction. PubMed, Embase, and the Cochrane Library were searched, covering records entered from their inception to September 2015. Randomized controlled trials (RCTs) comparing stoma with mesh to stoma without mesh after surgery for rectal cancer were included. The primary outcome was the incidence of parastomal hernia. Pooled risk ratios (RR) with 95 % confidence intervals (CI) were obtained using random effects models. Six RCTs containing 309 patients were included. Parastomal hernia occurred in 24.4 % (38 of 156) of patients with mesh and 50.3 % (77 of 153) of patients without mesh. Meta-analysis showed a lower incidence of parastomal hernia (RR, 0.42; 95 % CI 0.22-0.82) and reoperation related to parastomal hernia (RR, 0.23; 95 % CI 0.06-0.89) in patients with mesh. Stoma-related morbidity was similar between mesh group and non-mesh group (RR, 0.65; 95 % CI 0.33-1.30). Prophylactic placement of a mesh at the time of a stoma formation seems to be associated with a significant reduction in the incidence of parastomal hernia and reoperation related to parastomal hernia after surgery for rectal cancer, but not the rate of stoma-related morbidity. However, the results should be interpreted with caution because of the heterogeneity among the studies.

  13. COMPUTATIONAL EFFICIENCY OF A MODIFIED SCATTERING KERNEL FOR FULL-COUPLED PHOTON-ELECTRON TRANSPORT PARALLEL COMPUTING WITH UNSTRUCTURED TETRAHEDRAL MESHES

    JONG WOON KIM

    2014-04-01

    In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved in the scattering source calculation, and use it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested on three-dimensional fully coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation by the discontinuous finite element method with unstructured tetrahedral meshes for geometrically complicated problems. The numerical tests show that the modified scattering kernel improves the speed of the elapsed time per iteration by a factor of 17-42, not only in single-CPU calculations but also in parallel computing with several CPUs.

  14. Image-Based Geometric Modeling and Mesh Generation

    2013-01-01

    As a new interdisciplinary research area, “image-based geometric modeling and mesh generation” integrates image processing, geometric modeling and mesh generation with finite element method (FEM) to solve problems in computational biomedicine, materials sciences and engineering. It is well known that FEM is currently well-developed and efficient, but mesh generation for complex geometries (e.g., the human body) still takes about 80% of the total analysis time and is the major obstacle to reduce the total computation time. It is mainly because none of the traditional approaches is sufficient to effectively construct finite element meshes for arbitrarily complicated domains, and generally a great deal of manual interaction is involved in mesh generation. This contributed volume, the first for such an interdisciplinary topic, collects the latest research by experts in this area. These papers cover a broad range of topics, including medical imaging, image alignment and segmentation, image-to-mesh conversion,...

  15. The influences of mesh subdivision on nonlinear fracture analysis for surface cracked structures

    Shimakawa, T.

    1991-01-01

    The leak-before-break (LBB) concept can be expected to be applied not only to safety assessment, but also to the rationalization of nuclear power plants. The development of a method to evaluate fracture characteristics is required to establish this concept. The finite element method (FEM) is one of the most useful tools for this evaluation. However, the influence of various factors on the solution is not well understood and the reliability has not been fully verified. In this study, elastic-plastic 3D analyses are performed for two kinds of surface cracked structure, and the influence of mesh design is discussed. The first problem is surface crack growth in a carbon steel plate subjected to tension loading. A crack extension analysis is performed under a generation phase simulation using the crack release technique. Numerical instability of the J-integral solution is observed when the number of elements in the thickness direction of the ligament is reduced to three. The influence of mesh design in the ligament on the solution is discussed. The second problem is a circumferential part-through crack in a carbon steel pipe subjected to a bending moment. Two kinds of mesh design are employed, and a comparison between two sets of results shows that the number of elements on the crack surface also affects the solution as well as the number of elements in the ligament. (author)

  16. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least-squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.

  17. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity-based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
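
    Landmark-based morphing of this kind can be sketched with a smooth scattered-data interpolant; thin-plate-spline radial basis functions are one common choice, not necessarily the interpolation used by the authors.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def morph_mesh(source_nodes, source_landmarks, target_landmarks):
    """Landmark-based mesh morphing (sketch).

    Fits a smooth 3-D deformation mapping the source landmarks onto the
    target landmarks and applies it to every node of the source
    (template) mesh.  The subsequent mesh-mapping refinement, moving
    nodes onto the bone boundary, is not shown here.
    """
    warp = RBFInterpolator(source_landmarks, target_landmarks,
                           kernel="thin_plate_spline")
    return warp(source_nodes)

# source_nodes: (n, 3) template mesh nodes;
# source_landmarks, target_landmarks: matched (m, 3) CT landmark sets
```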

  18. Computational Music Analysis

    This book provides an in-depth introduction and overview of current research in computational music analysis. Its seventeen chapters, written by leading researchers, collectively represent the diversity as well as the technical and philosophical sophistication of the work being done today … on well-established theories in music theory and analysis, such as Forte's pitch-class set theory, Schenkerian analysis, the methods of semiotic analysis developed by Ruwet and Nattiez, and Lerdahl and Jackendoff's Generative Theory of Tonal Music. The book is divided into six parts, covering … music analysis, the book provides an invaluable resource for researchers, teachers and students in music theory and analysis, computer science, music information retrieval and related disciplines. It also provides a state-of-the-art reference for practitioners in the music technology industry.

  19. Comparison of computed tomography based parametric and patient-specific finite element models of the healthy and metastatic spine using a mesh-morphing algorithm.

    O'Reilly, Meaghan Anne; Whyne, Cari Marisa

    2008-08-01

    A comparative analysis of parametric and patient-specific finite element (FE) modeling of spinal motion segments. To develop patient-specific FE models of spinal motion segments using mesh-morphing methods applied to a parametric FE model. To compare strain and displacement patterns in parametric and morphed models for both healthy and metastatically involved vertebrae. Parametric FE models may be limited in their ability to fully represent patient-specific geometries and material property distributions. Generation of multiple patient-specific FE models has been limited because of computational expense. Morphing methods have been successfully used to generate multiple specimen-specific FE models of caudal rat vertebrae. FE models of a healthy and a metastatic T6-T8 spinal motion segment were analyzed with and without patient-specific material properties. Parametric and morphed models were compared using a landmark-based morphing algorithm. Morphing of the parametric FE model and including patient-specific material properties both had a strong impact on magnitudes and patterns of vertebral strain and displacement. Small but important geometric differences can be represented through morphing of parametric FE models. The mesh-morphing algorithm developed provides a rapid method for generating patient-specific FE models of spinal motion segments.

  20. Shutdown dose rate analysis with CAD geometry, Cartesian/tetrahedral mesh, and advanced variance reduction

    Biondo, Elliott D.; Davis, Andrew; Wilson, Paul P.H.

    2016-01-01

    Highlights: • A CAD-based shutdown dose rate analysis workflow has been implemented. • Cartesian and superimposed tetrahedral meshes are fully supported. • Biased and unbiased photon source sampling options are available. • Hybrid Monte Carlo/deterministic techniques accelerate photon transport. • The workflow has been validated with the FNG-ITER benchmark problem. - Abstract: In fusion energy systems (FES), high-energy neutrons born from the burning plasma activate system components to form radionuclides. The biological dose rate that results from photons emitted by these radionuclides after shutdown—the shutdown dose rate (SDR)—must be quantified for maintenance planning. This can be done using the Rigorous Two-Step (R2S) method, which involves separate neutron and photon transport calculations coupled by a nuclear inventory analysis code. The geometric complexity and highly attenuating configurations of FES motivate the use of CAD geometry and advanced variance reduction for this analysis. An R2S workflow has been created with the new capability of performing SDR analysis directly from CAD geometry with Cartesian or tetrahedral meshes and with biased photon source sampling, enabling the use of the Consistent Adjoint Driven Importance Sampling (CADIS) variance reduction technique. This workflow has been validated against the Frascati Neutron Generator (FNG)-ITER SDR benchmark using both Cartesian and tetrahedral meshes and both unbiased and biased photon source sampling. All results are within 20.4% of experimental values, which constitutes satisfactory agreement. Photon transport using CADIS is demonstrated to yield speedups as high as 8.5·10^5 for problems using the FNG geometry.

  1. Sentiment Analysis of Web Sites Related to Vaginal Mesh Use in Pelvic Reconstructive Surgery.

    Hobson, Deslyn T G; Meriwether, Kate V; Francis, Sean L; Kinman, Casey L; Stewart, J Ryan

    2018-05-02

    The purpose of this study was to use sentiment analysis to describe online opinions toward vaginal mesh. We hypothesized that sentiment in legal Web sites would be more negative than that in medical and reference Web sites. We generated a list of relevant key words related to vaginal mesh and searched Web sites using the Google search engine. Each unique uniform resource locator (URL) was sorted into 1 of 6 categories: "medical", "legal", "news/media", "patient generated", "reference", or "unrelated". The sentiment of relevant Web sites, the primary outcome, was scored on a scale of -1 to +1, and mean sentiment was compared across all categories using 1-way analysis of variance. The Tukey test evaluated differences between category pairs. Google searches of 464 unique key words resulted in 11,405 URLs. Sentiment analysis was performed on 8029 relevant URLs (3472 "legal", 1625 "medical", 1774 "reference", 666 "news/media", 492 "patient generated"). The mean sentiment for all relevant Web sites was +0.01 ± 0.16; analysis of variance revealed significant differences between categories. Web sites categorized as "legal" and "news/media" had a slightly negative mean sentiment, whereas those categorized as "medical", "reference", and "patient generated" had slightly positive mean sentiments. The Tukey test showed differences between all category pairs except "medical" versus "reference", with the largest mean difference (-0.13) seen in the "legal" versus "reference" comparison. Web sites related to vaginal mesh have an overall mean neutral sentiment, and Web sites categorized as "medical", "reference", and "patient generated" have significantly higher sentiment scores than related Web sites in the "legal" and "news/media" categories.

  2. Unsteady Navier-Stokes computations over airfoils using both fixed and dynamic meshes

    Rumsey, Christopher L.; Anderson, W. Kyle

    1989-01-01

    A finite volume implicit approximate factorization method which solves the thin layer Navier-Stokes equations was used to predict unsteady turbulent flow airfoil behavior. At a constant angle of attack of 16 deg, the NACA 0012 airfoil exhibits an unsteady periodic flow field with the lift coefficient oscillating between 0.89 and 1.60. The Strouhal number is 0.028. Results are similar at 18 deg, with a Strouhal number of 0.033. A leading edge vortex is shed periodically near maximum lift. Dynamic mesh solutions for unstalled airfoil flows show general agreement with experimental pressure coefficients. However, moment coefficients and the maximum lift value are underpredicted. The deep stall case shows some agreement with experiment for increasing angle of attack, but is only qualitatively comparable past stall and for decreasing angle of attack.

  3. Design and analysis of a deployable truss for the large modular mesh antenna

    Meguro, Akira

    This paper describes the design and deployment analysis of large deployable modular mesh antennas. The key design criteria are deployability and the driving-force and latching-moment requirements. Reaction forces and moments due to the mesh and cable network seriously influence the driving force. These forces and moments can be precisely estimated by analyzing the cable network using the Cable Structure Analyzer (CASA). Deployment analysis is carried out using the Dynamic Analysis and Design System (DADS). The influence of alignment errors on the driving reaction force can be eliminated by replacing the joint element with a spring element. Joint slop is also modeled using discontinuous spring elements. The design approach for three types of deployable modules and the deployment characteristics of three bread-board models based on those designs are also presented. In order to study gravity effects on the deployment characteristics and the effects of the gravity compensation method, ground deployment analysis is carried out. A planned deployment test that will use aircraft parabolic flight to simulate a micro-gravity environment is also described.

  4. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-01-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated
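
    The core solver pattern, preconditioned conjugate gradients on a symmetric positive-definite system, can be sketched on a stand-in matrix; the Jacobi preconditioner below is a simple placeholder for DANTE's diffusion-based preconditioner.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# Stand-in SPD system for the second-order self-adjoint discretization
# solved by DANTE's core solver (illustrative sketch, not DANTE itself).
n = 1000
A = sp.diags([-1.0, 2.1, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# simple Jacobi (diagonal) preconditioner; DANTE uses an effective
# diffusion-based preconditioner for scattering-dominated problems
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

x, info = cg(A, b, M=M)
print(info)   # 0 indicates convergence
```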

  5. Computer aided safety analysis

    1988-05-01

    The document reproduces 20 selected papers from the 38 papers presented at the Technical Committee/Workshop on Computer Aided Safety Analysis organized by the IAEA in co-operation with the Institute of Atomic Energy in Otwock-Swierk, Poland on 25-29 May 1987. A separate abstract was prepared for each of these 20 technical papers. Refs, figs and tabs

  6. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the computational resources of the cluster when using a greater number of computational cores. Simulation results indicate that if the number of cores used is not equal to a multiple of the total number of cores per cluster node, there are allocation strategies which provide more efficient calculations.

  7. Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh

    Drumm, C.R.; Lorenz, J.

    1999-01-01

    A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers

  8. Crack growth simulation for plural crack using hexahedral mesh generation technique

    Orita, Y; Wada, Y; Kikuchi, M

    2010-01-01

    This paper describes a surface crack growth simulation using a new mesh generation technique. The generated mesh consists entirely of hexahedral elements. Hexahedral elements are suitable for the analysis of fracture mechanics parameters such as the stress intensity factor. The advantages of a hexahedral mesh are good analysis accuracy and a smaller number of degrees of freedom than a tetrahedral mesh. In this study, a growth simulation for multiple cracks is computed using the hexahedral mesh and the resulting distribution of the stress intensity factor is investigated.

  9. Computational analysis of a multistage axial compressor

    Mamidoju, Chaithanya

    Turbomachines are used extensively in the aerospace, power generation, and oil & gas industries. The efficiency of these machines is often an important factor and has led to a continuous effort to improve designs to achieve better efficiency. The axial-flow compressor is a major component in a gas turbine, with the turbine's overall performance depending strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade passage analysis, quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15-stage axial compressor is analyzed using a 3-D Navier-Stokes CFD solver in a parallel computing environment. The methodology is described for steady-state (frozen rotor-stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions, and tip clearance, and numerical issues such as turbulence model choice, advection model choice, and parallel processing performance are analyzed. A high sensitivity of the predictions to the above was found. Physical explanations of the flow features observed in the computational study are given. The total pressure rise versus mass flow rate was computed.

  10. Performance Analysis of On-Demand Routing Protocols in Wireless Mesh Networks

    Arafatur RAHMAN

    2009-01-01

    Wireless mesh networks (WMNs) have recently gained a lot of popularity due to their rapid deployment and instant communication capabilities. WMNs are dynamically self-organizing, self-configuring and self-healing, with the nodes in the network automatically establishing an ad hoc network and preserving the mesh connectivity. Designing a routing protocol for WMNs requires several aspects to be considered, such as wireless networks, fixed applications, mobile applications, scalability, better performance metrics, efficient routing within the infrastructure, load balancing, throughput enhancement, interference, robustness, etc. To support communication, various routing protocols have been designed for various networks (e.g., ad hoc, sensor, wired, etc.). However, not all of these protocols are suitable for WMNs, because of the architectural differences among the networks. In this paper, a detailed simulation-based performance study and analysis is performed on reactive routing protocols to verify their suitability for such networks. The Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR) and Dynamic MANET On-demand (DYMO) routing protocols are considered as representatives of the reactive routing protocols. The performance differentials are investigated using varying traffic load and number of sources. Based on the simulation results, recommendations are also made on how the performance of each protocol can be improved.

  11. Recent developments in the ROCS/MC code for retrieving local power information in coarse-mesh reactor analysis

    Grill, S.F.; Jonsson, A.; Crump, M.W.

    1983-01-01

    The inclusion of 3-D effects in PWR analysis is necessary for accurate predictions of reactivity, power distributions, and reactivity coefficients. The ROCS/MC code system has been developed by Combustion Engineering to provide 3-D coarse-mesh analysis (ROCS) with the capability to retrieve local information on flux, power and burnup (MC). A review of the finite-difference representation of the MC diffusion equation, along with recent improvements to the ROCS/MC system, is presented. These improvements include the implementation of fine-mesh radial boundary conditions and the internal calculation of coarse-mesh boundary conditions, the generalization of the imbedded calculation to account for the local neighboring environment, and the automation of ROCS/MC links to C-E's code system for in-core power distribution monitoring and core-follow analysis. The results of the ROCS/MC verification program are described and show good agreement with C-E's ROCS/PDQ-based methodologies.

  12. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  13. QMESH RENUM QPLOT, Mesh Generator on 2-D Bodies for Finite Element Method Analysis, with Plot Utility

    Jones, R.E.; Schkade, A.F.; Eyberger, L.R.

    1991-01-01

    1 - Description of problem or function: A set of five programs which make up a self-organising mesh generation package. QMESH generates meshes having quadrilateral elements on arbitrarily-shaped, two-dimensional (planar or axisymmetric) bodies. It is designed for use with two-dimensional finite element analysis applications. A flexible hierarchical input scheme is used to describe bodies to QMESH as collections of regions. A mesh for each region is developed independently, with the final assembly and bandwidth minimization performed by an independent program, RENUM or RENUM8. RENUM is applied when four-node elements are desired; eight-node elements (with mid-side nodes) may be obtained with RENUM8. QPLOT and QPLOT8 are plot programs for meshes generated by the QMESH/RENUM and QMESH/RENUM8 program pairs, respectively. QPLOT and QPLOT8 automatically section the mesh into appropriately-sized sections for legible display of node and element numbers. An overall plot showing the position of the selected plot areas is produced. 2 - Method of solution: The mesh generating process for each individual region begins with the installation of an initial mesh which is a transformation of a regular grid on the unit square. The dimensions and orientation of the initial mesh may be defined by the user or, optionally, may be chosen by QMESH. Various smoothing algorithms may be applied to the initial mesh, as sketched below. The mesh may then be 'restructured' using an iterative scheme involving 'element pair restructuring', 'acute element deletion', and smoothing. In element pair restructuring, the interface side between two elements is removed and placed between two different nodes belonging to the pair of elements, provided that the change produces an overall improvement in the shapes of the two elements. In acute element deletion, an element having one diagonal much shorter than the other is deleted by collapsing the short diagonal to zero length. The exact order in which restructuring, element
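
    A minimal sketch (my illustration, not QMESH's algorithm) of the simplest mesh "smoothing" pass of the kind mentioned above: Laplacian smoothing, which relaxes each interior node toward the centroid of its neighbors while boundary nodes stay fixed.

```python
import numpy as np

def laplacian_smooth(coords, neighbors, is_boundary, n_iters=10, relax=0.5):
    coords = coords.copy()
    for _ in range(n_iters):
        new = coords.copy()
        for v, nbrs in neighbors.items():
            if is_boundary[v] or not nbrs:
                continue
            centroid = coords[list(nbrs)].mean(axis=0)
            new[v] = coords[v] + relax * (centroid - coords[v])
        coords = new
    return coords

# Toy example: a 3x3 grid of nodes whose center node starts off-center.
coords = np.array([[0, 0], [1, 0], [2, 0],
                   [0, 1], [1.6, 1.4], [2, 1],
                   [0, 2], [1, 2], [2, 2]], dtype=float)
neighbors = {4: {1, 3, 5, 7}}                    # only the interior node moves
is_boundary = [True] * 9
is_boundary[4] = False
print(laplacian_smooth(coords, neighbors, is_boundary)[4])  # -> close to [1.0, 1.0]
```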

  14. Mesh and Time-Step Independent Computational Fluid Dynamics (CFD) Solutions

    Nijdam, Justin J.

    2013-01-01

    A homework assignment is outlined in which students learn Computational Fluid Dynamics (CFD) concepts of discretization, numerical stability and accuracy, and verification in a hands-on manner by solving physically realistic problems of practical interest to engineers. The students solve a transient-diffusion problem numerically using the common…
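
    A minimal sketch (generic verification practice, not the assignment's solution): estimating the observed order of accuracy from solutions on three systematically refined grids, as done in mesh and time-step independence studies. Here f1, f2, f3 are a scalar result on the fine, medium, and coarse grids with a constant refinement ratio r, and the sample values are invented.

```python
import math

def observed_order(f1, f2, f3, r=2.0):
    # p from the standard three-grid formula: p = ln((f3-f2)/(f2-f1)) / ln(r)
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def richardson_extrapolate(f1, f2, p, r=2.0):
    # Estimate of the grid-independent value from the two finest grids
    return f1 + (f1 - f2) / (r**p - 1.0)

# Illustrative values for, e.g., a computed Nusselt number on three grids
f1, f2, f3 = 10.02, 10.08, 10.32
p = observed_order(f1, f2, f3)
print(f"observed order p ~ {p:.2f}")                  # ~2 for a 2nd-order scheme
print(f"extrapolated value ~ {richardson_extrapolate(f1, f2, p):.3f}")
```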

  15. Classification Method to Define Synchronization Capability Limits of Line-Start Permanent-Magnet Motor Using Mesh-Based Magnetic Equivalent Circuit Computation Results

    Bart Wymeersch

    2018-04-01

    Full Text Available Line-start permanent magnet synchronous motors (LS-PMSM) are energy-efficient synchronous motors that can start asynchronously due to a squirrel cage in the rotor. The drawback with this motor type, however, is the chance of failure to synchronize after start-up. To identify the problem, and the stable operation limits, the synchronization at various parameter combinations is investigated. For accurate knowledge of the operation limits that assure synchronization with the utility grid, an accurate classification of parameter combinations is needed. As many simulations have to be executed for this, a rapid evaluation method is indispensable. Several modeling methods exist to simulate the dynamic behavior in the time domain, and a discussion of the different methods is held in this paper. In order to include spatial factors and magnetic nonlinearities on the one hand, and to restrict the computation time on the other, a magnetic equivalent circuit (MEC) modeling method is developed. In order to accelerate numerical convergence, a mesh-based analysis method is applied. The novelty of this paper is the implementation of a support vector machine (SVM) to classify the results of simulations at various parameter combinations into successful or unsuccessful synchronization, in order to define the synchronization capability limits. It is explained how these techniques can benefit the simulation time and the evaluation process. The results of the MEC modeling correspond to those obtained with finite element analysis (FEA), despite the reduced computation time. In addition, simulation results obtained with MEC modeling are experimentally validated.
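
    A minimal sketch (illustrative, not the paper's model or data) of the classification step: labeling simulated parameter combinations as successful or unsuccessful synchronization with a support vector machine, so that the learned decision boundary approximates the synchronization capability limit. The parameters, labels, and the rule generating them are all made up.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical parameters: load inertia (p.u.) and load torque (p.u.)
X = rng.uniform([0.5, 0.1], [3.0, 1.0], size=(200, 2))
# Stand-in for time-domain simulation outcomes: synchronize if a (fictitious)
# inertia-torque product stays below a threshold.
y = (X[:, 0] * X[:, 1] < 1.2).astype(int)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
# Query the learned capability limit at a new operating point
print("synchronizes at (2.0, 0.5)?", bool(clf.predict([[2.0, 0.5]])[0]))
```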

  16. Riding Bare-Back on unstructured meshes for 21. century criticality calculations - 244

    Kelley, K.C.; Martz, R.L.; Crane, D.L.

    2010-01-01

    MCNP has a new capability that permits tracking of neutrons and photons on an unstructured mesh which is embedded as a mesh universe within its legacy geometry capability. The mesh geometry is created through Abaqus/CAE using its solid modeling capabilities. Transport results are calculated for mesh elements through a path length estimator, while element-to-element tracking is performed on the mesh. The results from MCNP can be exported to Abaqus/CAE for visualization or other physics analyses. The simple Godiva criticality benchmark problem was tested with this new mesh capability. Computer run time is proportional to the number of mesh elements used. Both first and second order polyhedrons are used. Models that used second order polyhedrons produced slightly better results without significantly increasing computer run time. Models that used first order hexahedrons had shorter runtimes than models that used first order tetrahedrons. (authors)

  17. The effects of meshed offshore grids on offshore wind investment – a real options analysis

    Schröder, Sascha Thorsten; Kitzing, Lena

    2012-01-01

    Offshore wind farms in future meshed offshore grids could be subject to different regulatory regimes. Feed-in tariffs would absorb market risk from wind farm operators, whereas price premium mechanisms leave operators exposed to market price signals. In this case, it plays a decisive role which price applies to a node in an offshore grid: the offshore node will either be integrated into any of the neighbouring markets, with access to the respective maximum price, or be subject to separate nodal pricing. We investigate the different regulatory regimes for connections to one to four countries based on a stochastic model capturing uncertainties in prices and line failures. The stochastic analysis shows that in case the wind park is granted access to the respective maximum price, there is a significant option value connected to the operational flexibility of accessing several markets.

  19. Shielding Benchmark Computational Analysis

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, improving radiation transport codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  20. Toward An Unstructured Mesh Database

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains, such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS, as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve the coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to the high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
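
    A minimal sketch (my illustration, not the proposed ImG-Complexes model) of the incidence queries a mesh database must answer: which vertices bound a cell, and which cells are incident to a given cell through a shared vertex.

```python
from collections import defaultdict

class Mesh:
    def __init__(self):
        self.cell_vertices = {}                 # cell id -> tuple of vertex ids
        self.vertex_cells = defaultdict(set)    # inverse incidence relation

    def add_cell(self, cid, vertices):
        self.cell_vertices[cid] = tuple(vertices)
        for v in vertices:
            self.vertex_cells[v].add(cid)

    def incident_cells(self, cid):
        """Cells sharing at least one vertex with cell `cid`."""
        out = set()
        for v in self.cell_vertices[cid]:
            out |= self.vertex_cells[v]
        out.discard(cid)
        return out

m = Mesh()
m.add_cell("c0", [0, 1, 2, 3])    # two quads glued along edge (1, 2)
m.add_cell("c1", [1, 4, 5, 2])
print(m.incident_cells("c0"))     # {'c1'}
```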

  1. Nonlinear Dynamics Modeling and Analysis of Torsional Spring-Loaded Antibacklash Gear with Time-Varying Meshing Stiffness and Friction

    Zheng Yang

    2013-01-01

    Full Text Available Torsional spring-loaded antibacklash gears, which can improve transmission precision, are widely used in many precision transmission fields, so it is important to investigate the dynamic characteristics of the antibacklash gear. In this paper, an applied force analysis is first completed in detail. Then, taking the starting point of double-gear meshing as the initial position, the single- or double-tooth meshing states of the two gear pairs, and the transitions between them at any moment, are determined according to the meshing characteristics of the antibacklash gear. Based on this, a nonlinear model of the antibacklash gear with time-varying friction and meshing stiffness is proposed. The influences of friction and of variations in torsional spring stiffness, damping ratio and preload on the dynamic transmission error (DTE) are analyzed by numerical calculation and simulation. The results show that the antibacklash gear can increase the composite meshing stiffness; when the torsional spring stiffness is large enough, the oscillating components of the DTE (ODTE) and the RMS of the DTE (RDTE) tend toward constant values; and the variations of the ODTE and RDTE are not significant unless the preload exceeds a certain value.
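
    A minimal sketch (illustrative, not the paper's full model): a single-degree-of-freedom gear transmission-error equation with a time-varying meshing stiffness k(t), integrated with SciPy. All parameter values are assumptions chosen only to make the example run.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c = 1.0, 0.05                    # equivalent mass and damping (assumed)
k0, dk, omega_m = 100.0, 20.0, 10.0 # mean stiffness, ripple, mesh frequency (assumed)
F = 1.0                             # static transmitted load (assumed)

def rhs(t, y):
    x, v = y
    k = k0 + dk * np.cos(omega_m * t)      # time-varying meshing stiffness
    return [v, (F - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], max_step=0.01)
x = sol.y[0][sol.t > 25.0]                 # discard the startup transient
print("steady-state DTE mean ~", x.mean(), " RMS of fluctuation ~", x.std())
```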

  2. A Tissue Relevance and Meshing Method for Computing Patient-Specific Anatomical Models in Endoscopic Sinus Surgery Simulation

    Audette, M. A.; Hertel, I.; Burgert, O.; Strauss, G.

    This paper presents on-going work on a method for determining which subvolumes of a patient-specific tissue map, extracted from CT data of the head, are relevant to simulating endoscopic sinus surgery of that individual, and for decomposing these relevant tissues into triangles and tetrahedra whose mesh size is well controlled. The overall goal is to limit the complexity of the real-time biomechanical interaction while ensuring the clinical relevance of the simulation. Relevant tissues are determined as the union of the pathology present in the patient, of critical tissues deemed to be near the intended surgical path or pathology, and of bone and soft tissue near the intended path, pathology or critical tissues. The processing of tissues, prior to meshing, is based on the Fast Marching method applied under various guises, in a conditional manner that is related to tissue classes. The meshing is based on an adaptation of a meshing method of ours, which combines the Marching Tetrahedra method and the discrete Simplex mesh surface model to produce a topologically faithful surface mesh with well controlled edge and face size as a first stage, and Almost-regular Tetrahedralization of the same prescribed mesh size as a last stage.
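
    A minimal sketch (a simplification, not the paper's method) of the front-growing idea behind Fast Marching: here approximated by a Dijkstra sweep on a grid graph, flagging as "relevant" all cells whose arrival cost from seed voxels (e.g., the pathology or surgical path) stays below a cutoff. The costs and the cutoff are assumptions.

```python
import heapq

def march(grid_cost, seeds, cutoff):
    """grid_cost: dict (i, j) -> local cost; returns cells reachable within cutoff."""
    dist = {s: 0.0 for s in seeds}
    heap = [(0.0, s) for s in seeds]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist.get((i, j), float("inf")) or d > cutoff:
            continue
        for nb in [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]:
            if nb in grid_cost:
                nd = d + grid_cost[nb]
                if nd < dist.get(nb, float("inf")) and nd <= cutoff:
                    dist[nb] = nd
                    heapq.heappush(heap, (nd, nb))
    return dist

# 4x4 toy tissue map: uniform cost except one "hard" (e.g., bony) cell
cost = {(i, j): 1.0 for i in range(4) for j in range(4)}
cost[(1, 1)] = 5.0
print(sorted(march(cost, seeds=[(0, 0)], cutoff=2.0)))
```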

  3. Convergence study of global meshing on enamel-cement-bracket finite element model

    Samshuri, S. F.; Daud, R.; Rojan, M. A.; Basaruddin, K. S.; Abdullah, A. B.; Ariffin, A. K.

    2017-09-01

    This paper presents a mesh-convergence analysis of a finite element (FE) model used to simulate enamel-cement-bracket fracture. Three different materials involving interface fracture are considered. The complex behavior of interface fracture due to stress concentration is the reason a well-constructed meshing strategy is needed. In FE analysis, mesh size is a critical factor that influences both the accuracy and the computational time of the analysis. The convergence study uses a meshing scheme involving a critical area (CA) and a non-critical area (NCA) to ensure that optimum mesh sizes are acquired for this FE model. For NCA meshing, the areas of interest are the back of the enamel, the bracket ligature groove and the bracket wing. For CA meshing, the areas of interest are the enamel area close to the cement layer, the cement layer itself and the bracket base. The constant NCA mesh sizes tested are 1 and 0.4, and the constant CA mesh sizes tested are 0.4 and 0.1. The manipulated variables are selected randomly and must abide by the rule that the NCA mesh size must be larger than the CA mesh size. This study employs the first principal stresses, owing to the brittle failure nature of the materials used. The best mesh sizes are selected according to a convergence error analysis. The results show that constant-CA meshes are more stable than constant-NCA meshes. A smaller constant CA mesh size of 0.05 was then tested, but gave unpromising results as the errors increased. Thus, a constant CA of 0.1 with NCA mesh sizes of 0.15 to 0.3 gives the most stable meshing, as the errors in this region are lowest. A convergence test was conducted on three selected coarse, medium and fine meshes in the NCA mesh-size range of 0.15 to 0.3, with the CA mesh size held constant at 0.1. The results show that at the coarse mesh size of 0.3, the error is 0.0003%, compared with the 3% acceptable error. Hence, the global meshing converges at a CA mesh size of 0.1 and an NCA mesh size of 0.15 for this model.

  4. How to model wireless mesh networks topology

    Sanni, M L; Hashim, A A; Anwar, F; Ali, S; Ahmed, G S M

    2013-01-01

    The specification of a network connectivity model, or topology, is the beginning of design and analysis in computer network research. A Wireless Mesh Network is an autonomic network that is dynamically self-organised and self-configured, with the mesh nodes establishing automatic connectivity with the adjacent nodes in the relay network of wireless backbone routers. Research in Wireless Mesh Networks ranges from node deployment to internetworking issues with sensor, Internet and cellular networks. Such research requires modelling of the relationships and interactions among nodes, including the technical characteristics of the links, while satisfying the architectural requirements of the physical network. However, the existing topology generators model geographic topologies, which constitute different architectures and thus may not be suitable in Wireless Mesh Network scenarios. The existing methods of topology generation are explored and analysed, and parameters for their characterisation are identified. Furthermore, an algorithm for the design of Wireless Mesh Network topology based on a square grid model is proposed in this paper. The performance of the generated topology is also evaluated. This research is particularly important in the generation of a close-to-real topology for ensuring the relevance of designs to the intended network and the validity of results obtained in Wireless Mesh Network research.
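
    A minimal sketch (my illustration of the general idea, not the paper's algorithm): generating a square-grid Wireless Mesh Network topology and evaluating simple connectivity properties with networkx. The node spacing is an assumed value.

```python
import networkx as nx

def square_grid_wmn(side, spacing=100.0):
    """side x side mesh routers on a square grid; edges join adjacent routers."""
    g = nx.grid_2d_graph(side, side)
    for (i, j) in g.nodes:
        g.nodes[(i, j)]["pos"] = (i * spacing, j * spacing)  # meters, assumed
    return g

g = square_grid_wmn(4)
print("nodes:", g.number_of_nodes(), "links:", g.number_of_edges())
print("average hop distance:", nx.average_shortest_path_length(g))
print("still connected if one router fails?",
      nx.is_connected(g.subgraph(n for n in g if n != (1, 1))))
```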

  5. Analysis of Surgical Outcomes and Determinants of Litigation Among Women With Transvaginal Mesh Complications.

    Zoorob, Dani; Karram, Mickey; Stecher, Anna; Maxwell, Rose; Whiteside, James

    The objective was to identify litigation predictors among women with complications of transvaginal mesh. A chart review and patient survey were conducted among women who had undergone a complication-related explant of a transvaginal prolapse or incontinence sling mesh. Trained study personnel administered a 57-question survey addressing subjective complaints related to bowel, bladder, and sexual dysfunction, and the development of pain or recurrent prolapse. These data were analyzed with respect to the subjects' reported pursuit of litigation related to the mesh complication. Categorical and continuous variables were analyzed using the χ² test and the t-test, as indicated. Ninety-five (68%) of 139 women completed the survey, with 60% of the patients pursuing litigation at the time of the survey. Individual risk factors for pursuing litigation included: development of vaginal pain after mesh placement (P = 0.01); dyspareunia after mesh placement (P = 0.01); persistence of dyspareunia, suprapubic pain, and groin pain after mesh excision (P = 0.04, P = 0.02, and P = 0.001, respectively); and unsuccessful attempts at conservative management of pelvic pain using pelvic floor rehabilitation (P = 0.002). There is an association between a higher likelihood of pursuing litigation and new-onset or persistent pain symptoms attributable to transvaginal mesh.

  6. In vitro analysis of biopolymer coating with glycidoxypropyltrimethoxysilane on hernia meshes.

    Metzler, Steffen; Zankovych, Sergiy; Rauchfuß, Falk; Dittmar, Yves; Jandt, Karin; Jandt, Klaus D; Settmacher, Utz; Scheuerlein, Hubert

    2017-07-01

    Certain coatings may improve the biocompatibility of hernia meshes. Coating with self-assembled monolayers, such as glycidoxypropyltrimethoxysilane (GOPS), can also improve the material characteristics of implants. This approach had not yet been explored for hernia meshes. The aim of this work was to clarify whether and how hernia meshes, with their three-dimensional structure, can be coated with GOPS, and with which technique this coating can best be characterized. Commercially available meshes made from polypropylene (PP), polyester (PE), and expanded polytetrafluorethylene (ePTFE) were coated with GOPS. The coatings were analyzed via X-ray photoelectron spectroscopy (XPS), confocal laser scanning microscopy (CLSM), and a cell proliferation test (mouse fibroblasts). Cell viability and cytotoxicity were tested by the MTT test. With the GOPS surface modification, the adherence of mouse fibroblasts on polyester meshes and the proliferation on ePTFE meshes were increased compared to noncoated meshes. Both XPS and CLSM are limited in their applicability and validity due to the three-dimensional mesh structure, while CLSM was overall more suitable. In the MTT test, no negative effects of the GOPS coating on the cells were detected after 24 h. The present results show that GOPS coating of hernia meshes is feasible and effective, and can be achieved in a fast and cost-efficient way. Further investigations of coating quality and adverse effects are necessary before such a coating may be used in the clinical routine. In conclusion, GOPS is a promising material that warrants further research as a coating for medical implants. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 1083-1090, 2017.

  7. Analysis of achievable capacity in irregularly-placed high performance mesh nodes

    Olwal, TO

    2012-09-01

    Full Text Available The high performance node (HPN) is controlled by an embedded microcontroller technology [11]. To ensure high-speed performance, the first radio interface card is attached to a 5 GHz directional antenna for backhaul mesh routing; the second interface card is connected to a 5 GHz omni-directional antenna for backhaul mesh connectivity and access; and the third radio interface card is attached to a 2.4 GHz omni-directional antenna for the mesh client access network. As shown in Figure 2, the HPN block diagram has a weather-proof Unshielded Twisted Pair...

  8. SU-E-CAMPUS-I-02: Estimation of the Dosimetric Error Caused by the Voxelization of Hybrid Computational Phantoms Using Triangle Mesh-Based Monte Carlo Transport

    Lee, C [Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD (United States); Badal, A [U.S. Food ' Drug Administration (CDRH/OSEL), Silver Spring, MD (United States)

    2014-06-15

    Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may result in dosimetric error compared to the real anatomy composed of perfect surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing the voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as the voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The numbers of photon histories processed per second in voxels were 6.4×10⁴, 3.3×10⁴, and 1.3×10⁴ for the 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10⁴ histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin layer tissues (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.

  9. Surgical treatment of subcostal incisional hernia with polypropylene mesh - analysis of late results

    Marco Antonio de Oliveira Peres

    Full Text Available OBJECTIVE: To evaluate the results of subcostal incisional hernia repair using polypropylene mesh, the technical aspects of musculo-aponeurotic reconstruction, routine fixation of supra-aponeurotic mesh, and follow-up for five years. METHODS: We conducted a retrospective study that assessed 24 patients undergoing subcostal incisional hernia repair with use of polypropylene mesh; 15 patients (62.5%) were female; ages ranged from 33 to 82, and 79.1% had comorbidities. RESULTS: Early complications: three cases (12.5%) of wound infection, three cases (12.5%) of seroma, one case (4.1%) of hematoma, and one case (4.1%) of wound dehiscence. Late complications occurred in one case (4.1%) of hernia recurrence, attributed to technical failure in the fixation of the mesh, and in one case (4.1%) of chronic pain. There were no cases of exposure or rejection of the mesh. CONCLUSION: The subcostal incisional hernia, though not very frequent, requires adequate surgical treatment. Its surgical correction involves rebuilding the musculo-aponeurotic defect and supra-aponeurotic fixation of polypropylene mesh, with less complexity and lower rates of complications and recurrences.

  10. Root-cause analysis of the better performance of the coarse-mesh finite-difference method for CANDU-type reactors

    Shen, W.

    2012-01-01

    Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question if the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)

  12. Application of mesh free lattice Boltzmann method to the analysis of very high temperature reactor lower plenum

    Park, Jong Woon [Dongguk Univ., Gyeongju (Korea, Republic of). Dept. of Energy and Environment

    2011-11-15

    Inside a helium-cooled very high temperature reactor (VHTR) lower plenum, hot gas jets from the upper fuel channels at very high velocities and temperatures and mixes before flowing out. One of the major concerns is local hot spots in the plenum due to inefficient mixing of the helium exiting from differentially heated fuel channels, which involves complex fluid flow physics. For this situation, a mesh-free technique, especially the Lattice Boltzmann Method (LBM), is of particular interest owing to its merit of requiring no mesh generation. As an attempt to assess the efficiency of the method for such a problem, the three-dimensional flow field inside a scaled test model of the VHTR lower plenum is computed with the commercial XFLOW code. Large eddy simulation (LES) and classical Smagorinsky eddy viscosity (EV) turbulence models are employed to investigate the capability of the LBM in capturing large scale vortex shedding. (orig.)
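
    A minimal sketch (a generic D2Q9 lattice Boltzmann BGK update, not the XFLOW implementation): one collision-and-streaming loop on a periodic 2-D domain, started from a small density bump. The relaxation time and domain size are assumed values.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.6                                    # BGK relaxation time (assumed)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

nx, ny = 32, 32
rho0 = np.ones((nx, ny))
rho0[12:20, 12:20] += 0.05                   # small density bump as initial condition
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
mass0 = f.sum()

for step in range(200):
    rho = f.sum(axis=0)                      # macroscopic moments
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i in range(9):                                  # periodic streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mass conserved:", np.isclose(f.sum(), mass0))
```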

  13. Collision Analysis at 60-GHz mmWave Mesh Networks: The Case With Blockage and Shadowing

    Lyu, Kangjia

    2018-05-01

    This thesis can be viewed as two parts. The first part focuses on the performance analysis of millimeter wave (mmWave) communications. We investigate how interference behaves in an outdoor mesh network operating at 60 GHz when blockage and shadowing are present, using the probability of collision as a metric, under both the protocol model and the physical model. In contrast with results reported for mmWave mesh networks at 60 GHz that advocate that interference has only a marginal effect, our results show that for a short-range link of 100 m, the collision probability gets considerably larger (beyond 0.1) at the signal-to-interference-plus-noise ratio (SINR) of interest (for example, a reference value of 15 dB for uncoded quadrature phase shift keying (QPSK)). Compensation or compromise must be made in order to maintain a low probability of collision, either by reducing transmitter node density, to the detriment of network connectivity, or by switching to a compact linear antenna array with more flat-top elements, which places more stringent requirements on device integration techniques. The second part of this thesis focuses on finding the unmanned aerial vehicle (UAV) deployment that is optimal in the sense of maximizing a specific network connectivity measure. We have introduced a connectivity measure based on the commonly used network connectivity metric, which is referred to as global soft connectivity. This measure can be easily extended to account for different propagation models, such as Rayleigh fading and Nakagami fading. It can also be modified to incorporate the link state probability and beam alignment errors in highly directional networks. As can be shown, under the line-of-sight (LOS) and Rayleigh fading assumptions, the optimization of the global soft connectivity can be expressed as a weighted sum of the squares of the link distances between the nodes within the network, namely the ground-to-ground links, the UAV-to-UAV links
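
    A minimal sketch (an interpretation of the stated result, with assumed weights): under the LOS/Rayleigh assumptions described above, the connectivity objective reduces to a weighted sum of squared link distances over the three link classes, which makes candidate UAV deployments easy to compare numerically.

```python
import numpy as np

def soft_connectivity_cost(ground, uavs, w_gg=1.0, w_uu=1.0, w_gu=1.0):
    """Weighted sum of squared pairwise distances over the three link classes."""
    def sq(a, b):
        return ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    gg = np.triu(sq(ground, ground), 1).sum()      # ground-to-ground links
    uu = np.triu(sq(uavs, uavs), 1).sum()          # UAV-to-UAV links
    gu = sq(ground, uavs).sum()                    # ground-to-UAV links
    return w_gg * gg + w_uu * uu + w_gu * gu

ground = np.array([[0.0, 0.0, 0.0], [200.0, 0.0, 0.0], [100.0, 300.0, 0.0]])
uav_a = np.array([[100.0, 100.0, 120.0]])          # candidate deployment A
uav_b = np.array([[300.0, 300.0, 120.0]])          # candidate deployment B
print(soft_connectivity_cost(ground, uav_a) < soft_connectivity_cost(ground, uav_b))
```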

  14. Cost analysis of surgical treatment for pelvic organ prolapse by laparoscopic sacrocolpopexy or transvaginal mesh.

    Carracedo, D; López-Fando, L; Sánchez, M D; Jiménez, M Á; Gómez, J M; Laso, I; Rodríguez, M Á; Burgos, F J

    2017-03-01

    The objective of this study is to compare the direct costs of repairing pelvic organ prolapse by laparoscopic sacrocolpopexy (LS) against vaginal mesh (VM). Our hypothesis is that correction of pelvic organ prolapse by LS has a similar cost per procedure compared to VM. We performed a retrospective comparative analysis of the mean cost per procedure of the first 69 consecutive LS surgeries versus the first 69 consecutive VM surgeries. We calculated the direct cost of each procedure: structural outlays, personnel, operating room occupation, hospital stay, perishable or inventory material, and prosthetic material. Mean costs per procedure were calculated for each group, with a 95% confidence interval. The LS group had a higher cost related to a longer length of surgery and higher operating room occupation and anesthesia costs; the VM group had a higher cost due to a longer hospital stay and more expensive prosthetic material. Globally, LS had a lower mean cost per procedure in comparison to VM (5,985.7 €±1,550.8 € vs. 6,534.3 €±1,015.5 €), although this did not achieve statistical significance. In our setting, surgical correction of pelvic organ prolapse by LS has at least a similar cost per procedure compared to VM. Copyright © 2016 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  15. Integrated Data Collection Analysis (IDCA) Program — KClO3/Icing Sugar (-100 mesh) Mixture

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (IHD-NSWC), Indian Head, MD (United States). Indian Head Division; Sorenson, Daniel N. [Naval Surface Warfare Center (IHD-NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (IHD-NSWC), Indian Head, MD (United States). Indian Head Division; Moran, Jesse S. [Naval Surface Warfare Center (IHD-NSWC), Indian Head, MD (United States). Indian Head Division; Shelley, Timothy J. [Air Force Research Lab. (AFRL/RXQF), Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whipple, Richard E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-05-02

    The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and scanning calorimetry analysis of a mixture of KClO3 sized through a 100-mesh sieve mixed with icing sugar, also sized through a 100-mesh sieve—the KClO3/icing sugar (-100) mixture. This material was selected because of the challenge of performing SSST testing on a mixture of two solid materials. The mixture was found to be: 1) more sensitive to impact than RDX, with sensitivity similar to PETN; 2) as sensitive as or more sensitive to friction than PETN; and 3) less sensitive to spark than RDX. The analysis showed that the mixture has thermal stability similar to that of RDX and is perhaps more energetic upon decomposition, although variable results indicate sampling issues.

  16. An Analysis of Network and Sensor Performance Within IEEE 802.X Wireless MESH Networks in the Tactical Network Topology (TNT)

    Davis, Joseph A., Sr

    2005-01-01

    Specifically, this thesis attempts to establish the foundation for the development of wireless MESH "network health" models by examining the performance of sensors operating within a MESH network...

  17. Large-scale structure of a network of co-occurring MeSH terms: statistical analysis of macroscopic properties.

    Andrej Kastrin

    Full Text Available Concept associations can be represented by a network that consists of a set of nodes representing concepts and a set of edges representing their relationships. Complex networks exhibit some common topological features, including small diameter, a high degree of clustering, a power-law degree distribution, and modularity. We investigated the topological properties of a network constructed from co-occurrences between MeSH descriptors in the MEDLINE database. We conducted the analysis on two networks, one constructed from all MeSH descriptors and another using only major descriptors. Network reduction was performed using the Pearson chi-square test for independence. To characterize the topological properties of the network we adopted specific measures, including diameter, average path length, clustering coefficient, and degree distribution. For the full MeSH network the average path length was 1.95 edges, with a diameter of three edges and a clustering coefficient of 0.26. The Kolmogorov-Smirnov test rejects the power law as a plausible model for its degree distribution. For the major MeSH network the average path length was 2.63 edges, with a diameter of seven edges and a clustering coefficient of 0.15. The Kolmogorov-Smirnov test failed to reject the power law as a plausible model; the power-law exponent was 5.07. In both networks it was evident that nodes with a lower degree exhibit higher clustering than those with a higher degree. After a simulated attack, in which we removed 10% of the nodes with the highest degrees, the giant component of each of the two networks contained about 90% of all nodes. Because of the small average path length and high degree of clustering, the MeSH network is a small-world network. A power-law distribution is not a plausible model for the degree distribution of the full network. The network is highly modular and highly resistant to targeted and random attacks, with minimal disassortativity.
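
    A minimal sketch (generic network statistics on a toy graph, not the paper's MEDLINE data): computing the topological measures named above, diameter, average path length, clustering coefficient, and a degree histogram, for a small co-occurrence graph.

```python
import networkx as nx

# Toy stand-in for a term co-occurrence network
g = nx.Graph([("asthma", "child"), ("asthma", "allergen"), ("allergen", "rhinitis"),
              ("rhinitis", "child"), ("asthma", "rhinitis"), ("eczema", "child")])

print("diameter:", nx.diameter(g))
print("average path length:", round(nx.average_shortest_path_length(g), 3))
print("clustering coefficient:", round(nx.average_clustering(g), 3))
print("degree histogram:", nx.degree_histogram(g))

# A quick small-world indication: compare the clustering coefficient against
# the Erdos-Renyi expectation for the same density, p = 2E / (N(N-1)).
n, e = g.number_of_nodes(), g.number_of_edges()
print("random-graph clustering expectation:", round(2 * e / (n * (n - 1)), 3))
```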

  18. Connectivity editing for quad-dominant meshes

    Peng, Chihan

    2013-08-01

    We propose a connectivity editing framework for quad-dominant meshes. In our framework, the user can edit the mesh connectivity to control the location, type, and number of irregular vertices (with more or fewer than four neighbors) and irregular faces (non-quads). We provide a theoretical analysis of the problem, discuss what edits are possible and impossible, and describe how to implement an editing framework that realizes all possible editing operations. In the results, we show example edits and illustrate the advantages and disadvantages of different strategies for quad-dominant mesh design. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  19. A nonlinear equivalent circuit method for analysis of passive intermodulation of mesh reflectors

    Jiang Jie

    2014-08-01

    Full Text Available Passive intermodulation (PIM) has gradually become a serious source of electromagnetic interference due to the development of high-power and high-sensitivity RF/microwave communication systems, especially large deployable mesh reflector antennas. This paper proposes a field-circuit coupling method to analyze the PIM level of mesh reflectors. With the existence of many metal–metal (MM) contacts in mesh reflectors, contact nonlinearity becomes the main source of PIM generation. To analyze these potential PIM sources, an equivalent circuit model including nonlinear components is constructed to model a single MM contact, so that the transient current through the MM contact point induced by incident electromagnetic waves can be calculated. Taking this electric current as a new electromagnetic wave source, the far-field scattering can be obtained by the use of electromagnetic numerical methods or the communication link method. Finally, a comparison between simulation and experimental results is presented to verify the validity of the proposed method.
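
    A minimal sketch (a generic illustration of PIM generation, not the paper's equivalent-circuit model): passing two carrier tones through a weak cubic contact nonlinearity and reading the third-order intermodulation products (2*f1 - f2 and 2*f2 - f1) off the spectrum. The tone frequencies and the nonlinearity coefficient are assumed.

```python
import numpy as np

fs, n = 1.0e4, 10000                    # sample rate and FFT length: 1 Hz bins
t = np.arange(n) / fs
f1, f2 = 1000.0, 1100.0                 # assumed carrier frequencies (Hz)
x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)

y = x + 1e-3 * x**3                     # weakly nonlinear contact response

spectrum = np.abs(np.fft.rfft(y)) / n
freqs = np.fft.rfftfreq(n, d=1/fs)
for f in (2*f1 - f2, 2*f2 - f1):        # 3rd-order PIM products: 900 and 1200 Hz
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"PIM product near {f:.0f} Hz: {20*np.log10(spectrum[k]):.1f} dB")
```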

  20. Notes on the Mesh Handler and Mesh Data Conversion

    Lee, Sang Yong; Park, Chan Eok

    2009-01-01

    At the outset of the development of the thermal-hydraulic code (THC), efforts were made to utilize recent technology from computational fluid dynamics. Among many such choices, the unstructured mesh approach was adopted to alleviate the restrictions of the grid handling system. As a natural consequence, a mesh handler (MH) has been developed to manipulate the complex mesh data from the mesh generator. The mesh generator Gambit was chosen at the beginning of the development of the code, but a new mesh generator, Pointwise, was later introduced to obtain more flexible mesh generation capability. An open source code, Paraview, was chosen as a post processor, which can handle unstructured as well as structured mesh data. The overall data processing system for THC is shown in Figure 1. There are various file formats for saving mesh data on permanent storage media; a couple of dozen file formats are found even in the above mentioned programs. A competent mesh handler should have the capability to import or export mesh data in as many formats as possible. But, in reality, there are two aspects that make this competence difficult to achieve. The first aspect to consider is the time and effort needed to program the interface code. The second aspect, which is even more difficult, is the fact that many mesh data file formats are proprietary information. In this paper, some experience from the development of the format conversion programs is presented. The file formats involved are the Gambit neutral format, the Ansys-CFX grid file format, the VTK legacy file format, the Nastran format and CGNS.
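
    A minimal sketch (a generic conversion target, not the mesh handler described above): writing a small unstructured mesh in the ASCII VTK legacy format, one of the formats named in the text, so that it can be opened directly in Paraview.

```python
def write_vtk_legacy(path, points, cells):
    """points: list of (x, y, z); cells: list of vertex-index tuples (tetrahedra)."""
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\nconverted mesh\nASCII\n")
        f.write("DATASET UNSTRUCTURED_GRID\n")
        f.write(f"POINTS {len(points)} float\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
        size = sum(len(c) + 1 for c in cells)   # counts include the per-cell size field
        f.write(f"CELLS {len(cells)} {size}\n")
        for c in cells:
            f.write(" ".join(str(v) for v in (len(c), *c)) + "\n")
        f.write(f"CELL_TYPES {len(cells)}\n")
        f.write("".join("10\n" for _ in cells))  # 10 = VTK_TETRA

# One tetrahedron as a smoke test
write_vtk_legacy("mesh.vtk",
                 points=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
                 cells=[(0, 1, 2, 3)])
```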

  1. Statistical Analysis of Compressive and Flexural Test Results on the Sustainable Adobe Reinforced with Steel Wire Mesh

    Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen

    2018-04-01

    It has been established that Adobe provides, in addition to being sustainable and economical, better indoor air quality without spending extensive amounts of energy, as opposed to modern synthetic materials. The material, however, suffers from weak structural behaviour when subjected to adverse loading conditions. A wide range of mechanical properties has been reported in the literature, owing to a lack of research and standardization. The present paper presents the statistical analysis of results obtained through compressive and flexural tests on Adobe samples. Adobe specimens with and without wire mesh reinforcement were tested and the results reported. The statistical analysis of these results yields several noteworthy findings. It has been found that the compressive strength of Adobe increases by about 43% after adding a single layer of wire mesh reinforcement; this increase is statistically significant. The flexural response of Adobe has also shown improvement with the addition of wire mesh reinforcement; however, the statistical significance of this improvement cannot be established.
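
    A minimal sketch (with invented data, not the paper's measurements): testing whether wire-mesh reinforcement significantly increases compressive strength with a two-sample t-test, the kind of significance check described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
plain = rng.normal(1.4, 0.15, size=12)        # assumed strengths, MPa
reinforced = rng.normal(2.0, 0.20, size=12)   # ~43% higher mean, as reported

t, p = stats.ttest_ind(reinforced, plain, equal_var=False)  # Welch's t-test
gain = 100 * (reinforced.mean() - plain.mean()) / plain.mean()
print(f"mean gain ~ {gain:.0f}%, t = {t:.2f}, p = {p:.2g}")
print("statistically significant at 5%?", p < 0.05)
```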

  2. Analysis and modification of a single-mesh gear fatigue rig for use in diagnostic studies

    Zakrajsek, James J.; Townsend, Dennis P.; Oswald, Fred B.; Decker, Harry J.

    1992-01-01

    A single-mesh gear fatigue rig was analyzed and modified for use in gear mesh diagnostic research. The fatigue rig allowed unwanted vibration to mask the test-gear vibration signal, making it difficult to perform diagnostic studies. Several possible sources and factors contributing to the unwanted components of the vibration signal were investigated. Sensor mounting location was found to have a major effect on the content of the vibration signal. In the presence of unwanted vibration sources, modal amplification made the unwanted components strong. A sensor location was found that provided a flatter frequency response, which resulted in a more useful vibration signal. A major rework was then performed on the fatigue rig to reduce the influence of the most probable sources of noise in the vibration signal. The slave gears were machined to reduce weight and increase tooth loading. The housing and the shafts were modified to reduce imbalance, looseness, and misalignment in the rotating components. These changes resulted in an improved vibration signal, with the test-gear mesh frequency now the dominant component in the signal. Also, with the unwanted sources eliminated, the sensor mounting location giving the most robust representation of the test-gear meshing energy was found to be at a point close to the test gears in the load zone of the bearings.

  4. Are patient specific meshes required for EIT head imaging?

    Jehl, Markus; Aristovich, Kirill; Faulkner, Mayo; Holder, David

    2016-06-01

    Head imaging with electrical impedance tomography (EIT) is usually done with time-differential measurements, to reduce time-invariant modelling errors. Previous research suggested that more accurate head models improved image quality, but no thorough analysis has been done on the required accuracy. We propose a novel pipeline for creation of precise head meshes from magnetic resonance imaging and computed tomography scans, which was applied to four different heads. Voltages were simulated on all four heads for perturbations of different magnitude, haemorrhage and ischaemia, in five different positions and for three levels of instrumentation noise. Statistical analysis showed that reconstructions on the correct mesh were on average 25% better than on the other meshes. However, the stroke detection rates were not improved. We conclude that a generic head mesh is sufficient for monitoring patients for secondary strokes following head trauma.

  5. THM-GTRF: New Spider meshes, New Hydra-TH runs

    Bakosi, Jozsef [Los Alamos National Laboratory; Christon, Mark A. [Los Alamos National Laboratory; Francois, Marianne M. [Los Alamos National Laboratory; Lowrie, Robert B. [Los Alamos National Laboratory; Nourgaliev, Robert [Los Alamos National Laboratory

    2012-06-20

    Progress is reported on computational capabilities for the grid-to-rod-fretting (GTRF) problem of pressurized water reactors. Numeca's Hexpress/Hybrid mesh generator is demonstrated as an excellent alternative to generating computational meshes for complex flow geometries, such as in GTRF. Mesh assessment is carried out using standard industrial computational fluid dynamics practices. Hydra-TH, a simulation code developed at LANL for reactor thermal-hydraulics, is demonstrated on hybrid meshes, containing different element types. A series of new Hydra-TH calculations has been carried out collecting turbulence statistics. Preliminary results on the newly generated meshes are discussed; full analysis will be documented in the L3 milestone, THM.CFD.P5.05, Sept. 2012.

  6. A security analysis of the 802.11s wireless mesh network routing protocol and its secure routing protocols.

    Tan, Whye Kit; Lee, Sang-Gon; Lam, Jun Huy; Yoo, Seong-Moo

    2013-09-02

    Wireless mesh networks (WMNs) can act as a scalable backbone by connecting separate sensor networks and even by connecting WMNs to a wired network. The Hybrid Wireless Mesh Protocol (HWMP) is the default routing protocol for the 802.11s WMN. The routing protocol is one of the most important parts of the network, and it requires protection, especially in the wireless environment. The existing security protocols, such as the Broadcast Integrity Protocol (BIP), Counter with cipher block chaining message authentication code protocol (CCMP), Secure Hybrid Wireless Mesh Protocol (SHWMP), Identity Based Cryptography HWMP (IBC-HWMP), Elliptic Curve Digital Signature Algorithm HWMP (ECDSA-HWMP), and Watchdog-HWMP aim to protect the HWMP frames. In this paper, we have analyzed the vulnerabilities of the HWMP and developed security requirements to protect these identified vulnerabilities. We applied the security requirements to analyze the existing secure schemes for HWMP. The results of our analysis indicate that none of these protocols is able to satisfy all of the security requirements. We also present a quantitative complexity comparison among the protocols and an example of a security scheme for HWMP to demonstrate how the result of our research can be utilized. Our research results thus provide a tool for designing secure schemes for the HWMP.

  7. Analysis of computer programming languages

    Risset, Claude Alain

    1967-01-01

    This research thesis aims to identify methods of syntax analysis that can be used for computer programming languages, while setting aside the computing hardware that influences the choice of programming language and the methods of analysis and compilation. In the first part, the author proposes attempts at a formalization of Chomsky grammar languages. In the second part, he studies analytic grammars, and then studies a compiler, or analytic grammar, for the Fortran language.

  8. Development validation and use of computer codes for inelastic analysis

    Jobson, D.A.

    1983-01-01

    A finite element scheme is a system which provides routines to carry out the operations that are common to all finite element programs. The list of items that can be provided as standard by a finite element scheme is surprisingly large, and the list provided by the UNCLE finite element scheme is unusually comprehensive. This presentation covers the following: construction of the program, setting up a finite element mesh, generation of coordinates, and incorporating boundary and load conditions. Program validation was done by creep calculations performed using the CAUSE code. Program use is illustrated by calculating a typical inelastic analysis problem, which includes a computer model of the PFR intermediate heat exchanger.

  9. Coarse mesh finite element method for boiling water reactor physics analysis

    Ellison, P.G.

    1983-01-01

    A coarse mesh method is formulated for the solution of Boiling Water Reactor physics problems using two-group diffusion theory. No fuel assembly cross-section homogenization is required; water gaps, control blades and fuel pins of varying enrichments are treated explicitly. The method combines constrained finite element discretization with infinite lattice supercell trial functions to obtain coarse mesh solutions for which the only approximations are along the boundaries between fuel assemblies. The method is applied to benchmark Boiling Water Reactor problems to obtain both the eigenvalue and detailed flux distributions. The solutions to these problems indicate the method is useful in predicting detailed power distributions and eigenvalues for Boiling Water Reactor physics problems.

  10. Shear force bond analysis between acrylic resin bases and retention framework (open- and mesh-type)

    Royhan, A.; Indrasari, M.; Masulili, C.

    2017-08-01

    Occlusion between teeth and the activity of the muscles around an artificial tooth during mastication create forces on dentures. These forces cause friction between acrylic resin bases and retention frameworks that can lead to the complete loss of the acrylic resin base from the framework. The purpose of this study was to analyze the design of retention frameworks and determine which ones have better resistance to shear forces, in order to prevent the loss of the heat-cured acrylic resin base (HCARB). Six samples each of open- and mesh-type retention frameworks, both types made of Co-Cr material, with HCARB, were shear tested by means of a universal testing machine. The average shear force required to release the HCARB was 28.84 kgf for mesh-type retention frameworks and 26.52 kgf for the open type. There was no significant difference between the shear forces required to remove the HCARB from open- and mesh-type retention frameworks.

  11. Meta-analysis and systematic review of laparoscopic versus open mesh repair for elective incisional hernia.

    Awaiz, A; Rahman, F; Hossain, M B; Yunus, R M; Khan, S; Memon, B; Memon, M A

    2015-06-01

    The utility of laparoscopic repair in the treatment of incisional hernia is still contentious. The aim was to conduct a meta-analysis of RCTs investigating the surgical and postsurgical outcomes of elective incisional hernia repair by the open versus the laparoscopic method. A search of PubMed, Medline, Embase, Science Citation Index, Current Contents, and the Cochrane Central Register of Controlled Trials published between January 1993 and September 2013 was performed using the medical subject headings (MeSH) "hernia," "incisional," "abdominal," "randomized/randomised controlled trial," "abdominal wall hernia," "laparoscopic repair," "open repair," "human" and "English". Prospective RCTs comparing surgical treatment of only incisional hernia (and not primary ventral hernias) using open and laparoscopic methods were selected. Data extraction and critical appraisal were carried out independently by two authors (AA and MAM) using predefined data fields. The outcome variables analyzed included (a) hernia diameter; (b) operative time; (c) length of hospital stay; (d) overall complication rate; (e) bowel complications; (f) reoperation; (g) wound infection; (h) wound hematoma or seroma; (i) time to oral intake; (j) back to work; (k) recurrence rate; and (l) postoperative neuralgia. These outcomes were unanimously decided to be important since they influence the practical and surgical approach towards hernia management within hospitals and institutions. The quality of the RCTs was assessed using Jadad's scoring system. A random effects model was used to calculate the effect size of both binary and continuous data. Heterogeneity amongst the outcome variables of these trials was determined by the Cochran Q statistic and I² index. The meta-analysis was prepared in accordance with PRISMA guidelines. Sufficient data were available for the analysis of twelve clinically relevant outcomes. A statistically significant reduction in bowel complications was noted with open surgery compared to the

  12. A Survey of Solver-Related Geometry and Meshing Issues

    Masters, James; Daniel, Derick; Gudenkauf, Jared; Hine, David; Sideroff, Chris

    2016-01-01

    There is a concern in the computational fluid dynamics community that mesh generation is a significant bottleneck in the CFD workflow. This is one of several papers that will help set the stage for a moderated panel discussion addressing this issue. Although certain general "rules of thumb" and a priori mesh metrics can be used to ensure that some base level of mesh quality is achieved, inadequate consideration is often given to the type of solver or particular flow regime on which the mesh will be utilized. This paper explores how an analyst may want to think differently about a mesh based on considerations such as if a flow is compressible vs. incompressible or hypersonic vs. subsonic or if the solver is node-centered vs. cell-centered. This paper is a high-level investigation intended to provide general insight into how considering the nature of the solver or flow when performing mesh generation has the potential to increase the accuracy and/or robustness of the solution and drive the mesh generation process to a state where it is no longer a hindrance to the analysis process.

  13. Analysis of computer networks

    Gebali, Fayez

    2015-01-01

    This textbook presents the mathematical theory and techniques necessary for analyzing and modeling high-performance global networks, such as the Internet. The three main building blocks of high-performance networks are links, switching equipment connecting the links together, and software employed at the end nodes and intermediate switches. This book provides the basic techniques for modeling and analyzing these last two components. Topics covered include, but are not limited to: Markov chains and queuing analysis, traffic modeling, interconnection networks and switch architectures, and buffering strategies. The book provides techniques for modeling and analysis of network software and switching equipment; discusses design options used to build efficient switching equipment; includes many worked examples of the application of discrete-time Markov chains to communication systems; and covers the mathematical theory and techniques necessary for ana...

  14. Affective Computing and Sentiment Analysis

    Ahmad, Khurshid

    2011-01-01

    This volume maps the watershed areas between two 'holy grails' of computer science: the identification and interpretation of affect, including sentiment and mood. The expression of sentiment and mood involves the use of metaphors, especially in emotive situations. Affect computing is rooted in hermeneutics, philosophy, political science and sociology, and is now a key area of research in computer science. The 24/7 news sites and blogs facilitate the expression and shaping of opinion locally and globally. Sentiment analysis, based on text and data mining, is being used in looking at news

  15. Comparative analysis of histopathologic effects of synthetic meshes based on material, weight, and pore size in mice.

    Orenstein, Sean B; Saberski, Ean R; Kreutzer, Donald L; Novitsky, Yuri W

    2012-08-01

    While synthetic prosthetics have essentially become mandatory for hernia repair, mesh-induced chronic inflammation and scarring can lead to chronic pain and limited mobility. A mesh's propensity to induce such adverse effects is likely related to the prosthetic's material, weight, and/or pore size. We aimed to compare histopathologic responses to various synthetic meshes after short- and long-term implantation in mice. Samples of macroporous polyester (Parietex [PX]), heavyweight microporous polypropylene (Trelex [TX]), midweight microporous polypropylene (ProLite [PL]), lightweight macroporous polypropylene (Ultrapro [UP]), and expanded polytetrafluoroethylene (DualMesh [DM]) were implanted subcutaneously in mice. Four and 12 wk post-implantation, meshes were assessed for inflammation, foreign body reaction (FBR), and fibrosis. All meshes induced varying levels of inflammatory responses. PX induced the greatest inflammatory response and marked FBR. DM induced moderate FBR and a strong fibrotic response with mesh encapsulation at 12 wk. UP and PL had the lowest FBR; however, UP induced a significant chronic inflammatory response. Although inflammation decreased slightly for TX, marked FBR was present throughout the study. Of the three polypropylene meshes, fibrosis was greatest for TX and slightly reduced for PL and UP. For UP and PL, there was limited fibrosis within each mesh pore. Polyester mesh induced the greatest FBR and a lasting chronic inflammatory response. Likewise, marked fibrosis and encapsulation were seen surrounding ePTFE. Heavier polypropylene meshes displayed greater early and persistent fibrosis; the reduced-weight polypropylene meshes were associated with the least amount of fibrosis. Mesh pore size was inversely proportional to bridging fibrosis. Moreover, reduced-weight polypropylene meshes demonstrated the smallest FBR throughout the study. Overall, we demonstrated that macroporous, reduced-weight polypropylene mesh exhibited the highest degree of

  16. Publication trends of Allergy, Pediatric Allergy and Immunology, and Clinical and Translational Allergy journals: a MeSH term-based bibliometric analysis.

    Martinho-Dias, Daniel; Sousa-Pinto, Bernardo; Botelho-Souza, Júlio; Soares, António; Delgado, Luís; Fonseca, João Almeida

    2018-01-01

    We performed a MeSH term-based bibliometric analysis aiming to assess the publication trends of EAACI journals, namely Allergy, Pediatric Allergy and Immunology (PAI) (from 1990 to 2015) and Clinical and Translational Allergy (CTA) (from its inception in 2011 to 2015). We also aimed to discuss the impact of the creation of CTA in the publication topics of Allergy and PAI. We analysed a total of 1973 articles and 23,660 MeSH terms. Most MeSH terms in the three journals fell in the category of "basic immunology and molecular biology" (BIMB). During the studied period, we observed an increase in the proportion of MeSH terms on BIMB, and a decreasing proportion of terms on allergic rhinitis and aeroallergens. The observed changes in Allergy and PAI publication topics hint at a possible impact from CTA creation.

  17. Vaginal native tissue repair versus transvaginal mesh repair for apical prolapse: how utilizing different methods of analysis affects the estimated trade-off between reoperation for mesh exposure/erosion and reoperation for recurrent prolapse.

    Dieter, Alexis A; Willis-Gray, Marcella G; Weidner, Alison C; Visco, Anthony G; Myers, Evan R

    2015-05-01

    Informed decision-making about optimal surgical repair of apical prolapse with vaginal native tissue (NT) versus transvaginal mesh (TVM) requires understanding the balance between the potential "harm" of mesh-related complications and the potential "benefit" of reducing prolapse recurrence. Synthesis of data from observational studies is required and the current literature shows that the average follow-up for NT repair is significantly longer than for TVM repair. We examined this harm/benefit balance. We hypothesized that using different methods of analysis to incorporate follow-up time would affect the balance of outcomes. We used a Markov state transition model to estimate the cumulative 24-month probabilities of reoperation for mesh exposure/erosion or for recurrent prolapse after either NT or TVM repair. We used four different analytic approaches to estimate probability distributions ranging from simple pooled proportions to a random effects meta-analysis using study-specific events per patient-time. As variability in follow-up time was accounted for better, the balance of outcomes became more uncertain. For TVM repair, the incremental ratio of number of operations for mesh exposure/erosion per single reoperation for recurrent prolapse prevented increased progressively from 1.4 to over 100 with more rigorous analysis methods. The most rigorous analysis showed a 70% probability that TVM would result in more operations for recurrent prolapse repair than NT. Based on the best available evidence, there is considerable uncertainty about the harm/benefit trade-off between NT and TVM for apical prolapse repair. Future studies should incorporate time-to-event analyses, with greater standardization of reporting, in order to better inform decision-making.
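
    For readers unfamiliar with the Markov state transition modeling mentioned above, the sketch below shows the general mechanics of a discrete-time cohort model with absorbing reoperation states. The states and monthly transition probabilities are hypothetical placeholders, not the authors' estimates.

```python
import numpy as np

# A minimal discrete-time Markov cohort sketch (monthly cycles over 24 months).
# States: 0 = uneventful post-repair, 1 = reoperation for mesh exposure/erosion,
# 2 = reoperation for recurrent prolapse. The per-month transition
# probabilities below are hypothetical, not taken from the study.
P = np.array([
    [0.990, 0.006, 0.004],  # from uneventful
    [0.0,   1.0,   0.0  ],  # reoperation states treated as absorbing
    [0.0,   0.0,   1.0  ],
])

state = np.array([1.0, 0.0, 0.0])  # whole cohort starts uneventful
for _ in range(24):                # 24 monthly cycles
    state = state @ P

print(f"24-month cumulative P(reop for exposure/erosion) = {state[1]:.3f}")
print(f"24-month cumulative P(reop for recurrence)       = {state[2]:.3f}")
```

    The paper's point is that the probabilities feeding such a model change substantially depending on how study follow-up time is incorporated, which is what drives the uncertainty in the harm/benefit ratio.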

  18. A mesh density study for application to large deformation rolling process evaluation

    Martin, J.A.

    1997-12-01

    When addressing large deformation through an elastic-plastic analysis, the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies considering accuracy of results and computer resource needs as applied to large deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. The effects of both mesh density and adaptive meshing on final results are evaluated for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length.

  19. Design and numerical analysis of an SMA mesh-based self-folding sheet

    Peraza-Hernandez, Edwin A; Hartl, Darren J; Malak Jr, Richard J

    2013-01-01

    Origami engineering, which is the practice of creating useful three-dimensional structures through folding and fold-like operations applied to initially two-dimensional entities, has the potential to impact several areas of design and manufacturing. In some instances, however, it may be impractical to apply external manipulations to produce the desired folds (e.g., as in remote applications such as space systems). In such cases, self-folding capabilities are valuable. A self-folding material or material system is one that can perform folding operations without manipulations from external forces. This work considers a concept for a self-folding material system. The system extends the ‘programmable matter’ concept and consists of an active, self-morphing sheet composed of two meshes of thermally actuated shape memory alloy (SMA) wire separated by a compliant passive layer. The geometric and power input parameters of the self-folding sheet are optimized to achieve the tightest local fold possible subject to stress and temperature constraints. The sheet folding performance considering folds at different angles relative to the orientation of the wire mesh is also analyzed. The optimization results show that a relatively low elastomer thickness is preferable to generate the tightest fold possible. The results also show that the self-folding sheet does not require large power inputs to achieve an optimal folding performance. It was shown that the self-folding sheet is capable of creating similar quality folds at different orientations. (paper)

  20. Comparison of Absorbable Mesh Plate versus Titanium-Dynamic Mesh Plate in Reconstruction of Blow-Out Fracture: An Analysis of Long-Term Outcomes

    Woon Il Baek

    2014-07-01

    Background: A blow-out fracture is one of the most common facial injuries in midface trauma. Orbital wall reconstruction is extremely important because the injury can cause various functional and aesthetic sequelae. Although many materials are available, there are no uniformly accepted guidelines regarding material selection for orbital wall reconstruction. Methods: From January 2007 to August 2012, a total of 78 patients with blow-out fractures were analyzed; 36 patients received absorbable mesh plates, and 42 patients received titanium-dynamic mesh plates. Both groups were retrospectively evaluated for therapeutic efficacy and safety according to the incidence of three different complications: enophthalmos, extraocular movement impairment, and diplopia. Results: For all groups (inferior wall fracture group, medial wall fracture group, and combined inferomedial wall fracture group), there were improvements in the incidence of each complication regardless of implant type. Moreover, a significant improvement of enophthalmos occurred for both types of implants in group 1 (inferior wall fracture group). However, we found no statistically significant differences in efficacy or complication rate in any group between the two implant types. Conclusions: Both types of implants showed good results without significant differences in long-term follow-up, even though we expected a higher recurrent enophthalmos rate in patients with absorbable plates. In conclusion, both types seem to be equally effective and safe for orbital wall reconstruction. In particular, both implant types significantly improve the incidence of enophthalmos in cases of inferior orbital wall fractures.

  1. 6th International Meshing Roundtable '97

    White, D.

    1997-09-01

    The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. Proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special-purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.

  2. Computer codes for safety analysis

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper discusses five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code are presented and a sample given, followed by a discussion of the present status and future development plans.

  3. Transvaginal six-arm mesh OPUR in women with apical pelvic organ prolapse - analysis of short-term results, pelvic floor ultrasound evaluation.

    Kluz, Tomasz; Wlaźlak, Edyta; Surkont, Grzegorz

    2017-01-01

    Analysis of the feasibility, efficacy, and short-term results of six-arm transvaginal mesh OPUR implantation in women with apical prolapse. The same surgeon operated on all 39 women using the OPUR mesh. Preoperatively, patients had a standardized interview and clinical examination. Intraoperative and postoperative complications were analyzed. Postoperative evaluation included a standardized interview, clinical examination, and standardized pelvic floor ultrasound performed with a 2D transvaginal probe and a 4D abdominal probe. There were no complications that needed operative intervention. Hematomas in 3 patients resolved spontaneously. Transient voiding difficulties lasting less than 7 days were observed in 5 patients. No erosion was observed. Comparison of pre- and postoperative results in 34 women revealed that improvement on the POP-Q scale was statistically significant in all 3 compartments (p < 0.05); no mesh required re-operation. During PFS-TV, the urethra was normomobile or hypermobile in 94.1% of patients. In all of the patients, the urethral end of the mesh was positioned far enough from the middle part of the urethra (on ultrasound) to implant a suburethral sling without risk of collision. Sexually active women did not report any important discomfort or pain during intercourse. It seems that the six-arm OPUR mesh, if implanted under strict surgical rules, gives a low risk of complications and a high chance of successfully reducing POP symptoms in the short term after the operation. It seems that the OPUR mesh should not have a negative influence on the results of a subsequent anti-incontinence suburethral sling.

  4. Mesh sensitivity in the thermal analysis of a gas turbine blade with internal cooling

    Alfaro Ayala, Jorge Arturo; Gallegos Munoz, Armando [Facultad de Ingenieria Mecanica, Electrica y Electronica (FIMEE), Universidad de Guanajuato (Mexico); Campos Amezcua, Alfonso [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)

    2007-11-15

    This article presents the methodology for generating the mesh of the computational model of a blade by means of commands in the CFD software Fluent, mainly in the fluid zone, since a mesh sensitivity analysis becomes too expensive in terms of human and computational resources. When the geometry is too irregular, modifications to the mesh are required to avoid problems such as divergence, instability of the solution, and dependency of the results for temperature, pressure, velocity, etc. Such is the case for a first-stage gas turbine blade with internal cooling. Results are included for the generated mesh as well as for the thermal analysis of the blade. Additionally, the temperature, pressure, and velocity of the combustion gases and of the cooling air are shown.

  5. Detailed Aerodynamic Analysis of a Shrouded Tail Rotor Using an Unstructured Mesh Flow Solver

    Lee, Hee Dong; Kwon, Oh Joon

    The detailed aerodynamics of a shrouded tail rotor in hover has been numerically studied using a parallel inviscid flow solver on unstructured meshes. The numerical method is based on a cell-centered finite-volume discretization and an implicit Gauss-Seidel time integration. The calculation was made for a single blade by imposing a periodic boundary condition between adjacent rotor blades. The grid periodicity was also imposed at the periodic boundary planes to avoid numerical inaccuracy resulting from solution interpolation. The results were compared with available experimental data and those from a disk vortex theory for validation. It was found that realistic three-dimensional modeling is important for the prediction of detailed aerodynamics of shrouded rotors including the tip clearance gap flow.

  6. Prevention of parastomal herniation with biologic/composite prosthetic mesh: a systematic review and meta-analysis of randomized controlled trials.

    Wijeyekoon, Sanjaya Prabhath; Gurusamy, Kurinchi; El-Gendy, Khalid; Chan, Christopher L

    2010-11-01

    Parastomal herniation is a frequent complication of stoma formation and can be difficult to repair satisfactorily, making it a recognized cause of significant morbidity. A systematic review with meta-analysis of randomized clinical trials was performed to determine the benefits and risks of mesh reinforcement versus conventional stoma formation in preventing parastomal herniation. Trials were identified from The Cochrane Library trials register, Medline, Embase, Science Citation Index Expanded, and reference lists. The primary outcome was the incidence of parastomal herniation. The secondary outcomes were the incidence of parastomal herniation requiring surgical repair, postoperative morbidity, and mortality. Meta-analysis was performed using a random-effects model. The risk ratio (RR) was estimated with 95% confidence intervals (CI) based on an intention-to-treat analysis. Three trials with 129 patients were included. Composite or biologic mesh was used in either the preperitoneal or sublay position. Mesh reinforcement was associated with a reduction in parastomal herniation versus conventional stoma formation (RR 0.23, 95% CI 0.06 to 0.81; p = 0.02), and a reduction in the percentage of parastomal hernias requiring surgical treatment (RR 0.13, 95% CI 0.02 to 1.02; p = 0.05). There was no difference between groups in stoma-related morbidity (2 of 58, 3.4% in the mesh group versus 2 of 57, 3.5% in the conventional group; p = 0.97), nor was there any mortality related to the placement of mesh. Composite or biologic mesh reinforcement of stomas in the preperitoneal/sublay position is associated with a reduced incidence of parastomal herniation with no excess morbidity. Mesh reinforcement also demonstrates a trend toward a decreased incidence of parastomal herniation requiring surgical repair. Copyright © 2010 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
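
    The pooled risk ratios above come from a random-effects model. A compact sketch of the standard DerSimonian-Laird pooling of log risk ratios follows; the trial counts are hypothetical, chosen only to make the code runnable, and are not the three included trials' data.

```python
import numpy as np

def dersimonian_laird_rr(events_t, n_t, events_c, n_c):
    """Pool risk ratios across trials with a DerSimonian-Laird
    random-effects model; returns the pooled RR and a 95% CI."""
    e_t, e_c = np.asarray(events_t, float), np.asarray(events_c, float)
    n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
    log_rr = np.log((e_t / n_t) / (e_c / n_c))
    var = 1/e_t - 1/n_t + 1/e_c - 1/n_c           # variance of each log RR
    w = 1.0 / var                                  # fixed-effect weights
    pooled_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - pooled_fe) ** 2)      # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)   # between-study variance
    w_re = 1.0 / (var + tau2)                      # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(pooled), np.exp(pooled - 1.96*se), np.exp(pooled + 1.96*se)

# Hypothetical counts for 3 trials: mesh events, mesh n, control events, control n
rr, lo, hi = dersimonian_laird_rr([2, 1, 3], [20, 18, 20], [8, 6, 9], [19, 18, 20])
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```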

  7. Computational methods and modeling. 3. Adaptive Mesh Refinement for the Nodal Integral Method and Application to the Convection-Diffusion Equation

    Torej, Allen J.; Rizwan-Uddin

    2001-01-01

    error estimate leads to significant computational overhead. However, this overhead can be reduced in the NIM-AMR by taking advantage of the fact that the truncation error estimate can be based on the node-averaged variable. Consequently, a consistent comparison for the Richardson truncation error estimate is made by using the average of the four node-averaged variables from the child grid and the corresponding node-averaged variable from the parent grid. Thus, each of the four nodes from the child grid has the same local truncation error. Once the selection routine has identified the nodes to be refined, a clustering algorithm is then used to efficiently group the selected nodes and generate the appropriate sub-meshes. For the NIM-AMR, the point clustering and grid generation algorithm developed by Berger and Rigoutsos is used. Another advantage of the NIM-AMR is that the node interior variation that results from the NIM can be exploited to derive efficient and accurate interpolation operators in the communication procedures of the AMR. By using the nodal reconstruction on the coarse node, the transverse-integrated variables associated with the four smaller nodes can be evaluated. The last component of the NIM-AMR, the governing algorithm, is based on the one used in the Berger-Oliger method. Finally, implementation of the NIM-AMR into a computer code, due to the recursive nature of the governing algorithm combined with the level-grid hierarchy, requires a language that can support these procedures. As an application of the approach outlined here, the NIM-AMR is applied to the steady-state convection-diffusion equation. The NIM-AMR has been developed and implemented in FORTRAN 90, which allows for dynamic memory allocation, advanced data structures, and recursive subroutine capability. Several convection-diffusion problems have been solved. In this paper, the results of the recirculating flow problem are presented. In the NIM-AMR, oscillation-free results are obtained.
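
    The parent/child comparison described above can be illustrated with a small sketch: average each 2x2 block of child node-averaged values back onto the parent grid and form a Richardson-style error estimate. The grid sizes, order, and tolerance below are hypothetical, and the sketch uses structured arrays rather than the NIM's actual data structures.

```python
import numpy as np

def refinement_flags(parent, child, order=2, tol=1e-4):
    """Flag parent nodes for refinement using a Richardson-style estimate:
    compare each parent node-averaged value with the average of its four
    child node-averaged values (2x2 refinement of a 2D grid)."""
    ny, nx = parent.shape
    # Average each 2x2 block of the child grid back onto the parent grid
    coarse_from_child = child.reshape(ny, 2, nx, 2).mean(axis=(1, 3))
    # Richardson estimate: difference scaled by (r^p - 1) with ratio r = 2
    err = np.abs(coarse_from_child - parent) / (2**order - 1)
    return err > tol  # True where the node should be refined

parent = np.random.rand(4, 4)   # hypothetical node-averaged variables
child = np.repeat(np.repeat(parent, 2, 0), 2, 1) + 1e-3 * np.random.rand(8, 8)
print(refinement_flags(parent, child))
```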

  8. Cache-Oblivious Mesh Layouts

    Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D

    2005-01-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications
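
    The paper reduces layout computation to a multilevel combinatorial optimization; as a rough intuition for what a cache-coherent ordering buys, the sketch below simply reorders mesh vertices by breadth-first traversal so that topological neighbors land near each other in memory. This is not the authors' algorithm, only an illustration of the goal.

```python
from collections import deque

def locality_layout(adjacency):
    """A much-simplified stand-in for a cache-coherent layout: order mesh
    vertices by breadth-first traversal so that neighbors end up close
    together in memory."""
    n = len(adjacency)
    order, seen = [], [False] * n
    for start in range(n):          # handle disconnected components too
        if seen[start]:
            continue
        queue = deque([start])
        seen[start] = True
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adjacency[v]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
    return order  # order[i] = old vertex index placed at position i

# Tiny mesh given as a vertex adjacency list
print(locality_layout([[1, 2], [0, 2, 3], [0, 1, 3], [1, 2]]))
```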

  9. Comparative study on triangular and quadrilateral meshes by a finite-volume method with a central difference scheme

    Yu, Guojun

    2012-10-01

    In this article, comparative studies on computational accuracies and convergence rates of triangular and quadrilateral meshes are carried out in the framework of the finite-volume method. By theoretical analysis, we conclude that the number of triangular cells needs to be 4/3 times that of quadrilateral cells to obtain similar accuracy. The conclusion is verified by a number of numerical examples. In addition, the convergence rates of the triangular meshes are found to be slower than those of the quadrilateral meshes when the same accuracy is obtained with these two mesh types. © 2012 Taylor and Francis Group, LLC.

  10. Comparative study on triangular and quadrilateral meshes by a finite-volume method with a central difference scheme

    Yu, Guojun; Yu, Bo; Sun, Shuyu; Tao, Wenquan

    2012-01-01

    In this article, comparative studies on computational accuracies and convergence rates of triangular and quadrilateral meshes are carried out in the framework of the finite-volume method. By theoretical analysis, we conclude that the number of triangular cells needs to be 4/3 times that of quadrilateral cells to obtain similar accuracy. The conclusion is verified by a number of numerical examples. In addition, the convergence rates of the triangular meshes are found to be slower than those of the quadrilateral meshes when the same accuracy is obtained with these two mesh types. © 2012 Taylor and Francis Group, LLC.
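
    The 4/3 rule quoted above translates directly into a cell-budget estimate; a trivial sketch (the 30,000-cell figure is an arbitrary example, not from the paper):

```python
def equivalent_triangle_count(n_quad):
    """Per the study's conclusion, a triangular mesh needs roughly 4/3 as
    many cells as a quadrilateral mesh for similar finite-volume accuracy."""
    return int(round(4 * n_quad / 3))

print(equivalent_triangle_count(30_000))  # -> 40000 triangles for 30k quads
```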

  11. Systems analysis and the computer

    Douglas, A S

    1983-08-01

    The words "systems analysis" are used in at least two senses. Whilst the general nature of the topic is well understood in the OR community, the nature of the term as used by computer scientists is less familiar. In this paper, the nature of systems analysis as it relates to computer-based systems is examined from the point of view that the computer system is an automaton embedded in a human system, and some facets of this are explored. It is concluded that OR analysts and computer analysts have things to learn from each other and that this ought to be reflected in their education. The important role played by change in the design of systems is also highlighted, and it is concluded that, whilst the application of techniques developed in the artificial intelligence field has considerable relevance to constructing automata able to adapt to change in the environment, study of the human factors affecting the overall systems within which the automata are embedded has an even more important role.

  12. Unstructured mesh adaptivity for urban flooding modelling

    Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.

    2018-05-01

    Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes are placed around the buildings and steep regions when the flooding water reaches them. In this work, a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.
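
    The core idea, refining only where the flow demands it, can be illustrated with a toy error indicator: the sketch below flags cells where the free-surface gradient is steep, e.g. at a wetting/drying front. It uses a structured grid and a hypothetical threshold purely for illustration; the paper's model adapts an unstructured mesh in both space and time.

```python
import numpy as np

def adapt_flags(depth, grad_thresh=0.05):
    """Toy refinement indicator for a flood model: flag cells where the
    water-depth gradient is steep, e.g. at fronts or around buildings."""
    gy, gx = np.gradient(depth)          # finite-difference gradients
    indicator = np.hypot(gx, gy)         # gradient magnitude per cell
    return indicator > grad_thresh       # True where refinement is wanted

# Hypothetical water-depth field with a sharp front along x
x = np.linspace(0, 1, 50)
depth = np.tile(1.0 / (1.0 + np.exp(-40 * (x - 0.5))), (50, 1))
print(adapt_flags(depth).sum(), "cells flagged for refinement")
```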

  13. The transmission of stress to grafted bone inside a titanium mesh cage used in anterior column reconstruction after total spondylectomy: a finite-element analysis.

    Akamaru, Tomoyuki; Kawahara, Norio; Sakamoto, Jiro; Yoshida, Akira; Murakami, Hideki; Hato, Taizo; Awamori, Serina; Oda, Juhachi; Tomita, Katsuro

    2005-12-15

    A finite-element study of posterior alone or anterior/posterior combined instrumentation following total spondylectomy and replacement with a titanium mesh cage used as an anterior strut. To compare the effect of posterior instrumentation versus anterior/posterior instrumentation on transmission of the stress to grafted bone inside a titanium mesh cage following total spondylectomy. The most recent reconstruction techniques following total spondylectomy for malignant spinal tumor include a titanium mesh cage filled with autologous bone as an anterior strut. The need for additional anterior instrumentation with posterior pedicle screws and rods is controversial. Transmission of the mechanical stress to grafted bone inside a titanium mesh cage is important for fusion and remodeling. To our knowledge, there are no published reports comparing the load-sharing properties of the different reconstruction methods following total spondylectomy. A 3-dimensional finite-element model of the reconstructed spine (T10-L4) following total spondylectomy at T12 was constructed. A Harms titanium mesh cage (DePuy Spine, Raynham, MA) was positioned as an anterior replacement, and 3 types of the reconstruction methods were compared: (1) multilevel posterior instrumentation (MPI) (i.e., posterior pedicle screws and rods at T10-L2 without anterior instrumentation); (2) MPI with anterior instrumentation (MPAI) (i.e., MPAI [Kaneda SR; DePuy Spine] at T11-L1); and (3) short posterior and anterior instrumentation (SPAI) (i.e., posterior pedicle screws and rods with anterior instrumentation at T11-L1). The mechanical energy stress distribution exerted inside the titanium mesh cage was evaluated and compared by finite-element analysis for the 3 different reconstruction methods. Simulated forces were applied to give axial compression, flexion, extension, and lateral bending. In flexion mode, the energy stress distribution in MPI was higher than 3.0 x 10 MPa in 73.0% of the total volume inside

  14. Computer aided safety analysis 1989

    1990-04-01

    The meeting was conducted in a workshop style, to encourage involvement of all participants during the discussions. Forty-five (45) experts from 19 countries, plus 22 experts from the GDR participated in the meeting. A list of participants can be found at the end of this volume. Forty-two (42) papers were presented and discussed during the meeting. Additionally an open discussion was held on the possible directions of the IAEA programme on Computer Aided Safety Analysis. A summary of the conclusions of these discussions is presented in the publication. The remainder of this proceedings volume comprises the transcript of selected technical papers (22) presented in the meeting. It is the intention of the IAEA that the publication of these proceedings will extend the benefits of the discussions held during the meeting to a larger audience throughout the world. The Technical Committee/Workshop on Computer Aided Safety Analysis was organized by the IAEA in cooperation with the National Board for Safety and Radiological Protection (SAAS) of the German Democratic Republic in Berlin. The purpose of the meeting was to provide an opportunity for discussions on experiences in the use of computer codes used for safety analysis of nuclear power plants. In particular it was intended to provide a forum for exchange of information among experts using computer codes for safety analysis under the Technical Cooperation Programme on Safety of WWER Type Reactors (RER/9/004) and other experts throughout the world. A separate abstract was prepared for each of the 22 selected papers.

  15. A study on the analysis of urban heat environment pattern and construction of mesh data

    Lee, K.G.; Hong, W.H.

    2008-01-01

    This paper reported on a study in which wind flow was analyzed to determine its influence on air quality in urban areas. Wind flow changes depending on fixed factors, such as the shapes and positions of buildings, as well as on flexible factors like weather. The computational fluid dynamics (CFD) wind flow analysis method was used to quantitatively measure wind effects and air flow in terms of impacts on ventilation, air quality, and walking in the street. Two case studies were presented in which land development planning was used to achieve a better urban environment. The studies showed that air quality is usually degraded when wind speed is low, while pedestrians feel discomfort when wind speed is high. Pedestrian comfort was found to be closely related to air circulation. Reduced ventilation and air contamination were shown to be affected by the density and height of buildings. However, in these particular case studies, the height of the buildings was shown to have the greatest influence on the urban environment.

  16. Automatic generation of 3D fine mesh geometries for the analysis of the venus-3 shielding benchmark experiment with the Tort code

    Pescarini, M.; Orsi, R.; Martinelli, T.

    2003-01-01

    In many practical radiation transport applications today, the cost of solving refined, large-size, and complex multi-dimensional problems is not so much the computing itself but the cumbersome effort required by an expert to prepare a detailed geometrical model and to verify and validate that it is correct and represents, to a specified tolerance, the real design or facility. This situation is particularly relevant and frequent in reactor core criticality and shielding calculations with three-dimensional (3D) general-purpose radiation transport codes, which require a very large number of meshes and high-performance computers. The need has clearly emerged for tools that ease the task of the physicist or engineer by reducing the time required, by facilitating the verification of correctness through effective graphical display, and, finally, by helping the interpretation of the results obtained. The paper shows the results of efforts in this field through detailed simulations of a complex shielding benchmark experiment. In the context of the activities proposed by the OECD/NEA Nuclear Science Committee (NSC) Task Force on Computing Radiation Dose and Modelling of Radiation-Induced Degradation of Reactor Components (TFRDD), the ENEA-Bologna Nuclear Data Centre contributed an analysis of the VENUS-3 low-flux neutron shielding benchmark experiment (SCK/CEN-Mol, Belgium). One of the targets of the work was to test the BOT3P system, originally developed at the Nuclear Data Centre in ENEA-Bologna and now released to the OECD/NEA Data Bank for free distribution. BOT3P, an ancillary system for the DORT (2D) and TORT (3D) SN codes, permits flexible automatic generation of spatial mesh grids in Cartesian or cylindrical geometry, through combinatorial geometry algorithms, following a simplified user-friendly approach. This system also demonstrated its validity in core criticality analyses, as for example the Lewis MOX fuel benchmark, permitting to easily

  17. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-01-01

    A method of accounting for fluid-to-fluid shear between calculational cells over the wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (computational fluid dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area of the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile, which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal-hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with this flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
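
    For context, the equivalent hydraulic diameter in the record above generalizes the usual geometric definition D_h = 4A/P_w. A minimal sketch of that baseline definition follows (the subchannel dimensions are hypothetical):

```python
def hydraulic_diameter(flow_area, wetted_perimeter):
    """Conventional hydraulic diameter D_h = 4A / P_w. The paper replaces
    this with an *equivalent* diameter derived from a velocity profile;
    this sketch only shows the geometric definition it generalizes."""
    return 4.0 * flow_area / wetted_perimeter

# Hypothetical square subchannel, 10 mm x 10 mm, fully wetted
d_h = hydraulic_diameter(0.01 * 0.01, 4 * 0.01)
print(f"D_h = {d_h * 1000:.1f} mm")  # -> 10.0 mm
```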

  18. Computational system for geostatistical analysis

    Vendrusculo Laurimar Gonçalves

    2004-01-01

    Geostatistics identifies the spatial structure of variables representing several phenomena, and its use is becoming more intense in agricultural activities. This paper describes a computer program, based on Windows interfaces (Borland Delphi), which performs spatial analyses of datasets through geostatistical tools: classical statistical calculations, average, cross- and directional semivariograms, simple kriging estimates, and jackknifing calculations. A published dataset of soil carbon and nitrogen was used to validate the system. The system was useful for the geostatistical analysis process, avoiding the manipulation of computational routines in an MS-DOS environment. The Windows development approach allowed the user to model the semivariogram graphically with a greater degree of interaction, functionality rarely available in similar programs. Given its characteristics of quick prototyping and simplicity when incorporating correlated routines, the Delphi environment presents the main advantage of permitting the evolution of this system.
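
    Of the tools listed, the semivariogram is the central one; a minimal sketch of the classical empirical semivariogram estimator follows. The sample coordinates and values are hypothetical, not the published soil carbon/nitrogen dataset.

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Classical empirical semivariogram: for each lag h, average
    0.5 * (z_i - z_j)^2 over all point pairs whose separation
    falls within h +/- tol."""
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)            # each pair once
    dist = d[iu]
    sq = 0.5 * (values[iu[0]] - values[iu[1]]) ** 2   # semivariance per pair
    out = []
    for h in lags:
        mask = np.abs(dist - h) <= tol
        out.append(sq[mask].mean() if mask.any() else np.nan)
    return out

# Hypothetical soil samples on a transect
coords = [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]]
values = [2.1, 2.3, 2.0, 2.8, 3.0]
print(empirical_semivariogram(coords, values, lags=[1, 2, 3], tol=0.5))
```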

  19. Computational analysis of cerebral cortex

    Takao, Hidemasa; Abe, Osamu; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan)

    2010-08-15

    Magnetic resonance imaging (MRI) has been used in many in vivo anatomical studies of the brain. Computational neuroanatomy is an expanding field of research, and a number of automated, unbiased, objective techniques have been developed to characterize structural changes in the brain using structural MRI without the need for time-consuming manual measurements. Voxel-based morphometry is one of the most widely used automated techniques to examine patterns of brain changes. Cortical thickness analysis is also becoming increasingly used as a tool for the study of cortical anatomy. Both techniques can be relatively easily used with freely available software packages. MRI data quality is important in order for the processed data to be accurate. In this review, we describe MRI data acquisition and preprocessing for morphometric analysis of the brain and present a brief summary of voxel-based morphometry and cortical thickness analysis. (orig.)

  20. Computational analysis of cerebral cortex

    Takao, Hidemasa; Abe, Osamu; Ohtomo, Kuni

    2010-01-01

    Magnetic resonance imaging (MRI) has been used in many in vivo anatomical studies of the brain. Computational neuroanatomy is an expanding field of research, and a number of automated, unbiased, objective techniques have been developed to characterize structural changes in the brain using structural MRI without the need for time-consuming manual measurements. Voxel-based morphometry is one of the most widely used automated techniques to examine patterns of brain changes. Cortical thickness analysis is also becoming increasingly used as a tool for the study of cortical anatomy. Both techniques can be relatively easily used with freely available software packages. MRI data quality is important in order for the processed data to be accurate. In this review, we describe MRI data acquisition and preprocessing for morphometric analysis of the brain and present a brief summary of voxel-based morphometry and cortical thickness analysis. (orig.)

  1. Parallel adaptive simulations on unstructured meshes

    Shephard, M S; Jansen, K E; Sahni, O; Diachin, L A

    2007-01-01

    This paper discusses methods being developed by the ITAPS center to support the execution of parallel adaptive simulations on unstructured meshes. The paper first outlines the ITAPS approach to the development of interoperable mesh, geometry, and field services to support the needs of SciDAC applications in these areas. The paper then demonstrates the ability of unstructured adaptive meshing methods built on such interoperable services to effectively solve important physics problems. Attention is then focused on ITAPS' developing ability to solve adaptive unstructured mesh problems on massively parallel computers.

  2. Computer aided analysis of disturbances

    Baldeweg, F.; Lindner, A.

    1986-01-01

    Computer-aided analysis of disturbances and the prevention of failures (diagnosis and therapy control) in technological plants are among the most important tasks of process control. Research in this field is very intensive due to increasing requirements for the security and economy of process control and a remarkable increase in the efficiency of digital electronics. This publication concerns the analysis of disturbances in complex technological plants, especially in so-called high-risk processes. The presentation emphasizes the theoretical concepts of diagnosis and therapy control, modelling of the disturbance behaviour of the technological process, and man-machine communication integrating artificial intelligence methods, e.g., the expert system approach. An application is given for nuclear power plants. (author)

  3. Adaptive hybrid mesh refinement for multiphysics applications

    Khamayseh, Ahmed; Almeida, Valmor de

    2007-01-01

    The accuracy and convergence of computational solutions of mesh-based methods are strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are comprised of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to equidistribute a weighted geometric and/or solution error function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate modeling. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably time-dependent multi-physics, multi-scale modeling and simulation.

  4. Multilevel Bloom Filters for P2P Flows Identification Based on Cluster Analysis in Wireless Mesh Network

    Xia-an Bi

    2015-01-01

    With the development of wireless mesh networks and distributed computing, many new P2P services have been deployed, enriching Internet content and applications. The rapid growth of P2P flows puts great pressure on regular network operation, so effective flow identification and management of P2P applications become increasingly urgent. In this paper, we build a multilevel Bloom filter data structure to identify P2P flows based on research into the locality characteristics of P2P flows. Each level of the structure stores a different number of P2P flow rules. According to the characteristic values of the P2P flows, we adjust the parameters of the Bloom filter data structure. The search traverses from the first level to the last level. Compared with traditional algorithms, our method overcomes the drawbacks of previous schemes. The simulation results demonstrate that our algorithm effectively enhances the performance of P2P flow identification. We then deployed our flow identification algorithm in the traffic-monitoring sensors of the network traffic monitoring system at the export link of the campus network. In this real environment, the experimental results demonstrate that our algorithm identifies P2P flows quickly and accurately; it is therefore suitable for actual deployment.
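
    A minimal sketch of the multilevel arrangement described above: each level is an ordinary Bloom filter holding a different number of flow rules, and a lookup falls through the levels in order. The filter sizes, hash counts, and flow-rule strings are hypothetical, not the paper's parameters.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter; k hash functions derived from SHA-256."""
    def __init__(self, m_bits, k):
        self.bits, self.m, self.k = 0, m_bits, k
    def _hashes(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, key):
        for h in self._hashes(key):
            self.bits |= 1 << h
    def __contains__(self, key):
        return all(self.bits >> h & 1 for h in self._hashes(key))

# Multilevel arrangement: query level 1 first, then fall through to level 2.
levels = [BloomFilter(1 << 12, 4), BloomFilter(1 << 14, 4)]
levels[0].add("10.0.0.5:6881/udp")    # hot P2P flow rules in the small filter
levels[1].add("10.0.0.9:51413/tcp")   # colder rules in the larger filter

def is_p2p(flow_key):
    return any(flow_key in level for level in levels)

print(is_p2p("10.0.0.5:6881/udp"), is_p2p("10.0.0.7:443/tcp"))
```

    Bloom filters can yield false positives but never false negatives, which is why sizing each level to the number of rules it stores matters for identification accuracy.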

  5. Numerical analysis of splashing fluid using hybrid method of mesh-based and particle-based modelings

    Tanaka, Nobuatsu; Ogawara, Takuya; Kaneda, Takeshi; Maseguchi, Ryo

    2009-01-01

    In order to simulate splashing and scattering fluid behaviors, we developed a hybrid method combining a mesh-based model for the large-scale continuum fluid and a particle-based model for small-scale discrete fluid particles. As the solver for the continuum fluid, we adopt the CIVA RefIned Multiphase SimulatiON (CRIMSON) code to evaluate two-phase flow behaviors based on recent computational fluid dynamics (CFD) techniques. The phase field model has been introduced into CRIMSON in order to solve the problem of losing phase-interface sharpness in long-term calculations. As the solver for the discrete fluid droplets, we applied the idea of the Smoothed Particle Hydrodynamics (SPH) method. The continuum fluid and the discrete fluid interact with each other through a drag interaction force. We verified our method by applying it to the popular benchmark problem of the collapse of a water column, especially focusing on the splashing and scattering fluid behaviors after the column collided with the wall. We confirmed that the gross splashing and scattering behaviors were well reproduced by the introduction of the particle model, while the detailed behaviors of the particles differed slightly from the experimental results. (author)

  6. Different types of mesh fixation for laparoscopic repair of inguinal hernia: A protocol for systematic review and network meta-analysis with randomized controlled trials.

    Wei, Kongyuan; Lu, Cuncun; Ge, Long; Pan, Bei; Yang, Huan; Tian, Jinhui; Cao, Nong

    2018-04-01

    Laparoscopic inguinal hernia repair has become a valid option for repair of an inguinal hernia, and several types of mesh fixation are in use. This study aims to assess and compare the efficacy of different types of mesh fixation for laparoscopic repair of inguinal hernia using network meta-analysis. We will systematically search PubMed, EMBASE, the Cochrane Library, and the Chinese Biomedical Literature Database from their inception to March 2018. Randomized controlled trials (RCTs) that compared the effect of different types of mesh fixation for laparoscopic inguinal hernia repair will be included. The primary outcomes are chronic groin pain, incidence of hernia recurrence, and complications. Risk-of-bias assessment of the included RCTs will be conducted using the Cochrane risk-of-bias tool. A network meta-analysis will be performed using WinBUGS 1.4.3 software, and the result figures will be generated using R x64 3.1.2 software and STATA V.12.0 software. Grading of Recommendations Assessment, Development and Evaluation (GRADE) will be used to assess the quality of evidence. The results of this study will be published in a peer-reviewed journal. Our study will generate evidence on mesh fixation in laparoscopic repair for adult patients with inguinal hernia and provide suggestions for clinical practice and guidelines.

  7. Assessment of boiling transition analysis code against data from NUPEC BWR full-size fine-mesh bundle tests

    Utsuno, Hideaki; Ishida, Naoyuki; Masuhara, Yasuhiro; Kasahara, Fumio

    2004-01-01

    The transient BT analysis code TCAPE, based on mechanistic methods coupled with subchannel analysis, has been developed for the evaluation of fuel integrity under abnormal operations in BWRs. TCAPE consists mainly of a drift-flux model, a cross-flow model, a film model, and a heat transfer model. Assessment of TCAPE has been performed against data from the BWR full-size fine-mesh bundle tests (BFBT), which consisted of two major parts: the void distribution measurement and the critical power measurement. Code-data comparisons were made for void distributions with varying numbers of unheated rods in a simulated actual fuel assembly. The prediction of steady-state critical power was compared with measurements on a full-scale bundle over a range of BWR operational conditions. Although the cross-sectional averaged void fraction was underestimated when it became lower, an average predicted-to-measured ratio of 0.910 with a standard deviation of 0.076 was obtained. The prediction of steady-state critical power agreed well with the data in the range of BWR operations, with an average ratio of 0.997 and a standard deviation of 0.043. These results demonstrate that TCAPE is well able to predict the two-phase flow distribution and liquid-film dryout phenomena occurring in BWR rod bundles. Part of the NUPEC BFBT database will be made available for an international benchmark exercise. The code assessment shall be continued against the OECD/NRC benchmark based on the BFBT database. (author)

  8. Age-stratified analysis of long-term outcomes of transvaginal mesh repair for treatment of pelvic organ prolapse.

    Dong, Shengnan; Zhong, Yanbo; Chu, Lei; Li, Huaifang; Tong, Xiaowen; Wang, Jianjun

    2016-10-01

    To investigate long-term outcomes after transvaginal mesh repair among patients with pelvic organ prolapse in different age groups. A retrospective cohort study was conducted among women who underwent transvaginal mesh repair with polypropylene mesh for pelvic organ prolapse of stage II or higher between January 2007 and November 2011 at a center in Shanghai, China. Patients were invited to attend a follow-up appointment between July 2014 and May 2015. Surgical outcomes were compared among three age groups (≤59, 60-74, and ≥75 years), and quality-of-life questionnaires were evaluated. Multivariate logistic regression was used to identify risk factors associated with recurrent prolapse and mesh exposure. Among 158 patients, 143 (90.5%) were objectively cured and 149 (94.3%) were subjectively cured at follow-up. Surgical outcomes were similar across all age groups. Significant improvements were observed on the Pelvic Floor Distress Inventory across all applicable subscales in all age groups (P < 0.05). An active postoperative sex life was associated with increased odds of mesh exposure (odds ratio 11.89, 95% confidence interval 1.08-131.48; P = 0.043). Transvaginal mesh repair was found to be a safe and effective technique for treating pelvic organ prolapse among women of all ages. An active postoperative sex life increased the odds of mesh exposure. Copyright © 2016 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Understanding and predicting size selection in diamond-mesh cod ends for Danish seining: A study based on sea trials and computer simulations

    Herrmann, Bent; Krag, Ludvig Ahm; Feekings, Jordan P.

    2016-01-01

    described by a double logistic selection curve, implying that two different size selection processes occur in the cod end. The double selection process could be explained by an additional selection process occurring through slack meshes. The results imply that the escapement of 46% and 34% of the larger Atlantic Cod and Haddock (those above 48 cm), respectively, would be through wide-open or slack meshes. Since these mesh states are only likely to be present in the latest stage of the fishing process (e.g., when the cod end is near the surface), a large fraction of the bigger fish probably escaped near...

  10. Connectivity editing for quad-dominant meshes

    Peng, Chihan; Wonka, Peter

    2013-01-01

    and illustrate the advantages and disadvantages of different strategies for quad-dominant mesh design. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  11. Mesh Excision: Is Total Mesh Excision Necessary?

    Wolff, Gillian F; Winters, J Christian; Krlin, Ryan M

    2016-04-01

    Nearly 29% of women will undergo a secondary, repeat operation for pelvic organ prolapse (POP) symptom recurrence following a primary repair, as reported by Abbott et al. (Am J Obstet Gynecol 210:163.e1-163.e1, 2014). In efforts to decrease the rates of failure, graft materials have been utilized to augment transvaginal repairs. Following the success of polypropylene mesh (PPM) for stress urinary incontinence (SUI), the use of PPM in the transvaginal repair of POP increased. However, in recent years, significant concerns have been raised about the safety of PPM. Complications, some specific to mesh, such as exposures, erosion, dyspareunia, and pelvic pain, have been reported with increased frequency. In the current literature, there is no substantive evidence to suggest that PPM has intrinsic properties that warrant total mesh removal in the absence of complications. There are a number of complications that can occur after transvaginal mesh placement that do warrant surgical intervention after failure of conservative therapy. In aggregate, there are no high-quality controlled studies that clearly demonstrate that total mesh removal is consistently more likely to achieve pain reduction. In the cases of obstruction and erosion, it seems clear that definitive removal of the offending mesh is associated with resolution of symptoms in the majority of cases and is reasonable practice. There are a number of complications that can occur with removal of mesh, and patients should be informed of this as they formulate a choice of treatment. We will review these considerations as we examine the clinical question of whether total versus partial removal of mesh is necessary for the resolution of complications following transvaginal mesh placement.

  12. Personal Computer Transport Analysis Program

    DiStefano, Frank, III; Wobick, Craig; Chapman, Kirt; McCloud, Peter

    2012-01-01

    The Personal Computer Transport Analysis Program (PCTAP) is C++ software used for analysis of thermal fluid systems. The program predicts thermal fluid system and component transients. The output consists of temperatures, flow rates, pressures, delta pressures, tank quantities, and gas quantities in the air, along with air scrubbing component performance. PCTAP's solution process assumes that the tubes in the system are well insulated so that only the heat transfer between fluid and tube wall and between adjacent tubes is modeled. The system described in the model file is broken down into its individual components; i.e., tubes, cold plates, heat exchangers, etc. A solution vector is built from the components, and a flow is then simulated with fluid being transferred from one component to the next. The solution vector of components in the model file is built at the initiation of the run. This solution vector is simply a list of components in the order of their inlet dependency on other components. The component parameters are updated in the order in which they appear in the list at every time step. Once the solution vectors have been determined, PCTAP cycles through the components in the solution vector, executing their outlet function for each time-step increment.

  13. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement

    Juan J. Garcia-Cantero

    2017-06-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been

  14. Mesh versus non-mesh repair of ventral abdominal hernias

    Jawaid, M.A.; Talpur, A.H.

    2008-01-01

To investigate the relative effectiveness of mesh and suture repair of ventral abdominal hernias in terms of clinical outcome, quality of life, and rate of recurrence for both techniques. This is a retrospective descriptive analysis of 236 patients with mesh and non-mesh repair of primary ventral hernias performed between January 2000 and December 2004 at the Surgery Department, Liaquat University of Medical and Health Sciences, Jamshoro. The record sheets of the patients were analyzed and data retrieved to compare the short-term and long-term results of both techniques. The retrieved data were statistically analyzed using SPSS version 11. There were 43 (18.22%) males and 193 (81.77%) females with a mean age of 51.79 years (range 22-81). Para-umbilical hernia was the commonest ventral hernia, accounting for 49.8% (n=118) of the total study population, followed by incisional hernia, comprising 24% (n=57) of the total number. There was a significant difference in the recurrence rate at the 3-year interval, with 23/101 (22.77%) recurrences in suture-repaired subjects compared to 10/135 (7.40%) in the mesh repair group. Chronic pain lasting up to 1-2 years was noted in 14 patients with suture repair. Wound infection was comparatively more common (8.14%) in the mesh group. The other variables, such as operative and postoperative complications, total hospital stay, and quality of life, are also discussed. Mesh repair of ventral hernia is much superior to non-mesh suture repair in terms of recurrence and overall outcome. (author)

  15. Outcomes using a bioprosthetic mesh at the time of permanent stoma creation in preventing a parastomal hernia: a value analysis.

    Figel, Nicole A; Rostas, Jack W; Ellis, C Neal

    2012-03-01

A retrospective review of the medical records of all patients who had a prosthetic placed at the time of stoma creation for the prevention of a parastomal hernia was performed. The purpose of this study was to evaluate the safety, efficacy, and cost-effectiveness of bioprosthetics. A bioprosthetic was used in 16 patients to prevent the occurrence of a parastomal hernia. The median follow-up was 38 months. There were no mesh-related complications, and no parastomal hernias occurred. On value analysis, to be cost-effective, the percentage of patients who would have subsequently needed surgical repair of a parastomal hernia would have to be in excess of 39% or the bioprosthetic would have to cost less than $2,267 to $4,312. These data show the safety and efficacy of using a bioprosthetic at the time of permanent stoma creation in preventing a parastomal hernia and define the parameters for this approach to be cost-effective. Copyright © 2012 Elsevier Inc. All rights reserved.
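    The reported thresholds follow from a break-even comparison between the upfront cost of the bioprosthetic and the expected cost of repairing a parastomal hernia later. A minimal sketch of that arithmetic, with illustrative figures rather than study data:

```python
# Back-of-the-envelope break-even check for prophylactic mesh placement.
# Dollar figures and probabilities here are illustrative inputs only; the
# study reports the resulting thresholds (39%, $2,267 to $4,312).

def mesh_is_cost_effective(mesh_cost, repair_cost, p_repair_needed):
    """Mesh pays for itself when the expected repair cost it avoids exceeds its price."""
    expected_avoided_cost = p_repair_needed * repair_cost
    return expected_avoided_cost > mesh_cost

# Example: a $5,000 bioprosthetic vs. a $13,000 repair needed 39% of the time.
print(mesh_is_cost_effective(mesh_cost=5000, repair_cost=13000, p_repair_needed=0.39))
```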

  16. An Underlay Communication Channel for 5G Cognitive Mesh Networks: Packet Design, Implementation, Analysis, and Experimental Results

    Tarek Haddadin; Stephen Andrew Laraway; Arslan Majid; Taylor Sibbett; Daryl Leon Wasden; Brandon F Lo; Lloyd Landon; David Couch; Hussein Moradi; Behrouz Farhang-Boroujeny

    2016-04-01

This paper proposes and presents the design and implementation of an underlay communication channel (UCC) for 5G cognitive mesh networks. The UCC builds its waveform based on filter bank multicarrier spread spectrum (FB-MCSS) signaling. The use of this novel spread spectrum signaling allows the device-to-device (D2D) user equipments (UEs) to communicate at a level well below noise temperature and hence, minimize taxation on macro-cell/small-cell base stations and their UEs in 5G wireless systems. Moreover, the use of filter banks allows us to avoid those portions of the spectrum that are in use by macro-cell and small-cell users. Hence, both D2D-to-cellular and cellular-to-D2D interference will be very close to none. We propose a specific packet for UCC and develop algorithms for packet detection, timing acquisition and tracking, as well as channel estimation and equalization. We also present the details of an implementation of the proposed transceiver on a software radio platform and compare our experimental results with those from a theoretical analysis of our packet detection algorithm.
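    While the paper's detection algorithm is built on FB-MCSS signaling and is considerably more elaborate, the underlying idea of recovering a packet buried below the noise floor can be illustrated with a generic preamble correlator; the preamble length, amplitudes, and threshold below are arbitrary:

```python
import numpy as np

# Illustrative preamble correlation detector; not the UCC's actual algorithm.
rng = np.random.default_rng(0)
preamble = rng.choice([-1.0, 1.0], size=128)     # known spreading preamble
rx = rng.normal(0, 1.0, size=4096)               # channel noise
rx[1000:1128] += 0.5 * preamble                  # packet below the per-sample noise floor

corr = np.correlate(rx, preamble, mode="valid") / len(preamble)
threshold = 4 * corr.std()
detections = np.flatnonzero(np.abs(corr) > threshold)
print(detections)                                # expect indices near 1000
```

    The processing gain of correlating over 128 chips is what lifts the correlation peak above the noise even though the per-sample SNR is negative.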

  17. Piping stress analysis with personal computers

    Revesz, Z.

    1987-01-01

The growing market of personal computers is providing an increasing number of professionals with unprecedented and surprisingly inexpensive computing capacity which, when used with powerful software, can immensely enhance an engineer's capabilities. This paper focuses on the possibilities that the widespread distribution of personal computers has opened in piping stress analysis, on the necessary changes in the software, and on the limitations of using personal computers for engineering design and analysis. Reliability and quality assurance aspects of using personal computers for nuclear applications are also mentioned. The paper concludes with personal views of the author and experiences gained during interactive graphic piping software development for personal computers. (orig./GL)

  18. Three dimensional computational fluid dynamic analysis of debris transport under emergency cooling water recirculation

    Park, Jong Woon

    2010-01-01

This paper provides a computational fluid dynamics (CFD) analysis method for the evaluation of debris transport under emergency recirculation mode after a loss of coolant accident at a nuclear power plant. A three dimensional reactor building floor geometrical model is constructed, including flow obstacles larger than 6 inches such as mechanical components and equipment, and considering various inlet flow paths from the upper reactor building such as break and spray flow. In the modeling of the inlet flows from the upper floors, the effect of gravitational force was also reflected. For the precision of the analysis, 3 million tetrahedral mesh cells were generated. The reference calculation showed physically reasonable results. Sensitivity studies for mesh type and turbulence model showed very similar results to the reference case. This study provides useful information on the application of CFD to the evaluation of debris transport fraction for the design of new emergency sump filters. (orig.)

  19. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  20. Shape space exploration of constrained meshes

    Yang, Yongliang

    2011-12-12

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.
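    To make the tangent-space idea concrete: for a shape space {x : C(x) = 0}, first-order exploration directions span the null space of the constraint Jacobian. The following is a minimal numerical sketch for a single planar-quad constraint; the constraint function, finite-difference Jacobian, and SVD-based null space are illustrative, not the paper's implementation:

```python
import numpy as np

def planarity(x):
    """Scalar constraint: signed volume of the tetrahedron spanned by a quad's vertices."""
    v = x.reshape(4, 3)
    return np.array([np.dot(v[1] - v[0], np.cross(v[2] - v[0], v[3] - v[0]))])

def jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x."""
    J = np.zeros((len(f(x)), len(x)))
    for i in range(len(x)):
        d = np.zeros(len(x)); d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

x0 = np.array([0., 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0])  # one planar quad, C(x0) = 0
J = jacobian(planarity, x0)
_, s, Vt = np.linalg.svd(J)
tangent_basis = Vt[1:]        # null-space directions of the single constraint
print(tangent_basis.shape)    # (11, 12): an 11-dimensional tangent space
```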

  1. Shape space exploration of constrained meshes

    Yang, Yongliang; Yang, Yijun; Pottmann, Helmut; Mitra, Niloy J.

    2011-01-01

    We present a general computational framework to locally characterize any shape space of meshes implicitly prescribed by a collection of non-linear constraints. We computationally access such manifolds, typically of high dimension and co-dimension, through first and second order approximants, namely tangent spaces and quadratically parameterized osculant surfaces. Exploration and navigation of desirable subspaces of the shape space with regard to application specific quality measures are enabled using approximants that are intrinsic to the underlying manifold and directly computable in the parameter space of the osculant surface. We demonstrate our framework on shape spaces of planar quad (PQ) meshes, where each mesh face is constrained to be (nearly) planar, and circular meshes, where each face has a circumcircle. We evaluate our framework for navigation and design exploration on a variety of inputs, while keeping context specific properties such as fairness, proximity to a reference surface, etc. © 2011 ACM.

  2. Computer-Based Linguistic Analysis.

    Wright, James R.

    Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…

  3. Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems

    Ibrahim, A. M.; Peplow, D. E.; Mosher, S. W.; Wagner, J. C.; Evans, T. M.; Wilson, P. P.; Sawan, M. E.

    2013-01-01

The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class supercomputer. (authors)
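    The element-budget idea in the second algorithm can be pictured as a greedy loop that keeps splitting the highest-detail elements until the cap is reached. The sketch below is a toy interpretation, with a hypothetical octree split and detail score, not the actual CADIS/FW-CADIS implementation:

```python
import heapq

def split_into_eight(elem):
    # Hypothetical octree split: each child carries half the parent's detail score.
    eid, score = elem
    return [(f"{eid}.{i}", score / 2) for i in range(8)]

def refine_with_budget(elements, max_elements):
    """Greedily refine the highest-scoring elements without exceeding the cap."""
    heap = [(-score, (eid, score)) for eid, score in elements]
    heapq.heapify(heap)
    count = len(elements)
    while heap and count + 7 <= max_elements:   # one split: 1 element -> 8 children
        _, elem = heapq.heappop(heap)
        for child in split_into_eight(elem):
            heapq.heappush(heap, (-child[1], child))
        count += 7
    return [e for _, e in heap]

mesh = refine_with_budget([("root", 1.0)], max_elements=64)
print(len(mesh))   # exactly 64 elements: the budget is respected
```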

  4. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-01-01

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer

  5. Automatic mesh generation with QMESH program

    Ise, Takeharu; Tsutsui, Tsuneo

    1977-05-01

Usage of the two-dimensional self-organizing mesh generation program, QMESH, is presented together with descriptions and experience, as it has recently been converted and reconstructed from the NEACPL version to the FACOM. The program package consists of the QMESH code to generate quadrilateral meshes with smoothing techniques, the QPLOT code to plot the data obtained from QMESH on the graphic COM, and the RENUM code to renumber the meshes by using a bandwidth minimization procedure. The technique of mesh restructuring coupled with smoothing techniques is especially useful when one generates meshes for computer codes based on the finite element method. Several typical examples are given for easy access to the QMESH program, which is registered in the R.B-disks of JAERI for users. (auth.)

  6. Connectivity editing for quadrilateral meshes

    Peng, Chihan; Zhang, Eugene; Kobayashi, Yoshihiro; Wonka, Peter

    2011-01-01

We propose new connectivity editing operations for quadrilateral meshes with the unique ability to explicitly control the location, orientation, type, and number of the irregular vertices (valence not equal to four) in the mesh while preserving sharp edges. We provide theoretical analysis on what editing operations are possible and impossible and introduce three fundamental operations to move and re-orient a pair of irregular vertices. We argue that our editing operations are fundamental, because they only change the quad mesh in the smallest possible region and involve the fewest irregular vertices (i.e., two). The irregular vertex movement operations are supplemented by operations for the splitting, merging, canceling, and aligning of irregular vertices. We explain how the proposed high-level operations are realized through graph-level editing operations such as quad collapses, edge flips, and edge splits. The utility of these mesh editing operations is demonstrated by improving the connectivity of quad meshes generated from state-of-the-art quadrangulation techniques. © 2011 ACM.

  7. Connectivity editing for quadrilateral meshes

    Peng, Chihan

    2011-12-12

We propose new connectivity editing operations for quadrilateral meshes with the unique ability to explicitly control the location, orientation, type, and number of the irregular vertices (valence not equal to four) in the mesh while preserving sharp edges. We provide theoretical analysis on what editing operations are possible and impossible and introduce three fundamental operations to move and re-orient a pair of irregular vertices. We argue that our editing operations are fundamental, because they only change the quad mesh in the smallest possible region and involve the fewest irregular vertices (i.e., two). The irregular vertex movement operations are supplemented by operations for the splitting, merging, canceling, and aligning of irregular vertices. We explain how the proposed high-level operations are realized through graph-level editing operations such as quad collapses, edge flips, and edge splits. The utility of these mesh editing operations is demonstrated by improving the connectivity of quad meshes generated from state-of-the-art quadrangulation techniques. © 2011 ACM.

  8. Transportation Research & Analysis Computing Center

    Federal Laboratory Consortium — The technical objectives of the TRACC project included the establishment of a high performance computing center for use by USDOT research teams, including those from...

  9. Numerical Analysis of Multiscale Computations

    Engquist, Björn; Tsai, Yen-Hsi R

    2012-01-01

    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.

  10. Batch Computed Tomography Analysis of Projectiles

    2016-05-01

ARL-TR-7681, May 2016, US Army Research Laboratory: Batch Computed Tomography Analysis of Projectiles, by Michael C Golt and Matthew S Bratcher, Weapons and Materials Research… values to account for projectile variability in the ballistic evaluation of armor. Subject terms: computed tomography, CT, BS41, projectiles.

  11. Quadrilateral mesh fitting that preserves sharp features based on multi-normals for Laplacian energy

    Yusuke Imai

    2014-04-01

Because the cost of performance testing using actual products is expensive, manufacturers use lower-cost computer-aided design simulations for this function. In this paper, we propose using hexahedral meshes, which are more accurate than tetrahedral meshes, for finite element analysis. We propose automatic hexahedral mesh generation with sharp features to precisely represent the corresponding features of a target shape. Our hexahedral mesh is generated using a voxel-based algorithm. In our previous work, we fitted the surface of the voxels to the target surface using Laplacian energy minimization, and used normal vectors in the fitting to preserve sharp features. However, this method could not represent concave sharp features precisely. In this proposal, we improve our previous Laplacian energy minimization by adding a term that depends on multi-normal vectors instead of using normal vectors. Furthermore, we accentuate a convex/concave surface subset to represent concave sharp features.
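    The fitting step can be illustrated as a least-squares problem that balances a data term against a Laplacian smoothness term. The 1D polyline below is a minimal stand-in; the paper's voxel-surface setting and the multi-normal term for concave features are omitted:

```python
import numpy as np

# Minimal sketch of Laplacian-energy fitting: move vertices toward target
# positions while a Laplacian term keeps the shape smooth. Targets and the
# smoothness weight are illustrative.
n = 10
targets = np.linspace(0, 1, n) ** 2                      # target curve to fit
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)    # discrete 1D Laplacian

w = 5.0                                                  # smoothness weight
A = np.vstack([np.eye(n), w * L])                        # data term + Laplacian term
b = np.concatenate([targets, np.zeros(n)])
x, *_ = np.linalg.lstsq(A, b, rcond=None)                # min |x - t|^2 + w^2 |Lx|^2
print(np.round(x, 3))
```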

  12. Charged particle tracking through electrostatic wire meshes using the finite element method

    Devlin, L. J.; Karamyshev, O.; Welsch, C. P., E-mail: carsten.welsch@cockcroft.ac.uk [The Cockcroft Institute, Daresbury Laboratory, Warrington (United Kingdom); Department of Physics, University of Liverpool, Liverpool (United Kingdom)

    2016-06-15

Wire meshes are used across many disciplines to accelerate and focus charged particles; however, analytical solutions are non-exact and few codes exist which simulate the exact fields around a mesh of physical size. A tracking code based in Matlab-Simulink, using field maps generated with finite element software, has been developed which tracks electrons or ions through electrostatic wire meshes. The fields around such a geometry can be presented as an analytical expression under several basic assumptions; however, it is apparent that computational calculations are required to obtain realistic values of electric potential and fields, particularly when multiple wire meshes are deployed. The tracking code is flexible in that any quantitatively describable particle distribution can be used for both electrons and ions, and it offers other benefits such as ease of export to other programs for analysis. The code is made freely available, and physical examples are highlighted where this code could be beneficial for different applications.
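    As a minimal stand-in for such tracking, the sketch below integrates a non-relativistic electron through a sampled 1D electrostatic field map with classical RK4; the field profile, step size, and initial conditions are illustrative, and the real code works with finite element field maps inside Matlab-Simulink:

```python
import numpy as np

Q_M = -1.7588e11                      # electron charge-to-mass ratio (C/kg)
z_grid = np.linspace(0, 0.01, 200)    # 1 cm gap, sampled field map
E_grid = 1e5 * np.cos(np.pi * z_grid / 0.01)   # illustrative field (V/m)

def accel(z):
    # Linear interpolation into the field map, as a tracking code would do.
    return Q_M * np.interp(z, z_grid, E_grid)

z, v, dt = 0.0, 1e6, 1e-11            # position (m), speed (m/s), step (s)
for _ in range(500):                  # classical RK4 for (z', v') = (v, a(z))
    k1z, k1v = v, accel(z)
    k2z, k2v = v + 0.5*dt*k1v, accel(z + 0.5*dt*k1z)
    k3z, k3v = v + 0.5*dt*k2v, accel(z + 0.5*dt*k2z)
    k4z, k4v = v + dt*k3v, accel(z + dt*k3z)
    z += dt * (k1z + 2*k2z + 2*k3z + k4z) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
print(z, v)
```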

  13. COMPUTER METHODS OF GENETIC ANALYSIS.

    A. L. Osipov

    2017-02-01

This paper reviews the basic statistical methods used in the genetic analysis of human traits: segregation analysis, linkage analysis, and allelic association analysis. Software supporting the implementation of these methods has been developed.

  14. Computational domain discretization in numerical analysis of flow within granular materials

    Sosnowski, Marcin

    2018-06-01

The discretization of the computational domain is a crucial step in Computational Fluid Dynamics (CFD) because it influences not only the numerical stability of the analysed model but also the agreement between the obtained results and real data. Modelling flow in packed beds of granular materials is a very challenging task in terms of discretization due to the existence of narrow spaces between spherical granules contacting tangentially at a single point. The standard approach to this issue results in a low quality mesh and, in consequence, unreliable results. Therefore the common method is to reduce the diameter of the modelled granules in order to eliminate the single-point contact between individual granules. The drawback of such a method is the adulteration of the flow and contact heat resistance, among others. Therefore an innovative method is proposed in the paper: the single-point contact is extended to a cylinder-shaped volume contact. Such an approach eliminates the low quality mesh elements and simultaneously introduces only slight distortion to the flow as well as to the contact heat transfer. The performed analysis of numerous test cases proves the great potential of the proposed method of meshing packed beds of granular materials.
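    The proposed contact treatment can be made concrete with a little geometry: a bridging cylinder of chosen radius replaces the tangential contact point, extending into each sphere by the corresponding sagitta. The sketch below is an illustrative reading of that construction, with an arbitrary bridge-to-granule radius ratio:

```python
import numpy as np

def contact_bridge(c1, c2, r, bridge_ratio=0.2):
    """Axis, radius, length, and center of a cylinder bridging two tangent equal spheres.
    The bridge_ratio is an illustrative modelling choice, not a value from the paper."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    axis = (c2 - c1) / np.linalg.norm(c2 - c1)
    rb = bridge_ratio * r                      # bridge (cylinder) radius
    # The cylinder spans the pinch region where each sphere's cross-section
    # is narrower than rb: sagitta r - sqrt(r^2 - rb^2) on each side.
    half_len = r - np.sqrt(r**2 - rb**2)
    center = 0.5 * (c1 + c2)                   # the original contact point
    return axis, rb, 2 * half_len, center

axis, rb, length, center = contact_bridge([0, 0, 0], [0, 0, 2.0], r=1.0)
print(rb, length)   # 0.2, ~0.0404
```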

  15. Improved mesh generator for the POISSON Group Codes

    Gupta, R.C.

    1987-01-01

This paper describes the improved mesh generator of the POISSON Group Codes. These improvements enable one to have full control over the way the mesh is generated and, in particular, the way the mesh density is distributed throughout the model. A higher mesh density in certain regions, coupled with a successively lower mesh density in others, keeps the accuracy of the field computation high and the requirements on computer time and memory low. The mesh is generated with the help of the codes AUTOMESH and LATTICE; both have gone through a major upgrade. Modifications have also been made in the POISSON part of these codes. We present an example of a superconducting dipole magnet to explain how to use this code. The results of field computations are found to be reliable within a few parts in a hundred thousand, even in such complex geometries

  16. Proceedings of the 20th International Meshing Roundtable

    2012-01-01

    This volume contains the articles presented at the 20th International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held in Paris, France on Oct 23-26, 2011. This is the first year the IMR was held outside the United States territory. Other sponsors of the 20th IMR are Systematic Paris Region Systems & ICT Cluster, AIAA, NAFEMS, CEA, and NSF. The Sandia National Laboratories started the first IMR in 1992, and the conference has been held annually since. Each year the IMR brings together researchers, developers, and application experts, from a variety of disciplines, to present and discuss ideas on mesh generation and related topics. The topics covered by the IMR have applications in numerical analysis, computational geometry, computer graphics, as well as other areas, and the presentations describe novel work ranging from theory to application.     .

  17. Computer codes for the analysis of flask impact problems

    Neilson, A.J.

    1984-09-01

This review identifies typical features of the design of transportation flasks and considers some of the analytical tools required for the analysis of impact events. Because of the complexity of the physical problem, it is unlikely that a single code will adequately deal with all the aspects of the impact incident. Candidate codes are identified on the basis of current understanding of their strengths and limitations. It is concluded that the HONDO-II, DYNA3D and ABAQUS codes, which are already mounted on UKAEA computers, will be suitable tools for use in the analysis of experiments conducted in the proposed AEEW programme and of general flask impact problems. Initial attention should be directed at the DYNA3D and ABAQUS codes, with HONDO-II being reserved for situations where the three-dimensional elements of DYNA3D may provide uneconomic simulations in planar or axisymmetric geometries. Attention is drawn to the importance of access to suitable mesh generators to create the nodal coordinate and element topology data required by these structural analysis codes. (author)

  18. MHD simulations on an unstructured mesh

    Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Sugiyama, L.E.

    1998-01-01

Two reasons for using an unstructured computational mesh are adaptivity and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D

  19. Impact analysis on a massively parallel computer

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  20. Seeking new surgical predictors of mesh exposure after transvaginal mesh repair.

    Wu, Pei-Ying; Chang, Chih-Hung; Shen, Meng-Ru; Chou, Cheng-Yang; Yang, Yi-Ching; Huang, Yu-Fang

    2016-10-01

The purpose of this study was to explore new preventable risk factors for mesh exposure. A retrospective review was undertaken of 92 consecutive patients treated with transvaginal mesh (TVM) in the urogynecological unit of our university hospital. An analysis of perioperative predictors was conducted in patients after vaginal repairs using a type 1 mesh. Mesh complications were recorded according to International Urogynecological Association (IUGA) definitions. Mesh-exposure-free durations were calculated by using the Kaplan-Meier method and compared between different closure techniques using the log-rank test. Hazard ratios (HR) of predictors for mesh exposure were estimated by univariate and multivariate analyses using Cox proportional hazards regression models. The median surveillance interval was 24.1 months. Two late occurrences were found beyond 1 year post operation. No statistically significant correlation was observed between mesh exposure and concomitant hysterectomy. Exposure risks were significantly higher in patients with interrupted whole-layer closure in univariate analysis. In the multivariate analysis, hematoma (HR 5.42, 95% confidence interval (CI) 1.26-23.35, P = 0.024), Prolift mesh (HR 5.52, 95% CI 1.15-26.53, P = 0.033), and interrupted whole-layer closure (HR 7.02, 95% CI 1.62-30.53, P = 0.009) were the strongest predictors of mesh exposure. Findings indicate the risks of mesh exposure and reoperation may be prevented by avoiding hematoma, large amounts of mesh, or interrupted whole-layer closure in TVM surgeries. If these risk factors are prevented, hysterectomy may not be a relative contraindication for TVM use. We also provide evidence regarding mesh exposure and the necessity for more than 1 year of follow-up and preoperative counselling.
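    For readers unfamiliar with the method, a Cox proportional-hazards fit of this kind looks roughly as follows; the data frame is fabricated for illustration, and the `lifelines` package is used as a stand-in for whatever statistical software the authors employed:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data: follow-up duration, exposure event indicator, and two covariates.
df = pd.DataFrame({
    "months_followed":     [24, 30, 6, 18, 36, 12, 9, 27],
    "mesh_exposure":       [0,  0,  1, 0,  0,  1,  1, 0],
    "hematoma":            [0,  1,  1, 0,  1,  1,  0, 0],
    "whole_layer_closure": [0,  1,  1, 1,  0,  1,  0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="mesh_exposure")
cph.print_summary()   # hazard ratios are exp(coef) for each covariate
```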

  1. Purification, crystallization and preliminary X-ray crystallographic analysis of human ppGpp hydrolase, Mesh1

    Dawei Sun

    2010-09-01

Bacterial SpoT is a Mn2+-dependent pyrophosphohydrolase that hydrolyzes guanosine 3'-diphosphate 5'-diphosphate (ppGpp) to guanosine diphosphate and pyrophosphate. In this study, the SpoT ortholog from Homo sapiens (hMesh1) was over-expressed in Escherichia coli, purified, and crystallized using the hanging-drop vapour-diffusion method with polyethylene glycol and sodium citrate. The native crystal of hMesh1 diffracted to 2.1 Å using a synchrotron-radiation source and belonged to the monoclinic space group P21 with cell dimensions a = 53.27 Å, b = 62.61 Å, c = 52.45 Å and β = 94.96°. The crystal contains two molecules in the asymmetric unit, with a solvent content of 44% and a Matthews coefficient (VM) of 2.18 Å³/Da.

  2. The Role of Chronic Mesh Infection in Delayed-Onset Vaginal Mesh Complications or Recurrent Urinary Tract Infections: Results From Explanted Mesh Cultures.

    Mellano, Erin M; Nakamura, Leah Y; Choi, Judy M; Kang, Diana C; Grisales, Tamara; Raz, Shlomo; Rodriguez, Larissa V

    2016-01-01

Vaginal mesh complications necessitating excision are increasingly prevalent. We aim to study whether subclinical chronically infected mesh contributes to the development of delayed-onset mesh complications or recurrent urinary tract infections (UTIs). Women undergoing mesh removal from August 2013 through May 2014 were identified by the surgical code for vaginal mesh removal. Only women undergoing removal of anti-incontinence mesh were included. Exclusion criteria included any women undergoing simultaneous prolapse mesh removal. We abstracted preoperative and postoperative information from the medical record and compared mesh culture results from patients with and without mesh extrusion, de novo recurrent UTIs, and delayed-onset pain. One hundred seven women with only anti-incontinence mesh removed were included in the analysis. Onset of complications after mesh placement was within the first 6 months in 70 (65%) of 107 and delayed (≥6 months) in 37 (35%) of 107. A positive culture from the explanted mesh was obtained from 82 (77%) of 107 patients, and 40 (37%) of 107 were positive with potential pathogens. There were no significant differences in culture results when comparing patients with delayed-onset versus immediate pain, extrusion with no extrusion, and de novo recurrent UTIs with no infections. In this large cohort of patients with mesh removed for a diverse array of complications, cultures of the explanted vaginal mesh demonstrate frequent low-density bacterial colonization. We found no differences in culture results from women with delayed-onset pain versus acute pain, vaginal mesh extrusions versus no extrusions, or recurrent UTIs using standard culture methods. Chronic prosthetic infections in other areas of medicine are associated with bacterial biofilms, which are resistant to typical culture techniques. Further studies using culture-independent methods are needed to investigate the potential role of chronic bacterial infections in delayed vaginal mesh complications.

  3. IUE Data Analysis Software for Personal Computers

    Thompson, R.; Caplinger, J.; Taylor, L.; Lawton , P.

    1996-01-01

    This report summarizes the work performed for the program titled, "IUE Data Analysis Software for Personal Computers" awarded under Astrophysics Data Program NRA 92-OSSA-15. The work performed was completed over a 2-year period starting in April 1994. As a result of the project, 450 IDL routines and eight database tables are now available for distribution for Power Macintosh computers and Personal Computers running Windows 3.1.

  4. Modelling of pedestrian level wind environment on a high-quality mesh: A case study for the HKPolyU campus

    Du, Yaxing; Mak, Cheuk Ming; Ai, Zhengtao

    2018-01-01

Quality and efficiency of computational fluid dynamics (CFD) simulation of the pedestrian level wind environment in a complex urban area are often compromised by many influencing factors, particularly mesh quality. This paper first proposes a systematic and efficient mesh generation method and then performs detailed sensitivity analysis of some important computational parameters. The geometrically complex Hong Kong Polytechnic University (HKPolyU) campus is taken as a case study. Based on the high-quality mesh system, the influences of three important computational parameters, namely, turbulence model, near-wall mesh density, and computational domain size, on the CFD predicted results of the pedestrian level wind environment are quantitatively evaluated. Validation of CFD models is conducted against wind tunnel experimental data, where a good agreement is achieved. It is found that the proposed mesh…

  5. Adaptive mesh refinement for storm surge

    Mandli, Kyle T.; Dawson, Clint N.

    2014-01-01

    An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run. © 2014 Elsevier Ltd.
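    The core flag-and-refine idea can be shown in a few lines: mark cells where the solution gradient is steep and shrink the cell size there. This toy 1D transect is only a cartoon of GeoClaw's patch-based AMR; the surge profile and threshold are invented:

```python
import numpy as np

x = np.linspace(0, 100e3, 201)                  # 100 km coastline transect (m)
eta = 2.0 / (1 + np.exp(-(x - 60e3) / 2e3))     # illustrative 2 m surge front

grad = np.abs(np.gradient(eta, x))              # surface-height gradient
flag = grad > 1e-4                              # steep-gradient cells to refine
dx = x[1] - x[0]
refined_dx = np.where(flag, dx / 4, dx)         # 4x refinement where flagged
print(f"{flag.sum()} of {flag.size} cells flagged for 4x refinement")
```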

  6. Adaptive mesh refinement for storm surge

    Mandli, Kyle T.

    2014-03-01

    An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run. © 2014 Elsevier Ltd.

  7. Lagrangian fluid dynamics using the Voronoi-Delaunay mesh

    Dukowicz, J.K.

    1981-01-01

    A Lagrangian technique for numerical fluid dynamics is described. This technique makes use of the Voronoi mesh to efficiently locate new neighbors, and it uses the dual (Delaunay) triangulation to define computational cells. This removes all topological restrictions and facilitates the solution of problems containing interfaces and multiple materials. To improve computational accuracy a mesh smoothing procedure is employed

  8. A study on the dependency between turbulent models and mesh configurations of CFD codes

    Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook

    2015-01-01

This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric condition of the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in a local region. From this need arises the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, i.e., GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against the experimental data from the PANDA facility. The main findings observed through the present work can be summarized as follows: 1) In the case of the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models are marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models are small for a smaller number of mesh cells; however, as the number of cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.

  9. A study on the dependency between turbulent models and mesh configurations of CFD codes

    Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook [CAU, Seoul (Korea, Republic of)

    2015-10-15

This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric condition of the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in a local region. From this need arises the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, i.e., GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against the experimental data from the PANDA facility. The main findings observed through the present work can be summarized as follows: 1) In the case of the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models are marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models are small for a smaller number of mesh cells; however, as the number of cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.

  10. Hernia Surgical Mesh Implants

Hernia surgical mesh implants are available in knitted mesh or non-knitted sheet forms. The synthetic materials used can be absorbable, non-absorbable, or a combination of absorbable and non-absorbable materials. Animal-derived mesh is made of animal tissue, such as intestine or skin, that has been processed and disinfected to be suitable for use as an implant.

  11. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    Calo, Victor M.; Collier, Nathan; Pardo, David; Paszyński, Maciej R.

    2011-01-01

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  12. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    Calo, Victor M.

    2011-05-14

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  13. A Mechanistic Study of Wetting Superhydrophobic Porous 3D Meshes

    Yohe, Stefan T.; Freedman, Jonathan D.; Falde, Eric J.; Colson, Yolonda L.; Grinstaff, Mark W.

    2014-01-01

Superhydrophobic, porous, 3D materials composed of poly(ε-caprolactone) (PCL) and the hydrophobic polymer dopant poly(glycerol monostearate-co-ε-caprolactone) (PGC-C18) are fabricated using the electrospinning technique. These 3D materials are distinct from 2D superhydrophobic surfaces, with maintenance of air at the surface as well as within the bulk of the material. These superhydrophobic materials float in water, and when held underwater and pressed, an air bubble is released and will rise to the surface. By changing the PGC-C18 doping concentration in the meshes and/or the fiber size from the micro- to nanoscale, the long-term stability of the entrapped air layer is controlled. The rate of water infiltration into the meshes, and the resulting displacement of the entrapped air, is quantitatively measured using X-ray computed tomography. The properties of the meshes are further probed using surfactants and solvents of different surface tensions. Finally, the application of hydraulic pressure is used to quantify the breakthrough pressure to wet the meshes. The tools for fabrication and analysis of these superhydrophobic materials as well as the ability to control the robustness of the entrapped air layer are highly desirable for a number of existing and emerging applications. PMID:25309305

  14. 22nd International Meshing Roundtable

    Staten, Matthew

    2014-01-01

    This volume contains the articles presented at the 22nd International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on Oct 13-16, 2013 in Orlando, Florida, USA.  The first IMR was held in 1992, and the conference series has been held annually since.  Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics.  The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics and visualization.

  15. 21st International Meshing Roundtable

    Weill, Jean-Christophe

    2013-01-01

    This volume contains the articles presented at the 21st International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and was held on October 7–10, 2012 in San Jose, CA, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics, and visualization.

  16. MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.

    Mao, Yuqing; Lu, Zhiyong

    2017-04-17

MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are not immediately indexed until 2 or 3 months later) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F1-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that MeSH Now is capable of processing PubMed scale documents within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/.
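    Conceptually, the pipeline is candidate generation followed by supervised scoring and a top-k cut. The sketch below substitutes a pointwise logistic scorer for MeSH Now's listwise learning-to-rank model; the candidates, features, and labels are fabricated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Candidate MeSH terms merged from multiple strategies (illustrative).
candidates = ["Humans", "Neoplasms", "Mice", "Algorithms"]
# Per-candidate features: [similar-article vote share, title-match flag].
X = np.array([[0.9, 1], [0.7, 1], [0.1, 0], [0.4, 0]])
y = np.array([1, 1, 0, 0])     # toy gold-standard indexing for training

ranker = LogisticRegression().fit(X, y)      # stand-in for learning-to-rank
scores = ranker.predict_proba(X)[:, 1]       # relevance score per candidate
top = [t for _, t in sorted(zip(scores, candidates), reverse=True)][:2]
print(top)                                   # highest-ranked terms kept
```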

  17. Computational methods for corpus annotation and analysis

    Lu, Xiaofei

    2014-01-01

    This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.

  18. Applied time series analysis and innovative computing

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  19. Computational methods in power system analysis

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  20. A computational description of simple mediation analysis

    Caron, Pier-Olivier

    2018-04-01

Simple mediation analysis is an increasingly popular statistical analysis in psychology and in other social sciences. However, there are very few detailed accounts of the computations within the model. Articles more often focus on explaining mediation analysis conceptually rather than mathematically. Thus, the purpose of the current paper is to introduce the computational modelling within simple mediation analysis, accompanied by examples in R. First, mediation analysis is described. Then, the method to simulate data in R (with standardized coefficients) is presented. Finally, the bootstrap method, the Sobel test, and the Baron and Kenny test, all used to evaluate mediation (i.e., the indirect effect), are developed. The R code to implement the computations is offered, as well as a script to carry out a power analysis and a complete example.
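    The paper's worked examples are in R; an equivalent Python sketch of the Sobel test for the indirect effect a*b, with illustrative path estimates, is:

```python
import numpy as np

def sobel(a, se_a, b, se_b):
    """z statistic for the indirect effect a*b in a simple mediation model."""
    se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Illustrative path estimates: X -> M (a) and M -> Y controlling for X (b).
z = sobel(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
print(z)   # compare against +/-1.96 for a 5% two-tailed test
```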

  1. Distributed computing and nuclear reactor analysis

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations

  2. Computer assisted functional analysis

    Schmidt, H A.E.; Roesler, H

    1982-01-01

    The latest developments in computer-assisted functional analysis (CFA) in nuclear medicine are presented in about 250 papers of the 19th international annual meeting of the Society of Nuclear Medicine (Bern, September 1981). Apart from the mathematical and instrumental aspects of CFA, computerized emission tomography is given particular attention. Advances in nuclear medical diagnosis in the fields of radiopharmaceuticals, cardiology, angiology, neurology, ophthalmology, pulmonology, gastroenterology, nephrology, endocrinology, oncology and osteology are discussed.

  3. Grid adaptation using chimera composite overlapping meshes

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.

  4. Grid adaption using Chimera composite overlapping meshes

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  5. Fog water collection effectiveness: Mesh intercomparisons

    Fernandez, Daniel; Torregrosa, Alicia; Weiss-Penzias, Peter; Zhang, Bong June; Sorensen, Deckard; Cohen, Robert; McKinley, Gareth; Kleingartner, Justin; Oliphant, Andrew; Bowman, Matthew

    2018-01-01

To explore fog water harvesting potential in California, we conducted long-term measurements involving three types of mesh using standard fog collectors (SFCs). Volumetric fog water measurements from SFCs and wind data were collected and recorded in 15-minute intervals over three summertime fog seasons (2014–2016) at four California sites. SFCs were deployed with: standard 1.00 m2 double-layer 35% shade coefficient Raschel mesh; stainless steel mesh coated with the MIT-14 hydrophobic formulation; and FogHa-Tin, a German-manufactured, 3-dimensional spacer fabric deployed in two orientations. Analysis of 3419 volumetric samples from all sites showed strong relationships between mesh efficiency and wind speed. Raschel mesh collected 160% more fog water than FogHa-Tin at wind speeds less than 1 m s–1 and 45% less at wind speeds greater than 5 m s–1. MIT-14 coated stainless-steel mesh collected more fog water than Raschel mesh at all wind speeds: at low wind speeds it collected 3% more, and at wind speeds of 4–5 m s–1 it collected 41% more. FogHa-Tin collected 5% more fog water when the warp of the weave was oriented vertically, per manufacturer specification, than when the warp of the weave was oriented horizontally. Time series measurements of the three distinct meshes across similar wind regimes revealed inconsistent lags in fog water collection and inconsistent performance. Since such differences occurred under similar wind-speed regimes, we conclude that other factors play important roles in mesh performance, including in-situ fog event and aerosol dynamics that affect droplet-size spectra and droplet-to-mesh surface interactions.

  6. Automated hexahedral mesh generation from biomedical image data: applications in limb prosthetics.

    Zachariah, S G; Sanders, J E; Turkiyyah, G M

    1996-06-01

    A general method to generate hexahedral meshes for finite element analysis of residual limbs and similar biomedical geometries is presented. The method utilizes skeleton-based subdivision of cross-sectional domains to produce simple subdomains in which structured meshes are easily generated. Application to a below-knee residual limb and external prosthetic socket is described. The residual limb was modeled as consisting of bones, soft tissue, and skin. The prosthetic socket model comprised a socket wall with an inner liner. The geometries of these structures were defined using axial cross-sectional contour data from X-ray computed tomography, optical scanning, and mechanical surface digitization. A tubular surface representation, using B-splines to define the directrix and generator, is shown to be convenient for definition of the structure geometries. Conversion of cross-sectional data to the compact tubular surface representation is direct, and the analytical representation simplifies geometric querying and numerical optimization within the mesh generation algorithms. The element meshes remain geometrically accurate since boundary nodes are constrained to lie on the tubular surfaces. Several element meshes of increasing mesh density were generated for two residual limbs and prosthetic sockets. Convergence testing demonstrated that approximately 19 elements are required along a circumference of the residual limb surface for a simple linear elastic model. A model with the fibula absent compared with the same geometry with the fibula present showed differences suggesting higher distal stresses in the absence of the fibula. Automated hexahedral mesh generation algorithms for sliced data represent an advancement in prosthetic stress analysis since they allow rapid modeling of any given residual limb and optimization of mesh parameters.

  7. DFT computational analysis of piracetam

    Rajesh, P.; Gunasekaran, S.; Seshadri, S.; Gnanasambandan, T.

    2014-11-01

    Density functional theory calculations with B3LYP using the 6-31G(d,p) and 6-31++G(d,p) basis sets have been used to determine ground state molecular geometries. The first order hyperpolarizability (β0) and related properties (β, α0 and Δα) of piracetam are calculated using the B3LYP/6-31G(d,p) method with the finite-field approach. The stability of the molecule has been analyzed using NBO/NLMO analysis. The calculation of the first hyperpolarizability shows that the molecule is an attractive candidate for future applications in non-linear optics. The molecular electrostatic potential (MEP) at a point in the space around a molecule gives an indication of the net electrostatic effect produced at that point by the total charge distribution of the molecule. The calculated HOMO and LUMO energies show that charge transfer occurs within these molecules. Mulliken population analysis of the atomic charges is also carried out. On the basis of the vibrational analysis, the thermodynamic properties of the title compound at different temperatures have been calculated. Finally, the UV-Vis spectra and electronic absorption properties are explained and illustrated from the frontier molecular orbitals.

  8. hp-version discontinuous Galerkin methods on polygonal and polyhedral meshes

    Cangiani, Andrea; Georgoulis, Emmanuil H; Houston, Paul

    2017-01-01

    Over the last few decades discontinuous Galerkin finite element methods (DGFEMs) have witnessed tremendous interest as a computational framework for the numerical solution of partial differential equations. Their success is due to their extreme versatility in the design of the underlying meshes and local basis functions, while retaining key features of both (classical) finite element and finite volume methods. Somewhat surprisingly, DGFEMs on general tessellations consisting of polygonal (in 2D) or polyhedral (in 3D) element shapes have received little attention within the literature, despite the potential computational advantages. This volume introduces the basic principles of hp-version (i.e., locally varying mesh-size and polynomial order) DGFEMs over meshes consisting of polygonal or polyhedral element shapes, presents their error analysis, and includes an extensive collection of numerical experiments. The extreme flexibility provided by the locally variable element-shapes, element-sizes, and elemen...

  9. Automating sensitivity analysis of computer models using computer calculus

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
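
    GRESS itself instruments FORTRAN source, but the underlying "computer calculus" idea can be shown in a few lines. The sketch below (Python rather than FORTRAN; the model function is hypothetical) propagates a derivative through an iterative nonlinear solve using forward-mode dual numbers, which is the kind of derivative information needed to set up sensitivity equations.

      # Forward-mode differentiation with dual numbers: the value and its
      # derivative are propagated together through every operation.
      class Dual:
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def model(p):
          # A stand-in nonlinear, iterative system: x = 1 + 0.1 * p * x**2,
          # solved by fixed-point iteration; derivatives ride along.
          x = Dual(0.0)
          for _ in range(60):
              x = 1.0 + 0.1 * p * x * x
          return x

      p = Dual(2.0, 1.0)        # seed dp/dp = 1
      out = model(p)
      print(out.val, out.der)   # converged solution and sensitivity d(out)/dp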

  10. Three dimensional stress analysis of nozzle-to-shell intersections by the finite element method and an auto-mesh generation program

    Fujihara, Hirohiko; Ueda, Masahiro

    1975-01-01

    In the design of chemical reactors or nuclear pressure vessels it is often important to evaluate the stress distribution in nozzle-to-shell intersections. The finite element method is a powerful tool for stress analysis, but it has the drawback of requiring troublesome work in preparing input data. In particular, the mesh data of oblique nozzles and tangential nozzles, in which stress concentration is very high, are very difficult to prepare. The authors developed a mesh generation program which can be applied to any nozzle-to-shell intersection, and, combining this program with a three-dimensional finite element stress analysis program, they performed the stress analysis of nozzle-to-shell intersections under internal pressure. Consequently, the stresses, strains and deformations of nozzles nonsymmetrical to spherical shells and of nozzles tangential to cylindrical shells were made clear, and it was shown that the curvature of the inner surface of the nozzle corner is a controlling factor in reducing stress concentration. (auth.)

  11. Adaptive and dynamic meshing methods for numerical simulations

    Acikgoz, Nazmiye

    For the numerical simulation of many problems of engineering interest, it is desirable to have an automated mesh adaption tool capable of producing high quality meshes with an affordably low number of mesh points. This is important especially for problems that are characterized by anisotropic features of the solution and require mesh clustering in the direction of high gradients. Another significant issue in meshing emerges in the area of unsteady simulations with moving boundaries or interfaces, where the motion of the boundary has to be accommodated by deforming the computational grid. Similarly, there exist problems where the current mesh needs to be adapted to obtain more accurate solutions because either the high gradient regions are initially predicted inaccurately or they change location throughout the simulation. To solve these problems, we propose three novel procedures. For this purpose, in the first part of this work, we present an optimization procedure for three-dimensional anisotropic tetrahedral grids based on metric-driven h-adaptation. The desired anisotropy in the grid is dictated by a metric that defines the size, shape, and orientation of the grid elements throughout the computational domain. Through the use of topological and geometrical operators, the mesh is iteratively adapted until the final mesh minimizes a given objective function. In this work, the objective function measures the distance between the metric of each simplex and a target metric, which can be either user-defined (a-priori) or the result of a-posteriori error analysis. During the adaptation process, one tries to decrease the metric-based objective function until the final mesh is compliant with the target within a given tolerance. However, in regions such as corners and complex face intersections, the compliance condition was found to be very difficult or sometimes impossible to satisfy. In order to address this issue, we propose an optimization process based on an ad
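
    The metric-driven compliance condition described in this abstract can be made concrete with a small sketch (illustrative Python, not the author's code): an edge is compliant when its length measured in the target metric equals one, and the objective accumulates squared deviations from that unit length.

      import numpy as np

      # An edge is "unit" under metric M when sqrt(e^T M e) = 1; the objective
      # accumulates squared deviations from unit metric length.
      def metric_length(e, M):
          return float(np.sqrt(e @ M @ e))

      def objective(edges, M):
          return sum((metric_length(e, M) - 1.0) ** 2 for e in edges)

      # Anisotropic target: size h1 along x, h2 along y (illustrative values).
      h1, h2 = 0.1, 1.0
      M = np.diag([1.0 / h1**2, 1.0 / h2**2])
      print(metric_length(np.array([h1, 0.0]), M))   # 1.0: a compliant edge
      print(objective([np.array([h1, 0.0]), np.array([0.0, 0.5])], M))  # 0.25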

  12. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good quality, geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Turbo Pascal Computer Code for PIXE Analysis

    Darsono

    2002-01-01

    For optimal utilization of the 150 kV ion accelerator facilities and to master the analysis techniques based on the ion accelerator, research and development of low-energy PIXE technology has been carried out. The R and D for the hardware of the low-energy PIXE installation in P3TM has been underway since 2000. To support the R and D of the PIXE accelerator facilities in step with the R and D of the PIXE hardware, development of PIXE analysis software is also needed. The development of the database part of the PIXE analysis software, written in Turbo Pascal, is reported in this paper. This computer code computes the ionization cross-section, the fluorescence yield, and the stopping power of elements; it also computes the X-ray attenuation coefficients. The computer code is named PIXEDASIS and is part of a larger computer code planned for PIXE analysis that will be constructed in the near future. PIXEDASIS is designed to be communicative with the user: input is taken from the keyboard, and output is shown on the PC monitor and can also be printed. Performance tests of PIXEDASIS show that it operates well and provides data in agreement with data from other literature. (author)

  15. Automating sensitivity analysis of computer models using computer calculus

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.

  16. Computer graphics in reactor safety analysis

    Fiala, C.; Kulak, R.F.

    1989-01-01

    This paper describes a family of three computer graphics codes designed to assist the analyst in three areas: the modelling of complex three-dimensional finite element models of reactor structures; the interpretation of computational results; and the reporting of the results of numerical simulations. The purpose and key features of each code are presented. Graphics output from actual safety analyses is used to illustrate the capabilities of each code. 5 refs., 10 figs.

  17. Cartesian anisotropic mesh adaptation for compressible flow

    Keats, W.A.; Lien, F.-S.

    2004-01-01

    Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This paper discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for compressible flow. This technique, developed for laminar flow by Ham, Lien and Strong, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this paper the method will be applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. The convection scheme used is the Advection Upstream Splitting Method (AUSM+), and the refinement/coarsening criteria are based on work done by Ham et al. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code is presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant. (author)
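
    The direction-split criterion is the heart of the anisotropy. The sketch below (Python with an illustrative threshold; not the code described in the paper) flags cells for refinement in x and y independently, so a discontinuity aligned with one axis triggers refinement only across that axis.

      import numpy as np

      # Flag cells for refinement in x and y independently from undivided
      # differences of the solution (threshold is illustrative).
      def refine_flags(phi, tol=0.1):
          dx = np.abs(np.diff(phi, axis=0))   # variation in x
          dy = np.abs(np.diff(phi, axis=1))   # variation in y
          refine_x = np.zeros(phi.shape, bool)
          refine_y = np.zeros(phi.shape, bool)
          refine_x[:-1, :] |= dx > tol
          refine_x[1:, :] |= dx > tol
          refine_y[:, :-1] |= dy > tol
          refine_y[:, 1:] |= dy > tol
          return refine_x, refine_y

      # A shock-like profile varying only in x flags x-refinement only.
      phi = np.fromfunction(lambda i, j: np.tanh((i - 8.0) / 1.5), (16, 16))
      rx, ry = refine_flags(phi)
      print(rx.sum(), ry.sum())   # nonzero, 0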

  18. Geometrically Consistent Mesh Modification

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  19. Uncertainty analysis in Monte Carlo criticality computations

    Qi Ao

    2011-01-01

    Highlights: • Two types of uncertainty methods for k_eff Monte Carlo computations are examined. • The sampling method has the least restrictions on perturbations but the largest demand on computing resources. • The analytical method is limited to small perturbations of material properties. • Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing the administrative margin of subcriticality makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
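
    The sampling-based method can be illustrated with a toy stand-in for the transport calculation (Python; the one-group k_eff model and the 2% data uncertainty are assumptions for illustration): input data are perturbed repeatedly, and the spread of the resulting k_eff values estimates the propagated uncertainty.

      import numpy as np

      # Propagate a 2% (1-sigma) data uncertainty through repeated perturbed
      # k_eff evaluations; the stand-in model is an infinite-medium ratio,
      # not a transport solver.
      rng = np.random.default_rng(1)

      def keff(nu_sigma_f, sigma_a):
          return nu_sigma_f / sigma_a

      nominal_f, nominal_a, rel_unc = 0.070, 0.071, 0.02
      samples = [keff(nominal_f * (1.0 + rel_unc * rng.standard_normal()),
                      nominal_a * (1.0 + rel_unc * rng.standard_normal()))
                 for _ in range(10000)]
      print(f"k_eff = {np.mean(samples):.5f} +/- {np.std(samples):.5f}")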

  20. ASTEC: Controls analysis for personal computers

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  1. Sub-Pixel Accuracy Crack Width Determination on Concrete Beams in Load Tests by Triangle Mesh Geometry Analysis

    Liebold, F.; Maas, H.-G.

    2018-05-01

    This paper deals with the determination of crack widths of concrete beams during load tests from monocular image sequences. The procedure starts in a reference image of the probe with suitable surface texture under zero load, where a large number of points is defined by an interest operator. Then a triangulated irregular network is established to connect the points. Image sequences are recorded during load tests with the load increasing continuously or stepwise, or at intermittently changing load. The vertices of the triangles are tracked through the consecutive images of the sequence with sub-pixel accuracy by least squares matching. All triangles are then analyzed for changes by principal strain calculation. For each triangle showing significant strain, a crack width is computed by a thorough geometric analysis of the relative movement of the vertices.
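
    The per-triangle analysis reduces to a small linear-algebra computation. The fragment below (illustrative Python with made-up vertex coordinates) recovers the deformation gradient of one tracked triangle and reports its principal Green-Lagrange strains; a triangle showing significant strain would then enter the crack-width computation.

      import numpy as np

      # Recover the 2-D deformation gradient F of one tracked triangle from
      # its reference and current vertex positions, then take principal
      # Green-Lagrange strains from E = (F^T F - I) / 2.
      def principal_strains(ref, cur):
          D0 = np.column_stack((ref[1] - ref[0], ref[2] - ref[0]))
          D1 = np.column_stack((cur[1] - cur[0], cur[2] - cur[0]))
          F = D1 @ np.linalg.inv(D0)
          E = 0.5 * (F.T @ F - np.eye(2))
          return np.linalg.eigvalsh(E)          # ascending principal strains

      ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
      cur = np.array([[0.0, 0.0], [1.002, 0.0], [0.0, 1.0]])  # slight opening
      print(principal_strains(ref, cur))   # a significant strain flags a crack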

  2. Fog water collection effectiveness: Mesh intercomparisons

    Fernandez, Daniel; Torregrosa, Alicia; Weiss-Penzias, Peter; Zhang, Bong June; Sorensen, Deckard; Cohen, Robert; McKinley, Gareth; Kleingartner, Justin; Oliphant, Andrew; Bowman, Matthew

    2018-01-01

    To explore fog water harvesting potential in California, we conducted long-term measurements involving three types of mesh using standard fog collectors (SFCs). Volumetric fog water measurements from SFCs and wind data were collected and recorded in 15-minute intervals over three summertime fog seasons (2014–2016) at four California sites. SFCs were deployed with: standard 1.00 m2 double-layer 35% shade coefficient Raschel; stainless steel mesh coated with the MIT-14 hydrophobic formulation; and FogHa-Tin, a German-manufactured, 3-dimensional spacer fabric deployed in two orientations. Analysis of 3419 volumetric samples from all sites showed strong relationships between mesh efficiency and wind speed. Raschel mesh collected 160% more fog water than FogHa-Tin at wind speeds less than 1 m s–1 and 45% less for wind speeds greater than 5 m s–1. MIT-14 coated stainless-steel mesh collected more fog water than Raschel mesh at all wind speeds: at low wind speeds it collected 3% more, and at wind speeds of 4–5 m s–1 it collected 41% more. FogHa-Tin collected 5% more fog water when the warp of the weave was oriented vertically, per manufacturer specification, than when the warp of the weave was oriented horizontally. Time series measurements of the three distinct meshes across similar wind regimes revealed inconsistent lags in fog water collection and inconsistent performance. Since such differences occurred under similar wind-speed regimes, we conclude that other factors play important roles in mesh performance, including in-situ fog event and aerosol dynamics that affect droplet-size spectra and droplet-to-mesh surface interactions.

  3. Smooth Bézier surfaces over unstructured quadrilateral meshes

    Bercovier, Michel

    2017-01-01

    Using an elegant mixture of geometry, graph theory and linear analysis, this monograph completely solves a problem lying at the interface of Isogeometric Analysis (IgA) and Finite Element Methods (FEM). The recent explosion of IgA, strongly tying Computer Aided Geometric Design to Analysis, does not easily apply to the rich variety of complex shapes that engineers have to design and analyse. Therefore new developments have studied the extension of IgA to unstructured unions of meshes, similar to those one can find in FEM. The following problem arises: given an unstructured planar quadrilateral mesh, construct a C1-surface, by piecewise Bézier or B-Spline patches defined over this mesh. This problem is solved for C1-surfaces defined over plane bilinear Bézier patches, the corresponding results for B-Splines then being simple consequences. The method can be extended to higher-order quadrilaterals and even to three dimensions, and the most recent developments in this direction are also mentioned here.

  4. Temporal fringe pattern analysis with parallel computing

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution times were reduced by a factor of 1.6 when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis.
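
    The single-program multiple-data strategy amounts to splitting the pixel set across workers, each processing the complete time history of its block. The sketch below (Python multiprocessing; the FFT-based phase extraction is a stand-in, not the authors' fringe algorithm) shows the pattern.

      import numpy as np
      from multiprocessing import Pool

      # Each worker receives a disjoint strip of pixels and processes its full
      # intensity-versus-time history; the FFT-based phase extraction is a
      # crude stand-in for the actual temporal fringe analysis.
      def process_block(block):
          # block has shape (n_frames, n_pixels_in_strip)
          return np.angle(np.fft.fft(block, axis=0)[1])

      if __name__ == "__main__":
          frames = np.random.rand(64, 10000)            # synthetic sequence
          blocks = np.array_split(frames, 4, axis=1)    # one strip per process
          with Pool(4) as pool:
              phase = np.concatenate(pool.map(process_block, blocks))
          print(phase.shape)                            # (10000,)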

  5. A computer program for activation analysis

    Rantanen, J.; Rosenberg, R.J.

    1983-01-01

    A computer program for calculating the results of activation analysis is described. The program comprises two gamma spectrum analysis programs, STOAV and SAMPO and one program for calculating elemental concentrations, KVANT. STOAV is based on a simple summation of channels and SAMPO is based on fitting of mathematical functions. The programs are tested by analyzing the IAEA G-1 test spectra. In the determination of peak location SAMPO is somewhat better than STOAV and in the determination of peak area SAMPO is more than twice as accurate as STOAV. On the other hand, SAMPO is three times as expensive as STOAV with the use of a Cyber 170 computer. (author)

  6. Development of a Two-Phase Flow Analysis Code based on a Unstructured-Mesh SIMPLE Algorithm

    Kim, Jong Tae; Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Kim, Kyung Doo; Jeong, Jae Jun

    2008-09-15

    For analyses of multi-phase flows in a water-cooled nuclear power plant, a three-dimensional SIMPLE-algorithm based hydrodynamic solver CUPID-S has been developed. As governing equations, it adopts a two-fluid three-field model for the two-phase flows. The three fields represent a continuous liquid, dispersed droplets, and a vapour field. The governing equations are discretized by a finite volume method on an unstructured grid to handle the geometrical complexity of the nuclear reactors. The phasic momentum equations are coupled and solved with a sparse block Gauss-Seidel matrix solver to increase numerical stability. The pressure correction equation, derived by summing the phasic volume fraction equations, is applied on the unstructured mesh in the context of a cell-centered co-located scheme. This paper presents the numerical method and the preliminary results of the calculations.

  7. Safety analysis of control rod drive computers

    Ehrenberger, W.; Rauch, G.; Schmeil, U.; Maertz, J.; Mainka, E.U.; Nordland, O.; Gloee, G.

    1985-01-01

    The analysis of the most significant user programmes revealed no errors in these programmes. The evaluation of approximately 82 cumulated years of operation demonstrated that the operating system of the control rod positioning processor has a reliability that is sufficiently good for the tasks this computer has to fulfil. Computers can be used for safety-relevant tasks. The experience gained with the control rod positioning processor confirms that computers are not less reliable than conventional instrumentation and control systems for comparable tasks. The examination and evaluation of computers for safety-relevant tasks can be done with programme analysis or statistical evaluation of the operating experience. Programme analysis is recommended for seldom-used and well-structured programmes. For programmes with a long cumulated operating time a statistical evaluation is more advisable. The effort for examination and evaluation is not greater than the corresponding effort for conventional instrumentation and control systems. This project has also revealed that, where it is technologically sensible, process-controlling computers or microprocessors can be qualified for safety-relevant tasks without undue effort. (orig./HP) [de

  8. Surface computing and collaborative analysis work

    Brown, Judith; Gossage, Stevenson; Hack, Chris

    2013-01-01

    Large surface computing devices (wall-mounted or tabletop) with touch interfaces and their application to collaborative data analysis, an increasingly important and prevalent activity, is the primary topic of this book. Our goals are to outline the fundamentals of surface computing (a still maturing technology), review relevant work on collaborative data analysis, describe frameworks for understanding collaborative processes, and provide a better understanding of the opportunities for research and development. We describe surfaces as display technologies with which people can interact directly, and emphasize how interaction design changes when designing for large surfaces. We review efforts to use large displays, surfaces or mixed display environments to enable collaborative analytic activity. Collaborative analysis is important in many domains, but to provide concrete examples and a specific focus, we frequently consider analysis work in the security domain, and in particular the challenges security personne...

  9. Computer-assisted qualitative data analysis software.

    Cope, Diane G

    2014-05-01

    Advances in technology have provided new approaches for data collection methods and analysis for researchers. Data collection is no longer limited to paper-and-pencil format, and numerous methods are now available through Internet and electronic resources. With these techniques, researchers are not burdened with entering data manually and data analysis is facilitated by software programs. Quantitative research is supported by the use of computer software and provides ease in the management of large data sets and rapid analysis of numeric statistical methods. New technologies are emerging to support qualitative research with the availability of computer-assisted qualitative data analysis software (CAQDAS). CAQDAS will be presented with a discussion of advantages, limitations, controversial issues, and recommendations for this type of software use.

  10. Spatial analysis statistics, visualization, and computational methods

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  11. Obtuse triangle suppression in anisotropic meshes

    Sun, Feng; Choi, Yi King; Wang, Wen Ping; Yan, Dongming; Liu, Yang; Lévy, Bruno

    2011-01-01

    Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with much fewer obtuse triangles than using existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.

  12. Obtuse triangle suppression in anisotropic meshes

    Sun, Feng

    2011-12-01

    Anisotropic triangle meshes are used for efficient approximation of surfaces and flow data in finite element analysis, and in these applications it is desirable to have as few obtuse triangles as possible to reduce the discretization error. We present a variational approach to suppressing obtuse triangles in anisotropic meshes. Specifically, we introduce a hexagonal Minkowski metric, which is sensitive to triangle orientation, to give a new formulation of the centroidal Voronoi tessellation (CVT) method. Furthermore, we prove several relevant properties of the CVT method with the newly introduced metric. Experiments show that our algorithm produces anisotropic meshes with much fewer obtuse triangles than using existing methods while maintaining mesh anisotropy. © 2011 Elsevier B.V. All rights reserved.

  13. Computation for the analysis of designed experiments

    Heiberger, Richard

    2015-01-01

    Addresses the statistical, mathematical, and computational aspects of the construction of packages and analysis of variance (ANOVA) programs. Includes a disk at the back of the book that contains all program codes in four languages, APL, BASIC, C, and FORTRAN. Presents illustrations of the dual space geometry for all designs, including confounded designs.

  14. Effects of skull thickness, anisotropy, and inhomogeneity on forward EEG/ERP computations using a spherical three-dimensional resistor mesh model.

    Chauveau, Nicolas; Franceries, Xavier; Doyon, Bernard; Rigaud, Bernard; Morucci, Jean Pierre; Celsis, Pierre

    2004-02-01

    Bone thickness, anisotropy, and inhomogeneity have been reported to induce important variations in electroencephalogram (EEG) scalp potentials. To study this effect, we used an original three-dimensional (3-D) resistor mesh model described in spherical coordinates, consisting of 67,464 elements and 22,105 nodes arranged in 36 different concentric layers. After validation of the model by comparison with the analytic solution, potential variations induced by geometric and electrical skull modifications were investigated at the surface in the dipole plane and along the dipole axis, for several eccentricities and bone thicknesses. The resistor mesh permits one to obtain various configurations, as local modifications are introduced very easily. This has allowed several head models to be designed to study the effects of skull properties (thickness, anisotropy, and heterogeneity) on scalp surface potentials. Results show a decrease of potentials in bone, depending on bone thickness, and a very small decrease through the scalp layer. Nevertheless, similar scalp potentials can be obtained using either a thick scalp layer and a thin skull layer, and vice versa. It is thus important to take into account skull and scalp thicknesses, because the drop of potential in bone depends on both. The use of three different layers for skull instead of one leads to small differences in potential values and patterns. In contrast, the introduction of a hole in the skull highly increases the maximum potential value (by a factor of 11.5 in our case), because of the absence of potential drop in the corresponding volume. The inverse solution without any a priori knowledge indicates that the model with the hole gives the largest errors in both position and dipolar moment. Our results indicate that the resistor mesh model can be used as a robust and user-friendly simulation tool in EEG or event-related potentials. It makes it possible to build up real head models directly from anatomic magnetic

  15. Numerical simulation for quenching meshes with TONUS platform

    Bin, Chen; Hongxing, Yu

    2009-01-01

    For mitigation of hydrogen risks during severe accidents to protect the integrity of the containment, PARs and ignitors are used in current advanced nuclear power plants. But multiple combustions induced by ignitors and the consequent DDT phenomena are not practically eliminated. An innovative design called 'quenching meshes' is considered to confine the hydrogen flame within one compartment by metallic meshes, so that hazardous flame propagation can be prevented. The numerical simulation results, based on discretization of the full Navier-Stokes equations with a global one-step reaction represented by an Arrhenius laminar combustion model, have shown the possibility of flame quenching 'numerically'. This is achieved via multiplication of the combustion rate expression by a Heaviside function having an ignition temperature as a parameter. Qualitative behavior of the computed flow shows that the flame velocity diminishes while passing through a quenching mesh, while qualitative analysis based on the energy balance reveals the mechanism of flame quenching. All of the above analysis has been performed for a stoichiometric mixture at normal initial pressure and temperature. For further research we suggest investigating the influence of the mixture composition, initial pressure and/or temperature on the quenching criteria.

  16. A Reconfigurable Mesh-Ring Topology for Bluetooth Sensor Networks

    Ben-Yi Wang

    2018-05-01

    In this paper, a Reconfigurable Mesh-Ring (RMR) algorithm is proposed for Bluetooth sensor networks. The algorithm is designed in three stages to determine the optimal configuration of the mesh-ring network. Firstly, a designated root advertises and discovers its neighboring nodes. Secondly, a scatternet criterion is built to compute the minimum number of piconets and distribute the connection information for the piconet and scatternet. Finally, a peak-search method is designed to determine the optimal mesh-ring configuration for various sizes of networks. To maximize the network capacity, the research problem is formulated by determining the best connectivity of available mesh links. During the formation and maintenance phases, three possible configurations (piconet, scatternet, and hybrid) are examined to determine the optimal placement of mesh links. The peak-search method is a systematic approach, and is implemented by three functional blocks: the topology formation block generates the mesh-ring topology, the routing efficiency block computes the routing performance, and the optimum decision block introduces a decision-making criterion to determine the optimum number of mesh links. Simulation results demonstrate that the optimal mesh-ring configuration can be determined and that the scatternet case achieves better overall performance than the other two configurations. The RMR topology also outperforms the conventional ring-based and cluster-based mesh methods in terms of throughput performance for Bluetooth configurable networks.

  17. Computation system for nuclear reactor core analysis

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.

    1977-04-01

    This report documents a system which contains computer codes as modules developed to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code, treating up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, as are aspects common to the computational modules; the individual modules are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals.

  18. Computer-aided power systems analysis

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  19. Manual for automatic generation of finite element models of spiral bevel gears in mesh

    Bibel, G. D.; Reddy, S.; Kumar, A.

    1994-01-01

    The goal of this research is to develop computer programs that generate finite element models suitable for doing 3D contact analysis of face-milled spiral bevel gears in mesh. A pinion tooth and a gear tooth are created and put in mesh. There are two programs, Points.f and Pat.f, to perform the analysis. Points.f is based on the equation of meshing for spiral bevel gears. It uses machine tool settings to solve for an N x M mesh of points on the four surfaces: pinion concave and convex, and gear concave and convex. Points.f creates the file POINTS.OUT, an ASCII file containing N x M points for each surface. (N is the number of node points along the length of the tooth, and M is the number along the height.) Pat.f reads POINTS.OUT and creates the file tl.out. Tl.out is a series of PATRAN input commands. In addition to the mesh density on the tooth face, additional user-specified variables are the number of finite elements through the thickness, and the number of finite elements along the tooth full fillet. A full fillet is assumed to exist for both the pinion and the gear.

  20. Deploy production sliding mesh capability with linear solver benchmarking.

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thomas, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Overfelt, James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sprague, Mike [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rood, Jon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-02-01

    Wind applications require the ability to simulate rotating blades. To support this use-case, a novel design-order sliding mesh algorithm has been developed and deployed. The hybrid method combines the control volume finite element methodology (CVFEM) with concepts found within a discontinuous Galerkin (DG) finite element method (FEM) to manage a sliding mesh. The method has been demonstrated to be design-order for the tested polynomial basis (P=1 and P=2) and has been deployed to provide production simulation capability for a Vestas V27 (225 kW) wind turbine. Other stationary and canonical rotating flow simulations are also presented. As the majority of wind-energy applications are driving extensive usage of hybrid meshes, a foundational study that outlines near-wall numerical behavior for a variety of element topologies is presented. Results indicate that the proposed nonlinear stabilization operator (NSO) is an effective stabilization methodology to control Gibbs phenomena at large cell Peclet numbers. The study also provides practical mesh resolution guidelines for future analysis efforts. Application-driven performance and algorithmic improvements have been carried out to increase robustness of the scheme on hybrid production wind energy meshes. Specifically, the Kokkos-based Nalu Kernel construct outlined in the FY17/Q4 ExaWind milestone has been transitioned to the hybrid mesh regime. This code base is exercised within a full V27 production run. Simulation timings for parallel search and custom ghosting are presented. As the low-Mach application space requires implicit matrix solves, the cost of matrix reinitialization has been evaluated on a variety of production meshes. Results indicate that at low element counts, i.e., fewer than 100 million elements, matrix graph initialization and preconditioner setup times are small. However, as mesh sizes increase, e.g., 500 million elements, simulation time associated with "set-up" costs can increase to nearly 50% of

  1. Documentation for MeshKit - Reactor Geometry (&mesh) Generator

    Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-30

    This report gives documentation for using MeshKit’s Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in the modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne’s SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v 2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of the MeshKit source code, tools and other available algorithms is also presented for developers to extend and add new algorithms to MeshKit. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousands of axially varying material properties of instrumentation pins and other interstices meshes.

  2. Parametric Quadrilateral Meshes for the Design and Optimization of Superconducting Magnets

    Aleksa, Martin; Völlinger, Christine

    2002-01-01

    The program package ROXIE has been developed at CERN for the design and optimization of accelerator magnets. The necessity of extremely uniform fields in the superconducting accelerator magnets for LHC requires very accurate methods of field computation. For this purpose the coupled boundary-element / finite-element technique (BEM-FEM) is used. Quadrilateral higher order finite-element meshes are generated for the discretization of the iron domain (yoke) and stainless steel collars. A new mesh generator using geometrically optimized domain decomposition which was developed at the University of Stuttgart, Germany has been implemented into the ROXIE program providing fully automatic and user friendly mesh generation. The structure of the magnet cross-section can be modeled using parametric objects such as holes of different forms, elliptic, parabolic or hyperbolic arcs, notches, slots, .... For sensitivity analysis and parametric studies, point based morphing algorithms are applied to guarantee smooth adaptatio...

  3. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have a hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, a convex polygon, can be extracted at each level in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  4. Adaptive radial basis function mesh deformation using data reduction

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient, for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. This proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU as well as the memory formulation with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited
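
    The core RBF interpolation step, without the greedy data reduction or the boundary correction discussed above, fits in a few lines. The sketch below (illustrative Python; the Wendland C2 basis and support radius are common choices, not necessarily those of the paper) solves for weights at the control points and evaluates the resulting displacement field at interior mesh nodes.

      import numpy as np

      # Interpolate known control-point displacements with a Wendland C2
      # basis, then evaluate the displacement field at interior mesh nodes.
      def wendland_c2(d, r):
          xi = np.clip(d / r, 0.0, 1.0)
          return (1.0 - xi) ** 4 * (4.0 * xi + 1.0)

      def rbf_deform(ctrl_pts, ctrl_disp, nodes, r=2.0):
          d_cc = np.linalg.norm(ctrl_pts[:, None] - ctrl_pts[None, :], axis=-1)
          weights = np.linalg.solve(wendland_c2(d_cc, r), ctrl_disp)
          d_nc = np.linalg.norm(nodes[:, None] - ctrl_pts[None, :], axis=-1)
          return nodes + wendland_c2(d_nc, r) @ weights

      ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # boundary points
      disp = np.array([[0.0, 0.1], [0.0, 0.1], [0.0, 0.0]])   # their motion
      interior = np.array([[0.5, 0.3], [0.5, 0.6]])
      print(rbf_deform(ctrl, disp, interior))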

  5. Local adaptive mesh refinement for shock hydrodynamics

    Berger, M.J.; Colella, P. (Lawrence Livermore Laboratory, Livermore, California 94550)

    1989-01-01

    The aim of this work is the development of an automatic, adaptive mesh refinement strategy for solving hyperbolic conservation laws in two dimensions. There are two main difficulties in doing this. The first problem is due to the presence of discontinuities in the solution and the effect on them of discontinuities in the mesh. The second problem is how to organize the algorithm to minimize memory and CPU overhead. This is an important consideration and will continue to be important as more sophisticated algorithms that use data structures other than arrays are developed for use on vector and parallel computers. copyright 1989 Academic Press, Inc

  6. MUSIC: a mesh-unrestricted simulation code

    Bonalumi, R.A.; Rouben, B.; Dastur, A.R.; Dondale, C.S.; Li, H.Y.H.

    1978-01-01

    A general formalism to solve the G-group neutron diffusion equation is described. The G-group flux is represented by complementing an "asymptotic" mode with (G-1) "transient" modes. A particular reduction-to-one-group technique gives high computational efficiency. MUSIC, a 2-group code using the above formalism, is presented. MUSIC is demonstrated on a fine-mesh calculation and on two coarse-mesh core calculations: a heavy-water reactor (HWR) problem and the 2-D light-water reactor (LWR) IAEA benchmark. Comparison is made to finite-difference results.

  7. PRE-CASKETSS: an input data generation computer program for thermal and structural analysis of nuclear fuel shipping casks

    Ikushima, Takeshi

    1988-12-01

    A computer program PRE-CASKETSS has been developed for the purpose of input data generation for the thermal and structural analysis computer code system CASKETSS (CASKETSS denotes a modular code system for CASK Evaluation code system for Thermal and Structural Safety). The main features of PRE-CASKETSS are as follows: (1) a function for input data generation for thermal and structural analysis computer programs is provided in the program; (2) two- and three-dimensional mesh generation for finite element and finite difference programs is available; (3) the capability to generate material input data is provided; (4) boundary conditions, load conditions and initial conditions can be specified; (5) the program operates under both the time-sharing system and the batch system. In the paper, a brief illustration of the calculation method, input data and sample calculations is presented. (author)

  8. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.
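
    An element-based normal voting tensor can be assembled directly from the normals of the faces incident to a vertex. The sketch below (illustrative Python; the area weighting and the classification rule are assumptions, not the paper's exact formulation) computes the tensor and reads the local feature type off its eigenvalues, the quantity on which a binary optimization can then operate.

      import numpy as np

      # Accumulate area-weighted outer products of the normals of the faces
      # incident to a vertex; the eigenvalue pattern classifies the vertex.
      def voting_tensor(face_normals, face_areas):
          T = np.zeros((3, 3))
          for n, a in zip(face_normals, face_areas):
              T += a * np.outer(n, n)
          return T / np.sum(face_areas)

      normals = np.array([[0, 0, 1], [0, 0, 1], [0, 1, 0]], float)
      areas = np.array([1.0, 1.0, 1.0])
      eig = np.linalg.eigvalsh(voting_tensor(normals, areas))
      # One dominant eigenvalue: flat region; two: sharp edge; three: corner.
      print(eig)   # [0, 1/3, 2/3] here, i.e. an edge-like vertex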

  9. Plasma geometric optics analysis and computation

    Smith, T.M.

    1983-01-01

    Important practical applications in the generation, manipulation, and diagnosis of laboratory thermonuclear plasmas have created a need for elaborate computational capabilities in the study of high frequency wave propagation in plasmas. A reduced description of such waves suitable for digital computation is provided by the theory of plasma geometric optics. The existing theory is beset by a variety of special cases in which the straightforward analytical approach fails, and has been formulated with little attention to problems of numerical implementation of that analysis. The standard field equations are derived for the first time from kinetic theory. A discussion of certain terms previously, and erroneously, omitted from the expansion of the plasma constitutive relation is given. A powerful but little known computational prescription for determining the geometric optics field in the neighborhood of caustic singularities is rigorously developed, and a boundary layer analysis for the asymptotic matching of the plasma geometric optics field across caustic singularities is performed for the first time with considerable generality. A proper treatment of birefringence is detailed, wherein a breakdown of the fundamental perturbation theory is identified and circumvented. A general ray tracing computer code suitable for applications to radiation heating and diagnostic problems is presented and described

  10. ZONE: a finite element mesh generator

    Burger, M.J.

    1976-05-01

    The ZONE computer program is a finite-element mesh generator which produces the nodes and element description of any two-dimensional geometry. The geometry is subdivided into a mesh of quadrilateral and triangular zones arranged sequentially in an ordered march through the geometry. The order of march can be chosen so that the minimum bandwidth is obtained. The node points are defined in terms of the x and y coordinates in a global rectangular coordinate system. The zones generated are quadrilaterals or triangles defined by four node points in a counterclockwise sequence. Node points defining the outside boundary are generated to describe pressure boundary conditions. The mesh that is generated can be used as input to any two-dimensional as well as any axisymmetrical structure program. The output from ZONE is essentially the input file to NAOS, HONDO, and other axisymmetric finite element programs. 14 figures
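
    For a plain rectangular region the ordered march that ZONE performs is easy to picture. The sketch below (illustrative Python, not the original code) produces node coordinates and quadrilateral zones as four node indices in counterclockwise sequence, marching row by row so that the bandwidth stays small.

      import numpy as np

      # Generate node coordinates and quadrilateral zones for an nx-by-ny
      # rectangle; each zone lists four node indices counterclockwise, and
      # the row-by-row march keeps the matrix bandwidth small.
      def rect_mesh(nx, ny, lx=1.0, ly=1.0):
          xs = np.linspace(0.0, lx, nx + 1)
          ys = np.linspace(0.0, ly, ny + 1)
          nodes = [(x, ys[j]) for j in range(ny + 1) for x in xs]
          zones = []
          for j in range(ny):
              for i in range(nx):
                  n0 = j * (nx + 1) + i
                  zones.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))
          return nodes, zones

      nodes, zones = rect_mesh(3, 2)
      print(len(nodes), len(zones))   # 12 nodes, 6 quadrilateral zones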

  11. Analysis of electronic circuits using digital computers

    Tapu, C.

    1968-01-01

    Various programmes have been proposed for studying electronic circuits with the help of computers. It is shown here how it is possible to use the programme ECAP, developed by I.B.M., to study the behaviour of an operational amplifier from different points of view: direct-current, alternating-current and transient-state analysis, optimisation of the open-loop gain, and study of the reliability. (author) [fr
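
    The direct-current part of such an analysis is classical nodal analysis. The sketch below (Python; the circuit values are made up and ECAP's input language is not reproduced) assembles the nodal conductance matrix of a small resistive circuit and solves for the node voltages.

      import numpy as np

      # Nodal analysis of a linear resistive circuit; node 0 is ground.
      # Circuit: 10 V source behind 1 kOhm into node 1, 2 kOhm from node 1
      # to node 2, 1 kOhm from node 2 to ground; the source is converted
      # to its Norton equivalent.
      G = np.array([[1/1e3 + 1/2e3, -1/2e3],
                    [-1/2e3, 1/2e3 + 1/1e3]])   # conductance matrix (S)
      I = np.array([10.0 / 1e3, 0.0])           # Norton current injections (A)
      V = np.linalg.solve(G, I)                 # node voltages (V)
      print(V)   # -> [7.5, 2.5] volts at nodes 1 and 2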

  12. Computational Chemical Synthesis Analysis and Pathway Design

    Fan Feng

    2018-06-01

    With the idea of retrosynthetic analysis, which was raised in the 1960s, chemical synthesis analysis and pathway design have been transformed from a complex problem to a regular process of structural simplification. This review aims to summarize the developments of computer-assisted synthetic analysis and design in recent years, and how machine-learning algorithms contributed to them. The LHASA system started the pioneering work of designing semi-empirical reaction modes in computers, with its subsequent rule-based and network-searching work not only expanding the databases, but also building new approaches to encoding reaction rules. Programs like ARChem Route Designer replaced hand-coded reaction modes with automatically-extracted rules, and programs like Chematica changed traditional designing into network searching. Afterward, with the help of machine learning, two-step models which combine reaction rules and statistical methods became the mainstream. Recently, fully data-driven learning methods using deep neural networks, which do not even require any prior knowledge, were applied to this field. Up to now, however, these methods still cannot replace experienced human organic chemists due to their relatively low accuracies. Future new algorithms with the aid of powerful computational hardware will make this a promising field with good prospects.

  13. Kinetic mesh-free method for flutter prediction in turbomachines

    -based mesh-free method for unsteady flows. ... Council for Scientific and Industrial Research, National Aerospace Laboratories, Computational and Theoretical Fluid Dynamics Division, Bangalore 560 017, India; Engineering Mechanics Unit, ...

  14. CMS Computing Software and Analysis Challenge 2006

    De Filippis, N. [Dipartimento interateneo di Fisica M. Merlin and INFN Bari, Via Amendola 173, 70126 Bari (Italy)

    2007-10-15

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain: prompt reconstruction, data streaming, iterative execution of calibration and alignment, data distribution to regional sites, and end-user analysis. Grid tools provided by the LCG project were also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting production and analysis jobs. An overview of the status and results of the CSA06 is presented in this work.

  15. CMS Computing Software and Analysis Challenge 2006

    De Filippis, N.

    2007-01-01

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain: prompt reconstruction, data streaming, iterative execution of calibration and alignment, data distribution to regional sites, and end-user analysis. Grid tools provided by the LCG project were also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting production and analysis jobs. An overview of the status and results of the CSA06 is presented in this work.

  16. Local mesh refinement for incompressible fluid flow with free surfaces

    Terasaka, H.; Kajiwara, H.; Ogura, K. [Tokyo Electric Power Company (Japan)] [and others]

    1995-09-01

    A new local mesh refinement (LMR) technique has been developed and applied to incompressible fluid flows with free surface boundaries. The LMR method embeds patches of fine grid in arbitrary regions of interest. Hence, more accurate solutions can be obtained with a lower number of computational cells. This method is very suitable for the simulation of free surface movements because free surface flow problems generally require a finer computational grid to obtain adequate results. By using this technique, one can place finer grids only near the surfaces, and therefore greatly reduce the total number of cells and computational costs. This paper introduces LMR3D, a three-dimensional incompressible flow analysis code. Numerical examples calculated with the code demonstrate well the advantages of the LMR method.

  17. Introduction to scientific computing and data analysis

    Holmes, Mark H

    2016-01-01

    This textbook provides an introduction to numerical computing and its applications in science and engineering. The topics covered include those usually found in an introductory course, as well as those that arise in data analysis. This includes optimization and regression-based methods using a singular value decomposition. The emphasis is on problem solving, and there are numerous exercises throughout the text concerning applications in engineering and science. The essential role of the mathematical theory underlying the methods is also considered, both for understanding how the method works and for understanding how the error in the computation depends on the method being used. The MATLAB codes used to produce most of the figures and data tables in the text are available on the author's website and SpringerLink.
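
    As a flavor of the regression-via-SVD material mentioned above, the following generic numpy sketch (not the book's MATLAB code) fits a line by forming the pseudoinverse solution x = V Σ⁻¹ Uᵀ y from the SVD of the design matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(50)  # noisy line

A = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
U, s, Vt = np.linalg.svd(A, full_matrices=False)
coef = Vt.T @ ((U.T @ y) / s)                      # pseudoinverse solution
print(coef)                                        # approximately [2, 3]
```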

  18. Aerodynamic analysis of Pegasus - Computations vs reality

    Mendenhall, Michael R.; Lesieutre, Daniel J.; Whittaker, C. H.; Curry, Robert E.; Moulton, Bryan

    1993-01-01

    Pegasus, a three-stage, air-launched, winged space booster was developed to provide fast and efficient commercial launch services for small satellites. The aerodynamic design and analysis of Pegasus was conducted without benefit of wind tunnel tests using only computational aerodynamic and fluid dynamic methods. Flight test data from the first two operational flights of Pegasus are now available, and they provide an opportunity to validate the accuracy of the predicted pre-flight aerodynamic characteristics. Comparisons of measured and predicted flight characteristics are presented and discussed. Results show that the computational methods provide reasonable aerodynamic design information with acceptable margins. Post-flight analyses illustrate certain areas in which improvements are desired.

  19. Reactor physics verification of the MCNP6 unstructured mesh capability

    Burke, T. P.; Kiedrowski, B. C.; Martz, R. L.; Martin, W. R.

    2013-01-01

    The Monte Carlo software package MCNP6 has the ability to transport particles on unstructured meshes generated from the Computer-Aided Engineering software Abaqus. Verification is performed using benchmarks with features relevant to reactor physics - Big Ten and the C5G7 computational benchmark. Various meshing strategies are tested and results are compared to reference solutions. Computational performance results are also given. The conclusions show MCNP6 is capable of producing accurate calculations for reactor physics geometries and the computational requirements for small lattice benchmarks are reasonable on modern computing platforms. (authors)

  20. Reactor physics verification of the MCNP6 unstructured mesh capability

    Burke, T. P. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Kiedrowski, B. C.; Martz, R. L. [X-Computational Physics Division, Monte Carlo Codes Group, Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Martin, W. R. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States)

    2013-07-01

    The Monte Carlo software package MCNP6 has the ability to transport particles on unstructured meshes generated from the Computer-Aided Engineering software Abaqus. Verification is performed using benchmarks with features relevant to reactor physics - Big Ten and the C5G7 computational benchmark. Various meshing strategies are tested and results are compared to reference solutions. Computational performance results are also given. The conclusions show MCNP6 is capable of producing accurate calculations for reactor physics geometries and the computational requirements for small lattice benchmarks are reasonable on modern computing platforms. (authors)

  1. SUPERIMPOSED MESH PLOTTING IN MCNP

    J. HENDRICKS

    2001-02-01

    The capability to plot superimposed meshes has been added to MCNP™. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.
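
    The idea of a weight window on a superimposed rectangular mesh is easy to sketch. The illustration below (generic Monte Carlo bookkeeping, not MCNP's implementation) looks up the mesh cell containing a particle and plays Russian roulette when its weight falls below the cell's lower bound; splitting above the window is omitted for brevity, and all bounds are made-up values.

```python
import numpy as np

edges_x = np.linspace(0.0, 10.0, 11)  # superimposed mesh planes (cm)
edges_y = np.linspace(0.0, 10.0, 11)
ww_lower = np.full((10, 10), 0.5)     # lower weight bound per mesh cell

def window_for(x, y):
    # Locate the cell (points assumed to lie inside the mesh)
    i = np.searchsorted(edges_x, x, side="right") - 1
    j = np.searchsorted(edges_y, y, side="right") - 1
    return ww_lower[i, j]

def apply_window(weight, x, y, rng):
    low = window_for(x, y)
    if weight < low:                   # Russian roulette below the window
        return low if rng.random() < weight / low else 0.0
    return weight

rng = np.random.default_rng(0)
print(apply_window(0.1, 2.5, 7.5, rng))  # survives at weight 0.5, or is killed
```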

  2. Computed image analysis of neutron radiographs

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, but using neutrons as the penetrating particles, there is in practice a nondestructive technique named neutron radiology. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) and creates a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must subsequently be analyzed to obtain qualitative and quantitative information about the structural integrity of that object. It is possible to perform a computed analysis of a film using a facility with the following main components: an illuminator for the film, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by operation (for example, irradiation in the case of nuclear fuel). The quantitative determination is based on measurements of some image parameters: dimensions and optical densities. The illuminator was built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is an Abbe comparator of Carl Zeiss Jena type, which has been adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, has been developed at INR Pitesti which, alongside the SMTV II program of the SM 5010 acquisition module, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of some nuclear fuel pins beside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  3. Adaptive mesh refinement in titanium

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study, where we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. In Chombo, a library approach to parallel programming is used (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.

  4. Social sciences via network analysis and computation

    Kanduc, Tadej

    2015-01-01

    In recent years information and communication technologies have gained significant importance in the social sciences. Because there is such rapid growth of knowledge, methods and computer infrastructure, research can now seamlessly connect interdisciplinary fields such as business process management, data processing and mathematics. This study presents some of the latest results, practices and state-of-the-art approaches in network analysis, machine learning, data mining, data clustering and classifications in the contents of social sciences. It also covers various real-life examples such as t

  5. Computer network environment planning and analysis

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  6. Symbolic Computing in Probabilistic and Stochastic Analysis

    Kamiński Marcin

    2015-12-01

    Full Text Available The main aim is to present recent developments in applications of symbolic computing in probabilistic and stochastic analysis, and this is done using the example of the well-known MAPLE system. The key theoretical methods discussed are (i) analytical derivations, (ii) the classical Monte-Carlo simulation approach, (iii) the stochastic perturbation technique, as well as (iv) some semi-analytical approaches. It is demonstrated in particular how to engage the basic symbolic tools implemented in any system to derive the basic equations for the stochastic perturbation technique and how to make an efficient implementation of the semi-analytical methods using the automatic differentiation and integration provided by the computer algebra program itself. The second important illustration is the probabilistic extension of the finite element and finite difference methods coded in MAPLE, showing how to solve boundary value problems with random parameters in the environment of symbolic computing. The response function method belongs to the third group, where interference of classical deterministic software with the non-linear fitting numerical techniques available in various symbolic environments is displayed. We recover in this context the probabilistic structural response in engineering systems and show how to solve partial differential equations including Gaussian randomness in their coefficients.

  7. Wireless mesh networks.

    Wang, Xinheng

    2008-01-01

    Wireless telemedicine using GSM and GPRS technologies can only provide low bandwidth connections, which makes it difficult to transmit images and video. Satellite or 3G wireless transmission provides greater bandwidth, but the running costs are high. Wireless local area networks (WLANs) appear promising, since they can supply high bandwidth at low cost. However, WLAN technology has limitations, such as coverage. A new wireless networking technology named the wireless mesh network (WMN) overcomes some of the limitations of the WLAN. A WMN combines the characteristics of both a WLAN and ad hoc networks, thus forming an intelligent, large-scale and broadband wireless network. These features are attractive for telemedicine and telecare because of the ability to provide data, voice and video communications over a large area. One successful wireless telemedicine project which uses wireless mesh technology is the Emergency Room Link (ER-LINK) in Tucson, Arizona, USA. There are three key characteristics of a WMN: self-organization, including self-management and self-healing; dynamic changes in network topology; and scalability. What we may now see is a shift from mobile communication and satellite systems for wireless telemedicine to the use of wireless networks based on mesh technology, since the latter are very attractive in terms of cost, reliability and speed.

  8. Analysis of a Model for Computer Virus Transmission

    Peng Qin

    2015-01-01

    Full Text Available Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. The computer virus model is established. Through the analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
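
    Such epidemic-style virus models are straightforward to simulate numerically. The sketch below integrates a generic SIR-type compartment model with recruitment and removal of computers; the paper's exact equations and parameter values may differ, and the numbers here are purely illustrative.

```python
b, mu = 2.0, 0.01          # recruitment and removal rates of computers
beta, gamma = 0.0005, 0.1  # infection rate and cure rate (antivirus)

def step(S, I, R, dt=0.1):
    dS = b - beta * S * I - mu * S          # susceptible computers
    dI = beta * S * I - gamma * I - mu * I  # infected computers
    dR = gamma * I - mu * R                 # recovered (protected) computers
    return S + dS * dt, I + dI * dt, R + dR * dt

S, I, R = 200.0, 1.0, 0.0
for _ in range(5000):
    S, I, R = step(S, I, R)
print(S, I, R)  # the system settles toward an equilibrium of the model
```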

  9. Computational methods for nuclear criticality safety analysis

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, and that the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)

  10. Mathematics and computational methods development in U.S. department of energy-sponsored research (nuclear energy research initiative and nuclear engineering education research). 4. Development of an Expert System for Generation of an Effective Mesh Distribution for the SN Method

    Patchimpattapong, Apisit; Haghighat, Alireza

    2001-01-01

    The discrete ordinates (SN) method is widely used to obtain numerical solutions of the transport equation. The method calls for discretization of the spatial, energy, and angular variables. To generate an 'effective' spatial mesh distribution, one has to consider various factors including particle mean free path (mfp), material and source discontinuities, and problem objectives. This becomes more complicated if we consider the effect of numerics such as differencing schemes, parallel processing strategies, and computation resources. As a result, one may often over/under-mesh depending upon limitations on accuracy, computing resources, and time allotted. To overcome the foregoing issues, we are developing an expert system for input preparation of the discrete ordinates (SN) method. This project is part of an ongoing project sponsored by Nuclear Engineering Education Research. Our expert system consists of two parts: (a) an algorithm for generation of a mesh distribution for a serial calculation and (b) an algorithm for extension to parallel computing, which accounts for parallelization parameters including granularity, load balancing, parallel algorithms, and possible architectural issues. Thus far, we have developed a stand-alone algorithm for generation of an 'effective' mesh distribution for a serial calculation. The algorithm has been successfully tested with the Parallel Environment Neutral-Particle Transport (PENTRAN) code system. In this paper, we discuss the structure of our algorithm and present its use for simulating the VENUS-3 experimental facility. To date, we have developed and tested part 1 of this system. This part comprises four steps: creation of a geometric model and coarse meshes, calculation of uncollided flux, selection of differencing schemes, and generation of the fine-mesh distribution. For the uncollided flux calculation, we have developed a parallel code called PENFC. It is capable of calculating uncollided and first-collision fluxes.
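
    The core heuristic such a system automates can be illustrated with a toy rule: choose the number of fine meshes in each material region from its width in mean free paths, clamped to user limits. The 'few meshes per mfp' factor below is an illustrative rule of thumb, not the expert system's actual logic.

```python
import math

def meshes_per_region(width_cm, sigma_t, per_mfp=3, lo=2, hi=50):
    mfp = 1.0 / sigma_t                      # mean free path, cm
    n = math.ceil(per_mfp * width_cm / mfp)  # ~per_mfp meshes per mfp of width
    return max(lo, min(hi, n))               # clamp to user limits

regions = [(10.0, 0.3), (2.0, 1.5), (30.0, 0.05)]  # (width cm, Sigma_t 1/cm)
print([meshes_per_region(w, s) for w, s in regions])  # [9, 9, 5]
```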

  11. Influence of mesh non-orthogonality on numerical simulation of buoyant jet flows

    Ishigaki, Masahiro; Abe, Satoshi; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2017-01-01

    Highlights: • Influence of mesh non-orthogonality on numerical solution of buoyant jet flows. • Buoyant jet flows are simulated with hexahedral and prismatic meshes. • Jet instability with prismatic meshes may be overestimated compared to that with hexahedral meshes. • Modified solvers that can reduce the influence of mesh non-orthogonality and reduce computation time are proposed. - Abstract: In the present research, we discuss the influence of mesh non-orthogonality on numerical solution of a type of buoyant flow. Buoyant jet flows are simulated numerically with hexahedral and prismatic mesh elements in an open source Computational Fluid Dynamics (CFD) code called “OpenFOAM”. Buoyant jet instability obtained with the prismatic meshes may be overestimated compared to that obtained with the hexahedral meshes when non-orthogonal correction is not applied in the code. Although the non-orthogonal correction method can improve the instability generated by mesh non-orthogonality, it may increase computation time required to reach a convergent solution. Thus, we propose modified solvers that can reduce the influence of mesh non-orthogonality and reduce the computation time compared to the existing solvers in OpenFOAM. It is demonstrated that calculations for a buoyant jet with a large temperature difference are performed faster by the modified solver.
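
    The non-orthogonality referred to above is commonly measured as the angle between a face normal and the line joining the adjacent cell centres. The generic check below illustrates the measure; it is similar in spirit to what mesh-quality tools report, not OpenFOAM's actual source.

```python
import numpy as np

def non_orthogonality_deg(owner_centre, neighbour_centre, face_normal):
    d = np.asarray(neighbour_centre, float) - np.asarray(owner_centre, float)
    n = np.asarray(face_normal, float)
    cosang = abs(np.dot(d, n)) / (np.linalg.norm(d) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# An orthogonal hexahedral face pair gives 0 degrees:
print(non_orthogonality_deg([0, 0, 0], [1, 0, 0], [1, 0, 0]))    # 0.0
# A skewed (e.g. prismatic) configuration gives a larger angle:
print(non_orthogonality_deg([0, 0, 0], [1, 0.5, 0], [1, 0, 0]))  # ~26.6
```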

  12. Influence of mesh non-orthogonality on numerical simulation of buoyant jet flows

    Ishigaki, Masahiro, E-mail: ishigaki.masahiro@jaea.go.jp; Abe, Satoshi; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2017-04-01

    Highlights: • Influence of mesh non-orthogonality on numerical solution of buoyant jet flows. • Buoyant jet flows are simulated with hexahedral and prismatic meshes. • Jet instability with prismatic meshes may be overestimated compared to that with hexahedral meshes. • Modified solvers that can reduce the influence of mesh non-orthogonality and reduce computation time are proposed. - Abstract: In the present research, we discuss the influence of mesh non-orthogonality on numerical solution of a type of buoyant flow. Buoyant jet flows are simulated numerically with hexahedral and prismatic mesh elements in an open source Computational Fluid Dynamics (CFD) code called “OpenFOAM”. Buoyant jet instability obtained with the prismatic meshes may be overestimated compared to that obtained with the hexahedral meshes when non-orthogonal correction is not applied in the code. Although the non-orthogonal correction method can improve the instability generated by mesh non-orthogonality, it may increase computation time required to reach a convergent solution. Thus, we propose modified solvers that can reduce the influence of mesh non-orthogonality and reduce the computation time compared to the existing solvers in OpenFOAM. It is demonstrated that calculations for a buoyant jet with a large temperature difference are performed faster by the modified solver.

  13. Computational advances in transition phase analysis

    Morita, K.; Kondo, S.; Tobita, Y.; Shirakawa, N.; Brear, D.J.; Fischer, E.A.

    1994-01-01

    In this paper, historical perspective and recent advances are reviewed for computational technologies used to evaluate the transition phase of core disruptive accidents in liquid-metal fast reactors. An analysis of the transition phase requires treatment of multi-phase, multi-component thermohydraulics coupled with space- and energy-dependent neutron kinetics. Such a comprehensive modeling effort was initiated when the SIMMER-series computer code development program began in the late 1970s in the USA. Successful application of the latest SIMMER-II in the USA, western Europe and Japan has proved its effectiveness, but, at the same time, several areas that require further research have been identified. Based on the experience and lessons learned during the SIMMER-II applications through the 1980s, a new project, the development of SIMMER-III, is underway at the Power Reactor and Nuclear Fuel Development Corporation (PNC), Japan. The models and methods of SIMMER-III are briefly described with emphasis on recent advances in multi-phase, multi-component fluid dynamics technologies and their expected implications for a future reliable transition phase analysis. (author)

  14. A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRs

    Clifford, I; Ivanov, K; Avramova, M.

    2011-01-01

    Coarse mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi-scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi-scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where the void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional, time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine mesh single-phase flows as well as coarse mesh flows in homogenized regions containing both fluid and solid materials. (author)

  15. An automated approach for solution based mesh adaptation to enhance numerical accuracy for a given number of grid cells

    Lucas, P.; Van Zuijlen, A.H.; Bijl, H.

    2009-01-01

    Mesh adaptation is a fairly established tool to obtain numerically accurate solutions for flow problems. Computational efficiency is, however, not always guaranteed for the adaptation strategies found in literature. Typically excessive mesh growth diminishes the potential efficiency gain. This

  16. Capacity analysis of wireless mesh networks

    The limited available bandwidth makes capacity analysis of the network essential. ... Wireless mesh networks can also be employed for a wide variety of applications such as ... wireless mesh networks using OPNET (Optimized Network Engineering Tool) Modeller. ... bps using an 11 Mbps data rate and 12000 bits.

  17. Open preperitoneal groin hernia repair with mesh

    Andresen, Kristoffer; Rosenberg, Jacob

    2017-01-01

    Background: For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. Data sources: A systematic review was conducted and reported according to the PRISMA statement. PubMed, Cochrane library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days follow up following repair of an inguinal hernia with an open preperitoneal mesh ... meta-analysis. Open preperitoneal techniques with placement of a mesh through an open approach seem promising compared with the standard anterior techniques. This systematic review provides an overview of these techniques together with a description of surgical methods and clinical outcomes.

  18. Open preperitoneal groin hernia repair with mesh

    Andresen, Kristoffer; Rosenberg, Jacob

    2017-01-01

    BACKGROUND: For the repair of inguinal hernias, several surgical methods have been presented where the purpose is to place a mesh in the preperitoneal plane through an open access. The aim of this systematic review was to describe preperitoneal repairs with emphasis on the technique. DATA SOURCES: A systematic review was conducted and reported according to the PRISMA statement. PubMed, Cochrane library and Embase were searched systematically. Studies were included if they provided clinical data with more than 30 days follow up following repair of an inguinal hernia with an open preperitoneal mesh ... meta-analysis. Open preperitoneal techniques with placement of a mesh through an open approach seem promising compared with the standard anterior techniques. This systematic review provides an overview of these techniques together with a description of surgical methods and clinical outcomes.

  19. Comparative analysis of early adverse events of pelvic organ prolapse repair with or without transvaginal mesh using Clavien-Dindo classification.

    Besser, Limor; Schwarzman, Polina; Mastrolia, Salvatore A; Rotem, Reut; Leron, Elad; Yohay, David; Weintraub, Adi Y

    2018-04-10

    To assess adverse events following surgical repair of pelvic organ prolapse (POP) with or without the use of transvaginal mesh. The present retrospective study was conducted among women who underwent surgical POP repair at Soroka University Medical Center, Beer Sheva, Israel, between January 1, 2013, and December 31, 2015. Patients underwent anterior and posterior colporrhaphy either with transvaginal mesh (Elevate Prolapse Repair System; American Medical Systems, Minnetonka, MN, USA) or without transvaginal mesh (native tissue repair). Perioperative adverse events were assessed using the Clavien-Dindo classification; multivariate regression models were constructed to predict minor and major adverse events. There were 111 women included; 35 were treated with transvaginal mesh, and 76 underwent native tissue repair. Women undergoing native tissue repair had a lower mean grade of cystocele (P=0.023) and a higher rate of urinary stress incontinence (P=0.017) than patients treated with transvaginal mesh. The duration of surgery (P=0.002) and the duration of hospitalization (P ...) differed between the groups; use of transvaginal mesh was not associated with increased odds of major or minor adverse events (P>0.05 for all models examined). Perioperative and postoperative adverse events were comparable regardless of the operative approach. © 2018 International Federation of Gynecology and Obstetrics.

  20. Computational Analysis of Human Blood Flow

    Panta, Yogendra; Marie, Hazel; Harvey, Mark

    2009-11-01

    Fluid flow modeling with commercially available computational fluid dynamics (CFD) software is widely used to visualize and predict physical phenomena related to various biological systems. In this presentation, a typical human aorta model was analyzed assuming the blood flow to be laminar with compliant cardiac muscle wall boundaries. FLUENT, a commercially available finite volume software, coupled with Solidworks, a modeling software, was employed for the preprocessing, simulation and postprocessing of all the models. The analysis mainly consists of a fluid-dynamics analysis, including a calculation of the velocity field and pressure distribution in the blood, and a mechanical analysis of the deformation of the tissue and artery in terms of wall shear stress. A number of other models, e.g. T-branches and angled shapes, were previously analyzed and their results compared for consistency under similar boundary conditions. The velocities, pressures and wall shear stress distributions achieved in all models were as expected given the similar boundary conditions. A three-dimensional, time-dependent analysis of blood flow accounting for the effect of body forces with a compliant boundary was also performed.
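
    A quick analytical cross-check often used alongside such CFD runs is the Poiseuille estimate of wall shear stress, τ = 4μQ/(πR³). The values below are illustrative (the aorta is neither rigid nor in steady flow), but they give the order of magnitude against which a simulated wall shear stress can be sanity-checked.

```python
import math

mu = 3.5e-3        # blood dynamic viscosity, Pa*s (typical literature value)
Q = 5.0e-3 / 60.0  # ~5 L/min cardiac output, m^3/s
R = 0.0125         # illustrative aortic radius, m

tau = 4.0 * mu * Q / (math.pi * R**3)       # Poiseuille wall shear stress
print(f"wall shear stress ~ {tau:.2f} Pa")  # ~0.19 Pa for these values
```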

  1. High-performance computing in accelerating structure design and analysis

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which places stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields).

  2. Mesh erosion after abdominal sacrocolpopexy.

    Kohli, N; Walsh, P M; Roat, T W; Karram, M M

    1998-12-01

    To report our experience with erosion of permanent suture or mesh material after abdominal sacrocolpopexy. A retrospective chart review was performed to identify patients who underwent sacrocolpopexy by the same surgeon over 8 years. Demographic data, operative notes, hospital records, and office charts were reviewed after sacrocolpopexy. Patients with erosion of either suture or mesh were treated initially with conservative therapy followed by surgical intervention as required. Fifty-seven patients underwent sacrocolpopexy using synthetic mesh during the study period. The mean (range) postoperative follow-up was 19.9 (1.3-50) months. Seven patients (12%) had erosions after abdominal sacrocolpopexy with two suture erosions and five mesh erosions. Patients with suture erosion were asymptomatic compared with patients with mesh erosion, who presented with vaginal bleeding or discharge. The mean (+/-standard deviation) time to erosion was 14.0+/-7.7 (range 4-24) months. Both patients with suture erosion were treated conservatively with estrogen cream. All five patients with mesh erosion required transvaginal removal of the mesh. Mesh erosion can follow abdominal sacrocolpopexy over a long time, and usually presents as vaginal bleeding or discharge. Although patients with suture erosion can be managed successfully with conservative treatment, patients with mesh erosion require surgical intervention. Transvaginal removal of the mesh with vaginal advancement appears to be an effective treatment in patients failing conservative management.

  3. Partitioning of unstructured meshes for load balancing

    Martin, O.C.; Otto, S.W.

    1994-01-01

    Many large-scale engineering and scientific calculations involve repeated updating of variables on an unstructured mesh. To do these types of computations on distributed memory parallel computers, it is necessary to partition the mesh among the processors so that the load balance is maximized and inter-processor communication time is minimized. This can be approximated by the problem of partitioning a graph so as to obtain a minimum cut, a well-studied combinatorial optimization problem. Graph partitioning algorithms are discussed that give good but not necessarily optimum solutions. These algorithms include local search methods, recursive spectral bisection, and more general purpose methods such as simulated annealing. It is shown that a general procedure enables simulated annealing to be combined with Kernighan-Lin. The resulting algorithm is both very fast and extremely effective. (authors) 23 refs., 3 figs., 1 tab
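
    Recursive spectral bisection, one of the methods discussed, splits the mesh graph by the sign of the Fiedler vector, the eigenvector of the second-smallest eigenvalue of the graph Laplacian. A generic miniature sketch of one bisection step:

```python
import numpy as np

def spectral_bisect(adj):
    """Partition a graph given a symmetric 0/1 adjacency matrix."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0                  # boolean partition labels

# Two triangles joined by a single edge split cleanly into the two triangles:
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(spectral_bisect(A))  # e.g. [ True  True  True False False False ]
```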

  4. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks in micro- to tens of nano-meter resolution and is quickly turning into a new technology for studying petrophysical properties of materials. An important step is the upscaling of these properties as micron or sub-micron resolution can only be done on the sample-scale of millimeters or even less than a millimeter. We present here a recently developed computational workflow for the analysis of microstructures including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at micro to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples, biological and food science materials. We have also applied the technique on high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big data problem of analyzing petrophysical properties and its subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at pore-scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at pore-scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big data tool for multi-scale scientific and engineering problems.

  5. Computer-aided Fault Tree Analysis

    Willie, R.R.

    1978-08-01

    A computer-oriented methodology for deriving the minimal cut and path set families associated with arbitrary fault trees is discussed first. Then the use of the Fault Tree Analysis Program (FTAP), an extensive FORTRAN computer package that implements the methodology, is described. An input fault tree to FTAP may specify the system state as any logical function of subsystem or component state variables or complements of these variables. When fault tree logical relations involve complements of state variables, the analyst may instruct FTAP to produce a family of prime implicants, a generalization of the minimal cut set concept. FTAP can also identify certain subsystems associated with the tree as system modules and provide a collection of minimal cut set families that essentially expresses the state of the system as a function of these module state variables. Another FTAP feature allows a subfamily to be obtained when the family of minimal cut sets or prime implicants is too large to be found in its entirety; this subfamily consists only of sets that are interesting to the analyst in a special sense.
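
    The classic top-down (MOCUS-style) expansion underlying such tools fits in a few lines: OR gates union their children's cut-set families, AND gates take the cross-product, and non-minimal sets are absorbed at the end. A toy sketch follows; FTAP itself is a large FORTRAN package with prime-implicant and modularization features not shown here.

```python
GATES = {
    "TOP": ("OR", ["G1", "C"]),
    "G1":  ("AND", ["A", "B"]),
}

def cut_sets(event):
    if event not in GATES:                     # basic event
        return [{event}]
    kind, children = GATES[event]
    if kind == "OR":                           # union of children's families
        return [cs for ch in children for cs in cut_sets(ch)]
    result = [set()]                           # AND: cross-product of families
    for ch in children:
        result = [r | cs for r in result for cs in cut_sets(ch)]
    return result

def minimal(families):
    # Absorption: drop any set that strictly contains another
    return [s for s in families if not any(t < s for t in families)]

print(minimal(cut_sets("TOP")))  # [{'A', 'B'}, {'C'}] (set order may vary)
```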

  6. Computational System For Rapid CFD Analysis In Engineering

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  7. Discrete Surface Evolution and Mesh Deformation for Aircraft Icing Applications

    Thompson, David; Tong, Xiaoling; Arnoldus, Qiuhan; Collins, Eric; McLaurin, David; Luke, Edward; Bidwell, Colin S.

    2013-01-01

    Robust, automated mesh generation for problems with deforming geometries, such as ice accreting on aerodynamic surfaces, remains a challenging problem. Here we describe a technique to deform a discrete surface as it evolves due to the accretion of ice. The surface evolution algorithm is based on a smoothed, face-offsetting approach. We also describe a fast algebraic technique to propagate the computed surface deformations into the surrounding volume mesh while maintaining geometric mesh quality. Preliminary results presented here demonstrate the efficacy of the approach for a sphere with a prescribed accretion rate, a rime ice accretion, and a more complex glaze ice accretion.

  8. New Geometry of Worm Face Gear Drives with Conical and Cylindrical Worms: Generation, Simulation of Meshing, and Stress Analysis

    Litvin, Faydor L.; Nava, Alessandro; Fan, Qi; Fuentes, Alfonso

    2002-01-01

    New geometry of face worm gear drives with conical and cylindrical worms is proposed. The generation of the face worm-gear is based on application of a tilted head-cutter (grinding tool) instead of application of a hob applied at present. The generation of a conjugated worm is based on application of a tilted head-cutter (grinding tool) as well. The bearing contact of the gear drive is localized and is oriented longitudinally. A predesigned parabolic function of transmission errors for reduction of noise and vibration is provided. The stress analysis of the gear drive is performed using a three-dimensional finite element analysis. The contacting model is automatically generated. The developed theory is illustrated with numerical examples.

  9. Surface meshing with curvature convergence

    Li, Huibin; Zeng, Wei; Morvan, Jean-Marie; Chen, Liming; Gu, Xianfeng David

    2014-01-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.

  10. Surface meshing with curvature convergence

    Li, Huibin

    2014-06-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.

  11. Computer analysis of sodium cold trap design and performance

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in the liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium, and they operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na2O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions.
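
    The core mass-balance idea in such a model is simple to sketch: sodium cools as it passes through the trap, and any impurity above the local saturation solubility precipitates. The solubility law below has the standard log S = A - B/T form, but the coefficients are placeholders, not MASCOT's correlations.

```python
import numpy as np

A_H, B_H = 6.0, 3000.0  # hypothetical solubility coefficients for hydrogen

def sat_ppm(T_kelvin):
    return 10.0 ** (A_H - B_H / T_kelvin)  # saturation solubility, ppm

def precipitate_along_trap(c_in_ppm, T_profile):
    c, deposits = c_in_ppm, []
    for T in T_profile:
        excess = max(0.0, c - sat_ppm(T))  # supersaturation at this section
        deposits.append(excess)            # deposited locally (ppm basis)
        c -= excess
    return c, deposits

T = np.linspace(450.0, 390.0, 6)  # cooling from inlet toward the cold point (K)
print(precipitate_along_trap(0.6, T))  # outlet concentration, deposit profile
```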

  12. A Unified 3D Mesh Segmentation Framework Based on Markov Random Field

    Z.F. Shi; L.Y. Lu; D. Le; X.M. Niu

    2012-01-01

    3D mesh segmentation has become an important research field in computer graphics during the past decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented. In this paper, we present a definition of mesh segmentation as a labeling problem. Inspired by Markov Random Field (MRF) based image segmentation, we propose a new framework for 3D mesh segmentation based on MRFs and use graph cuts to solve it. Any features of a 3D mesh can be integra...
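
    In MRF terms the segmentation energy combines a per-face data cost with a smoothness cost over neighbouring faces. The toy below minimizes such an energy with ICM, a simple stand-in for the graph-cut optimization the paper uses; the unary costs and face adjacency are made up.

```python
import numpy as np

def icm(unary, neighbours, lam=0.5, iters=10):
    """Minimize sum of unary costs + lam * Potts penalty between neighbours."""
    labels = unary.argmin(axis=1)
    for _ in range(iters):
        for f in range(len(labels)):
            costs = unary[f].copy()
            for g in neighbours[f]:  # Potts term: penalize differing labels
                costs += lam * (labels[g] != np.arange(unary.shape[1]))
            labels[f] = costs.argmin()
    return labels

unary = np.array([[0.1, 0.9], [0.4, 0.6], [0.8, 0.2], [0.7, 0.3]])
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a strip of four faces
print(icm(unary, neighbours))  # [0 0 1 1]
```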

  13. Analysis on the security of cloud computing

    He, Zhonglin; He, Yuhua

    2011-02-01

    Cloud computing is a new technology arising from the fusion of computer technology and Internet development, and it will lead a revolution in IT and the information field. However, in cloud computing, data and application software are stored at large data centers, and the management of data and services is not completely trustworthy, resulting in security problems that are the key obstacle to improving the quality of cloud services. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs a security architecture for cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the perspectives of cloud computing users and service providers.

  14. Incremental ALARA cost/benefit computer analysis

    Hamby, P.

    1987-01-01

    Commonwealth Edison Company has developed and is testing an enhanced Fortran computer program to be used for cost/benefit analysis of radiation reduction projects at its six nuclear power facilities and corporate technical support groups. This paper describes a macro-driven IBM mainframe program comprising two different types of analyses: an abbreviated program with fixed costs and base values, and an extended engineering version for a detailed, more thorough and time-consuming approach. The extended engineering version breaks radiation exposure costs down into two components: health-related costs and replacement labor costs. According to user input, the program automatically adjusts these two cost components and applies the derivation to company economic analyses such as replacement power costs, carrying charges, debt interest, and capital investment cost. The results from one or more program runs using different parameters may be compared in order to determine the most appropriate ALARA dose reduction technique. Benefits of this particular cost/benefit analysis technique include flexibility to accommodate a wide range of user data and pre-job preparation, as well as the use of proven and standardized company economic equations.

  15. Use of mesh in laparoscopic paraesophageal hernia repair

    Müller-Stich, Beat P.; Kenngott, Hannes G.; Gondan, Matthias

    2015-01-01

    Introduction. Mesh augmentation seems to reduce recurrences following laparoscopic paraesophageal hernia repair (LPHR). However, there is an uncertain risk of mesh-associated complications. Risk-benefit analysis might solve the dilemma. Materials and Methods. A systematic literature search ... potential benefits of LMAH. All data regarding LMAH were used to estimate the risk of mesh-associated complications. Risk-benefit analysis was performed using a Markov Monte Carlo decision-analytic model. Results. Meta-analysis of 3 RCTs and 9 OCSs including 915 patients revealed a significantly lower ...

  16. Computing in Qualitative Analysis: A Healthy Development?

    Richards, Lyn; Richards, Tom

    1991-01-01

    Discusses the potential impact of computers in qualitative health research. Describes the original goals, design, and implementation of NUDIST, a qualitative computing software. Argues for evaluation of the impact of computer techniques and for an opening of debate among program developers and users to address the purposes and power of computing…

  17. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
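
    The modelling idea is easy to prototype with an off-the-shelf discrete event simulation library. The sketch below is a minimal illustration using the simpy library, not the authors' framework: Poisson arrivals of service requests contend for a small pool of virtualized servers, and all rates and the capacity are made-up parameters.

```python
import random
import simpy

def request(env, name, servers, service_mean):
    arrive = env.now
    with servers.request() as slot:
        yield slot                                  # wait for a free server
        yield env.timeout(random.expovariate(1.0 / service_mean))
    print(f"{name}: finished after {env.now - arrive:.2f} (wait + service)")

def generator(env, servers):
    for i in range(20):
        env.process(request(env, f"req{i}", servers, service_mean=3.0))
        yield env.timeout(random.expovariate(1.0))  # Poisson arrivals

env = simpy.Environment()
servers = simpy.Resource(env, capacity=2)           # cloud capacity constraint
env.process(generator(env, servers))
env.run()
```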

  18. Can cloud computing benefit health services? - a SWOT analysis.

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  19. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad

    2016-01-01

    In this paper two different computational approaches, a deterministic and a stochastic one, were used for the calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
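
    The underlying effect is generic to deterministic solvers: on too coarse a spatial mesh the discretized solution is systematically off, and it converges as the mesh is refined. The sketch below shows this for a 1D fixed-source diffusion problem; it is a generic illustration, unrelated to the actual CITVAP model of the reactor.

```python
import numpy as np

def solve_diffusion(n, L=50.0, D=1.0, siga=0.02, S=1.0):
    """1D diffusion -D*u'' + siga*u = S on (0, L), with u = 0 at both ends."""
    h = L / (n + 1)
    main = np.full(n, 2.0 * D / h**2 + siga)
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    flux = np.linalg.solve(A, np.full(n, S))
    return flux[n // 2]  # midplane flux

for n in (5, 20, 80, 320):
    print(n, solve_diffusion(n))  # the answer converges as the mesh is refined
```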

  20. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.

    2016-12-15

    In this paper two different computational approaches, a deterministic and a stochastic one, were used for the calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.

  1. Use of computer codes for system reliability analysis

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, Ispra on the use of computer codes for complex system analysis. The computer codes dealt with are: the CAFTS-SALP software package, FRANTIC, FTAP, the RALLY computer code package, and the BOUNDS codes. Two reference study cases were executed by each code. The results obtained from the logic/probabilistic analysis, as well as the computation times, are compared.

  2. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  3. BOT3P: a mesh generation software package for the transport analysis codes Dort, Tort, Twodant, Threedant and MCNP

    Orsi, R.

    2003-01-01

    BOT3P consists of a set of standard Fortran 77 programs that give users of the deterministic transport codes Dort and Tort some useful diagnostic tools to prepare and check the geometry of their input data files for both Cartesian and cylindrical geometries, including graphical display modules. BOT3P produces at the same time the geometrical and material distribution data for the deterministic transport codes Twodant and Threedant and, only in three-dimensional (3D) Cartesian geometry, for the Monte Carlo transport code MCNP. This makes it possible to compare directly, for the same geometry, the effects stemming from the use of different data libraries and solution approaches on transport analysis results. Through the use of BOT3P, radiation transport problems with complex 3D geometrical structures can be modelled easily, as a relatively small amount of engineer-time is required and refinement is achieved by changing a few parameters. This tool is useful for solving very large, challenging problems. (author)

  4. Computational modelling of fibre-reinforced cementitious composites : An analysis of discrete and mesh-independent techniques

    Radtke, F.K.F.

    2012-01-01

    Failure patterns and mechanical behaviour of high performance fibre-reinforced cementitious composites depend to a large extent on the distribution of fibres within a specimen. A discrete treatment of fibres enables us to study the influence of various fibre distributions on the mechanical behaviour.

  5. Research in applied mathematics, numerical analysis, and computer science

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  6. Computational intelligence for big data analysis frontier advances and applications

    Dehuri, Satchidananda; Sanyal, Sugata

    2015-01-01

    The work presented in this book is a combination of theoretical advancements in big data analysis, cloud computing, and their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and applications to handling real-life problems, mostly drawn from real situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques, together with some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material in this book includes concepts, figures, graphs, and tables to guide researchers in the area of big data analysis and cloud computing.

  7. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schofield, Samuel P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
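
    As a rough illustration of the weighting idea (not the paper's discontinuous Galerkin construction), a weight field that concentrates resolution near the zero level set might look like the following; the Gaussian form and its parameters are assumptions:

    ```python
    import numpy as np

    # Minimal sketch (parameters assumed, not from the paper) of a weight
    # function for weighted condition number relaxation: weights are large
    # where |phi| is small and decay to 1 away from the interface phi = 0.

    def interface_weight(phi, amplitude=10.0, width=0.05):
        """Per-node weight that clusters cells near the interface."""
        return 1.0 + (amplitude - 1.0) * np.exp(-(phi / width) ** 2)

    # Example: signed distance to a circle of radius 0.3 on a unit square grid.
    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    phi = np.hypot(x - 0.5, y - 0.5) - 0.3
    w = interface_weight(phi)
    ```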

  8. Computational systems analysis of dopamine metabolism.

    Zhen Qi

    2008-06-01

    A prominent feature of Parkinson's disease (PD) is the loss of dopamine in the striatum, and many therapeutic interventions for the disease are aimed at restoring dopamine signaling. Dopamine signaling includes the synthesis, storage, release, and recycling of dopamine in the presynaptic terminal and activation of pre- and post-synaptic receptors and various downstream signaling cascades. As an aid that might facilitate our understanding of dopamine dynamics in the pathogenesis and treatment of PD, we have begun to merge currently available information and expert knowledge regarding presynaptic dopamine homeostasis into a computational model, following the guidelines of biochemical systems theory. After subjecting our model to mathematical diagnosis and analysis, we made direct comparisons between model predictions and experimental observations and found that the model exhibited a high degree of predictive capacity with respect to genetic and pharmacological changes in gene expression or function. Our results suggest potential approaches to restoring the dopamine imbalance and the associated generation of oxidative stress. While the proposed model of dopamine metabolism is preliminary, future extensions and refinements may eventually serve as an in silico platform for prescreening potential therapeutics, identifying immediate side effects, screening for biomarkers, and assessing the impact of risk factors of the disease.

  9. Computational Analysis of Pharmacokinetic Behavior of Ampicillin

    Mária Ďurišová

    2016-07-01

    The objective of this study was to perform a computational analysis of the pharmacokinetic behavior of ampicillin, using data from the literature. A method based on the theory of dynamic systems was used for modeling purposes. This method has been introduced to pharmacokinetics with the aim of contributing to the knowledge base of the field by enabling researchers to develop mathematical models of various pharmacokinetic processes in an identical way, using identical model structures. A few examples of successful use of the modeling method in pharmacokinetics can be found in full-text articles available free of charge at the website of the author, and in the example given in this study. The modeling method employed here can be used to develop a mathematical model of the pharmacokinetic behavior of any drug, under the condition that the pharmacokinetic behavior of the drug under study can be at least partially approximated using linear models.

  10. New computing systems, future computing environment, and their implications on structural analysis and design

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  11. Leveraging the power of mesh

    Glass, H. [Cellnet, Alpharetta, GA (United States)

    2006-07-01

    Mesh network applications are used by utilities for metering, demand response, and mobile workforce management. This presentation provided an overview of a multi-dimensional mesh application designed to offer improved scalability and higher throughput in advanced metering infrastructure (AMI) systems. Mesh applications can be used in AMI for load balancing and forecasting, as well as for distribution and transmission planning. New revenue opportunities can be realized through the application's ability to improve notification and monitoring services, and customer service communications. Mesh network security features include data encryption, data fragmentation and the automatic re-routing of data. In order to use mesh network applications, networks must have sufficient bandwidth and provide flexibility at the endpoint layer to support multiple devices from multiple vendors, as well as support multiple protocols. It was concluded that smart meters will not enable energy response solutions without an underlying AMI that is reliable, scalable and self-healing. refs., tabs., figs.

  12. Codesign Analysis of a Computer Graphics Application

    Madsen, Jan; Brage, Jens P.

    1996-01-01

    This paper describes a codesign case study where a computer graphics application is examined with the intention to speed up its execution. The application is specified as a C program, and is characterized by the lack of a simple compute-intensive kernel. The hardware/software partitioning is based...

  13. Architectural analysis for wirelessly powered computing platforms

    Kapoor, A.; Pineda de Gyvez, J.

    2013-01-01

    We present a design framework for wirelessly powered generic computing platforms that takes into account various system parameters in response to a time-varying energy source. These parameters are the charging profile of the energy source, computing speed (fclk), digital supply voltage (VDD), energy

  14. Cell-centered particle weighting algorithm for PIC simulations in a non-uniform 2D axisymmetric mesh

    Araki, Samuel J.; Wirz, Richard E.

    2014-09-01

    Standard area weighting methods for particle-in-cell simulations result in systematic errors in particle densities on a non-uniform mesh in cylindrical coordinates. These errors can be significantly reduced by using weighted cell volumes for density calculations. A detailed description of the corrected volume calculations and the cell-centered weighting algorithm in a non-uniform mesh is provided. The simple formulas for the corrected volume can be used for any type of quadrilateral and/or triangular mesh in cylindrical coordinates. Density errors arising from the cell-centered weighting algorithm are computed for uniform, linearly decreasing, and Bessel-function radial density profiles in an adaptive Cartesian mesh and an unstructured mesh. For all the density profiles, it is shown that the weighting algorithm provides a significant improvement for density calculations. However, relatively large density errors may persist at the outermost cells for monotonically decreasing density profiles. A further analysis has been performed to investigate the effect of the density errors on potential calculations, and it is shown that the error at the outermost cell does not propagate into the potential solution for the density profiles investigated.
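
    The flavor of the volume correction can be sketched as follows for a purely radial non-uniform mesh; these are the standard axisymmetric ring volumes, not the paper's corrected expressions for general quadrilateral/triangular cells:

    ```python
    import numpy as np

    # Sketch of volume weighting on an axisymmetric (r, z) mesh: the density
    # at a cell is the deposited particle weight divided by the volume of the
    # ring the cell sweeps out. On a non-uniform mesh each volume must come
    # from the cell's actual radial bounds rather than a uniform-cell formula.

    def ring_volumes(r_edges, dz):
        """Volumes of axisymmetric cells bounded by successive radial edges."""
        return np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2) * dz

    def density(deposited_weight, r_edges, dz):
        """Cell number density from deposited particle weight."""
        return deposited_weight / ring_volumes(r_edges, dz)

    # Non-uniform radial mesh: fine near the axis, coarse outside (assumed).
    r_edges = np.concatenate([np.linspace(0.0, 0.01, 11),
                              np.linspace(0.012, 0.05, 20)])
    ```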

  15. Computational Intelligence in Intelligent Data Analysis

    Nürnberger, Andreas

    2013-01-01

    Complex systems and their phenomena are ubiquitous as they can be found in biology, finance, the humanities, management sciences, medicine, physics and similar fields. For many problems in these fields, there are no conventional ways to mathematically or analytically solve them completely at low cost. On the other hand, nature has already solved many optimization problems efficiently. Computational intelligence attempts to mimic nature-inspired problem-solving strategies and methods. These strategies can be used to study, model and analyze complex systems such that it becomes feasible to handle them. Key areas of computational intelligence are artificial neural networks, evolutionary computation and fuzzy systems. As one of only a few researchers in the field, Rudolf Kruse has contributed in many important ways to the understanding, modeling and application of computational intelligence methods. On occasion of his 60th birthday, a collection of original papers of leading researchers in the field of computational intell...

  16. Computer vision syndrome (CVS) - Thermographic Analysis

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has grown exponentially in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their great acceptance by users. The consequences and impact of uninterrupted work with computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great strain, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of these are blurred vision, visual fatigue and Dry Eye Syndrome (DES), due to inadequate lubrication of the ocular surface as blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.

  17. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
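
    The mesh-construction step the abstract describes maps naturally onto scikit-image's marching cubes routine; the file name, iso-level, and voxel spacing below are placeholders:

    ```python
    import numpy as np
    from skimage import measure

    # Sketch of the first step described above: extracting a triangle mesh
    # from a segmented confocal image stack with marching cubes. The stack,
    # threshold, and voxel size are hypothetical; random-walk simulations
    # would then be run inside the resulting verts/faces mesh.

    stack = np.load("confocal_stack.npy")          # hypothetical 3D volume
    verts, faces, normals, values = measure.marching_cubes(
        stack, level=0.5, spacing=(0.5, 0.2, 0.2)  # microns per voxel (assumed)
    )
    print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
    ```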

  18. Analysis of Computer Network Information Based on "Big Data"

    Li, Tianli

    2017-11-01

    With the development of the current era, computer networks and big data have gradually become part of people's lives. People use computers to make their lives more convenient, but at the same time many network information security problems demand attention. This paper analyzes the information security of computer networks based on "big data" analysis and puts forward some solutions.

  19. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-01-01

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binned Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalue. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically.
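
    A simplified stand-in for this fitting step is sketched below: it fits an AR(2) model to the binned source series by Yule-Walker and takes the dominant characteristic root as the dominance-ratio estimate. The paper fits a full ARMA(p, p-1) model, so this only conveys the flavor of the computation:

    ```python
    import numpy as np

    # Simplified AR(2) stand-in for the ARMA fitting described above: solve
    # the Yule-Walker equations from sample autocovariances and estimate the
    # dominance ratio as the magnitude of the dominant characteristic root.

    def dominance_ratio_ar2(source_series):
        x = np.asarray(source_series, dtype=float)
        x = x - x.mean()
        # Autocovariances at lags 0, 1, 2.
        c = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)]
        # Yule-Walker equations for the AR coefficients phi_1, phi_2.
        R = np.array([[c[0], c[1]], [c[1], c[0]]])
        phi = np.linalg.solve(R, np.array([c[1], c[2]]))
        # Roots of z^2 - phi_1 z - phi_2; the largest magnitude estimates DR.
        roots = np.roots([1.0, -phi[0], -phi[1]])
        return max(abs(roots))
    ```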

  20. An Analysis of 27 Years of Research into Computer Education Published in Australian Educational Computing

    Zagami, Jason

    2015-01-01

    Analysis of three decades of publications in Australian Educational Computing (AEC) provides insight into the historical trends in Australian educational computing, highlighting an emphasis on pedagogy, comparatively few articles on educational technologies, and strong research topic alignment with similar international journals. Analysis confirms…

  1. Form-finding with polyhedral meshes made simple

    Tang, Chengcheng; Sun, Xiang; Gomes, Maria Alexandra; Wallner, Johannes; Pottmann, Helmut

    2014-01-01

    We solve the form-finding problem for polyhedral meshes in a way which combines form, function and fabrication; taking care of user-specified constraints like boundary interpolation, planarity of faces, statics, panel size and shape, enclosed volume, and last, but not least, cost. Our main application is the interactive modeling of meshes for architectural and industrial design. Our approach can be described as guided exploration of the constraint space whose algebraic structure is simplified by introducing auxiliary variables and ensuring that constraints are at most quadratic. Computationally, we perform a projection onto the constraint space which is biased towards low values of an energy which expresses desirable "soft" properties like fairness. We have created a tool which elegantly handles difficult tasks, such as taking boundary-alignment of polyhedral meshes into account, planarization, fairing under planarity side conditions, handling hybrid meshes, and extending the treatment of static equilibrium to shapes which possess overhanging parts.
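
    The guided-exploration step can be caricatured as a Gauss-Newton projection onto the constraint set, biased by a soft fairness energy; the constraint function C, its Jacobian J, and the fairness operator L below are generic placeholders rather than the paper's actual formulation:

    ```python
    import numpy as np

    # Generic sketch of "guided projection": step toward the constraint set
    # C(x) = 0 while biasing the step with a soft fairness energy ||L x||^2.
    # Minimizing ||C(x) + J dx||^2 + w ||L (x + dx)||^2 over dx gives the
    # normal equations solved below.

    def guided_projection_step(x, C, J, L, soft_weight=1e-3):
        r = C(x)                       # constraint residuals, shape (m,)
        Jx = J(x)                      # Jacobian dC/dx, shape (m, n)
        A = Jx.T @ Jx + soft_weight * (L.T @ L)
        b = -Jx.T @ r - soft_weight * (L.T @ (L @ x))
        return x + np.linalg.solve(A, b)
    ```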

  2. Ordering schemes for parallel processing of certain mesh problems

    O'Leary, D.

    1984-01-01

    In this work, some ordering schemes for mesh points are presented which enable algorithms such as the Gauss-Seidel or SOR iteration to be performed efficiently for the nine-point operator finite difference method on computers consisting of a two-dimensional grid of processors. Convergence results are presented for the discretization of u_xx + u_yy on a uniform mesh over a square, showing that the spectral radius of the iteration for these orderings is no worse than that for the standard row by row ordering of mesh points. Further applications of these mesh point orderings to network problems, more general finite difference operators, and picture processing problems are noted.
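
    The classic two-colour (red-black) instance of such an ordering is sketched below for the five-point Laplacian; the nine-point operator treated in the paper needs more colours, so this is only the simplest illustration of why reordering exposes parallelism:

    ```python
    import numpy as np

    # Red-black ordering for Gauss-Seidel on the 5-point Laplacian: "red"
    # points (i + j even) depend only on "black" neighbours and vice versa,
    # so all points of one colour can be updated simultaneously on a
    # processor grid.

    def red_black_gauss_seidel(u, f, h, sweeps=100):
        for _ in range(sweeps):
            for parity in (0, 1):                    # red sweep, then black
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 == parity:
                            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                              + u[i, j - 1] + u[i, j + 1]
                                              - h * h * f[i, j])
        return u
    ```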

  3. Form-finding with polyhedral meshes made simple

    Tang, Chengcheng

    2014-07-27

    We solve the form-finding problem for polyhedral meshes in a way which combines form, function and fabrication; taking care of user-specified constraints like boundary interpolation, planarity of faces, statics, panel size and shape, enclosed volume, and last, but not least, cost. Our main application is the interactive modeling of meshes for architectural and industrial design. Our approach can be described as guided exploration of the constraint space whose algebraic structure is simplified by introducing auxiliary variables and ensuring that constraints are at most quadratic. Computationally, we perform a projection onto the constraint space which is biased towards low values of an energy which expresses desirable "soft" properties like fairness. We have created a tool which elegantly handles difficult tasks, such as taking boundary-alignment of polyhedral meshes into account, planarization, fairing under planarity side conditions, handling hybrid meshes, and extending the treatment of static equilibrium to shapes which possess overhanging parts.

  4. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Ningbo Li

    2017-01-01

    In order to improve the optimization accuracy and convergence rate for trajectory optimization of an air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method makes the mesh endpoints converge to the practical nonsmooth points and decreases the overall number of collocation points to improve the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover from midcourse to terminal guidance, and then the optimization model was built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  5. Coarse-mesh rebalancing acceleration for eigenvalue problems

    Asaoka, T.; Nakahara, Y.; Miyasaka, S.

    1974-01-01

    The coarse-mesh rebalance method is adopted in Monte Carlo schemes to accelerate the convergence of the source iteration process. At every completion of the Monte Carlo game for one batch of neutron histories, a scaling factor for the neutron flux is calculated to achieve neutron balance in each coarse-mesh zone into which the total system is divided. This rebalance factor is applied to the weight of each fission source neutron in the corresponding coarse-mesh zone when playing the next Monte Carlo game. Numerical examples have shown that the coarse-mesh rebalance Monte Carlo calculation gives a good estimate of the eigenvalue already after several batches, with negligible extra computer time compared to the standard Monte Carlo method. 5 references. (U.S.)
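
    Ignoring inter-zone leakage (which the full method accounts for by solving a small coupled balance system over the coarse zones), the per-zone rebalance step can be caricatured as:

    ```python
    # Heavily simplified sketch of the rebalance step described above: after
    # each batch, each zone's fission-source weights are scaled so that the
    # zone's tallied neutron production balances its tallied losses. The real
    # method couples the zones through leakage terms, which are omitted here.

    def rebalance_factors(production, loss):
        """Per-zone scaling factors applied to fission-source weights."""
        return [p / l if l > 0.0 else 1.0 for p, l in zip(production, loss)]

    factors = rebalance_factors(production=[1.02, 0.95, 1.10],
                                loss=[1.00, 1.00, 1.00])
    ```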

  6. Interoperable mesh and geometry tools for advanced petascale simulations

    Diachin, L; Bauer, A; Fix, B; Kraftcheck, J; Jansen, K; Luo, X; Miller, M; Ollivier-Gooch, C; Shephard, M S; Tautges, T; Trease, H

    2007-01-01

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. The Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) will deliver interoperable and interchangeable mesh, geometry, and field manipulation services that are of direct use to SciDAC applications. The premise of our technology development goal is to provide such services as libraries that can be used with minimal intrusion into application codes. To develop these technologies, we focus on defining a common data model and data-structure neutral interfaces that unify a number of different services such as mesh generation and improvement, front tracking, adaptive mesh refinement, shape optimization, and solution transfer operations. We highlight the use of several ITAPS services in SciDAC applications

  7. Use of moving heat conductor mesh to perform reflood calculations with RELAP4/MOD6

    Fischer, S.R.; Ellis, L.V.; Chen, Y.S.

    1979-01-01

    RELAP4 is a computer code which can be used for the transient thermal hydraulic analysis of light water reactors and related systems. RELAP4/MOD6 includes many new analytical models which were developed primarily for the analysis of the reflood phase of a PWR loss-of-coolant accident (LOCA) transient. The key feature forming the basis for the MOD6 reflood calculation is a unique moving finite differenced heat conductor. The development and application of the moving heat conductor mesh for use in reflood analysis are described

  8. Two-dimensional isostatic meshes in the finite element method

    Martínez Marín, Rubén; Samartín, Avelino

    2002-01-01

    In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, type and shape of the elements, number of nodes per element, node positions, FE mesh, and total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error) and the number of nodes and dof's...

  9. An Approach for Patient-Specific Multi-domain Vascular Mesh Generation Featuring Spatially Varying Wall Thickness Modeling

    Raut, Samarth S.; Liu, Peng; Finol, Ender A.

    2015-01-01

    In this work, we present a computationally efficient image-derived volume mesh generation approach for vasculatures that implements spatially varying patient-specific wall thickness with a novel inward extrusion of the wall surface mesh. Multi-domain vascular meshes with arbitrary numbers, locations, and patterns of both iliac bifurcations and thrombi can be obtained without the need to specify features or landmark points as input. In addition, the mesh output is coordinate-frame independent ...

  10. Streaming Compression of Hexahedral Meshes

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  11. In-vitro examination of the biocompatibility of fibroblast cell lines on alloplastic meshes and sterilized polyester mosquito mesh.

    Wiessner, R; Kleber, T; Ekwelle, N; Ludwig, K; Richter, D-U

    2017-06-01

    The use of alloplastic implants for tissue strengthening when treating hernias is an established therapy worldwide. Despite the high incidence of hernias in Africa and Asia, the implantation of costly mesh netting is not financially feasible there. Because of this, various investigative groups have examined the use of sterilized mosquito netting. Animal experiments as well as clinical trials have shown equivalent short- and long-term results. The goal of this paper is the comparison of the biocompatibility of human fibroblasts on established commercially available meshes and on sterilized polyester mosquito mesh over a period of 12 weeks. Three commercially available plastic mesh types and a gas-sterilized mosquito polyethylene terephthalate (polyester) mesh were examined. Human fibroblasts from healthy subcutaneous tissue were used. Various tests evaluating the growth behavior and cell morphology of human fibroblasts were conducted. The semi-quantitative (light microscopy) and qualitative (scanning electron microscopy) analyses were performed after 1 week and again after 12 weeks. The cell proliferation and cytotoxicity of the implants were investigated with the 5'-bromo-2'-deoxyuridine (BrdU) cell proliferation test and the LDH cytotoxicity test. The number of live cells per ml was determined with the Bürker counting chamber. In addition, the cell metabolism (oxidative stress) was analyzed by measuring the pH value, hydrogen peroxide, and glycolysis. After 12 weeks, proliferation of fibroblasts was documented on all meshes; no mesh showed complete apoptosis of the cells. This qualitative observation was confirmed quantitatively in a biochemical assay by marking the proliferating cells with BrdU. The biochemical analysis proved that the materials used, including the polyester of the mosquito mesh, are not cytotoxic for the fibroblasts. The vitality of the cells was between 94 and 98%.

  12. Computer science: Data analysis meets quantum physics

    Schramm, Steven

    2017-10-01

    A technique that combines machine learning and quantum computing has been used to identify the particles known as Higgs bosons. The method could find applications in many areas of science. See Letter p.375

  13. Analysis On Security Of Cloud Computing

    Muhammad Zunnurain Hussain

    2017-01-01

    In this paper the author discusses the security issues and challenges faced by the industry in securing cloud computing and how these problems can be tackled. Cloud computing is a modern technique for sharing resources, such as data and files, without launching one's own infrastructure, instead using third-party resources to avoid huge investment. Securing the communication between two users remains very challenging these days, even though people use different encryption techniques.

  14. Schottky signal analysis: tune and chromaticity computation

    Chanon, Ondine

    2016-01-01

    Schottky monitors are used to determine important beam parameters in a non-destructive way. The Schottky signal is due to the internal statistical fluctuations of the particles inside the beam. In this report, after explaining the different components of a Schottky signal, an algorithm to compute the betatron tune is presented, followed by some ideas to compute machine chromaticity. The tests have been performed with offline and/or online LHC data.

  15. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    Usage of the computer code MLCOSP (Multiple Correlation and Spectrum), developed for a hybrid computer installed in JAERI, is described. Functions of the hybrid computer and its terminal devices are utilized ingeniously in the code to reduce the complexity of the data handling which occurs in the analysis of multivariable experimental data, and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed in figures, and hardcopies are taken when necessary; messages from the code are shown on the terminal, so man-machine communication is possible; and, further, data can be entered through a keyboard, so case studies based on the results of the analysis are possible. (auth.)
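
    A modern equivalent of the basic correlation/spectrum operation such a code performs, sketched with SciPy rather than MLCOSP itself:

    ```python
    import numpy as np
    from scipy import signal

    # Estimating cross-spectral density and coherence between two recorded
    # channels via Welch's method; the signals and sampling rate are invented
    # placeholders for multivariable experimental data.

    fs = 100.0                                   # sampling rate, Hz (assumed)
    t = np.arange(0, 60, 1 / fs)
    x = np.sin(2 * np.pi * 5 * t) + np.random.randn(t.size)
    y = np.sin(2 * np.pi * 5 * t + 0.7) + np.random.randn(t.size)

    f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)        # cross-spectrum
    f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)  # coherence in [0, 1]
    print(f"peak coherence {Cxy.max():.2f} at {f[np.argmax(Cxy)]:.1f} Hz")
    ```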

  16. Mesh Adaptation and Shape Optimization on Unstructured Meshes, Phase I

    National Aeronautics and Space Administration — In this SBIR CRM proposes to implement the entropy adjoint method for solution adaptive mesh refinement into the Loci/CHEM unstructured flow solver. The scheme will...

  17. Research on Multiple-Split Load Sharing Characteristics of 2-Stage External Meshing Star Gear System in Consideration of Displacement Compatibility

    Shuai Mo

    2017-01-01

    This paper studies the multiple-split load sharing mechanism of gears in the two-stage external meshing planetary transmission system of an aeroengine. Based on the eccentricity error, gear tooth thickness error, pitch error, installation error, and bearing manufacturing error, we performed meshing error analyses in terms of equivalent angles, and we also considered the floating meshing error caused by the variation of the meshing backlash, which arises from the simultaneous floating of all gears. Finally, we obtained the comprehensive angular meshing error of the two-stage meshing line, established a refined mathematical computational model of the 2-stage external 3-split load sharing coefficient in consideration of displacement compatibility, obtained the curves of the load sharing coefficient and the load sharing characteristics of the fully floating multiple-split, multiple-stage system, and determined the variation law of the floating track and the floating quantity of the center wheel. These results provide a scientific basis for determining the load sharing coefficient, reasonable load distribution, and control tolerances in aviation design and manufacturing.

  18. Autoclaved Sand-Lime Products with a Polypropylene Mesh

    Kostrzewa, Paulina; Stępień, Anna

    2017-10-01

    The paper presents the results of the research on modifications of silicate bricks with a polypropylene mesh and their influence on physical, mechanical and microstructural properties of such bricks. The main goal of the paper was to determine effects of the polypropylene mesh on sand-lime product parameters. The analysis has focused on compressive strength, water absorption, bulk density and structural features of the material. The obtained product is characterized by improved basic performance characteristics compared to traditional silicate products. Using the polypropylene mesh increased compressive strength by 25% while decreasing the product density. The modified products retain their form and do not disintegrate after losing their bearing capacity.

  19. Computer-Assisted Linguistic Analysis of the Peshitta

    Roorda, D.; Talstra, Eep; Dyk, Janet; van Keulen, Percy; Sikkel, Constantijn; Bosman, H.J.; Jenner, K.D.; Bakker, Dirk; Volkmer, J.A.; Gutman, Ariel; van Peursen, Wido Th.

    2014-01-01

    CALAP (Computer-Assisted Linguistic Analysis of the Peshitta), a joint research project of the Peshitta Institute Leiden and the Werkgroep Informatica at the Vrije Universiteit Amsterdam (1999-2005), concerned the computer-assisted analysis of the Peshitta to Kings (Janet Dyk and Percy van Keulen).

  20. Run 2 analysis computing for CDF and D0

    Fuess, S.

    1995-11-01

    Two large experiments at the Fermilab Tevatron collider will use upgraded detectors for the Run 2 period of running. The associated analysis software is also expected to change, both to account for higher data rates and to embrace new computing paradigms. A discussion is given of the problems facing current and future High Energy Physics (HEP) analysis computing, and several issues are explored in detail.

  1. Frequency modulation television analysis: Threshold impulse analysis. [with computer program

    Hodge, W. H.

    1973-01-01

    A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.

  2. Computational Analysis of SAXS Data Acquisition.

    Dong, Hui; Kim, Jin Seob; Chirikjian, Gregory S

    2015-09-01

    Small-angle x-ray scattering (SAXS) is an experimental biophysical method used for gaining insight into the structure of large biomolecular complexes. Under appropriate chemical conditions, the information obtained from a SAXS experiment can be equated to the pair distribution function, which is the distribution of distances between every pair of points in the complex. Here we develop a mathematical model to calculate the pair distribution function for a structure of known density, and analyze the computational complexity of these calculations. Efficient recursive computation of this forward model is an important step in solving the inverse problem of recovering the three-dimensional density of biomolecular structures from their pair distribution functions. In particular, we show that integrals of products of three spherical-Bessel functions arise naturally in this context. We then develop an algorithm for the efficient recursive computation of these integrals.
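
    What is being computed can be illustrated with a brute-force O(N^2) forward model: sample points from the known density and histogram the pairwise distances. The paper's contribution is a far more efficient recursive scheme built on spherical-Bessel integrals; the sketch below only shows the target quantity:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    # Naive forward model for the pair distribution function of a structure
    # of known density: represent the density by sample points and histogram
    # all pairwise distances.

    def pair_distribution(points, bins=100, r_max=None):
        d = pdist(points)                       # all pairwise distances
        r_max = r_max or d.max()
        hist, edges = np.histogram(d, bins=bins, range=(0.0, r_max))
        return 0.5 * (edges[:-1] + edges[1:]), hist

    # Example: uniform density inside a sphere of radius 10 (units assumed).
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(2000, 3))
    pts = 10 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
    pts = pts * rng.random((2000, 1)) ** (1 / 3)
    r, p_r = pair_distribution(pts)
    ```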

  3. Finite element method for solving Kohn-Sham equations based on self-adaptive tetrahedral mesh

    Zhang Dier; Shen Lihua; Zhou Aihui; Gong Xingao

    2008-01-01

    A finite element (FE) method with a self-adaptive mesh-refinement technique is developed for solving the density functional Kohn-Sham equations. The FE method adopts local piecewise polynomial basis functions, which produce sparsely structured Hamiltonian matrices. The method is well suited for parallel implementation without using Fourier transforms. In addition, the self-adaptive mesh-refinement technique can control the computational accuracy and efficiency with optimal mesh density in different regions.

  4. Mersiline mesh in premaxillary augmentation.

    Foda, Hossam M T

    2005-01-01

    Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in which Mersiline mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersiline mesh were used: the Mersiline roll technique, for the cases with central symmetric deficiencies, and the Mersiline packing technique, for the cases with asymmetric deficiencies. Premaxillary augmentation with Mersiline mesh proved to be simple technically, easy to perform, and not associated with any complications. Periodic follow-up evaluation for a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained with no clinically detectable resorption of the mesh implant.

  5. Male infertility after mesh hernia repair: A prospective study.

    Hallén, Magnus; Sandblom, Gabriel; Nordin, Pär; Gunnarsson, Ulf; Kvist, Ulrik; Westerdahl, Johan

    2011-02-01

    Several animal studies have raised concern about the risk for obstructive azoospermia owing to vasal fibrosis caused by the use of alloplastic mesh prosthesis in inguinal hernia repair. The aim of this study was to determine the prevalence of male infertility after bilateral mesh repair. In a prospective study, a questionnaire inquiring about involuntary childlessness, investigation for infertility and number of children was sent by mail to a group of 376 men aged 18-55 years, who had undergone bilateral mesh repair, identified in the Swedish Hernia Register (SHR). Questionnaires were also sent to 2 control groups, 1 consisting of 186 men from the SHR who had undergone bilateral repair without mesh, and 1 consisting of 383 men identified in the general population. The control group from the SHR was matched 2:1 for age and years elapsed since operation. The control group from the general population was matched 1:1 for age and marital status. The overall response rate was 525 of 945 (56%). Method of approach (anterior or posterior), type of mesh, and testicular status at the time of the repair had no significant impact on the answers to the questions. Nor did subgroup analysis of the men ≤40 years old reveal any significant differences. The results of this prospective study in men do not support the hypothesis that bilateral inguinal hernia repair with alloplastic mesh prosthesis causes male infertility at a significantly greater rate than those operated without mesh. Copyright © 2011 Mosby, Inc. All rights reserved.

  6. Design Investigation on Applicable Mesh Structures for Medical Stent Applications

    Asano, Shoji; He, Jianmei

    2017-11-01

    In recent years, the utilization of medical stents has been one of the effective treatments for stenosis and occlusion occurring in lumina of the living body that are indispensable for the maintenance of human life, such as superficial femoral artery (SFA) occlusion. However, there are concerns about the occurrence of fatigue fractures caused by stress concentrations, neointimal hyperplasia, and the like, due to the shape, structure and manufacturing method of conventional stents, so a stent having high strength and high flexibility is required. Therefore, in this research, applicable mesh structures for medical stents based on the design concepts of high strength and high flexibility are investigated to solve various problems of conventional stents. According to the shape and dimensions of an SFA occlusion therapy stent and its indwelling delivery catheter, the shape design of the meshed stent was first performed using the 3-dimensional CAD software SolidWorks. Then analytical examinations of the storage and compression characteristics of such meshed stent models were carried out with the finite element analysis software ANSYS Workbench. Meshed stent models with higher strength and higher flexibility, made by integral molding, are investigated analytically. It was found that the storage and compression characteristics of meshed stent models depend strongly on the basic mesh shape at the same surface void ratio. A trade-off relationship between flexibility and storage characteristics was found to exist, so it is necessary to provide appropriate curvatures in the basic mesh shape design.

  7. GENERATION OF IRREGULAR HEXAGONAL MESHES

    Vlasov Aleksandr Nikolaevich

    2012-07-01

    Decomposition is performed in a constructive way and, as an option, it involves a meshless representation. Further, this mapping method is used to generate the calculation mesh. In this paper, the authors analyze different cases of mapping onto simply connected and bi-connected canonical domains. They present forward and backward mapping techniques. Their potential application for the generation of nonuniform meshes within the framework of the asymptotic homogenization theory, in order to assess and predict effective characteristics of heterogeneous materials (composites), is also discussed.

  8. Field-aligned mesh joinery

    Cignoni, Paolo; Pietroni, Nico; Malomo, Luigi

    2014-01-01

    Mesh joinery is an innovative method to produce illustrative shape approximations suitable for fabrication. Mesh joinery is capable of producing complex fabricable structures in an efficient and visually pleasing manner. We represent an input geometry as a set of planar pieces arranged to compose a rigid structure, by exploiting an efficient slit mechanism. Since slices are planar, a standard 2D cutting system suffices to fabricate them. We automatically arrange slices according to a smooth ...

  9. Anisotropic mesh adaptation for marine ice-sheet modelling

    Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier

    2017-04-01

    Improving forecasts of the contribution of ice sheets to sea-level rise requires, amongst others, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component in obtaining reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target in the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method, as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice at controlling the numerical error, and it has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the intercomparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables makes it possible to reduce the number of mesh nodes by more than one order of magnitude, for the same numerical accuracy, compared to a uniform mesh.
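
    Interpolation-error estimators of this family are typically turned into a metric by rescaling the absolute Hessian of the adapted field; the sketch below follows that general recipe, with the tolerance and the ice-sheet-scale size bounds chosen as assumptions (combining several fields would additionally require a metric intersection, not shown):

    ```python
    import numpy as np

    # Sketch of an anisotropic metric built from the Hessian of an adapted
    # field: eigenvalues of |H| are rescaled by a tolerance eps and clipped
    # so the implied mesh sizes h_i = lambda_i^(-1/2) stay in [h_min, h_max].
    # The tolerance and bounds (metres, ice-sheet scale) are assumptions.

    def hessian_metric(H, eps=0.01, h_min=50.0, h_max=10000.0):
        """Anisotropic metric tensor from a symmetric 2x2 (or 3x3) Hessian."""
        evals, evecs = np.linalg.eigh(H)
        lam = np.abs(evals) / eps
        lam = np.clip(lam, 1.0 / h_max ** 2, 1.0 / h_min ** 2)
        return evecs @ np.diag(lam) @ evecs.T
    ```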

  10. Computational and Physical Analysis of Catalytic Compounds

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometry. For this reason, the synthesis of nanoparticles with controlled shape and size is important for exploiting their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used in an effort to compute the efficiencies of the catalytic compounds and the bonding energy changes during optimization convergence. The results illustrate how the metal oxides stabilize and the steps this takes. The curve of energy (kcal/mol) versus computation step (N) shows that the energy of the titania converges by the 7th iteration, whereas the silica converges by the 9th iteration.

  11. Classification and Analysis of Computer Network Traffic

    Bujlow, Tomasz

    2014-01-01

    various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...

  12. Computer programs simplify optical system analysis

    1965-01-01

    The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.

  13. Analysis of airways in computed tomography

    Petersen, Jens

    Chronic Obstructive Pulmonary Disease (COPD) is major cause of death and disability world-wide. It affects lung function through destruction of lung tissue known as emphysema and inflammation of airways, leading to thickened airway walls and narrowed airway lumen. Computed Tomography (CT) imaging...

  14. Affect and Learning : a computational analysis

    Broekens, Douwe Joost

    2007-01-01

    In this thesis we have studied the influence of emotion on learning. We have used computational modelling techniques to do so, more specifically, the reinforcement learning paradigm. Emotion is modelled as artificial affect, a measure that denotes the positiveness versus negativeness of a situation

  15. Adapting computational text analysis to social science (and vice versa

    Paul DiMaggio

    2015-11-01

    Social scientists and computer scientists are divided by small differences in perspective and not by any significant disciplinary divide. In the field of text analysis, several such differences are noted: social scientists often use unsupervised models to explore corpora, whereas many computer scientists employ supervised models to train on data; social scientists hold to more conventional causal notions than do most computer scientists, and often favor intense exploitation of existing algorithms, whereas computer scientists focus more on developing new models; and computer scientists tend to trust human judgment more than social scientists do. These differences have implications that potentially can improve the practice of social science.

  16. Method and system for mesh network embedded devices

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  17. Experience with a distributed computing system for magnetic field analysis

    Newman, M.J.

    1978-08-01

    The development of a general purpose computer system, THESEUS, is described, the initial use of which has been magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and the problems experienced are highlighted, together with a mention of possible future developments. (U.K.)

  18. Adaptive mesh refinement for shocks and material interfaces

    Dai, William Wenlong [Los Alamos National Laboratory

    2010-01-01

    There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats cells cell by cell and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve symmetries of physics problems. In this paper, we will present an approach for a patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we will show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.

  19. Enriching Triangle Mesh Animations with Physically Based Simulation.

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.

  20. Parallel adaptation of general three-dimensional hybrid meshes

    Kavouklis, Christos; Kallinderis, Yannis

    2010-01-01

    A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements; tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. Additional complexity imposed by the presence of multiple types of elements affects especially data migration, updates of local data structures and interpartition data structures. Efficient partition of hybrid meshes has been accomplished by transforming them to suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced which yields savings in computation time and relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.

  1. Characterization of the mechanism of drug-drug interactions from PubMed using MeSH terms.

    Lu, Yin; Figler, Bryan; Huang, Hong; Tu, Yi-Cheng; Wang, Ju; Cheng, Feng

    2017-01-01

    Identifying drug-drug interaction (DDI) is an important topic for the development of safe pharmaceutical drugs and for the optimization of multidrug regimens for complex diseases such as cancer and HIV. There have been about 150,000 publications on DDIs in PubMed, which is a great resource for DDI studies. In this paper, we introduced an automatic computational method for the systematic analysis of the mechanism of DDIs using MeSH (Medical Subject Headings) terms from PubMed literature. MeSH term is a controlled vocabulary thesaurus developed by the National Library of Medicine for indexing and annotating articles. Our method can effectively identify DDI-relevant MeSH terms such as drugs, proteins and phenomena with high accuracy. The connections among these MeSH terms were investigated by using co-occurrence heatmaps and social network analysis. Our approach can be used to visualize relationships of DDI terms, which has the potential to help users better understand DDIs. As the volume of PubMed records increases, our method for automatic analysis of DDIs from the PubMed database will become more accurate.
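
    The co-occurrence/network step can be sketched as follows; the article term lists are invented placeholders, not data from the study:

    ```python
    import itertools
    import networkx as nx

    # Toy sketch of the co-occurrence analysis described above: each article
    # is reduced to its DDI-relevant MeSH terms, term pairs are counted
    # across articles, and the counts form a weighted network.

    articles = [
        ["Ketoconazole", "Cytochrome P-450 CYP3A", "Drug Interactions"],
        ["Ketoconazole", "Midazolam", "Drug Interactions"],
        ["Midazolam", "Cytochrome P-450 CYP3A"],
    ]

    G = nx.Graph()
    for terms in articles:
        for a, b in itertools.combinations(sorted(set(terms)), 2):
            w = G.get_edge_data(a, b, {}).get("weight", 0)
            G.add_edge(a, b, weight=w + 1)

    # Simple network-analysis readout: most central terms by weighted degree.
    print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1]))
    ```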

  3. Feedforward Control of Gear Mesh Vibration Using Piezoelectric Actuators

    Gerald T. Montague

    1994-01-01

    This article presents a novel means of suppressing gear-mesh-related vibrations. The key components of the approach are piezoelectric actuators and a high-frequency analog feedforward controller. Test results are presented and show up to a 70% reduction in gear mesh acceleration and vibration control up to 4500 Hz. The principle of the approach is explained by an analysis of a harmonically excited, general linear vibratory system.
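
    A rough sketch of the cited principle, assuming illustrative modal parameters: the steady-state amplitude of a harmonically excited one-degree-of-freedom system, with a feedforward force canceling part of the mesh-frequency excitation.

    ```python
    import numpy as np

    # 1-DOF stand-in for a gear mesh mode: m*x'' + c*x' + k*x = f(t) + u(t)
    m, c, k = 1.0, 20.0, 4.0e7          # illustrative modal parameters
    f0, freq = 100.0, 1000.0            # mesh-frequency excitation (N, Hz)
    w = 2 * np.pi * freq

    def ss_amplitude(force):
        """Steady-state displacement amplitude of a harmonically forced system."""
        return force / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

    # The feedforward actuator injects an anti-phase force; 70% force authority
    # is assumed here to mirror the ~70% acceleration reduction reported.
    residual = f0 * (1 - 0.70)
    print("uncontrolled:", ss_amplitude(f0))
    print("with feedforward:", ss_amplitude(residual))
    ```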

  4. Surgeon Experience and Complications of Transvaginal Prolapse Mesh.

    Kelly, Erin C; Winick-Ng, Jennifer; Welk, Blayne

    2016-07-01

    To measure the proportion of women with transvaginal prolapse mesh complications and their association with surgeon volume. We conducted a retrospective, population-based cohort study of all women who underwent a mesh-based prolapse procedure using administrative data (hospital procedure and physician billing records) between 2002 and 2013 in Ontario, Canada. The primary outcome was surgical revision of the mesh. The primary exposure was surgeon volume: high (greater than the 75th percentile, requiring a median of five [interquartile range 5-6] procedures per year) and very high (greater than the 90th percentile, requiring a median of 13 [interquartile range 11-14] procedures per year) volume mesh implanters were identified each year. The primary analysis was an adjusted Cox proportional hazards model. A total of 5,488 women underwent mesh implantation by 1 of 368 unique surgeons. Median follow-up time was 5.4 (interquartile range 3.0-8.0) years. We found that 218 women (4.0%) underwent mesh reoperation a median of 1.17 (interquartile range 0.58-2.90) years after implantation. The hazard of reoperation for complications was lower only for patients of very high-volume surgeons (3.0% [145/3,001] compared with 4.8% [73/2,447], adjusted hazard ratio 0.59, 95% confidence interval 0.40-0.86). In multivariable modeling, younger age, concomitant hysterectomy, blood transfusion, and increased medical comorbidity were all associated with vaginal mesh reoperation. Approximately 5% of women who underwent mesh-based prolapse surgery required reoperation for a mesh complication within 10 years. The risk of reoperation was lowest for surgeons performing 14 or more procedures per year.

  5. Interface between computational fluid dynamics (CFD) and plant analysis computer codes

    Coffield, R.D.; Dunckhorst, F.F.; Tomlinson, E.T.; Welch, J.W.

    1993-01-01

    Computational fluid dynamics (CFD) can provide valuable input to the development of advanced plant analysis computer codes. The types of interfacing discussed in this paper will directly contribute to modeling and accuracy improvements throughout the plant system and should result in significant reduction of design conservatisms that have been applied to such analyses in the past.

  6. Computational analysis of ozonation in bubble columns

    Quinones-Bolanos, E.; Zhou, H.; Otten, L.

    2002-01-01

    This paper presents a new computational ozonation model, based on the principles of computational fluid dynamics along with the kinetics of ozone decay and microbial inactivation, to predict the performance of ozone disinfection in fine bubble columns. The model uses a mixture two-phase flow formulation to simulate the hydrodynamics of the water flow, and two transport equations to track the concentration profiles of ozone and microorganisms along the height of the column, respectively. The applicability of the model was demonstrated by comparing simulated ozone concentrations with experimental measurements obtained from a pilot-scale fine bubble column. One distinct advantage of this approach is that it does not require prerequisite assumptions such as plug-flow conditions, perfect mixing, tanks-in-series, or uniform radial or longitudinal dispersion, and it can predict the performance of disinfection contactors without expensive and tedious tracer studies. (author)
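
    A minimal 1D sketch of the two transport equations: dissolved ozone and surviving microorganisms are marched up the column height with explicit Euler steps. All coefficients are illustrative, not fitted to the pilot column in the paper.

    ```python
    import numpy as np

    H, n = 3.0, 300                 # column height (m), grid points
    dz = H / n
    u = 0.05                        # superficial water velocity (m/s)
    kla, c_sat = 0.01, 8.0          # gas-liquid transfer rate (1/s), saturation (mg/L)
    kd, ki = 1e-3, 0.5              # ozone decay (1/s), inactivation (L/mg/s)

    c = np.zeros(n + 1)             # dissolved ozone (mg/L)
    N = np.ones(n + 1)              # surviving fraction of microorganisms
    for i in range(n):
        # march upward: transfer from bubbles, first-order decay, Chick-Watson kill
        c[i + 1] = c[i] + dz / u * (kla * (c_sat - c[i]) - kd * c[i])
        N[i + 1] = N[i] - dz / u * ki * c[i] * N[i]

    print("outlet ozone (mg/L):", round(c[-1], 3))
    print("log inactivation:", round(-np.log10(N[-1]), 2))
    ```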

  7. User Manual for the PROTEUS Mesh Tools

    Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-06-01

    This report describes the various mesh tools that are provided with the PROTEUS code giving both descriptions of the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  9. What is the evidence for the use of biologic or biosynthetic meshes in abdominal wall reconstruction?

    Köckerling, F; Alam, N N; Antoniou, S A; Daniels, I R; Famiglietti, F; Fortelny, R H; Heiss, M M; Kallinowski, F; Kyle-Leinhase, I; Mayer, F; Miserez, M; Montgomery, A; Morales-Conde, S; Muysoms, F; Narang, S K; Petter-Puchner, A; Reinpold, W; Scheuerlein, H; Smietanski, M; Stechemesser, B; Strey, C; Woeste, G; Smart, N J

    2018-04-01

    Although many surgeons have adopted the use of biologic and biosynthetic meshes in complex abdominal wall hernia repair, others have questioned the use of these products. Several review articles have criticized the poor standard of studies reporting on the use of biologic meshes for different abdominal wall repairs. The aim of this consensus review is to conduct an evidence-based analysis of the efficacy of biologic and biosynthetic meshes in predefined clinical situations. A European working group, the "BioMesh Study Group", composed of invited surgeons with a special interest in surgical meshes, formulated key questions and forwarded them for processing in subgroups. In January 2016, a workshop was held in Berlin where the findings were presented, discussed, and voted on for consensus. Findings were set out in writing by the subgroups, followed by consensus being reached. For the review, 114 studies and background analyses were used. The cumulative data regarding biologic mesh under contaminated conditions do not support the claim that it is better than synthetic mesh. Biologic mesh use should be avoided when bridging is needed. In inguinal hernia repair, biologic and biosynthetic meshes do not have a clear advantage over synthetic meshes. For prevention of incisional or parastomal hernias, there is no evidence to support the use of biologic/biosynthetic meshes. In complex abdominal wall hernia repairs (incarcerated hernia, parastomal hernia, infected mesh, open abdomen, enterocutaneous fistula, and component separation technique), biologic and biosynthetic meshes do not provide a superior alternative to synthetic meshes. The routine use of biologic and biosynthetic meshes cannot be recommended.

  10. LHCb Distributed Data Analysis on the Computing Grid

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  12. Hybrid soft computing systems for electromyographic signals analysis: a review.

    Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates

    2014-02-03

    The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing systems (HSCSs), integrations of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCSs in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis.
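
    As a small illustration of one HSCS ingredient, the sketch below trains a support vector machine on two classic features of synthetic EMG windows (all data and parameters are made up):

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def features(window):
        rms = np.sqrt(np.mean(window ** 2))             # amplitude feature
        zc = np.mean(np.diff(np.sign(window)) != 0)     # zero-crossing rate
        return [rms, zc]

    windows = [rng.normal(0, 0.1, 256) for _ in range(100)]    # rest
    windows += [rng.normal(0, 1.0, 256) for _ in range(100)]   # contraction
    X = np.array([features(w) for w in windows])
    y = np.array([0] * 100 + [1] * 100)

    clf = SVC(kernel="rbf").fit(X[::2], y[::2])         # train on half the data
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
    ```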

  13. Accident sequence analysis of human-computer interface design

    Fan, C.-F.; Chen, W.-H.

    2000-01-01

    It is important to predict potential accident sequences of human-computer interaction in a safety-critical computing system so that vulnerable points can be disclosed and removed. We address this issue by proposing a Multi-Context human-computer interaction Model along with its analysis techniques: an Augmented Fault Tree Analysis and a Concurrent Event Tree Analysis. The proposed augmented fault tree can identify the potential weak points in software design that may induce unintended software functions or erroneous human procedures. The concurrent event tree can enumerate possible accident sequences due to these weak points.
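
    A minimal sketch of how such a fault tree is quantified once constructed, assuming independent basic events with hypothetical probabilities:

    ```python
    # Gates combine basic-event probabilities, assuming independent events.
    def AND(*p):   # all inputs must fail
        out = 1.0
        for x in p:
            out *= x
        return out

    def OR(*p):    # any input failing suffices (via complements)
        out = 1.0
        for x in p:
            out *= (1.0 - x)
        return 1.0 - out

    # Hypothetical basic events for a human-computer interface weak point.
    p_display_misleads = 1e-3     # software presents ambiguous state
    p_operator_slip    = 5e-2     # erroneous human procedure
    p_interlock_fails  = 1e-4     # automatic guard does not intervene

    top = AND(OR(p_display_misleads, p_operator_slip), p_interlock_fails)
    print(f"P(unintended action) = {top:.2e}")
    ```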

  14. Application of microarray analysis on computer cluster and cloud platforms.

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
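
    A minimal sketch of why such analyses parallelize so easily: each permutation replicate is independent, so a pool of worker processes (one node here, but the same pattern scales to clusters or cloud instances) can compute them concurrently. The data and effect size are synthetic.

    ```python
    import numpy as np
    from multiprocessing import Pool

    rng = np.random.default_rng(0)
    group_a = rng.normal(0.0, 1.0, 50)    # placeholder expression values
    group_b = rng.normal(0.4, 1.0, 50)
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])

    def one_permutation(seed):
        """One permutation replicate; independent, hence trivially parallel."""
        r = np.random.default_rng(seed)
        perm = r.permutation(pooled)
        return perm[:50].mean() - perm[50:].mean()

    if __name__ == "__main__":
        with Pool(4) as pool:
            null = pool.map(one_permutation, range(10_000))
        p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
        print("permutation p-value:", p)
    ```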

  15. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims to provide computational physicists and policy planners with useful bibliometric results for an assessment of research activities. To this end, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. We used journal articles from Elsevier's Scopus database covering the period 2004-2013. We ranked authors in the field by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the network in relation to author rank. Suggestions for further studies are discussed.
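
    A toy sketch of the co-authorship network construction, using hypothetical author lists and the networkx library:

    ```python
    import networkx as nx
    from itertools import combinations

    # Hypothetical author lists extracted from bibliographic records.
    papers = [
        ["Kim", "Park", "Lee"],
        ["Kim", "Lee"],
        ["Park", "Chen"],
        ["Lee", "Chen", "Sato"],
    ]

    G = nx.Graph()
    for authors in papers:
        for a, b in combinations(authors, 2):
            # accumulate an edge weight = number of co-authored papers
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)

    # Simple descriptors of the co-authorship network.
    print("degree:", dict(G.degree()))
    print("density:", round(nx.density(G), 3))
    ```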

  16. Isogeometric analysis : a calculus for computational mechanics

    Benson, D.J.; Borst, de R.; Hughes, T.J.R.; Scott, M.A.; Verhoosel, C.V.; Topping, B.H.V.; Adam, J.M.; Pallarés, F.J.; Bru, R.; Romero, M.L.

    2010-01-01

    The first paper on isogeometric analysis appeared only five years ago [1], and the first book appeared last year [2]. Progress has been rapid. Isogeometric analysis has been applied to a wide variety of problems in solids, fluids and fluid-structure interactions. Superior accuracy to traditional

  17. Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes

    Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.

    2005-01-01

    An inverse method for blade design based on the Navier-Stokes equations on adaptive unstructured meshes has been developed. Unlike methods based on inviscid equations, it directly accounts for the effect of viscosity. The pressure (or pressure loading) is prescribed, and the design method computes the blade shape that would produce the prescribed target pressure distribution. The method is implemented using a cell-centered finite volume scheme, which solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh technique based on grid subdivision and local adaptation is utilized to increase the accuracy. (author)

  18. Computer-Based Interaction Analysis with DEGREE Revisited

    Barros, B.; Verdejo, M. F.

    2016-01-01

    We review our research with "DEGREE" and analyse how our work has impacted the collaborative learning community since 2000. Our research is framed within the context of computer-based interaction analysis and the development of computer-supported collaborative learning (CSCL) tools. We identify some aspects of our work which have been…

  19. Isotopic analysis of plutonium by computer controlled mass spectrometry

    1974-01-01

    Isotopic analysis of plutonium chemically purified by ion exchange is achieved using a thermal ionization mass spectrometer. Data acquisition from and control of the instrument is done automatically with a dedicated system computer in real time with subsequent automatic data reduction and reporting. Separation of isotopes is achieved by varying the ion accelerating high voltage with accurate computer control

  20. Computer Programme for the Dynamic Analysis of Tall Regular ...

    The traditional method of dynamic analysis of tall rigid frames assumes the shear frame model. Models that allow joint rotations with/without the inclusion of the column axial loads give improved results but pose much more computational difficulty. In this work a computer program Natfrequency that determines the dynamic ...
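
    A minimal sketch of the shear-frame calculation referred to above: lumped storey masses and stiffnesses (values illustrative) define a generalized eigenproblem whose eigenvalues give the natural frequencies.

    ```python
    import numpy as np

    # Shear-frame model of a 3-storey regular frame: M q'' + K q = 0
    # leads to the eigenproblem K v = w^2 M v.
    m = np.diag([2.0e4, 2.0e4, 1.5e4])              # storey masses (kg)
    k = [8.0e6, 8.0e6, 6.0e6]                       # storey stiffnesses (N/m)
    K = np.array([[k[0] + k[1], -k[1],        0.0],
                  [-k[1],        k[1] + k[2], -k[2]],
                  [0.0,         -k[2],         k[2]]])

    # Symmetrize the generalized problem via M^{-1/2} K M^{-1/2}.
    m_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(m)))
    w2, _ = np.linalg.eigh(m_inv_sqrt @ K @ m_inv_sqrt)
    freqs = np.sqrt(w2) / (2 * np.pi)
    print("natural frequencies (Hz):", np.round(freqs, 2))
    ```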

  1. Topological patterns of mesh textures in serpentinites

    Miyazawa, M.; Suzuki, A.; Shimizu, H.; Okamoto, A.; Hiraoka, Y.; Obayashi, I.; Tsuji, T.; Ito, T.

    2017-12-01

    Serpentinization is a hydration process that forms serpentine minerals and magnetite within the oceanic lithosphere. Microfractures crosscut these minerals during the reactions, and the resulting structures look like mesh textures. It is known that the patterns of microfractures and the evolution of the system are affected by the hydration reaction and by fluid transport in fractures and within matrices. This study aims at quantifying the topological patterns of the mesh textures and understanding possible conditions of fluid transport and reaction during serpentinization in the oceanic lithosphere. Two-dimensional simulation by the distinct element method (DEM) generates fracture patterns due to serpentinization. The microfracture patterns are evaluated by persistent homology, which measures features of connected components of a topological space and encodes multi-scale topological features in persistence diagrams. The persistence diagrams of the different mesh textures are evaluated by principal component analysis to bring out their dominant patterns. This approach helps extract feature values of fracture patterns from high-dimensional and complex datasets.

  2. Computer use and carpal tunnel syndrome: A meta-analysis.

    Shiri, Rahman; Falah-Hassani, Kobra

    2015-02-15

    Studies have reported contradictory results on the role of keyboard or mouse use in carpal tunnel syndrome (CTS). This meta-analysis aimed to assess whether computer use causes CTS. Literature searches were conducted in several databases up to May 2014. Twelve studies qualified for a random-effects meta-analysis; heterogeneity and publication bias were assessed. In a meta-analysis of six studies (N=4,964) that compared computer workers with the general population or other occupational populations, computer/typewriter use was inversely associated with CTS (pooled odds ratio (OR)=0.72, 95% confidence interval (CI) 0.58-0.90), with similar inverse associations at the ≥1 and ≥4 h/day exposure cut-offs. In studies using other office workers as the comparison group, CTS was associated with computer/typewriter use (pooled OR=1.34, 95% CI 1.08-1.65), mouse use (OR=1.93, 95% CI 1.43-2.61), frequent computer use (OR=1.89, 95% CI 1.15-3.09), frequent mouse use (OR=1.84, 95% CI 1.18-2.87), and years of computer work (OR=1.92, 95% CI 1.17-3.17 for long vs. short). There was no evidence of publication bias for either type of study. Studies that compared computer workers with the general population or several occupational groups did not control their estimates for occupational risk factors; thus, office workers with no or little computer use are a more appropriate comparison group than the general population or several occupational groups. This meta-analysis suggests that excessive computer use, particularly mouse usage, might be a minor occupational risk factor for CTS. Further prospective studies among office workers, with objectively assessed keyboard and mouse use and CTS symptoms or signs confirmed by a nerve conduction study, are needed. Copyright © 2014 Elsevier B.V. All rights reserved.
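
    For readers unfamiliar with the mechanics of pooling, the sketch below shows fixed-effect inverse-variance pooling of odds ratios on the log scale, the standard building block of such meta-analyses (the inputs are illustrative, not the study's data):

    ```python
    import numpy as np

    or_i = np.array([1.5, 2.1, 1.2, 1.9])     # per-study odds ratios
    ci_hi = np.array([2.6, 3.8, 2.2, 3.4])    # upper 95% CI bounds

    log_or = np.log(or_i)
    se = (np.log(ci_hi) - log_or) / 1.96      # SE recovered from the CI
    w = 1.0 / se**2                            # inverse-variance weights

    pooled = np.sum(w * log_or) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
    ```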

  3. Urban Flow and Pollutant Dispersion Simulation with Multi-scale coupling of Meteorological Model with Computational Fluid Dynamic Analysis

    Liu, Yushi; Poh, Hee Joo

    2014-11-01

    Computational fluid dynamics (CFD) analysis has become increasingly important in modern urban planning aimed at creating highly livable cities. This paper presents a multi-scale modeling methodology that couples the Weather Research and Forecasting (WRF) model with the open-source CFD tool OpenFOAM. The coupling enables simulation of wind flow and pollutant dispersion in urban built-up areas with high-resolution meshes. In this methodology, the meso-scale WRF model provides the boundary conditions for the micro-scale CFD model in OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation, and the complexity of the building layout can be handled with ease by OpenFOAM's meshing utilities. The result is validated against the Joint Urban 2003 tracer field tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and the field observations. The WRF-OpenFOAM coupling provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended to consider future weather conditions in scenario studies of climate change impact.

  4. Tolerance analysis through computational imaging simulations

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  5. Analysis and Assessment of Computer-Supported Collaborative Learning Conversations

    Trausan-Matu, Stefan

    2008-01-01

    Trausan-Matu, S. (2008). Analysis and Assessment of Computer-Supported Collaborative Learning Conversations. Workshop presentation at the symposium Learning networks for professional. November, 14, 2008, Heerlen, Nederland: Open Universiteit Nederland.

  6. Surveillance Analysis Computer System (SACS) software requirements specification (SRS)

    Glasscock, J.A.; Flanagan, M.J.

    1995-09-01

    This document is the primary document establishing requirements for the Surveillance Analysis Computer System (SACS) Database, an Impact Level 3Q system. The purpose is to provide the customer and the performing organization with the requirements for the SACS Project

  7. Users manual for Opt-MS : local methods for simplicial mesh smoothing and untangling.

    Freitag, L.

    1999-07-20

    Creating meshes containing good-quality elements is a challenging, yet critical, problem facing computational scientists today. Several researchers have shown that the size of the mesh, the shape of the elements within that mesh, and their relationship to the physical application of interest can profoundly affect the efficiency and accuracy of many numerical approximation techniques. If the application contains anisotropic physics, the mesh can be improved by considering both local characteristics of the approximate application solution and the geometry of the computational domain. If the application is isotropic, regularly shaped elements in the mesh reduce the discretization error, and the mesh can be improved a priori by considering geometric criteria only. The Opt-MS package provides several local node-point smoothing techniques that improve elements in the mesh by adjusting grid point locations using geometric criteria. The package is easy to use; only three subroutine calls are required for the user to begin using the software. The package is also flexible; the user may change the technique, function, or dimension of the problem at any time during the mesh smoothing process. Opt-MS is designed to interface with C and C++ codes, and examples for both two- and three-dimensional meshes are provided.
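
    A minimal sketch of the simplest such technique, Laplacian smoothing with fixed boundary nodes (Opt-MS itself is a C library; this toy Python version only illustrates the idea):

    ```python
    import numpy as np

    def laplacian_smooth(coords, edges, fixed, iters=50, relax=0.5):
        """Move each free node toward the centroid of its neighbours.

        coords : (n, 2) node coordinates
        edges  : iterable of (i, j) node index pairs
        fixed  : set of boundary node indices that must not move
        """
        nbrs = {i: set() for i in range(len(coords))}
        for i, j in edges:
            nbrs[i].add(j); nbrs[j].add(i)
        x = coords.astype(float).copy()
        for _ in range(iters):
            for i, ns in nbrs.items():
                if i in fixed or not ns:
                    continue
                centroid = x[list(ns)].mean(axis=0)
                x[i] += relax * (centroid - x[i])   # under-relaxed update
        return x

    # Unit square with a badly placed interior node 4; boundary stays fixed.
    coords = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.9]])
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
    print(laplacian_smooth(coords, edges, fixed={0, 1, 2, 3})[4])  # ~[0.5, 0.5]
    ```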

  8. COMPUTING

    M. Kasemann

    Overview During the past three months, activities focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  9. Adaptive mesh refinement and adjoint methods in geophysics simulations

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
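
    A toy 1D sketch of gradient-driven adaptation of the kind described, flagging and halving cells where the solution varies rapidly (the threshold and field are made up):

    ```python
    import numpy as np

    # Refine cells where the solution gradient is large, as around a
    # plate-boundary-like sharp feature.
    edges = np.linspace(0.0, 1.0, 17)                  # coarse mesh edges
    centers = 0.5 * (edges[:-1] + edges[1:])
    u = np.tanh((centers - 0.6) / 0.02)                # sharp feature at x=0.6

    indicator = np.abs(np.gradient(u, centers))        # per-cell gradient
    refine = indicator > 5.0                           # adaptation criterion

    new_edges = [edges[0]]
    for i, split in enumerate(refine):
        if split:                                      # halve flagged cells
            new_edges.append(0.5 * (edges[i] + edges[i + 1]))
        new_edges.append(edges[i + 1])

    print(f"{len(edges) - 1} cells -> {len(new_edges) - 1} cells after one pass")
    ```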

  10. From Digital Imaging to Computer Image Analysis of Fine Art

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  11. Use of computer codes for system reliability analysis

    Sabek, M.; Gaafar, M.; Poucet, A.

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author)

  13. Computer System Analysis for Decommissioning Management of Nuclear Reactor

    Nurokhim; Sumarbagiono

    2008-01-01

    Nuclear reactor decommissioning is a complex activity that should be planned and implemented carefully. A computer-based system needs to be developed to support nuclear reactor decommissioning. Some computer systems for the management of nuclear power reactors have been studied. The software systems COSMARD and DEXUS, developed in Japan, and IDMT, developed in Italy, were used as models for analysis and discussion. It can be concluded that a computer system for nuclear reactor decommissioning management is quite complex, involving computer codes for radioactive inventory database calculation, calculation modules for the stages of the decommissioning phase, and spatial data system development for virtual reality. (author)

  14. System Matrix Analysis for Computed Tomography Imaging

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesirable. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate the elements of the matrix and we present results based on real projection data. PMID:26575482
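
    A simplified 2D sketch of Siddon-style system-matrix generation: the parametric values of all pixel-boundary crossings along a ray are merged, and each resulting segment length becomes a nonzero entry of one matrix row (the geometry and grid are illustrative):

    ```python
    import numpy as np

    def intersection_lengths(p0, p1, nx, ny):
        """Length of the ray segment p0->p1 inside each pixel of an nx-by-ny
        unit-spaced grid with its corner at the origin."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        alphas = [0.0, 1.0]
        for axis, n in ((0, nx), (1, ny)):
            if d[axis] != 0.0:
                a = (np.arange(n + 1) - p0[axis]) / d[axis]
                alphas.extend(a[(a > 0.0) & (a < 1.0)])
        alphas = np.unique(alphas)
        lengths = {}
        for a0, a1 in zip(alphas[:-1], alphas[1:]):
            mid = p0 + 0.5 * (a0 + a1) * d          # segment midpoint
            i, j = int(mid[0]), int(mid[1])          # pixel containing it
            if 0 <= i < nx and 0 <= j < ny:
                lengths[(i, j)] = (a1 - a0) * np.linalg.norm(d)
        return lengths   # nonzero entries of one system-matrix row

    print(intersection_lengths((0.0, 0.5), (4.0, 2.5), nx=4, ny=4))
    ```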

  15. Computational analysis of sequence selection mechanisms.

    Meyerguz, Leonid; Grasso, Catherine; Kleinberg, Jon; Elber, Ron

    2004-04-01

    Mechanisms leading to gene variations are responsible for the diversity of species and are important components of the theory of evolution. One constraint on gene evolution is that of protein foldability; the three-dimensional shapes of proteins must be thermodynamically stable. We explore the impact of this constraint and calculate properties of foldable sequences using 3660 structures from the Protein Data Bank. We seek a selection function that receives sequences as input, and outputs survival probability based on sequence fitness to structure. We compute the number of sequences that match a particular protein structure with energy lower than the native sequence, the density of the number of sequences, the entropy, and the "selection" temperature. The mechanism of structure selection for sequences longer than 200 amino acids is approximately universal. For shorter sequences, it is not. We speculate on concrete evolutionary mechanisms that show this behavior.

  16. Process for computing geometric perturbations for probabilistic analysis

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  17. Structural dynamics in LMFBR containment analysis. A brief survey of computational methods and codes

    Chang, Y.W.

    1977-01-01

    This paper gives a brief survey of the computational methods and codes available for LMFBR containment analysis. The various numerical methods commonly used in the computer codes are compared. It provides reactor engineers with up-to-date information on the development of structural dynamics in LMFBR containment analysis, and it can also be used as a basis for the selection of numerical methods in future code development. First, the commonly used finite-difference expressions in the Lagrangian codes will be compared. Sample calculations will be used as a basis for discussing and comparing the accuracy of the various finite-difference representations. The distortion of the meshes will also be compared, and the techniques used for eliminating numerical instabilities will be discussed and compared using examples. Next, the numerical methods used in the Eulerian formulation will be compared, first among themselves and then with the Lagrangian formulations. Special emphasis is placed on the effect of mass diffusion in the Eulerian calculation on the propagation of discontinuities. Implicit and explicit numerical integrations will be discussed and results obtained from these two techniques will be compared. Then, the finite-element methods are compared with the finite-difference methods. The advantages and disadvantages of the two methods will be discussed in detail, together with the versatility and ease of application of each method to containment analysis involving complex geometries. It will also be shown that the finite-element equations for a constant-pressure fluid element are identical to the finite-difference equations obtained using contour integrations. Finally, conclusions based on this study will be given

  18. Development of small scale cluster computer for numerical analysis

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two units of personal computer were successfully networked together to form a small-scale cluster. Each of the processors involved is a quad-core processor, giving the cluster eight cores in total. The cluster incorporates an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI Hello World program written in C. Additionally, a performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, four runs were made with the same code using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors the time required to solve the problem decreases; the calculation time was roughly halved each time the number of processors was doubled. To conclude, we successfully developed a small-scale cluster computer from common hardware that is capable of higher computing power than a single CPU, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
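
    A Python analogue of the two tests, assuming the mpi4py package rather than the C program used in the study: a hello check for communication, plus a parallel numerical integration whose per-rank work halves as the number of ranks doubles.

    ```python
    # Run with e.g.:  mpiexec -n 8 python cluster_test.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    print(f"hello from rank {rank} of {size}")      # communication test

    # Performance-test stand-in: each rank integrates part of 4/(1+x^2),
    # so doubling the ranks roughly halves the per-rank work.
    n = 10_000_000
    x = (np.arange(rank, n, size) + 0.5) / n
    local = np.sum(4.0 / (1.0 + x * x)) / n
    pi = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print("pi ~", pi)
    ```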

  19. Data analysis through interactive computer animation method (DATICAM)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  20. Computational Analysis of Spray Jet Flames

    Jain, Utsav

    Renewable sources of energy are seeing increased utilization, but because of high-energy-density applications, combustion will never be obsolete. Spray combustion is a type of multiphase combustion with tremendous engineering applications in fields ranging from energy conversion devices to rocket propulsion systems. Developing accurate computational models of turbulent spray combustion is vital for improving the design of combustors and making them energy efficient. Flamelet models have been extensively used for gas-phase combustion because of their relatively low computational cost in modeling the turbulence-chemistry interaction through a low-dimensional manifold approach. This framework is designed for gas-phase non-premixed combustion, and its implementation is not straightforward for multiphase and multi-regime combustion such as spray combustion, because of the use of a conserved scalar and various flamelet-related assumptions. Mixture fraction has been popularly employed as a conserved scalar and hence used to parameterize the characteristics of gaseous flamelets. For spray combustion, however, the mixture fraction is not monotonic and does not give a unique mapping with which to parameterize the structure of spray flames. In order to develop a flamelet-type model for spray flames, a new variable called the mixing variable is introduced, which acts as an ideal conserved scalar and takes into account the convection and evaporation of fuel droplets. In addition, it has been observed that though gaseous flamelets can be characterized by the conserved scalar and its dissipation, this might not be true for spray flamelets. Droplet dynamics has a significant influence on the spray flamelet, and because of effects such as flame penetration of droplets and oscillation of droplets across the stagnation plane, it becomes important to accommodate their influence in the flamelet formulation. In order to recognize the

  1. Retrospective analysis of a VACM (vacuum-assisted closure and mesh-mediated fascial traction treatment manual for temporary abdominal wall closure – results of 58 consecutive patients

    Beltzer, Christian

    2016-07-01

    Introduction: The optimal treatment concept for temporary abdominal closure (TAC) in critically ill visceral surgery patients with an open abdomen (OA) remains unclear. VACM (vacuum-assisted closure and mesh-mediated fascial traction) therapy seems to permit higher delayed primary fascial closure rates (FCR) than other TAC procedures. Material and methods: Patients of our clinic (n=58) who were treated according to a VAC/VACM treatment manual in the period from 2005 to 2008 were retrospectively analysed. Results: The overall FCR of all patients was 48.3% (95% confidence interval: 34.95-61.78). An FCR of 61.3% was achieved in patients who had a vicryl mesh implanted at the fascial level (VACM therapy) in the course of treatment. Mortality among patients treated with VACM therapy was 45.2% (95% CI: 27.32-63.97). Conclusions: The results of our study confirm those of previous studies, which showed an acceptable FCR among non-trauma patients treated with VACM therapy. VACM therapy currently appears to be the treatment regimen of choice for patients with OA requiring TAC.

  2. Modeling and Performance Analysis of Route-Over and Mesh-Under Routing Schemes in 6LoWPAN under Error-Prone Channel Condition

    Tsung-Han Lee

    2013-01-01

    6LoWPAN technology has attracted extensive attention recently, because 6LoWPAN is one of the Internet of Things standards and it adapts the IPv6 protocol stack to low-rate wireless personal area networks such as IEEE 802.15.4. One view is that the IP architecture is not suitable for low-rate wireless personal area networks. Indeed, it is a challenge to implement the IPv6 protocol stack in IEEE 802.15.4 devices, because an IPv6 packet can be much larger than the maximum frame size of IEEE 802.15.4 at the data link layer. To solve this problem, 6LoWPAN provides header compression to reduce the transmission overhead of IP packets. In addition, two routing schemes, mesh-under and route-over, are proposed in 6LoWPAN to forward IP fragments over IEEE 802.15.4 radio links. The distinction is based on which layer of the 6LoWPAN protocol stack is in charge of routing decisions: in the route-over scheme, routing decisions are made at the network layer, and in mesh-under, by the adaptation layer. The goal of this research is to understand the performance of the two routing schemes in 6LoWPAN under error-prone channel conditions.
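
    A back-of-envelope sketch of the fragmentation problem both schemes must handle; the sizes are nominal assumptions (about 102 bytes of MAC payload out of the 127-byte PHY frame, and 4- and 5-byte 6LoWPAN fragment headers), not normative values:

    ```python
    MAC_PAYLOAD = 102               # assumed bytes left after the MAC header
    FRAG1_HDR, FRAGN_HDR = 4, 5     # 6LoWPAN first/subsequent fragment headers

    def fragments(ipv6_len):
        """Number of 802.15.4 frames needed for one IPv6 packet."""
        if ipv6_len <= MAC_PAYLOAD:
            return 1                                  # no fragmentation needed
        # fragment offsets count 8-byte units, so payloads round down to 8
        first = (MAC_PAYLOAD - FRAG1_HDR) // 8 * 8
        per_fragn = (MAC_PAYLOAD - FRAGN_HDR) // 8 * 8
        n, remaining = 1, ipv6_len - first
        while remaining > 0:
            n += 1
            remaining -= per_fragn
        return n

    print(fragments(1280))   # IPv6 minimum MTU -> 14 frames under these numbers
    ```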

  3. Computational analysis of thresholds for magnetophosphenes

    Laakso, Ilkka; Hirata, Akimasa

    2012-01-01

    In international guidelines, basic restriction limits on the exposure of humans to low-frequency magnetic and electric fields are set with the objective of preventing the generation of phosphenes, visual sensations of flashing light not caused by light. Measured data on magnetophosphenes, i.e. phosphenes caused by a magnetically induced electric field on the retina, are available from volunteer studies. However, there is no simple way for determining the retinal threshold electric field or current density from the measured threshold magnetic flux density. In this study, the experimental field configuration of a previous study, in which phosphenes were generated in volunteers by exposing their heads to a magnetic field between the poles of an electromagnet, is computationally reproduced. The finite-element method is used for determining the induced electric field and current in five different MRI-based anatomical models of the head. The direction of the induced current density on the retina is dominantly radial to the eyeball, and the maximum induced current density is observed at the superior and inferior sides of the retina, which agrees with literature data on the location of magnetophosphenes at the periphery of the visual field. On the basis of computed data, the macroscopic retinal threshold current density for phosphenes at 20 Hz can be estimated as 10 mA m⁻² (−20% to +30%, depending on the anatomical model); this current density corresponds to an induced eddy current of 14 μA (−20% to +10%), and about 20% of this eddy current flows through each eye. The ICNIRP basic restriction limit for the induced electric field in the case of occupational exposure is not exceeded until the magnetic flux density is about two to three times the measured threshold for magnetophosphenes, so the basic restriction limit does not seem to be conservative. However, the reasons for the non-conservativeness are purely technical: removal of the highest 1% of

  4. Computer-automated neutron activation analysis system

    Minor, M.M.; Garcia, S.R.

    1983-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day.

  5. EVP2D- a computer code developed for the eslastoviscoplastic-damage analysis of axyssimetrical and two-dimensional problems

    Goncalves Filho, O.J.A.

    1987-01-01

    This work describes the computer code EVP2D, developed for the elastoviscoplastic-damage analysis of metallic components, with particular emphasis on the problem of creep damage and rupture. After a brief introduction to the basic concepts and procedures of Continuum Damage Mechanics, the constitutive equations implemented are presented. Next, the finite element approximation proposed for the solution of the initial boundary value problem of interest is discussed, particularly the numerical algorithms used for the time integration of the creep strain rate and damage rate equations, and the numerical procedures adopted for dealing with the presence of partially or fully ruptured finite elements in the mesh. As a practical application, the rupture behaviour of a biaxially tension-loaded plate containing a central circular hole is examined. Finally, future developments of the code, which include as priorities the treatment of cyclic loads and the description of the anisotropic character of creep damage evolution, are briefly introduced. (author)
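
    A minimal sketch of the damage-rate integration such a code performs, using a Kachanov-type evolution law with illustrative constants and comparing against the closed-form rupture time:

    ```python
    # Kachanov-type creep damage: d(omega)/dt = (sigma/A)^r / (1 - omega)^k,
    # with rupture as the damage variable omega -> 1. Constants illustrative.
    A, r, k = 200.0, 4.0, 6.0        # material constants (MPa, -, -)
    sigma = 100.0                     # applied stress (MPa)

    def rupture_time(sigma, dt=1.0e-3):
        omega, t = 0.0, 0.0
        while omega < 0.99:           # explicit integration of the damage rate
            omega += dt * (sigma / A) ** r / (1.0 - omega) ** k
            t += dt
        return t

    # Analytical Kachanov rupture time for comparison:
    t_analytic = 1.0 / ((k + 1) * (sigma / A) ** r)
    print("numerical :", round(rupture_time(sigma), 2))
    print("analytic  :", round(t_analytic, 2))
    ```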

  6. Computed tomographic analysis of urinary calculi

    Naito, Akira; Ito, Katsuhide; Ito, Shouko

    1986-01-01

    Computed tomography (CT) was employed in an effort to analyze the chemical composition of urinary calculi. Twenty-three surgically removed calculi were scanned in a water bath (in vitro study). Fourteen of them were scanned within the body (in vivo study). The calculi consisted of four types: mixed calcium oxalate and phosphate, mixed calcium carbonate and phosphate, magnesium ammonium phosphate, and uric acid. The in vitro study showed that the mean and maximum CT values of uric acid stones were significantly lower than those of the other three types of stones, indicating that stones with less than 450 HU are composed of uric acid. In the in vivo study, CT did not help to differentiate the three types of urinary calculi, except for uric acid stones. Regarding the mean CT values, there was no correlation between the in vitro and in vivo studies. An experiment with commercially available drugs showed that the CT values of urinary calculi depend not on the composition but on the density of the calculi. (Namekawa, K.)

  7. Analysis of computational vulnerabilities in digital repositories

    Valdete Fernandes Belarmino

    2015-04-01

    Objective. This paper presents the results of research that aimed to analyze the computational vulnerabilities of digital repositories at public universities. It argues the relevance of information in contemporary societies as an invaluable resource, emphasizing scientific information as an essential element of scientific progress. It characterizes the emergence of digital repositories, highlights their use in the academic environment to preserve, promote, disseminate, and encourage scientific production, and describes the main software for the construction of digital repositories. Method. The investigation identified and analyzed the vulnerabilities to which digital repositories are exposed by running penetration tests, discriminating the levels of risk and the types of vulnerabilities. Results. From a sample of 30 repositories, we were able to examine 20 and found that 5% of the repositories have critical vulnerabilities, 85% high, 25% medium, and 100% low. Conclusions. This demonstrates the need for actions in these environments that promote information security, minimizing the incidence of external and/or internal attacks on the systems.

  8. COSMOLOGICAL ADAPTIVE MESH REFINEMENT MAGNETOHYDRODYNAMICS WITH ENZO

    Collins, David C.; Xu Hao; Norman, Michael L.; Li Hui; Li Shengtai

    2010-01-01

    In this work, we present EnzoMHD, the extension of the cosmological code Enzo to include the effects of magnetic fields through the ideal magnetohydrodynamics approximation. We use a higher order Godunov method for the computation of interface fluxes. We use two constrained transport methods to compute the electric field from those interface fluxes, which simultaneously advances the induction equation and maintains the divergence of the magnetic field. A second-order divergence-free reconstruction technique is used to interpolate the magnetic fields in the block-structured adaptive mesh refinement framework already extant in Enzo. This reconstruction also preserves the divergence of the magnetic field to machine precision. We use operator splitting to include gravity and cosmological expansion. We then present a series of cosmological and non-cosmological test problems to demonstrate the quality of solution resulting from this combination of solvers.
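
    A small sketch of the divergence-free property that constrained transport maintains: with face-centred fields derived from a vector potential on a staggered grid, the discrete divergence vanishes to machine precision (the grid and data here are arbitrary):

    ```python
    import numpy as np

    nx, ny, dx, dy = 32, 32, 1.0 / 32, 1.0 / 32
    rng = np.random.default_rng(1)

    # Build B from a vector potential Az at cell corners:
    # Bx = dAz/dy, By = -dAz/dx (the 2D analogue of B = curl A).
    Az = rng.standard_normal((nx + 1, ny + 1))
    Bx = (Az[:, 1:] - Az[:, :-1]) / dy          # on x-faces: (nx+1, ny)
    By = -(Az[1:, :] - Az[:-1, :]) / dx         # on y-faces: (nx, ny+1)

    # Discrete divergence on cell centres stays at machine zero.
    div = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
    print("max |div B| =", np.abs(div).max())
    ```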

  9. Outcome of transvaginal mesh and tape removed for pain only.

    Hou, Jack C; Alhalabi, Feras; Lemack, Gary E; Zimmern, Philippe E

    2014-09-01

    Because there is reluctance to operate for pain, we evaluated midterm outcomes of vaginal mesh and synthetic suburethral tape removed for pain as the only indication. After receiving institutional review board approval, we reviewed a prospective database of women without a neurogenic condition who underwent surgery for vaginal mesh or suburethral tape removal, with a focus on pain as the single reason for removal and a minimum 6-month followup. The primary outcome was the pain level assessed by a visual analog scale (range 0 to 10) at baseline and at each subsequent visit, with the score at the last visit used for analysis. Parameters evaluated included demographics, mean time to presentation, and type of mesh or tape inserted. From 2005 to 2013, 123 patients underwent surgical removal of mesh (69) and suburethral tape (54) with pain as the only indication. Mean followup was 35 months (range 6 to 59) in the tape group and 22 months (range 6 to 47) in the mesh group. The visual analog scale score decreased from a mean preoperative level of 7.9 to 0.9 postoperatively (p = 0.0014) in the mesh group and from 5.3 to 1.5 (p = 0.00074) in the tape group. Pain-free status, considered a score of 0, was achieved in 81% of tape and 67% of mesh cases. No statistically significant difference was found between the groups. When pain is the only indication for suburethral tape or vaginal mesh removal, a significant decrease in the pain score can be durably expected after removal in most patients at midterm followup. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  10. Classification and Analysis of Computer Network Traffic

    Bujlow, Tomasz

    2014-01-01

    Traffic monitoring and analysis can be done for multiple different reasons: to investigate the usage of network resources, assess the performance of network applications, adjust Quality of Service (QoS) policies in the network, log the traffic to comply with the law, or create realistic models of traffic for academic purposes. We define the objective of this thesis as finding a way to evaluate the performance of various applications in a high-speed Internet infrastructure. To satisfy the obje...

  11. Finite element meshing approached as a global minimization process

    WITKOWSKI,WALTER R.; JUNG,JOSEPH; DOHRMANN,CLARK R.; LEUNG,VITUS J.

    2000-03-01

    The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to being able to automate the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical-analogy viewpoint is used to formulate the actual meshing problem, which constructs a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive, and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. After the particles are connected, a mesh can be easily resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range. Efficiency of the technique is still an issue that needs to be addressed. Performance is an issue that is critical for most engineers generating meshes; it was not for this project. The primary focus of this work was to investigate and evaluate a meshing algorithm/philosophy, with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries. Unfortunately, only simple geometries were tested.
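
    A toy version of the particle analogy, under assumed force and step parameters: particles repel one another inside a unit square, and gradient descent spreads them toward a uniform, mesh-like arrangement.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    p = rng.random((40, 2))                      # random initial particle cloud

    for _ in range(200):
        d = p[:, None, :] - p[None, :, :]        # pairwise displacement vectors
        r2 = (d ** 2).sum(-1) + np.eye(len(p))   # avoid self-interaction blowup
        force = (d / r2[..., None] ** 1.5).sum(axis=1)   # 1/r^2 repulsion
        p = np.clip(p + 1e-4 * force, 0.0, 1.0)  # step and confine to the domain

    # Nearest-neighbour spacing becomes much more uniform after relaxation.
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1) + np.eye(len(p))
    print("min spacing:", d.min().round(3))
    ```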

  12. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air-ingress-related models and verification and validation data is a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate, with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics model developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena, and a portion of the results for the density-driven stratified flow in the inlet pipe is compared with experimental results.

  13. An approach to quantum-computational hydrologic inverse analysis.

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
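
    Quantum annealers minimize quadratic unconstrained binary optimization (QUBO) objectives, so the essential step is recasting the inverse-problem misfit as a QUBO matrix. The toy sketch below is an illustration under simplifying assumptions (a made-up linear forward model and binary unknowns), not the authors' D-Wave formulation; the annealer's search is replaced by brute-force enumeration.

        import itertools
        import numpy as np

        # Hypothetical linear forward model A mapping binary subsurface
        # indicators x to observations y (illustration only).
        rng = np.random.default_rng(1)
        A = rng.normal(size=(6, 4))
        x_true = np.array([1, 0, 1, 1])
        y = A @ x_true

        # Cast the misfit ||A x - y||^2 into QUBO form x^T Q x (constant
        # dropped), using x_i^2 = x_i for binary variables.
        Q = A.T @ A
        Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ y)

        # An annealer would minimize x^T Q x; here we enumerate all 2^4 states.
        best = min(itertools.product([0, 1], repeat=4),
                   key=lambda x: np.array(x) @ Q @ np.array(x))
        print("recovered:", best, " true:", tuple(x_true))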

  14. DataView: a computational visualisation system for multidisciplinary design and analysis

    Wang, Chengen

    2016-01-01

    Rapidly processing raw data and effectively extracting underlying information from huge volumes of multivariate data have become essential to all decision-making processes in sectors like finance, government, medical care, climate analysis, industry, science, etc. Remarkably, visualisation is recognised as a fundamental technology that underpins human comprehension, cognition and utilisation of burgeoning amounts of heterogeneous data. This paper presents a computational visualisation system, named DataView, which has been developed for graphically displaying and capturing outcomes of multiphysics problem-solvers widely used in engineering fields. DataView is functionally composed of techniques for table/diagram representation, and graphical illustration of scalar, vector and tensor fields. The field visualisation techniques are implemented on the basis of a range of linear and non-linear meshes, which flexibly adapt to the disparate data representation schemas adopted by a variety of disciplinary problem-solvers. The visualisation system has been successfully applied to a number of engineering problems, of which some illustrations are presented to demonstrate the effectiveness of the visualisation techniques.

  15. Cafts: computer aided fault tree analysis

    Poucet, A.

    1985-01-01

    The fault tree technique has become a standard tool for the analysis of safety and reliability of complex systems. In spite of the costs, which may be high for a complete and detailed analysis of a complex plant, the fault tree technique is popular and its benefits are fully recognized. Computer codes for automated fault tree construction have been developed, but their applications have mostly been restricted to simple academic examples and rarely concern complex, real-world systems. In this paper an interactive approach to fault tree construction is presented. The aim is not to replace the analyst, but to offer him an intelligent tool which can assist him in modeling complex systems. Using the CAFTS method, the analyst interactively constructs a fault tree in two phases: (1) In the first phase he generates an overall failure logic structure of the system, the macrofault tree. In this phase, CAFTS features an expert system approach to assist the analyst. It makes use of a knowledge base containing generic rules on the behavior of subsystems and components; (2) In the second phase the macrofault tree is further refined and transformed into a fully detailed and quantified fault tree. In this phase a library of plant-specific component failure models is used

  16. COMPUTING

    I. Fisk

    2011-01-01

    Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and events that are harder to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facilities and Infrastructure Operations have been active with several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  17. COMPUTING

    P. McBride

    The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge (CSA06) last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  18. A coarse-mesh nodal method-diffusive-mesh finite difference method

    Joo, H.; Nichols, W.R.

    1994-01-01

    Modern nodal methods have been successfully used for conventional light water reactor core analyses where the homogenized, node average cross sections (XSs) and the flux discontinuity factors (DFs) based on equivalence theory can reliably predict core behavior. For other types of cores and other geometries characterized by tightly-coupled, heterogeneous core configurations, the intranodal flux shapes obtained from a homogenized nodal problem may not accurately portray steep flux gradients near fuel assembly interfaces or various reactivity control elements. This may require extreme values of DFs (either very large, very small, or even negative) to achieve a desired solution accuracy. Extreme values of DFs, however, can disrupt the convergence of the iterative methods used to solve for the node average fluxes, and can lead to a difficulty in interpolating adjacent DF values. Several attempts to remedy the problem have been made, but nothing has been satisfactory. A new coarse-mesh nodal scheme called the Diffusive-Mesh Finite Difference (DMFD) technique, as contrasted with the coarse-mesh finite difference (CMFD) technique, has been developed to resolve this problem. This new technique and the development of a few-group, multidimensional kinetics computer program are described in this paper

  19. GEPOIS: a two dimensional nonuniform mesh Poisson solver

    Quintenz, J.P.; Freeman, J.R.

    1979-06-01

    A computer code is described which solves Poisson's equation for the electric potential over a two dimensional cylindrical (r,z) nonuniform mesh which can contain internal electrodes. Poisson's equation is solved over a given region subject to a specified charge distribution with either Neumann or Dirichlet perimeter boundary conditions and with Dirichlet boundary conditions on internal surfaces. The static electric field is also computed over the region with special care given to normal electric field components at boundary surfaces

  20. Rapid Separation of Disconnected Triangle Meshes Based on Graph Traversal

    Ji, S J; Wang, Y

    2006-01-01

    In recent years, the STL file has become a de facto standard for model representation in CAD/CAM, computer graphics and reverse engineering. When a point cloud obtained by scanning an object with an optical instrument is used to reconstruct the original model, the point cloud is stored as an STL file. Usually, the data of several separate but related objects are stored in a single STL file; when such a file is processed by a computer, the data must first be separated, and then each element of every triangle patch of the triangle mesh is traversed, visited and calculated. The problem has been analyzed and studied by many experts, but a simple and quick algorithm is still lacking. This paper presents an algorithm which uses graph traversal to visit each element of the triangle meshes and separate the disconnected meshes; the algorithm increases the speed of searching and calculating the data of the triangle meshes, reduces the computer memory required, simplifies the data structure, and provides a solid foundation for the subsequent processing steps
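
    The separation idea can be sketched compactly. The code below is a minimal stand-in for the paper's graph-traversal algorithm (not a reproduction of it): triangles that share a vertex position are merged with union-find, and each resulting group is one disconnected mesh.

        import numpy as np

        def separate_components(triangles):
            # `triangles` is an (n, 3, 3) array of vertex coordinates, as read
            # from an STL file. Map identical coordinates to a single index.
            verts = {}
            tri_vids = []
            for tri in triangles:
                tri_vids.append([verts.setdefault(tuple(v), len(verts)) for v in tri])

            parent = list(range(len(verts)))        # union-find over vertices
            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]   # path halving
                    a = parent[a]
                return a
            def union(a, b):
                parent[find(a)] = find(b)

            for a, b, c in tri_vids:
                union(a, b); union(b, c)

            comps = {}                              # root vertex -> triangle ids
            for t, (a, _, _) in enumerate(tri_vids):
                comps.setdefault(find(a), []).append(t)
            return list(comps.values())

        # Two disjoint triangles -> two separated meshes
        tris = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                         [[5, 5, 5], [6, 5, 5], [5, 6, 5]]], dtype=float)
        print(separate_components(tris))            # [[0], [1]]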

  1. Mesh refinement of riser simulation with the aid of gamma transmission

    Lima Filho, Hilario J.B. de; Benachour, Mohand; Dantas, Carlos C.; Brito, Marcio F.P.; Santos, Valdemir A. dos

    2013-01-01

    Vertical circulating fluidized bed reactors (CFBR), in which the particulate and gas phases flow upward (riser), have been widely used in gasification, combustion and fluid catalytic cracking (FCC) processes. The efficiency of these two-phase (gas-solid) reactors depends largely on their hydrodynamic characteristics, which differ in the axial and radial directions. The axial distribution of solids shows a higher concentration at the base, becoming more dilute toward the top. Radially, the solids concentration is characterized as core-annular, in which the central region is highly dilute, consisting of dispersed particles and fluid. In the present work, a two-dimensional (2D) geometry was developed for computational fluid dynamics (CFD) simulations to predict the gas-solid flow in a CFBR riser through transient modeling based on the kinetic theory of granular flow. Refining the computational mesh provides more information on the parameters studied, but may increase the processing time of the simulations. A minimum number of cells for mesh construction was obtained by testing five meshes. The hydrodynamic parameters were validated using a 241Am gamma source and a NaI(Tl) detector. The numerical results were consistent with the experimental data, indicating that refining the computational mesh in a controlled manner improves the approximation to the expected results. (author)
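
    A common way to make the "minimum number of cells" decision quantitative is a grid-convergence check on a monitored quantity from three successively refined meshes. The values below are hypothetical, chosen only to show the arithmetic of Richardson extrapolation; they are not the paper's data.

        import math

        # Hypothetical values of a monitored quantity (e.g., mean solids holdup)
        # on coarse, medium and fine meshes with a constant refinement ratio r = 2.
        f_coarse, f_medium, f_fine = 0.1240, 0.1180, 0.1165
        r = 2.0

        # Observed order of convergence and Richardson-extrapolated value.
        p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
        f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)

        print(f"observed order p = {p:.2f}")        # ~2 for a second-order scheme
        print(f"extrapolated value = {f_exact:.5f}")
        print(f"fine-mesh error ~ {abs(f_fine - f_exact):.2e}")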

  2. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods have been applied to chest computed tomography (CT) in patients with IPF, including the mean CT value of the whole lungs, density histogram analysis, the density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary function, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologists for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  3. Emerging medical informatics research trends detection based on MeSH terms.

    Lyu, Peng-Hui; Yao, Qiang; Mao, Jin; Zhang, Shi-Jing

    2015-01-01

    The aim of this study is to analyze the research trends of medical informatics over the last 12 years. A new method based on MeSH terms was proposed to identify emerging topics and trends in medical informatics research. Informetric methods and visualization technologies were applied to investigate these trends. A perspective factor (PF) metric based on MeSH terms was employed to assess the perspective quality of journals. The emerging MeSH terms have changed dramatically over the last 12 years, identifying two stages of medical informatics: the "medical imaging stage" and the "medical informatics stage". The focus of medical informatics has shifted from the acquisition and storage of healthcare data, by integrating computational, informational, cognitive and organizational sciences, to semantic analysis for problem solving and clinical decision-making. About 30 core journals in this area over the last 3 years were identified using Bradford's Law. These journals, with high PF values, have relatively high perspective quality and lead the trend of medical informatics.

  4. Conference “Computational Analysis and Optimization” (CAO 2011)

    Tiihonen, Timo; Tuovinen, Tero; Numerical Methods for Differential Equations, Optimization, and Technological Problems : Dedicated to Professor P. Neittaanmäki on His 60th Birthday

    2013-01-01

    This book contains the results in numerical analysis and optimization presented at the ECCOMAS thematic conference “Computational Analysis and Optimization” (CAO 2011) held in Jyväskylä, Finland, June 9–11, 2011. Both the conference and this volume are dedicated to Professor Pekka Neittaanmäki on the occasion of his sixtieth birthday. It consists of five parts that are closely related to his scientific activities and interests: Numerical Methods for Nonlinear Problems; Reliable Methods for Computer Simulation; Analysis of Noised and Uncertain Data; Optimization Methods; Mathematical Models Generated by Modern Technological Problems. The book also includes a short biography of Professor Neittaanmäki.

  5. Computer code for qualitative analysis of gamma-ray spectra

    Yule, H.P.

    1979-01-01

    Computer code QLN1 provides complete analysis of gamma-ray spectra observed with Ge(Li) detectors and is used at both the National Bureau of Standards and the Environmental Protection Agency. It locates peaks, resolves multiplets, identifies component radioisotopes, and computes quantitative results. The qualitative-analysis (or component identification) algorithms feature thorough, self-correcting steps which provide accurate isotope identification in spite of errors in peak centroids, energy calibration, and other typical problems. The qualitative-analysis algorithm is described in this paper

  6. A single-chip computer analysis system for liquid fluorescence

    Zhang Yongming; Wu Ruisheng; Li Bin

    1998-01-01

    The single-chip computer analysis system for liquid fluorescence is an intelligent analytical instrument based on the principle that liquids containing hydrocarbons emit several characteristic fluorescences when irradiated by strong light. Besides a single-chip computer, the system makes use of the keyboard and the calculation and printing functions of a CASIO printing calculator. It combines optics, mechanics and electronics into one unit, and is small, light and practical, so it can be used for surface water sample analysis in oil fields and impurity analysis of other materials

  7. A Computational Discriminability Analysis on Twin Fingerprints

    Liu, Yu; Srihari, Sargur N.

    Sharing similar genetic traits makes the investigation of twins an important study in forensics and biometrics. Fingerprints are one of the most commonly found types of forensic evidence. The similarity between twins' prints is critical to establishing the reliability of fingerprint identification. We present a quantitative analysis of the discriminability of twin fingerprints on a new data set (227 pairs of identical and fraternal twins) recently collected from a twin population, using both level 1 and level 2 features. Although the patterns of minutiae among twins are more similar than in the general population, the similarity of twins' fingerprints is significantly different from that between genuine prints of the same finger. Twins' fingerprints are discriminable, with a 1.5%-1.7% higher EER than non-twins', and identical twins can be distinguished by fingerprint examination with a slightly higher error rate than fraternal twins.

  8. Content Analysis of a Computer-Based Faculty Activity Repository

    Baker-Eveleth, Lori; Stone, Robert W.

    2013-01-01

    The research presents an analysis of faculty opinions regarding the introduction of a new computer-based faculty activity repository (FAR) in a university setting. The qualitative study employs content analysis to better understand the phenomenon underlying these faculty opinions and to augment the findings from a quantitative study. A web-based…

  9. Computer-Aided Communication Satellite System Analysis and Optimization.

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  10. Multigrid for refined triangle meshes

    Shapira, Yair

    1997-02-01

    A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.

  11. Mesh Network Architecture for Enabling Inter-Spacecraft Communication

    Becker, Christopher; Merrill, Garrick

    2017-01-01

    To enable communication between spacecraft operating in a formation or small constellation, a mesh network architecture was developed and tested using a time division multiple access (TDMA) communication scheme. The network is designed to allow for the exchange of telemetry and other data between spacecraft to enable collaboration between small spacecraft. The system uses a peer-to-peer topology with no central router, so that it does not have a single point of failure. The mesh network is dynamically configurable to allow for addition and subtraction of new spacecraft into the communication network. Flight testing was performed using an unmanned aerial system (UAS) formation acting as a spacecraft analogue and providing a stressing environment to prove mesh network performance. The mesh network was primarily devised to provide low latency, high frequency communication but is flexible and can also be configured to provide higher bandwidth for applications desiring high data throughput. The network includes a relay functionality that extends the maximum range between spacecraft in the network by relaying data from node to node. The mesh network control is implemented completely in software making it hardware agnostic, thereby allowing it to function with a wide variety of existing radios and computing platforms.
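
    The relay behavior is easy to see in a toy simulation. The sketch below is hypothetical (it is not the flight software and ignores radio details): each node broadcasts in its own TDMA slot, receivers merge whatever they have heard, and a message hops from A to D even though the two are out of direct range.

        import itertools

        RANGE = 1.5
        nodes = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (2.0, 0.0), "D": (3.0, 0.0)}

        def in_range(a, b):
            (x1, y1), (x2, y2) = nodes[a], nodes[b]
            return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= RANGE ** 2

        inbox = {n: set() for n in nodes}    # message ids each node has seen
        inbox["A"].add("telemetry-1")        # A originates one message

        for slot, sender in zip(range(12), itertools.cycle(nodes)):
            for rx in nodes:                 # broadcast in this node's TDMA slot
                if rx != sender and in_range(sender, rx):
                    inbox[rx] |= inbox[sender]   # receivers relay in later slots

        print(inbox)   # D receives the message although A is out of direct range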

  12. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    Beckingsale, D. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Gaudin, W. P. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Hornung, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Herdman, J. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Jarvis, S. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.

  13. Kinetic solvers with adaptive mesh in phase space

    Arslanbekov, Robert R.; Kolobov, Vladimir I.; Frolova, Anna A.

    2013-12-01

    An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a “tree of trees” (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.

  14. Subcooled decompression analysis of the ROSA and the LOFT semiscale blowdown test data with the digital computer code DEPCO-MULTI

    Namatame, Ken; Kobayashi, Kensuke

    1975-12-01

    In the ROSA (Rig of Safety Assessment) program, the digital computer codes DEPCO-SINGLE and DEPCO-MULTI (Subcooled Decompression Process in Loss-of-Coolant Accident - Single Pipe and - Multiple Pipe Network) were prepared to study the thermo-hydraulic behavior of the primary coolant during subcooled decompression in a PWR LOCA. The analytical results with DEPCO-MULTI for the subcooled decompression phenomena are presented for the ROSA-I, ROSA-II and LOFT 500, 600, 700 and 800 series experiments. The effects of spatial mesh length, elasticity of the pressure boundary materials, and simplification of the computational piping system on the computed results are described. This is the final work on subcooled decompression analysis within the ROSA program, and the authors hope that the present code will be further examined against data from more advanced experiments. (auth.)

  15. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
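
    The article's examples are in MATLAB and R; the same embarrassingly parallel pattern looks like this in Python, with a deliberately trivial stand-in for the simulation model.

        import multiprocessing as mp
        import random

        def one_replication(seed):
            # One independent run of a (toy) risk simulation model.
            rng = random.Random(seed)
            loss = sum(rng.random() for _ in range(1000))  # stand-in for a real model
            return loss

        if __name__ == "__main__":
            seeds = range(10_000)                  # one seed per replication
            with mp.Pool() as pool:                # defaults to all available cores
                losses = pool.map(one_replication, seeds)
            losses.sort()
            print("mean loss:", sum(losses) / len(losses))
            print("95th percentile:", losses[int(0.95 * len(losses))])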

  16. Computer-Aided Qualitative Data Analysis with Word

    Bruno Nideröst

    2002-05-01

    Despite some fragmentary references in the literature about qualitative methods, it is fairly unknown that Word can be successfully used for computer-aided qualitative data analysis (QDA). Based on several Word standard operations, elementary QDA functions such as sorting data, code-and-retrieve and frequency counts can be realized. Word is particularly interesting for those users who wish to have first experiences with computer-aided analysis before investing time and money in a specialized QDA program. The well-known standard software could also be an option for those qualitative researchers who usually work with word processing but have certain reservations towards computer-aided analysis. The following article deals with the most important requirements and options of Word for computer-aided QDA. URN: urn:nbn:de:0114-fqs0202225

  17. Computer programs for analysis of geophysical data

    Rozhkov, M.; Nakanishi, K.

    1994-06-01

    This project is oriented toward the application of the mobile seismic array data analysis technique in seismic investigations of the Earth (the noise-array method). The technique falls into the class of emission tomography methods but, in contrast to classic tomography, 3-D images of the microseismic activity of the media are obtained by passive seismic antenna scanning of the half-space, rather than by solution of the inverse Radon's problem. It is reasonable to expect that areas of geothermal activity, active faults, areas of volcanic tremors and hydrocarbon deposits act as sources of intense internal microseismic activity or as effective sources for scattered (secondary) waves. The conventional approaches of seismic investigations of a geological medium include measurements of time-limited determinate signals from artificial or natural sources. However, the continuous seismic oscillations, like endogenous microseisms, coda and scattering waves, can give very important information about the structure of the Earth. The presence of microseismic sources or inhomogeneities within the Earth results in the appearance of coherent seismic components in a stochastic wave field recorded on the surface by a seismic array. By careful processing of seismic array data, these coherent components can be used to develop a 3-D model of the microseismic activity of the media or images of the noisy objects. Thus, in contrast to classic seismology where narrow windows are used to get the best time resolution of seismic signals, our model requires long record length for the best spatial resolution.

  18. Resterilized Polypropylene Mesh for Inguinal Hernia Repair

    2018-04-19

    Conclusion: The use of sterilized polypropylene mesh for the repair of inguinal ... and nonabsorbable materials to reduce the tissue–mesh. INTRODUCTION ... which we have been practicing in our center since we introduced ...

  19. Management of complications of mesh surgery.

    Lee, Dominic; Zimmern, Philippe E

    2015-07-01

    Transvaginal placements of synthetic mid-urethral slings and vaginal meshes have largely superseded traditional tissue repairs in the current era because of presumed efficacy and ease of implant with device 'kits'. The use of synthetic material has generated novel complications including mesh extrusion, pelvic and vaginal pain and mesh contraction. In this review, our aim is to discuss the management, surgical techniques and outcomes associated with mesh removal. Recent publications have seen an increase in presentation of these mesh-related complications, and reports from multiple tertiary centers have suggested that not all patients benefit from surgical intervention. Although the true incidence of mesh complications is unknown, recent publications can serve to guide physicians and inform patients of the surgical outcomes from mesh-related complications. In addition, the literature highlights the growing need for a registry to account for a more accurate reporting of these events and to counsel patients on the risk and benefits before proceeding with mesh surgeries.

  1. h-Adaptive Mesh Generation using Electric Field Intensity Value as a Criterion (in Japanese)

    Toyonaga, Kiyomi; Cingoski, Vlatko; Kaneda, Kazufumi; Yamashita, Hideo

    1994-01-01

    Fine mesh divisions are essential to obtain accurate solutions in two-dimensional electric field analysis, and generating suitably fine mesh divisions requires technical knowledge. In electric field problems, analysts are usually interested in the electric field intensity and its distribution. In order to obtain the electric field intensity with high accuracy, we have developed an adaptive mesh generator using the electric field intensity value as a criterion.
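
    The refinement criterion can be sketched schematically in one dimension (the paper addresses two-dimensional finite element analysis; the analytic field below is only an assumed stand-in for a computed solution): cells whose midpoint intensity exceeds a threshold are bisected, concentrating nodes where the field is strong.

        import numpy as np

        def adapt(xs, field, threshold, passes=3):
            # Bisect every cell whose midpoint field intensity exceeds the
            # threshold; repeat for a fixed number of adaptation passes.
            for _ in range(passes):
                refined = [xs[0]]
                for left, right in zip(xs[:-1], xs[1:]):
                    mid = 0.5 * (left + right)
                    if field(mid) > threshold:
                        refined.append(mid)      # split this cell in two
                    refined.append(right)
                xs = np.array(refined)
            return xs

        E = lambda x: 1.0 / (0.05 + abs(x - 0.3))   # assumed |E|, peaking at x = 0.3
        mesh = adapt(np.linspace(0.0, 1.0, 11), E, threshold=4.0)
        print(np.round(mesh, 3))                    # cells cluster around the peak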

  2. Introducing remarks upon the analysis of computer systems performance

    Baum, D.

    1980-05-01

    Some of the basic ideas of analytical techniques to study the behaviour of computer systems are presented. Single systems as well as networks of computers are viewed as stochastic dynamical systems which may be modelled by queueing networks. This report therefore primarily serves as an introduction to probabilistic methods for qualitative analysis of systems. It is supplemented by an application example of Chandy's collapsing method. (orig.)

  3. Computer-aided visualization and analysis system for sequence evaluation

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  4. Strategic Analysis of Autodesk and the Move to Cloud Computing

    Kewley, Kathleen

    2012-01-01

    This paper provides an analysis of the opportunity for Autodesk to move its core technology to a cloud delivery model. Cloud computing offers clients a number of advantages, such as lower costs for computer hardware, increased access to technology and greater flexibility. With the IT industry embracing this transition, software companies need to plan for future change and lead with innovative solutions. Autodesk is in a unique position to capitalize on this market shift, as it is the leader i...

  5. Radiation heat transfer model using Monte Carlo ray tracing method on hierarchical ortho-Cartesian meshes and non-uniform rational basis spline surfaces for description of boundaries

    Kuczyński Paweł

    2014-06-01

    The paper deals with a solution of radiation heat transfer problems in enclosures filled with a nonparticipating medium using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. An ortho-Cartesian mesh involving boundary elements is created based upon the CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define the boundaries of the enclosure, allowing the method to deal with domains of complex shapes. An algorithm for determining random, uniformly distributed locations of rays leaving NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to extend the functionality of the model to problems in absorbing, emitting and scattering media, iteratively projecting the results of the radiative analysis onto the CFD mesh and the CFD solution onto the radiative mesh.
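
    The uniform-ray-origin step can be sketched for any parametric patch. The code below assumes a simple analytic surface in place of a true NURBS evaluation (which would need a spline library): parameter-space proposals are accepted with probability proportional to the local area element, so accepted points are uniform over the curved surface.

        import numpy as np

        rng = np.random.default_rng(2)

        def surface(u, v):
            # Curved parametric patch standing in for a NURBS surface (hypothetical).
            return np.array([u, v, u * v])

        def area_element(u, v, h=1e-5):
            # |S_u x S_v| by central differences: the local area stretching.
            su = (surface(u + h, v) - surface(u - h, v)) / (2 * h)
            sv = (surface(u, v + h) - surface(u, v - h)) / (2 * h)
            return np.linalg.norm(np.cross(su, sv))

        # Upper bound on the area element for rejection sampling.
        j_max = max(area_element(u, v) for u in np.linspace(0, 1, 21)
                                       for v in np.linspace(0, 1, 21))

        def sample_origin():
            # Accept (u, v) in proportion to the area element -> uniform over area.
            while True:
                u, v = rng.random(), rng.random()
                if rng.random() * j_max <= area_element(u, v):
                    return surface(u, v)

        print(np.round([sample_origin() for _ in range(5)], 3))  # ray start points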

  6. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
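
    The relocation principle, equidistributing a monitor function so that nodes concentrate at the front, has a compact one-dimensional form. This is a schematic sketch with an assumed front position, not the authors' moving mesh PDE solver:

        import numpy as np

        def equidistribute(x, monitor, n_nodes):
            # Place nodes so each cell holds an equal share of the monitor
            # integral; nodes cluster where the monitor is large.
            m = monitor(x)
            cumulative = np.concatenate(
                [[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
            targets = np.linspace(0.0, cumulative[-1], n_nodes)
            return np.interp(targets, cumulative, x)  # invert the cumulative integral

        # Arc-length-type monitor for a steep infiltration front assumed at x = 0.6
        front = lambda x: np.sqrt(1.0 + (50.0 / np.cosh(50.0 * (x - 0.6)) ** 2) ** 2)
        x_fine = np.linspace(0.0, 1.0, 2001)          # quadrature grid
        mesh = equidistribute(x_fine, front, n_nodes=21)
        print(np.round(mesh, 3))                      # nodes cluster near 0.6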

  7. Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling

    Gorman, G.J.; Pain, Ch. C.; Oliveira, C.R.E. de; Umpleby, A.P.; Goddard, A.J.H.

    2003-01-01

    In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry), or internal boundaries (regions) of the domain. The objective is to obtain a mesh which has both a uniform interpolation error in any direction and the element shapes are of good quality. This is accomplished with use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)
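
    The Hessian-based metric construction admits a brief sketch. The scaling below follows the common interpolation-error heuristic (target element size proportional to |lambda|^(-1/2) along each eigendirection, clamped to assumed bounds); it illustrates the idea rather than the authors' exact functional:

        import numpy as np

        def metric_from_hessian(H, eps=1e-3, h_min=0.01, h_max=0.5):
            # Element sizes scale like |lambda|^(-1/2) along each eigendirection,
            # clamped so the metric stays positive definite and bounded.
            lam, vecs = np.linalg.eigh(H)
            sizes = np.clip(1.0 / np.sqrt(np.abs(lam) + eps), h_min, h_max)
            M = vecs @ np.diag(1.0 / sizes**2) @ vecs.T  # M = R diag(1/h^2) R^T
            return M, sizes

        # Hessian of a field varying sharply in x, gently in y (hypothetical values)
        H = np.array([[400.0, 0.0],
                      [0.0,   4.0]])
        M, sizes = metric_from_hessian(H)
        # 0.5 along the gentle direction, 0.05 across the sharp one
        print("target sizes along eigendirections:", sizes)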

  8. Mesh-morphing algorithms for specimen-specific finite element modeling.

    Sigal, Ian A; Hardisty, Michael R; Whyne, Cari M

    2008-01-01

    Despite recent advances in software for meshing specimen-specific geometries, considerable effort is still often required to produce and analyze specimen-specific models suitable for biomechanical analysis through finite element modeling. We hypothesize that it is possible to obtain accurate models by adapting a pre-existing geometry to represent a target specimen using morphing techniques. Here we present two algorithms for morphing, automated wrapping (AW) and manual landmarks (ML), and demonstrate their use to prepare specimen-specific models of caudal rat vertebrae. We evaluate the algorithms by measuring the distance between target and morphed geometries and by comparing response to axial loading simulated with finite element (FE) methods. First a traditional reconstruction process based on microCT was used to obtain two natural specimen-specific FE models. Next, the two morphing algorithms were used to compute mappings from the surface of one model, the source, to the other, the target, and to use this mapping to morph the source mesh to produce a target mesh. The microCT images were then used to assign element-specific material properties. In AW the mappings were obtained by wrapping the source and target surfaces with an auxiliary triangulated surface. In ML, landmarks were manually placed on corresponding locations on the surfaces of both source and target. Both morphing algorithms were successful in reproducing the shape of the target vertebra with a median distance between natural and morphed models of 18.8 and 32.2 microm, respectively, for AW and ML. Whereas AW-morphing produced a surface more closely resembling that of the target, ML guaranteed correspondence of the landmark locations between source and target. Morphing preserved the quality of the mesh producing models suitable for FE simulation. Moreover, there were only minor differences between natural and morphed models in predictions of deformation, strain and stress. We therefore conclude that

  9. Computational Aspects of Dam Risk Analysis: Findings and Challenges

    Ignacio Escuder-Bueno

    2016-09-01

    In recent years, risk analysis techniques have proved to be a useful tool to inform dam safety management. This paper summarizes the outcomes of three themes related to dam risk analysis discussed in the Benchmark Workshops organized by the International Commission on Large Dams Technical Committee on “Computational Aspects of Analysis and Design of Dams.” In the 2011 Benchmark Workshop, estimation of the probability of failure of a gravity dam for the sliding failure mode was discussed. Next, in 2013, the discussion focused on the computational challenges of the estimation of consequences in dam risk analysis. Finally, in 2015, the probability of sliding and overtopping in an embankment was analyzed. These Benchmark Workshops have allowed a complete review of numerical aspects for dam risk analysis, showing that risk analysis methods are a very useful tool to analyze the risk of dam systems, including downstream consequence assessments and the uncertainty of structural models.

  10. A SURVEY ON DOCUMENT CLUSTERING APPROACH FOR COMPUTER FORENSIC ANALYSIS

    Monika Raghuvanshi*, Rahul Patel

    2016-01-01

    In a forensic analysis, large numbers of files are examined. Much of the information is in unstructured format, so the analysis is quite a difficult task for computer forensics. Performing forensic analysis of documents within a limited period of time therefore requires a special approach such as document clustering. This paper reviews different document clustering algorithms and methodologies, for example k-means, k-medoid, single link, complete link, and average link, in accordance...

  11. Numeric computation and statistical data analysis on the Java platform

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  12. PIXAN: the Lucas Heights PIXE analysis computer package

    Clayton, E.

    1986-11-01

    To fully utilise the multielement capability and short measurement time of PIXE it is desirable to have an automated computer evaluation of the measured spectra. Because of the complex nature of PIXE spectra, a critical step in the analysis is the data reduction, in which the areas of characteristic peaks in the spectrum are evaluated. In this package the computer program BATTY is presented for such an analysis. The second step is to determine element concentrations, knowing the characteristic peak areas in the spectrum. This requires a knowledge of the expected X-ray yield for that element in the sample. The computer program THICK provides that information for both thick and thin PIXE samples. Together, these programs form the package PIXAN used at Lucas Heights for PIXE analysis

  13. Evaluation of the generality and accuracy of a new mesh morphing procedure for the human femur.

    Grassi, Lorenzo; Hraiech, Najah; Schileo, Enrico; Ansaloni, Mauro; Rochette, Michel; Viceconti, Marco

    2011-01-01

    Various papers have described mesh morphing techniques for computational biomechanics, but none of them provided a quantitative assessment of generality, robustness, automation, and accuracy in predicting strains. This study aims to quantitatively evaluate the performance of a novel mesh-morphing algorithm. A mesh-morphing algorithm based on radial-basis functions and on manual selection of corresponding landmarks on template and target was developed. The periosteal geometries of 100 femurs were derived from a computed tomography scan database and used to test the algorithm generality in producing finite element (FE) morphed meshes. A published benchmark, consisting of eight femurs for which in vitro strain measurements and standard FE model strain prediction accuracy were available, was used to assess the accuracy of morphed FE models in predicting strains. Relevant parameters were identified to test the algorithm robustness to operative conditions. Time and effort needed were evaluated to define the algorithm degree of automation. Morphing was successful for 95% of the specimens, with mesh quality indicators comparable to those of standard FE meshes. Accuracy of the morphed meshes in predicting strains was good (R(2)>0.9, low RMSE%); robustness was good with respect to the operative conditions tested (p>0.05) and partial with respect to the number of landmarks used. Producing a morphed mesh starting from the triangularized geometry of the specimen requires on average 10 min. The proposed method is general, robust, automated, and accurate enough to be used in bone FE modelling from diagnostic data, and prospectively in applications such as statistical shape modelling. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
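
    The landmark-driven core of such a morphing procedure can be sketched compactly. The code below is a minimal Gaussian-RBF version with an assumed kernel width, not the authors' implementation: a small linear solve makes the source landmarks map exactly onto the target landmarks, and the same weights interpolate displacements for all other mesh nodes.

        import numpy as np

        def rbf_morph(src_lm, dst_lm, pts, sigma=1.0):
            # Gaussian RBF kernel between two point sets.
            phi = lambda a, b: np.exp(
                -np.linalg.norm(a[:, None] - b[None, :], axis=2) ** 2
                / (2 * sigma ** 2))
            # Weights that carry source landmarks exactly onto target landmarks.
            weights = np.linalg.solve(phi(src_lm, src_lm), dst_lm - src_lm)
            # Interpolated displacement applied to every mesh node.
            return pts + phi(pts, src_lm) @ weights

        src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        dst = src + np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.1], [0.2, 0.1]])
        nodes = np.array([[0.5, 0.5], [0.25, 0.75]])      # template mesh nodes
        print(np.round(rbf_morph(src, dst, nodes), 3))    # displaced toward targets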

  14. Automated uncertainty analysis methods in the FRAP computer codes

    Peck, S.O.

    1980-01-01

    A user oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts
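
    In its simplest form, propagating known input uncertainties through a code is direct Monte Carlo sampling over the inputs; the sketch below uses a hypothetical closed-form stand-in for a fuel behavior code with assumed input distributions (the capability described above is automated and more elaborate than this).

        import numpy as np

        rng = np.random.default_rng(3)

        def fuel_model(gap_conductance, power):
            # Hypothetical closed-form stand-in for a fuel behavior code
            # (arbitrary units); a real study would run the code itself.
            return 500.0 + power / gap_conductance

        n = 20_000
        gap = rng.normal(loc=0.5, scale=0.05, size=n)    # assumed input uncertainty
        power = rng.normal(loc=30.0, scale=2.0, size=n)  # assumed input uncertainty
        temps = fuel_model(gap, power)

        print(f"mean: {temps.mean():.1f}  std: {temps.std():.1f}")
        print("95% interval:", np.percentile(temps, [2.5, 97.5]).round(1))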

  15. Conceptual design of pipe whip restraints using interactive computer analysis

    Rigamonti, G.; Dainora, J.

    1975-01-01

    Protection against pipe break effects necessitates a complex interaction between failure mode analysis, piping layout, and structural design. Many iterations are required to finalize structural designs and equipment arrangements. The magnitude of the pipe break loads transmitted by the pipe whip restraints to structural embedments precludes the application of conservative design margins. A simplified analytical formulation of the nonlinear dynamic problems associated with pipe whip has been developed and applied using interactive computer analysis techniques. In the dynamic analysis, the restraint and the associated portion of the piping system are modeled using the finite element lumped mass approach to properly reflect the dynamic characteristics of the piping/restraint system. The analysis is performed as a series of piecewise linear increments. Each of these linear increments is terminated by either formation of plastic conditions or closing/opening of gaps. The stiffness matrix is modified to reflect the changed stiffness characteristics of the system and re-started using the previous boundary conditions. The formation of yield hinges is related to the plastic moment of the section and unloading paths are automatically considered. The conceptual design of the piping/restraint system is performed using interactive computer analysis. The application of the simplified analytical approach with interactive computer analysis results in an order of magnitude reduction in engineering time and computer cost. (Auth.)

  16. Computer aided plant engineering: An analysis and suggestions for computer use

    Leinemann, K.

    1979-09-01

    To provide guidance and boundary conditions for computer use in plant engineering, an analysis of the engineering process was done. The structure of plant engineering is represented by a network of subtasks and subsets of data which are to be manipulated. The main tool for integration of CAD subsystems in plant engineering should be a central database, which is described by characteristic requirements and a possible simple conceptual schema. The main features of an interactive system for computer aided plant engineering are briefly illustrated by two examples. The analysis leads to the conclusion that an interactive graphic system for manipulation of net-like structured data, usable for various subtasks, should be the basis for computer aided plant engineering. (orig.)

  17. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  18. HYDRA-II: A hydrothermal analysis computer code: Volume 1, Equations and numerics

    McCann, R.A.

    1987-04-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. This volume, Volume I - Equations and Numerics, describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. The final volume, Volume III - Verification/Validation Assessments, presents results of numerical simulations of single- and multiassembly storage systems and comparisons with experimental data. 4 refs

  19. User Manual for the PROTEUS Mesh Tools

    Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, Emily R [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-09-19

    PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable for PROTEUS, we have constructed several tools which allow further mesh and geometry construction (e.g., mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial

  20. Investigating the computer analysis of eddy current NDT data

    Brown, R.L.

    1979-01-01

    The objective of this activity was to investigate and develop techniques for computer analysis of eddy current nondestructive testing (NDT) data. A single frequency commercial eddy current tester and a precision mechanical scanner were interfaced with a PDP-11/34 computer to obtain and analyze eddy current data from samples of 316 stainless steel tubing containing known discontinuities. Among the data analysis techniques investigated were: correlation, Fast Fourier Transforms (FFT), clustering, and Adaptive Learning Networks (ALN). The results were considered encouraging. ALN, for example, correctly identified 88% of the defects and non-defects from a group of 153 signal indications

  1. First Experiences with LHC Grid Computing and Distributed Analysis

    Fisk, Ian

    2010-01-01

    This presentation reviewed the experiences of the LHC experiments with grid computing, focusing on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  2. Visualization and Data Analysis for High-Performance Computing

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso, on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, and "big data"; the lecture closes with an analysis example.
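
    For readers unfamiliar with VTK, one of the libraries listed above, a minimal Python rendering pipeline looks like the following generic example (not taken from the lecture slides):

    ```python
    # Minimal VTK pipeline: source -> mapper -> actor -> renderer -> window.
    # Generic "hello world" for the VTK topic in the slides, not their code.
    import vtk

    sphere = vtk.vtkSphereSource()              # generate sphere polydata
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(sphere.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    window.Render()
    interactor.Start()                          # opens an interactive view
    ```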

  3. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2013-05-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. A tetrahedral mesh built on a point cloud is a convenient image representation method: intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For a given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two-stage process. First, a noisy image is reconstructed on a finely spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant-intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom, and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio
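
    The ML-EM reconstruction of node intensities mentioned above uses the standard multiplicative update; below is a minimal dense-matrix sketch of that iteration (the paper's analytically computed system matrix and tetrahedral mesh handling are not reproduced):

    ```python
    import numpy as np

    def ml_em(A, y, n_iter=50):
        """Standard ML-EM iteration: x <- x / (A^T 1) * A^T (y / (A x)).

        A : (n_bins, n_nodes) system matrix mapping node intensities to
        projection bins; y : measured projections. Dense toy version.
        """
        x = np.ones(A.shape[1])            # uniform initial node intensities
        sens = A.sum(axis=0)               # sensitivity term A^T 1
        for _ in range(n_iter):
            proj = A @ x                   # forward projection
            ratio = y / np.maximum(proj, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    # Toy example: random system matrix and a known "phantom"
    rng = np.random.default_rng(0)
    A = rng.random((200, 50))
    x_true = rng.random(50)
    x_rec = ml_em(A, A @ x_true)
    ```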

  5. TESS: A RELATIVISTIC HYDRODYNAMICS CODE ON A MOVING VORONOI MESH

    Duffell, Paul C.; MacFadyen, Andrew I.

    2011-01-01

    We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multidimensional systems of conservation laws in finite-volume form. The mesh-generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical, and cylindrical coordinates and is modular, so that auxiliary physics solvers are readily integrated into the TESS framework and so that it can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.
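
    A minimal sketch of the dynamic-tessellation idea, assuming SciPy's Voronoi routine and a prescribed point velocity field (this is not TESS's implementation):

    ```python
    import numpy as np
    from scipy.spatial import Voronoi

    def advance_mesh(points, velocity, dt, n_steps):
        """Move mesh-generating points and re-tessellate each step.

        Toy dynamic Voronoi mesh: points are advected with a prescribed
        velocity field (Lagrangian if 'velocity' is the local fluid
        velocity; zero velocity recovers a fixed Eulerian mesh).
        """
        for _ in range(n_steps):
            points = points + dt * velocity(points)
            vor = Voronoi(points)          # rebuild cells around moved points
        return points, vor

    # Example: rigid rotation of 100 generator points about (0.5, 0.5)
    rng = np.random.default_rng(1)
    pts = rng.random((100, 2))
    rotate = lambda p: np.column_stack([-(p[:, 1] - 0.5), p[:, 0] - 0.5])
    pts, vor = advance_mesh(pts, rotate, dt=0.01, n_steps=10)
    ```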

  6. Analysis of the computed tomography in the acute abdomen

    Hochhegger, Bruno; Moraes, Everton; Haygert, Carlos Jesus Pereira; Antunes, Paulo Sergio Pase; Gazzoni, Fernando; Lopes, Luis Felipe Dias

    2007-01-01

    Introduction: This study aims to assess the capacity of computed tomography to assist in the diagnosis and management of the acute abdomen. Material and method: This is a longitudinal, prospective study of patients with a diagnosis of acute abdomen. A total of 105 cases of acute abdomen were obtained, and after application of the exclusion criteria, 28 patients were included in the study. Results: Computed tomography changed the physicians' diagnostic hypothesis in 50% of the cases (p < 0.05); 78.57% of the patients had surgical indication before computed tomography and 67.86% after computed tomography (p = 0.0546). Accurate diagnosis by computed tomography, when compared with the anatomopathological examination and the final diagnosis, was observed in 82.14% of the cases (p = 0.013). When the analysis was performed dividing the patients into surgical and nonsurgical groups, an accuracy of 89.28% was obtained (p < 0.0001). A difference of 7.2 days of hospitalization (p = 0.003) was obtained compared with the mean for acute abdomen managed without computed tomography. Conclusion: Computed tomography correlates with the anatomopathology and has great accuracy for surgical indication; it increases physicians' diagnostic confidence, reduces hospitalization time, reduces the number of surgeries, and is cost-effective. (author)

  7. Computational mathematics models, methods, and analysis with Matlab and MPI

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  8. Convergence Analysis of a Class of Computational Intelligence Approaches

    Junfeng Chen

    2013-01-01

    Computational intelligence is a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, their convergence is difficult to analyze. In this paper, a computational model is built up for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Two quantification indices, the variation rate and the progress rate, are then defined to indicate, respectively, the variety and the optimality of the solution sets generated in the search process of the model. Moreover, four types of probabilistic convergence are given for the solution set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.
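
    One of the canonical forms the paper's model covers is particle swarm optimization; the textbook update below is a generic sketch (parameter values are illustrative, not from the paper):

    ```python
    import numpy as np

    def pso(f, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
        """Canonical particle swarm optimization: each particle tracks its
        personal best, the swarm tracks a global best, and velocities mix
        inertia with attraction toward both bests."""
        rng = np.random.default_rng(0)
        x = rng.uniform(-5, 5, (n_particles, dim))   # positions
        v = np.zeros_like(x)                         # velocities
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        gbest = pbest[np.argmin(pbest_f)]
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            gbest = pbest[np.argmin(pbest_f)]
        return gbest, f(gbest)

    print(pso(lambda p: np.sum(p ** 2), dim=3))  # minimizes the sphere function
    ```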

  9. Analysis of Biosignals During Immersion in Computer Games.

    Yeo, Mina; Lim, Seokbeen; Yoon, Gilwon

    2017-11-17

    The number of computer game users is increasing as computers and various IT devices connected to the Internet are commonplace in all ages. In this research, in order to relate behavioral activity to its associated biosignals, biosignal changes before, during, and after computer games were measured and analyzed for 31 subjects. For this purpose, a device to measure electrocardiogram, photoplethysmogram, and skin temperature was developed such that the effect of motion artifacts could be minimized. The device was made wearable for convenient measurement. The game selected for the experiments was League of Legends™. Analysis of the pulse transit time, heart rate variability, and skin temperature showed increased sympathetic nerve activity during computer games, while the parasympathetic nerves became less active. Interestingly, the sympathetic predominance group showed less change in heart rate variability than the normal group. The results can be valuable for studying internet gaming disorder.
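
    To make the heart rate variability measure concrete, the sketch below computes two standard time-domain HRV metrics from RR intervals; this is a generic illustration, not the study's exact processing chain:

    ```python
    import numpy as np

    def hrv_metrics(rr_ms):
        """Time-domain heart rate variability from RR intervals (ms).

        SDNN and RMSSD are standard HRV measures; reduced RMSSD is
        commonly read as lower parasympathetic activity.
        """
        rr = np.asarray(rr_ms, dtype=float)
        sdnn = rr.std(ddof=1)                        # overall variability
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
        mean_hr = 60000.0 / rr.mean()                # beats per minute
        return {"SDNN": sdnn, "RMSSD": rmssd, "meanHR": mean_hr}

    # Example: a short synthetic RR series around 800 ms (75 bpm)
    rng = np.random.default_rng(2)
    rr = 800 + 40 * rng.standard_normal(120)
    print(hrv_metrics(rr))
    ```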

  10. Voltammetry at micro-mesh electrodes

    Wadhawan Jay D.

    2003-01-01

    The voltammetry at three micro-mesh electrodes is explored. It is found that at sufficiently short experimental durations, the micro-mesh working electrode first behaves as an ensemble of microband electrodes, then follows the behaviour anticipated for an array of diffusion-independent micro-ring electrodes of the same perimeter as individual grid-squares within the mesh. During prolonged electrolysis, the micro-mesh electrode follows the behaviour anticipated theoretically for a cubically-packed partially-blocked electrode. Application of the micro-mesh electrode to the electrochemical determination of carbon dioxide in DMSO electrolyte solutions is further illustrated.

  11. COMPUTING

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. [Figure 3: Number of events per month (data)] In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operations. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  12. Parallel-In-Time For Moving Meshes

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
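
    As background on the parallel-in-time idea, the sketch below implements the classic parareal iteration, which libraries such as XBraid generalize; this is NOT XBraid's API, just a minimal illustration of temporal parallelism:

    ```python
    import numpy as np

    def parareal(f_fine, g_coarse, u0, n_slices, n_iter):
        """Parareal iteration:
            U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
        F is an accurate (fine) propagator over one time slice, G a cheap
        (coarse) one; the F evaluations can run in parallel across slices.
        """
        U = [u0]
        for n in range(n_slices):                    # serial coarse prediction
            U.append(g_coarse(U[-1]))
        for _ in range(n_iter):
            F = [f_fine(U[n]) for n in range(n_slices)]  # parallelizable step
            U_new = [u0]
            for n in range(n_slices):
                U_new.append(g_coarse(U_new[-1]) + F[n] - g_coarse(U[n]))
            U = U_new
        return np.array(U)

    # Example: u' = -u over [0, 1] split into 10 slices
    lam, T, N = -1.0, 1.0, 10
    dt = T / N
    fine = lambda u: u * np.exp(lam * dt)            # exact slice propagator
    coarse = lambda u: u * (1 + lam * dt)            # one explicit Euler step
    print(parareal(fine, coarse, 1.0, N, n_iter=3)[-1], np.exp(lam * T))
    ```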

  13. Automatic, unstructured mesh optimization for simulation and assessment of tide- and surge-driven hydrodynamics in a longitudinal estuary: St. Johns River

    Bacopoulos, Peter

    2018-05-01

    A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves an over 50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency, at zero cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80% and 90% ranges. Tidal currents are simulated with RMSE of 0.09-0.23 m s-1 and IA values of 97% and greater. Storm tide due to Hurricane Matthew 2016 is simulated with RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing some. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models. Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.
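
    The two skill metrics quoted above have standard textbook definitions; the sketch below computes RMSE and Willmott's index of agreement for any observed/predicted pair (the ADCIRC/LTEA+CD analysis chain itself is not shown):

    ```python
    import numpy as np

    def rmse_and_ia(obs, pred):
        """Root mean square error and Willmott's index of agreement:
        IA = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2),
        with IA = 1 indicating perfect agreement."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        rmse = np.sqrt(np.mean((pred - obs) ** 2))
        denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
        ia = 1.0 - np.sum((pred - obs) ** 2) / denom
        return rmse, ia

    # Example: a synthetic tidal signal and a slightly degraded model of it
    t = np.linspace(0, 48, 500)                        # hours
    obs = 0.8 * np.sin(2 * np.pi * t / 12.42)          # M2-like tide (m)
    pred = 0.75 * np.sin(2 * np.pi * t / 12.42 + 0.1)
    print(rmse_and_ia(obs, pred))
    ```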

  14. HEXAGA-II. A two-dimensional multi-group neutron diffusion programme for a uniform triangular mesh with arbitrary group scattering for the IBM/370-168 computer

    Woznicki, Z.

    1976-05-01

    This report presents the AGA two-sweep iterative methods, belonging to the family of factorization techniques, in their practical application in the HEXAGA-II two-dimensional programme to obtain the numerical solution of the multi-group, time-independent (real and/or adjoint) neutron diffusion equations for a fine uniform triangular mesh. An arbitrary group scattering model is permitted. The report, written for users, provides a description of the input and output. The use of HEXAGA-II is illustrated by two sample reactor problems. (orig.) [de]

  15. COMPUTING

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  16. Adaptive Mesh Refinement in CTH

    Crawford, David

    1999-01-01

    This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor, and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems
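
    A minimal sketch of block-based 2:1 refinement in the spirit of such AMR schemes (an illustrative data structure, not CTH's internals): each flagged block splits isotropically into children at half the cell size.

    ```python
    def refine_block(block):
        """Split a 2-D block (x0, y0, size, level) into four children
        covering the same area at half the size (2:1 refinement)."""
        x0, y0, size, level = block
        half = size / 2.0
        return [(x0,        y0,        half, level + 1),
                (x0 + half, y0,        half, level + 1),
                (x0,        y0 + half, half, level + 1),
                (x0 + half, y0 + half, half, level + 1)]

    def adapt(blocks, needs_refinement, max_level):
        """One refinement sweep: replace flagged blocks by their children."""
        out = []
        for b in blocks:
            if needs_refinement(b) and b[3] < max_level:
                out.extend(refine_block(b))
            else:
                out.append(b)
        return out

    # Example: recursively refine blocks near the origin
    blocks = [(0.0, 0.0, 1.0, 0)]
    for _ in range(3):
        blocks = adapt(blocks, lambda b: b[0] + b[1] < 0.5, max_level=3)
    print(len(blocks), "blocks after 3 sweeps")
    ```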

  17. SALP-PC, a computer program for fault tree analysis on personal computers

    Contini, S.; Poucet, A.

    1987-01-01

    The paper presents the main characteristics of the SALP-PC computer code for fault tree analysis. The program has been developed in Fortran 77 on an Olivetti M24 personal computer (IBM compatible) in order to reach a high degree of portability. It is composed of six processors implementing the different phases of the analysis procedure. This particular structure presents some advantages, for instance the restart facility and the possibility of developing an event tree analysis code. The set of allowed logical operators, i.e. AND, OR, NOT, K/N, XOR, INH, together with the possibility of defining boundary conditions, makes the SALP-PC code a powerful tool for risk assessment. (orig.)
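
    To illustrate the listed gate types, the sketch below evaluates boolean fault tree gates directly (a hypothetical toy: SALP-PC itself works with minimal cut sets and its input format is not reproduced here; the INH gate is omitted for brevity):

    ```python
    def gate(op, inputs, k=None):
        """Evaluate one fault tree gate over boolean inputs."""
        if op == "AND":
            return all(inputs)
        if op == "OR":
            return any(inputs)
        if op == "NOT":
            return not inputs[0]
        if op == "XOR":
            return sum(inputs) == 1      # exactly one input true
        if op == "K/N":
            return sum(inputs) >= k      # at least k of n inputs true
        raise ValueError(op)

    # Example: TOP = (A AND B) OR (2-of-3 over C, D, E)
    A, B, C, D, E = True, False, True, True, False
    top = gate("OR", [gate("AND", [A, B]), gate("K/N", [C, D, E], k=2)])
    print(top)  # True: the 2-of-3 branch fires
    ```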

  18. Numerical methods design, analysis, and computer implementation of algorithms

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  19. Recent developments of the NESSUS probabilistic structural analysis computer program

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available, such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability, and risk analysis of structures considering the cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
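
    For orientation, the quantity such codes estimate is the probability of failure P(g(X) <= 0) for a limit state g; the crude Monte Carlo sketch below is illustrative only, not NESSUS's (far more efficient) advanced mean value or adaptive importance sampling algorithms:

    ```python
    import numpy as np

    def failure_probability(limit_state, sample_inputs, n=100_000, seed=0):
        """Crude Monte Carlo estimate of P(failure) = P(g(X) <= 0)."""
        rng = np.random.default_rng(seed)
        x = sample_inputs(rng, n)          # draw random inputs
        g = limit_state(x)                 # evaluate the limit state
        return np.mean(g <= 0.0)

    # Example: capacity R ~ N(10, 1) vs load S ~ N(6, 1.5); g = R - S
    sample = lambda rng, n: (rng.normal(10, 1, n), rng.normal(6, 1.5, n))
    g = lambda x: x[0] - x[1]
    print(failure_probability(g, sample))
    ```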

  20. Sentiment analysis and ontology engineering an environment of computational intelligence

    Chen, Shyi-Ming

    2016-01-01

    This edited volume provides the reader with a fully updated, in-depth treatise on the emerging principles, conceptual underpinnings, algorithms, and practice of computational intelligence in the realization of concepts and implementation of models of sentiment analysis and ontology-oriented engineering. The volume involves studies devoted to key issues of sentiment analysis, sentiment models, and ontology engineering. The book is structured into three main parts. The first part offers a comprehensive and prudently structured exposure to the fundamentals of sentiment analysis and natural language processing. The second part consists of studies devoted to the concepts, methodologies, and algorithmic developments elaborating on fuzzy linguistic aggregation to emotion analysis, carrying out interpretability of computational sentiment models, emotion classification, sentiment-oriented information retrieval, and a methodology of adaptive dynamics in knowledge acquisition. The third part includes a plethora of applica...