Algebraic mesh generation for large scale viscous-compressible aerodynamic simulation
International Nuclear Information System (INIS)
Smith, R.E.
1984-01-01
Viscous-compressible aerodynamic simulation is the numerical solution of the compressible Navier-Stokes equations and associated boundary conditions. Boundary-fitted coordinate systems are well suited for the application of finite difference techniques to the Navier-Stokes equations. An algebraic approach to boundary-fitted coordinate systems is one in which an explicit functional relation describes a mesh on which a solution is obtained. This approach has the advantage of rapid, precise mesh control. The basic mathematical structure of three algebraic mesh generation techniques is described: transfinite interpolation, the multi-surface method, and the two-boundary technique. The Navier-Stokes equations are transformed to a computational coordinate system where boundary-fitted coordinates can be applied. Large-scale computation implies a large number of mesh points in the coordinate system. Computation of viscous compressible flow using boundary-fitted coordinate systems and the application of this computational philosophy on a vector computer are presented.
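Transfinite interpolation, the first of the three techniques named above, blends the four boundary curves of a region into an interior mesh with an explicit formula (the bilinear Coons patch). A minimal Python sketch, with illustrative straight-line boundaries that are not taken from the paper:

```python
def coons(u, v, bottom, top, left, right):
    """Evaluate a transfinite-interpolation mesh point at (u, v) in [0,1]^2.

    bottom/top/left/right are functions returning (x, y) boundary points;
    they must agree at the four corners.
    """
    bx, by = bottom(u)
    tx, ty = top(u)
    lx, ly = left(v)
    rx, ry = right(v)
    # Corner points shared by the boundary curves.
    c00, c10 = bottom(0.0), bottom(1.0)
    c01, c11 = top(0.0), top(1.0)
    x = (1 - v) * bx + v * tx + (1 - u) * lx + u * rx \
        - ((1 - u) * (1 - v) * c00[0] + u * (1 - v) * c10[0]
           + (1 - u) * v * c01[0] + u * v * c11[0])
    y = (1 - v) * by + v * ty + (1 - u) * ly + u * ry \
        - ((1 - u) * (1 - v) * c00[1] + u * (1 - v) * c10[1]
           + (1 - u) * v * c01[1] + u * v * c11[1])
    return x, y

# Build a 5x5 structured mesh on a unit square (straight-line boundaries).
bottom = lambda u: (u, 0.0)
top    = lambda u: (u, 1.0)
left   = lambda v: (0.0, v)
right  = lambda v: (1.0, v)
mesh = [[coons(i / 4, j / 4, bottom, top, left, right)
         for i in range(5)] for j in range(5)]
```

Because the formula is explicit, evaluating any mesh point is a constant-time operation, which is the "rapid, precise mesh control" the abstract refers to.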
A regularized vortex-particle mesh method for large eddy simulation
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
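The "remeshing" step that gives the method its name periodically redistributes particle strengths onto a regular mesh through a moment-conserving kernel; the M4' kernel is the common choice in this family of methods. A 1-D sketch (function names and setup are illustrative, not code from this paper):

```python
def m4prime(x):
    """M4' interpolation kernel used in remeshed vortex methods.

    Conserves the zeroth, first and second moments of the particle set.
    """
    ax = abs(x)
    if ax < 1.0:
        return 1.0 - 2.5 * ax**2 + 1.5 * ax**3
    if ax < 2.0:
        return 0.5 * (2.0 - ax)**2 * (1.0 - ax)
    return 0.0

def remesh_1d(positions, strengths, h, n):
    """Redistribute particle strengths onto a regular 1-D mesh of n nodes
    with spacing h; each particle touches at most 4 nodes."""
    grid = [0.0] * n
    for xp, gp in zip(positions, strengths):
        i0 = int(xp / h)
        for i in range(i0 - 2, i0 + 3):
            if 0 <= i < n:
                grid[i] += gp * m4prime((xp - i * h) / h)
    return grid
```

Because the kernel weights sum to one at any particle position, the total circulation is conserved exactly by the redistribution (away from domain boundaries).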
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report on the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
Interoperable mesh components for large-scale, distributed-memory simulations
International Nuclear Information System (INIS)
Devine, K; Leung, V; Diachin, L; Miller, M
2009-01-01
SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications.
Energy Technology Data Exchange (ETDEWEB)
Vervisch, Luc; Domingo, Pascale; Lodato, Guido [CORIA - CNRS and INSA de Rouen, Technopole du Madrillet, BP 8, 76801 Saint-Etienne-du-Rouvray (France); Veynante, Denis [EM2C - CNRS and Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry (France)
2010-04-15
Large-Eddy Simulation (LES) provides space-filtered quantities to compare with measurements, which usually have been obtained using a different filtering operation; hence, numerical and experimental results can be examined side-by-side in a statistical sense only. Instantaneous, space-filtered and statistically time-averaged signals feature different characteristic length-scales, which can be combined in dimensionless ratios. From two canonical manufactured turbulent solutions, a turbulent flame and a passive scalar turbulent mixing layer, the critical values of these ratios under which measured and computed variances (resolved plus sub-grid scale) can be compared without resorting to additional residual terms are first determined. It is shown that actual Direct Numerical Simulation can hardly accommodate a sufficiently large range of length-scales to perform statistical studies of LES filtered reactive scalar-fields energy budget based on sub-grid scale variances; an estimation of the minimum Reynolds number allowing for such DNS studies is given. From these developments, a reliability mesh criterion emerges for scalar LES and scaling for scalar sub-grid scale energy is discussed. (author)
International Nuclear Information System (INIS)
Pavlidis, D.; Lathouwers, D.
2011-01-01
A computational fluid dynamics model with anisotropic mesh adaptivity is used to investigate coolant flow and heat transfer in pebble bed reactors. A novel method for implicitly incorporating solid boundaries based on multi-fluid flow modelling is adopted. The resulting model is able to resolve and simulate flow and heat transfer in randomly packed beds, regardless of the actual geometry, starting off with arbitrarily coarse meshes. The model is initially evaluated using an orderly stacked square channel of channel-height-to-particle diameter ratio of unity for a range of Reynolds numbers. The model is then applied to the face-centred cubical geometry. Coolant flow and heat transfer patterns are investigated. (author)
Bui, H. H.; Kodikara, J. A.; Pathegama, R.; Bouazza, A.; Haque, A.
2015-01-01
Numerical methods are extremely useful in gaining insights into the behaviour of reinforced soil retaining walls. However, traditional numerical approaches such as limit equilibrium or finite element methods are unable to simulate large deformation and post-failure behaviour of soils and retaining wall blocks in the reinforced soil retaining walls system. To overcome this limitation, a novel numerical approach is developed aiming to predict accurately the large deformation and post-failure be...
Parallel adaptive simulations on unstructured meshes
International Nuclear Information System (INIS)
Shephard, M S; Jansen, K E; Sahni, O; Diachin, L A
2007-01-01
This paper discusses methods being developed by the ITAPS center to support the execution of parallel adaptive simulations on unstructured meshes. The paper first outlines the ITAPS approach to the development of interoperable mesh, geometry and field services to support the needs of SciDAC applications in these areas. The paper then demonstrates the ability of unstructured adaptive meshing methods built on such interoperable services to effectively solve important physics problems. Attention is then focused on ITAPS' developing ability to solve adaptive unstructured mesh problems on massively parallel computers.
Finite element simulation of impact response of wire mesh screens
Directory of Open Access Journals (Sweden)
Wang Caizheng
2015-01-01
In this paper, the response of wire mesh screens to low-velocity impact with blunt objects is investigated using finite element (FE) simulation. The woven wire mesh is modelled with homogeneous shell elements with equivalent smeared mechanical properties. The mechanical behaviour of the woven wire mesh was determined experimentally with tensile tests on steel wire mesh coupons to generate the data for the smeared shell material used in the FE model. The effects of impacts with a low mass (4 kg) and a large mass (40 kg) providing the same impact energy are studied. The joint between the wire mesh screen and the aluminium frame surrounding it is modelled using contact elements with friction between the corresponding elements. Types of damage to the screen that compromise its structural integrity, such as mesh separation and pull-out from the surrounding frame, are modelled. The FE simulation is validated against results of impact tests conducted on woven steel wire screen meshes.
Mesh refinement of riser simulation with the aid of gamma transmission
International Nuclear Information System (INIS)
Lima Filho, Hilario J.B. de; Benachour, Mohand; Dantas, Carlos C.; Brito, Marcio F.P.; Santos, Valdemir A. dos
2013-01-01
Vertical circulating fluidized bed reactors (CFBR), in which the particulate and gaseous phases flow upward (riser), have been widely used in gasification, combustion and fluid catalytic cracking (FCC) processes. The efficiency of these two-phase (gas-solid) reactors depends largely on their hydrodynamic characteristics, which differ in the axial and radial directions. Axially, the solids concentration is highest at the base and becomes more dilute toward the top. Radially, the solids distribution is characterized as core-annular, with a highly dilute central region consisting of dispersed particles and fluid. In the present work, a two-dimensional (2D) geometry was developed and computational fluid dynamics (CFD) simulations were carried out to predict the gas-solid flow in the riser of a CFBR through transient modelling based on the kinetic theory of granular flow. Refining the computational mesh provides more information on the parameters studied, but may increase the processing time of the simulations. A minimum number of cells for the mesh construction was obtained by testing five meshes. The hydrodynamic parameters were validated using a 241Am gamma source and a NaI(Tl) detector. The numerical results were consistent with the experimental data, indicating that refining the computational mesh in a controlled manner improves the approximation to the expected results. (author)
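The five-mesh test described above is a standard mesh-independence study; a common way to quantify such refinement is Richardson extrapolation together with Roache's Grid Convergence Index (GCI). A generic sketch of those two formulas (not the authors' code):

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of convergence from solutions on three meshes
    (f1 finest ... f3 coarsest) with a constant refinement ratio r."""
    return math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

def gci_fine(f1, f2, r, p, fs=1.25):
    """Grid Convergence Index on the fine mesh: an error band on f1.

    fs = 1.25 is the safety factor usually recommended for three-mesh
    studies."""
    return fs * abs((f2 - f1) / f1) / (r**p - 1)
```

For example, solutions 1.01, 1.04 and 1.16 on meshes refined by a factor of 2 exhibit an observed order of 2, and the resulting GCI gives the relative error band expected on the finest mesh.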
Adaptive and dynamic meshing methods for numerical simulations
Acikgoz, Nazmiye
-hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. 
In addition, for unsteady problems, the gradients vary at each time step, which requires frequent remeshing during simulations.
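The classical spring-analogy baseline against which the proposed r-adaptation method is compared can be sketched in a few lines: each edge becomes a linear spring with stiffness inversely proportional to its initial length, and interior vertices are relaxed to equilibrium. A hedged illustration (the node layout is mine, and a Jacobi sweep stands in for the preconditioned conjugate gradient solve described above):

```python
import math

def spring_deform(nodes, edges, fixed, iters=200):
    """Move interior nodes to the equilibrium of linear edge springs with
    stiffness 1/(initial edge length) -- the classical spring analogy.

    `fixed` maps boundary node index -> prescribed (x, y) position.
    """
    pos = {i: tuple(p) for i, p in enumerate(nodes)}
    for i, p in fixed.items():
        pos[i] = p
    neighbors = {i: [] for i in pos}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    # Stiffness from the undeformed mesh, kept constant during deformation.
    stiff = lambda a, b: 1.0 / max(1e-12, math.dist(nodes[a], nodes[b]))
    for _ in range(iters):  # Jacobi relaxation toward spring equilibrium
        new = dict(pos)
        for i in pos:
            if i in fixed:
                continue
            ks = [stiff(i, j) for j in neighbors[i]]
            wx = sum(k * pos[j][0] for k, j in zip(ks, neighbors[i]))
            wy = sum(k * pos[j][1] for k, j in zip(ks, neighbors[i]))
            s = sum(ks)
            new[i] = (wx / s, wy / s)
        pos = new
    return pos

# Deform a one-cell cross: four boundary corners, one interior node.
nodes = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0), (1.0, 1.0)]
edges = [(0, 4), (1, 4), (2, 4), (3, 4)]
moved = {0: (0.0, 0.0), 1: (2.0, 0.0), 2: (0.0, 2.0), 3: (4.0, 2.0)}
pos = spring_deform(nodes, edges, moved)
```

The virtual springs proposed in the thesis add extra terms to this system that push vertices away from their surrounding "ball", which is what prevents element collapse under large boundary displacements.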
A mesh density study for application to large deformation rolling process evaluation
International Nuclear Information System (INIS)
Martin, J.A.
1997-12-01
When addressing large deformation through an elastic-plastic analysis, the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies considering accuracy of results and computer resource needs as applied to large-deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. The effects of both mesh density and adaptive meshing on final results are evaluated for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length.
Influence of mesh non-orthogonality on numerical simulation of buoyant jet flows
International Nuclear Information System (INIS)
Ishigaki, Masahiro; Abe, Satoshi; Sibamoto, Yasuteru; Yonomoto, Taisuke
2017-01-01
Highlights: • Influence of mesh non-orthogonality on numerical solution of buoyant jet flows. • Buoyant jet flows are simulated with hexahedral and prismatic meshes. • Jet instability with prismatic meshes may be overestimated compared to that with hexahedral meshes. • Modified solvers that can reduce the influence of mesh non-orthogonality and reduce computation time are proposed. - Abstract: In the present research, we discuss the influence of mesh non-orthogonality on numerical solution of a type of buoyant flow. Buoyant jet flows are simulated numerically with hexahedral and prismatic mesh elements in an open source Computational Fluid Dynamics (CFD) code called “OpenFOAM”. Buoyant jet instability obtained with the prismatic meshes may be overestimated compared to that obtained with the hexahedral meshes when non-orthogonal correction is not applied in the code. Although the non-orthogonal correction method can improve the instability generated by mesh non-orthogonality, it may increase computation time required to reach a convergent solution. Thus, we propose modified solvers that can reduce the influence of mesh non-orthogonality and reduce the computation time compared to the existing solvers in OpenFOAM. It is demonstrated that calculations for a buoyant jet with a large temperature difference are performed faster by the modified solver.
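For reference, the mesh non-orthogonality discussed above is the angle between the vector joining two neighbouring cell centres and the shared face normal; hexahedral meshes keep it near zero, while prismatic cells can make it large. A small sketch of this standard measure (not code from the paper):

```python
import math

def nonorthogonality_deg(owner_centre, neighbour_centre, face_normal):
    """Angle in degrees between the owner->neighbour vector and the face
    normal: the usual finite-volume non-orthogonality measure
    (0 = perfectly orthogonal mesh)."""
    d = [b - a for a, b in zip(owner_centre, neighbour_centre)]
    dot = sum(x * y for x, y in zip(d, face_normal))
    nd = math.sqrt(sum(x * x for x in d))
    nn = math.sqrt(sum(x * x for x in face_normal))
    cos_theta = max(-1.0, min(1.0, dot / (nd * nn)))
    return math.degrees(math.acos(cos_theta))
```

Non-orthogonal correction schemes split the face gradient into an orthogonal part treated implicitly and a correction treated explicitly, which is why large angles slow convergence.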
MHD simulations on an unstructured mesh
International Nuclear Information System (INIS)
Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Sugiyama, L.E.
1998-01-01
Two reasons for using an unstructured computational mesh are adaptivity and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.
MHD simulations on an unstructured mesh
International Nuclear Information System (INIS)
Strauss, H.R.; Park, W.
1996-01-01
We describe work on a full MHD code using an unstructured mesh. MH3D++ is an extension of the PPPL MH3D resistive full MHD code. MH3D++ replaces the structured mesh and finite-difference/Fourier discretization of MH3D with an unstructured mesh and finite-element/Fourier discretization. Low-level routines which perform differential operations, solution of PDEs such as Poisson's equation, and graphics are encapsulated in C++ objects to isolate the finite element operations from the higher-level code. The high-level code is the same whether it is run in the structured or unstructured mesh version, which allows the unstructured mesh version to be benchmarked against the structured mesh version. As a preliminary example, disruptions in DIII-D reverse shear equilibria are studied numerically with the MH3D++ code. Numerical equilibria were first produced starting with an EQDSK file containing equilibrium data of a DIII-D L-mode negative central shear discharge. Using these equilibria, the linearized equations are time advanced to obtain the toroidal mode number n = 1 linear growth rate and eigenmode, which is resistively unstable. The equilibrium and linear mode are used to initialize 3D nonlinear runs. An example shows poloidal slices of 3D pressure surfaces: initially, on the left, and at an intermediate time, on the right.
MUSIC: a mesh-unrestricted simulation code
International Nuclear Information System (INIS)
Bonalumi, R.A.; Rouben, B.; Dastur, A.R.; Dondale, C.S.; Li, H.Y.H.
1978-01-01
A general formalism to solve the G-group neutron diffusion equation is described. The G-group flux is represented by complementing an ''asymptotic'' mode with (G-1) ''transient'' modes. A particular reduction-to-one-group technique gives high computational efficiency. MUSIC, a 2-group code using the above formalism, is presented. MUSIC is demonstrated on a fine-mesh calculation and on two coarse-mesh core calculations: a heavy-water reactor (HWR) problem and the 2-D light-water reactor (LWR) IAEA benchmark. Comparison is made to finite-difference results.
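As a toy illustration of the class of problem MUSIC addresses, the one-group analogue of the diffusion eigenvalue calculation on a fine 1-D mesh fits in a few lines: a finite-difference slab with zero-flux boundaries, solved by power iteration. The geometry and cross sections below are illustrative assumptions, not taken from the paper:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system by the Thomas algorithm."""
    n = len(diag)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def slab_keff(width, n, D, sig_a, nu_sig_f, iters=500):
    """Power iteration for k_eff of a bare 1-D slab, one-group diffusion,
    zero flux at both faces, uniform mesh of n interior points."""
    h = width / (n + 1)
    sub = [-D / h**2] * n
    diag = [2.0 * D / h**2 + sig_a] * n
    sup = [-D / h**2] * n
    phi = [1.0] * n
    k = 1.0
    for _ in range(iters):
        src = [nu_sig_f * p / k for p in phi]  # fission source / k
        phi_new = thomas(sub, diag, sup, src)
        k *= sum(phi_new) / sum(phi)           # eigenvalue update
        s = sum(phi_new)
        phi = [p / s for p in phi_new]         # normalize the flux
    return k
```

For a 100 cm slab with D = 1, Sigma_a = 0.5 and nu*Sigma_f = 0.6, the result agrees with the analytic k = nu*Sigma_f / (Sigma_a + D*B^2) with buckling B = pi/width.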
Implicit Geometry Meshing for the simulation of Rotary Friction Welding
Schmicker, D.; Persson, P.-O.; Strackeljan, J.
2014-08-01
The simulation of Rotary Friction Welding (RFW) is a challenging task, since it constitutes a coupled problem involving phenomena such as large plastic deformations, heat flux, contact and friction. In particular, mesh generation and its restoration when using a Lagrangian description of motion is a significant difficulty. In this regard, Implicit Geometry Meshing (IGM) algorithms are promising alternatives to the more conventional explicit methods. Because of the implicit description of the geometry during remeshing, the IGM procedure turns out to be highly robust and generates spatial discretizations of high quality regardless of the complexity of the flash shape and its inclusions. A model for efficient RFW simulation is presented, which is based on a Carreau fluid law, an augmented Lagrangian approach for the incompressible deformations, a penalty contact approach, a fully regularized Coulomb/fluid friction law and a hybrid time integration strategy. The implementation of the IGM algorithm using 6-node triangular finite elements is described in detail. The techniques are demonstrated on a fairly complex friction welding problem, demonstrating the performance and the potential of the proposed method. The techniques are general and straightforward to implement, and can be adopted for a wide range of other engineering problems.
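The core idea of implicit, distance-function based meshing can be conveyed with a crude background-grid sketch: describe the geometry as phi(x, y) < 0 and keep only the cells inside it; the actual IGM algorithm then builds and smooths a body-fitted triangulation from such an implicit description. Illustration only, with function names of my own choosing:

```python
def cells_inside_implicit(phi, bbox, h):
    """Return (i, j) indices of background-grid cells whose centres lie
    inside the implicit geometry phi(x, y) < 0.

    This is only the seed step of implicit meshing; a real algorithm then
    snaps and smooths the boundary cells onto the phi = 0 contour.
    """
    (x0, y0), (x1, y1) = bbox
    nx = int((x1 - x0) / h)
    ny = int((y1 - y0) / h)
    cells = []
    for i in range(nx):
        for j in range(ny):
            cx = x0 + (i + 0.5) * h
            cy = y0 + (j + 0.5) * h
            if phi(cx, cy) < 0.0:
                cells.append((i, j))
    return cells

# Example: a unit disc described implicitly by its signed distance sign.
disc = lambda x, y: x * x + y * y - 1.0
kept = cells_inside_implicit(disc, ((-1.0, -1.0), (1.0, 1.0)), 0.5)
```

Because the geometry is queried as a function rather than stored as an explicit boundary mesh, remeshing after large flash deformation needs no surface reconstruction, which is the robustness argument made in the abstract.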
Enriching Triangle Mesh Animations with Physically Based Simulation.
Li, Yijing; Xu, Hongyi; Barbic, Jernej
2017-10-01
We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.
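The space-time weighting can be pictured, very loosely, as a per-vertex blend between the input animation and the physics result; the paper actually imposes the weights as constraints inside the FEM solve, so the snippet below is only a conceptual stand-in with illustrative names:

```python
def blend_animation(anim_pos, phys_pos, weights):
    """Per-vertex blend: weight 0 follows the input animation exactly,
    weight 1 follows the physics entirely.

    Conceptual stand-in only -- the paper constrains the physically based
    solve to the animation rather than blending two precomputed results.
    """
    return [tuple((1.0 - w) * a + w * p for a, p in zip(pa, pp))
            for pa, pp, w in zip(anim_pos, phys_pos, weights)]

# One vertex, 25% physics influence.
out = blend_animation([(0.0, 0.0)], [(4.0, 8.0)], [0.25])
```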
Interoperable mesh and geometry tools for advanced petascale simulations
International Nuclear Information System (INIS)
Diachin, L; Bauer, A; Fix, B; Kraftcheck, J; Jansen, K; Luo, X; Miller, M; Ollivier-Gooch, C; Shephard, M S; Tautges, T; Trease, H
2007-01-01
SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. The Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) will deliver interoperable and interchangeable mesh, geometry, and field manipulation services that are of direct use to SciDAC applications. The premise of our technology development goal is to provide such services as libraries that can be used with minimal intrusion into application codes. To develop these technologies, we focus on defining a common data model and data-structure neutral interfaces that unify a number of different services such as mesh generation and improvement, front tracking, adaptive mesh refinement, shape optimization, and solution transfer operations. We highlight the use of several ITAPS services in SciDAC applications
Full Core Multiphysics Simulation with Offline Mesh Deformation
Energy Technology Data Exchange (ETDEWEB)
Merzari, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Obabko, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States); Solberg, Jerome [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferencz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whitesides, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-12-21
In this report, building on previous reports issued in FY13 we describe our continued efforts to integrate thermal/hydraulics, neutronics, and structural mechanics modeling codes to perform coupled analysis of a representative fast sodium-cooled reactor core. The focus of the present report is a full core simulation with off-line mesh deformation.
Impact of Variable-Resolution Meshes on Regional Climate Simulations
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2014-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for the nesting and nudging techniques applied at the edges of the computational domain in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
Toward 10-km mesh global climate simulations
Ohfuchi, W.; Enomoto, T.; Takaya, K.; Yoshioka, M. K.
2002-12-01
An atmospheric general circulation model (AGCM) that runs very efficiently on the Earth Simulator (ES) was developed. The ES is a gigantic vector-parallel computer with a peak performance of 40 Tflops. The AGCM, named AFES (AGCM for ES), was based on version 5.4.02 of an AGCM developed jointly by the Center for Climate System Research of the University of Tokyo and the Japanese National Institute for Environmental Studies. AFES was, however, totally rewritten in FORTRAN90 and MPI, while the original AGCM was written in FORTRAN77 and was not capable of parallel computing. AFES achieved 26 Tflops (about 65% of the peak performance of the ES) at a resolution of T1279L96 (10-km horizontal resolution and 500-m vertical resolution from the middle troposphere to the lower stratosphere). Some results of 10- to 20-day global simulations will be presented. At this moment, only short-term simulations are possible due to data storage limitations. As ten-teraflops computing is achieved, petabyte data storage is necessary to conduct climate-type simulations at this super-high resolution. Some possibilities for future research topics in global super-high-resolution climate simulations will be discussed. Target topics include mesoscale structures and self-organization of the Baiu-Meiyu front over Japan, cyclogenesis over the North Pacific and typhoons around the Japan area. Improvement in local precipitation with increasing horizontal resolution will also be demonstrated.
Value for money in particle-mesh plasma simulations
International Nuclear Information System (INIS)
Eastwood, J.W.
1976-01-01
The established particle-mesh method of simulating a collisionless plasma is discussed. Problems are outlined, and it is stated that, given constraints on mesh size and particle number, the only way to adjust the compromise between dispersive forces, collision time and heating time is by altering the force calculation cycle. In 'value for money' schemes, the matching of parts of the force calculation cycle is optimized. Interparticle forces are considered, and optimized combinations of elements of the force calculation cycle are compared. Following sections cover the dispersion relation and comparisons with other schemes. (U.K.)
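For readers unfamiliar with the particle-mesh force-calculation cycle discussed here, its first step is charge assignment to the mesh; the cloud-in-cell (linear) scheme is the classic choice. A 1-D periodic sketch (illustrative, not Eastwood's code):

```python
def cic_deposit(positions, charges, h, n):
    """Cloud-in-cell (linear) charge assignment onto a periodic 1-D mesh
    of n nodes with spacing h: each particle splits its charge between
    its two nearest nodes, giving a charge density rho."""
    rho = [0.0] * n
    for xp, q in zip(positions, charges):
        s = xp / h
        i = int(s) % n           # left node index
        f = s - int(s)           # fractional distance to the left node
        rho[i] += q * (1.0 - f) / h
        rho[(i + 1) % n] += q * f / h
    return rho

# One unit charge a quarter of the way between nodes 2 and 3.
rho = cic_deposit([2.25], [1.0], 1.0, 8)
```

The remaining steps of the cycle (field solve on the mesh, then force interpolation back to the particles) are where the trade-offs between dispersion, collisionality and heating discussed in the abstract are made.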
Gear Mesh Loss-of-Lubrication Experiments and Analytical Simulation
Handschuh, Robert F.; Polly, Joseph; Morales, Wilfredo
2011-01-01
An experimental program to determine the loss-of-lubrication (LOL) characteristics of spur gears in an aerospace simulation test facility has been completed. Tests were conducted using two different emergency lubricant types: (1) an oil mist system (two different misted lubricants) and (2) a grease injection system (two different grease types). Tests were conducted using a NASA Glenn test facility normally used for conducting contact fatigue. Tests were run at rotational speeds up to 10000 rpm using two different gear designs and two different gear materials. For the tests conducted using an air-oil misting system, a minimum lubricant injection rate was determined to permit the gear mesh to operate without failure for at least 1 hr. The tests allowed an elevated steady state temperature to be established. A basic 2-D heat transfer simulation has been developed to investigate temperatures of a simulated gear as a function of frictional behavior. The friction (heat generation source) between the meshing surfaces is related to the position in the meshing cycle, the load applied, and the amount of lubricant in the contact. Experimental conditions will be compared to those from the 2-D simulation.
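A much-reduced analogue of the 2-D frictional-heating model described above is explicit 1-D conduction with a heat-flux boundary standing in for the gear-mesh friction source. All names and numbers below are illustrative assumptions, not the NASA model:

```python
def heated_rod(n, alpha, dx, dt, steps, q):
    """Explicit (FTCS) 1-D heat conduction on n nodes.

    A constant frictional heat flux q enters at the left end and the
    right end is held at zero temperature (a perfect sink).
    """
    T = [0.0] * n
    r = alpha * dt / dx**2       # stability requires r <= 0.5
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        Tn[0] = Tn[1] + q * dx   # Neumann (imposed flux) boundary
        Tn[-1] = 0.0             # Dirichlet sink at the far end
        T = Tn
    return T

T = heated_rod(n=20, alpha=1.0, dx=1.0, dt=0.4, steps=200, q=1.0)
```

In the paper's 2-D version the heat source instead moves with the contact point through the meshing cycle and scales with load and lubricant film, which is what links the simulation back to the loss-of-lubrication experiments.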
International Nuclear Information System (INIS)
Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.
2003-01-01
An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)
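The abstract defines the parallel-performance index only as the ratio of granularity to degree-of-coupling. A toy rendering with assumed surrogate definitions (compute-to-communication time for granularity, fraction of coupled cells for coupling) could be:

```python
def parallel_performance_index(compute_time, comm_time, coupled_cells, total_cells):
    """Toy parallel-performance index (PPI): granularity divided by
    degree-of-coupling. Granularity is modelled here as the ratio of
    local compute time to communication time, and coupling as the
    fraction of cells coupled to other processors. Both surrogate
    definitions are assumptions, not the paper's exact metrics."""
    granularity = compute_time / comm_time
    degree_of_coupling = coupled_cells / total_cells
    return granularity / degree_of_coupling
```

As in the paper, a large index corresponds to a high-granularity decomposition with relatively little coupling among processors.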
Numerical simulation for quenching meshes with TONUS platform
International Nuclear Information System (INIS)
Bin, Chen; Hongxing, Yu
2009-01-01
For mitigation of hydrogen risks during severe accidents, to protect the integrity of the containment, PARs and ignitors are used in current advanced nuclear power plants. However, multiple combustions induced by ignitors and the consequent DDT phenomena are not practically eliminated. An innovative design called 'quenching meshes' is considered to confine a hydrogen flame within one compartment by metallic meshes, so that hazardous flame propagation can be prevented. Numerical simulation results based on discretization of the full Navier-Stokes equations, with a global one-step reaction represented by an Arrhenius laminar combustion model, have shown the possibility of flame quenching 'numerically'. This is achieved via multiplication of the combustion-rate expression by a Heaviside function having an ignition temperature as a parameter. Qualitative behavior of the computed flow shows that the flame velocity diminishes while passing through a quenching mesh, while a quantitative analysis based on the energy balance reveals the mechanism of flame quenching. All the above analysis has been performed for a stoichiometric mixture at normal initial pressure and temperature. For further research we suggest investigating the influence of the mixture composition, initial pressure and/or temperature on the quenching criteria.
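The quenching mechanism described, an Arrhenius one-step rate multiplied by a Heaviside function of an ignition temperature, can be written down directly. In this sketch the pre-exponential factor, activation energy and ignition temperature are illustrative values, not TONUS parameters:

```python
import math

def reaction_rate(rho, Y, T, A=1.0e9, Ea=1.2e5, R=8.314, T_ign=700.0):
    """One-step Arrhenius combustion rate omega = A*rho*Y*exp(-Ea/(R*T)),
    multiplied by a Heaviside function of an ignition temperature so the
    reaction switches off wherever the gas is colder than T_ign. The
    numerical values are illustrative assumptions."""
    heaviside = 1.0 if T >= T_ign else 0.0
    return heaviside * A * rho * Y * math.exp(-Ea / (R * T))
```

A fine metallic mesh cools the gas passing through it below `T_ign`, driving the rate to zero there, which is the 'numerical' flame quenching the abstract refers to.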
Simulating control rod and fuel assembly motion using moving meshes
Energy Technology Data Exchange (ETDEWEB)
Gilbert, D. [Department of Electrical and Computer Engineering, McMaster University, 1280 Main Street West, Hamilton Ontario, L8S 4K1 (Canada)], E-mail: gilbertdw1@gmail.com; Roman, J.E. [Departamento de Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Garland, Wm. J. [Department of Engineering Physics, McMaster University, 1280 Main Street West, Hamilton Ontario, L8S 4K1 (Canada); Poehlman, W.F.S. [Department of Computing and Software, McMaster University, 1280 Main Street West, Hamilton Ontario, L8S 4K1 (Canada)
2008-02-15
A prerequisite for designing a transient simulation experiment which includes the motion of control and fuel assemblies is the careful verification of a steady-state model which computes k{sub eff} versus assembly insertion distance. Previous studies in nuclear engineering have usually approached the problem of the motion of control rods with the use of nonlinear nodal models. Nodal methods employ special approximations for the leading and trailing cells of the moving assemblies to avoid the rod-cusping problem which results from the naive volume-weighted cell cross-section approximation. A prototype framework called MOOSE has been developed for modeling moving components in the presence of diffusion phenomena. A linear finite difference model is constructed, solutions for which are computed by SLEPc, a high-performance parallel eigenvalue solver. Design techniques for the implementation of a patched non-conformal mesh which links groups of sub-meshes that can move relative to one another are presented. The generation of matrices which represent moving meshes that conserve neutron current at their boundaries, and the performance of the framework when applied to model reactivity insertion experiments, are also discussed.
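The "naive volume-weighted cell cross-section approximation" the abstract mentions can be stated in a few lines; the error it introduces as the rod tip sweeps through a cell is precisely the rod-cusping problem. The one-group setting and symbol names are illustrative:

```python
def volume_weighted_sigma(sigma_rod, sigma_fuel, tip, z0, z1):
    """Naive volume-weighted cross-section for an axial cell [z0, z1]
    partially covered by a control rod inserted from above, with its
    tip at height `tip`. Linear mixing by rodded volume fraction is
    the simple approximation whose error produces 'rod cusping'."""
    f = min(max((z1 - tip) / (z1 - z0), 0.0), 1.0)   # rodded volume fraction
    return f * sigma_rod + (1.0 - f) * sigma_fuel
```

Because the mixed cross-section varies linearly while the reactivity worth of a partially rodded cell does not, plots of k-eff versus insertion distance show the characteristic cusps that nodal methods and the MOOSE moving-mesh approach both try to avoid.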
Adaptive mesh refinement and adjoint methods in geophysics simulations
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria for adaptation are most suitable. We will present the goal-oriented error estimation procedure, in which such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Crack growth simulation for plural crack using hexahedral mesh generation technique
International Nuclear Information System (INIS)
Orita, Y; Wada, Y; Kikuchi, M
2010-01-01
This paper describes a surface crack growth simulation using a new mesh generation technique. The generated mesh consists entirely of hexahedral elements, which are well suited to the analysis of fracture mechanics parameters such as the stress intensity factor. The advantages of a hexahedral mesh are better analysis accuracy and fewer degrees of freedom than a tetrahedral mesh. In this study, a plural crack growth simulation is computed using the hexahedral mesh, and the resulting distribution of the stress intensity factor is investigated.
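For context, the stress intensity factor extracted from such analyses follows, for a simple edge crack, the textbook relation K_I = Y·σ·√(πa). This closed form is a standard analytical estimate, not the paper's finite-element computation:

```python
import math

def stress_intensity_factor(sigma, a, Y=1.12):
    """Mode-I stress intensity factor K_I = Y * sigma * sqrt(pi * a)
    for a crack of depth a under remote stress sigma. Y = 1.12 is the
    classic edge-crack geometry factor; real surface cracks need a
    geometry factor from handbooks or from FE analysis."""
    return Y * sigma * math.sqrt(math.pi * a)
```

Hexahedral meshes help here because the accuracy of the extracted K_I depends strongly on element quality around the crack front.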
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. Those simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first one is a splatting technique, and the second one is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI. The load balancing strategy has to be carefully defined in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
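To illustrate the octree structure that RAMSES-style AMR relies on (a toy analogue, not RAMSES's or PYMSES's actual data type), a minimal recursive refinement driven by an indicator function might look like:

```python
class OctNode:
    """Minimal octree cell: refine into eight children wherever a
    user-supplied indicator says the local problem needs resolution."""
    def __init__(self, center, size, level):
        self.center, self.size, self.level = center, size, level
        self.children = []

    def refine(self, indicator, max_level):
        if self.level < max_level and indicator(self.center, self.size):
            h = self.size / 4.0
            for dx in (-h, h):
                for dy in (-h, h):
                    for dz in (-h, h):
                        c = (self.center[0] + dx, self.center[1] + dy,
                             self.center[2] + dz)
                        child = OctNode(c, self.size / 2.0, self.level + 1)
                        child.refine(indicator, max_level)
                        self.children.append(child)

    def leaves(self):
        """Return the leaf cells, i.e. the cells a visualizer must render."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Splatting and ray-tracing renderers both walk such a tree to its leaves, which is why generic Cartesian-grid tools cannot consume the data directly.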
Zarzycki, C. M.; Gettelman, A.; Callaghan, P.
2017-12-01
Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical, whereas simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States, using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full-physics configurations to evaluate variable-mesh simulations against uniform high-resolution and uniform low-resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high-resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full-physics model produces a climate comparable to the global low-resolution mesh and similar high-frequency statistics in the high-resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high-resolution results, making them an effective tool for regional climate studies; these capabilities are available in CESM2.
PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils
Johnson, Scott; Walton, Otis; Settgast, Randolph
2013-01-01
PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.
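The SPH module interpolates continuum fields with a smoothing kernel. A standard 3-D cubic-spline kernel, a common default in SPH codes though not necessarily PowderSim's exact choice, is:

```python
import math

def cubic_spline_w(r, h):
    """Standard 3-D cubic-spline SPH smoothing kernel W(r, h) with
    compact support of radius 2h; sigma = 1/(pi*h^3) normalizes the
    kernel so its volume integral is one."""
    q = r / h
    sigma = 1.0 / (math.pi * h**3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0
```

Field values at a point are then sums of neighbour contributions weighted by W, which is what lets the mesh-free module tolerate the large deformations the abstract describes.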
Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement
Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim
2018-06-01
We have explored the evolution of gas distributions from cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, in order to study the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of the ICs is therefore an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed-grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but are expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
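The random-walk machinery such simulations build on can be sketched in free space, where the mean squared displacement has the known closed form 3·n·δ² for n fixed per-axis steps of length δ. The mesh-reflection step that confines walkers to the tissue geometry is omitted here, and the step scheme is an illustrative simplification:

```python
import random

def random_walk_msd(n_walkers, n_steps, step, seed=1):
    """Unrestricted 3-D random walk with a fixed +/-step per axis per
    update, returning the ensemble mean squared displacement. Inside a
    surface mesh the same walkers would additionally be reflected off
    triangle faces (not implemented in this free-diffusion sketch)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0
        for _ in range(n_steps):
            x += rng.choice((-step, step))
            y += rng.choice((-step, step))
            z += rng.choice((-step, step))
        total += x * x + y * y + z * z
    return total / n_walkers
```

Comparing the measured MSD against the free-diffusion value is the usual sanity check before adding mesh restrictions, matching the paper's strategy of validating against simple geometric models.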
Simulating galactic dust grain evolution on a moving mesh
McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul
2018-05-01
Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
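The drag update the paper describes can be illustrated in its simplest semi-implicit form for linear drag, dv/dt = -(v - u)/t_s, where u is the gas velocity and t_s the stopping time. This one-line sketch is a generic stable integrator, not AREPO's actual second-order implementation:

```python
def drag_kick(v_dust, v_gas, t_stop, dt):
    """Semi-implicit update for linear drag dv/dt = -(v - v_gas)/t_stop.
    Treating the dust velocity implicitly makes the scheme stable even
    for dt >> t_stop, the stiff regime where explicit updates blow up."""
    return (v_dust + dt * v_gas / t_stop) / (1.0 + dt / t_stop)
```

The update leaves the terminal state v = v_gas exactly invariant and relaxes any other velocity monotonically toward it, which is why implicit treatment is favoured for tightly coupled grains.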
Simulation of transients with space-dependent feedback by coarse mesh flux expansion method
International Nuclear Information System (INIS)
Langenbuch, S.; Maurer, W.; Werner, W.
1975-01-01
For the simulation of the time-dependent behaviour of large LWR cores, even the most efficient finite-difference (FD) methods require a prohibitive amount of computing time to achieve results of acceptable accuracy. Static coarse-mesh (CM) solutions computed with a mesh size corresponding to the fuel-element structure (about 20 cm) are at least as accurate as FD solutions computed with about 5 cm mesh size. For 3d-calculations this results in a reduction of storage requirements by a factor of 60 and of computing costs by a factor of 40, relative to FD methods. These results have been obtained for pure neutronic calculations, where feedback is not taken into account. In this paper it is demonstrated that the method retains its accuracy in kinetic calculations as well, even in the presence of strong space-dependent feedback. (orig./RW) [de
Hernández-Gascón, B; Peña, E; Melero, H; Pascual, G; Doblaré, M; Ginebra, M P; Bellón, J M; Calvo, B
2011-11-01
The material properties of meshes used in hernia surgery contribute to the overall mechanical behaviour of the repaired abdominal wall. The mechanical response of a surgical mesh has to be defined since the haphazard orientation of an anisotropic mesh can lead to inconsistent surgical outcomes. This study was designed to characterize the mechanical behaviour of three surgical meshes (Surgipro®, Optilene® and Infinit®) and to describe a mechanical constitutive law that accurately reproduces the experimental results. Finally, through finite element simulation, the behaviour of the abdominal wall was modelled before and after surgical mesh implant. Uniaxial loading of mesh samples in two perpendicular directions revealed the isotropic response of Surgipro® and the anisotropic behaviour of Optilene® and Infinit®. A phenomenological constitutive law was used to reproduce the measured experimental curves. To analyze the mechanical effect of the meshes once implanted in the abdomen, finite element simulation of the healthy and partially herniated repaired rabbit abdominal wall served to reproduce wall behaviour before and after mesh implant. In all cases, maximal displacements were lower and maximal principal stresses higher in the implanted abdomen than the intact wall model. Despite the fact that no mesh showed a behaviour that perfectly matched that of abdominal muscle, the Infinit® mesh was able to best comply with the biomechanics of the abdominal wall.
Transvaginal Repair of a Large Chronic Porcine Ventral Hernia with Synthetic Mesh Using NOTES
Powell, Ben; Whang, Susan H.; Bachman, Sharon L.; Andres Astudillo, J.; Sporn, Emanuel; Miedema, Brent W.
2010-01-01
Background: Ventral incisional hernias still remain a common surgical problem. We tested the feasibility of transvaginal placement of a large synthetic mesh to repair a porcine hernia. Methods: Seven pigs were used in this survival model. Each animal had creation of a 5-cm hernia defect and underwent a transvaginal repair of the defect with synthetic mesh. A single colpotomy was made using a 12-cm trocar for an overtube. The mesh was cut to size and placed through the trocar. A single-channel gastroscope with an endoscopic atraumatic grasper was used for grasping sutures. Further fascial sutures were placed every 5 cm. Results: Mesh repair was feasible in all 7 animals. Mean operative time was 133 minutes. Technical difficulties were encountered. No gross contamination was seen at the time of necropsy. However, 5 animals had positive mesh cultures; 7 had positive cultures in the rectouterine space in enrichment broth or on direct culture. Conclusion: Transvaginal placement of synthetic mesh to repair a large porcine hernia using NOTES is challenging but feasible. Future studies need to be conducted to develop better techniques and determine the significance of mesh contamination. PMID:20932375
Design and analysis of a deployable truss for the large modular mesh antenna
Meguro, Akira
This paper describes the design and deployment analysis of large deployable modular mesh antennas. Key design criteria are deployability and the driving-force and latching-moment requirements. Reaction forces and moments due to the mesh and cable network seriously influence the driving force. These forces and moments can be precisely estimated by analyzing the cable network using the Cable Structure Analyzer (CASA). Deployment analysis is carried out using the Dynamic Analysis and Design System (DADS). The influence of alignment errors on the driving reaction force can be eliminated by replacing the joint element with a spring element. Joint slop is also modeled using discontinuous spring elements. The design approach for three types of deployable modules and the deployment characteristics of three Bread-Board Models based on those designs are also presented. In order to study gravity effects on the deployment characteristics and the effects of the gravity compensation method, ground deployment analysis is carried out. A planned deployment test that will use aircraft parabolic flight to simulate a micro-gravity environment is also described.
Direct numerical simulation of bubbles with parallelized adaptive mesh refinement
International Nuclear Information System (INIS)
Talpaert, A.
2015-01-01
The study of two-phase thermal-hydraulics is a major topic in nuclear engineering, for both the safety and the efficiency of nuclear facilities. In addition to experiments, numerical modeling helps in knowing precisely where bubbles appear and how they behave, in the core as well as in the steam generators. This work presents the finest scale of representation of two-phase flows, the Direct Numerical Simulation of bubbles. We use the 'Di-phasic Low Mach Number' equation model, which is particularly adapted to low-Mach-number flows, that is to say flows whose velocity is much slower than the speed of sound; this is very typical of nuclear thermal-hydraulics conditions. Because we study bubbles, we capture the front between the vapor and liquid phases thanks to a downward flux-limiting numerical scheme. The specific discrete analysis technique this work introduces is well-balanced parallel Adaptive Mesh Refinement (AMR). With AMR, we refine the coarse grid on a batch of patches in order to locally increase precision in the areas that matter most, and capture fine changes in the front location and its topology. We show that patch-based AMR is well suited for parallel computing. We use a variety of physical examples: forced advection, heat transfer, phase changes represented by a Stefan model, as well as the combination of all those models. We present the results of those numerical simulations, as well as the speed-up compared to equivalent non-AMR simulations and to serial computation of the same problems. This document is made up of an abstract and the slides of the presentation. (author)
A software framework for the portable parallelization of particle-mesh simulations
DEFF Research Database (Denmark)
Sbalzarini, I.F.; Walther, Jens Honore; Polasek, B.
2006-01-01
We present a software framework for the transparent and portable parallelization of simulations using particle-mesh methods. Particles are used to transport physical properties and a mesh is required in order to reinitialize the distorted particle locations, ensuring the convergence...
Go with the Flow. Moving meshes and solution monitoring for compressible flow simulation
van Dam, A.
2009-01-01
The simulation of time-dependent physical problems, such as flows of some kind, places high demands on the domain discretization in order to obtain high accuracy of the numerical solution. We present a moving mesh method in which the mesh points automatically move towards regions where high spatial
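A common concrete realization of such a moving-mesh method is equidistribution of a monitor function: mesh points are placed so each interval carries an equal share of the monitor's integral. The arc-length monitor M = sqrt(1 + (du/dx)^2) used below is a standard illustrative choice, not necessarily the thesis's exact method:

```python
import math

def equidistribute(x, u, n_new):
    """Redistribute 1-D mesh points so each new interval carries an equal
    share of the arc-length monitor M = sqrt(1 + (du/dx)^2), which
    concentrates points where the solution u is steep."""
    # cumulative monitor integral on the old mesh, piecewise constant M
    s = [0.0]
    for i in range(len(x) - 1):
        dudx = (u[i + 1] - u[i]) / (x[i + 1] - x[i])
        s.append(s[-1] + math.sqrt(1.0 + dudx * dudx) * (x[i + 1] - x[i]))
    new_x = []
    for k in range(n_new):
        target = s[-1] * k / (n_new - 1)
        # locate target in the cumulative table and invert linearly
        j = max(i for i in range(len(s)) if s[i] <= target + 1e-15)
        if j == len(x) - 1:
            new_x.append(x[-1])
        else:
            frac = (target - s[j]) / (s[j + 1] - s[j])
            new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    return new_x
```

For a linear solution the monitor is constant and the redistributed mesh stays uniform; for a solution with a sharp front the points cluster around the front, which is exactly the behaviour a moving-mesh flow solver exploits.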
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
How MESSENGER Meshes Simulations and Games with Citizen Science
Hirshon, B.; Chapman, C. R.; Edmonds, J.; Goldstein, J.; Hallau, K. G.; Solomon, S. C.; Vanhala, H.; Weir, H. M.; Messenger Education; Public Outreach (Epo) Team
2010-12-01
In the film The Last Starfighter, an alien civilization grooms their future champion—a kid on Earth—using a video game. As he gains proficiency in the game, he masters the skills he needs to pilot a starship and save their civilization. The NASA MESSENGER Education and Public Outreach (EPO) Team is using the same tactic to train citizen scientists to help the Science Team explore the planet Mercury. We are building a new series of games that appear to be designed primarily for fun, but that guide players through a knowledge and skill set that they will need for future science missions in support of MESSENGER mission scientists. As players score points, they gain expertise. Once they achieve a sufficiently high score, they will be invited to become participants in Mercury Zoo, a new program being designed by Zooniverse. Zooniverse created Galaxy Zoo and Moon Zoo, programs that allow interested citizens to participate in the exploration and interpretation of galaxy and lunar data. Scientists use the citizen interpretations to further refine their exploration of the same data, thereby narrowing their focus and saving precious time. Mercury Zoo will be designed with input from the MESSENGER Science Team. This project will not only support the MESSENGER mission, but it will also add to the growing cadre of informed members of the public available to help with other citizen science projects—building on the concept that engaged, informed citizens can help scientists make new discoveries. The MESSENGER EPO Team comprises individuals from the American Association for the Advancement of Science (AAAS); Carnegie Academy for Science Education (CASE); Center for Educational Resources (CERES) at Montana State University (MSU) - Bozeman; National Center for Earth and Space Science Education (NCESSE); Johns Hopkins University Applied Physics Laboratory (JHU/APL); National Air and Space Museum (NASM); Science
National Research Council Canada - National Science Library
Litvin, F
1999-01-01
An integrated tooth contact analysis (TCA) computer program for the simulation of meshing and contact of gear drives, which calculates transmission errors and the shift of bearing contact for misaligned gear drives, has been developed...
Optimization-based Fluid Simulation on Unstructured Meshes
DEFF Research Database (Denmark)
Misztal, Marek Krzysztof; Bridson, Robert; Erleben, Kenny
2010-01-01
for solving the fluid dynamics equations as well as direct access to the interface geometry data, making in- clusion of a new surface energy term feasible. Furthermore, using an unstructured mesh makes it straightforward to handle curved solid boundaries and gives us a possibility to explore several fluid...
Parallel Performance Optimizations on Unstructured Mesh-based Simulations
Energy Technology Data Exchange (ETDEWEB)
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid
2015-01-01
© The Authors. Published by Elsevier B.V. This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data from runs on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
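One widely used family of cache-friendly orderings for unstructured meshes is the space-filling-curve layout. The Python sketch below sorts cells by Morton (Z-order) key so that spatially adjacent cells end up close in memory; it is an illustrative assumption about what such "predictive ordering" can look like, not MPAS-Ocean's actual scheme, and the names `morton_key` and `reorder_cells` are hypothetical.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of quantized x/y coordinates (Z-order curve)."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return key

def reorder_cells(centroids):
    """Return a cell permutation sorted by Morton key, so spatially close
    cells sit close in memory (better cache reuse when looping over them)."""
    xs = [c[0] for c in centroids]
    ys = [c[1] for c in centroids]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    scale = (1 << 16) - 1

    def quant(v, lo, hi):
        return int((v - lo) / (hi - lo) * scale) if hi > lo else 0

    keyed = [(morton_key(quant(x, xmin, xmax), quant(y, ymin, ymax)), i)
             for i, (x, y) in enumerate(centroids)]
    keyed.sort()
    return [i for _, i in keyed]
```

Neighbouring cells in the mesh then tend to share cache lines when iterated in the new order, which is the effect the paper's reordering targets.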
The numerical simulation study of hemodynamics of the new dense-mesh stent
Ma, Jiali; Yuan, Zhishan; Yu, Xuebao; Feng, Zhaowei; Miao, Weidong; Xu, Xueli; Li, Juntao
2017-09-01
The treatment of aortic aneurysms with the new dense-mesh stent is based on the principle of hemodynamic changes, but the mechanism is not yet fully understood. This paper analyzed and calculated the hemodynamics before and after implantation of the new dense-mesh stent by numerical simulation. The results show that the dense-mesh stent changed the blood flow in the aortic aneurysm: blood velocity, pressure and shear forces decreased significantly while blood supply to the branches was preserved, which clarifies the new dense-mesh stent's hemodynamic mechanism in the treatment of aortic aneurysm. This is of great significance for developing new dense-mesh stents to treat aortic aneurysms.
Energy Technology Data Exchange (ETDEWEB)
Balaven-Clermidy, S.
2001-12-01
Oil reservoir simulations study multiphase flows in porous media. These flows are described and evaluated through numerical schemes on a discretization of the reservoir domain. In this thesis, we were interested in this spatial discretization, and a new kind of hybrid mesh has been proposed in which the radial nature of flows in the vicinity of wells is directly taken into account in the geometry. Our modular approach describes wells and their drainage area through radial circular meshes. These well meshes are inserted in a structured reservoir mesh (a Corner Point Geometry mesh) made up of hexahedral cells. Finally, in order to generate a globally conforming mesh, proper connections are realized between the different kinds of meshes through unstructured transition meshes. To compute these transition meshes, which must be acceptable in terms of finite volume methods, an automatic method based on power diagrams has been developed. Our approach can deal with a homogeneous anisotropic medium and allows the user to insert vertical or horizontal wells as well as secondary faults in the reservoir mesh. Our work has been implemented, tested and validated in 2D and 2.5D. It can also be extended to 3D when the geometrical constraints are simplicial: points, segments and triangles. (author)
Cell-centered particle weighting algorithm for PIC simulations in a non-uniform 2D axisymmetric mesh
Araki, Samuel J.; Wirz, Richard E.
2014-09-01
Standard area weighting methods for particle-in-cell simulations result in systematic errors in particle densities on a non-uniform mesh in cylindrical coordinates. These errors can be significantly reduced by using weighted cell volumes for density calculations. A detailed description of the corrected volume calculations and the cell-centered weighting algorithm on a non-uniform mesh is provided. The simple formulas for the corrected volume can be used for any type of quadrilateral and/or triangular mesh in cylindrical coordinates. Density errors arising from the cell-centered weighting algorithm are computed for uniform, linearly decreasing and Bessel-function radial density profiles in an adaptive Cartesian mesh and an unstructured mesh. For all the density profiles, it is shown that the weighting algorithm provides a significant improvement in density calculations. However, relatively large density errors may persist in the outermost cells for monotonically decreasing density profiles. A further analysis has been performed to investigate the effect of the density errors on potential calculations, and it is shown that the error at the outermost cell does not propagate into the potential solution for the density profiles investigated.
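The volume correction matters in cylindrical coordinates because an axisymmetric cell's volume grows with radius, so a Cartesian-style area weighting misestimates densities. As a minimal illustration (not the paper's full corrected-volume formulas, and with hypothetical function names), the exact volume of an annular r-z cell and the resulting number density are:

```python
import math

def annular_cell_volume(r_in, r_out, dz):
    """Exact volume of an axisymmetric (r-z) cell: pi * (r_out^2 - r_in^2) * dz.
    Note the volume depends on where the cell sits radially, not only on its
    extents dr and dz, which is why naive Cartesian weighting goes wrong."""
    return math.pi * (r_out ** 2 - r_in ** 2) * dz

def cell_density(weights, r_in, r_out, dz):
    """Number density from deposited particle weights and the exact cell volume."""
    return sum(weights) / annular_cell_volume(r_in, r_out, dz)
```

The paper's cell-centered algorithm refines this further for non-uniform quadrilateral and triangular cells, but the radius dependence above is the basic effect being corrected.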
Gear selectivity of large-mesh nets and drumlines used to catch ...
African Journals Online (AJOL)
Catches of sharks and bycatch in large-mesh nets and baited drumlines used by the Queensland Shark Control Program were examined to determine the efficacy of both gear types and assess fishing strategies that minimise their impacts. There were few significant differences in the size of both sharks and bycatch in the ...
Biastoch, Arne; Sein, Dmitry; Durgadoo, Jonathan V.; Wang, Qiang; Danilov, Sergey
2018-01-01
Many questions in ocean and climate modelling require the combined use of high resolution, global coverage and multi-decadal integration length. For this combination, even modern resources limit the use of traditional structured-mesh grids. Here we compare two approaches: a high-resolution grid nested into a global model at coarser resolution (NEMO with AGRIF) and an unstructured-mesh grid (FESOM), which allows resolution to be variably enhanced where desired. The Agulhas system around South Africa is used as a test case, providing an energetic interplay of a strong western boundary current and mesoscale dynamics. Its open setting within the horizontal and global overturning circulations also requires global coverage. Both model configurations simulate a reasonable large-scale circulation. The distribution and temporal variability of the wind-driven circulation are quite comparable due to the same atmospheric forcing. However, the overturning circulation differs, owing to each model's ability to represent the formation and spreading of deep water masses. In terms of regional, high-resolution dynamics, all elements of the Agulhas system are well represented. Owing to the strong nonlinearity in the system, the Agulhas Current transports of the two configurations differ in strength and temporal variability, both from each other and from observations. Similar decadal trends in Agulhas Current transport and Agulhas leakage are linked to trends in the wind forcing.
Carvalho, Sílvia C. P.; de Lima, João L. M. P.; de Lima, M. Isabel P.
2013-04-01
Rainfall simulators can be a powerful tool to increase our understanding of hydrological and geomorphological processes. Nevertheless, the design and operation of rainfall simulators can be rather demanding when specific rainfall intensity distributions and drop characteristics have to be achieved. Pressurized simulators have some advantages over non-pressurized simulators: drops do not rely on gravity to reach terminal velocity but are sprayed out under pressure, and pressurized simulators also yield a broad range of drop sizes in comparison with drop-former simulators. The main purpose of this study was to explore in the laboratory the potential of combining spray nozzle simulators with meshes in order to change rainfall characteristics (rainfall intensity and the diameters and fall speeds of drops). Different types of spray nozzles were tested, such as single full-cone and multiple full-cone nozzles. The impact of the meshes on the simulated rain was studied by testing different materials (i.e. plastic and steel meshes), square apertures and wire thicknesses, and different vertical distances between the nozzle and the meshes underneath. The diameter and fall speed of the raindrops were measured using a Laser Precipitation Monitor (Thies Clima). The rainfall intensity range, the coefficients of uniformity of the sprays, and the drop size distribution, fall speed and kinetic energy were analysed. Results show that when meshes intercept drop trajectories, the spatial distribution of rainfall intensity and the drop size distribution are affected. As the spray nozzles typically generate small drop sizes and narrow drop size distributions, meshes can be used to promote the formation of bigger drops and randomize their landing positions.
Documentation for MeshKit - Reactor Geometry (&mesh) Generator
Energy Technology Data Exchange (ETDEWEB)
Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-09-30
This report gives documentation for using MeshKit's Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in the modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne's SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v2.0 interface and provide users with the knowledge and skills to use RGG successfully. Brief documentation of the MeshKit source code, tools and other available algorithms is also presented for developers who wish to extend MeshKit and add new algorithms. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousand axially varying material properties of instrumentation pins, and other interstitial meshes.
LARGE BUILDING HVAC SIMULATION
The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...
Numerical simulation of deformation of dynamic mesh in the human vocal tract model
Directory of Open Access Journals (Sweden)
Řidký Václav
2015-01-01
Full Text Available Numerical simulation of acoustic signal generation in the human vocal tract is a very complex problem. The computational mesh is not static; it is deformed due to the vibration of the vocal folds. The movement of the vocal folds is in this case prescribed as a function of translation and rotation. A new boundary condition for the 2DOF motion of the vocal folds was implemented in OpenFOAM, an open-source software package based on the finite volume method. The work is focused on the dynamic mesh and the deformation of structured meshes in the OpenFOAM computational package. These methods are compared with a focus on the quality of the mesh (non-orthogonality, aspect ratio and skewness).
Numerical form-finding method for large mesh reflectors with elastic rim trusses
Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli
2018-06-01
Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Due to the large aperture, light weight and high accuracy requirements on spaceborne reflectors, the rim truss deformation is in fact not negligible. In order to design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. Then, the form-finding method is embedded into an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the form-finding of the cable net with boundary conditions updated for the rim truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical one.
Nyx: Adaptive mesh, massively-parallel, cosmological simulation code
Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun
2017-12-01
The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using the Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
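The Cloud-in-Cell deposition step of a particle-mesh method can be illustrated with a one-dimensional sketch. This is the textbook CIC scheme, not Nyx's actual 3-D adaptive-grid implementation; the function name and the periodic-domain assumption are illustrative.

```python
def cic_deposit(positions, weights, n_cells, dx):
    """1D Cloud-in-Cell deposition: each particle's weight is shared
    linearly between the two nearest grid points (periodic domain)."""
    grid = [0.0] * n_cells
    for x, w in zip(positions, weights):
        s = x / dx              # particle position in cell units
        i = int(s) % n_cells    # index of the left grid point
        f = s - int(s)          # fractional offset within the cell
        grid[i] += w * (1.0 - f)
        grid[(i + 1) % n_cells] += w * f
    return grid
```

Because the two fractions sum to one, the total deposited weight equals the total particle weight, which is the mass-conservation property that makes CIC attractive for gravity solves.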
Numerical simulation of 3D unsteady flow in a rotating pump by dynamic mesh technique
International Nuclear Information System (INIS)
Huang, S; Guo, J; Yang, F X
2013-01-01
In this paper, numerical simulations of unsteady flow in three typical kinds of rotating pumps, a roots blower, a roto-jet pump and a centrifugal pump, were performed using the three-dimensional Dynamic Mesh technique. In the unsteady simulations, all the computational domains were set as stationary in one inertial reference frame. The motions of the solid boundaries were defined by Profile files in the FLUENT commercial code, in which the rotational orientation and speed of the rotors were specified. Three methods (Spring-based Smoothing, Dynamic Layering and Local Re-meshing) were used to achieve mesh deformation and re-meshing. The unsteady solutions of the flow field and pressure distribution were obtained. After a start-up stage, the flow parameters exhibit time-periodic behaviour corresponding to the blade passing frequency of the rotor. This work shows that the Dynamic Mesh technique can achieve numerical simulation of three-dimensional unsteady flow fields in various kinds of rotating pumps and has strong versatility and broad application prospects.
Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes
Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.
2015-01-01
Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method, due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space, has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems of increasing complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a mesh that is relatively coarse by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks co-exist with unsteady waves displaying a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the
Large-Eddy Simulation of Subsonic Jets
International Nuclear Information System (INIS)
Vuorinen, Ville; Wehrfritz, Armin; Yu Jingzhou; Kaario, Ossi; Larmi, Martti; Boersma, Bendiks Jan
2011-01-01
The present study deals with the development and validation of a fully explicit, compressible Runge-Kutta-4 (RK4) Navier-Stokes solver in the open-source CFD programming environment OpenFOAM. The background motivation is to shift towards an explicit density-based solution strategy and thereby avoid the pressure-based algorithms which are currently proposed in the standard OpenFOAM release for Large-Eddy Simulation (LES). This shift is considered necessary in strongly compressible flows when Ma > 0.5. Our application of interest is related to the pre-mixing stage in direct injection gas engines, where high injection pressures are typically utilized. First, the developed flow solver is discussed and validated. Then, the implementation of subsonic inflow conditions using a forcing region in combination with a simplified nozzle geometry is discussed and validated. After this, LES of mixing in compressible, round jets at Ma = 0.3, 0.5 and 0.65 are carried out. The Reynolds numbers of the jets correspond to Re = 6000, 10000 and 13000, respectively. Results for two meshes are presented. The results imply that the present solver produces turbulent structures, resolves a range of turbulent eddy frequencies, and also gives mesh-independent results within satisfactory limits for mean flow and turbulence statistics.
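The classical fourth-order Runge-Kutta step at the heart of such an explicit solver can be sketched generically. The scalar form below is a minimal illustration; the actual solver advances full conservative flow fields rather than a single variable.

```python
def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u).
    Four stage evaluations are blended with weights 1/6, 2/6, 2/6, 1/6."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Because the scheme is fully explicit, no pressure equation needs to be solved per step, which is the density-based strategy the abstract advocates for Ma > 0.5 flows (at the cost of an acoustic time-step restriction).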
Enhanced Mesh-Free Simulation of Regolith Flow, Phase I
National Aeronautics and Space Administration — NASA needs simulation tools capable of predicting the behavior of regolith in proposed excavation, transport, and handling or sample acquisition systems. For...
Impact of mesh and DEM resolutions in SEM simulation of 3D seismic response
Khan, Saad; van der Meijde, M.; van der Werff, H.M.A.; Shafique, Muhammad
2017-01-01
This study shows that the resolution of a digital elevation model (DEM) and model mesh strongly influences 3D simulations of seismic response. Topographic heterogeneity scatters seismic waves and causes variation in seismic response (amplification and deamplification of seismic amplitudes) at the
Directory of Open Access Journals (Sweden)
P. Chatelain
2017-06-01
Full Text Available A vortex particle-mesh (VPM method with immersed lifting lines has been developed and validated. Based on the vorticity–velocity formulation of the Navier–Stokes equations, it combines the advantages of a particle method and of a mesh-based approach. The immersed lifting lines handle the creation of vorticity from the blade elements and its early development. Large-eddy simulation (LES of vertical axis wind turbine (VAWT flows is performed. The complex wake development is captured in detail and over up to 15 diameters downstream: from the blades to the near-wake coherent vortices and then through the transitional ones to the fully developed turbulent far wake (beyond 10 rotor diameters. The statistics and topology of the mean flow are studied. The computational sizes also allow insights into the detailed unsteady vortex dynamics and topological flow features, such as a recirculation region influenced by the tip speed ratio and the rotor geometry.
Liu, Peter X.; Lai, Pinhua; Xu, Shaoping; Zou, Yanni
2018-01-01
At present, the majority of implemented virtual surgery simulation systems are based on either a mesh or a meshless strategy for soft tissue modelling. To take full advantage of both mesh and meshless models, a novel coupled soft tissue cutting model is proposed. Specifically, the reconstructed virtual soft tissue consists of two essential components. One is associated with a surface mesh that is convenient for surface rendering, and the other with internal meshless point elements that are used to calculate the force feedback during cutting. To combine the two components in a seamless way, virtual points are introduced. During the simulation of cutting, a Bezier curve is used to characterize a smooth and vivid incision on the surface mesh. At the same time, the deformation of internal soft tissue caused by the cutting operation can be treated as displacements of the internal point elements. Furthermore, we discussed and proved the stability and convergence of the proposed approach theoretically. Real biomechanical tests verified the validity of the introduced model. The simulation experiments show that the proposed approach offers high computational efficiency and good visual effect, enabling cutting of soft tissue with high stability. PMID:29850006
Jacob, Dietmar A; Schug-Pass, Christine; Sommerer, Florian; Tannapfel, Andrea; Lippert, Hans; Köckerling, Ferdinand
2012-02-01
The use of a mesh with good biocompatibility properties is of decisive importance for the avoidance of recurrences and chronic pain in endoscopic hernia repair surgery. As we know from numerous experiments and clinical experience, large-pore, lightweight polypropylene meshes possess the best biocompatibility. However, large-pore meshes of different polymers may be used as well and might be an alternative solution. Utilizing a totally extraperitoneal technique in an established animal model, 20 domestic pigs were implanted with either a lightweight large-pore polypropylene (PP) mesh (Optilene® LP) or a medium-weight large-pore knitted polytetrafluorethylene (PTFE) mesh (GORE® INFINIT® mesh). After 94 days, the pigs were sacrificed and postmortem diagnostic laparoscopy was performed, followed by explantation of the specimens for macroscopic, histological and immunohistochemical evaluation. The mean mesh shrinkage rate was 14.2% for Optilene® LP vs. 24.7% for INFINIT® mesh (p = 0.017). The partial volume of the inflammatory cells was 11.2% for Optilene® LP vs. 13.9% for INFINIT (n.s.). CD68 was significantly higher for INFINIT (11.8% vs. 5.6%, p = 0.007). The markers of cell turnover, namely Ki67 and the apoptotic index, were comparable at 6.4% vs. 12.4% (n.s.) and 1.6% vs. 2.0% (n.s.). In the extracellular matrix, TGF-β was 35.4% for Optilene® LP and 31.0% for INFINIT® (n.s.). Collagen I (pos/300 μm) deposits were 117.8 and 114.9, respectively. In our experimental examinations, Optilene® LP and INFINIT® showed a comparable biocompatibility in terms of chronic inflammatory reaction; however, the shrinkage rate was significantly higher for INFINIT® after 3 months. The higher shrinkage rate of INFINIT® should be taken into account when choosing the mesh size for an adequate hernia overlap.
Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.
2017-12-01
It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near-wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretized using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and the fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretize space, and the spatial location of the well is specified via a line vector, ensuring its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields such as pressure, velocity or temperature (preserving the geometry of the geologic domains); this also increases the quality of the solutions by placing higher resolution where required.
International Nuclear Information System (INIS)
Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.
2011-01-01
Cosmological shocks are a critical part of large-scale structure formation and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics: extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.
Fuel-steel mixing and radial mesh effects in power excursion simulations
International Nuclear Information System (INIS)
Chen, X.-N.; Rineiski, A.; Gabrielli, F.; Andriolo, L.; Vezzoni, B.; Li, R.; Maschek, W.; Kiefhaber, E.
2016-01-01
Highlights: • Fuel-steel mixing and radial mesh effects have a significant impact on the power excursion. • The earliest power peak is reduced and delayed by these two effects. • Unprotected loss-of-flow transients in the ESFR core are calculated. - Abstract: This paper deals with SIMMER-III once-through simulations of the earliest power excursion initiated by an unprotected loss of flow (ULOF) in the Working Horse design of the European Sodium Cooled Fast Reactor (ESFR). Since the sodium void effect is strictly positive in this core and dominant in the transient, a power excursion is initiated by sodium boiling in the ULOF case. Two major effects, namely (1) reactivity effects due to fuel-steel mixing after melting and (2) the radial mesh size, which were not considered originally in SIMMER simulations for the ESFR, are studied. The first effect concerns the reactivity difference between the heterogeneous fuel/clad/wrapper configuration and the homogeneous mixture of steel and fuel. The full-core homogenization (due to melting) effect is −2 $, though a smaller effect takes place in case of partial core melting. The second effect is due to the SIMMER sub-assembly (SA) coarse-mesh treatment, in which a simultaneous sodium boiling onset in all SAs belonging to one ring leads to an overestimated reactivity ramp. For investigating the influence of fuel/steel mixing effects, a lumped "homogenization" reactivity feedback proportional to the molten steel mass has been introduced. For improving the coarse-mesh treatment, we employ finer radial meshes to take subchannel effects into account, where the side and interior channels have different coolant velocities and temperatures. The simulation results show that these two effects have significant impacts on the earliest power excursion after the onset of sodium boiling.
Large Eddy Simulation of turbulence
International Nuclear Information System (INIS)
Poullet, P.; Sancandi, M.
1994-12-01
Results of Large Eddy Simulations of 3D isotropic homogeneous turbulent flows are presented. A computer code developed on the Connection Machine (CM5) has allowed the comparison of two turbulent viscosity models (Smagorinsky and structure function). The influence of the numerical scheme on the energy density spectrum is also studied [fr]
Simulated Annealing Technique for Routing in a Rectangular Mesh Network
Directory of Open Access Journals (Sweden)
Noraziah Adzhar
2014-01-01
Full Text Available In the process of automatic design for printed circuit boards (PCBs), the phase following cell placement is routing. Routing, however, is a notoriously difficult problem; even the simplest routing problem, which consists of a set of two-pin nets, is known to be NP-complete. In this research, the routing region is first tessellated into a uniform Nx × Ny array of square cells. The ultimate goal of a routing problem is to achieve complete automatic routing with minimal need for manual intervention. Therefore, shortest paths for all connections need to be established. While the classical Dijkstra algorithm is guaranteed to find the shortest path for a single net, each routed net forms an obstacle for later paths. This adds complexity to routing later nets and makes their routes longer than the optimal path, or sometimes impossible to complete. Today's sequential routing often applies a heuristic method to further refine the solution. Through this process, all nets are rerouted in a different order to improve the quality of the routing. Because of this, we are motivated to apply simulated annealing, one of the metaheuristic methods, to our routing model to produce better candidate sequences.
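A minimal simulated-annealing skeleton for the net-ordering problem described above might look as follows. The cost function, move set (random pair swaps) and geometric cooling schedule are illustrative assumptions, not the paper's exact choices; in practice the cost would be the total wire length obtained by routing the nets in the given order.

```python
import math
import random

def anneal_order(nets, cost, t0=10.0, cooling=0.95, steps=500):
    """Simulated annealing over net orderings: propose a random swap,
    always accept improvements, accept worse orderings with probability
    exp(-delta/T), and cool the temperature T geometrically."""
    order = list(nets)
    best, best_cost = list(order), cost(order)
    cur_cost, T = best_cost, t0
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]   # propose a swap
        new_cost = cost(order)
        delta = new_cost - cur_cost
        if delta <= 0 or random.random() < math.exp(-delta / T):
            cur_cost = new_cost                   # accept the move
            if cur_cost < best_cost:
                best, best_cost = list(order), cur_cost
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        T *= cooling
    return best, best_cost
```

Accepting occasional uphill moves is what lets the search escape the local minima that a greedy reroute-in-fixed-order scheme gets trapped in.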
Mesh-based weight window approach for Monte Carlo simulation
International Nuclear Information System (INIS)
Liu, L.; Gardner, R.P.
1997-01-01
The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and then the map is used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback
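The splitting and Russian-roulette games played against a weight window can be sketched as follows. This is a textbook formulation with hypothetical names; MCNP's actual implementation differs in detail (e.g. in how the survival weight and the number of splits are chosen).

```python
import random

def apply_weight_window(weight, w_low, w_high, survival=None):
    """Weight-window game for one particle in one cell of the importance map:
    split particles whose weight is above the window, play Russian roulette
    below it, and pass in-window particles through unchanged.
    Returns the list of surviving particle weights (possibly empty)."""
    if weight > w_high:
        n = int(weight / w_high) + 1       # split into n lighter copies
        return [weight / n] * n
    if weight < w_low:
        target = survival if survival is not None else w_low
        if random.random() < weight / target:
            return [target]                # survives roulette with raised weight
        return []                          # killed
    return [weight]
```

Both games preserve the expected weight (splitting exactly, roulette on average), which is why the estimate stays unbiased while the variance drops.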
Caviedes-Voullième, Daniel; García-Navarro, Pilar; Murillo, Javier
2012-07-01
Hydrological simulation of rain-runoff processes is often performed with lumped models which rely on calibration to generate storm hydrographs and study catchment response to rain. In this paper, a distributed, physically-based numerical model is used for runoff simulation in a mountain catchment. This approach offers two advantages. The first is that by using the shallow-water equations for runoff flow, there is less freedom to calibrate routing parameters (compared to, for example, synthetic hydrograph methods). The second is that spatial distributions of water depth and velocity can be obtained. Furthermore, interactions among the various hydrological processes can be modeled in a physically-based approach which may depend on transient and spatially distributed factors. On the other hand, this numerical approach relies on accurate terrain representation and mesh selection, which also significantly affect the computational cost of the simulations. Hence, we investigate the response of a gauged catchment with this distributed approach. The methodology consists of analyzing the effects that the mesh has on the simulations by using a range of meshes. Next, friction is introduced into the model and the response to its variation and interaction with the mesh is studied. Finally, a first approach with the well-known SCS Curve Number method is studied to evaluate its behavior when coupled with a shallow-water model for runoff flow. The results show that mesh selection is of great importance, since it may affect the results in a magnitude as large as that of physical factors such as friction. Furthermore, the results proved to be less sensitive to the spatial distribution of roughness than to mesh properties. Finally, the results indicate that SCS-CN may not be suitable for simulating hydrological processes together with a shallow-water model.
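The SCS Curve Number method mentioned above has a closed-form expression for direct runoff depth. A minimal sketch follows, using the classical initial-abstraction ratio of 0.2; the units and parameter names are the textbook convention, not values taken from this paper.

```python
def scs_runoff(P, CN, lam=0.2):
    """SCS-CN direct runoff depth Q (mm) for storm rainfall P (mm).
    S is the potential maximum retention and Ia = lam * S the initial
    abstraction (lam = 0.2 in the classical formulation)."""
    S = 25400.0 / CN - 254.0        # S in mm, for CN in roughly [30, 100]
    Ia = lam * S
    if P <= Ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)
```

For example, CN = 100 (impervious) returns all rainfall as runoff, while moderate curve numbers abstract a substantial fraction of the storm depth.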
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore
2014-01-01
, unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence, and eventual destruction of a single vortex ring. From the simulation data, a novel method for analyzing the dynamics of the enstrophy is presented, based on the alignment of the vorticity vector...... with the principal axis of the strain rate tensor. We find that the dynamics of the enstrophy density is dominated by the local flow deformation and axis of rotation, which is used to infer some concrete tendencies related to the topology of the vorticity field....
Large-eddy simulation of flow over a cylinder with from to : a skin-friction perspective
Cheng, Wan; Pullin, D. I.; Samtaney, Ravi; Zhang, W.; Gao, Wei
2017-01-01
, numerical discretization fluctuations are sufficient to stimulate transition, while for higher resolution, an applied boundary-layer perturbation is found to be necessary to stimulate transition. Large-eddy simulation results at , with a mesh of , agree well
Large eddy simulation of vortex breakdown behind a delta wing
International Nuclear Information System (INIS)
Mary, I.
2003-01-01
A large eddy simulation (LES) of a turbulent flow past a 70 deg. sweep-angle delta wing is performed and compared with wind tunnel experiments. The angle of attack and the Reynolds number based on the root chord are equal to 27 deg. and 1.6×10⁶, respectively. Due to the high value of the Reynolds number and the three-dimensional geometry, the mesh resolution usually required by LES cannot be reached. Therefore a local mesh refinement technique based on semi-structured grids is proposed, and different wall functions are assessed in this paper. The goal is to evaluate whether these techniques are sufficient to provide an accurate solution of such a flow on available supercomputers. An implicit Miles model is retained for the subgrid scale (SGS) modelling because the resolution is too coarse to take advantage of more sophisticated SGS models. The sensitivity of the solution to grid refinement in the streamwise and wall-normal directions is investigated.
In vitro bioactivity of 3D Ti-mesh with bioceramic coatings in simulated body fluid
Directory of Open Access Journals (Sweden)
Wei Yi
2014-09-01
Full Text Available 3D Ti-mesh has been coated with bioceramics under different coating conditions, such as material composition and micro-porosity, using a dip casting method. Hydroxyapatite (HA), micro-HA particles (HAp), a bioglass (BG) and their different mixtures, together with polymer additives, were used to control HA-coating microstructures. Layered composites with the following coating-to-substrate designs were fabricated: BG/Ti, HA + BG/BG/Ti and HAp + BG/BG/Ti. The bioactivity of these coated composites and of the uncoated Ti-mesh substrate was then investigated in a simulated body fluid (SBF). The Ti-mesh substrate and the BG/Ti composite did not induce biomimetic apatite deposition when immersed in SBF, for the selected BG (a pressable dental ceramic) used in this study. After seven days in SBF, an apatite layer had formed on both the HA + BG/BG/Ti and HAp + BG/BG/Ti composites. The difference is that the apatite layer on the HAp + BG/BG/Ti composite was rougher and contained more micro-pores, while the apatite layer on the HA + BG/BG/Ti composite was dense and smooth. The formation of biomimetic apatite that is more bioresorbable is favorable for bone regeneration.
Numerical simulation of the laminar hydrogen flame in the presence of a quenching mesh
International Nuclear Information System (INIS)
Kudriakov, S.; Studer, E.; Bin, C.
2011-01-01
Recent studies by J.H. Song et al. and S.Y. Yang et al. have concentrated on mitigation measures against hydrogen risk. The authors have proposed the installation of quenching meshes between compartments or around essential equipment in order to contain hydrogen flames. Preliminary tests demonstrated the possibility of flame extinction using metallic meshes of a specific size. A considerable amount of numerical and theoretical work on the flame quenching phenomenon was performed in the second half of the last century, and several techniques and models have been proposed to predict the quenching of a laminar flame system. Most of these models recognize the importance of heat loss to the surroundings as a primary cause of extinguishment, in particular heat transfer by conduction to the containing wall. The supporting simulations predict the flame-quenching structure either between parallel plates (quenching distance) or inside a tube of a certain diameter (quenching diameter). In the present study, flame quenching is investigated for a laminar hydrogen flame propagating towards a quenching mesh, using a two-dimensional configuration and the previously developed models. It is shown that, due to heat loss to the metallic grid, the flame can be quenched numerically. (authors)
Simulation study for high resolution alpha particle spectrometry with mesh type collimator
International Nuclear Information System (INIS)
Park, Seunghoon; Kwak, Sungwoo; Kang, Hanbyeol; Shin, Jungki; Park, Iljin
2014-01-01
Alpha particle spectrometry with a mesh-type collimator plays a crucial role in identifying specific radionuclides in a radioactive source collected from the atmosphere or environment. Without collimation, the energy resolution is degraded because particles emitted at a high angle have a longer path to travel in the air, so collisions with the background increase. The collimator cuts out particles traveling at a high angle; as a result, an energy distribution with high resolution can be obtained. A mesh-type collimator is therefore simulated for high-resolution alpha particle spectrometry. In conclusion, the collimator can improve resolution: by cutting out particles with a high angle, the low-energy tail and the broadening of the energy distribution are reduced. The mesh diameter is found to be an important factor controlling resolution and counting efficiency. A target nuclide, for example 235U, can thus be distinguished by a detector with a collimator in a mixture of various nuclides such as 232U, 238U, and 232Th.
Audette, M. A.; Hertel, I.; Burgert, O.; Strauss, G.
This paper presents on-going work on a method for determining which subvolumes of a patient-specific tissue map, extracted from CT data of the head, are relevant to simulating endoscopic sinus surgery of that individual, and for decomposing these relevant tissues into triangles and tetrahedra whose mesh size is well controlled. The overall goal is to limit the complexity of the real-time biomechanical interaction while ensuring the clinical relevance of the simulation. Relevant tissues are determined as the union of the pathology present in the patient, of critical tissues deemed to be near the intended surgical path or pathology, and of bone and soft tissue near the intended path, pathology or critical tissues. The processing of tissues, prior to meshing, is based on the Fast Marching method applied under various guises, in a conditional manner that is related to tissue classes. The meshing is based on an adaptation of a meshing method of ours, which combines the Marching Tetrahedra method and the discrete Simplex mesh surface model to produce a topologically faithful surface mesh with well controlled edge and face size as a first stage, and Almost-regular Tetrahedralization of the same prescribed mesh size as a last stage.
Directory of Open Access Journals (Sweden)
Gang-ge CHENG
2013-09-01
Full Text Available Objective: To explore the clinical effect and surgical technique of repairing large defects involving the frontal, temporal, and parietal regions using digitally reconstructed titanium mesh. Methods: Twenty patients with large frontal, temporal, and parietal skull defects hospitalized in the Air Force General Hospital from November 2006 to May 2012 were involved in this study. Of these 20 patients, 13 were male and 7 female, aged 18-58 years (mean 39 years), and the defect size measured from 7.0cm×9.0cm to 11.5cm×14.0cm (mean 8.5cm×12.0cm). Spiral CT head scans and digital three-dimensional reconstruction of the skull were performed in all patients. The shape and geometric size of the skull defect was traced based on the symmetry principle, the data were then transferred to a digital precision lathe to reconstruct a titanium mesh slightly larger (by 1.0-1.5cm) than the skull defect, and finally the prosthesis was perfected by pruning the border. Cranioplasty was performed 6-12 months after craniotomy using the digitally reconstructed titanium mesh. Results: The digitally reconstructed titanium mesh was used in 20 patients with large frontal, temporal, and parietal skull defects. The surgical technique was relatively simple, and the surgical duration was shorter than before. The titanium mesh fitted the skull defect accurately, with a satisfactory molding effect, good appearance, and symmetrical shape. No related complications were found in any patient. Conclusion: Repair of large frontal, temporal, and parietal skull defects with digitally reconstructed titanium mesh is more advantageous than traditional manual reconstruction, and it can improve patients' quality of life.
Dynamic Mesh CFD Simulations of Orion Parachute Pendulum Motion During Atmospheric Entry
Halstrom, Logan D.; Schwing, Alan M.; Robinson, Stephen K.
2016-01-01
This paper demonstrates the use of computational fluid dynamics to study the effects of the pendulum motion dynamics of NASA's Orion Multi-Purpose Crew Vehicle parachute system on the stability of the vehicle's atmospheric entry and descent. Significant computational fluid dynamics testing has already been performed at NASA's Johnson Space Center, but this study sought to investigate the effect of bulk motion of the parachute, such as pitching, on the induced aerodynamic forces. Simulations were performed with a moving grid geometry oscillating according to the parameters observed in flight tests. As with the previous simulations, the OVERFLOW computational fluid dynamics tool is used with the assumption of rigid, non-permeable geometry. A comparison to parachute wind tunnel tests is included as a preliminary validation of the dynamic mesh model. Results show qualitative differences in the flow fields of the static and dynamic simulations and quantitative differences in the induced aerodynamic forces, suggesting that dynamic mesh modeling of the parachute pendulum motion may uncover additional dynamic effects.
Large Eddy Simulation of Turbulent Flows in Wind Energy
DEFF Research Database (Denmark)
Chivaee, Hamid Sarlak
This research is devoted to Large Eddy Simulation (LES), and to a lesser extent, wind tunnel measurements of turbulent flows in wind energy. It starts with an introduction to the LES technique associated with the solution of the incompressible Navier-Stokes equations, discretized using a finite......, should the mesh resolution, numerical discretization scheme, time averaging period, and domain size be chosen wisely. A thorough investigation of the wind turbine wake interactions is also conducted and the simulations are validated against available experimental data from external sources. The effect...... Reynolds numbers, and thereafter, fully-developed infinite wind farm boundary layer simulations are performed. Sources of inaccuracy in the simulations are investigated, and it is found that high Reynolds number flows are more sensitive to the choice of the SGS model than their low Reynolds number...
DEFF Research Database (Denmark)
Vrana, Til Kristian; Zeni, Lorenzo; Fosso, Olav Bjarte
A new control method for large meshed HVDC grids has been developed, which helps to maintain the active power balance on both the AC and DC sides. The method definition is kept broad, leaving room for control parameter optimisation. Other known control methods can be seen as specific examples...
Large Scale Simulations of the Euler Equations on GPU Clusters
Liebmann, Manfred
2010-08-01
The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.
Energy Technology Data Exchange (ETDEWEB)
Pointer, William David [ORNL]
2017-08-01
The objective of this effort is to establish a strategy and process for the generation of suitable computational meshes for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate, and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters, such as vector velocity components at a point in the domain, or with surface-averaged quantities, such as outlet velocity magnitude. However, neither method is suitable for characterizing the uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current-generation algorithm for identification of DNB event conditions relies on the identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
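The Grid Convergence Index evaluation referred to above follows the standard Richardson-extrapolation recipe for three mesh levels. The sketch below assumes a constant refinement ratio and the customary safety factor of 1.25; it illustrates the generic method, not the STAR-CCM+ workflow used in the study.

```python
import math

def gci(f_coarse, f_medium, f_fine, r, Fs=1.25):
    """Grid Convergence Index for a quantity f computed on three meshes
    with constant refinement ratio r (fine is the most refined).
    Returns (observed order p, fine-grid GCI in percent)."""
    eps21 = f_medium - f_fine
    eps32 = f_coarse - f_medium
    p = math.log(abs(eps32 / eps21)) / math.log(r)       # observed order
    gci_fine = Fs * abs(eps21 / f_fine) / (r ** p - 1) * 100.0
    return p, gci_fine
```

As a sanity check, data manufactured to converge at exactly second order (f = f_exact + C h²) recovers p = 2 and a GCI consistent with the discretization error.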
Exertional Myopathy in a Juvenile Green Sea Turtle (Chelonia mydas Entangled in a Large Mesh Gillnet
Directory of Open Access Journals (Sweden)
Brianne E. Phillips
2015-01-01
Full Text Available A juvenile female green sea turtle (Chelonia mydas was found entangled in a large mesh gillnet in Pamlico Sound, NC, and was weak upon presentation for treatment. Blood gas analysis revealed severe metabolic acidosis and hyperlactatemia. Plasma biochemistry analysis showed elevated aspartate aminotransferase and creatine kinase, marked hypercalcemia, hyperphosphatemia, and hyperkalemia. Death occurred within 24 hours of presentation despite treatment with intravenous and subcutaneous fluids and sodium bicarbonate. Necropsy revealed multifocal to diffuse pallor of the superficial and deep pectoral muscles. Mild, multifocal, and acute myofiber necrosis was identified by histopathological examination. While histological changes in the examined muscle were modest, the acid-base, mineral, and electrolyte abnormalities were sufficiently severe to contribute to this animal’s mortality. Exertional myopathy in reptiles has not been well characterized. Sea turtle mortality resulting from forced submergence has been attributed to blood gas derangements and seawater aspiration; however, exertional myopathy may also be an important contributing factor. If possible, sea turtles subjected to incidental capture and entanglement that exhibit weakness or dull mentation should be clinically evaluated prior to release to minimize the risk of delayed mortality. Treatment with appropriate fluid therapy and supportive care may mitigate the effects of exertional myopathy in some cases.
3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks
International Nuclear Information System (INIS)
Samtaney, S.; Jardin, S.C.; Colella, P.; Martin, D.F.
2003-01-01
We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises the single-fluid MHD equations, with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwind treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.
Energy Technology Data Exchange (ETDEWEB)
Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp [Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904 0495 (Japan); Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610 (Belgium)]; Chen, W. [Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904 0495 (Japan)]
2016-08-07
Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.
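Reaction-diffusion operator splitting of the kind tested above can be illustrated with a one-dimensional Strang step on a periodic grid: a half reaction step, a full diffusion step, and another half reaction step. The explicit diffusion update and the grid parameters are illustrative simplifications, not the authors' stochastic implementation.

```python
import numpy as np

def strang_step(u, dt, D, dx, react):
    """One Strang splitting step for u_t = D u_xx + R(u) on a periodic
    1D grid: R for dt/2, explicit diffusion for dt, then R for dt/2."""
    u = u + 0.5 * dt * react(u)                          # reaction, dt/2
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * D * lap                                 # diffusion, dt
    u = u + 0.5 * dt * react(u)                          # reaction, dt/2
    return u
```

Two quick checks: with no diffusion, a linear decay reaction is applied as two half steps; with no reaction, the periodic diffusion stencil conserves total mass.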
Yu, Sang-Hui; Oh, Seunghan; Cho, Hye-Won; Bae, Ji-Myung
2017-11-01
Studies that evaluate the strength of complete dentures reinforced with glass-fiber mesh or metal mesh on a cast with a simulated oral mucosa are lacking. The purpose of this in vitro study was to compare the mechanical properties of maxillary complete dentures reinforced with glass-fiber mesh with those of dentures reinforced with metal mesh, in a new test model using a simulated oral mucosa. Complete dentures reinforced with two types of glass-fiber mesh, SES mesh (SES) and glass cloth (GC), and with metal mesh (metal) were fabricated. Complete dentures without any reinforcement were prepared as a control (n=10). The complete dentures were placed on a cast with a simulated oral mucosa, and a load was applied to the posterior artificial teeth bilaterally. The fracture load, elastic modulus, and toughness of each complete denture were measured using a universal testing machine at a crosshead speed of 5 mm/min. The fracture load and elastic modulus were analyzed using 1-way analysis of variance, and the toughness was analyzed with the Kruskal-Wallis test (α=.05). The Tukey multiple range test was used as a post hoc test. The fracture load and toughness of the SES group were significantly higher than those of the metal and control groups (P<.05) but not significantly different from those of the GC group. The elastic modulus of the metal group was significantly higher than that of the control group (P<.05), and no significant differences were observed for the SES and GC groups. Compared with the control group, the fracture load and toughness of the SES and GC groups were higher, while those of the metal group were not significantly different. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Hummels, Cameron B.; Bryan, Greg L.
2012-01-01
We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find gas disk formation in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters, including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy into the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.
Directory of Open Access Journals (Sweden)
Andrej Kastrin
Full Text Available Concept associations can be represented by a network consisting of a set of nodes representing concepts and a set of edges representing their relationships. Complex networks exhibit some common topological features, including a small diameter, a high degree of clustering, a power-law degree distribution, and modularity. We investigated the topological properties of a network constructed from co-occurrences between MeSH descriptors in the MEDLINE database. We conducted the analysis on two networks, one constructed from all MeSH descriptors and another using only major descriptors. Network reduction was performed using Pearson's chi-square test for independence. To characterize the topological properties of the network we adopted specific measures, including the diameter, average path length, clustering coefficient, and degree distribution. For the full MeSH network, the average path length was 1.95, with a diameter of three edges and a clustering coefficient of 0.26. The Kolmogorov-Smirnov test rejects the power law as a plausible model for the degree distribution. For the major MeSH network, the average path length was 2.63 edges, with a diameter of seven edges and a clustering coefficient of 0.15. The Kolmogorov-Smirnov test failed to reject the power law as a plausible model; the power-law exponent was 5.07. In both networks it was evident that nodes with a lower degree exhibit higher clustering than those with a higher degree. After a simulated attack, in which we removed 10% of the nodes with the highest degrees, the giant component of each of the two networks retained about 90% of all nodes. Because of its small average path length and high degree of clustering, the MeSH network is small-world. A power-law distribution is not a plausible model for the degree distribution. The network is highly modular and highly resistant to targeted and random attack, with minimal disassortativity.
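The topological measures used above, the local clustering coefficient and the average shortest-path length, are straightforward to compute on a small graph. The self-contained sketch below works on an adjacency-set representation and is for illustration only; the study's networks are far larger and were presumably analyzed with dedicated tools.

```python
from collections import deque

def clustering(adj, v):
    """Local clustering coefficient of node v: the fraction of
    neighbour pairs that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def avg_path_length(adj):
    """Mean shortest-path length over all reachable ordered node
    pairs, computed by breadth-first search from every node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs
```

On a triangle every node has clustering 1 and the mean path length is 1; on a three-node path the middle node has clustering 0 and the mean path length is 4/3.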
International Nuclear Information System (INIS)
Andreani, Michele; Paladino, Domenico
2010-01-01
The recently concluded OECD SETH project included twenty-four experiments on basic flows and on gas transport and mixing driven by jets and plumes in two large, connected vessels of the PANDA facility. The experiments featured the injection of saturated or superheated steam, or a mixture of steam and helium, into one vessel, with venting from the same vessel or from the connected one. These tests were especially designed to provide an extensive database for the assessment of three-dimensional codes, including CFD codes. In particular, one of the goals of the analytical activities associated with the experiments was to evaluate the level of model detail (mesh) necessary for capturing the various phenomena. This work reports an overview of the results obtained for these experimental data using the advanced containment code GOTHIC and relatively coarse meshes, which are coarser than those typically used in simulations with commercial CFD codes but are still representative of the models currently affordable for a full containment analysis. In general, the phenomena were correctly represented in the simulations with GOTHIC, and the agreement of the results with the data was in most cases quite good, in some cases excellent. Only for a few tests (or for particular phenomena occurring in some tests) did the simulations show noticeable discrepancies with the experimental data, which could be attributed either to an insufficiently detailed mesh or to the lack of specialized models for local effects.
Large-scale numerical simulations of plasmas
International Nuclear Information System (INIS)
Hamaguchi, Satoshi
2004-01-01
The recent trend towards large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to the analysis of processing plasmas. (author)
Large Eddy Simulation (LES) for IC Engine Flows
Directory of Open Access Journals (Sweden)
Kuo Tang-Wei
2013-10-01
Full Text Available Numerical computations are carried out using an engineering-level Large Eddy Simulation (LES) model provided by the commercial CFD code CONVERGE. The analytical framework and experimental setup consist of a single-cylinder engine with a Transparent Combustion Chamber (TCC) under motored conditions. A rigorous working procedure for comparing and analyzing the results from the simulations and high-speed Particle Image Velocimetry (PIV) experiments is documented in this work. The following aspects of LES are analyzed using this procedure: the number of cycles required for convergence with adequate accuracy; the effects of mesh size, time step, sub-grid-scale (SGS) turbulence models, and boundary condition treatments; and the application of the proper orthogonal decomposition (POD) technique.
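The proper orthogonal decomposition mentioned above is commonly computed via a thin SVD of the mean-subtracted snapshot matrix. The following is a generic sketch of that snapshot-POD step, not the CONVERGE/PIV post-processing pipeline itself; the matrix layout (columns as snapshots) is an assumption.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD by thin SVD. `snapshots` has one flow snapshot per
    column; the ensemble mean is removed before decomposition. Returns
    the leading spatial modes and the fraction of fluctuation energy
    captured by each mode."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)        # normalized modal energies
    return U[:, :n_modes], energy[:n_modes]
```

The returned modes are orthonormal, and the energies are non-increasing, which makes truncation to the dominant coherent structures straightforward.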
Modeling and simulation of large HVDC systems
Energy Technology Data Exchange (ETDEWEB)
Jin, H.; Sood, V.K.
1993-01-01
This paper addresses the complexity of and the amount of work involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time involved in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two DC links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation times and results are provided in the paper.
International Nuclear Information System (INIS)
Fonseca, Gabriel Paiva; Yoriyaz, Hélio; Landry, Guillaume; White, Shane; Reniers, Brigitte; Verhaegen, Frank; D’Amours, Michel; Beaulieu, Luc
2014-01-01
Accounting for brachytherapy applicator attenuation is part of the recommendations of the recent report of AAPM Task Group 186. To do so, model-based dose calculation algorithms require accurate modelling of the applicator geometry. This can be non-trivial in the case of irregularly shaped applicators, such as the Fletcher Williamson gynaecological applicator, or balloon applicators with possibly irregular shapes employed in accelerated partial breast irradiation (APBI) performed using electronic brachytherapy sources (EBS). While many of these applicators can be modelled using constructive solid geometry (CSG), this may be difficult and time-consuming. Alternatively, these complex geometries can be modelled using tessellated geometries such as tetrahedral meshes (mesh geometries, MG). Recent versions of the Monte Carlo (MC) codes Geant4 and MCNP6 allow for the use of MG. The goal of this work was to model a series of applicators relevant to brachytherapy using MG. Applicators designed for 192Ir sources and 50 kV EBS were studied: a shielded vaginal applicator, a shielded Fletcher Williamson applicator and an APBI balloon applicator. All applicators were modelled in Geant4 and MCNP6 using MG and CSG for dose calculations. CSG-derived dose distributions were considered as reference and used to validate the MG models by comparing dose distribution ratios. In general, agreement within 1% was observed for all applicators between MG and CSG and between codes when considering volumes inside the 25% isodose surface. When compared to CSG, MG required longer computation times, by a factor of at least 2, for MC simulations using the same code. MCNP6 calculation times were more than ten times shorter than those of Geant4 in some cases. In conclusion, we presented methods allowing for high-fidelity modelling with results equivalent to CSG. To the best of our knowledge, MG offers the most accurate representation of an irregular APBI balloon applicator. (paper)
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and the physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection-dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density-dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
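The equidistribution principle behind moving mesh methods can be illustrated in one dimension with a de Boor-style iteration: interior nodes are relocated so that every cell carries the same integral of the monitor function. This is a generic sketch, not the paper's moving mesh PDE solver; the Gaussian monitor used in the test is an arbitrary example standing in for an error-based monitor.

```python
import numpy as np

def equidistribute(x, monitor, iters=50):
    """Relocate the interior nodes of a 1D mesh x (sorted, endpoints
    fixed) so each cell carries an equal share of the monitor-function
    'mass', via repeated inversion of the cumulative mass map."""
    for _ in range(iters):
        m = monitor(0.5 * (x[:-1] + x[1:]))          # monitor at midpoints
        w = m * np.diff(x)                           # mass of each cell
        c = np.concatenate([[0.0], np.cumsum(w)])    # cumulative mass
        targets = np.linspace(0.0, c[-1], len(x))    # equal-mass levels
        x = np.interp(targets, c, x)                 # invert mass map
    return x
```

With a monitor peaked at the middle of the domain, the iteration clusters nodes around the peak while keeping the mesh monotone and the endpoints fixed, mimicking how a moving mesh concentrates resolution at a sharp front.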
International Nuclear Information System (INIS)
Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.
2012-01-01
We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s⁻¹ and 200 km, respectively), and we discuss the potential consequences for the first flames.
Direct numerical simulation of bubbles with adaptive mesh refinement with distributed algorithms
International Nuclear Information System (INIS)
Talpaert, Arthur
2017-01-01
This PhD work presents the implementation of the simulation of two-phase flows under the conditions of water-cooled nuclear reactors, at the scale of individual bubbles. To achieve that, we study several models for thermal-hydraulic flows and focus on a technique for the capture of the thin interface between liquid and vapour phases. We thus review some possible techniques for adaptive mesh refinement (AMR) and provide algorithmic and computational tools adapted to patch-based AMR, whose aim is to locally improve the precision in regions of interest. More precisely, we introduce a patch-covering algorithm designed with balanced parallel computing in mind. This approach lets us finely capture changes located at the interface, as we show for advection test cases as well as for models with hyperbolic-elliptic coupling. The computations we present also include the simulation of the incompressible Navier-Stokes system, which models the shape changes of the interface between two non-miscible fluids. (author)
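A patch-covering step of the kind mentioned above can be sketched as follows. This is an illustrative, simplified clustering in the spirit of Berger-Rigoutsos-style AMR gridding, not the thesis's algorithm; the fill-ratio threshold and the circular "interface" test pattern are assumptions.

```python
import numpy as np

def cover_patches(flags, min_fill=0.7):
    """Greedy patch covering: recursively split the bounding box of
    flagged cells until each patch is sufficiently filled (a simplified
    variant of the clustering used in patch-based AMR)."""
    idx = np.argwhere(flags)
    if idx.size == 0:
        return []
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    box = tuple(slice(a, b) for a, b in zip(lo, hi))
    if flags[box].mean() >= min_fill or np.all(hi - lo <= 2):
        return [(tuple(lo), tuple(hi))]
    d = int(np.argmax(hi - lo))                 # split the longest dimension
    mid = (lo[d] + hi[d]) // 2
    left, right = flags.copy(), flags.copy()
    sl = [slice(None)] * flags.ndim
    sl[d] = slice(mid, None); left[tuple(sl)] = False
    sl[d] = slice(None, mid); right[tuple(sl)] = False
    return cover_patches(left, min_fill) + cover_patches(right, min_fill)

# Flag cells near a circular "interface", as for a bubble surface.
n = 64
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2)
flags = np.abs(r - n / 4) < 1.5
patches = cover_patches(flags)
# Every flagged cell lies inside some patch; patches concentrate on the ring.
```

Balancing then amounts to distributing the resulting patches across processes, which is where the "balanced parallel computing" concern of the thesis enters.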
AbuAlSaud, Moataz
2012-07-01
The purpose of this thesis is to solve the unsteady two-dimensional compressible Navier-Stokes equations on a moving mesh using an implicit-explicit (IMEX) Runge-Kutta scheme. The moving mesh is incorporated into the equations using the Arbitrary Lagrangian-Eulerian (ALE) formulation. The inviscid part of the equations is solved explicitly using a second-order Godunov method, whereas the viscous part is treated implicitly. We simulate subsonic compressible flow over a static NACA-0012 airfoil at different angles of attack. Finally, the moving mesh is examined by oscillating the airfoil harmonically between angles of attack of 0° and 20°. It is observed that the numerical solution matches the experimental and numerical results in the literature to within 20%.
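The explicit/implicit splitting described above (explicit treatment of the inviscid terms, implicit treatment of the viscous terms) can be illustrated on a 1D advection-diffusion model problem. This Python sketch uses first-order upwinding and backward Euler rather than the thesis's Godunov/IMEX Runge-Kutta pair, and all parameters are assumed; the point is that the time step is limited only by the advective CFL condition, not by the much stricter diffusive one.

```python
import numpy as np

def imex_step(u, dt, dx, a, nu):
    """One IMEX step for u_t + a u_x = nu u_xx on a periodic grid:
    the advective (inviscid) part is advanced explicitly with upwinding,
    the diffusive (viscous) part implicitly with backward Euler."""
    n = u.size
    # explicit upwind advection (a > 0 assumed)
    u_star = u - a * dt / dx * (u - np.roll(u, 1))
    # implicit diffusion: (I - dt*nu*L) u_new = u_star, L = periodic Laplacian
    r = nu * dt / dx**2
    A = ((1 + 2 * r) * np.eye(n)
         - r * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)))
    return np.linalg.solve(A, u_star)

n, a, nu = 128, 1.0, 0.05
dx = 1.0 / n
dt = 0.5 * dx / a            # CFL-limited by advection only
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)
for _ in range(100):
    u = imex_step(u, dt, dx, a, nu)
# The solution stays bounded even though dt far exceeds the explicit
# diffusive stability limit dx**2 / (2 * nu).
```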
Distributed simulation of large computer systems
International Nuclear Information System (INIS)
Marzolla, M.
2001-01-01
Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been developed since the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator.
Numerical techniques for large cosmological N-body simulations
International Nuclear Information System (INIS)
Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.
1985-01-01
We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle-mesh (PM) codes and P³M codes, in which a higher-resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r² particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P³M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important.
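The PM half of a P³M scheme — mass deposition onto a mesh followed by a spectral Poisson solve — can be sketched in 1D. This is an illustrative sketch with assumed unit conventions (unit-length periodic box, unit particle masses), using cloud-in-cell (CIC) weighting:

```python
import numpy as np

def cic_deposit(positions, n_mesh):
    """Cloud-in-cell: spread each unit-mass particle over its two
    nearest mesh points, weighted by overlap (1D, periodic box [0,1))."""
    rho = np.zeros(n_mesh)
    cell = np.floor(positions * n_mesh).astype(int)
    frac = positions * n_mesh - cell
    np.add.at(rho, cell % n_mesh, 1.0 - frac)
    np.add.at(rho, (cell + 1) % n_mesh, frac)
    return rho

def mesh_potential(rho):
    """Solve the periodic Poisson equation phi'' = rho - mean(rho)
    spectrally: the core of a particle-mesh force calculation."""
    n = rho.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
    rho_k = np.fft.fft(rho - rho.mean())
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = -rho_k[1:] / k[1:]**2       # k = 0 mode fixed to zero
    return np.real(np.fft.ifft(phi_k))

pos = np.array([0.3, 0.31, 0.7])
rho = cic_deposit(pos, 64)
phi = mesh_potential(rho)
# Deposition conserves mass; phi satisfies the spectral Poisson equation.
```

The mesh-induced force anisotropies discussed in the abstract arise precisely because this deposition/differencing chain only approximates the 1/r² pairwise law, which is what the short-range direct sum in P³M corrects.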
Large-scale numerical simulations of star formation put to the test
DEFF Research Database (Denmark)
Frimann, Søren; Jørgensen, Jes Kristian; Haugbølle, Troels
2016-01-01
(SEDs), calculated from large-scale numerical simulations, to observational studies, thereby aiding in both the interpretation of the observations and in testing the fidelity of the simulations. Methods: The adaptive mesh refinement code, RAMSES, is used to simulate the evolution of a 5 pc × 5 pc × 5 pc...... to calculate evolutionary tracers Tbol and Lsmm/Lbol. It is shown that, while the observed distributions of the tracers are well matched by the simulation, they generally do a poor job of tracking the protostellar ages. Disks form early in the simulation, with 40% of the Class 0 protostars being encircled by one...
De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico
2012-02-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(−k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
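The Sedov length referred to above is the radius at which the swept-up rest-mass energy becomes comparable to the blast energy. A hedged numerical estimate, using the standard-form expression for a medium ρ = A r^(−k) (exact prefactors vary between authors, so this is an order-of-magnitude sketch):

```python
import numpy as np

def sedov_length(E, A, k):
    """Sedov length for a blast wave of energy E (erg) in a stratified
    medium rho = A * r**(-k) (cgs): the radius where the swept-up
    rest-mass energy ~ E. Standard estimate; prefactors vary by author."""
    c = 2.998e10                                  # speed of light, cm/s
    return ((3.0 - k) * E / (4.0 * np.pi * A * c**2))**(1.0 / (3.0 - k))

m_p = 1.673e-24                                   # proton mass, g
l0 = sedov_length(1e51, 1.0 * m_p, k=0.0)         # uniform medium, n = 1 cm^-3
# l0 is of order 10^17-10^18 cm; the simulations above find that the
# deceleration to nonrelativistic speeds occurs on scales well beyond this.
```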
International Nuclear Information System (INIS)
De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; López-Cámara, Diego
2012-01-01
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(−k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
Energy Technology Data Exchange (ETDEWEB)
De Colle, Fabio; Ramirez-Ruiz, Enrico [Astronomy and Astrophysics Department, University of California, Santa Cruz, CA 95064 (United States); Granot, Jonathan [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Lopez-Camara, Diego [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Ap. 70-543, 04510 D.F. (Mexico)
2012-02-20
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(−k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the
Towards Large Eddy Simulation of gas turbine compressors
McMullan, W. A.; Page, G. J.
2012-07-01
With increasing computing power, Large Eddy Simulation could be a useful simulation tool for gas turbine axial compressor design. This paper outlines a series of simulations performed on compressor geometries, ranging from a Controlled Diffusion Cascade stator blade to the periodic sector of a stage in a 3.5 stage axial compressor. The simulation results show that LES may offer advantages over traditional RANS methods when off-design conditions are considered - flow regimes where RANS models often fail to converge. The time-dependent nature of LES permits the resolution of transient flow structures, and can elucidate new mechanisms of vorticity generation on blade surfaces. It is shown that accurate LES is heavily reliant on both the near-wall mesh fidelity and the ability of the imposed inflow condition to recreate the conditions found in the reference experiment. For components embedded in a compressor this requires the generation of turbulence fluctuations at the inlet plane. A recycling method is developed that improves the quality of the flow in a single stage calculation of an axial compressor, and indicates that future developments in both the recycling technique and computing power will bring simulations of axial compressors within reach of industry in the coming years.
Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks
Directory of Open Access Journals (Sweden)
Raja Jurdak
2008-11-01
Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.
Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.
Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio
2008-11-24
Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.
Generating wind fluctuations for Large Eddy Simulation inflow boundary condition
International Nuclear Information System (INIS)
Bekele, S.A.; Hangan, H.
2004-01-01
Large Eddy Simulation (LES) studies of flows over bluff bodies immersed in a boundary layer wind environment require instantaneous wind characteristics. The influence of the wind environment on the building pressure distribution is a well-established fact in the experimental study of wind engineering. Measured wind data, at full or model scale, are available only at a limited number of points. A method of obtaining instantaneous wind data at all mesh points of the inlet boundary for LES computation is therefore necessary. Herein, previous and new wind inflow generation techniques are presented. The generated wind data are then applied to an LES computation of a channel flow. The characteristics of the generated wind fluctuations in comparison to the measured data, and the properties of the flow fields computed from these two wind data sets, are discussed. (author)
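One common family of inflow generators builds fluctuations as random-phase Fourier modes with amplitudes set by a target energy spectrum. The sketch below is a generic spectral generator of this kind, not the authors' specific technique; the model spectrum, frequency range, and parameters are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_fluctuations(n_steps, dt, spectrum, n_modes=200):
    """Generate a fluctuating velocity signal as a sum of random-phase
    Fourier modes whose amplitudes follow a prescribed energy spectrum
    (the spectral-method family of LES inflow generators)."""
    f = np.linspace(0.1, 0.5 / dt, n_modes)          # mode frequencies
    df = f[1] - f[0]
    amp = np.sqrt(2.0 * spectrum(f) * df)            # mode amplitudes
    phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    t = np.arange(n_steps) * dt
    return (amp[None, :] * np.cos(2.0 * np.pi * f[None, :] * t[:, None]
                                  + phase[None, :])).sum(axis=1)

# Illustrative model spectrum with an inertial-range-like f^(-5/3) roll-off.
spec = lambda f: 1.0 / (1.0 + f**2)**(5.0 / 6.0)
u = synthetic_fluctuations(4096, 0.01, spec)
# The signal has near-zero mean and a variance set by the spectrum integral.
```

In practice such a signal would be generated at every inlet mesh point (with spatial correlation imposed between points), which is the step the wind-engineering application above requires.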
Cortical imaging on a head template: a simulation study using a resistor mesh model (RMM).
Chauveau, Nicolas; Franceries, Xavier; Aubry, Florent; Celsis, Pierre; Rigaud, Bernard
2008-09-01
The T1 head template model used in Statistical Parametric Mapping Version 2000 (SPM2), was segmented into five layers (scalp, skull, CSF, grey and white matter) and implemented in 2 mm voxels. We designed a resistor mesh model (RMM), based on the finite volume method (FVM) to simulate the electrical properties of this head model along the three axes for each voxel. Then, we introduced four dipoles of high eccentricity (about 0.8) in this RMM, separately and simultaneously, to compute the potentials for two sets of conductivities. We used the direct cortical imaging technique (CIT) to recover the simulated dipoles, using 60 or 107 electrodes and with or without addition of Gaussian white noise (GWN). The use of realistic conductivities gave better CIT results than standard conductivities, lowering the blurring effect on scalp potentials and displaying more accurate position areas when CIT was applied to single dipoles. Simultaneous dipoles were less accurately localized, but good qualitative and stable quantitative results were obtained up to 5% noise level for 107 electrodes and up to 10% noise level for 60 electrodes, showing that a compromise must be found to optimize both the number of electrodes and the noise level. With the RMM defined in 2 mm voxels, the standard 128-electrode cap and 5% noise appears to be the upper limit providing reliable source positions when direct CIT is used. The admittance matrix defining the RMM is easy to modify so as to adapt to different conductivities. The next step will be the adaptation of individual real head T2 images to the RMM template and the introduction of anisotropy using diffusion imaging (DI).
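A resistor mesh driven by a current dipole reduces to a linear system on the network's conductance (Laplacian) matrix, which is why the admittance matrix of the RMM is easy to modify for different conductivities. The small 2D sketch below is illustrative only; the RMM of the paper is 3D, five-layered, and voxel-wise anisotropic, whereas here a uniform square grid and unit conductances are assumed.

```python
import numpy as np

def grid_laplacian(nx, ny, g=1.0):
    """Conductance (Laplacian) matrix of an nx-by-ny resistor mesh
    with uniform conductance g between 4-connected neighbors."""
    n = nx * ny
    L = np.zeros((n, n))
    for j in range(ny):
        for i in range(nx):
            p = j * nx + i
            for qi, qj in ((i + 1, j), (i, j + 1)):
                if qi < nx and qj < ny:
                    r = qj * nx + qi
                    L[p, p] += g; L[r, r] += g
                    L[p, r] -= g; L[r, p] -= g
    return L

# Dipole-like source: +1 A in, -1 A out, mimicking a current dipole in a RMM.
nx = ny = 11
L = grid_laplacian(nx, ny)
I = np.zeros(nx * ny)
I[5 * nx + 4], I[5 * nx + 6] = 1.0, -1.0
# The Laplacian is singular (floating potential); pin node 0 to 0 V.
L[0, :] = 0.0; L[0, 0] = 1.0; I[0] = 0.0
V = np.linalg.solve(L, I)
# Potentials are positive near the source and negative near the sink,
# the discrete analogue of the dipolar scalp potentials simulated above.
```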
Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow
Sam Ali Al; Szasz Robert; Revstedt Johan
2015-01-01
The performance of subgrid-scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained by using large-eddy simulations at different mesh resolutions are compared with direct numerical simulation data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly-resolved Large Eddy Simu...
Energy Technology Data Exchange (ETDEWEB)
Lopez-Camara, D.; Lazzati, Davide [Department of Physics, NC State University, 2401 Stinson Drive, Raleigh, NC 27695-8202 (United States); Morsony, Brian J. [Department of Astronomy, University of Wisconsin-Madison, 2535 Sterling Hall, 475 N. Charter Street, Madison, WI 53706-1582 (United States); Begelman, Mitchell C., E-mail: dlopezc@ncsu.edu [JILA, University of Colorado, 440 UCB, Boulder, CO 80309-0440 (United States)
2013-04-10
We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.
DEFF Research Database (Denmark)
2015-01-01
Mesh generation and visualization software based on the CGAL library. Folder content: drawmesh Visualize slices of the mesh (surface/volumetric) as wireframe on top of an image (3D). drawsurf Visualize surfaces of the mesh (surface/volumetric). img2mesh Convert isosurface in image to volumetric m...... mesh (medit format). img2off Convert isosurface in image to surface mesh (off format). off2mesh Convert surface mesh (off format) to volumetric mesh (medit format). reduce Crop and resize 3D and stacks of images. data Example data to test the library on...
Large Eddy Simulation of High-Speed, Premixed Ethylene Combustion
Ramesh, Kiran; Edwards, Jack R.; Chelliah, Harsha; Goyne, Christopher; McDaniel, James; Rockwell, Robert; Kirik, Justin; Cutler, Andrew; Danehy, Paul
2015-01-01
A large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) methodology is used to simulate premixed ethylene-air combustion in a model scramjet designed for dual mode operation and equipped with a cavity for flameholding. A 22-species reduced mechanism for ethylene-air combustion is employed, and the calculations are performed on a mesh containing 93 million cells. Fuel plumes injected at the isolator entrance are processed by the isolator shock train, yielding a premixed fuel-air mixture at an equivalence ratio of 0.42 at the cavity entrance plane. A premixed flame is anchored within the cavity and propagates toward the opposite wall. Near complete combustion of ethylene is obtained. The combustor is highly dynamic, exhibiting a large-scale oscillation in global heat release and mass flow rate with a period of about 2.8 ms. Maximum heat release occurs when the flame front reaches its most downstream extent, as the flame surface area is larger. Minimum heat release is associated with flame propagation toward the cavity and occurs through a reduction in core flow velocity that is correlated with an upstream movement of the shock train. Reasonable agreement between simulation results and available wall pressure, particle image velocimetry, and OH-PLIF data is obtained, but it is not yet clear whether the system-level oscillations seen in the calculations are actually present in the experiment.
Learning from large scale neural simulations
DEFF Research Database (Denmark)
Serban, Maria
2017-01-01
Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...
Large Eddy Simulations using oodlesDST
2016-01-01
Research Agency DST-Group-TR-3205 ABSTRACT The oodlesDST code is based on OpenFOAM software and performs Large Eddy Simulations of......maritime platforms using a variety of simulation techniques. He is currently using OpenFOAM software to perform both Reynolds Averaged Navier-Stokes
Energy Technology Data Exchange (ETDEWEB)
Sun, Yuzhou, E-mail: yuzhousun@126.com; Chen, Gensheng; Li, Dongxia [School of Civil Engineering and Architecture, Zhongyuan University of Technology, Zhengzhou (China)
2016-06-08
This paper attempts to study the application of the mesh-free method to the numerical simulation of higher-order continuum structures. A high-order bending beam accounts for the effect of the third-order derivative of the deflection, and can be viewed as a one-dimensional higher-order continuum structure. The moving least-squares method is used to construct a shape function with the high-order continuum property; the curvature and the third-order derivative of the deflection are directly interpolated from the nodal variables and the second- and third-order derivatives of the shape function, and a mesh-free computational scheme is established for beams. The couple stress theory is introduced to describe the special constitutive response of the layered rock mass, in which the bending effect of thin layers is considered. The strain and the curvature are directly interpolated from the nodal variables, and the mesh-free method is established for the layered rock mass. Good computational efficiency is achieved with the developed mesh-free method, and some key issues are discussed.
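The moving least-squares construction mentioned above can be sketched in 1D with a quadratic basis; the Gaussian weight and support size here are assumptions for illustration. A quadratic basis reproduces quadratic fields exactly, which is the consistency property a scheme interpolating curvatures relies on.

```python
import numpy as np

def mls_approx(x_eval, x_nodes, u_nodes, support=0.3):
    """1D moving least-squares approximation with a quadratic basis and a
    Gaussian weight; smooth shape functions of this kind give mesh-free
    methods the continuity needed for higher-order derivatives."""
    p = lambda x: np.array([1.0, x, x**2])          # polynomial basis
    w = np.exp(-((x_nodes - x_eval) / support)**2)  # weights centered at x_eval
    P = np.vstack([p(x) for x in x_nodes])          # basis evaluated at nodes
    A = P.T @ (w[:, None] * P)                      # weighted moment matrix
    b = P.T @ (w * u_nodes)
    return p(x_eval) @ np.linalg.solve(A, b)

x_nodes = np.linspace(0.0, 1.0, 11)
u_nodes = x_nodes**2                                # a quadratic field
val = mls_approx(0.37, x_nodes, u_nodes)
# Quadratic consistency: the MLS fit reproduces x**2 exactly at any point.
```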
Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J
2018-04-01
Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art in model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh-density sensitivity analyses. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving accuracy in anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical
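The thin-plate spline interpolation step of the template adaptation can be sketched as follows. The landmark coordinates below are made up for illustration, and the real pipeline warps 3D anatomy rather than this 2D toy; the key property shown is that a TPS interpolates, i.e., each source landmark maps exactly onto its target.

```python
import numpy as np

def _tps_kernel(d):
    """Thin-plate spline radial kernel U(r) = r^2 log r (0 at r = 0)."""
    K = np.zeros_like(d)
    m = d > 0
    K[m] = d[m]**2 * np.log(d[m])
    return K

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping landmark points src -> dst."""
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))               # [[K, P], [P.T, 0]] system
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, rhs)

def tps_apply(coef, src, pts):
    """Evaluate the fitted spline mapping at new points."""
    K = _tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return np.hstack([K, P]) @ coef

src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
dst = src + np.array([[.05, 0.], [0., .05], [0., 0.], [-.05, 0.], [0., -.05]])
coef = tps_fit(src, dst)
mapped = tps_apply(coef, src, src)    # landmarks map exactly onto targets
```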
Energy Technology Data Exchange (ETDEWEB)
Majander, E.O.J.; Manninen, M.T. [VTT Energy, Espoo (Finland)
1996-12-31
The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine and the geometry of the reactor including baffles was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled by using the standard k-{epsilon} model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. Commercial CFD-code FLUENT was used in all simulations. (author)
Energy Technology Data Exchange (ETDEWEB)
Majander, E O.J.; Manninen, M T [VTT Energy, Espoo (Finland)
1997-12-31
The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine and the geometry of the reactor including baffles was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled by using the standard k-{epsilon} model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. Commercial CFD-code FLUENT was used in all simulations. (author)
International Nuclear Information System (INIS)
Vay, J.-L.; Colella, P.; McCorquodale, P.; Van Straalen, B.; Friedman, A.; Grote, D.P.
2002-01-01
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and simulation of the power plant as a whole, or even of the driver, is not yet possible. Despite the rapid progress in computer power, past and anticipated, one must consider the use of the most advanced numerical techniques, if we are to reach our goal expeditiously. One of the difficulties of these simulations resides in the disparity of scales, in time and in space, which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g., fluid dynamics simulations) is the mesh refinement technique. The authors discuss the challenges posed by the implementation of this technique in plasma simulations (due to the presence of particles and electromagnetic waves), and present the prospects for, and projected benefits of, its application to heavy ion fusion, in particular to the simulation of the ion source and of the final beam propagation in the chamber. A collaboration project is under way at LBNL between the Applied Numerical Algorithms Group (ANAG) and the HIF group to couple the Adaptive Mesh Refinement (AMR) library CHOMBO, developed by the ANAG group, to the Particle-In-Cell accelerator code WARP, developed by the HIF-VNL. They describe their progress and present their initial findings.
Large eddy simulation of bundle turbulent flows
International Nuclear Information System (INIS)
Hassan, Y.A.; Barsamian, H.R.
1995-01-01
Large eddy simulation may be defined as simulation of a turbulent flow in which the large scale motions are explicitly resolved while the small scale motions are modeled. This results in a system of equations that requires closure models. The closure models relate the effects of the small scale motions to the large scale motions. Several models have been developed, the most popular being the Smagorinsky eddy viscosity model. A new model, which modifies the Smagorinsky model, has recently been introduced by Lee. Using both of the above-mentioned closure models, two different geometric arrangements were used in the simulation of turbulent cross flow within rigid tube bundles. An in-line array simulation was performed for a deep bundle (10,816 nodes) as well as an inlet/outlet simulation (57,600 nodes). Comparisons were made to available experimental data. Flow visualization enabled the distinction of different characteristics within the flow, such as jet switching effects in the wake of the bundle flow for the inlet/outlet simulation case, as well as within tube bundles. The results indicate that the large eddy simulation technique is capable of turbulence prediction and may be used as a viable engineering tool with careful consideration of the subgrid scale model. (author)
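The Smagorinsky closure referenced above models the unresolved stresses with an eddy viscosity ν_t = (C_s Δ)² |S̄|, where |S̄| is the magnitude of the resolved strain-rate tensor. A minimal sketch on a periodic 2D grid follows; the constant C_s = 0.17 and the central-difference stencil are illustrative assumptions, not values taken from the abstract.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Eddy viscosity nu_t = (cs*dx)^2 * |S| on a periodic 2D grid,
    with strain rates from second-order central differences."""
    dudx = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)
    dudy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)
    dvdx = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2 * dx)
    dvdy = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag
```

For a simple shear u = y the interior eddy viscosity reduces to (C_s Δ)², which is a convenient sanity check for any implementation.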
Cardall, Christian Y.; Budiardja, Reuben D.
2018-01-01
The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
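The finite-volume idea described here — update each cell by the net flux through its faces — can be sketched in a few lines. The snippet below is a generic illustration (first-order upwind advection on a periodic 1D mesh, written in Python for brevity), not code from GENASIS:

```python
import numpy as np

def fv_advect_step(q, vel, dx, dt):
    """One finite-volume step for dq/dt + d(vel*q)/dx = 0 on a periodic
    1D mesh, assuming vel > 0 (first-order upwind face flux).
    The update is the discrete divergence of the face fluxes, so the
    cell-integrated total of q is conserved exactly."""
    face_flux = vel * q                            # flux through each cell's right face
    return q - dt / dx * (face_flux - np.roll(face_flux, 1))
```

Because every face flux enters one cell with a plus sign and its neighbor with a minus sign, `q.sum()` is preserved to round-off — the discrete analogue of the conservation law.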
A high-resolution code for large eddy simulation of incompressible turbulent boundary layer flows
Cheng, Wan
2014-03-01
We describe a framework for large eddy simulation (LES) of incompressible turbulent boundary layers over a flat plate. This framework uses a fractional-step method with fourth-order finite difference on a staggered mesh. We present several laminar examples to establish the fourth-order accuracy and energy conservation property of the code. Furthermore, we implement a recycling method to generate turbulent inflow. We use the stretched spiral vortex subgrid-scale model and virtual wall model to simulate the turbulent boundary layer flow. We find that the case with Reθ ≈ 2.5 × 10⁵ agrees well with available experimental measurements of wall friction, streamwise velocity profiles and turbulent intensities. We demonstrate that for cases with extremely large Reynolds numbers (Reθ = 10¹²), the present LES can reasonably predict the flow with a coarse mesh. The parallel implementation of the LES code demonstrates reasonable scaling on O(10³) cores. © 2013 Elsevier Ltd.
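The fourth-order finite differences this code relies on can be illustrated with the standard five-point first-derivative stencil. The sketch below is generic (a collocated periodic grid, not the authors' staggered-mesh implementation) and verifies the order of accuracy:

```python
import numpy as np

def ddx4(f, dx):
    """Fourth-order central first derivative on a periodic grid:
    f'_i = (-f_{i+2} + 8 f_{i+1} - 8 f_{i-1} + f_{i-2}) / (12 dx)."""
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * dx)

def max_error(n):
    """Max derivative error for sin(x) on an n-point periodic grid."""
    x = 2 * np.pi * np.arange(n) / n
    return np.abs(ddx4(np.sin(x), x[1]) - np.cos(x)).max()
```

Halving the grid spacing should reduce the error by about 2⁴ = 16, which is the property the laminar examples in the paper are used to establish.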
Predicting mesh density for adaptive modelling of the global atmosphere.
Weller, Hilary
2009-11-28
The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
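A gradient-of-vorticity adaptation criterion of the kind described above can be sketched as an indicator field that flags the top fraction of cells for refinement. This is a generic illustration; the quantile threshold and the 20 per cent fraction are assumptions for the example, not the paper's criterion:

```python
import numpy as np

def refine_flags(vorticity, dx, frac=0.2):
    """Flag for refinement the cells whose vorticity-gradient magnitude
    lies in the top `frac` fraction of the mesh."""
    g0, g1 = np.gradient(vorticity, dx)
    indicator = np.hypot(g0, g1)
    threshold = np.quantile(indicator, 1.0 - frac)
    return indicator >= threshold
```

Applied to a localized vortical feature, the flags concentrate around the feature while quiescent regions keep the coarse mesh.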
Pelties, Christian
2012-02-18
Accurate and efficient numerical methods to simulate dynamic earthquake rupture and wave propagation in complex media and complex fault geometries are needed to address fundamental questions in earthquake dynamics, to integrate seismic and geodetic data into emerging approaches for dynamic source inversion, and to generate realistic physics-based earthquake scenarios for hazard assessment. Modeling of spontaneous earthquake rupture and seismic wave propagation by a high-order discontinuous Galerkin (DG) method combined with an arbitrarily high-order derivatives (ADER) time integration method was introduced in two dimensions by de la Puente et al. (2009). The ADER-DG method enables high accuracy in space and time and discretization by unstructured meshes. Here we extend this method to three-dimensional dynamic rupture problems. The high geometrical flexibility provided by the usage of tetrahedral elements and the lack of spurious mesh reflections in the ADER-DG method allows the refinement of the mesh close to the fault to model the rupture dynamics adequately while concentrating computational resources only where needed. Moreover, ADER-DG does not generate spurious high-frequency perturbations on the fault and hence does not require artificial Kelvin-Voigt damping. We verify our three-dimensional implementation by comparing results of the SCEC TPV3 test problem with two well-established numerical methods, finite differences, and spectral boundary integral. Furthermore, a convergence study is presented to demonstrate the systematic consistency of the method. To illustrate the capabilities of the high-order accurate ADER-DG scheme on unstructured meshes, we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes curved faults, fault branches, and surface topography. Copyright 2012 by the American Geophysical Union.
Numerical simulation of large deformation polycrystalline plasticity
International Nuclear Information System (INIS)
Inal, K.; Neale, K.W.; Wu, P.D.; MacEwen, S.R.
2000-01-01
A finite element model based on crystal plasticity has been developed to simulate the stress-strain response of sheet metal specimens in uniaxial tension. Each material point in the sheet is considered to be a polycrystalline aggregate of FCC grains. The Taylor theory of crystal plasticity is assumed. The numerical analysis incorporates parallel computing features, enabling simulations of realistic models with a large number of grains. Simulations have been carried out for the AA3004-H19 aluminium alloy and the results are compared with experimental data. (author)
Directory of Open Access Journals (Sweden)
Essadki Mohamed
2016-09-01
Full Text Available Predictive simulation of liquid fuel injection in automotive engines has become a major challenge for science and applications. The key issue in order to properly predict various combustion regimes and pollutant formation is to accurately describe the interaction between the carrier gaseous phase and the polydisperse evaporating spray produced through atomization. For this purpose, we rely on the EMSM (Eulerian Multi-Size Moment) Eulerian polydisperse model. It is based on a high order moment method in size, with a maximization of entropy technique in order to provide a smooth reconstruction of the distribution, derived from a Williams-Boltzmann mesoscopic model under the monokinetic assumption [O. Emre (2014) PhD Thesis, École Centrale Paris; O. Emre, R.O. Fox, M. Massot, S. Chaisemartin, S. Jay, F. Laurent (2014) Flow, Turbulence and Combustion 93, 689-722; O. Emre, D. Kah, S. Jay, Q.-H. Tran, A. Velghe, S. de Chaisemartin, F. Laurent, M. Massot (2015) Atomization Sprays 25, 189-254; D. Kah, F. Laurent, M. Massot, S. Jay (2012) J. Comput. Phys. 231, 394-422; D. Kah, O. Emre, Q.-H. Tran, S. de Chaisemartin, S. Jay, F. Laurent, M. Massot (2015) Int. J. Multiphase Flows 71, 38-65; A. Vié, F. Laurent, M. Massot (2013) J. Comp. Phys. 237, 277-310]. The present contribution relies on a major extension of this model [M. Essadki, S. de Chaisemartin, F. Laurent, A. Larat, M. Massot (2016), submitted to SIAM J. Appl. Math.], with the aim of building a unified approach and coupling with a separated phases model describing the dynamics and atomization of the interface near the injector. The novelty is to be found in terms of modeling, numerical schemes and implementation. A new high order moment approach is introduced using fractional moments in surface, which can be related to geometrical quantities of the gas-liquid interface. We also provide a novel algorithm for an accurate resolution of the evaporation. Adaptive mesh refinement properly scaling on massively
Transonic Airfoil Flow Simulation. Part I: Mesh Generation and Inviscid Method
Directory of Open Access Journals (Sweden)
Vladimir CARDOS
2010-06-01
Full Text Available A calculation method for the subsonic and transonic viscous flow over an airfoil using the displacement surface concept is described. Part I presents a mesh generation method for the computational grid and a finite volume method for the time-dependent Euler equations. The inviscid solution is used for the inviscid-viscous coupling procedure presented in Part II.
International Nuclear Information System (INIS)
Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil
2013-01-01
The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modelling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal photon sources with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 solids and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modelled using the Blender program. As a result, it was found that there were no significant differences between AFEs for objects described by mesh models and objects described using solid volumes of GEANT4. Provided the shape and volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required for the dosimetric calculations, especially for energies below 100 keV.
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
Schroeder, Craig
2012-02-01
We present a method for applying semi-implicit forces on a Lagrangian mesh to an Eulerian discretization of the Navier-Stokes equations in a way that produces a sparse symmetric positive definite system. The resulting method has semi-implicit and fully-coupled viscosity, pressure, and Lagrangian forces. We apply our new framework for forces on a Lagrangian mesh to the case of a surface tension force, which when treated explicitly leads to a tight time step restriction. By applying surface tension as a semi-implicit Lagrangian force, the resulting method benefits from improved stability and the ability to take larger time steps. The resulting discretization is also able to maintain parasitic currents at low levels. © 2011.
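The key property claimed above — a sparse symmetric positive definite (SPD) system — is what allows such a discretization to be solved with conjugate gradients. The sketch below illustrates that idea on a generic stiff linear force: a backward-Euler step (I + Δt K) x_new = x solved by plain CG. It illustrates the SPD structure only, not the paper's full fluid-structure coupling (and uses a dense matrix for brevity):

```python
import numpy as np

def cg_solve(A, b, tol=1e-12, max_iter=1000):
    """Conjugate gradients for an SPD matrix A (dense, for illustration)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def semi_implicit_step(x, K, dt):
    """Backward-Euler step for dx/dt = -K x with SPD stiffness K:
    solve (I + dt*K) x_new = x. Stable for any dt, unlike the
    explicit update that motivates the semi-implicit treatment."""
    return cg_solve(np.eye(len(x)) + dt * K, x)
```

Taking a time step far beyond the explicit stability limit and observing no blow-up mirrors the stability benefit reported in the abstract.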
Large-Eddy Simulation of turbulent vortex shedding
International Nuclear Information System (INIS)
Archambeau, F.
1995-06-01
This thesis documents the development and application of a computational algorithm for Large-Eddy Simulation. Unusually, the method adopts a fully collocated variable storage arrangement and is applicable to complex, non-rectilinear geometries. A Reynolds-averaged Navier-Stokes algorithm has formed the starting point of the development, but has been modified substantially: the spatial approximation of convection is effected by an energy-conserving central-differencing scheme; a second-order time-marching Adams-Bashforth scheme has been introduced; the pressure field is determined by solving the pressure-Poisson equation; this equation is solved either by use of preconditioned Conjugate-Gradient methods or with the Generalised Minimum Residual method; two types of sub-grid scale models have been introduced and examined. The algorithm has been validated by reference to a hierarchy of unsteady flows of increasing complexity starting with unsteady lid-driven cavity flows and ending with 3-D turbulent vortex shedding behind a square prism. In the latter case, for which extensive experimental data are available, special emphasis has been put on examining the dependence of the results on mesh density, near-wall treatment and the nature of the sub-grid-scale model, one of which is an advanced dynamic model. The LES scheme is shown to return time-average and phase-averaged results which agree well with experimental data and which support the view that LES is a promising approach for unsteady flows dominated by large periodic structures. (author)
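The second-order Adams-Bashforth scheme introduced in this algorithm advances the solution using the current and previous right-hand sides. A minimal sketch in generic ODE form (not the thesis code; the forward-Euler start-up step is a common bootstrap choice assumed here):

```python
def ab2_integrate(f, y0, dt, nsteps):
    """Second-order Adams-Bashforth:
    y_{n+1} = y_n + dt * (3/2 f(y_n) - 1/2 f(y_{n-1})),
    bootstrapped with one forward-Euler step."""
    y = y0
    f_prev = f(y)
    y = y + dt * f_prev                  # start-up step (forward Euler)
    for _ in range(nsteps - 1):
        f_curr = f(y)
        y = y + dt * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
    return y
```

On dy/dt = -y the global error scales as O(Δt²), the accuracy that motivates the scheme's use in time-accurate LES.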
N-body simulations for f(R) gravity using a self-adaptive particle-mesh code
Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya
2011-02-01
We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.
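The matter power spectrum measurement reported here is, at its core, an isotropic binning of |δ(k)|² from an FFT of the density contrast on the particle-mesh grid. A simplified sketch follows (uniform grid, linear k-bins, no shot-noise or mass-assignment window corrections, all of which a production measurement would include):

```python
import numpy as np

def power_spectrum(delta, box, nbins=16):
    """Binned isotropic P(k) of a 3D overdensity field on a cubic grid."""
    n = delta.shape[0]
    dk = np.fft.fftn(delta) / delta.size             # normalized Fourier modes
    pk3d = (np.abs(dk) ** 2 * box ** 3).ravel()
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), nbins + 1)
    idx = np.digitize(kmag, edges)
    pk = np.array([pk3d[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, nbins + 1)])
    return 0.5 * (edges[:-1] + edges[1:]), pk
```

A single plane-wave overdensity should place all of its power in the bin containing its wavenumber, which is a useful unit test for any such estimator.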
Multi-dimensional two-phase flow measurements in a large-diameter pipe using wire-mesh sensor
International Nuclear Information System (INIS)
Kanai, Taizo; Furuya, Masahiro; Arai, Takahiro; Shirakawa, Kenetsu; Nishi, Yoshihisa; Ueda, Nobuyuki
2011-01-01
The authors developed a measurement method to determine the multi-dimensionality of two-phase flow. A wire-mesh sensor (WMS) can acquire a void fraction distribution at high temporal and spatial resolution and can also estimate the velocity of a vertical rising flow by investigating the signal time delay of the upstream WMS relative to the downstream one. Previously, one-dimensional velocity was estimated by using the same point of each WMS at a temporal resolution of 1.0-5.0 s. The authors propose to extend this time series analysis to estimate the multi-dimensional velocity profile via cross-correlation analysis between a point of the upstream WMS and multiple points downstream. Bubbles behave in various ways according to size, which is used to classify them into groups via wavelet analysis before the cross-correlation analysis. This method was verified with air-water straight and swirl flows within a large-diameter vertical pipe. A high-speed camera was used to set the parameters of the cross-correlation analysis. The results revealed that in both the rising straight and swirl flows, large-scale bubbles tend to move to the center, while small bubbles are pushed to the outside or drawn into the space previously occupied by the large bubbles. Moreover, it is found that this method can estimate the rotational velocity component of the swirl flow as well as measure the multi-dimensional velocity vector at a high temporal resolution of 0.2 s. (author)
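The time-delay estimation described above can be sketched as a cross-correlation between one upstream and one downstream signal: the lag that maximizes the correlation, times the sample period, gives the transit time over the known axial sensor spacing. This is a generic 1-D illustration of the idea, without the wavelet pre-classification step the authors apply first:

```python
import numpy as np

def rise_velocity(sig_up, sig_down, gap, dt):
    """Velocity from the lag maximizing the cross-correlation of two
    wire-mesh-sensor signals separated axially by `gap` metres,
    sampled every `dt` seconds."""
    a = sig_up - sig_up.mean()
    b = sig_down - sig_down.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)   # samples by which downstream trails
    return gap / (lag * dt) if lag > 0 else float("nan")
```

The multi-dimensional extension in the paper applies the same estimate between one upstream point and several downstream points to recover a velocity vector rather than a single axial component.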
Kimura, Satoshi; Candy, Adam S.; Holland, Paul R.; Piggott, Matthew D.; Jenkins, Adrian
2013-07-01
Several different classes of ocean model are capable of representing floating glacial ice shelves. We describe the incorporation of ice shelves into Fluidity-ICOM, a nonhydrostatic finite-element ocean model with the capacity to utilize meshes that are unstructured and adaptive in three dimensions. This geometric flexibility offers several advantages over previous approaches. The model represents melting and freezing on all ice-shelf surfaces including vertical faces, treats the ice shelf topography as continuous rather than stepped, and does not require any smoothing of the ice topography or any of the additional parameterisations of the ocean mixed layer used in isopycnal or z-coordinate models. The model can also represent a water column that decreases to zero thickness at the 'grounding line', where the floating ice shelf is joined to its tributary ice streams. The model is applied to idealised ice-shelf geometries in order to demonstrate these capabilities. In these simple experiments, arbitrarily coarsening the mesh outside the ice-shelf cavity has little effect on the ice-shelf melt rate, while the mesh resolution within the cavity is found to be highly influential. Smoothing the vertical ice front results in faster flow along the smoothed ice front, allowing greater exchange with the ocean than in simulations with a realistic ice front. A vanishing water-column thickness at the grounding line has little effect in the simulations studied. We also investigate the response of ice shelf basal melting to variations in deep water temperature in the presence of salt stratification.
Direct and large-eddy simulation IX
Kuerten, Hans; Geurts, Bernard; Armenio, Vincenzo
2015-01-01
This volume reflects the state of the art of numerical simulation of transitional and turbulent flows and provides an active forum for discussion of recent developments in simulation techniques and understanding of flow physics. Following the tradition of earlier DLES workshops, these papers address numerous theoretical and physical aspects of transitional and turbulent flows. At an applied level it contributes to the solution of problems related to energy production, transportation, magneto-hydrodynamics and the environment. A special session is devoted to quality issues of LES. The ninth Workshop on 'Direct and Large-Eddy Simulation' (DLES-9) was held in Dresden, April 3-5, 2013, organized by the Institute of Fluid Mechanics at Technische Universität Dresden. This book is of interest to scientists and engineers, both at an early level in their career and at more senior levels.
Field simulations for large dipole magnets
International Nuclear Information System (INIS)
Lazzaro, A.; Cappuzzello, F.; Cunsolo, A.; Cavallaro, M.; Foti, A.; Khouaja, A.; Orrigo, S.E.A.; Winfield, J.S.
2007-01-01
The problem of describing the magnetic field of large bending magnets is addressed in relation to the requirements of modern trajectory-reconstruction techniques. The crucial question of interpolating and extrapolating fields known at a discrete number of points is analysed. For this purpose a realistic field model of the large dipole of the MAGNEX spectrometer, obtained with three-dimensional finite-element simulations, is used. The influence of uncertainties in the measured field on the quality of the trajectory reconstruction is treated in detail. General constraints for field measurements, in terms of required resolutions, step sizes and precisions, are thus extracted.
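Trajectory reconstruction needs the field between the measured points, which is where the interpolation question above arises. The bilinear scheme below is the simplest one would benchmark against measurement step size; it is an illustrative sketch, not the interpolation actually used for MAGNEX:

```python
import numpy as np

def bilinear(field, x, y, dx, dy):
    """Bilinear interpolation of a field sampled on a regular grid with
    spacings dx, dy; exact for fields linear in x and y."""
    i, j = int(x // dx), int(y // dy)
    tx, ty = x / dx - i, y / dy - j
    return ((1 - tx) * (1 - ty) * field[i, j]
            + tx * (1 - ty) * field[i + 1, j]
            + (1 - tx) * ty * field[i, j + 1]
            + tx * ty * field[i + 1, j + 1])
```

Reproducing a linear field exactly is the basic consistency check; the residual on curved field maps then scales with the square of the measurement step, which is one way the required step size can be constrained.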
Large data management and systematization of simulation
International Nuclear Information System (INIS)
Ueshima, Yutaka; Saitho, Kanji; Koga, James; Isogai, Kentaro
2004-01-01
In advanced photon research, large-scale simulations are powerful tools. In numerical experiments, real-time visualization and steering systems are regarded as promising methods of data analysis. This approach is valid for routine one-off analyses or short-cycle simulations. In research on an unknown problem, however, the output data must be analyzable many times, because a profitable analysis is seldom achieved on the first attempt. Consequently, output data should be filed so that they can be consulted and analyzed at any time. To support such research, the following automatic functions are needed: transporting data files from the data generator to data storage, analyzing the data, tracking the history of data handling, and so on. The Large Data Management system will be a functional, distributed Problem Solving Environment. (author)
Large Eddy Simulation for Compressible Flows
Garnier, E; Sagaut, P
2009-01-01
Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...
Angelidis, Dionysios; Sotiropoulos, Fotis
2015-11-01
The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes, and the fractional step method has been employed. The overall performance and robustness of the second-order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming-grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind-tunnel-scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.
Figueredo-Cardero, Alvio; Chico, Ernesto; Castilho, Leda R; Medronho, Ricardo A
2009-11-01
In the present work Computational Fluid Dynamics (CFD) was used to study the flow field and particle dynamics in an internal spin-filter (SF) bioreactor system. Evidence of a radial exchange flow through the filter mesh was detected, with a magnitude up to 130-fold higher than the perfusion flow, thus significantly contributing to radial drag. The exchange flow magnitude was significantly influenced by the filter rotation rate, but not by the perfusion flow, within the ranges evaluated. Previous reports had only given indirect evidences of this exchange flow phenomenon in spin-filters, but the current simulations were able to quantify and explain it. Flow pattern inside the spin-filter bioreactor resembled a typical Taylor-Couette flow, with vortices being formed in the annular gap and eventually penetrating the internal volume of the filter, thus being the probable reason for the significant exchange flow observed. The simulations also showed that cells become depleted in the vicinity of the mesh due to lateral particle migration. Cell concentration near the filter was approximately 50% of the bulk concentration, explaining why cell separation achieved in SFs is not solely due to size exclusion. The results presented indicate the power of CFD techniques to study and better understand spin-filter systems, aiming at the establishment of effective design, operation and scale-up criteria.
DEFF Research Database (Denmark)
Herrmann, Bent; Krag, Ludvig Ahm; Feekings, Jordan P.
2016-01-01
described by a double logistic selection curve, implying that two different size selection processes occur in the cod end. The double selection process could be explained by an additional selection process occurring through slack meshes. The results imply that the escapement of 46% and 34% of the larger...... Atlantic Cod and Haddock (those above 48 cm), respectively, would be through wide-open or slack meshes. Since these mesh states are only likely to be present in the latest stage of the fishing process (e.g., when the cod end is near the surface), a large fraction of the bigger fish probably escaped near...
Marlen, van B.
2003-01-01
This paper presents the results of experiments aimed to improve the selectivity of beam trawls in the North Sea for roundfish whilst minimizing losses on target flatfish. Large-meshed top panels were designed for the tickler chain type of beam trawls used in this fishery. The design process involved
Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans
Boussaha, M.; Vallet, B.; Rives, P.
2018-05-01
The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
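The chunked pipeline described above parallelizes trivially, since each temporal chunk is processed independently. A schematic sketch follows, with a thread pool and a placeholder per-chunk job; `reconstruct` merely stands in for the sampling, surface reconstruction and texturing stages of the actual framework:

```python
from concurrent.futures import ThreadPoolExecutor

def split_chunks(stream, size):
    """Slice an acquisition stream into fixed-size temporal chunks."""
    return [stream[i:i + size] for i in range(0, len(stream), size)]

def reconstruct(chunk):
    # placeholder for per-chunk isotropic sampling, reconstruction, texturing
    return sum(chunk)

def run_pipeline(stream, size, workers=4):
    chunks = split_chunks(stream, size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(reconstruct, chunks))   # chunks run independently
```

Because no chunk reads another chunk's output, the same structure maps directly onto separate processes or machines, which is how the 30-hour figure for the full 17 km dataset is achieved.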
COBRA-IE Evaluation by Simulation of the NUPEC BWR Full-Size Fine-Mesh Bundle Test (BFBT)
International Nuclear Information System (INIS)
Burns, C. J.; Aumiler, D.L.
2006-01-01
The COBRA-IE computer code is a thermal-hydraulic subchannel analysis program capable of simulating phenomena present in both PWRs and BWRs. As part of ongoing COBRA-IE assessment efforts, the code has been evaluated against experimental data from the NUPEC BWR Full-Size Fine-Mesh Bundle Tests (BFBT). The BFBT experiments utilized an 8 x 8 rod bundle to simulate BWR operating conditions and power profiles, providing an excellent database for investigation of the capabilities of the code. Benchmarks performed included steady-state and transient void distribution, single-phase and two-phase pressure drop, and steady-state and transient critical power measurements. COBRA-IE effectively captured the trends seen in the experimental data with acceptable prediction error. Future sensitivity studies are planned to investigate the effects of enabling and/or modifying optional code models dealing with void drift, turbulent mixing, rewetting, and CHF.
Solution adaptive triangular meshes with application to the simulation of plasma equilibrium
International Nuclear Information System (INIS)
Erlebacher, G.
1984-01-01
A new discrete Laplace operator is constructed on a local mesh molecule, second-order accurate on symmetric cell regions, based on local Taylor series expansions. This discrete Laplacian is then compared to the one commonly used in the literature. A truncation error analysis of gradient and Laplace operators calculated at triangle centroids reveals that the maximum bounds of their truncation errors are minimized on equilateral triangles, for a fixed triangle perimeter. A new adaptive strategy on arbitrary triangular grids is developed in which a uniform grid is defined with respect to the solution surface, as opposed to the x,y plane. Departures from mesh uniformity arise from a spatially dependent mean curvature of the solution surface. The power of this new adaptive technique is demonstrated on the problem of finding free-boundary plasma equilibria within the context of MHD. The geometry is toroidal, and axisymmetry in the toroidal direction is assumed. We are led to conclude that the grid should move, not towards regions of high curvature of magnetic flux, but rather towards regions of greater toroidal current density. This has a direct bearing on the accuracy with which the Grad-Shafranov equation is being approximated.
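The centroid-gradient result can be illustrated with the standard linear-triangle gradient, which is constant on each triangle and exact for linear fields (hence at least second-order accurate at the centroid). This is a generic construction for illustration, not the paper's new operator:

```python
import numpy as np

def centroid_gradient(p1, p2, p3, f1, f2, f3):
    """Gradient of the linear interpolant of f over a triangle.

    For a piecewise-linear field the gradient is constant on the triangle,
    so it is also the gradient 'at the centroid'. It is exact for linear f,
    which makes the centroid evaluation (at least) second-order accurate.
    """
    # Solve  [p2-p1; p3-p1] @ grad = [f2-f1; f3-f1]  for the constant gradient.
    A = np.array([[p2[0] - p1[0], p2[1] - p1[1]],
                  [p3[0] - p1[0], p3[1] - p1[1]]], dtype=float)
    b = np.array([f2 - f1, f3 - f1], dtype=float)
    return np.linalg.solve(A, b)
```

For the linear field f = 2x + 3y this recovers the exact gradient (2, 3) on any non-degenerate triangle.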
Large eddy simulation of hydrodynamic cavitation
Bhatt, Mrugank; Mahesh, Krishnan
2017-11-01
Large eddy simulation is used to study sheet-to-cloud cavitation over a wedge. The mixture of water and water vapor is represented using a homogeneous mixture model. The compressible Navier-Stokes equations for mixture quantities, along with a transport equation for the vapor mass fraction employing finite-rate mass transfer between the two phases, are solved using the numerical method of Gnanaskandan and Mahesh. The method is implemented on unstructured grids with parallel MPI capabilities. Flow over a wedge is simulated at Re = 200,000 and the performance of the homogeneous mixture model is analyzed in predicting the different regimes of sheet-to-cloud cavitation, namely incipient, transitory and periodic, as observed in the experimental investigation of Harish et al. This work is supported by the Office of Naval Research.
Large-eddy simulation of contrails
Energy Technology Data Exchange (ETDEWEB)
Chlond, A [Max-Planck-Inst. fuer Meteorologie, Hamburg (Germany)
1998-12-31
A large eddy simulation (LES) model has been used to investigate the role of various external parameters and physical processes in the life-cycle of contrails. The model is applied to conditions that are typical for those under which contrails can be observed, i.e. in an atmosphere which is supersaturated with respect to ice and at a temperature of approximately 230 K or colder. The sensitivity runs indicate that the contrail evolution is controlled primarily by the humidity, temperature and static stability of the ambient air and secondarily by the baroclinicity of the atmosphere. Moreover, it turns out that the initial ice particle concentration and radiative processes are of minor importance in the evolution of contrails, at least during the 30-minute simulation period. (author) 9 refs.
Large eddy simulation of breaking waves
DEFF Research Database (Denmark)
Christensen, Erik Damgaard; Deigaard, Rolf
2001-01-01
A numerical model is used to simulate wave breaking, the large scale water motions and turbulence induced by the breaking process. The model consists of a free surface model using the surface markers method combined with a three-dimensional model that solves the flow equations. The turbulence ... The incoming waves are specified by a flux boundary condition. The waves are approaching in the shore-normal direction and are breaking on a plane, constant slope beach. The first few wave periods are simulated by a two-dimensional model in the vertical plane normal to the beach line. The model describes ... the steepening and the overturning of the wave. At a given instant, the model domain is extended to three dimensions, and the two-dimensional flow field develops spontaneously three-dimensional flow features with turbulent eddies. After a few wave periods, stationary (periodic) conditions are achieved ...
Directory of Open Access Journals (Sweden)
Federico Canè
2018-01-01
Full Text Available With cardiovascular disease (CVD) remaining the primary cause of death worldwide, early detection of CVDs becomes essential. The intracardiac flow is an important component of ventricular function, motion kinetics, wash-out of ventricular chambers, and ventricular energetics. Coupling between Computational Fluid Dynamics (CFD) simulations and medical images can play a fundamental role in terms of patient-specific diagnostic tools. From a technical perspective, CFD simulations with moving boundaries can easily lead to negative-volume errors and the sudden failure of the simulation. The generation of high-quality 4D meshes (3D in space + time) with 1-to-1 vertex correspondence becomes essential to perform a CFD simulation with moving boundaries. In this context, we developed a semiautomatic morphing tool able to create 4D high-quality structured meshes starting from a segmented 4D dataset. To prove its versatility and efficiency, the method was tested on three different 4D datasets (Ultrasound, MRI, and CT) by evaluating the quality and accuracy of the resulting 4D meshes. Furthermore, an estimation of some physiological quantities is accomplished for the 4D CT reconstruction. Future research will aim at extending the region of interest, further automation of the meshing algorithm, and generating structured hexahedral mesh models both for the blood and myocardial volume.
Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik
2018-06-01
Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure to investigate the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, the numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations. Investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of the study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operating points of a blood pump. The flow was simulated on a 100M-element mesh for the large eddy simulation and a 20M-element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce the significant portion of dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences in the shear stresses were found. The results show the potential of the large eddy simulation as a high-quality comparative case to check the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for the blood damage prediction.
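The viscous shear stresses compared in the study can be reduced to a scalar measure; a common choice in the blood-damage literature (an assumption here, since the abstract does not state which formula the authors use) is sigma = mu * sqrt(2 S_ij S_ij), built from the strain-rate tensor:

```python
import numpy as np

def scalar_shear_stress(grad_u, mu):
    """Scalar viscous stress  sigma = mu * sqrt(2 * S_ij S_ij)
    from the velocity-gradient tensor grad_u (entries du_i/dx_j).

    A common scalar stress measure in blood-damage studies; the choice of
    this particular invariant is an illustrative assumption.
    """
    S = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor (symmetric part)
    return mu * np.sqrt(2.0 * np.sum(S * S))
```

For a simple shear du/dy = 1000 1/s with a blood-like viscosity mu = 3.5e-3 Pa s, this gives sigma = mu * du/dy = 3.5 Pa.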
Large eddy simulations of compressible magnetohydrodynamic turbulence
International Nuclear Information System (INIS)
Grete, Philipp
2016-01-01
Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the
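The filtering step that defines the SGS terms can be sketched in one dimension: apply a top-hat filter and form tau = bar(uu) - bar(u)bar(u), the quantity the closures must model. This is a generic illustration of the LES decomposition, not the thesis' deconvolution closure:

```python
import numpy as np

def box_filter(u, w):
    """Top-hat (box) filter of odd width w points on a periodic 1-D field."""
    half = w // 2
    n = len(u)
    return np.array([
        np.mean(np.take(u, np.arange(i - half, i + half + 1), mode='wrap'))
        for i in range(n)
    ])

def sgs_stress(u, w=5):
    """Subgrid stress tau = bar(u*u) - bar(u)*bar(u).

    This is the term an SGS model must close; for an equal-weight box
    filter it is the local window variance and hence non-negative.
    """
    return box_filter(u * u, w) - box_filter(u, w) ** 2
```

The non-negativity of tau for this filter is one of the realizability properties SGS closures are often checked against.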
Simulation of contaminant transport in fractured porous media on triangular meshes
Dong, Chen; Sun, Shuyu
2010-12-01
A mathematical model for contaminant species passing through fractured porous media is presented. In the numerical model, we combine two locally conservative methods, i.e. mixed finite element (MFE) and the finite volume (FV) methods. Adaptive triangle mesh is used for effective treatment of the fractures. A hybrid MFE method is employed to provide an accurate approximation of velocities field for both the fractures and matrix which are crucial to the convection part of the transport equation. The FV method and the standard MFE method are used to approximate the convection and dispersion terms respectively. Numerical examples in a medium containing fracture network illustrate the robustness and efficiency of the proposed numerical model. © 2010 IEEE.
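The convection part handled by the FV scheme can be illustrated with a first-order upwind update on a periodic 1-D mesh; the scheme is locally conservative, which is the property the authors emphasize. This is a minimal sketch, not the paper's hybrid MFE/FV discretization:

```python
def advect_upwind(c, vel, dx, dt, steps):
    """First-order upwind finite-volume update for  dc/dt + vel*dc/dx = 0
    on a periodic 1-D mesh (vel > 0 assumed). Each update moves flux from
    one cell to its neighbor, so the total amount of contaminant is
    conserved exactly - the locally conservative property of FV schemes.
    """
    n = len(c)
    nu = vel * dt / dx                      # CFL number; stability needs 0 <= nu <= 1
    assert 0.0 <= nu <= 1.0, "CFL condition violated"
    c = list(c)
    for _ in range(steps):
        c = [c[i] - nu * (c[i] - c[i - 1]) for i in range(n)]  # i-1 wraps periodically
    return c
```

Summing the cell values before and after the update shows the telescoping flux exchange leaves the total mass unchanged.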
Bacopoulos, Peter
2018-05-01
A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves more than a 50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency, at zero cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with a root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80s and 90s percentage ranges. Tidal currents are simulated with an RMSE of 0.09-0.23 m s-1 and IA values of 97% and greater. Storm tide due to Hurricane Matthew (2016) is simulated with an RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing to a lesser extent. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models. Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.
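The core idea of truncation-error-driven mesh optimization can be sketched with a generic refinement scaling (an illustration of the principle only, not the actual LTEA+CD formula): for a scheme of order p the local error scales as h^p, so a target size meeting a tolerance follows from h_target = h * (tol/err)^(1/p), clipped to admissible bounds.

```python
def target_size(h, err, tol, order=2, h_min=10.0, h_max=5000.0):
    """Target element size from a local truncation-error estimate.

    Generic scaling: an order-p scheme has err ~ h**p, so meeting 'tol'
    suggests h_target = h * (tol/err) ** (1/p), clipped to mesh-size
    bounds. The order, bounds and units (meters) are illustrative
    assumptions, not values from the paper.
    """
    h_t = h * (tol / err) ** (1.0 / order)
    return min(max(h_t, h_min), h_max)
```

Applying such a rule recursively, as the paper does with LTEA+CD, drives the mesh toward a roughly uniform distribution of local truncation error.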
Large-Eddy Simulation of turbulent vortex shedding
Energy Technology Data Exchange (ETDEWEB)
Archambeau, F
1995-06-01
This thesis documents the development and application of a computational algorithm for Large-Eddy Simulation. Unusually, the method adopts a fully collocated variable storage arrangement and is applicable to complex, non-rectilinear geometries. A Reynolds-averaged Navier-Stokes algorithm has formed the starting point of the development, but has been modified substantially: the spatial approximation of convection is effected by an energy-conserving central-differencing scheme; a second-order time-marching Adams-Bashforth scheme has been introduced; the pressure field is determined by solving the pressure-Poisson equation; this equation is solved either by use of preconditioned Conjugate-Gradient methods or with the Generalised Minimum Residual method; two types of sub-grid scale models have been introduced and examined. The algorithm has been validated by reference to a hierarchy of unsteady flows of increasing complexity starting with unsteady lid-driven cavity flows and ending with 3-D turbulent vortex shedding behind a square prism. In the latter case, for which extensive experimental data are available, special emphasis has been put on examining the dependence of the results on mesh density, near-wall treatment and the nature of the sub-grid-scale model, one of which is an advanced dynamic model. The LES scheme is shown to return time-average and phase-averaged results which agree well with experimental data and which support the view that LES is a promising approach for unsteady flows dominated by large periodic structures. (author) 87 refs.
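The second-order Adams-Bashforth time-marching scheme mentioned above can be sketched as follows; the test equation and the forward-Euler start-up step are illustrative choices, not details from the thesis:

```python
def ab2(f, y0, t0, dt, steps):
    """Second-order Adams-Bashforth time marching:
        y_{n+1} = y_n + dt * (3/2 f_n - 1/2 f_{n-1}),
    bootstrapped with one forward-Euler step (a common, simple choice).
    """
    y_prev, t = y0, t0
    f_prev = f(t, y_prev)
    y = y_prev + dt * f_prev              # Euler start-up step
    t += dt
    for _ in range(steps - 1):
        f_cur = f(t, y)
        y = y + dt * (1.5 * f_cur - 0.5 * f_prev)
        f_prev = f_cur
        t += dt
    return y
```

On the linear test problem dy/dt = -y the scheme's global error is O(dt^2), so with dt = 1e-3 the result at t = 1 is within about 1e-6 of exp(-1).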
Large eddy simulation of cavitating flows
Gnanaskandan, Aswin; Mahesh, Krishnan
2014-11-01
Large eddy simulation on unstructured grids is used to study hydrodynamic cavitation. The multiphase medium is represented using a homogeneous equilibrium model that assumes thermal equilibrium between the liquid and the vapor phase. Surface tension effects are ignored and the governing equations are the compressible Navier-Stokes equations for the liquid/vapor mixture along with a transport equation for the vapor mass fraction. A characteristic-based filtering scheme is developed to handle shocks and material discontinuities in non-ideal gases and mixtures. A TVD filter is applied as a corrector step in a predictor-corrector approach, with the predictor scheme being non-dissipative and symmetric. The method is validated for canonical one-dimensional flows and leading-edge cavitation over a hydrofoil, and applied to study sheet-to-cloud cavitation over a wedge. This work is supported by the Office of Naval Research.
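The homogeneous-mixture idea can be illustrated by the mixture density implied by a vapor mass fraction, using the harmonic rule 1/rho_m = Y_v/rho_v + (1 - Y_v)/rho_l. The property values below are illustrative (water near room temperature), not those used in the paper:

```python
def mixture_density(Y_v, rho_l=998.0, rho_v=0.017):
    """Density of a homogeneous liquid/vapor mixture from the vapor mass
    fraction Y_v (0 = pure liquid, 1 = pure vapor):

        1/rho_m = Y_v/rho_v + (1 - Y_v)/rho_l

    rho_l, rho_v in kg/m^3 are illustrative values for water at ~20 C.
    """
    return 1.0 / (Y_v / rho_v + (1.0 - Y_v) / rho_l)
```

Even a tiny vapor mass fraction collapses the mixture density by orders of magnitude, which is why cavitating flows exhibit strong compressibility effects and shocks that the paper's filtering scheme must handle.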
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not a multiple of the number of cores per cluster node, there are allocation strategies which provide more efficient calculations.
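The allocation effect described above is easy to quantify: when a cluster hands out whole nodes, a request that is not a multiple of the node size leaves cores idle. A minimal sketch (the node size is an assumed example, not the cluster from the paper):

```python
def allocation_efficiency(cores_requested, cores_per_node):
    """Fraction of allocated node cores actually used when a scheduler
    hands out whole nodes. Requests that are a multiple of the node size
    waste nothing; others leave cores idle on the last node, which mirrors
    the allocation effect discussed in the paper.
    """
    nodes = -(-cores_requested // cores_per_node)   # ceiling division
    return cores_requested / (nodes * cores_per_node)
```

For example, on 32-core nodes a 64-core request uses 100% of its allocation, while a 40-core request occupies two nodes but uses only 62.5% of them.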
Large-eddy simulations for turbulent flows
International Nuclear Information System (INIS)
Husson, S.
2007-07-01
The aim of this work is to study the impact of thermal gradients on a turbulent channel flow with imposed wall temperatures and friction Reynolds numbers of 180 and 395. In this configuration, temperature variations can be strong and induce significant variations of the fluid properties. We consider the low Mach number equations and carry out large eddy simulations. We first validate our simulations through comparisons of some of our LES results with DNS data. Then, we investigate the influence of the variations of the conductivity and the viscosity and show that we can assume these properties constant only for weak temperature gradients. We also study the thermal sub-grid-scale modelling and find no difference when the sub-grid-scale Prandtl number is taken constant or dynamically calculated. The analysis of the effects of strongly increasing the temperature ratio mainly shows a dissymmetry of the profiles. The physical mechanism responsible for these modifications is explained. Finally, we use semi-local scaling and the Van Driest transformation and we show that they lead to a better correspondence of the low and high temperature ratio profiles. (author)
Dynamic large eddy simulation: Stability via realizability
Mokhtarpoor, Reza; Heinz, Stefan
2017-10-01
The concept of dynamic large eddy simulation (LES) is highly attractive: such methods can dynamically adjust to changing flow conditions, which is known to be highly beneficial. For example, this avoids the use of empirical, case-dependent approximations (like damping functions). Ideally, dynamic LES should be local in physical space (without involving artificial clipping parameters), and it should be stable for a wide range of simulation time steps, Reynolds numbers, and numerical schemes. These properties are not trivial, and dynamic LES has suffered from such problems for decades. We address these questions by performing dynamic LES of periodic hill flow, including separation, at a high Reynolds number Re = 37,000. For the case considered, the main result of our studies is that it is possible to design LES that has the desired properties. It requires physical consistency: a PDF-realizable and stress-realizable LES model, which requires the inclusion of the turbulent kinetic energy in the LES calculation. LES models that do not honor such physical consistency can become unstable. We do not find support for the previous assumption that long-term correlations of negative dynamic model parameters are responsible for instability. Instead, we conclude that instability is caused by the stable spatial organization of significant unphysical states, which are represented by wall-type gradient streaks of the standard deviation of the dynamic model parameter. The applicability of our realizability stabilization to other dynamic models (including the dynamic Smagorinsky model) is discussed.
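Stress realizability, the physical-consistency requirement the authors identify as key to stability, can be checked on a modeled SGS stress tensor: it must be symmetric and positive semidefinite (so in particular 2k = tr(tau) >= 0). A minimal check of that property, not the authors' stabilization procedure:

```python
import numpy as np

def is_realizable(tau, tol=1e-12):
    """Check stress realizability of a modeled SGS stress tensor tau:
    it must be symmetric and positive semidefinite, which implies
    non-negative normal stresses and 2k = trace(tau) >= 0.
    """
    tau = np.asarray(tau, dtype=float)
    if not np.allclose(tau, tau.T):
        return False
    eigenvalues = np.linalg.eigvalsh(tau)   # real eigenvalues of symmetric tau
    return bool(eigenvalues.min() >= -tol)
```

A purely deviatoric model stress with a negative eigenvalue, for instance diag(1, -1, 0), fails this check even though its trace (and hence k) is zero, which is exactly the kind of unphysical state a realizable model rules out.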
Directory of Open Access Journals (Sweden)
Alessandra M Bavo
Full Text Available In recent years the role of FSI (fluid-structure interaction) simulations in the analysis of the fluid mechanics of heart valves has become more and more important, as they are able to capture the interaction between the blood and both the surrounding biological tissues and the valve itself. When setting up an FSI simulation, several choices have to be made to select the most suitable approach for the case of interest: in particular, to simulate flexible-leaflet cardiac valves, the type of discretization of the fluid domain is crucial; it can be described with an ALE (Arbitrary Lagrangian-Eulerian) or an Eulerian formulation. The majority of the reported 3D heart valve FSI simulations are performed with the Eulerian formulation, allowing for large deformations of the domains without compromising the quality of the fluid grid. Nevertheless, it is known that the ALE-FSI approach guarantees more accurate results at the interface between the solid and the fluid. The goal of this paper is to describe the same aortic valve model in the two cases, comparing the performances of an ALE-based FSI solution and an Eulerian-based FSI approach. After a first simplified 2D case, the aortic geometry was considered in a full 3D set-up. The model was kept as similar as possible in the two settings, to better compare the simulations' outcomes. Although for the 2D case the differences were unsubstantial, in our experience the performance of a full 3D ALE-FSI simulation was significantly limited by the technical problems and requirements inherent to the ALE formulation, mainly related to the mesh motion and deformation of the fluid domain. As a secondary outcome of this work, it is important to point out that the choice of the solver also influenced the reliability of the final results.
Shen, Junyu; Wang, Mei; Zhao, Liang; Zhang, Peili; Jiang, Jian; Liu, Jinxuan
2018-06-01
The development of highly efficient, robust, and cheap water oxidation electrodes is a major challenge in constructing industrially applicable electrolyzers for large-scale production of hydrogen from water. Herein we report a hierarchical stainless steel mesh electrode which features Ni(Fe)OxHy-coated self-supported nanocone arrays. Through a facile, mild, low-cost and readily scalable two-step fabrication procedure, the electrochemically active area of the optimized electrode is enlarged by a factor of 3.1 and the specific activity is enhanced by a factor of 250 at 265 mV overpotential compared with that of a corresponding pristine stainless steel mesh electrode. Moreover, the charge-transfer resistance is reduced from 4.47 Ω for the stainless steel mesh electrode to 0.13 Ω for the Ni(Fe)OxHy-coated nanocone array stainless steel mesh electrode. As a result, the cheap and easily fabricated electrode displays low overpotentials of 280 and 303 mV to achieve high current densities of 500 and 1000 mA cm-2 (geometric), respectively, for the oxygen evolution reaction in 1 M KOH. More importantly, the electrode exhibits good stability over 340 h of chronopotentiometric testing at 50 mA cm-2 (geometric) and only a slight attenuation (4.2%, ∼15 mV) in catalytic activity over 82 h of electrolysis at a constant current density of 500 mA cm-2 (geometric).
Deng, Xiaolong; Dong, Haibo
2017-11-01
Developing a high-fidelity, high-efficiency numerical method for bio-inspired flow problems with flow-structure interaction is important for understanding the related physics and developing many bio-inspired technologies. To simulate a fast-swimming big fish with multiple finlets or fish schooling, we need fine grids and/or a big computational domain, which are big challenges for 3-D simulations. In the current work, based on the 3-D finite-difference sharp-interface immersed boundary method for incompressible flows (Mittal et al., JCP 2008), we developed an octree-like Adaptive Mesh Refinement (AMR) technique to enhance the computational ability and increase the computational efficiency. The AMR is coupled with a multigrid acceleration technique and an MPI+OpenMP hybrid parallelization. In this work, different AMR layers are treated separately, the synchronization is performed in the buffer regions, and iterations are performed for the convergence of the solution. Each big region is calculated by an MPI process which then uses multiple OpenMP threads for further acceleration, so that the communication cost is reduced. With these acceleration techniques, various canonical and bio-inspired flow problems with complex boundaries can be simulated accurately and efficiently. This work is supported by the MURI Grant Number N00014-14-1-0533 and NSF Grant CBET-1605434.
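The octree-like AMR idea can be sketched in 2-D as a quadtree: cells split recursively wherever a user-supplied criterion flags them, yielding fine cells only where needed (e.g. near an immersed boundary). The cell representation and criterion signature below are assumptions for illustration, not the authors' data structures:

```python
def refine(cell, needs_refinement, max_depth=4):
    """Quadtree refinement (the 2-D analogue of an octree): split a cell
    while the criterion asks for it, and return the resulting leaf cells.

    cell = (x, y, size, depth) is the lower-left corner, edge length and
    refinement level; needs_refinement(x, y, size) -> bool is supplied by
    the caller (e.g. proximity to an immersed boundary).
    """
    x, y, size, depth = cell
    if depth >= max_depth or not needs_refinement(x, y, size):
        return [cell]                       # this cell stays a leaf
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):                  # visit the four children
        for dy in (0.0, half):
            leaves += refine((x + dx, y + dy, half, depth + 1),
                             needs_refinement, max_depth)
    return leaves
```

In an actual flow solver each leaf would carry its own grid block, with the buffer-region synchronization and MPI/OpenMP layering described in the abstract layered on top.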
Energy Technology Data Exchange (ETDEWEB)
Ahmadi, M. [Heriot Watt Univ., Edinburgh (United Kingdom)
2008-10-15
This paper described a project in which a higher-order upwinding scheme was used to solve the mass/energy conservation equations for simulating steam flood processes in an oil reservoir. Thermal recovery processes are among the most complex because they require a detailed accounting of thermal energy and chemical reaction kinetics. The numerical simulation of thermal recovery processes involves localized phenomena such as saturation and temperature fronts due to the hyperbolic features of the governing conservation laws. A second-order accurate FV method that was improved by a moving mesh strategy was used to adjust for moving coordinates on a finely gridded domain. The finite volume method was used and the problem of steam injection was then tested using the derived solution frameworks on both mixed and moving coordinates. The benefits of using a higher-order Godunov solver instead of lower-order ones were qualified. This second-order correction resulted in better resolution of moving features. The preference for higher-order solvers over lower-order ones in terms of shock capturing is under further investigation. It was concluded that although this simulation study was limited to steam flooding processes, the newly presented approach may be suitable for other enhanced oil recovery processes such as VAPEX, SAGD and in situ combustion processes. 23 refs., 28 figs.
Large-scale Intelligent Transportation Systems simulation
Energy Technology Data Exchange (ETDEWEB)
Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.
1995-06-01
A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and of Traffic Management Centers (TMC). The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to supply advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like a real vehicle. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
Sensitivity technologies for large scale simulation
International Nuclear Information System (INIS)
Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard
2005-01-01
Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis in existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time-domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source-inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct the initial conditions of a contamination event in an external flow, and we demonstrate an adjoint-based transient solution. In addition, we investigated time-domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
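The core economics of direct versus adjoint sensitivities can be illustrated on a toy linear model (the names A, b, c below are hypothetical, not from the report): the direct approach costs one linear solve per parameter, while the adjoint approach delivers all sensitivities with a single extra solve.

```python
# Toy comparison of direct and adjoint sensitivities for J(u) = c.u with
# A u = b; A, b, c are illustrative, not from the report.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well-conditioned model
b = rng.standard_normal(5)
c = rng.standard_normal(5)

# Direct (forward) sensitivities dJ/db_i: one linear solve per parameter.
direct = np.array([c @ np.linalg.solve(A, e) for e in np.eye(5)])

# Adjoint: a single solve of A^T lambda = c yields all dJ/db_i at once.
adjoint = np.linalg.solve(A.T, c)
```

Since J = c^T A^{-1} b, the gradient with respect to b is A^{-T} c, which is exactly the adjoint variable; the two vectors agree to machine precision, but the adjoint route scales to thousands of parameters at the cost of one solve.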
International Nuclear Information System (INIS)
Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.
2009-01-01
In this paper we develop a new method to interpolate large volumes of scattered data, focused mainly on the results of applying mesh-free methods, point methods, and particle methods. The method uses local radial basis functions as the interpolating functions, together with an octree data structure that accelerates locating the data points that influence the interpolated value at a new point. This speeds up the application of scientific visualization techniques for generating images from the large data volumes produced by mesh-free, point, and particle methods in the solution of diverse physical-mathematical models. As an example, the results obtained after applying this method using the local interpolation functions of Shepard are shown. (Author) 22 refs
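A minimal sketch of local Shepard (inverse-distance) interpolation of the kind described, with a brute-force neighbour search standing in for the tree-based acceleration, could look like the following.

```python
# Local Shepard (inverse-distance) interpolation sketch: only scattered
# points within `radius` of x contribute. A brute-force search stands in
# for the tree-based neighbour localization described in the paper.
import numpy as np

def shepard(points, values, x, radius, p=2):
    d = np.linalg.norm(points - x, axis=1)
    mask = d < radius
    if not mask.any():
        raise ValueError("no data points within the search radius")
    if (d[mask] == 0).any():              # x coincides with a data point
        return values[mask][d[mask] == 0][0]
    w = 1.0 / d[mask] ** p                # inverse-distance weights
    return float(w @ values[mask] / w.sum())
```

The interpolant reproduces the data values exactly at the sample points and always stays within the range of the neighbouring values, which makes it robust for visualization.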
Creech, Angus; Früh, Wolf-Gerrit; Maguire, A. Eoghan
2015-05-01
We present here a computational fluid dynamics (CFD) simulation of the Lillgrund offshore wind farm, which is located in the Øresund Strait between Sweden and Denmark. The simulation combines a dynamic representation of wind turbines embedded within a large-eddy simulation CFD solver and uses hr-adaptive meshing to increase or decrease mesh resolution where required. This allows the resolution of both the large-scale flow structures around the wind farm and the local flow conditions at individual turbines; consequently, the response of each turbine to local conditions can be modelled, as well as the resulting evolution of the turbine wakes. This paper provides a detailed description of the turbine model, which simulates the interaction between the wind, the turbine rotors, and the turbine generators by calculating the forces on the rotor, the body forces on the air, and the instantaneous power output. This model was used to investigate a selection of key wind speeds and directions, including cases where a row of turbines was fully aligned with the wind or at specific angles to the wind. Results shown here include the spin-up of turbines, the observation of eddies moving through the turbine array, meandering turbine wakes, and an extensive wind farm wake several kilometres in length. The key measurement available for cross-validation with operational wind farm data is the power output from the individual turbines, where the effect of unsteady turbine wakes on the performance of downstream turbines was a main point of interest. The results from the simulations were compared to the performance measurements from the real wind farm to provide a firm quantitative validation of this methodology. Having achieved good agreement between the model results and actual wind farm measurements, the potential of the methodology to provide a tool for further investigations of engineering and atmospheric science problems is outlined.
TESLA: Large Signal Simulation Code for Klystrons
International Nuclear Information System (INIS)
Vlasov, Alexander N.; Cooke, Simon J.; Chernin, David P.; Antonsen, Thomas M. Jr.; Nguyen, Khanh T.; Levush, Baruch
2003-01-01
TESLA (Telegraphist's Equations Solution for Linear Beam Amplifiers) is a new code designed to simulate linear-beam vacuum electronic devices with cavities, such as klystrons, extended interaction klystrons, twistrons, and coupled cavity amplifiers. The model includes a self-consistent, nonlinear solution of the three-dimensional electron equations of motion and the solution of the time-dependent field equations. The model differs from the conventional Particle-in-Cell approach in that the field spectrum is assumed to consist of a carrier frequency and its harmonics with slowly varying envelopes. Also, fields in the external cavities are modeled with circuit-like equations and couple to fields in the beam region through boundary conditions on the beam tunnel wall. The model in TESLA is an extension of the model used in the gyrotron code MAGY. The TESLA formulation has been extended to treat the multiple-beam case, in which each beam is transported inside its own tunnel. The beams interact with each other as they pass through the gaps in their common cavities. The interaction is treated by modifying the boundary conditions on the wall of each tunnel to include the effect of adjacent beams as well as the fields excited in each cavity. The extended version of TESLA for the multiple-beam case, TESLA-MB, has been developed for single-processor machines, and can run on UNIX machines and on PCs with large memory (above 2 GB). The TESLA-MB algorithm is currently being modified to simulate multiple-beam klystrons on multiprocessor machines using the MPI (Message Passing Interface) environment. The code TESLA has been verified by comparison with MAGIC for single- and multiple-beam cases. The TESLA and MAGIC codes predict the same power to within 1% for a simple two-cavity klystron design, while the computational time for TESLA is orders of magnitude less than for MAGIC 2D. In addition, TESLA was recently used to model the L-6048 klystron, code
Toward An Unstructured Mesh Database
Rezaei Mahdiraji, Alireza; Baumann, Peter Peter
2014-05-01
Unstructured meshes are used in several application domains, such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS, as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve the coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to the high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
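The incidence queries listed above (iterate over cells, find the cells incident to a given cell) can be illustrated with a toy in-memory index; a real mesh database would answer the same queries declaratively and at scale. The two-triangle mesh below is a hypothetical example, not from the paper.

```python
# Toy stand-in for the incidence relationships a mesh database must
# answer: cell -> vertices, plus the derived vertex -> cells index.
from collections import defaultdict

cells = {0: (0, 1, 2), 1: (1, 2, 3)}     # two triangles sharing edge (1, 2)

vertex_to_cells = defaultdict(set)
for cid, verts in cells.items():
    for v in verts:
        vertex_to_cells[v].add(cid)

def adjacent_cells(cid):
    # Cells incident to cid through at least one shared vertex.
    return {n for v in cells[cid] for n in vertex_to_cells[v]} - {cid}
```

In a mesh library this inverted index must be built and maintained by hand in C++; the argument of the paper is that a database should hide such storage structures behind declarative queries.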
Directory of Open Access Journals (Sweden)
Daniel Pérez-Grande
2016-11-01
This manuscript explores numerical errors in highly anisotropic diffusion problems. First, the paper addresses the use of regular structured meshes in numerical solutions versus meshes aligned with the preferential directions of the problem. Numerical diffusion in structured meshes is quantified by solving the classical anisotropic diffusion problem; the analysis is exemplified by application to a numerical model of conducting fluids under magnetic confinement, where the rates of transport in the directions parallel and perpendicular to a magnetic field are quite different. Numerical diffusion errors in this problem motivate the use of magnetic-field-aligned meshes (MFAM). The generation of this type of mesh presents some challenges; several meshing strategies are implemented and analyzed to provide insight into achieving acceptable mesh regularity. Second, gradient reconstruction methods for magnetically aligned meshes are addressed and numerical errors are compared for the structured and magnetically aligned meshes. It is concluded that the latter provides a more correct and straightforward approach to solving problems where anisotropy is present, especially if the level of anisotropy is high or difficult to quantify. The conclusions of the study may be extrapolated to anisotropic flows other than conducting fluids.
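The role of mesh alignment can be seen directly in the diffusion tensor: written in mesh coordinates, it is diagonal only when the mesh follows the field. The sketch below uses illustrative parameter values (a parallel-to-perpendicular ratio of 100), not the paper's.

```python
# Anisotropic diffusion tensor in mesh coordinates; illustrative values.
import numpy as np

def diffusion_tensor(k_par, k_perp, theta):
    # Conductivity tensor for a magnetic field at angle theta to the
    # x-axis: diag(k_par, k_perp) rotated into the mesh coordinate system.
    b = np.array([np.cos(theta), np.sin(theta)])      # unit field direction
    P = np.outer(b, b)                                # projector along B
    return k_par * P + k_perp * (np.eye(2) - P)
```

On a field-aligned mesh (theta = 0) the tensor is diagonal and parallel and perpendicular fluxes decouple; on a misaligned mesh the off-diagonal terms mix them, which is one source of the numerical cross-diffusion the paper quantifies.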
Large scale molecular simulations of nanotoxicity.
Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong
2014-01-01
The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. Carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is shown to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggest therapeutic potential in using the cytotoxicity of nanoparticles against cancer or bacterial cells. © 2014 Wiley Periodicals, Inc.
Large eddy simulation of stably stratified turbulence
International Nuclear Information System (INIS)
Shen Zhi; Zhang Zhaoshun; Cui Guixiang; Xu Chunxiao
2011-01-01
Stably stratified turbulence is a common phenomenon in the atmosphere and ocean. In this paper large eddy simulation is utilized to investigate homogeneous stably stratified turbulence numerically at Reynolds numbers Re = uL/ν = 10²∼10³ and Froude numbers Fr = u/NL = 10⁻²∼10⁰, in which u is the root mean square of the velocity fluctuations, L is the integral scale and N is the Brunt–Väisälä frequency. Three sets of computation cases are designed with different initial conditions, namely isotropic turbulence, Taylor–Green vortex and internal waves, to investigate the statistical properties arising from different origins. The computed horizontal and vertical energy spectra are consistent with observations in the atmosphere and ocean when the composite parameter ReFr² is greater than O(1). It has also been found in this paper that stratified turbulence can develop under different initial velocity conditions and that internal wave energy dominates in developed stably stratified turbulence.
Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC
Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik
2017-10-01
XGC has shown good scalability on large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although this is an obvious scalability issue if mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to the data locality of particles and mesh information. To address these issues we have initiated the development of a distributed-mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh-entity-centric view of the particle-mesh relationship, provides opportunities to address the data locality needs of many-core and GPU-supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first give an overview of the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.
Large-scale simulation of ductile fracture process of microstructured materials
International Nuclear Information System (INIS)
Tian Rong; Wang Chaowei
2011-01-01
The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity to develop a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework for simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into an FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked to multiscale microstructures in a realistic 3D numerical model in such a direct manner. (author)
Discontinuous Galerkin methodology for Large-Eddy Simulations of wind turbine airfoils
DEFF Research Database (Denmark)
Frére, A.; Sørensen, Niels N.; Hillewaert, K.
2016-01-01
This paper aims at evaluating the potential of the Discontinuous Galerkin (DG) methodology for Large-Eddy Simulation (LES) of wind turbine airfoils. The DG method has shown high accuracy, excellent scalability and capacity to handle unstructured meshes. It is however not used in the wind energy...... sector yet. The present study aims at evaluating this methodology on an application which is relevant for that sector and focuses on blade section aerodynamics characterization. To be pertinent for large wind turbines, the simulations would need to be at low Mach numbers (M ≤ 0.3) where compressible...... at low and high Reynolds numbers and compares the results to state-of-the-art models used in industry, namely the panel method (XFOIL with boundary layer modeling) and Reynolds Averaged Navier-Stokes (RANS). At low Reynolds number (Re = 6 × 104), involving laminar boundary layer separation and transition...
Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow
Directory of Open Access Journals (Sweden)
Sam Ali Al
2015-01-01
The performance of subgrid-scale (SGS) models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained using large-eddy simulations at different mesh resolutions are compared with direct numerical simulation (DNS) data. The effectiveness of modeling the wall stresses using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved large-eddy simulations and DNS data regardless of the SGS model. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between SGS models become distinguishable. Using the local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh large-eddy simulations and the DNS data. The results show that the ability of the local wall model to predict the separation zone depends strongly on the way it is implemented.
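A common way to implement such a local log-law wall model is to solve the log-law for the friction velocity at each wall face and feed the resulting wall stress back to the coarse LES. The sketch below uses a fixed-point iteration and standard constants; it is illustrative, not the paper's exact implementation.

```python
# Log-law wall model sketch: solve u_p/u_tau = ln(y+ )/kappa + B for the
# friction velocity u_tau, given the LES velocity u_p at the first
# off-wall node y_p. Illustrative, not the paper's implementation.
import numpy as np

KAPPA, B = 0.41, 5.2   # von Karman constant and log-law intercept

def friction_velocity(u_p, y_p, nu, iters=50):
    u_tau = max(np.sqrt(nu * u_p / y_p), 1e-12)   # viscous initial guess
    for _ in range(iters):
        u_tau = u_p / (np.log(y_p * u_tau / nu) / KAPPA + B)
    return u_tau
# The modelled wall shear stress is then tau_w = rho * u_tau**2.
```

The fixed-point map is strongly contracting because u_tau enters only through the logarithm, so a few dozen iterations recover the friction velocity to machine precision.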
Large-eddy simulations of the non-reactive flow in the Sydney swirl burner
International Nuclear Information System (INIS)
Yang Yang; Kær, Søren Knudsen
2012-01-01
Highlights: ► A rational mesh and grid system for LES is discussed. ► Validated results are provided and the discrepancy in the mean radial velocity component is discussed. ► Flow structures are identified using the vorticity field. ► POD is performed on cross sections to assist in the understanding of coherent structures. - Abstract: This paper presents a numerical investigation using large-eddy simulation. Two isothermal cases from the Sydney swirling flame database with different swirl numbers were tested. The grid system and mesh details are presented first. Validation showed overall good agreement with the time-averaged results. In the medium-swirl case, there are two reverse-flow regions with a collar-like structure between them. The existence of a strong unsteady structure, the precessing vortex core, was confirmed. Coherent structures are extracted from the instantaneous field. The Q-criterion was used to visualize the vorticity field, giving a distinctly clear picture of the vortex tubes. The dominant spatio-temporal structures contained in different cross sections were extracted using proper orthogonal decomposition. In the high-swirl case, there is only one long reverse-flow region. In this paper, we demonstrate the capability of a commercial CFD package to predict a complex flow field and present the potential of large eddy simulation for understanding flow dynamics.
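Snapshot POD of the kind used here reduces, in practice, to an SVD of mean-subtracted snapshot vectors; the singular values rank the fluctuation energy captured by each spatial mode. A minimal sketch (not the paper's code):

```python
# Snapshot POD sketch: columns of `snapshots` are flattened velocity
# fields on a cross-section at successive times. Not the paper's code.
import numpy as np

def pod(snapshots):
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s**2 / (s**2).sum()   # fraction of fluctuation energy per mode
    return U, energy               # columns of U are the spatial POD modes
```

For a flow dominated by one coherent structure, such as a precessing vortex core, the leading mode captures most of the fluctuation energy, which is what makes POD useful for identifying such structures.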
An Examination of Parameters Affecting Large Eddy Simulations of Flow Past a Square Cylinder
Mankbadi, M. R.; Georgiadis, N. J.
2014-01-01
Separated flow over a bluff body is analyzed via large eddy simulations. The turbulent flow around a square cylinder features a variety of complex flow phenomena, such as highly unsteady vortical structures, reverse flow in the near-wall region, and wake turbulence. The formation of spanwise vortices is often artificially suppressed in computations by either insufficient depth or a coarse spanwise resolution. As the resolution is refined and the domain extended, the artificial turbulent energy exchange between spanwise and streamwise turbulence is eliminated within the wake region. A parametric study is performed highlighting the effects of spanwise vortices, in which the resolution and depth of the spanwise computational domain are varied. For Re = 22,000, the mean and turbulent statistics computed from the numerical large eddy simulations (NLES) are in good agreement with experimental data. Von Karman shedding is observed in the wake of the cylinder. Mesh independence is illustrated by comparing mesh resolutions of 2 million and 16 million. Sensitivities to time stepping were minimized, and sampling-frequency sensitivities were absent. While increasing the spanwise depth and resolution can be costly, this practice was found to be necessary to eliminate the artificial turbulent energy exchange.
A dynamic mesh refinement technique for Lattice Boltzmann simulations on octree-like grids
Neumann, Philipp
2012-04-27
In this contribution, we present our new adaptive Lattice Boltzmann implementation within the Peano framework, with special focus on nanoscale particle transport problems. With the continuum hypothesis no longer holding at these small scales, new physical effects - such as Brownian fluctuations - need to be incorporated. We explain the overall layout of the application, including memory layout and access, and briefly review the adaptive algorithm. The scheme is validated by different benchmark computations in two and three dimensions. An extension to dynamically changing grids and a spatially adaptive approach to fluctuating hydrodynamics, allowing for the thermalisation of the fluid in particular regions of interest, is proposed. Both dynamic adaptivity and adaptive fluctuating hydrodynamics are validated separately in simulations of particle transport problems. The application of this scheme to an oscillating particle in a nanopore illustrates the importance of Brownian fluctuations in such setups. © 2012 Springer-Verlag.
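At the particle level, the Brownian fluctuations mentioned above are often modelled with a Langevin equation: deterministic drag plus a random kick whose variance follows the fluctuation-dissipation theorem. The Euler-Maruyama sketch below is illustrative, not the Peano implementation.

```python
# Langevin particle sketch (Euler-Maruyama): drag relaxation plus a
# Brownian kick with variance set by the fluctuation-dissipation theorem.
# Illustrative only, not the Peano implementation.
import numpy as np

def langevin_step(x, v, dt, gamma, kT, m, rng):
    sigma = np.sqrt(2.0 * gamma * kT / m * dt)    # noise amplitude
    v = v - gamma * v * dt + sigma * rng.standard_normal(np.shape(v))
    return x + v * dt, v
```

With the noise switched off (kT = 0) the velocity simply relaxes geometrically under drag; with noise on, the long-time velocity variance settles near kT/m per component, i.e. the particle thermalises, which is the behaviour the adaptive fluctuating-hydrodynamics regions are meant to reproduce.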
Large-Signal Klystron Simulations Using KLSC
Carlsten, B. E.; Ferguson, P.
1997-05-01
We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.
Miyamoto, K.
2005-12-01
I investigate how the intensity and activity of mid-latitude cyclones change as a result of global warming, based on a time-slice experiment with a super-high-resolution Atmospheric General Circulation Model (20-km mesh TL959L60 MRI/JMA AGCM). The model was developed by the RR2002 project "Development of Super High Resolution Global and Regional Climate Models", funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology. In this context, I use a 10-year control simulation with the climatological SST and a 10-year time-slice global warming simulation using the SST anomalies derived from the SRES A1B scenario run with the MRI-CGCM2.3 (T42L30 atmosphere, 0.5-2.0 x 2.5 L23 ocean), corresponding to the end of the 21st century. I have analyzed the sea-level pressure field and the kinetic energy field of the wind at the 500 hPa pressure level associated with mid-latitude transients from October through April. A comparison of 10-day average fields between the present and future climates in the North Pacific shows statistically significant changes in a warmer climate for both the sea-level pressure and kinetic energy fields. In particular, from late winter through early spring, the sea-level pressure decreases over much of the Pacific, and the kinetic energy of the wind increases over the center of the basin. I therefore expect the Aleutian Low to persist about one month longer than at present. Hereafter, I plan to investigate what kinds of phenomena may accompany these changes in the mid-latitude transients.
Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows
Rahman, Mustafa M.
2017-01-05
We describe a framework for the large-eddy simulation of solid particles suspended and transported within an incompressible turbulent boundary layer (TBL). For the fluid phase, the large-eddy simulation (LES) of the incompressible turbulent boundary layer employs the stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed velocity field. Furthermore, a recycling method to generate turbulent inflow is implemented. For the particle phase, the direct quadrature method of moments (DQMOM) is chosen, in which the weights and abscissas of the quadrature approximation are tracked directly rather than the moments themselves. The numerical method in this framework is based on a fractional-step method with an energy-conservative fourth-order finite difference scheme on a staggered mesh. The code is parallelized using the standard message passing interface (MPI) protocol and is designed for distributed-memory machines. It is proposed to utilize this framework to examine the transport of particles in very large-scale simulations. The solver is validated using the well-known Taylor-Green vortex case. A large-scale sandstorm case is simulated, and the altitude variation of the number density, along with its fluctuations, is quantified.
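The Taylor-Green vortex is a convenient validation case because it has a closed-form decaying solution. The sketch below gives the 2-D benchmark fields (the paper's validation may use the 3-D variant); the kinetic energy must decay exactly as exp(-4νt).

```python
# 2-D Taylor-Green benchmark sketch: the exact solution keeps its spatial
# pattern and decays as exp(-2 nu t), so kinetic energy decays as
# exp(-4 nu t). Used here as a solver-validation reference.
import numpy as np

def taylor_green(X, Y, t, nu):
    decay = np.exp(-2.0 * nu * t)
    u = np.sin(X) * np.cos(Y) * decay
    v = -np.cos(X) * np.sin(Y) * decay
    return u, v

def kinetic_energy(u, v):
    return 0.5 * np.mean(u**2 + v**2)
```

Comparing a solver's computed energy decay against this analytic rate is a standard first check of both the spatial discretization and the time integrator.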
Streaming simplification of tetrahedral meshes.
Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T
2007-01-01
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
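Quadric-based simplification of the kind used in the in-core pass scores each candidate vertex position by a sum of squared distances to the planes of its incident elements. A minimal sketch of the quadric bookkeeping (illustrative, not the paper's streaming implementation):

```python
# Quadric error metric sketch: the fundamental quadric of a plane lets
# the squared point-plane distance be evaluated as p^T Q p, and quadrics
# of incident planes simply add. Illustrative, not the paper's code.
import numpy as np

def plane_quadric(n, d):
    # Plane n.x + d = 0 with n a unit normal; p = (x, y, z, 1).
    p = np.append(n, d)
    return np.outer(p, p)

def vertex_error(Q, v):
    p = np.append(v, 1.0)
    return p @ Q @ p

# Collapsing an edge sums the endpoint quadrics; the replacement vertex
# is chosen to minimise this summed error.
```

Because quadrics accumulate by simple addition, the error state per vertex is a single 4x4 matrix, which is what makes the metric cheap enough to apply sequentially to small in-core portions of a streaming mesh.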
Developments and validation of large eddy simulation of turbulent flows in an industrial code
International Nuclear Information System (INIS)
Ackermann, C.
2000-01-01
Large Eddy Simulation, in which the large scales of the flow are resolved and the subgrid scales are modelled, is well adapted to the study of turbulent flows in which geometry and/or heat transfer effects lead to unsteady phenomena. To obtain an improved numerical tool, simulations of elementary test cases, homogeneous isotropic turbulence and the turbulent plane channel, were performed on both structured and unstructured grids before moving to more complex geometries. This allowed the influence of the different physical and numerical parameters to be studied separately. On structured grids, the properties of the numerical methods relevant to our problem were identified, a new subgrid model was elaborated, and several laws of the wall were tested: for this discretization, our numerical tool is now validated. On unstructured grids, the construction of numerical methods with the same properties as on structured grids is harder, especially for the convection scheme: several numerical schemes were tested, and subgrid models and laws of the wall were adapted to unstructured grids. Simulations of the same elementary tests were performed: the results are relatively satisfactory, even if they are not as good as those obtained on structured grids, most probably because the chosen numerical methods cannot perfectly separate the effects of the convection scheme, the physical modelling, and the chosen mesh. This work is the first stage in the development of a practical Large Eddy Simulation tool for unstructured grids. (author) [fr]
Wollherr, Stephanie; Gabriel, Alice-Agnes; Uphoff, Carsten
2018-05-01
The dynamics and potential size of earthquakes depend crucially on rupture transfers between adjacent fault segments. To accurately describe earthquake source dynamics, numerical models can account for realistic fault geometries and rheologies, such as nonlinear inelastic processes off the slip interface. We present the implementation, verification, and application of off-fault Drucker-Prager plasticity in the open source software SeisSol (www.seissol.org). SeisSol is based on an arbitrary high-order derivative modal Discontinuous Galerkin (ADER-DG) method using unstructured tetrahedral meshes specifically suited for complex geometries. Two implementation approaches are detailed, modelling plastic failure either by employing sub-elemental quadrature points or by switching to nodal basis coefficients. At fine fault discretizations the nodal basis approach is up to 6 times more efficient in terms of computational cost while yielding comparable accuracy. Both methods are verified in community benchmark problems and by three-dimensional numerical h- and p-refinement studies with heterogeneous initial stresses. We observe no spectral convergence for on-fault quantities with respect to a given reference solution, but rather discuss a limitation to low-order convergence for heterogeneous 3D dynamic rupture problems. For simulations including plasticity, a high fault resolution may be less crucial than commonly assumed, due to the regularization of the peak slip rate and an increase of the minimum cohesive zone width. In large-scale dynamic rupture simulations based on the 1992 Landers earthquake, we observe high rupture complexity including reverse slip, direct branching, and dynamic triggering. The spatio-temporal distribution of rupture transfers is altered distinctively by plastic energy absorption, correlated with locations of geometrical fault complexity. Computational cost increases by 7% when accounting for off-fault plasticity in the demonstrating application. Our results
Energy Technology Data Exchange (ETDEWEB)
Lieberoth, J.
1975-06-15
The numerical solution of the neutron diffusion equation plays a very important role in the analysis of nuclear reactors. A wide variety of numerical procedures has been proposed; most of the frequently used methods are fundamentally based on the finite-difference approximation, in which the partial derivatives are replaced by finite differences. For complex geometries, typical of practical reactor problems, the computational accuracy of the finite-difference method is seriously affected by the size of the mesh width relative to the neutron diffusion length and by the heterogeneity of the medium. Thus, a very large number of mesh points is generally required to obtain a reasonably accurate approximate solution of the multi-dimensional diffusion equation. Since the computation time is approximately proportional to the number of mesh points, a detailed multidimensional analysis based on the conventional finite-difference method is still expensive even with modern large-scale computers. Accordingly, there is a strong incentive to develop alternatives that can reduce the number of mesh points and still retain accuracy. One of the promising alternatives is the finite element method, which consists of expanding the neutron flux in piecewise polynomials. One of the advantages of this procedure is its flexibility in selecting the locations of the mesh points and the degree of the expansion polynomial. The small number of mesh points of the coarse grid makes it possible to store the results of several of the last outer iterations and to compute well-extrapolated values from them with convenient formalisms. This holds especially if only one energy distribution of fission neutrons is assumed for all fission processes in the reactor, because the whole information of an outer iteration is contained in a field of fission rates whose size equals the number of mesh points of the coarse grid.
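The cost argument above (computation time proportional to the number of mesh points, accuracy limited by the mesh width) can be made concrete with a minimal one-dimensional sketch. The equation form is standard one-group diffusion with a fixed source; all coefficient values below are illustrative stand-ins, not taken from the report:

```python
import numpy as np

# Illustrative sketch: one-group, 1-D neutron diffusion with a fixed source,
#   -D u'' + sigma_a u = S,  u(0) = u(L) = 0,
# discretized by central finite differences on a uniform mesh.
# All physical values here are made up for demonstration.
D, sigma_a, S, L = 1.0, 0.1, 1.0, 10.0
n = 200                      # interior mesh points
h = L / (n + 1)              # mesh width

# Tridiagonal system A u = b from the three-point finite-difference stencil
A = np.diag(np.full(n, 2.0 * D / h**2 + sigma_a))
idx = np.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = -D / h**2
u = np.linalg.solve(A, np.full(n, S))
```

Halving h doubles the number of points in 1-D but multiplies it by four in 2-D and eight in 3-D, which is exactly the cost growth that motivates the coarse-grid finite element alternative discussed above.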
Proceedings of the meeting on large scale computer simulation research
International Nuclear Information System (INIS)
2004-04-01
The meeting to summarize the FY2003 collaboration activities on Large Scale Computer Simulation Research was held on January 15-16, 2004 at the Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies, and other related topics were presented. (author)
Large TileCal magnetic field simulation
International Nuclear Information System (INIS)
Nessi, M.; Bergsma, F.; Vorozhtsov, S.B.; Borisov, O.N.; Lomakina, O.V.; Karamysheva, G.A.; Budagov, Yu.A.
1994-01-01
The ATLAS magnetic field map has been estimated in the presence of the hadron tile calorimeter. This is an important issue in order to quantify the needs for individual PMT shielding, the effect on the scintillator light yield, and its implications for the calibration. The field source is based on a central solenoid and 8 superconducting air-core toroidal coils. The maximum induction value in the scintillating tiles does not exceed 6 mT. When an iron plate is used to close the open drawer window, the field inside the PMT near the extended barrel edge does not exceed 0.6 mT. An estimation of the ponderomotive force distribution acting on individual units of the system was performed. The Vector Fields (VF) electromagnetic software OPERA-TOSCA and the CERN POISCR code were used for the field simulation of the system. 10 refs., 4 figs
Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids
Angelidis, Dionysios; Sotiropoulos, Fotis
2014-11-01
Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids, adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.
Urogynecologic Surgical Mesh Implants
Large Scale Simulation Platform for NODES Validation Study
Energy Technology Data Exchange (ETDEWEB)
Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-04-27
This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator; it includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California, with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10,000 individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.
Surface meshing with curvature convergence
Li, Huibin; Zeng, Wei; Morvan, Jean-Marie; Chen, Liming; Gu, Xianfeng David
2014-01-01
Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generation. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.
Large eddy simulation of the generation and breakdown of a tumbling flow
International Nuclear Information System (INIS)
Toledo, Mauricio S.; Le Penven, Lionel; Buffat, Marc; Cadiou, Anne; Padilla, Judith
2007-01-01
Large eddy simulations (LES) are performed in order to reproduce the generation and the breakdown of a tumbling motion in the simplified model engine [Boree, J., Maurel, S., Bazile, R., 2002. Disruption of a compressed vortex. Phys. Fluids, 14 (7) 2543-2556]. A second-order accurate numerical scheme is applied in conjunction with a mixed finite volume/finite element formulation adapted for unstructured deforming meshes. Subgrid terms are kept as simple as possible with a Smagorinsky model in order to build a methodology devoted to engine-like flows. The main statistical quantities, such as mean velocity and turbulent kinetic energy, are obtained from a set of independent cycles and compared to experiments. Important experimental features, such as oscillations of the intake jet, vortex precession and a turbulent kinetic energy peak near the vortex core, are well reproduced
Spyropoulos, Evangelos T.; Holmes, Bayard S.
1997-01-01
The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM™ finite element solver is used for the analysis. The far-field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.
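The reported levels can be related to pressure amplitudes through the standard sound-pressure-level definition, a generic acoustics formula rather than anything specific to this paper; the 6.32 Pa rms value below is back-computed for illustration, not taken from the study:

```python
import math

P_REF = 20e-6  # standard reference pressure for airborne sound, Pa

def spl_db(p_rms):
    """Sound pressure level in dB relative to 20 micropascals."""
    return 20.0 * math.log10(p_rms / P_REF)

# An rms pressure of about 6.3 Pa corresponds to roughly 110 dB,
# the peak level reported 35 diameters from the cylinder.
level = spl_db(6.32)
```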
International Nuclear Information System (INIS)
Annalisa Manera; Horst-Michael Prasser; Dirk Lucas
2005-01-01
Full text of publication follows: A large number of experiments on vertical water-air flows in a large-diameter pipe has been carried out at the TOPFLOW facility (Forschungszentrum Rossendorf). The experiments cover a wide range of liquid and superficial gas velocities. The test section consists of a vertical pipe ∼194 mm in diameter and 8.5 m long. At a distance of 7.6 m from the air injection, two wire-mesh sensors are installed, mounted 63.3 mm apart. The wire-mesh sensors measure sequences of instantaneous two-dimensional gas-fraction distributions in the cross-section in which they are mounted, with a spatial resolution of 3 mm and a frequency of 2500 Hz. The total dimension of the matrix of measuring points for each mesh sensor is 64 x 64. In a central region of the measuring plane, where the void-fraction gradients are small, measuring points of the first wire-mesh sensor are individually cross-correlated in the time domain with measuring points belonging to the second wire-mesh sensor. The cross-correlation functions were calculated for pairs of points located exactly above each other as well as for points with a lateral distance. The lateral distance was varied from 0 to 48 mm (16 points), which is still within 50% of the pipe radius, i.e. in the region of small void-fraction gradients. The maximum of each of the 17 correlations is selected in order to derive a spatial correlation in the radial direction. The obtained spatial cross-correlations show a maximum at zero lateral distance and decrease with growing lateral shift. In a region without gradients, the lateral displacement of bubbles is dominated by turbulent diffusion. This gives the opportunity to derive bubble turbulent diffusion coefficients from the spreading of the spatial correlations. To this aim, the spatial correlations have first been corrected to take into account the finite spatial resolution of the sensor and the finite dimension of the bubbles. The
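The transit-time idea behind the paired wire-mesh sensors, locating the peak of the cross-correlation between the upstream and downstream signals, can be sketched on synthetic data. Only the 2500 Hz sampling frequency and the 63.3 mm sensor spacing come from the text; the signals and the 40-sample lag are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for signals from the two wire-mesh sensors: the
# downstream sensor sees the same fluctuation delayed by `true_lag` samples.
fs = 2500.0            # sampling frequency, Hz (from the text)
spacing = 0.0633       # axial sensor distance, m (from the text)
true_lag = 40          # transit time in samples (illustrative)

base = rng.standard_normal(5000)
upstream = base[true_lag:]
downstream = base[:-true_lag] + 0.3 * rng.standard_normal(5000 - true_lag)

# The offset of the cross-correlation peak is the transit time; the
# interfacial velocity then follows as spacing / delay.
xc = np.correlate(downstream, upstream, mode="full")
lag = int(np.argmax(xc)) - (len(upstream) - 1)
velocity = spacing * fs / lag   # m/s
```

The same peak-picking, repeated for laterally shifted point pairs, yields the spatial correlation whose spread the authors use to estimate bubble turbulent diffusion coefficients.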
Partitioning of unstructured meshes for load balancing
International Nuclear Information System (INIS)
Martin, O.C.; Otto, S.W.
1994-01-01
Many large-scale engineering and scientific calculations involve repeated updating of variables on an unstructured mesh. To do these types of computations on distributed-memory parallel computers, it is necessary to partition the mesh among the processors so that the load balance is maximized and inter-processor communication time is minimized. This can be approximated by the problem of partitioning a graph so as to obtain a minimum cut, a well-studied combinatorial optimization problem. Graph partitioning algorithms are discussed that give good but not necessarily optimum solutions. These algorithms include local search methods, recursive spectral bisection, and more general-purpose methods such as simulated annealing. It is shown that a general procedure enables simulated annealing to be combined with Kernighan-Lin. The resulting algorithm is both very fast and extremely effective. (authors) 23 refs., 3 figs., 1 tab
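A toy version of the annealing idea can be sketched as follows. This is a generic simulated-annealing bisection under assumed parameters (linear cooling, balance-preserving swap moves), not the authors' combined annealing/Kernighan-Lin algorithm:

```python
import math
import random

# Sketch of graph bisection by simulated annealing: swap one vertex from
# each side (so the partition stays balanced) and accept a worse cut with
# probability exp(-delta / T). Cooling schedule and parameters are illustrative.
def cut_size(edges, side):
    return sum(1 for u, v in edges if side[u] != side[v])

def anneal_bisect(n, edges, steps=20000, t0=2.0, seed=1):
    rng = random.Random(seed)
    side = [i % 2 for i in range(n)]            # balanced initial partition
    cur = best = cut_size(edges, side)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9       # linear cooling
        u = rng.choice([i for i in range(n) if side[i] == 0])
        v = rng.choice([i for i in range(n) if side[i] == 1])
        side[u], side[v] = 1, 0                 # trial swap
        new = cut_size(edges, side)
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            cur = new
            best = min(best, cur)
        else:
            side[u], side[v] = 0, 1             # reject: undo the swap
    return best

# Two 4-cliques joined by a single edge: the optimal balanced cut is 1.
edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
edges += [(a, b) for a in range(4, 8) for b in range(a + 1, 8)]
edges += [(3, 4)]
best_cut = anneal_bisect(8, edges)
```

In a production partitioner the O(|E|) cut recomputation per move would be replaced by incremental gain updates, which is essentially what the Kernighan-Lin machinery contributes to the hybrid described above.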
Energy Technology Data Exchange (ETDEWEB)
Flandrin, N.
2005-09-15
During the exploitation of an oil reservoir, it is important to predict the recovery of hydrocarbons and to optimize its production. A better comprehension of the physical phenomena requires simulating 3D multiphase flows in increasingly complex geological structures. In this thesis, we are interested in this spatial discretization and we propose to extend to 3D the 2D hybrid model proposed by IFP in 1998, which allows the radial characteristics of the flows to be taken into account directly in the geometry. In these hybrid meshes, the wells and their drainage areas are described by structured radial circular meshes, and the reservoirs are represented by structured meshes that can be a non-uniform Cartesian grid or a Corner Point Geometry grid. In order to generate a globally conforming mesh, unstructured transition meshes based on power diagrams and satisfying finite volume properties are used to connect the structured meshes together. Two methods have been implemented to generate these transition meshes: the first one is based on a Delaunay triangulation, the other one uses a frontal approach. Finally, some criteria are introduced to measure the quality of the transition meshes, and optimization procedures are proposed to increase this quality under finite volume property constraints. (author)
Large-Eddy Simulations of Reacting Liquid Spray
Lederlin, Thomas; Sanjose, Marlene; Gicquel, Laurent; Cuenot, Benedicte; Pitsch, Heinz; Poinsot, Thierry
2008-11-01
Numerical simulation, which is commonly used in many stages of aero-engine design, still has to demonstrate its predictive capability for two-phase reacting flows. This study is a collaboration between Stanford University and CERFACS to perform LES of a realistic spray combustor installed at ONERA, Toulouse. The experimental configuration is computed on the same unstructured mesh with two different solvers: Stanford's CDP code and CERFACS's AVBP code. CDP uses a low-Mach, variable-density solver with implicit time advancement. Droplets are tracked in a Lagrangian point-particle framework. The combustion model uses a flamelet approach, based on two transported scalars, mixture fraction and reaction progress variable. AVBP is a fully compressible solver with explicit time advancement. The liquid phase is described with an Eulerian method. The flame-turbulence interaction is modeled using a dynamically-thickened flame. Results are compared with experimental data for three regimes: purely gaseous non-reacting flow, non-reacting flow with evaporating droplets, and reacting flow with droplets. Both simulations show good agreement with experimental data and also stress the differences and relative advantages of the numerical methods.
Utilization of Large Cohesive Interface Elements for Delamination Simulation
DEFF Research Database (Denmark)
Bak, Brian Lau Verndal; Lund, Erik
2012-01-01
This paper describes the difficulties of utilizing large interface elements in delamination simulation. Solutions to increase the size of applicable interface elements are described, covering numerical integration of the element and modifications of the cohesive law.
Large Eddy Simulation of Vertical Axis Wind Turbine wakes; Part II: effects of inflow turbulence
Duponcheel, Matthieu; Chatelain, Philippe; Caprace, Denis-Gabriel; Winckelmans, Gregoire
2017-11-01
The aerodynamics of Vertical Axis Wind Turbines (VAWTs) is inherently unsteady, which leads to vorticity shedding mechanisms due to both the lift distribution along the blade and its time evolution. Large-scale, fine-resolution Large Eddy Simulations of the flow past Vertical Axis Wind Turbines have been performed using a state-of-the-art Vortex Particle-Mesh (VPM) method combined with immersed lifting lines. Inflow turbulence with a prescribed turbulence intensity (TI) is injected at the inlet of the simulation from a precomputed synthetic turbulence field obtained using the Mann algorithm. The wake of a standard, medium-solidity, H-shaped machine is simulated for several TI levels. The complex wake development is captured in detail and over long distances: from the blades to the near wake coherent vortices, then through the transitional ones to the fully developed turbulent far wake. Mean flow and turbulence statistics are computed over more than 10 diameters downstream of the machine. The sensitivity of the wake topology and decay to the TI level is assessed.
Large-Eddy Simulation of Internal Flow through Human Vocal Folds
Lasota, Martin; Šidlof, Petr
2018-06-01
The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are generated. For accuracy it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and the plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using a second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. Only subgrid-scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.
Large eddy simulation of premixed and non-premixed combustion
Malalasekera, W; Ibrahim, SS; Masri, AR; Sadasivuni, SK; Gubba, SR
2010-01-01
This paper summarises the authors' experience in using the Large Eddy Simulation (LES) technique for the modelling of premixed and non-premixed combustion. The paper describes the application of the LES-based combustion modelling technique to two well-defined experimental configurations where high quality data is available for validation. The large eddy simulation technique for the modelling of flow and turbulence is based on the solution of governing equations for continuity and momentum in a struct...
On asymptotically efficient simulation of large deviation probabilities.
Dieker, A.B.; Mandjes, M.R.H.
2005-01-01
Consider a family of probabilities for which the decay is governed by a large deviation principle. To find an estimate for a fixed member of this family, one is often forced to use simulation techniques. Direct Monte Carlo simulation, however, is often impractical, particularly if the
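The difficulty, and the standard importance-sampling remedy based on exponential tilting, can be illustrated on the textbook Gaussian tail example. This is a generic sketch of the technique, not the specific estimator analysed in the paper:

```python
import math
import random

# Toy illustration of efficient rare-event simulation: estimate P(X > a)
# for X ~ N(0, 1) by sampling from the exponentially tilted law N(a, 1)
# and reweighting each hit with the likelihood ratio exp(-a*y + a^2/2).
# Direct Monte Carlo would need vastly more samples for comparable
# relative accuracy, since exceedances of a = 4 are extremely rare.
def tilted_estimate(a, n=100_000, seed=2):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)              # sample under the tilted measure
        if y > a:
            total += math.exp(-a * y + a * a / 2.0)
    return total / n

p_hat = tilted_estimate(4.0)   # exact value: 1 - Phi(4) ≈ 3.17e-5
```

Under the tilted measure roughly half the samples land in the rare set, so the estimator's relative error stays controlled as a grows, which is the notion of asymptotic efficiency the abstract refers to.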
Chatelain, P.; Duponcheel, M.; Caprace, D.-G.; Marichal, Y.; Winckelmans, G.
2016-09-01
A Vortex Particle-Mesh (VPM) method with immersed lifting lines has been developed and validated. Based on the vorticity-velocity formulation of the Navier-Stokes equations, it combines the advantages of a particle method and of a mesh-based approach. The immersed lifting lines handle the creation of vorticity from the blade elements and its early development. LES of Vertical Axis Wind Turbine (VAWT) flows are performed. The complex wake development is captured in detail and over very long distances: from the blades to the near wake coherent vortices, then through the transitional ones to the fully developed turbulent far wake (beyond 10 rotor diameters). The statistics and topology of the mean flow are studied. The computational sizes also allow insights into the detailed unsteady vortex dynamics, including some unexpected topological flow features.
Research of Impact Load in Large Electrohydraulic Load Simulator
Directory of Open Access Journals (Sweden)
Yongguang Liu
2014-01-01
A strong impact load appears in the initial phase when the large electric cylinder is tested in the hardware-in-the-loop simulation. In this paper, a mathematical model is built in AMESim, and the cause of the impact load is investigated by analyzing the trends of the parameters in the simulation results. Methods for suppressing the impact load are presented according to the structural invariability principle and applied to the actual system. The final experimental result indicates that the impact load is suppressed, which provides a good experimental condition for the electric cylinder and advances the study of large load simulators.
Quality and Reliability of Large-Eddy Simulations
Meyers, Johan; Sagaut, Pierre
2008-01-01
Computational resources have developed to the level that, for the first time, it is becoming possible to apply large-eddy simulation (LES) to turbulent flow problems of realistic complexity. Many examples can be found in technology and in a variety of natural flows. This puts issues related to assessing, assuring, and predicting the quality of LES into the spotlight. Several LES studies have been published in the past, demonstrating a high level of accuracy with which turbulent flow predictions can be attained, without having to resort to the excessive requirements on computational resources imposed by direct numerical simulations. However, the setup and use of turbulent flow simulations requires a profound knowledge of fluid mechanics, numerical techniques, and the application under consideration. The susceptibility of large-eddy simulations to errors in modelling, in numerics, and in the treatment of boundary conditions, can be quite large due to nonlinear accumulation of different contributions over time, ...
Management of complications of mesh surgery.
Lee, Dominic; Zimmern, Philippe E
2015-07-01
Transvaginal placement of synthetic mid-urethral slings and vaginal meshes has largely superseded traditional tissue repairs in the current era because of presumed efficacy and ease of implantation with device 'kits'. The use of synthetic material has generated novel complications including mesh extrusion, pelvic and vaginal pain, and mesh contraction. In this review, our aim is to discuss the management, surgical techniques and outcomes associated with mesh removal. Recent publications have seen an increase in presentation of these mesh-related complications, and reports from multiple tertiary centers have suggested that not all patients benefit from surgical intervention. Although the true incidence of mesh complications is unknown, recent publications can serve to guide physicians and inform patients of the surgical outcomes from mesh-related complications. In addition, the literature highlights the growing need for a registry to account for a more accurate reporting of these events and to counsel patients on the risk and benefits before proceeding with mesh surgeries.
SIMON: Remote collaboration system based on large scale simulation
International Nuclear Information System (INIS)
Sugawara, Akihiro; Kishimoto, Yasuaki
2003-01-01
Development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate many physical phenomena of tokamak-type nuclear fusion plasma by simulation, and to exchange information and carry out joint research with scientists around the world via the internet. The main features of SIMON are as follows: 1) reduced simulation load through a trigger-sending method; 2) visualization of simulation results and a hierarchical structure of analysis; 3) a reduced number of licenses, since the software is invoked from the command line; 4) improved support for networked use of simulation data output through HTML (Hyper Text Markup Language); 5) avoidance of complex built-in work in the client part; and 6) small and portable software. The visualization method for large scale simulation, the remote collaboration system based on HTML, the trigger-sending method, the hierarchical analysis method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)
Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar
2017-11-01
A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.
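The refine-near-the-interface idea can be sketched with a minimal quadtree. The circular "interface", the depth limit, and the refinement criterion below are illustrative stand-ins, not the authors' implementation:

```python
import math

# Quadtree sketch: cells that overlap a circular solid-liquid "interface"
# are split recursively; the rest of the unit domain stays coarse, mimicking
# the adaptive-mesh strategy of refining where gradients are steepest.
class Cell:
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

    def refine(self, near_interface, max_depth):
        if self.depth < max_depth and near_interface(self):
            h = self.size / 2.0
            self.children = [Cell(self.x + i * h, self.y + j * h, h, self.depth + 1)
                             for i in (0, 1) for j in (0, 1)]
            for c in self.children:
                c.refine(near_interface, max_depth)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

def near_interface(cell):
    # True when the cell centre lies within half a cell diagonal of a
    # circle of radius 0.3 centred at (0.5, 0.5) — an illustrative interface.
    cx, cy = cell.x + cell.size / 2.0, cell.y + cell.size / 2.0
    return abs(math.hypot(cx - 0.5, cy - 0.5) - 0.3) <= cell.size * math.sqrt(2) / 2.0

root = Cell(0.0, 0.0, 1.0, 0)
root.refine(near_interface, max_depth=5)
leaves = root.leaves()
```

The leaf cells tile the domain exactly, with the smallest cells concentrated along the interface, so the explicit finite volume update spends most of its effort where the concentration gradients are largest; coarsening after the interface moves on is the reverse operation of the split shown here.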
Energy Technology Data Exchange (ETDEWEB)
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two dimensional and three dimensional surface geometries, and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
International Nuclear Information System (INIS)
Patil, Sunil; Tafti, Danesh
2012-01-01
Highlights: Large eddy simulation; wall layer modeling; synthetic inlet turbulence; swirl flows. Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near-wall region modeled with a zonal two-layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized coordinate system is presented and applied in the near-wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two-layer formulations in the literature and is most suitable for complex geometries involving body-fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, are investigated in turbulent channel flow, flow over a backward-facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall-resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.
New simulation capabilities of electron clouds in ion beams with large tune depression
International Nuclear Information System (INIS)
Vay, J.-L.; Furman, M.A.; Seidl, P.A.
2007-01-01
We have developed a new, comprehensive set of simulation tools aimed at modeling the interaction of intense ion beams and electron clouds (e-clouds). The set contains the 3-D accelerator PIC code WARP and the 2-D 'slice' e-cloud code POSINST [M. Furman, this workshop, paper TUAX05], as well as a merger of the two, augmented by new modules for impact ionization and neutral gas generation. The new capability runs on workstations or parallel supercomputers and contains advanced features such as mesh refinement, disparate adaptive time stepping, and a new 'drift-Lorentz' particle mover for tracking charged particles in magnetic fields using large time steps. It is being applied to the modeling of ion beams (1 MeV, 180 mA, K+) for heavy ion inertial fusion and warm dense matter studies, as they interact with electron clouds in the High-Current Experiment (HCX) [experimental results discussed by A. Molvik, this workshop, paper THAW02]. We describe the capabilities and present recent simulation results with detailed comparisons against the HCX experiment, as well as their application (in a different regime) to the modeling of e-clouds in the Large Hadron Collider (LHC). (author)
New simulation capabilities of electron clouds in ion beams with large tune depression
International Nuclear Information System (INIS)
Lawrence Livermore National Laboratory
2006-01-01
We have developed a new, comprehensive set of simulation tools aimed at modeling the interaction of intense ion beams and electron clouds (e-clouds). The set contains the 3-D accelerator PIC code WARP and the 2-D "slice" e-cloud code POSINST [M. Furman, this workshop, paper TUAX05], as well as a merger of the two, augmented by new modules for impact ionization and neutral gas generation. The new capability runs on workstations or parallel supercomputers and contains advanced features such as mesh refinement, disparate adaptive time stepping, and a new "drift-Lorentz" particle mover for tracking charged particles in magnetic fields using large time steps. It is being applied to the modeling of ion beams (1 MeV, 180 mA, K+) for heavy ion inertial fusion and warm dense matter studies, as they interact with electron clouds in the High-Current Experiment (HCX) [experimental results discussed by A. Molvik, this workshop, paper THAW02]. We describe the capabilities and present recent simulation results with detailed comparisons against the HCX experiment, as well as their application (in a different regime) to the modeling of e-clouds in the Large Hadron Collider (LHC)
New simulation capabilities of electron clouds in ion beams with large tune depression
International Nuclear Information System (INIS)
Vay, J.L.; Furman, M.A.; Seidl, P.A.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Kireeff-Covo, M.; Molvik, A.W.; Stoltz, P.H.; Veitzer, S.; Verboncoeur, J.P.
2006-01-01
The authors have developed a new, comprehensive set of simulation tools aimed at modeling the interaction of intense ion beams and electron clouds (e-clouds). The set contains the 3-D accelerator PIC code WARP and the 2-D "slice" e-cloud code POSINST, as well as a merger of the two, augmented by new modules for impact ionization and neutral gas generation. The new capability runs on workstations or parallel supercomputers and contains advanced features such as mesh refinement, disparate adaptive time stepping, and a new "drift-Lorentz" particle mover for tracking charged particles in magnetic fields using large time steps. It is being applied to the modeling of ion beams (1 MeV, 180 mA, K+) for heavy ion inertial fusion and warm dense matter studies, as they interact with electron clouds in the High-Current Experiment (HCX). They describe the capabilities and present recent simulation results with detailed comparisons against the HCX experiment, as well as their application (in a different regime) to the modeling of e-clouds in the Large Hadron Collider (LHC)
Remote collaboration system based on large scale simulation
International Nuclear Information System (INIS)
Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.
2008-01-01
Large scale simulation using supercomputers, which generally requires long CPU time and produces large amounts of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of complex phenomena that can hardly be identified by conventional theoretical and experimental approaches alone. To assist large simulation studies in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on a client-server architecture and introduces the idea of update processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method that transmits requests for update processing from the simulation (client) running on a supercomputer to a workstation (server). That is, the simulation running on the supercomputer actively controls the timing of update processing. The server, having received requests from the ongoing simulation (data transfer, data analyses, visualizations, etc.), starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project of laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another
Believability in simplifications of large scale physically based simulation
Han, Donghui; Hsu, Shu-wei; McNamara, Ann; Keyser, John
2013-01-01
We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large-scale rigid body simulation, viewers may not be able to perceive the distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect visual plausibility. The visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the validity of the hypotheses under certain simulation environments. However, our analysis identified four factors which may affect the validity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the simulated scene to the camera position, and the simulation method used. We also try to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.
Large-scale computing techniques for complex system simulations
Dubitzky, Werner; Schott, Bernard
2012-01-01
Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and
Streaming Compression of Hexahedral Meshes
Energy Technology Data Exchange (ETDEWEB)
Isenburg, M; Courbet, C
2010-02-03
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes, even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
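The memory-management idea behind finalization tags can be illustrated with a minimal stream consumer. This is not the authors' coder: the record layout (`'v'` vertex, `'h'` hexahedron, `'f'` finalize) is an assumed encoding, and no actual compression is performed; the sketch only shows how releasing finalized vertices bounds resident memory.

```python
# Illustrative sketch of streaming finalization: vertex data stays in memory
# only until a finalization tag marks its last reference. Record names assumed.

def stream_process(stream):
    """Consume ('v', id, coords), ('h', ids), ('f', id) records in order.

    'v' adds a vertex, 'h' references the eight vertices of a hexahedron,
    'f' finalizes a vertex (it will never be referenced again).
    Returns the peak number of vertices resident in memory.
    """
    resident = {}
    peak = 0
    for rec in stream:
        kind = rec[0]
        if kind == 'v':
            _, vid, coords = rec
            resident[vid] = coords
        elif kind == 'h':
            for vid in rec[1]:
                # a compressor would encode the cell against these vertices
                assert vid in resident
        elif kind == 'f':
            del resident[rec[1]]  # release finalized vertex data for reuse
        peak = max(peak, len(resident))
    return peak
```

With two hexahedra sharing four vertices and prompt finalization, only eight of the twelve vertices are ever resident at once, which is the effect the abstract reports at much larger scale.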
Numerical Investigation of Corrugated Wire Mesh Laminate
Directory of Open Access Journals (Sweden)
Jeongho Choi
2013-01-01
The aim of this work is to develop a numerical model of Corrugated Wire Mesh Laminate (CWML) capturing all its complexities, such as nonlinear material properties, nonlinear geometry and large-deformation behaviour, and frictional behaviour. Such a model facilitates numerical simulation of the mechanical behaviour of the wire mesh structure under various types of loading, as well as variation of the CWML configuration parameters to tailor its mechanical properties to the intended application. Starting with a single-strand truss model consisting of four waves, with a bilinear stress-strain model to represent the plastic behaviour of stainless steel, the finite element model is gradually built up to study single-layer structures consisting of 18 strands of corrugated wire mesh and double- and quadruple-layered laminates with alternating cross-ply orientations. The compressive behaviour of the CWML model is simulated using contact elements to model friction and is compared to the load-deflection behaviour determined experimentally in uniaxial compression tests. The numerical model of the CWML is then employed to establish the upper and lower bounds of stiffness and load capacity achievable by such structures.
Time simulation of flutter with large stiffness changes
Karpel, Mordechay; Wieseman, Carol D.
1992-01-01
Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
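The core trick above, keeping the state-space model the same size and swapping only the coupling terms when the structural change occurs, can be shown on a deliberately tiny stand-in. The single-degree-of-freedom oscillator, the explicit Euler integrator, and all parameter values below are assumptions for illustration; the paper's model is a multi-mode aeroelastic system with aerodynamic states.

```python
import numpy as np

# Minimal sketch (not the paper's aeroelastic model): integrate a 1-DOF
# oscillator whose stiffness coupling term jumps from k1 to k2 at t_switch,
# while the state-space dimension stays fixed.

def simulate(k1, k2, t_switch, dt=1e-3, t_end=2.0, m=1.0, c=0.1):
    """Return the state [displacement, velocity] at t_end."""
    def A(k):
        # state matrix of x'' = -(c/m) x' - (k/m) x
        return np.array([[0.0, 1.0], [-k / m, -c / m]])
    x = np.array([1.0, 0.0])  # initial displacement, velocity
    t = 0.0
    while t < t_end:
        k = k1 if t < t_switch else k2   # swap only the stiffness term
        x = x + dt * A(k) @ x            # explicit Euler step (illustrative)
        t += dt
    return x
```

Running the same integrator with and without the mid-simulation stiffness change yields different trajectories, with no re-dimensioning of the model at the switch.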
Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny
2018-02-01
A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
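The two numerical ingredients named above, a Fourier collocation derivative evaluated with the FFT and an explicit third-order Runge-Kutta time integration, can be illustrated on 1-D periodic advection. This is a stand-in problem, not the flowmeter model; the domain, resolution, and the SSP variant of RK3 are assumptions.

```python
import numpy as np

# Fourier collocation derivative + explicit third-order (SSP) Runge-Kutta,
# applied to u_t + a u_x = 0 on a periodic domain. Illustrative only.

def spectral_derivative(u, L):
    """d/dx of periodic samples u on a domain of length L, via FFT."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def rk3_step(u, dt, rhs):
    """One explicit third-order (SSP) Runge-Kutta step."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

L, n, a = 2.0 * np.pi, 64, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x)
rhs = lambda v: -a * spectral_derivative(v, L)
dt, steps = 1e-2, 100
for _ in range(steps):
    u = rk3_step(u, dt, rhs)
# after t = 1, the exact solution is sin(x - a*t)
```

For a smooth periodic field the spectral derivative is exact to machine precision, so the remaining error is set by the third-order time integration, which is why this combination suits wave-propagation problems like the one in the abstract.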
Large Atmospheric Computation on the Earth Simulator: The LACES Project
Directory of Open Access Journals (Sweden)
Michel Desgagné
2006-01-01
The Large Atmospheric Computation on the Earth Simulator (LACES project is a joint initiative between Canadian and Japanese meteorological services and academic institutions that focuses on the high resolution simulation of Hurricane Earl (1998. The unique aspect of this effort is the extent of the computational domain, which covers all of North America and Europe with a grid spacing of 1 km. The Canadian Mesoscale Compressible Community (MC2 model is shown to parallelize effectively on the Japanese Earth Simulator (ES supercomputer; however, even using the extensive computing resources of the ES Center (ESC, the full simulation for the majority of Hurricane Earl's lifecycle takes over eight days to perform and produces over 5.2 TB of raw data. Preliminary diagnostics show that the results of the LACES simulation for the tropical stage of Hurricane Earl's lifecycle compare well with available observations for the storm. Further studies involving advanced diagnostics have commenced, taking advantage of the uniquely large spatial extent of the high resolution LACES simulation to investigate multiscale interactions in the hurricane and its environment. It is hoped that these studies will enhance our understanding of processes occurring within the hurricane and between the hurricane and its planetary-scale environment.
Large Scale Simulations of the Euler Equations on GPU Clusters
Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horváth, Zoltán
2010-01-01
The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one
Large Eddy Simulation of Sydney Swirl Non-Reaction Jets
DEFF Research Database (Denmark)
Yang, Yang; Kær, Søren Knudsen; Yin, Chungen
The Sydney swirl burner non-reacting case was studied using large eddy simulation. The two-point correlation method was introduced and used to estimate grid resolution. Energy spectra and instantaneous pressure and velocity plots were used to identify features in the flow field. Using these methods, vortex breakdown and the precessing vortex core are identified and different flow zones are shown.
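The two-point correlation criterion for grid resolution can be sketched as follows: from velocity samples along a line, estimate the autocorrelation and an integral length scale, then count how many cells span that scale. The synthetic AR(1) signal standing in for LES velocity data, and the "cells per integral scale" reading of the criterion, are assumptions for illustration.

```python
import numpy as np

# Sketch of the two-point correlation grid-resolution check. Illustrative:
# a correlated synthetic signal replaces actual LES velocity samples.

def two_point_correlation(u, max_lag):
    """Normalized autocorrelation of samples u for lags 0..max_lag-1."""
    up = u - u.mean()
    var = np.mean(up * up)
    return np.array([np.mean(up[:u.size - r] * up[r:]) / var
                     for r in range(max_lag)])

def integral_length_scale(u, dx, max_lag):
    """Sum the correlation up to its first zero crossing."""
    rho = two_point_correlation(u, max_lag)
    zero = np.argmax(rho <= 0.0) or max_lag  # first non-positive lag
    return dx * float(np.sum(rho[:zero]))

# synthetic correlated signal with a known scale: AR(1), L ~ dx/(1-alpha)
rng = np.random.default_rng(0)
dx, n, alpha = 0.01, 20000, 0.95
noise = rng.standard_normal(n)
u = np.empty(n)
u[0] = noise[0]
for i in range(1, n):
    u[i] = alpha * u[i - 1] + noise[i]

L11 = integral_length_scale(u, dx, 500)
cells_per_scale = L11 / dx  # e.g. require several cells per integral scale
```

A mesh is then judged adequate where `cells_per_scale` exceeds some threshold; here the recovered scale is close to the analytic value dx/(1-alpha) = 0.2.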
Large interface simulation in an averaged two-fluid code
International Nuclear Information System (INIS)
Henriques, A.
2006-01-01
Different ranges of interface and eddy sizes are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to distinguish Large Interface (LI) simulation from Small Interface (SI) modelling. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms relies on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop an LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibited regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr]
Large-eddy simulation of highly underexpanded transient gas jets
Vuorinen, V.; Yu, J.; Tirunagari, S.; Kaario, O.; Larmi, M.; Duwig, C.; Boersma, B.J.
2013-01-01
Large-eddy simulations (LES) based on scale-selective implicit filtering are carried out in order to study the effect of nozzle pressure ratios on the characteristics of highly underexpanded jets. Pressure ratios ranging from 4.5 to 8.5 with Reynolds numbers of the order 75 000–140 000 are
Large signal simulation of photonic crystal Fano laser
DEFF Research Database (Denmark)
Zali, Aref Rasoulzadeh; Yu, Yi; Moravvej-Farshi, Mohammad Kazem
2017-01-01
be modulated at frequencies exceeding 1 THz which is much higher than its corresponding relaxation oscillation frequency. Large signal simulation of the Fano laser is also investigated based on pseudorandom bit sequence at 0.5 Tbit/s. It shows eye patterns are open at such high modulation frequency, verifying...
Large eddy simulations of an airfoil in turbulent inflow
DEFF Research Database (Denmark)
Gilling, Lasse; Sørensen, Niels N.
2008-01-01
Wind turbines operate in the turbulent boundary layer of the atmosphere and due to the rotational sampling effect the blades experience a high level of turbulence [1]. In this project the effect of turbulence is investigated by large eddy simulations of the turbulent flow past a NACA 0015 airfoil...
Wang, Xinheng
2008-01-01
Wireless telemedicine using GSM and GPRS technologies can only provide low bandwidth connections, which makes it difficult to transmit images and video. Satellite or 3G wireless transmission provides greater bandwidth, but the running costs are high. Wireless local area networks (WLANs) appear promising, since they can supply high bandwidth at low cost. However, WLAN technology has limitations, such as coverage. A new wireless networking technology named the wireless mesh network (WMN) overcomes some of the limitations of the WLAN. A WMN combines the characteristics of both a WLAN and ad hoc networks, thus forming an intelligent, large scale and broadband wireless network. These features are attractive for telemedicine and telecare because of the ability to provide data, voice and video communications over a large area. One successful wireless telemedicine project which uses wireless mesh technology is the Emergency Room Link (ER-LINK) in Tucson, Arizona, USA. There are three key characteristics of a WMN: self-organization, including self-management and self-healing; dynamic changes in network topology; and scalability. What we may now see is a shift from mobile communication and satellite systems for wireless telemedicine to the use of wireless networks based on mesh technology, since the latter are very attractive in terms of cost, reliability and speed.
Large scale particle simulations in a virtual memory computer
International Nuclear Information System (INIS)
Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.
1983-01-01
Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
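The locality idea above, keeping physically adjacent particles near each other in the particle arrays, is commonly realized as a bin sort by cell index. The sketch below is illustrative (the 2-D geometry, array layout, and stable `argsort` choice are assumptions, not details from the paper), but it shows the reordering that turns scattered memory access during charge accumulation into sequential access.

```python
import numpy as np

# Sketch: reorder particle arrays so that particles in the same grid cell
# are contiguous in memory. Illustrative, not the paper's implementation.

def cell_index(x, y, dx, nx):
    """Flattened grid-cell index of each particle on an nx-wide cell grid."""
    return (y // dx).astype(np.int64) * nx + (x // dx).astype(np.int64)

def sort_particles(x, y, dx, nx):
    """Return particle arrays reordered by cell index (stable sort)."""
    order = np.argsort(cell_index(x, y, dx, nx), kind='stable')
    return x[order], y[order]

rng = np.random.default_rng(1)
nx, dx = 16, 1.0 / 16
x = rng.random(1000)
y = rng.random(1000)
xs, ys = sort_particles(x, y, dx, nx)
# after sorting, the cell-index sequence is non-decreasing, so the deposit
# loop touches each cell's memory once instead of jumping across the grid
```

Because the sort only needs to be approximate for locality, it can be applied every few time steps, which is the "nominal amount of sorting" trade-off the abstract describes.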
Real-time simulation of large-scale floods
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
Development and verification of unstructured adaptive mesh technique with edge compatibility
International Nuclear Information System (INIS)
Ito, Kei; Ohshima, Hiroyuki; Kunugi, Tomoaki
2010-01-01
In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is the suppression of gas entrainment (GE) phenomena at the gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish an accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of the GE phenomena in JSFR: an unstructured adaptive mesh technique which can dynamically apply fine cells to the region where GE occurs. In this paper, as a part of that development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the two-dimensional unstructured adaptive mesh technique succeeds in providing a high-precision solution, even though a poor-quality, distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on a structured mesh with a larger number of cells. (author)
How to simulate global cosmic strings with large string tension
Energy Technology Data Exchange (ETDEWEB)
Klaer, Vincent B.; Moore, Guy D., E-mail: vklaer@theorie.ikp.physik.tu-darmstadt.de, E-mail: guy.moore@physik.tu-darmstadt.de [Institut für Kernphysik, Technische Universität Darmstadt, Schlossgartenstraße 2, Darmstadt, D-64289 Germany (Germany)
2017-10-01
Global string networks may be relevant in axion production in the early Universe, as well as other cosmological scenarios. Such networks contain a large hierarchy of scales between the string core scale and the Hubble scale, ln(f_a/H) ∼ 70, which influences the network dynamics by giving the strings large tensions T ≃ π f_a² ln(f_a/H). We present a new numerical approach to simulate such global string networks, capturing the tension without an exponentially large lattice.
Computational mesh generation for vascular structures with deformable surfaces
International Nuclear Information System (INIS)
Putter, S. de; Laffargue, F.; Breeuwer, M.; Vosse, F.N. van de; Gerritsen, F.A.; Philips Medical Systems, Best
2006-01-01
Computational blood flow and vessel wall mechanics simulations for vascular structures are becoming an important research tool for patient-specific surgical planning and intervention. An important step in the modelling process for patient-specific simulations is the creation of the computational mesh based on the segmented geometry. Most known solutions either require a large amount of manual processing or lead to a substantial difference between the segmented object and the actual computational domain. We have developed a chain of algorithms that lead to a closely related implementation of image segmentation with deformable models and 3D mesh generation. The resulting processing chain is very robust and leads both to an accurate geometrical representation of the vascular structure as well as high quality computational meshes. The chain of algorithms has been tested on a wide variety of shapes. A benchmark comparison of our mesh generation application with five other available meshing applications clearly indicates that the new approach outperforms the existing methods in the majority of cases. (orig.)
Li, Zhengdao; Zhou, Yong; Bao, Chunxiong; Xue, Guogang; Zhang, Jiyuan; Liu, Jianguo; Yu, Tao; Zou, Zhigang
2012-06-07
Zn₂SnO₄ nanowire arrays were for the first time grown onto a stainless steel mesh (SSM) in a binary ethylenediamine (En)/water solvent system using a solvothermal route. The morphology evolution following this reaction was carefully followed to understand the formation mechanism. The SSM-supported Zn₂SnO₄ nanowire was utilized as a photoanode for fabrication of large-area (10 cm × 5 cm size as a typical sample), flexible dye-sensitized solar cells (DSSCs). The synthesized Zn₂SnO₄ nanowires exhibit great bendability and flexibility, proving potential advantage over other metal oxide nanowires such as TiO₂, ZnO, and SnO₂ for application in flexible solar cells. Relative to the analogous Zn₂SnO₄ nanoparticle-based flexible DSSCs, the nanowire geometry proves to enhance solar energy conversion efficiency through enhancement of electron transport. The bendable nature of the DSSCs without obvious degradation of efficiency and facile scale up gives the as-made flexible solar cell device potential for practical application.
Large-eddy simulation of atmospheric flow over complex terrain
DEFF Research Database (Denmark)
Bechmann, Andreas
2007-01-01
The present report describes the development and validation of a turbulence model designed for atmospheric flows based on the concept of Large-Eddy Simulation (LES). The background for the work is the high Reynolds number k - #epsilon# model, which has been implemented on a finite-volume code...... turbulence model is able to handle both engineering and atmospheric flows and can be run in both RANS or LES mode. For LES simulations a time-dependent wind field that accurately represents the turbulent structures of a wind environment must be prescribed at the computational inlet. A method is implemented...... where the turbulent wind field from a separate LES simulation can be used as inflow. To avoid numerical dissipation of turbulence special care is paid to the numerical method, e.g. the turbulence model is calibrated with the specific numerical scheme used. This is done by simulating decaying isotropic...
Large eddy simulation of flows in industrial compressors: a path from 2015 to 2035
Gourdain, N.; Sicot, F.; Duchaine, F.; Gicquel, L.
2014-01-01
A better understanding of turbulent unsteady flows is a necessary step towards a breakthrough in the design of modern compressors. Owing to high Reynolds numbers and very complex geometry, the flow that develops in such industrial machines is extremely hard to predict. At this time, the most popular method to simulate these flows is still based on a Reynolds-averaged Navier–Stokes approach. However, there is some evidence that this formalism is not accurate for these components, especially when a description of time-dependent turbulent flows is desired. With the increase in computing power, large eddy simulation (LES) emerges as a promising technique to improve both knowledge of complex physics and reliability of flow solver predictions. The objective of the paper is thus to give an overview of the current status of LES for industrial compressor flows as well as to propose future research axes regarding the use of LES for compressor design. While the use of wall-resolved LES for industrial multistage compressors at realistic Reynolds number should not be ready before 2035, some possibilities exist to reduce the cost of LES, such as wall modelling and the adaptation of the phase-lag condition. This paper also points out the necessity to combine LES to techniques able to tackle complex geometries. Indeed LES alone, i.e. without prior knowledge of such flows for grid construction or the prohibitive yet ideal use of fully homogeneous meshes to predict compressor flows, is quite limited today. PMID:25024422
Large-Eddy Simulations of Flows in Complex Terrain
Kosovic, B.; Lundquist, K. A.
2011-12-01
Large-eddy simulation (LES) as a methodology for the numerical simulation of turbulent flows was first developed to study turbulent flows in the atmosphere by Lilly (1967). The first LES were carried out by Deardorff (1970), who used these simulations to study atmospheric boundary layers. Ever since, LES has been extensively used to study canonical atmospheric boundary layers, in most cases flat-plate boundary layers under the assumption of horizontal homogeneity. Carefully designed LES of canonical convective, neutrally stratified, and more recently stably stratified atmospheric boundary layers have contributed significantly to the development of a better understanding of these flows and their parameterizations in large-scale models. These simulations were often carried out using codes specifically designed and developed for large-eddy simulations of horizontally homogeneous flows with periodic lateral boundary conditions. Recent developments in multi-scale numerical simulations of atmospheric flows enable numerical weather prediction (NWP) codes such as ARPS (Chow and Street, 2009), COAMPS (Golaz et al., 2009) and the Weather Research and Forecasting (WRF) model to be used nearly seamlessly across a wide range of atmospheric scales, from synoptic down to turbulent scales in atmospheric boundary layers. Before we can with confidence carry out multi-scale simulations of atmospheric flows, NWP codes must be validated for accurate performance in simulating flows over complex or inhomogeneous terrain. We therefore carry out validation of WRF-LES for simulations of flows over complex terrain using data from the Askervein Hill (Taylor and Teunissen, 1985, 1987) and METCRAX (Whiteman et al., 2008) field experiments. WRF's nesting capability is employed with a one-way nested inner domain that includes complex terrain representation, while the coarser outer nest is used to spin up fully developed atmospheric boundary layer turbulence and thus represent accurately the inflow to the inner domain. LES of a
International Nuclear Information System (INIS)
Vay, J.-L.; Friedman, A.; Grote, D.P.
2002-01-01
The numerical simulation of the driving beams in a heavy-ion fusion power plant is a challenging task, and, despite rapid progress in computer power, one must consider the use of the most advanced numerical techniques. One of the difficulties of these simulations resides in the disparity of scales, in time and in space, that must be resolved. When these disparities are confined to distinct zones of the simulation region, a method that has proven effective in other areas (e.g. fluid dynamics simulations) is the Adaptive Mesh Refinement (AMR) technique. In this article we follow the progress accomplished in the last few months in merging the AMR technique with the Particle-In-Cell (PIC) method. This includes a detailed modeling of the Lampel-Tiefenback solution for the one-dimensional diode, using novel techniques to suppress undesirable numerical oscillations and an AMR patch to follow the head of the particle distribution. We also report new results concerning the modeling of ion sources using the axisymmetric WARPRZ-AMR prototype, showing the utility of an AMR patch resolving the emitter vicinity and the beam edge
Investigation of Numerical Dissipation in Classical and Implicit Large Eddy Simulations
Directory of Open Access Journals (Sweden)
Moutassem El Rafei
2017-12-01
Full Text Available The quantitative measure of the dissipative properties of different numerical schemes is crucial to computational methods in the field of aerospace applications. The objective of the present study is therefore to examine the resolving power of the Monotonic Upwind Scheme for Conservation Laws (MUSCL) with three different slope limiters: one second-order and two third-order, used within the framework of Implicit Large Eddy Simulation (ILES). The performance of the dynamic Smagorinsky subgrid-scale model used in the classical Large Eddy Simulation (LES) approach is also examined. The assessment of these schemes is of significant importance for understanding the numerical dissipation that can affect the accuracy of the numerical solution. A modified equation analysis has been applied to the convective term of the fully compressible Navier–Stokes equations to formulate an analytical expression of the truncation error for the second-order upwind scheme. The contribution of second-order partial derivatives to the truncation error showed that the effect of this numerical error cannot be neglected compared to the total kinetic energy dissipation rate. Transitions from laminar to turbulent flow are visualized using the inviscid Taylor–Green Vortex (TGV) test case. The evolution in time of the volumetrically averaged kinetic energy and kinetic energy dissipation rate has been monitored for all numerical schemes and all grid levels. The dissipation mechanism has been compared to Direct Numerical Simulation (DNS) data found in the literature at different Reynolds numbers. We found that the resolving power and the symmetry-breaking property are enhanced with finer grid resolutions. The production of vorticity has been observed in terms of enstrophy and effective viscosity. The instantaneous kinetic energy spectrum has been computed using a three-dimensional Fast Fourier Transform (FFT). All combinations of numerical methods produce a k⁻⁴ spectrum.
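The MUSCL-type reconstruction examined above can be illustrated with a minimal sketch: a second-order, limited reconstruction of left and right interface states on a periodic 1D grid. The function names and the choice of the minmod limiter are illustrative assumptions, not the specific second- and third-order limiters studied in the paper.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: picks the smaller-magnitude slope, 0 at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """Second-order MUSCL reconstruction of left/right states at cell faces.

    u: 1D array of cell averages (periodic domain assumed).
    Returns (uL, uR), the states on either side of each interface i+1/2.
    """
    du_minus = u - np.roll(u, 1)       # backward differences
    du_plus = np.roll(u, -1) - u       # forward differences
    slope = minmod(du_minus, du_plus)  # limited slope per cell
    uL = u + 0.5 * slope                              # left state at face i+1/2
    uR = np.roll(u, -1) - 0.5 * np.roll(slope, -1)    # right state at face i+1/2
    return uL, uR
```

The limiter reduces the scheme to first order at local extrema, which is exactly the locally added numerical dissipation that a modified equation analysis quantifies.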
VARIABLE MESH STIFFNESS OF SPUR GEAR TEETH USING ...
African Journals Online (AJOL)
gear engagement. A gear mesh kinematic simulation ... model is appropriate for VMS of a spur gear tooth. The assumptions for ... This process has been continued until one complete tooth meshing cycle is ... Element Method. Using MATLAB, ...
Unstructured mesh adaptivity for urban flooding modelling
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying fronts while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes are placed around buildings and in steep regions when the flooding water reaches them. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results at a low computational cost.
Large-eddy simulation of sand dune morphodynamics
Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team
2015-11-01
Sand dunes are natural features that form under complex interaction between turbulent flow and bed morphodynamics. We employ a fully-coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate the initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate suspended-sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and covered about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grant EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.
Large eddy simulation of a wing-body junction flow
Ryu, Sungmin; Emory, Michael; Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca
2014-11-01
We present numerical simulations of the wing-body junction flow experimentally investigated by Devenport & Simpson (1990). Wall-junction flows are common in engineering applications, but the relevant flow physics close to the corner region is not well understood. Moreover, the performance of turbulence models for the body-junction case is not well characterized. Motivated by these insufficient investigations, we have numerically investigated the case with Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) approaches. The Vreman model, applied for the LES, and the SST k-ω model, used for the RANS simulation, are validated with a focus on the ability to predict turbulence statistics near the junction region. Moreover, a sensitivity study of the form of the Vreman model will also be presented. This work is funded under NASA Cooperative Agreement NNX11AI41A (Technical Monitor Dr. Stephen Woodruff)
Lightweight computational steering of very large scale molecular dynamics simulations
International Nuclear Information System (INIS)
Beazley, D.M.
1996-01-01
We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages
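The steering idea described above, controlling a running simulation from a scripting layer without stopping it, can be sketched with a toy command queue. This is a hypothetical interface invented for illustration, not the actual Tcl/Perl/Python tooling the authors built around their C simulation code.

```python
import queue

class SteeredSimulation:
    """Toy time-stepping loop that accepts steering commands while running."""

    def __init__(self, dt=0.01):
        self.dt = dt
        self.step = 0
        self.commands = queue.Queue()  # thread-safe: a GUI/script can feed it
        self.running = True

    def send(self, name, value=None):
        """Queue a steering command (e.g. from an interactive interpreter)."""
        self.commands.put((name, value))

    def _apply_pending(self):
        while not self.commands.empty():
            name, value = self.commands.get()
            if name == "set_dt":
                self.dt = value       # change the timestep on the fly
            elif name == "stop":
                self.running = False  # graceful shutdown request

    def run(self, max_steps=1000):
        while self.running and self.step < max_steps:
            self._apply_pending()
            self.step += 1  # placeholder for a real force/integrate step
```

In a real system the loop body would call into compiled C routines, with the scripting layer only inspecting and adjusting parameters between steps.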
Large Eddy Simulations of Severe Convection Induced Turbulence
Ahmad, Nash'at; Proctor, Fred
2011-01-01
Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic due to the re-routing of flights, and by disrupting operations at the airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection induced turbulence is analyzed from these simulations. The validation of model results with the radar data and other observations is reported and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.
Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure
Oefelein, Joseph C.
2002-01-01
This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.
Experimental simulation of microinteractions in large scale explosions
Energy Technology Data Exchange (ETDEWEB)
Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety
1998-01-01
This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high-temperature molten steel drop under high-pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of the fragmentation rate. (author)
Simulation requirements for the Large Deployable Reflector (LDR)
Soosaar, K.
1984-01-01
Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often of the transfer-function variety. However, transfer functions are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths, characterized by multi-input, multi-output features. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because a dedicated computer is needed for the high-frequency, multi-degree-of-freedom components encountered, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct simulation loop. The following blocks make up the simulation. The thermal-model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady-state block assembles data into equations of motion and dynamics. A differential ray trace is obtained to establish the change in wave aberrations. The observation scene is described. The focal-plane module converts the photon intensity impinging on it into electron streams or into permanent film records.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10⁴, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = −0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of over-damped LES for fast explorations of the parameter space where large-scale structures are found.
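The Smagorinsky closure with constant cs used in the benchmark above computes an eddy viscosity from the resolved strain rate, ν_t = (cs Δ)² |S| with |S| = √(2 S_ij S_ij). A minimal sketch follows; the 2D restriction and the argument names are simplifying assumptions for illustration.

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.1):
    """Smagorinsky eddy viscosity nu_t = (cs*delta)^2 * |S| on a 2D field.

    dudx..dvdy: resolved velocity gradients (scalars or arrays);
    delta: filter width (typically the grid spacing); cs: model constant.
    """
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)          # symmetric part of the gradient
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag
```

An "over-damped" LES in the sense above simply raises cs well beyond 0.1, increasing the subgrid dissipation everywhere; a dynamic model instead computes cs locally from a test filter.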
Basic Algorithms for the Asynchronous Reconfigurable Mesh
Directory of Open Access Journals (Sweden)
Yosi Ben-Asher
2002-01-01
Full Text Available Many constant-time algorithms for various problems have been developed for the reconfigurable mesh (RM) in the past decade. All these algorithms are designed to work with synchronous execution, with no regard for the fact that large RMs will probably be asynchronous. A similar observation about the PRAM model motivated many researchers to develop algorithms and complexity measures for the asynchronous PRAM (APRAM). In this work, we show how to define the asynchronous reconfigurable mesh (ARM) and how to measure the complexity of asynchronous algorithms executed on it. We show that connecting all processors in a row of an n×n ARM (the analog of barrier synchronization in the APRAM model) can be solved with complexity Θ(n log n). Intuitively, this is the average work time for solving such a problem. Next, we describe a general technique for simulating T-step synchronous RM algorithms on the ARM with complexity Θ(T⋅n² log n). Finally, we consider the simulation of the classical synchronous algorithm for counting the number of non-zero bits in an n-bit vector using (k
International Nuclear Information System (INIS)
D’Alessandro, Delfo; Danti, Serena; Pertici, Gianni; Moscato, Stefania; Metelli, Maria Rita; Petrini, Mario; Danti, Sabrina; Berrettini, Stefano; Nesti, Claudia
2014-01-01
In this study, we performed a complete histologic analysis of constructs based on large diameter ( > 100 μm) poly-L-lactic acid (PLLA) microfibers obtained via dry-wet spinning and rat Mesenchymal Stromal Cells (rMSCs) differentiated towards the osteogenic lineage, using acrylic resin embedding. In many synthetic polymer-based microfiber meshes, ex post processability of fiber/cell constructs for histologic analysis may face serious difficulties, leading to an incomplete investigation of the potential of these scaffolds. Indeed, while polymeric nanofiber (fiber diameter = tens of nanometers)/cell constructs can usually be embedded in common histologic media and easily sectioned, preserving the material structure and the antigenic reactivity, histologic analysis of large polymeric microfiber/cell constructs in the literature is scant. This affects microfiber scaffolds based on FDA-approved and widely used polymers such as PLLA and its copolymers. Indeed, for such constructs, especially those with fiber diameter and fiber interspace much larger than cell size, standard histologic processing is usually inefficient due to inhomogeneous hardness and lack of cohesion between the synthetic and the biological phases under sectioning. In this study, the microfiber/MSC constructs were embedded in acrylic resin and the staining/reaction procedures were calibrated to demonstrate the possibility of successfully employing histologic methods in tissue engineering studies even in such difficult cases. We histologically investigated the main osteogenic markers and extracellular matrix molecules, such as alkaline phosphatase, osteopontin, osteocalcin, TGF-β1, Runx2, Collagen type I and the presence of amorphous, fibrillar and mineralized matrix. Biochemical tests were employed to confirm our findings. This protocol permitted efficient sectioning of the treated constructs and good penetration of the histologic reagents, thus allowing distribution and expression of
Adaptive hybrid mesh refinement for multiphysics applications
International Nuclear Information System (INIS)
Khamayseh, Ahmed; Almeida, Valmor de
2007-01-01
The accuracy and convergence of computational solutions of mesh-based methods is strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are comprised of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to equidistribute a weighted geometric and/or solution error function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate modeling. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably, time-dependent multi-physics, multi-scale modeling and simulation
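The r-adaptation step described above, repositioning nodes so that a weighted error function is equidistributed, can be sketched in one dimension with a monitor-function equidistribution; this simplified 1D form is an illustration of the principle, not the authors' r-h hybrid algorithm for polygonal and polyhedral meshes.

```python
import numpy as np

def equidistribute(x, w):
    """Reposition 1D mesh nodes so the monitor function w is equidistributed:
    equal integral of w between every pair of consecutive nodes.

    x: current node coordinates (sorted); w: positive weight at each node.
    Returns new node positions with the same endpoints and node count.
    """
    # cumulative "mass" of the monitor function via the trapezoidal rule
    mass = np.concatenate([[0.0],
                           np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, mass[-1], len(x))  # equal mass per interval
    return np.interp(targets, mass, x)            # invert the cumulative map
```

Nodes cluster where the monitor (e.g. an error estimate or solution gradient) is large, which is exactly the goal of r-adaptive mesh optimization.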
Pelties, Christian; de la Puente, Josep; Ampuero, Jean-Paul; Brietzke, Gilbert B.; Käser, Martin
2012-01-01
Accurate and efficient numerical methods to simulate dynamic earthquake rupture and wave propagation in complex media and complex fault geometries are needed to address fundamental questions in earthquake dynamics, to integrate seismic and geodetic
Large-Eddy Simulation (LES) of Spray Transients: Start and End of Injection Phenomena
Directory of Open Access Journals (Sweden)
Battistoni Michele
2016-01-01
Full Text Available This work reports investigations of Diesel spray transients, accounting for internal nozzle flow and needle motion, and demonstrates how seamless calculations of the internal flow and the external jet can be accomplished in a Large-Eddy Simulation (LES) framework using an Eulerian mixture model. Sub-grid stresses are modeled with the Dynamic Structure (DS) model, a non-viscosity-based one-equation LES model. Two problems are studied with a high level of spatial and temporal resolution. The first concerns an End-Of-Injection (EOI) case in which gas ingestion, cavitation, and dribble formation are resolved. The second is a Start-Of-Injection (SOI) simulation that aims at analyzing the effect of residual gas trapped inside the injector sac on spray penetration and the rate of fuel injection. Simulation results are compared against experiments carried out at Argonne National Laboratory (ANL) using synchrotron X-rays. A mesh sensitivity analysis is conducted to assess the quality of the LES approach by evaluating the resolved turbulent kinetic energy budget and comparing the outcomes with a length-scale resolution index. LES of both the EOI and SOI processes have been carried out on a single-hole Diesel injector, providing insights into the physics of the processes, with internal and external flow details, and linking the phenomena at the end of an injection event to those at the start of a new injection. Concerning the EOI, the model predicts ligament formation and gas ingestion, as observed experimentally, and the amount of residual gas in the nozzle sac matches the available data. The fast dynamics of the process is described in detail. The simulation provides unique insights into the physics at the EOI. Similarly, the SOI simulation shows how gas is ejected first, and liquid fuel starts being injected with a delay. The simulation starts from a very low needle lift and is able to predict the actual Rate-Of-Injection (ROI) and jet penetration, based only on the
Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems
International Nuclear Information System (INIS)
Ibrahim, A. M.; Peplow, D. E.; Mosher, S. W.; Wagner, J. C.; Evans, T. M.; Wilson, P. P.; Sawan, M. E.
2013-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class super computer. (authors)
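The idea of refining a mesh to "capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements" can be sketched as a greedy priority-queue loop over per-cell error indicators. The halving of a split cell's indicator is a simplifying assumption for illustration; it is not the paper's deterministic mesh refinement algorithm.

```python
import heapq

def refine_under_budget(errors, max_elements):
    """Greedy refinement: repeatedly split the worst cell until a budget is hit.

    errors: per-cell error (or detail) indicators. Splitting a cell replaces
    it with two children, each assumed here to carry half the parent's error.
    Returns the final cell count, which never exceeds max_elements.
    """
    heap = [(-e, i) for i, e in enumerate(errors)]  # max-heap via negation
    heapq.heapify(heap)
    n = len(errors)
    next_id = n
    while n + 1 <= max_elements and heap:
        neg_e, _ = heapq.heappop(heap)      # worst remaining cell
        child_err = -neg_e / 2.0
        heapq.heappush(heap, (-child_err, next_id))
        heapq.heappush(heap, (-child_err, next_id + 1))
        next_id += 2
        n += 1  # one cell became two: net +1 element
    return n
```

The budget cap is what keeps the deterministic calculation's memory footprint bounded, the same constraint motivating the three algorithms above.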
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer
Parallel continuous simulated tempering and its applications in large-scale molecular simulations
Energy Technology Data Exchange (ETDEWEB)
Zang, Tianwu; Yu, Linglin; Zhang, Chong [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Ma, Jianpeng, E-mail: jpma@bcm.tmc.edu [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Verna and Marrs McLean Department of Biochemistry and Molecular Biology, Baylor College of Medicine, One Baylor Plaza, BCM-125, Houston, Texas 77030 (United States)
2014-07-28
In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method of our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of the simulation, typically 2–3, yet is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
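The replica-exchange machinery that PCST builds on uses the standard Metropolis swap criterion between copies at inverse temperatures β_i and β_j: P = min(1, exp[(β_i − β_j)(E_i − E_j)]). A minimal sketch of that criterion follows; PCST itself generalizes the temperature handling to continuous distributions, which is not shown here.

```python
import math
import random

def swap_accept(E_i, E_j, beta_i, beta_j, rng=random.random):
    """Metropolis acceptance test for exchanging replicas i and j.

    E_i, E_j: potential energies; beta_i, beta_j: inverse temperatures.
    rng: uniform [0,1) sampler, injectable for deterministic testing.
    Returns True if the swap is accepted.
    """
    p = min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))
    return rng() < p
```

Swaps between a cold copy (large β) at high energy and a hot copy at low energy are always accepted, which is how replicas migrate across the temperature ladder.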
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Energy Technology Data Exchange (ETDEWEB)
Beckingsale, D. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Gaudin, W. P. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Hornung, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Herdman, J. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom); Jarvis, S. A. [Atomic Weapons Establishment (AWE), Aldermaston (United Kingdom)
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
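The coarsen and refine data-parallel operators mentioned above map naturally to simple array transformations. Below is a minimal NumPy sketch of piecewise-constant prolongation (refine) and conservative restriction (coarsen) on a 2D patch; the actual library operates on GPU patch hierarchies and supports higher-order interpolation, so this is only an illustration of the operator pair.

```python
import numpy as np

def refine2d(coarse):
    """Prolongation: each coarse cell becomes a 2x2 block of fine cells
    (piecewise-constant injection, the simplest data-parallel refine)."""
    return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

def coarsen2d(fine):
    """Restriction: average each 2x2 block of fine cells into one coarse cell
    (conservative for cell-averaged quantities on a uniform patch)."""
    ny, nx = fine.shape
    return fine.reshape(ny // 2, 2, nx // 2, 2).mean(axis=(1, 3))
```

The pair is consistent: coarsening a refined patch recovers the original data exactly, a property worth checking when porting such operators to GPU kernels.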
Large-eddy simulation of atmospheric flow over complex terrain
Energy Technology Data Exchange (ETDEWEB)
Bechmann, A.
2006-11-15
The present report describes the development and validation of a turbulence model designed for atmospheric flows based on the concept of Large-Eddy Simulation (LES). The background for the work is the high-Reynolds-number k-epsilon model, which has been implemented in a finite-volume code for the incompressible Reynolds-averaged Navier-Stokes equations (RANS). The k-epsilon model is traditionally used for RANS computations, but is here developed to also enable LES. LES is able to provide detailed descriptions of a wide range of engineering flows at low Reynolds numbers. For atmospheric flows, however, the high Reynolds numbers and the rough surface of the earth pose difficulties normally not compatible with LES. Since these issues are most severe near the surface, they are addressed by handling the near-surface region with RANS and using LES only above this region. With this method, the developed turbulence model is able to handle both engineering and atmospheric flows and can be run in either RANS or LES mode. For LES, a time-dependent wind field that accurately represents the turbulent structures of a wind environment must be prescribed at the computational inlet. A method is implemented whereby the turbulent wind field from a separate LES can be used as inflow. To avoid numerical dissipation of turbulence, special attention is paid to the numerical method; e.g., the turbulence model is calibrated against the specific numerical scheme used. This is done by simulating decaying isotropic, homogeneous turbulence. Three atmospheric test cases are investigated in order to validate the behavior of the presented turbulence model. Simulation of the neutral atmospheric boundary layer illustrates the turbulence model's ability to generate and maintain the turbulent structures responsible for boundary-layer transport processes. Velocity and turbulence profiles are in good agreement with measurements. Simulation of the flow over the Askervein hill is also
Tool Support for Parametric Analysis of Large Software Simulation Systems
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, yet still explore important interactions in the parameter space in a systematic fashion. Additional test cases, automatically generated from models (e.g., UML, Simulink, Stateflow), improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
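The n-factor combinatorial idea described above can be made concrete with a minimal greedy pairwise (n = 2) generator. This is a sketch of the general technique, not the tool's actual algorithm, and the parameter names in the test are hypothetical:

```python
from itertools import combinations

def pairwise_cases(params):
    """Greedy generator of test cases covering every 2-way value pair.

    params maps parameter name -> list of candidate values.  Returns a
    list of dicts (one per test case); typically far fewer cases than
    the full Cartesian product, while every pair of values of any two
    parameters still appears together in at least one case.
    """
    names = sorted(params)
    uncovered = {
        (a, va, b, vb)
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    cases = []
    while uncovered:
        # Seed from one uncovered pair so every round makes progress.
        a0, va0, b0, vb0 = next(iter(uncovered))
        case = {a0: va0, b0: vb0}
        for name in names:
            if name in case:
                continue
            # Pick the value that, consistent with the partial case,
            # would cover the most still-uncovered pairs.
            def gain(v):
                return sum(
                    1
                    for (a, va, b, vb) in uncovered
                    if (a == name and va == v
                        and (b not in case or case[b] == vb))
                    or (b == name and vb == v
                        and (a not in case or case[a] == va))
                )
            case[name] = max(params[name], key=gain)
        cases.append(case)
        uncovered = {
            (a, va, b, vb) for (a, va, b, vb) in uncovered
            if not (case[a] == va and case[b] == vb)
        }
    return cases
```

For three parameters with 3, 2, and 2 values the full product has 12 combinations; a pairwise set covering all two-way interactions is usually close to half that, which is the cost-limiting effect the abstract describes.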
Simulations of Large-Area Electron Beam Diodes
Swanekamp, S. B.; Friedman, M.; Ludeking, L.; Smithe, D.; Obenschain, S. P.
1999-11-01
Large-area electron beam diodes are typically used to pump the amplifiers of KrF lasers. Simulations of large-area electron beam diodes using the particle-in-cell code MAGIC3D have shown the electron flow in the diode to be unstable. Since this instability can potentially produce a non-uniform current and energy distribution in the hibachi structure and lasing medium, it can be detrimental to laser efficiency. These results are similar to simulations performed using the ISIS code (M.E. Jones and V.A. Thomas, Proceedings of the 8th International Conference on High-Power Particle Beams, 665 (1990)). We have identified the instability as the so-called "transit-time" instability (C.K. Birdsall and W.B. Bridges, Electrodynamics of Diode Regions, Academic Press, New York, 1966; T.M. Antonsen, W.H. Miner, E. Ott, and A.T. Drobot, Phys. Fluids 27, 1257 (1984)) and have investigated the role of the applied magnetic field and diode geometry. Experiments are underway to characterize the instability on the Nike KrF laser system and will be compared to simulation. Some possible ways to mitigate the instability will also be presented.
Five-equation and robust three-equation methods for solution verification of large eddy simulation
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES), using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
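The flavor of such grid-convergence verification can be illustrated with the classical single-term Richardson estimate on three systematically refined grids. This is a simplified sketch, not the authors' five- or seven-equation method (which separates numerical from modeling errors); it lumps all error into one power-law term and assumes monotonic convergence with a constant refinement ratio:

```python
import math

def observed_order_and_error(f_fine, f_med, f_coarse, r):
    """Single-term Richardson estimate from three systematically refined
    grid solutions of a scalar quantity, with constant refinement ratio r.

    Assumes monotonic convergence, f = f_exact + C*h**p.  Returns the
    observed order p, the estimated error of the fine-grid value, and
    the extrapolated 'numerical benchmark'.
    """
    e21 = f_med - f_fine      # medium minus fine
    e32 = f_coarse - f_med    # coarse minus medium
    if e21 == 0.0 or e32 / e21 <= 0.0:
        raise ValueError("solutions are not monotonically convergent")
    p = math.log(e32 / e21) / math.log(r)   # observed order of accuracy
    err = e21 / (r**p - 1.0)                # estimated fine-grid error
    return p, err, f_fine - err             # benchmark = corrected value
```

The explicit monotonicity check mirrors why the abstract highlights robustness: when successive differences change sign (oscillatory convergence, or cancelling numerical and modeling errors), this classical estimator simply fails, which is the gap the multi-equation methods address.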
Large breast compressions: Observations and evaluation of simulations
Energy Technology Data Exchange (ETDEWEB)
Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J. [Centre of Medical Image Computing, UCL, London WC1E 6BT, United Kingdom; Computer Vision Laboratory, ETH Zuerich, 8092 Zuerich, Switzerland; Department of Surgery, UCL, London W1P 7LD, United Kingdom; Department of Imaging, UCL Hospital, London NW1 2BU, United Kingdom]
2011-02-15
Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction, was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast shapes
Large Eddy Simulation of the spray formation in confinements
International Nuclear Information System (INIS)
Lampa, A.; Fritsching, U.
2013-01-01
Highlights: • Process stability of confined spray processes is affected by the geometric design of the spray confinement. • LES simulations of confined spray flow have been performed successfully. • Clustering of droplets is predicted in the simulations and validated against experiments. • Criteria for specific coherent gas flow patterns and droplet clustering behaviour are found. -- Abstract: The particle and powder properties produced within spray drying processes are influenced by various unsteady transport phenomena in the dispersed multiphase spray flow in a confined spray chamber. In this context, differently scaled spray structures in a confined spray environment have been analyzed in experiments and numerical simulations. The experimental investigations were carried out with Particle-Image-Velocimetry to determine the velocities of the gas and the discrete phase. Large-Eddy-Simulations have been set up to predict the transient behaviour of the spray process and have given more insight into the sensitivity of the spray flow structures to the spray chamber design
Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency
Aikens, Kurt; Craft, Kyle; Redman, Andrew
2015-11-01
The accuracy of an inviscid flow assumption for wall-modeled large eddy simulation (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable to wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall-model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
Large eddy simulation of a fuel rod subchannel
International Nuclear Information System (INIS)
Mayer, Gusztav
2007-01-01
In a VVER-440 reactor the measured outlet temperature is related to fuel limit parameters, and the power upgrading plans for VVER-440 reactors motivated us to obtain more information on the mixing process in the fuel assemblies. In a VVER-440 rod bundle the fuel rods are arranged in a triangular array. Measurements show (Krauss and Meyer, 1998) that the classical engineering approach, which tries to trace the characterization of such systems back to equivalent (hydraulic diameter) pipe flows, does not give reasonable results. Due to the different turbulence characteristics, the mixing is more intensive in rod bundles than would be expected based on equivalent pipe flow correlations. As a possible explanation of the high mixing, secondary flow was deduced from measurements by several experimentalists (Trupp and Azad, 1975). Another candidate to explain the high mixing is the so-called flow pulsation phenomenon (Krauss and Meyer, 1998). In this paper we present subchannel simulations (Mayer et al. 2007) using large eddy simulation (LES) methodology and the lattice Boltzmann method (LBM), without spacers, at a Reynolds number of 21000. The simulation results are compared with the measurements of Trupp and Azad (1975). The mean axial velocity profile shows good agreement with the measurement data. Secondary flow has been observed directly in the simulation results. Reasonable agreement has been achieved for most Reynolds stresses. Nevertheless, the calculated normal stresses show a small but systematic deviation from the measurement data. (author)
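The "equivalent hydraulic diameter" engineering approach that the measurements call into question reduces the bundle to a pipe via D_h = 4*A_flow/P_wetted. The textbook closed form for an interior subchannel of an infinite triangular array can be sketched as follows (the pitch-to-diameter ratio in the test is illustrative, not VVER-440 design data):

```python
import math

def triangular_subchannel_dh(pitch, diameter):
    """Equivalent hydraulic diameter D_h = 4 * A_flow / P_wetted for an
    interior subchannel of an infinite triangular rod array.

    Per rod, the hexagonal unit cell has area (sqrt(3)/2) * pitch**2;
    subtracting the rod cross-section and dividing by the wetted
    perimeter (one rod circumference) yields the classical closed form
    D_h = d * ((2*sqrt(3)/pi) * (p/d)**2 - 1).
    """
    flow_area = (math.sqrt(3) / 2.0) * pitch**2 - math.pi * diameter**2 / 4.0
    wetted_perimeter = math.pi * diameter
    return 4.0 * flow_area / wetted_perimeter
```

For a pitch-to-diameter ratio near 1.34, D_h comes out slightly smaller than the rod diameter; correlations built on this single length scale are exactly what the rod-bundle measurements show to underpredict the inter-subchannel mixing.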
Energy Technology Data Exchange (ETDEWEB)
Cai, Yunhai
2000-08-31
A highly accurate self-consistent particle code to simulate the beam-beam collision in e+e- storage rings has been developed. It adopts a method of solving the Poisson equation with an open boundary. The method consists of two steps: assigning the potential on a finite boundary using the Green's function, and then solving for the potential inside the boundary with a fast Poisson solver. Since the solution of the Poisson equation is unique, the authors' solution is exactly the same as the one obtained by simply using the Green's function. The method allows a much smaller mesh region to be selected and therefore increases the resolution of the solver. The better resolution makes the calculation of the dynamics in the core of the beams more accurate. The luminosity simulated with this method agrees quantitatively with measurements for the PEP-II B-factory ring in the linear and nonlinear beam-current regimes, demonstrating its predictive capability in detail.
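The two-step open-boundary idea can be sketched in two dimensions, where the free-space Green's function is G(r) = ln(r)/(2*pi): step 1 assigns boundary potentials by direct summation over the charge, step 2 solves the resulting interior Dirichlet problem. This is a minimal sketch assuming a square grid and a charge that vanishes near the boundary; it uses a dense linear solve where the paper's point is precisely that step 2 can use any fast Poisson solver:

```python
import numpy as np

def solve_open_poisson(rho, h):
    """Open-boundary solve of laplacian(phi) = rho on an n x n grid.

    Step 1: set phi on the outer boundary from the free-space 2-D
    Green's function G(r) = ln(r) / (2*pi), summed over the charge.
    Step 2: solve the interior Dirichlet problem with the 5-point
    Laplacian (dense solve here for simplicity).  Assumes rho vanishes
    near the boundary so the boundary sum never hits r = 0.
    """
    n = rho.shape[0]
    x = np.arange(n) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    src = np.argwhere(np.abs(rho) > 0)
    q = rho[src[:, 0], src[:, 1]] * h * h          # cell charges
    phi = np.zeros_like(rho)

    # Step 1: direct Green's-function sum, boundary points only.
    bmask = np.zeros((n, n), dtype=bool)
    bmask[0, :] = bmask[-1, :] = bmask[:, 0] = bmask[:, -1] = True
    for i, j in np.argwhere(bmask):
        r = np.hypot(X[i, j] - X[src[:, 0], src[:, 1]],
                     Y[i, j] - Y[src[:, 0], src[:, 1]])
        phi[i, j] = np.sum(q * np.log(r)) / (2.0 * np.pi)

    # Step 2: 5-point Laplacian solve on the interior unknowns.
    m = n - 2
    def idx(i, j):
        return (i - 1) * m + (j - 1)
    A = np.zeros((m * m, m * m))
    b = np.zeros(m * m)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            k = idx(i, j)
            A[k, k] = -4.0
            b[k] = rho[i, j] * h * h
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 1 <= ii <= n - 2 and 1 <= jj <= n - 2:
                    A[k, idx(ii, jj)] = 1.0
                else:
                    b[k] -= phi[ii, jj]    # known boundary value to RHS
    phi[1:-1, 1:-1] = np.linalg.solve(A, b).reshape(m, m)
    return phi
```

Because the boundary values come from the exact free-space solution, the interior solution approximates the unbounded-domain potential even though the mesh covers only a small region around the beam, which is the resolution advantage the abstract describes.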
Accelerating large-scale phase-field simulations with GPU
Directory of Open Access Journals (Sweden)
Xiaoming Shi
2017-10-01
A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using algorithms specific to the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were each solved with the algorithm running on the GPU to test the performance of the package. Comparing the calculation results of the solver executed on a single CPU with those on the GPU, it was found that the GPU version runs up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
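The semi-implicit Fourier method underlying such packages treats the stiff gradient (Laplacian) term implicitly in spectral space and the local nonlinear term explicitly, which permits much larger time steps than a fully explicit scheme. A minimal CPU sketch for the Allen-Cahn equation (the CUDA kernels, the long-range interaction terms, and the iterative perturbation method of the package are omitted; parameter values are illustrative):

```python
import numpy as np

def allen_cahn_semi_implicit(phi, steps, dt=0.1, M=1.0, kappa=1.0, L=64.0):
    """Semi-implicit Fourier-spectral time stepping for the Allen-Cahn
    equation  d(phi)/dt = -M * (phi**3 - phi - kappa * laplacian(phi)),
    i.e. a double-well potential f = (phi**2 - 1)**2 / 4.

    The stiff Laplacian is inverted implicitly in Fourier space while
    the local nonlinearity is treated explicitly, so dt can be taken
    much larger than a fully explicit scheme would allow.
    """
    n = phi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    denom = 1.0 + dt * M * kappa * (kx**2 + ky**2)   # implicit operator
    for _ in range(steps):
        nonlinear = phi**3 - phi                     # f'(phi), explicit
        phi_hat = (np.fft.fft2(phi)
                   - dt * M * np.fft.fft2(nonlinear)) / denom
        phi = np.real(np.fft.ifft2(phi_hat))
    return phi
```

Each step costs a handful of FFTs and pointwise operations, all of which map naturally onto GPU kernels, which is why this scheme is the backbone of the acceleration reported above.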
Quality and Reliability of Large-Eddy Simulations II
Salvetti, Maria Vittoria; Meyers, Johan; Sagaut, Pierre
2011-01-01
The second Workshop on "Quality and Reliability of Large-Eddy Simulations", QLES2009, was held at the University of Pisa from September 9 to September 11, 2009. Its predecessor, QLES2007, was organized in 2007 in Leuven (Belgium). The focus of QLES2009 was on issues related to predicting, assessing and assuring the quality of LES. The main goal of QLES2009 was to enhance the knowledge on error sources and on their interaction in LES and to devise criteria for the prediction and optimization of simulation quality, by bringing together mathematicians, physicists and engineers and providing a platform specifically addressing these aspects for LES. Contributions were made by leading experts in the field. The present book contains the written contributions to QLES2009 and is divided into three parts, which reflect the main topics addressed at the workshop: (i) SGS modeling and discretization errors; (ii) Assessment and reduction of computational errors; (iii) Mathematical analysis and foundation for SGS modeling.
Large Eddy Simulation for Incompressible Flows An Introduction
Sagaut, P
2005-01-01
The first and most exhaustive work of its kind devoted entirely to the subject, Large Eddy Simulation presents a comprehensive account and a unified view of this young but very rich discipline. LES is the only efficient technique for approaching high Reynolds numbers when simulating industrial, natural or experimental configurations. The author concentrates on incompressible fluids and chooses his topics in treating with care both the mathematical ideas and their applications. The book addresses researchers as well as graduate students and engineers. The second edition was a greatly enriched version motivated both by the increasing theoretical interest in LES and the increasing number of applications. Two entirely new chapters were devoted to the coupling of LES with multiresolution multidomain techniques and to the new hybrid approaches that relate the LES procedures to the classical statistical methods based on the Reynolds-Averaged Navier-Stokes equations. This 3rd edition adds various sections to the text...
Aero-Acoustic Modelling using Large Eddy Simulation
International Nuclear Information System (INIS)
Shen, W Z; Soerensen, J N
2007-01-01
The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data
Very large eddy simulation of the Red Sea overflow
Ilıcak, Mehmet; Özgökmen, Tamay M.; Peters, Hartmut; Baumert, Helmut Z.; Iskandarani, Mohamed
Mixing between overflows and ambient water masses is a critical problem of deep-water mass formation in the downwelling branch of the meridional overturning circulation of the ocean. Modeling approaches that have been tested so far rely either on algebraic parameterizations in hydrostatic ocean circulation models, or on large eddy simulations that resolve most of the mixing using nonhydrostatic models. In this study, we examine the performance of a set of turbulence closures that have not previously been tested against observational data for overflows. We employ the so-called very large eddy simulation (VLES) technique, which allows the use of k-ε models in nonhydrostatic models. This is done by applying a dynamic spatial filtering to the k-ε equations. To our knowledge, this is the first time that the VLES approach has been adopted for an ocean modeling problem. The performance of the k-ε and VLES models is evaluated by conducting numerical simulations of the Red Sea overflow and comparing them to observations from the Red Sea Outflow Experiment (REDSOX). The computations are constrained to one of the main channels transporting the overflow, which is narrow enough to permit the use of a two-dimensional (and nonhydrostatic) model. A large set of experiments is conducted using different closure models, Reynolds numbers and spatial resolutions. It is found that, when no turbulence closure is used, the basic structure of the overflow, consisting of a well-mixed bottom layer (BL) and an entraining interfacial layer (IL), cannot be reproduced. The k-ε model leads to unrealistic thicknesses for both BL and IL, while VLES results in the most realistic reproduction of the REDSOX observations.
International Nuclear Information System (INIS)
Yoon, S; Lindstrom, P; Pascucci, V; Manocha, D
2005-01-01
We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications
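The flavor of a cache-oblivious layout metric can be sketched with a span-based cost: summing the logarithm of index distances over mesh edges rewards layouts that keep topological neighbors close at every block size simultaneously, without fixing any cache parameters. This is an illustrative simplification, not the paper's exact metric or its out-of-core multilevel minimizer:

```python
import math
import random

def layout_cost(edges, order):
    """Span-based layout cost: sum over edges of log2(1 + |pos(u)-pos(v)|).

    Under a power-of-two block model, the log of an edge's index span
    roughly counts how many block sizes the access straddles, so a
    lower total suggests fewer expected cache misses across all levels
    of the memory hierarchy.
    """
    pos = {v: i for i, v in enumerate(order)}
    return sum(math.log2(1 + abs(pos[u] - pos[v])) for u, v in edges)

def grid_edges(n):
    """Edges of a 4-connected n x n grid mesh; vertices are (i, j)."""
    edges = []
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                edges.append(((i, j), (i + 1, j)))
            if j + 1 < n:
                edges.append(((i, j), (i, j + 1)))
    return edges
```

A coherent row-major ordering of a small grid mesh scores far lower on this cost than a random permutation of the same vertices, consistent with the intuition behind the reported speedups for coherent layouts.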
Large-eddy simulation of swirling pulverized-coal combustion
Energy Technology Data Exchange (ETDEWEB)
Hu, L.Y.; Luo, Y.H. [Shanghai Jiaotong Univ. (China). School of Mechanical Engineering; Zhou, L.X.; Xu, C.S. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics
2013-07-01
A Eulerian-Lagrangian large-eddy simulation (LES) with a Smagorinsky-Lilly sub-grid-scale stress model, presumed-PDF fast-chemistry and EBU gas combustion models, and particle devolatilization and particle combustion models is used to study the turbulence and flame structures of swirling pulverized-coal combustion. The LES statistical results are validated against the measurement results. The instantaneous LES results show that the coherent structures for pulverized-coal combustion are stronger than those for swirling gas combustion. The particles are concentrated in the periphery of the coherent structures. The flame is located in the zone of high vorticity and high particle concentration.
Large Eddy Simulation of the ventilated wave boundary layer
DEFF Research Database (Denmark)
Lohmann, Iris P.; Fredsøe, Jørgen; Sumer, B. Mutlu
2006-01-01
A Large Eddy Simulation (LES) of (1) a fully developed turbulent wave boundary layer and (2) case 1 subject to ventilation (i.e., suction and injection varying alternately in phase) has been performed, using the Smagorinsky subgrid-scale model to express the subgrid viscosity. The model was found...... slows down the flow in the full vertical extent of the boundary layer, destabilizes the flow and decreases the mean bed shear stress significantly; whereas suction generally speeds up the flow in the full vertical extent of the boundary layer, stabilizes the flow and increases the mean bed shear stress...
DEFF Research Database (Denmark)
Spietz, Henrik Juul; Hejlesen, Mads Mølholm; Walther, Jens Honore
The ability to predict aerodynamic forces, due to the interaction of a fluid flow with a solid body, is central in many fields of engineering and is necessary to identify error-prone structural designs. In bluff-body flows the aerodynamic forces oscillate due to vortex shedding and variations in the oncoming flow. This may lead to structural instability, e.g. when the shedding frequency aligns with the natural frequency of the structure. Fluid-structure interaction must especially be considered when designing long span bridges. A three-dimensional vortex-in-cell method is applied for the direct numerical simulation of the flow past bodies of arbitrary shape. Vortex methods use a simple formulation where only the trajectories of discrete vortex particles are simulated. The Lagrangian formulation eliminates the CFL-type condition that Eulerian methods have to satisfy. This allows vortex methods...
Mesh Excision: Is Total Mesh Excision Necessary?
Wolff, Gillian F; Winters, J Christian; Krlin, Ryan M
2016-04-01
Nearly 29% of women will undergo a secondary, repeat operation for pelvic organ prolapse (POP) symptom recurrence following a primary repair, as reported by Abbott et al. (Am J Obstet Gynecol 210:163.e1-163.e1, 2014). In efforts to decrease the rates of failure, graft materials have been utilized to augment transvaginal repairs. Following the success of using polypropylene mesh (PPM) for stress urinary incontinence (SUI), the use of PPM in the transvaginal repair of POP increased. However, in recent years, significant concerns have been raised about the safety of PPM. Complications, some specific to mesh, such as exposures, erosion, dyspareunia, and pelvic pain, have been reported with increased frequency. In the current literature, there is no substantive evidence to suggest that PPM has intrinsic properties that warrant total mesh removal in the absence of complications. There are a number of complications that can occur after transvaginal mesh placement that do warrant surgical intervention after failure of conservative therapy. In aggregate, there are no high-quality controlled studies that clearly demonstrate that total mesh removal is consistently more likely to achieve pain reduction. In the cases of obstruction and erosion, it seems clear that definitive removal of the offending mesh is associated with resolution of symptoms in the majority of cases and is reasonable practice. There are a number of complications that can occur with removal of mesh, and patients should be informed of this as they formulate a choice of treatment. We will review these considerations as we examine the clinical question of whether total versus partial removal of mesh is necessary for the resolution of complications following transvaginal mesh placement.
Chatelain, Philippe; Duponcheel, Matthieu; Caprace, Denis-Gabriel; Marichal, Yves; Winckelmans, Gregoire
2017-11-01
A vortex particle-mesh (VPM) method with immersed lifting lines has been developed and validated. Based on the vorticity-velocity formulation of the Navier-Stokes equations, it combines the advantages of a particle method and of a mesh-based approach. The immersed lifting lines handle the creation of vorticity from the blade elements and its early development. Large-eddy simulation (LES) of vertical axis wind turbine (VAWT) flows is performed. The complex wake development is captured in detail and over up to 15 diameters downstream: from the blades to the near-wake coherent vortices and then through the transitional ones to the fully developed turbulent far wake (beyond 10 rotor diameters). The statistics and topology of the mean flow are studied with respect to the VAWT geometry and its operating point. The computational sizes also allow insights into the detailed unsteady vortex dynamics and topological flow features, such as a recirculation region influenced by the tip speed ratio and the rotor geometry.
Multiscale Data Assimilation for Large-Eddy Simulations
Li, Z.; Cheng, X.; Gustafson, W. I., Jr.; Xiao, H.; Vogelmann, A. M.; Endo, S.; Toto, T.
2017-12-01
Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence, boundary layer physics and cloud development, and there is a great need for developing data assimilation methodologies that can constrain LES models. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES. The LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso) is generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. One of the major objectives of LASSO is to develop the capability to observationally constrain LES using a hierarchy of ARM observations. We have implemented a multiscale data assimilation (MSDA) scheme, which allows data assimilation to be implemented separately for distinct spatial scales, so that localized observations can be effectively assimilated to constrain the mesoscale fields in the LES area of about 15 km in width. The MSDA analysis is used to produce forcing data that drive the LES. With this LES workflow we have examined 13 days with shallow convection selected from the period May-August 2016. We will describe the implementation of MSDA, present LES results, and address challenges and opportunities for applying data assimilation to LES studies.
Large eddy simulation of turbulent and stably-stratified flows
International Nuclear Information System (INIS)
Fallon, Benoit
1994-01-01
The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulations with a structure-function subgrid model, in both isothermal and stably-stratified configurations. Without stratification, the flow develops highly distorted Kelvin-Helmholtz billows, undergoing helical pairing, with Lambda-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more two-dimensional. We show, with increasing stratification, how the shear layer growth is frozen by inhibition first of the pairing process and then of the Kelvin-Helmholtz instabilities, and how gravity waves or stable density interfaces develop. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation is demonstrated for industrial flows by studying a complex stratified cavity. Temperature fluctuations are compared to experimental measurements. We also develop three-dimensional unsteady animations in order to understand and visualize turbulent interactions. (author) [fr]
Large Eddy Simulation of Film-Cooling Jets
Iourokina, Ioulia
2005-11-01
Large Eddy Simulation of inclined jets issuing into a turbulent boundary-layer crossflow has been performed. The simulation models the film-cooling experiments of Pietrzyk et al. (J. of Turb., 1989), consisting of a large plenum feeding an array of jets inclined at 35° to the flat surface with a pitch of 3D and L/D = 3.5. The blowing ratio is 0.5 with unity density ratio. The numerical method is a hybrid one, combining an external compressible solver with a low-Mach-number code for the plenum and film holes. Vorticity dynamics pertinent to jet-in-crossflow interactions is analyzed and three-dimensional vortical structures are revealed. Turbulence statistics are compared to the experimental data. The turbulence production due to shearing in the crossflow is compared to that within the jet hole. The influence of three-dimensional coherent structures on the wall heat transfer is investigated, and strategies to increase film-cooling performance are discussed.
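The operating point quoted above can be checked with the standard film-cooling definition of the blowing ratio, M = (rho_j u_j) / (rho_inf u_inf), the jet-to-crossflow mass-flux ratio. The velocity values below are illustrative, chosen only to reproduce M = 0.5 at unity density ratio.

```python
def blowing_ratio(rho_j, u_j, rho_inf, u_inf):
    """Film-cooling blowing ratio: jet-to-crossflow mass-flux ratio."""
    return (rho_j * u_j) / (rho_inf * u_inf)

# Unity density ratio and M = 0.5: the coolant leaves the hole at half
# the crossflow velocity (illustrative values).
m = blowing_ratio(rho_j=1.0, u_j=0.5, rho_inf=1.0, u_inf=1.0)
```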
Large Eddy Simulation of Supercritical CO2 Through Bend Pipes
He, Xiaoliang; Apte, Sourabh; Dogan, Omer
2017-11-01
Supercritical carbon dioxide (sCO2) is being investigated as a working fluid for power generation in thermal solar, fossil energy, and nuclear power plants at high pressures. Severe erosion has been observed in sCO2 test loops, particularly in nozzles, turbine blades, and pipe bends. It is hypothesized that complex flow features such as flow separation and property variations may lead to large oscillations in the wall shear stresses and result in material erosion. In this work, large eddy simulations are conducted at different Reynolds numbers (5000, 27,000 and 50,000) to investigate the effect of heat transfer in a 90-degree bend pipe with unit radius of curvature, in order to identify the potential causes of the erosion. The simulation is first performed without heat transfer to validate the flow solver against available experimental and computational studies. Mean flow statistics, turbulent kinetic energy, shear stresses, and wall force spectra are computed and compared with available experimental data. The formation of counter-rotating vortices, known as Dean vortices, is observed. Secondary flow patterns and swirl-switching flow motions are identified and visualized. The effects of heat transfer on these flow phenomena are then investigated by applying a constant heat flux at the wall. DOE Fossil Energy Crosscutting Technology Research Program.
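The strength of the secondary (Dean) flow in a bend is usually characterized by the Dean number. One common convention is De = Re sqrt(r / Rc), with r the pipe radius and Rc the bend radius of curvature; definitions vary between authors, and the assumption Rc/D = 1 below is read from the "unit radius of curvature" wording above, not stated explicitly in the paper.

```python
import math

def dean_number(re, r_pipe, r_curvature):
    """De = Re * sqrt(r / Rc); one common convention (others differ
    by a constant factor)."""
    return re * math.sqrt(r_pipe / r_curvature)

# Bend with curvature radius equal to the pipe diameter (assumed Rc/D = 1),
# for the three Reynolds numbers simulated above.
for re in (5000, 27000, 50000):
    print(f"Re = {re:6d}  ->  De ~ {dean_number(re, 0.5, 1.0):.0f}")
```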
Large Eddy Simulation for an inherent boron dilution transient
International Nuclear Information System (INIS)
Jayaraju, S.T.; Sathiah, P.; Komen, E.M.J.; Baglietto, E.
2013-01-01
Highlights: • Large Eddy Simulation is performed for a transient boron dilution scenario in the scaled experimental facility of ROCOM. • A fully conformal polyhedral grid of 14 million cells is created to capture all details of the domain. • A systematic multi-step validation methodology is followed to assess the accuracy of the LES model. • For the presently simulated BDT scenario, the LES results lend support to its reliability in consistently predicting the slug transport in the RPV. -- Abstract: The present paper focuses on the validation and applicability of large eddy simulation (LES) to analyze the transport and mixing in the reactor pressure vessel (RPV) during an inherent boron dilution transient (BDT) scenario. Extensive validation data come from relevant integral tests performed in the scaled ROCOM experimental facility. The modeling of sub-grid scales is based on the WALE model. A fully conformal polyhedral grid of about 15 million cells is constructed to capture all details of the domain, including the complex structures of the lower plenum. Detailed qualitative and quantitative validations are performed following a systematic multi-step validation methodology. Qualitative comparisons to the experimental data in the cold legs, downcomer, and core inlet showed good predictions by the LES model. Minor deviations seen in the quantitative comparisons are rigorously quantified. A key parameter affecting the core neutron kinetics response is the highest deborated slug concentration occurring at the core inlet during the transient. Detailed analyses are made at the core inlet to evaluate not only the value of the maximum slug concentration, but also the location and the time at which it occurs during the transient. The relative differences between the ensemble-averaged experimental data and the CFD predictions were within the range of relative differences seen among 10 different experimental realizations. For the studied scenario, the
Application of large-eddy simulation to pressurized thermal shock: Assessment of the accuracy
International Nuclear Information System (INIS)
Loginov, M.S.; Komen, E.M.J.; Hoehne, T.
2011-01-01
Highlights: → We compare large-eddy simulation with experiment on the single-phase pressurized thermal shock problem. → Three test cases are considered; they cover the entire range of mixing patterns. → The accuracy of the flow mixing in the reactor pressure vessel is assessed qualitatively and quantitatively. - Abstract: Pressurized Thermal Shock (PTS) is identified as one of the safety issues where Computational Fluid Dynamics (CFD) can bring real benefits. Turbulence modeling may impact the overall accuracy of the calculated thermal loads on the vessel walls; therefore, advanced methods for turbulent flows are required. The feasibility and mesh resolution of LES for single-phase PTS were assessed earlier in a companion paper. The current investigation deals with the accuracy of the LES approach with respect to the experiment. Experimental data from the Rossendorf Coolant Mixing (ROCOM) facility are used as the basis for validation. Three test cases with different flow rates are considered. They correspond to a buoyancy-driven, a momentum-driven, and a transitional coolant mixing pattern in the downcomer. Time- and frequency-domain analyses are employed for comparison of the numerical and experimental data. The investigation shows a good qualitative prediction of the bulk flow patterns. The fluctuations are modeled correctly. A conservative estimate of the temperature drop near the wall can be obtained from the numerical results with a safety factor of 1.1-1.3. In general, the current LES gives a realistic and reliable description of the considered coolant mixing experiments. The accuracy of the prediction is definitely improved with respect to earlier CFD simulations.
Matsuda, K.; Onishi, R.; Takahashi, K.
2017-12-01
Urban high temperatures due to the combined influence of global warming and urban heat islands increase the risk of heat stroke. Greenery is one of the possible countermeasures for mitigating the heat environment, since the transpiration and shading effects of trees can reduce the air temperature and the radiative heat flux. In order to formulate effective measures, it is important to estimate the influence of greenery on heat stroke risk. In this study, we have developed a tree-crown-resolving large-eddy simulation (LES) model coupled with a three-dimensional radiative transfer (3DRT) model. The Multi-Scale Simulator for the Geoenvironment (MSSG) is used for performing building- and tree-crown-resolving LES. The 3DRT model is implemented in the MSSG so that the 3DRT is calculated repeatedly during the time integration of the LES. We have confirmed that the computational time for the 3DRT model is negligibly small compared with that for the LES, and that the accuracy of the 3DRT model is sufficiently high to evaluate the radiative heat flux at the pedestrian level. The present model is applied to the analysis of the heat environment in an actual urban area around Tokyo Bay, covering 8 km × 8 km with a 5-m grid mesh, in order to confirm its feasibility. The results show that the wet-bulb globe temperature (WBGT), an indicator of heat stroke risk, is predicted with sufficiently high accuracy to evaluate the influence of tree crowns on the heat environment. In addition, by comparing with a case without the greenery around Tokyo Bay, we have confirmed that the greenery increases the low-WBGT areas in major pedestrian spaces by a factor of 3.4. This indicates that the present model can quantitatively predict the greenery effect on the urban heat environment.
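For reference, the outdoor WBGT combines three temperatures with fixed weights (the ISO 7243 weighting; the paper's exact formulation may differ). The shading effect of tree crowns enters mainly through the globe temperature term, as the illustrative values below show.

```python
def wbgt_outdoor(t_wet, t_globe, t_dry):
    """Outdoor wet-bulb globe temperature (ISO 7243 weighting), deg C:
    0.7 * natural wet-bulb + 0.2 * globe + 0.1 * dry-bulb."""
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry

# Shading lowers the globe temperature Tg; the other terms are unchanged.
# Temperatures are illustrative, not values from the study.
sunny = wbgt_outdoor(t_wet=25.0, t_globe=50.0, t_dry=32.0)    # 30.7 C
shaded = wbgt_outdoor(t_wet=25.0, t_globe=35.0, t_dry=32.0)   # 27.7 C
```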
Large-eddy simulation of maritime deep tropical convection
Directory of Open Access Journals (Sweden)
Peter A Bogenschutz
2009-12-01
This study represents an attempt to apply Large-Eddy Simulation (LES) resolution to simulate deep tropical convection in near equilibrium for 24 hours over an area of about 205 × 205 km², comparable to that of a typical horizontal grid cell in a global climate model. The simulation is driven by large-scale thermodynamic tendencies derived from mean conditions during the GATE Phase III field experiment. The LES uses 2048 × 2048 × 256 grid points with a horizontal grid spacing of 100 m and a vertical grid spacing ranging from 50 m in the boundary layer to 100 m in the free troposphere. The simulation reaches a near-equilibrium deep convection regime in 12 hours. The simulated cloud field exhibits a trimodal vertical distribution of deep, middle, and shallow clouds similar to that often observed in the Tropics. A sensitivity experiment in which cold pools are suppressed by switching off the evaporation of precipitation results in much lower amounts of shallow and congestus clouds. Unlike the benchmark LES, where new deep clouds tend to appear along the edges of spreading cold pools, the deep clouds in the no-cold-pool experiment tend to reappear at the sites of the previous deep clouds and tend to be surrounded by extensive areas of sporadic shallow clouds. The vertical velocity statistics of updraft and downdraft cores below 6 km height are compared to aircraft observations made during GATE. The comparison shows generally good agreement, and strongly suggests that the LES simulation can be used as a benchmark to represent the dynamics of tropical deep convection on scales ranging from large turbulent eddies to mesoscale convective systems. The effect of horizontal grid resolution is examined by running the same case with progressively larger grid sizes of 200, 400, 800, and 1600 m. These runs show reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition
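The grid quoted above is worth a quick consistency check: 2048 points at 100 m spacing give the ~205 km domain, and the total point count is just over a billion. The 4-byte single-precision storage figure is an assumption for the memory estimate.

```python
# Consistency check of the LES grid described above.
nx = ny = 2048
nz = 256
dx = 100.0                              # horizontal spacing, m
domain_km = nx * dx / 1000.0            # 204.8 km on a side
n_points = nx * ny * nz                 # total grid points (2**30)
gb_per_field = n_points * 4 / 1e9       # one single-precision 3-D field, GB
```

At roughly 4.3 GB per prognostic field, even a modest set of variables puts such a run firmly in supercomputer territory.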
Nesting Large-Eddy Simulations Within Mesoscale Simulations for Wind Energy Applications
Lundquist, J. K.; Mirocha, J. D.; Chow, F. K.; Kosovic, B.; Lundquist, K. A.
2008-12-01
With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolutions of tens of meters or finer are required. These time-dependent large-eddy simulations (LES) account for complex terrain and resolve individual atmospheric eddies on length scales smaller than turbine blades. Such small-domain, high-resolution simulations are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to "local" sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosović (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions and to allow adequate spin-up of turbulence in the LES domain. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Large eddy simulation of soot evolution in an aircraft combustor
Mueller, Michael E.; Pitsch, Heinz
2013-11-01
An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel
Bernier, Caroline; Gazzola, Mattia; Ronsse, Renaud; Chatelain, Philippe
2017-11-01
We present a 2D fluid-structure interaction simulation method with a specific focus on articulated and actuated structures. The proposed algorithm combines a viscous Vortex Particle-Mesh (VPM) method based on a penalization technique with a Multi-Body System (MBS) solver. The hydrodynamic forces and moments acting on the structure parts are not computed explicitly from the surface stresses; they are instead recovered from the projection and penalization steps within the VPM method. The MBS solver accounts for the body dynamics via the Euler-Lagrange formalism. The deformations of the structure are dictated by the hydrodynamic efforts and actuation torques. Here, we focus on simplified swimming structures composed of neutrally buoyant ellipses connected by virtual joints. The joints are actuated through a simple controller in order to reproduce the swimming patterns of an eel-like swimmer. The method enables the recovery of the torque histories at each hinge along the body. The method is verified on several benchmarks: an impulsively started elastically mounted cylinder and free-swimming articulated fish-like structures. Validation will be performed by means of an experimental swimming robot that reproduces the 2D articulated ellipses.
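The penalization step referred to above can be sketched generically. In Brinkman-type penalization, the velocity inside the body mask (chi = 1) is relaxed toward the solid velocity with a large penalty parameter, while the fluid outside (chi = 0) is untouched. This is the generic implicit scheme, not the authors' code.

```python
import numpy as np

def penalize(u, u_solid, chi, lam, dt):
    """Implicit Brinkman penalization step: relax u toward u_solid where
    the body indicator chi = 1, with penalty parameter lam."""
    return (u + lam * dt * chi * u_solid) / (1.0 + lam * dt * chi)

u = np.array([1.0, 1.0])       # fluid velocity samples
u_s = np.array([0.0, 0.0])     # solid body at rest
chi = np.array([0.0, 1.0])     # second sample lies inside the body
u_new = penalize(u, u_s, chi, lam=1e8, dt=1e-3)
```

The implicit form stays stable for arbitrarily large lam, which is what allows the forces on the body to be recovered from the penalization term itself.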
Robust large-scale parallel nonlinear solvers for simulations.
Energy Technology Data Exchange (ETDEWEB)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
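Broyden's rank-one secant update, the core of the method discussed above, is easy to show at small scale. The sketch below is a dense illustrative version (the limited-memory variant in the report stores only the update vectors): it starts from a finite-difference Jacobian and never re-evaluates it, correcting it instead after every step. The test problem is an assumption for the example.

```python
import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    """One-sided finite-difference Jacobian, used only to initialize B."""
    fx, n = f(x), x.size
    J = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def broyden(f, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method: quasi-Newton steps with rank-one
    secant updates of the Jacobian approximation B."""
    x = np.asarray(x0, dtype=float)
    B, fx = fd_jacobian(f, x), f(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -fx)             # quasi-Newton step
        x, fx_old = x + s, fx
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        y = fx - fx_old
        B += np.outer(y - B @ s, s) / (s @ s)   # secant (rank-one) update
    return x

# Illustrative system: x0^2 + x1^2 = 2 and x0 = x1, with root (1, 1).
g = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
root = broyden(g, [2.0, 1.0])
```

Avoiding Jacobian evaluations is exactly what makes the approach attractive for codes that cannot form a Jacobian, as the report describes.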
International Nuclear Information System (INIS)
Tanaka, Nobuatsu; Maseguchi, Ryo; Ogawara, Takuya
2008-01-01
This study is concerned with the improvement of a numerical code called CRIMSON (Civa RefIned Multiphase SimulatiON), which has been developed to evaluate multi-phase flow behaviors based on recent CFD (computational fluid dynamics) technologies. CRIMSON employs a finite-volume method combined with the high-order interpolation scheme CIVA (cubic interpolation with area/volume coordinates). CRIMSON solves the gas and liquid phases with the unified CUP (combined unified procedure) scheme. The conventional CIVA method has two problems: interface blurring in long-term calculations and non-conservativeness. In this study, these problems were solved by introducing ideas from the level set method and the phase field method. We verified our method by applying it to the popular benchmark problems of a single rising bubble and the collapse of a water column. (author)
Seker, D; Oztuna, D; Kulacoglu, H; Genc, Y; Akcil, M
2013-04-01
Small mesh size has been recognized as one of the factors responsible for recurrence after Lichtenstein hernia repair, due to insufficient coverage or mesh shrinkage. The Lichtenstein Hernia Institute recommends a 7 × 15 cm mesh that can be trimmed up to 2 cm from the lateral side. We performed a systematic review to determine surgeons' mesh size preference for the Lichtenstein hernia repair and carried out a meta-analysis to determine the effect of mesh size, mesh type, and length of follow-up on recurrence. Two medical databases, PubMed and ISI Web of Science, were systematically searched using the key word "Lichtenstein repair." All full-text papers were selected. Publications mentioning mesh size were taken for further analysis. A mesh surface area of 90 cm(2) was accepted as the threshold for defining a mesh as small or large. A subgroup analysis of the pooled recurrence proportion according to mesh size, mesh type, and follow-up period was also done. In total, 514 papers were obtained. There were no prospective or retrospective clinical studies comparing mesh size and clinical outcome. A total of 141 papers were duplicated in both databases. As a result, 373 papers were obtained. The full text was available for over 95 % of the papers. Only 41 (11.2 %) papers discussed mesh size. In 29 studies, a mesh larger than 90 cm(2) was used. The most frequently preferred commercial mesh size was 7.5 × 15 cm. No papers mentioned the size of the mesh after trimming. There was no information about the relationship between mesh size and patient BMI. The pooled proportion of recurrence for small meshes was 0.0019 (95 % confidence interval: 0.007-0.0036), favoring large meshes to decrease the chance of recurrence. Recurrence becomes more marked when the follow-up period is longer than 1 year (p < 0.001). Heavy meshes also decreased recurrence (p = 0.015). This systematic review demonstrates that the size of the mesh used in Lichtenstein hernia repair is rarely
CSIR Research Space (South Africa)
Schoombie, Janine
2017-10-01
• Why STAR-CCM+? – In 2017 only a STAR-CCM+ v11.06 commercial licence was available – Didn't want to start from scratch – Test solver with identical mesh. STEP 1: • Import Fluent .cas files – Imported as a volume mesh – No mesh continuum... A comparison of ANSYS...
Large-scale ground motion simulation using GPGPU
Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.
2012-12-01
Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of different simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models of future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally carried out on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a complete system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (a parameter generation tool) and postprocessor tools (a filter tool, a visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong-scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times by using 4 and 16 GPUs, respectively. Next, we examined a weak-scaling test in which the model sizes (numbers of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
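The strong-scaling figures reported above translate directly into parallel efficiencies: 3.2x on 4 GPUs is 80% efficiency, while 7.3x on 16 GPUs is about 46%, showing how communication overhead grows with GPU count (the speed-ups are assumed to be relative to a single GPU).

```python
def strong_scaling_efficiency(speedup, n_gpus):
    """Parallel efficiency: measured speed-up divided by ideal speed-up,
    assuming the baseline is a single GPU."""
    return speedup / n_gpus

eff_4 = strong_scaling_efficiency(3.2, 4)     # 0.80
eff_16 = strong_scaling_efficiency(7.3, 16)   # about 0.46
```

The weak-scaling result, by contrast, stays nearly linear to 1024 GPUs because each GPU keeps the same workload as the problem grows.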
Large Eddy Simulations of turbulent flows at supercritical pressure
Energy Technology Data Exchange (ETDEWEB)
Kunik, C.; Otic, I.; Schulenberg, T., E-mail: claus.kunik@kit.edu, E-mail: ivan.otic@kit.edu, E-mail: thomas.schulenberg@kit.edu [Karlsruhe Inst. of Tech. (KIT), Karlsruhe (Germany)
2011-07-01
A Large Eddy Simulation (LES) method is used to investigate turbulent heat transfer to CO{sub 2} at supercritical pressure for upward flows. At these pressure conditions the fluid undergoes strong property variations within a certain temperature range, which can lead to a deterioration of heat transfer (DHT). In this analysis, the LES method is applied to turbulent forced convection conditions to investigate the influence of several subgrid-scale models (SGS models). At first, only velocity profiles of the so-called inflow generator are considered, whereas in the second part temperature profiles of the heated section are investigated in detail. The results are statistically analyzed and compared with DNS data from the literature. (author)
Background simulations for the Large Area Detector onboard LOFT
DEFF Research Database (Denmark)
Campana, Riccardo; Feroci, Marco; Del Monte, Ettore
2013-01-01
and magnetic fields around compact objects and in supranuclear density conditions. Having an effective area of ~10 m(2) at 8 keV, LOFT will be able to measure with high sensitivity very fast variability in X-ray fluxes and spectra. A good knowledge of the in-orbit background environment...... is essential to assess the scientific performance of the mission and optimize the design of its main instrument, the Large Area Detector (LAD). In this paper the results of an extensive Geant4 simulation of the instrument will be discussed, showing the main contributions to the background and the design...... an anticipated modulation of the background rate as small as 10 % over the orbital timescale. The intrinsic photonic origin of the largest background component also allows for efficient modelling, supported by in-flight active monitoring, allowing the prediction of systematic residuals significantly better than...
Langevin dynamics simulations of large frustrated Josephson junction arrays
International Nuclear Information System (INIS)
Groenbech-Jensen, N.; Bishop, A.R.; Lomdahl, P.S.
1991-01-01
Long-time Langevin dynamics simulations of large (N × N, N = 128) two-dimensional arrays of Josephson junctions in a uniformly frustrating external magnetic field are reported. The results demonstrate: (1) relaxation from an initially random flux configuration follows a universal fit to a glassy stretched-exponential type of relaxation for intermediate temperatures (0.3 Tc ≲ T ≲ 0.7 Tc), and an activated-dynamic behavior for T ∼ Tc; (2) a glassy (multi-time, multi-length-scale) voltage response to an applied current. Intrinsic dynamical symmetry breaking, induced by boundaries acting as nucleation sites for flux lattice defects, gives rise to a transverse and noisy voltage response.
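The "glassy" stretched-exponential relaxation referred to above has the form q(t) ~ exp(-(t/tau)^beta) with 0 < beta < 1; setting beta = 1 recovers simple exponential decay. The parameter values below are illustrative, not the fitted values of the paper.

```python
import numpy as np

def stretched_exp(t, tau, beta):
    """Stretched-exponential relaxation, exp(-(t/tau)**beta)."""
    return np.exp(-(t / tau) ** beta)

t = np.linspace(0.0, 10.0, 6)
slow_tail = stretched_exp(t, tau=1.0, beta=0.5)   # glassy: slow long-time tail
pure_exp = stretched_exp(t, tau=1.0, beta=1.0)    # ordinary exponential
```

For t > tau the stretched form decays markedly more slowly than the pure exponential, which is why it signals multi-time-scale (glassy) dynamics.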
Large-Eddy-Simulation of turbulent magnetohydrodynamic flows
Directory of Open Access Journals (Sweden)
Woelck Johannes
2017-01-01
A magnetohydrodynamic turbulent channel flow under the influence of a wall-normal magnetic field is investigated using the Large-Eddy-Simulation technique and a k-equation subgrid-scale model. To this end, the new solver MHDpisoFoam is implemented in the OpenFOAM CFD code. The temporal decay of an initial turbulent field for different magnetic parameters is investigated. The rms values of the averaged velocity fluctuations show a similar trend for each coordinate direction. 80% of the fluctuations are damped out in the range 0 < Ha < 75 at Re = 6675. The trend can be approximated via an exponential of the form exp(−a·Ha), where a is a scaling parameter. At higher Hartmann numbers the fluctuations decrease in an almost linear way. The results of this study therefore show that it may be possible to construct a general law for turbulence damping due to the action of magnetic fields.
Large eddy simulation of the flow through a swirl generator
Energy Technology Data Exchange (ETDEWEB)
Conway, Stephen
1998-12-01
The advances made in computer technology over recent years have led to a great increase in the range of engineering problems that can be studied using CFD. The computation of flows over and through complex geometries at relatively high Reynolds numbers is becoming more common using the Large Eddy Simulation (LES) technique. Direct numerical simulation of such flows is still beyond the capacity of today's fastest supercomputers, requiring excessive computational time and memory. In addition, traditional Reynolds-Averaged Navier-Stokes (RANS) methods are known to have limited applicability in a wide range of engineering flow situations. In this thesis LES has been used to simulate the flow through a cascade of guidance vanes, more commonly known as a swirl generator, positioned at the inlet to a gas turbine combustion chamber. This flow case is of interest because of the complex flow phenomena which occur within the swirl generator, including compressibility effects, different types of flow instabilities, transition, laminar and turbulent separation, and near-wall turbulence. It is also of interest because it fits very well into the range of engineering applications that can be studied using LES. Two computational grids with different resolutions and two subgrid-scale stress models were used in the study. The effects of separation and transition are investigated. A vortex shedding frequency from the guidance vanes is determined, which is seen to depend on the angle of the incident air flow. Interaction between the movement of the separation region and the shedding frequency is also noted. Such vortex shedding phenomena can directly affect the quality of fuel-air mixing within the combustion chamber and can in some cases induce vibrations in the gas turbine structure. Comparisons between the results obtained using different grid resolutions and with implicit and dynamic divergence (DDM) subgrid-scale stress models are also made. 32 refs, 35 figs, 2 tabs
Large Eddy Simulation of Vertical Axis Wind Turbine Wakes
Directory of Open Access Journals (Sweden)
Sina Shamsoddin
2014-02-01
In this study, large eddy simulation (LES) is combined with a turbine model to investigate the wake behind a vertical-axis wind turbine (VAWT) in a three-dimensional turbulent flow. Two methods are used to model the subgrid-scale (SGS) stresses: (a) the Smagorinsky model; and (b) the modulated gradient model. To parameterize the effects of the VAWT on the flow, two VAWT models are developed: (a) the actuator swept-surface model (ASSM), in which the time-averaged turbine-induced forces are distributed on a surface swept by the turbine blades, i.e., the actuator swept surface; and (b) the actuator line model (ALM), in which the instantaneous blade forces are spatially distributed only along lines representing the blades, i.e., the actuator lines. This is the first time that LES has been applied and validated for the simulation of VAWT wakes using either the ASSM or the ALM technique. In both models, blade-element theory is used to calculate the lift and drag forces on the blades. The results are compared with flow measurements in the wake of a model straight-bladed VAWT, carried out in the Institut de Mécanique et Statistique de la Turbulence (IMST) water channel. Different combinations of SGS models with VAWT models are studied, and a fairly good overall agreement between simulation results and measurement data is observed. In general, the ALM is found to better capture the unsteady-periodic nature of the wake and shows better agreement with the experimental data than the ASSM. The modulated gradient model is also found to be a more reliable SGS stress modeling technique than the Smagorinsky model, and it yields reasonable predictions of the mean flow and turbulence characteristics of a VAWT wake using its theoretically determined model coefficient.
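The blade-element closure used by both the ASSM and the ALM can be sketched as follows. Real simulations use tabulated airfoil polars cl(alpha) and cd(alpha); the linear lift slope and constant drag coefficient below are placeholders, and the water density and input values are illustrative assumptions.

```python
import math

def blade_element_forces(u_rel, alpha, chord, rho=1000.0):
    """Lift and drag per unit span for one blade element.
    Placeholder aerodynamics: thin-airfoil lift slope, constant drag."""
    cl = 2.0 * math.pi * alpha              # thin-airfoil lift (placeholder)
    cd = 0.01                               # constant drag (placeholder)
    q_chord = 0.5 * rho * u_rel**2 * chord  # dynamic pressure times chord
    return q_chord * cl, q_chord * cd       # (lift, drag) per unit span

# Illustrative element: 2 m/s relative velocity, 0.1 rad angle of attack.
lift, drag = blade_element_forces(u_rel=2.0, alpha=0.1, chord=0.05)
```

In the ALM these element forces are then projected onto the actuator lines as body forces; in the ASSM the same forces are time-averaged over a revolution and spread over the swept surface.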
Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation
Sale, Danny; Aliseda, Alberto
2014-11-01
Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays; this includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy simulation (LES) are solved using a finite-volume method, and the presence of turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall-models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit and turbulence statistics in the wakes are compared between the LES and experimental datasets.
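The body-force projection at the heart of the actuator-line method mentioned above is commonly implemented with a regularized Gaussian kernel. A minimal 1D sketch follows; the kernel width, grid, and force value are hypothetical, and actual actuator-line codes apply the 3D kernel (with normalization eps^3 * pi^(3/2)) along each rotating blade line.

```python
import numpy as np

def project_line_force(x_grid, x_actuator, force, eps):
    """Spread a point force from an actuator point onto grid nodes with the
    regularized Gaussian kernel eta(d) = exp(-(d/eps)^2) / (eps*sqrt(pi)).
    1D form for illustration only; 3D codes use eps^3 * pi^(3/2)."""
    d = x_grid - x_actuator
    eta = np.exp(-(d / eps) ** 2) / (eps * np.sqrt(np.pi))
    return force * eta

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
f = project_line_force(x, 0.5, force=2.0, eps=0.05)
total = np.sum(f) * dx  # discrete integral should recover the point force
```

Because the kernel integrates to one, the discrete sum of the projected body force recovers the original point force, which is what makes the projection conservative.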
Density-functional theory simulation of large quantum dots
Jiang, Hong; Baranger, Harold U.; Yang, Weitao
2003-10-01
Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-a-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacings between Coulomb blockade peaks.
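The Fourier-convolution idea mentioned for the Hartree potential can be illustrated by solving a periodic Poisson problem with FFTs. This sketch assumes a periodic square box and normalized units; the paper's actual quantum-dot calculation uses a different (free-space) Coulomb kernel, so treat this only as a demonstration of the FFT mechanics.

```python
import numpy as np

def hartree_fft(rho, L):
    """Potential from the periodic Poisson equation -lap V = rho on a square
    box of side L, solved by dividing by k^2 in Fourier space (a sketch of
    the Fourier-convolution idea; the zero mode is set to zero mean)."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0            # avoid divide-by-zero for the mean mode
    v_hat = np.fft.fft2(rho) / k2
    v_hat[0, 0] = 0.0         # fix the potential's mean to zero
    return np.fft.ifft2(v_hat).real

# verify against an analytic eigenfunction of the Laplacian
n, L = 64, 1.0
xs = np.arange(n) * L / n
X, Y = np.meshgrid(xs, xs, indexing="ij")
v_exact = np.cos(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)
rho = 2.0 * (2 * np.pi / L) ** 2 * v_exact
err = np.max(np.abs(hartree_fft(rho, L) - v_exact))
```

Since the test density is a single Fourier mode, the FFT solve is exact to round-off, which is a convenient sanity check for this class of solver.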
Large Eddy Simulation Study for Fluid Disintegration and Mixing
Bellan, Josette; Taskinoglu, Ezgi
2011-01-01
A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equation, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that, without the additional term in the momentum equation, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that, without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not
Deploy production sliding mesh capability with linear solver benchmarking.
Energy Technology Data Exchange (ETDEWEB)
Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thomas, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Overfelt, James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sprague, Mike [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rood, Jon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2018-02-01
Wind applications require the ability to simulate rotating blades. To support this use-case, a novel design-order sliding mesh algorithm has been developed and deployed. The hybrid method combines the control volume finite element methodology (CVFEM) with concepts found within a discontinuous Galerkin (DG) finite element method (FEM) to manage a sliding mesh. The method has been demonstrated to be design-order for the tested polynomial bases (P=1 and P=2) and has been deployed to provide production simulation capability for a Vestas V27 (225 kW) wind turbine. Other stationary and canonical rotating flow simulations are also presented. As the majority of wind-energy applications are driving extensive usage of hybrid meshes, a foundational study that outlines near-wall numerical behavior for a variety of element topologies is presented. Results indicate that the proposed nonlinear stabilization operator (NSO) is an effective stabilization methodology to control Gibbs phenomena at large cell Peclet numbers. The study also provides practical mesh resolution guidelines for future analysis efforts. Application-driven performance and algorithmic improvements have been carried out to increase robustness of the scheme on hybrid production wind energy meshes. Specifically, the Kokkos-based Nalu Kernel construct outlined in the FY17/Q4 ExaWind milestone has been transitioned to the hybrid mesh regime. This code base is exercised within a full V27 production run. Simulation timings for parallel search and custom ghosting are presented. As the low-Mach application space requires implicit matrix solves, the cost of matrix reinitialization has been evaluated on a variety of production meshes. Results indicate that at low element counts, i.e., fewer than 100 million elements, matrix graph initialization and preconditioner setup times are small. However, as mesh sizes increase, e.g., 500 million elements, simulation time associated with "setup" costs can increase to nearly 50% of
Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice
Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand
2018-01-01
Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained over the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. The present results highlight the need for large-scale climate models to account for the NSTM layer.
Non-Markovian closure models for large eddy simulations using the Mori-Zwanzig formalism
Parish, Eric J.; Duraisamy, Karthik
2017-01-01
This work uses the Mori-Zwanzig (M-Z) formalism, a concept originating from nonequilibrium statistical mechanics, as a basis for the development of coarse-grained models of turbulence. The mechanics of the generalized Langevin equation (GLE) are considered, and insight gained from the orthogonal dynamics equation is used as a starting point for model development. A class of subgrid models is considered which represent nonlocal behavior via a finite memory approximation [Stinis, arXiv:1211.4285 (2012)], the length of which is determined using a heuristic that is related to the spectral radius of the Jacobian of the resolved variables. The resulting models are intimately tied to the underlying numerical resolution and are capable of approximating non-Markovian effects. Numerical experiments on the Burgers equation demonstrate that the M-Z-based models can accurately predict the temporal evolution of the total kinetic energy and the total dissipation rate at varying mesh resolutions. The trajectory of each resolved mode in phase space is accurately predicted for cases where the coarse graining is moderate. Large eddy simulations (LESs) of homogeneous isotropic turbulence and the Taylor-Green Vortex show that the M-Z-based models are able to provide excellent predictions, accurately capturing the subgrid contribution to energy transfer. Last, LESs of fully developed channel flow demonstrate the applicability of M-Z-based models to nondecaying problems. It is notable that the form of the closure is not imposed by the modeler, but is rather derived from the mathematics of the coarse graining, highlighting the potential of M-Z-based techniques to define LES closures.
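The finite-memory approximation described in this record can be illustrated with a scalar toy problem. Everything below (the exponential kernel, coefficients, and window length) is an assumption chosen for illustration; it shows only the mechanics of truncating a memory integral to a finite window, not the paper's derived closure.

```python
import numpy as np

def gle_finite_memory(u0=1.0, dt=0.01, nsteps=2000, tau=0.5, a=1.0, b=0.5):
    """Integrate a scalar generalized-Langevin-type model
        du/dt = -a*u - b * int_{t-tau}^{t} exp(-(t-s)) u(s) ds
    with the memory integral truncated to a window of length tau, in the
    spirit of finite-memory M-Z closures (illustrative values only)."""
    m = int(round(tau / dt))
    # rectangle-rule quadrature weights, most recent sample last
    w = np.exp(-dt * np.arange(m - 1, -1, -1)) * dt
    hist = [u0]
    u = u0
    for _ in range(nsteps):
        window = np.array(hist[-m:])
        mem = float(np.dot(w[-window.size:], window))
        u = u + dt * (-a * u - b * mem)
        hist.append(u)
    return np.array(hist)

traj = gle_finite_memory()
```

With both the Markovian and memory terms dissipative, the trajectory decays monotonically toward zero; in an actual M-Z closure the window length would be tied to the resolution, e.g., via the spectral radius heuristic the paper describes.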
Parallel simulation of tsunami inundation on a large-scale supercomputer
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation; the computational power of recent massively parallel supercomputers can enable faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so very fast parallel computers are expected to become more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
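The layer-proportional CPU allocation described above can be sketched as follows. The rounding scheme (largest remainder, at least one CPU per layer) and the nested-grid sizes are hypothetical illustrations, not details taken from the paper.

```python
def allocate_cpus(grid_points, total_cpus):
    """Allocate CPUs to nested layers in proportion to their number of grid
    points. Largest-remainder rounding; each layer gets at least one CPU
    (assumes total_cpus >= number of layers)."""
    total = sum(grid_points)
    shares = [total_cpus * g / total for g in grid_points]
    alloc = [max(1, int(s)) for s in shares]
    # hand leftover CPUs to the layers with the largest fractional remainders
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - int(shares[i]), reverse=True)
    i = 0
    while sum(alloc) < total_cpus:
        alloc[order[i % len(order)]] += 1
        i += 1
    return alloc

layers = [4_000_000, 1_000_000, 250_000]   # hypothetical nested-grid sizes
cpus = allocate_cpus(layers, 64)
```

After this per-layer split, each layer's CPUs would then be assigned contiguous strips by the 1-D domain decomposition the abstract mentions.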
Lee, Chin Yik; Cant, Stewart
2017-07-01
A premixed propane-air flame stabilised on a triangular bluff body in a model jet-engine afterburner configuration is investigated using large-eddy simulation (LES). The reaction rate source term for turbulent premixed combustion is closed using the transported flame surface density (TFSD) model. In this approach, there is no need to assume local equilibrium between the generation and destruction of subgrid FSD, as is commonly done in simple algebraic closure models. Instead, the key processes that create and destroy FSD are accounted for explicitly. This allows the model to capture large-scale unsteady flame propagation in the presence of combustion instabilities, or in situations where the flame encounters progressive wrinkling with time. In this study, comprehensive validation of the numerical method is carried out. For the non-reacting flow, good agreement for both the time-averaged and root-mean-square velocity fields is obtained, and the Kármán-type vortex shedding behaviour seen in the experiment is well represented. For the reacting flow, two mesh configurations are used to investigate the sensitivity of the LES results to the numerical resolution. Profiles for the velocity and temperature fields exhibit good agreement with the experimental data for both the coarse and the dense mesh. This demonstrates the capability of LES coupled with the TFSD approach in representing the highly unsteady premixed combustion observed in this configuration. The instantaneous flow pattern and turbulent flame behaviour are discussed, and the differences between the non-reacting and reacting flow are described through visualisation of vortical structures and their interaction with the flame. Lastly, the generation and destruction of FSD are evaluated by examining the individual terms in the FSD transport equation. Localised regions where straining, curvature and propagation are each dominant are observed, highlighting the importance of non-equilibrium effects of FSD generation and
Mellano, Erin M; Nakamura, Leah Y; Choi, Judy M; Kang, Diana C; Grisales, Tamara; Raz, Shlomo; Rodriguez, Larissa V
2016-01-01
Vaginal mesh complications necessitating excision are increasingly prevalent. We aim to study whether subclinical chronically infected mesh contributes to the development of delayed-onset mesh complications or recurrent urinary tract infections (UTIs). Women undergoing mesh removal from August 2013 through May 2014 were identified by surgical code for vaginal mesh removal. Only women undergoing removal of anti-incontinence mesh were included. Exclusion criteria included any women undergoing simultaneous prolapse mesh removal. We abstracted preoperative and postoperative information from the medical record and compared mesh culture results from patients with and without mesh extrusion, de novo recurrent UTIs, and delayed-onset pain. One hundred seven women with only anti-incontinence mesh removed were included in the analysis. Onset of complications after mesh placement was within the first 6 months in 70 (65%) of 107 and delayed (≥6 months) in 37 (35%) of 107. A positive culture from the explanted mesh was obtained from 82 (77%) of 107 patients, and 40 (37%) of 107 were positive with potential pathogens. There were no significant differences in culture results when comparing patients with delayed-onset versus immediate pain, extrusion with no extrusion, and de novo recurrent UTIs with no infections. In this large cohort of patients with mesh removed for a diverse array of complications, cultures of the explanted vaginal mesh demonstrate frequent low-density bacterial colonization. We found no differences in culture results from women with delayed-onset pain versus acute pain, vaginal mesh extrusions versus no extrusions, or recurrent UTIs using standard culture methods. Chronic prosthetic infections in other areas of medicine are associated with bacterial biofilms, which are resistant to typical culture techniques. Further studies using culture-independent methods are needed to investigate the potential role of chronic bacterial infections in delayed vaginal mesh
Mathematics of large eddy simulation of turbulent flows
Energy Technology Data Exchange (ETDEWEB)
Berselli, L.C. [Pisa Univ. (Italy). Dept. of Applied Mathematics ' ' U. Dini' ' ; Iliescu, T. [Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Mathematics; Layton, W.J. [Pittsburgh Univ., PA (United States). Dept. of Mathematics
2006-07-01
Large eddy simulation (LES) is a method of scientific computation seeking to predict the dynamics of organized structures in turbulent flows by approximating local, spatial averages of the flow. Since its birth in 1970, LES has undergone an explosive development and has matured into a highly-developed computational technology. It uses the tools of turbulence theory and the experience gained from practical computation. This book focuses on the mathematical foundations of LES and its models and provides a connection between the powerful tools of applied mathematics, partial differential equations and LES. Thus, it is concerned with fundamental aspects not treated so deeply in the other books in the field, aspects such as well-posedness of the models, their energy balance and the connection to the Leray theory of weak solutions of the Navier-Stokes equations. The authors give a mathematically informed and detailed treatment of an interesting selection of models, focusing on issues connected with understanding and expanding the correctness and universality of LES. This volume offers a useful entry point into the field for PhD students in applied mathematics, computational mathematics and partial differential equations. Non-mathematicians will appreciate it as a reference that introduces them to current tools and advances in the mathematical theory of LES. (orig.)
Simulation of fatigue crack growth under large scale yielding conditions
Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann
2010-07-01
A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack-tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis performed in ABAQUS is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated values of ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
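The linear correlation da/dN ∝ ΔCTOD lends itself to a simple cycle-by-cycle integration of crack length. The sketch below is a hedged illustration only: the small-scale-yielding estimate ΔJ ≈ Δσ²·π·a/E and the coefficients beta and dn are generic placeholders, not the paper's calibrated interpolation formula for large scale yielding.

```python
import math

def crack_growth(a0, cycles, delta_sigma, sigma_y, E, beta=1.0, dn=0.5):
    """Cycle-by-cycle integration of da/dN = beta * dCTOD, with the cyclic
    crack-tip opening estimated as dCTOD = dn * dJ / sigma_y and a simple
    dJ ~ dsigma^2 * pi * a / E estimate (placeholder coefficients, not the
    paper's generalized J-interpolation with crack closure)."""
    a = a0
    history = [a]
    for _ in range(cycles):
        d_j = delta_sigma**2 * math.pi * a / E
        d_ctod = dn * d_j / sigma_y
        a += beta * d_ctod          # growth increment for this cycle
        history.append(a)
    return history

# hypothetical steel-like values: stresses in Pa, crack length in m
h = crack_growth(a0=1e-4, cycles=5000, delta_sigma=400e6,
                 sigma_y=600e6, E=200e9)
```

Because da/dN is proportional to a here, the integrated crack length grows exponentially with cycle count, which is the qualitative behavior such ΔCTOD-based laws predict for microcracks.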
Contextual Compression of Large-Scale Wind Turbine Array Simulations
Energy Technology Data Exchange (ETDEWEB)
Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)
2017-12-04
Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
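The saliency-driven retention idea can be sketched with a single-level Haar transform: keep all detail coefficients inside a user-marked salient region, and only the largest few elsewhere. This is a deliberately tiny 1D toy; the NREL model uses multi-level wavelets, 3D blocks, and per-block budgets.

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar transform (length must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def contextual_compress(x, salient, keep_frac=0.1):
    """Keep every detail coefficient touching the salient mask; elsewhere
    keep only the largest keep_frac of details (toy version of block-wise,
    saliency-driven wavelet storage)."""
    a, d = haar_fwd(x)
    mask = salient[0::2] | salient[1::2]     # mask at coefficient resolution
    out = np.where(mask, d, 0.0)
    nonsal = np.where(~mask)[0]
    k = int(keep_frac * nonsal.size)
    if k > 0:
        top = nonsal[np.argsort(np.abs(d[nonsal]))[-k:]]
        out[top] = d[top]
    return haar_inv(a, out)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
salient = np.zeros(256, dtype=bool)
salient[96:160] = True                       # e.g. a turbine-wake region
y = contextual_compress(x, salient)
```

Inside the salient window the reconstruction is lossless (all of its coefficients are retained), while the surrounding context is smoothed, mirroring the lossless-wake / low-fidelity-atmosphere split the abstract describes.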
Large-eddy simulations of unidirectional water flow over dunes
Grigoriadis, D. G. E.; Balaras, E.; Dimas, A. A.
2009-06-01
The unidirectional, subcritical flow over fixed dunes is studied numerically using large-eddy simulation, while the immersed boundary method is implemented to incorporate the bed geometry. Results are presented for a typical dune shape and two Reynolds numbers, Re = 17,500 and Re = 93,500, on the basis of bulk velocity and water depth. The numerical predictions of velocity statistics at the low Reynolds number are in very good agreement with available experimental data. A primary recirculation region develops downstream of the dune crest at both Reynolds numbers, while a secondary region develops at the toe of the dune crest only for the low Reynolds number. Downstream of the reattachment point, on the dune stoss, the turbulence intensity in the developing boundary layer is weaker than in comparable equilibrium boundary layers. Coherent vortical structures are identified using the fluctuating pressure field and the second invariant of the velocity gradient tensor. Vorticity is primarily generated at the dune crest in the form of spanwise "roller" structures. Roller structures dominate the flow dynamics near the crest, and are responsible for perturbing the boundary layer downstream of the reattachment point, which leads to the formation of "horseshoe" structures. Horseshoe structures dominate the near-wall dynamics after the reattachment point, do not rise to the free surface, and are distorted by the shear layer of the next crest. The occasional interaction between roller and horseshoe structures generates tube-like "kolk" structures, which rise to the free surface and persist for a long time before attenuating.
Enabling parallel simulation of large-scale HPC network systems
International Nuclear Information System (INIS)
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip
2016-01-01
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations
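The parallel discrete-event machinery this record relies on sits atop a simple abstraction: a priority queue of timestamped events. The sketch below is the conservative serial baseline only; optimistic engines such as ROSS add speculative execution and rollback on top of the same abstraction, and the network model, hop count, and latency here are hypothetical.

```python
import heapq

class EventSim:
    """Minimal sequential discrete-event kernel: events are (time, seq, fn)
    tuples processed in timestamp order; seq breaks ties deterministically."""
    def __init__(self):
        self.now = 0.0
        self._q = []
        self._seq = 0

    def schedule(self, delay, fn):
        heapq.heappush(self._q, (self.now + delay, self._seq, fn))
        self._seq += 1

    def run(self):
        while self._q:
            self.now, _, fn = heapq.heappop(self._q)
            fn(self)

# model a packet traversing 3 router hops at 0.5 time units per hop
arrivals = []
def hop(n):
    def handler(sim):
        arrivals.append((n, sim.now))
        if n < 3:
            sim.schedule(0.5, hop(n + 1))
    return handler

sim = EventSim()
sim.schedule(0.5, hop(1))
sim.run()
```

A flit-level network model schedules one such event per flit per link; the scalability problem the abstract addresses is processing these queues in parallel without violating timestamp order.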
Simulating large-scale spiking neuronal networks with NEST
Schücker, Jannis; Eppler, Jochen Martin
2014-01-01
The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...
Energy Technology Data Exchange (ETDEWEB)
Murakami, Y.; Shi, B. [Geological Survey of Japan, Tsukuba (Japan); Matsushima, J. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering
1997-05-27
Large deformation of the crust is generated by relatively large displacement of the media on both sides of a fault. In the conventional finite element method, faults are dealt with by special elements called joint elements, but joint elements, being microscopic in width, generate numerical instability if a large shear displacement is imposed. Therefore, by introducing the master-slave (MO) method used for contact analysis in the metal processing field, a large-deformation simulator was developed for analyzing diastrophism including large displacement along the fault. Analysis examples are shown in which the upper and lower basements are displaced relative to each other across the fault. The bottom surface and right-end boundary of the lower basement are fixed boundaries. The left-end boundary of the lower basement is fixed, and a horizontal speed of 3×10⁻⁷ m/s was applied to the left-end boundary of the upper basement. In accordance with the horizontal movement of the upper basement, the boundary surface deformed greatly. Stress is almost at right angles to the boundary surface. The MO-method fault analysis has so far been applied to a single simple fault, but should be extended to many faults in the future. 13 refs., 2 figs.
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
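The discovery step for rigid-body-equivalent zones described above amounts to fitting a rotation and translation between two point sets and checking the residual. A common way to do this is the Kabsch (SVD-based) fit; the sketch below is an illustration in that spirit, with hypothetical mesh data, and is not the paper's algorithm.

```python
import numpy as np

def rigid_transform(src, dst, tol=1e-9):
    """Test whether dst is a rigid-body transform of src using the Kabsch
    (SVD) fit on corresponding vertices; returns (R, t) such that
    dst ~ src @ R.T + t if the max residual is below tol, else None."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det(R) = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    resid = np.max(np.abs((src @ R.T + t) - dst))
    return (R, t) if resid < tol else None

rng = np.random.default_rng(1)
mesh = rng.standard_normal((50, 3))          # hypothetical zone vertices
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = mesh @ Rz.T + np.array([1.0, 2.0, 3.0])
result = rigid_transform(mesh, moved)
```

Once such a fit succeeds, the second zone (or later time step) can be stored as a reference mesh plus the recovered (R, t), which is exactly the substitution that enables the speed-ups the abstract reports.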
Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.
Energy Technology Data Exchange (ETDEWEB)
Smith, William S. [Los Alamos National Laboratory; Bull, Jeffrey S. [Los Alamos National Laboratory; Wilcox, Trevor [Los Alamos National Laboratory; Bos, Randall J. [Los Alamos National Laboratory; Shao, Xuan-Min [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory; Costigan, Keeley R. [Los Alamos National Laboratory
2012-08-13
In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for an electrically-large urban environment uses: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) since FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy, we use a higher-order low-dispersion, isotropic FDTD algorithm for EMP propagation.
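The standard (second-order Yee) FDTD update at the core of such solvers fits in a few lines; its numerical dispersion at Courant numbers below the magic value is exactly what motivates the higher-order scheme mentioned above. The 1D form, normalized units, grid size, and soft Gaussian source below are illustrative assumptions, not the Los Alamos solver.

```python
import numpy as np

def fdtd_1d(nx=400, nt=500, courant=0.5):
    """1D Yee-scheme FDTD update for Ez/Hy in vacuum (normalized units,
    PEC ends). courant = c*dt/dx must be <= 1 for stability in 1D; values
    below 1 introduce the numerical dispersion the abstract mentions
    (at courant = 1 the 1D scheme is dispersion-free)."""
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)
    src = nx // 4
    for n in range(nt):
        hy += courant * (ez[1:] - ez[:-1])              # Faraday update
        ez[1:-1] += courant * (hy[1:] - hy[:-1])        # Ampere update
        ez[src] += np.exp(-((n - 30) / 10.0) ** 2)      # soft Gaussian source
    return ez

ez = fdtd_1d()
```

In 2D and 3D no Courant number removes the dispersion, and the error is direction-dependent (anisotropic), which is why electrically large urban domains push practitioners toward higher-order low-dispersion stencils.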
Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator
Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.
2018-02-01
Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.
Large-eddy simulation of unidirectional turbulent flow over dunes
Omidyeganeh, Mohammad
We performed large eddy simulations of the flow over a series of two- and three-dimensional dune geometries at laboratory scale using the Lagrangian dynamic eddy-viscosity subgrid-scale model. First, we studied the flow over a standard 2D transverse dune geometry; then bedform three-dimensionality was imposed. Finally, we investigated the turbulent flow over barchan dunes. The results are validated by comparison with simulations and experiments for the 2D dune case, while the results for the 3D dunes are validated qualitatively against experiments. The flow over transverse dunes separates at the dune crest, generating a shear layer that plays a crucial role in the transport of momentum and energy, as well as in the generation of coherent structures. Spanwise vortices are generated in the separated shear layer; as they are advected, they undergo lateral instabilities, develop into horseshoe-like structures, and finally reach the surface. The ejection that occurs between the legs of a vortex creates the upwelling and downdrafting events on the free surface known as "boils". The three-dimensional separation of the flow at the crestline alters the distribution of wall pressure, which may cause secondary flow across the stream. The mean flow is characterized by a pair of counter-rotating streamwise vortices, with core radii of the order of the flow depth. Staggering the crestlines alters the secondary motion; two pairs of streamwise vortices appear (a strong one, centred about the lobe, and a weaker one, coming from the previous dune, centred about the saddle). The flow over barchan dunes differs significantly from that over transverse dunes. The flow near the bed, upstream of the dune, diverges from the centerline plane; the flow close to the centerline plane separates at the crest and reattaches on the bed. Away from the centerline plane and along the horns, flow separation occurs intermittently. The flow in the separation bubble is routed towards the horns and leaves
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or the location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is given to applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservation properties of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the refinement criteria are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable
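The refine step of such an adaptive scheme can be illustrated with a minimal sketch. The 1-D finite-volume grid, the neighbour-jump indicator, and all names below are our own illustrative assumptions, not the solver's actual implementation; the point is that children inheriting the parent's cell average makes the refinement step itself conservative:

```python
# Hypothetical sketch of gradient-based adaptive refinement on a 1-D
# finite-volume grid. Cells whose solution jump to a neighbour exceeds a
# threshold are split in two; the indicator and names are illustrative.

def refine(cells, values, threshold):
    """cells: list of (x_left, x_right); values: cell averages."""
    new_cells, new_values = [], []
    for i, (cell, v) in enumerate(zip(cells, values)):
        jumps = []
        if i > 0:
            jumps.append(abs(v - values[i - 1]))
        if i < len(values) - 1:
            jumps.append(abs(v - values[i + 1]))
        if jumps and max(jumps) > threshold:
            xl, xr = cell
            xm = 0.5 * (xl + xr)
            # children inherit the parent average, so the refinement step
            # conserves the integral of the solution exactly
            new_cells += [(xl, xm), (xm, xr)]
            new_values += [v, v]
        else:
            new_cells.append(cell)
            new_values.append(v)
    return new_cells, new_values
```

Running this on a step profile splits only the cells adjacent to the discontinuity, leaving smooth regions coarse.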
Energy Technology Data Exchange (ETDEWEB)
Lartigue, G.
2004-11-15
New European laws on pollutant emissions impose ever more constraints on engine makers. This is particularly true for gas turbine manufacturers, who must design engines operating with very fuel-lean mixtures. Doing so significantly reduces pollutant formation, but raises the problem of combustion stability: combustion regimes with a large excess of air are naturally more sensitive to combustion instabilities. Numerical prediction of these instabilities is thus a key issue for many industries involved in energy production. This thesis aims to show that recent numerical tools are now able to predict these combustion instabilities. In particular, the Large Eddy Simulation method, when implemented in a compressible CFD code, is able to take into account the main processes involved in combustion instabilities, such as acoustics and flame/vortex interaction. This work describes a new formulation of a Large Eddy Simulation numerical code that accounts very precisely for the thermodynamics and chemistry that are essential in combustion phenomena. A validation of this work is presented in a complex geometry (the PRECCINSTA burner). Our numerical results are successfully compared with experimental data gathered at DLR Stuttgart (Germany). Moreover, a detailed analysis of the acoustics in this configuration is presented, as well as of its interaction with the combustion. For this acoustic analysis, another CERFACS code, the Helmholtz solver AVSP, has been used extensively. (author)
A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation
DEFF Research Database (Denmark)
Breton, Simon-Philippe; Sumner, J.; Sørensen, Jens Nørkær
2017-01-01
Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple ...
... knitted mesh or non-knitted sheet forms. The synthetic materials used can be absorbable, non-absorbable, or a combination of absorbable and non-absorbable materials. Animal-derived meshes are made of animal tissue, such as intestine or skin, that has been processed and disinfected to be ...
Large eddy simulation of the subcritical flow over a V grooved circular cylinder
International Nuclear Information System (INIS)
Alonzo-García, A.; Gutiérrez-Torres, C. del C.; Jiménez-Bernal, J.A.
2015-01-01
Highlights: • We numerically compared the turbulent flow over a smooth circular cylinder and a V-grooved cylinder in the subcritical regime. • Turbulence intensities in both the streamwise and normal directions were attenuated. • The swirl structures on the groove peaks appeared to behave cyclically. • The evolution of the flow inside the grooves showed that swirl structures located at the peaks were elongated in the normal direction. • The secondary vortex structures formed in the near wake of the grooved cylinder were smaller than those in the smooth-cylinder flow. - Abstract: In this paper, a comparative numerical study of the subcritical flow over a smooth cylinder and a cylinder with V grooves (Re = 140,000) is presented. The implemented technique was Large Eddy Simulation (LES), which, in accordance with Kolmogorov's theory, directly resolves the most energetic large eddies and models the smallest, high-frequency ones, which are considered universal. The Navier-Stokes (N-S) equations were solved using the commercial software ANSYS FLUENT V.12.1, which applies the finite volume method (FVM) to discretize these equations in their unsteady and incompressible forms. The grid densities were 2.6 million cells and 13.5 million cells for the smooth and V-grooved cylinder, respectively. Both meshes were composed of structured hexahedral cells, and close to the cylinder walls additional refinements were employed in order to obtain y+ < 5. All cases were simulated for at least 15 vortex shedding cycles with the aim of obtaining significant statistical data. Results showed that for both cases (smooth and V-grooved cylinder flow), the numerical code was capable of reproducing the most important physical quantities of the subcritical regime. The velocity distribution and turbulence intensity in the flow direction suffered a slight attenuation along the wake as a consequence of the grooves' perturbation, which also caused an increase in the pressure coefficient
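The y+ < 5 near-wall requirement above translates into a first-cell height that can be estimated before meshing. The sketch below uses a flat-plate skin-friction correlation as a rough stand-in for the cylinder boundary layer; the correlation, the function name, and the default viscosity are our assumptions for a pre-meshing estimate only:

```python
import math

def first_cell_height(y_plus, U, D, nu=1.5e-5):
    """Estimate the wall-normal height of the first cell for a target y+.

    Uses an assumed flat-plate skin-friction correlation, cf = 0.026/Re^(1/7),
    as a stand-in for the cylinder boundary layer (pre-meshing estimate only).
    """
    Re = U * D / nu
    cf = 0.026 / Re ** (1.0 / 7.0)
    u_tau = math.sqrt(0.5 * cf) * U  # friction velocity from cf = 2 (u_tau/U)^2
    return y_plus * nu / u_tau       # y+ = y u_tau / nu, solved for y
```

For Re = 140,000 this gives first-cell heights well below a percent of the diameter, consistent with the aggressive near-wall refinement reported above.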
Seeking new surgical predictors of mesh exposure after transvaginal mesh repair.
Wu, Pei-Ying; Chang, Chih-Hung; Shen, Meng-Ru; Chou, Cheng-Yang; Yang, Yi-Ching; Huang, Yu-Fang
2016-10-01
The purpose of this study was to explore new preventable risk factors for mesh exposure. A retrospective review was conducted of 92 consecutive patients treated with transvaginal mesh (TVM) in the urogynecological unit of our university hospital. An analysis of perioperative predictors was conducted in patients after vaginal repairs using a type 1 mesh. Mesh complications were recorded according to International Urogynecological Association (IUGA) definitions. Mesh-exposure-free durations were calculated using the Kaplan-Meier method and compared between different closure techniques using the log-rank test. Hazard ratios (HR) of predictors for mesh exposure were estimated by univariate and multivariate analyses using Cox proportional hazards regression models. The median surveillance interval was 24.1 months. Two late occurrences were found beyond 1 year post operation. No statistically significant correlation was observed between mesh exposure and concomitant hysterectomy. Exposure risks were significantly higher in patients with interrupted whole-layer closure in the univariate analysis. In the multivariate analysis, hematoma [HR 5.42, 95 % confidence interval (CI) 1.26-23.35, P = 0.024], Prolift mesh (HR 5.52, 95 % CI 1.15-26.53, P = 0.033), and interrupted whole-layer closure (HR 7.02, 95 % CI 1.62-30.53, P = 0.009) were the strongest predictors of mesh exposure. The findings indicate that the risks of mesh exposure and reoperation may be reduced by avoiding hematoma, a large amount of mesh, or interrupted whole-layer closure in TVM surgeries. If these risk factors are prevented, hysterectomy may not be a relative contraindication for TVM use. We also provide evidence regarding mesh exposure and the necessity of more than 1 year of follow-up and of preoperative counselling.
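The Kaplan-Meier method used above for mesh-exposure-free durations is the textbook product-limit estimator, which can be sketched compactly; the variable names are illustrative and this is not the study's statistics code:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.

    times  : follow-up time for each patient
    events : 1 if the event (e.g. mesh exposure) was observed, 0 if censored
    Returns a list of (time, survival probability) steps at event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, steps = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = c = 0  # events and censorings at this time
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            surv *= 1.0 - d / n_at_risk  # multiply by conditional survival
            steps.append((t, surv))
        n_at_risk -= d + c               # censored patients leave the risk set
    return steps
```

Censored patients reduce the risk set without producing a step, which is what distinguishes this estimate from a naive event fraction.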
International Nuclear Information System (INIS)
Li Hanyu; Zhou Haijing; Dong Zhiwei; Liao Cheng; Chang Lei; Cao Xiaolin; Xiao Li
2010-01-01
A large-scale parallel electromagnetic field simulation program, JEMS-FDTD (J Electromagnetic Solver-Finite Difference Time Domain), is designed and implemented on JASMIN (J parallel Adaptive Structured Mesh applications INfrastructure). This program can simulate the propagation, radiation, and coupling of electromagnetic fields by solving the Maxwell equations explicitly on a structured mesh with the FDTD method. JEMS-FDTD is able to simulate billion-mesh-scale problems on thousands of processors. In this article, the program is verified by simulating the radiation of an electric dipole. A beam waveguide is simulated to demonstrate the capability of large-scale parallel computation. A parallel performance test indicates that a high parallel efficiency is obtained. (authors)
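The explicit FDTD update applied on a structured mesh reduces, in one dimension, to the classic Yee leapfrog scheme. The sketch below (normalized vacuum units, Courant number 0.5, soft Gaussian source, reflecting ends) is a generic illustration of that update, not JEMS-FDTD code:

```python
import math

def fdtd_1d(nx=200, nt=400, src=100):
    """Minimal 1-D Yee-scheme FDTD loop in normalized vacuum units."""
    ez = [0.0] * nx  # electric field at integer grid points
    hy = [0.0] * nx  # magnetic field at half-integer points
    for n in range(nt):
        # magnetic-field half step (Courant number 0.5)
        for i in range(nx - 1):
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        # electric-field half step
        for i in range(1, nx):
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        # soft Gaussian source injected at one cell
        ez[src] += math.exp(-((n - 30) ** 2) / 100.0)
    return ez
```

The leapfrog staggering in space and time is what makes the scheme explicit and second-order accurate; parallelizing it amounts to domain decomposition with halo exchange of the boundary field values.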
DEFF Research Database (Denmark)
Sigurdsson, Haftor Örn; Kær, Søren Knudsen
2012-01-01
Steam reforming of hydrocarbons using a catalytic plate-type-heat-exchanger (CPHE) reformer is an attractive method of producing hydrogen for a fuel cell-based micro combined-heat-and-power system. In this study the flow distribution in a CPHE reformer, which uses a coated wire-mesh catalyst...
Gleam: the GLAST Large Area Telescope Simulation Framework
Boinee, P; De Angelis, Alessandro; Favretto, Dario; Frailis, Marco; Giannitrapani, Riccardo; Milotti, Edoardo; Longo, Francesco; Brigida, Monica; Gargano, Fabio; Giglietto, Nicola; Loparco, Francesco; Mazziotta, Mario Nicola; Cecchi, Claudia; Lubrano, Pasquale; Pepe, Monica; Baldini, Luca; Cohen-Tanugi, Johann; Kuss, Michael; Latronico, Luca; Omodei, Nicola; Spandre, Gloria; Bogart, Joanne R.; Dubois, Richard; Kamae, Tune; Rochester, Leon; Usher, Tracy; Burnett, Thompson H.; Robinson, Sean M.; Bastieri, Denis; Rando, Riccardo
2003-01-01
This paper presents the simulation of the GLAST high energy gamma-ray telescope. The simulation package, written in C++, is based on the Geant4 toolkit, and it is integrated into a general framework used to process events. A detailed simulation of the electronic signals inside the silicon detectors has been provided and is used for particle tracking, which is handled by dedicated software. A unique repository for the geometrical description of the detector has been realized using the XML language, and a C++ library to access this information has been designed and implemented.
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1994-01-01
We calculate reduced moments $\bar\xi_q$ of the matter density fluctuations, up to order $q = 5$, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra $P(k) \propto k^n$ with indices $n = -3, -2, -1, 0, 1$. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance $\bar\xi_2$ alone. We look for agreement with the hierarchical scaling ansatz, according to which $\bar\xi_q \propto \bar\xi_2^{\,q-1}$. For models with $n \le -2$, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small residual signal of deviation from the hierarchical scaling is, however, also found in models with $n \ge -1$. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation theory result.
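Reduced moments can be estimated from counts in cells with shot-noise corrections via factorial moments, a standard device for Poisson-sampled fields; the helper names below are ours, and the sketch covers only $q = 2, 3$:

```python
# Shot-noise-corrected reduced moments from counts in cells via factorial
# moments F_q = <N(N-1)...(N-q+1)> / <N>^q, for which Poisson sampling
# gives F_2 = 1 + xi2 and F_3 = 1 + 3*xi2 + xi3. Helper names are ours.

def factorial_moment(counts, q):
    """<N(N-1)...(N-q+1)> / <N>**q over a list of cell counts."""
    mean = sum(counts) / len(counts)
    tot = 0.0
    for n in counts:
        p = 1.0
        for k in range(q):
            p *= n - k
        tot += p
    return tot / len(counts) / mean ** q

def xi_bar_2(counts):
    return factorial_moment(counts, 2) - 1.0

def xi_bar_3(counts):
    return factorial_moment(counts, 3) - 3.0 * factorial_moment(counts, 2) + 2.0
```

Checking the hierarchical ansatz then amounts to comparing log(xi_bar_q) against (q - 1) * log(xi_bar_2) over cells of varying size.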
Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers
Wu, Xingfu; Duan, Benchun; Taylor, Valerie
2011-01-01
, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over past several decades. In particular
An Agent Based Collaborative Simplification of 3D Mesh Model
Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro
Large-volume mesh models face challenges in fast rendering and transmission over the Internet. Mesh models obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the mobile-C development platform. Communication among distributed agents includes grabbing images of the visualized mesh model, annotating the grabbed images, and instant messaging. Remote, collaborative simplification can be conducted efficiently over the Internet.
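One common simplification primitive such a pipeline might apply is vertex clustering: snap vertices to a coarse grid, merge those that coincide, and drop the triangles that degenerate. This generic sketch is our own illustration, not the agent-based system of the paper:

```python
# Vertex-clustering mesh simplification sketch: vertices are snapped to a
# coarse grid of spacing `cell`, merged per grid cell, and triangles that
# collapse to fewer than three distinct vertices are discarded.

def simplify(vertices, triangles, cell=1.0):
    """vertices: list of (x, y, z); triangles: list of index triples."""
    key_to_new, new_vertices, remap = {}, [], {}
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in key_to_new:
            key_to_new[key] = len(new_vertices)
            new_vertices.append((x, y, z))  # keep first representative
        remap[i] = key_to_new[key]
    new_triangles = []
    for a, b, c in triangles:
        a2, b2, c2 = remap[a], remap[b], remap[c]
        if a2 != b2 and b2 != c2 and a2 != c2:  # drop degenerate triangles
            new_triangles.append((a2, b2, c2))
    return new_vertices, new_triangles
```

Larger `cell` values merge more aggressively, trading geometric fidelity for a smaller model to render and transmit.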
Large-eddy simulation of flow over a cylinder with from to : a skin-friction perspective
Cheng, Wan
2017-05-05
We present wall-resolved large-eddy simulations (LES) of flow over a smooth-wall circular cylinder up to , where is Reynolds number based on the cylinder diameter and the free-stream speed . The stretched-vortex subgrid-scale (SGS) model is used in the entire simulation domain. For the sub-critical regime, six cases are implemented with . Results are compared with experimental data for both the wall-pressure-coefficient distribution on the cylinder surface, which dominates the drag coefficient, and the skin-friction coefficient, which clearly correlates with the separation behaviour. In the super-critical regime, LES for three values of are carried out at different resolutions. The drag-crisis phenomenon is well captured. For lower resolution, numerical discretization fluctuations are sufficient to stimulate transition, while for higher resolution, an applied boundary-layer perturbation is found to be necessary to stimulate transition. Large-eddy simulation results at , with a mesh of , agree well with the classic experimental measurements of Achenbach (J. Fluid Mech., vol. 34, 1968, pp. 625-639) especially for the skin-friction coefficient, where a spike is produced by the laminar-turbulent transition on the top of a prior separation bubble. We document the properties of the attached-flow boundary layer on the cylinder surface as these vary with . Within the separated portion of the flow, mean-flow separation-reattachment bubbles are observed at some values of , with separation characteristics that are consistent with experimental observations. Time sequences of instantaneous surface portraits of vector skin-friction trajectory fields indicate that the unsteady counterpart of a mean-flow separation-reattachment bubble corresponds to the formation of local flow-reattachment cells, visible as coherent bundles of diverging surface streamlines.
Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo
2017-12-01
A constrained Delaunay discretization method is developed to generate high-quality, doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of the elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
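Persson's spring analogy is easiest to see in one dimension: mesh points joined by bars whose rest lengths follow a target spacing function h(x), relaxed toward force equilibrium. The toy version below, with our own naming and parameter choices (compression factor 1.2, explicit pseudo-time stepping), illustrates the idea only and is not the enriched algorithm of the paper:

```python
# 1-D sketch of the spring analogy behind Persson's algorithm: bars have
# rest lengths proportional to h(x) at their midpoints, scaled to slightly
# exceed the domain so every bar pushes; points relax to equilibrium.

def relax_points(n, h, iters=500, dt=0.2):
    """Distribute n points on [0, 1] with target spacing h(x)."""
    pts = [i / (n - 1) for i in range(n)]
    for _ in range(iters):
        lengths = [pts[i + 1] - pts[i] for i in range(n - 1)]
        mids = [0.5 * (pts[i] + pts[i + 1]) for i in range(n - 1)]
        want = [h(m) for m in mids]
        total = sum(want)
        # rest lengths scaled to 1.2x the domain so all bars stay compressed
        rest = [1.2 * w / total for w in want]
        force = [r - l for r, l in zip(rest, lengths)]
        for i in range(1, n - 1):
            # a compressed bar pushes its endpoints apart
            pts[i] += dt * (force[i - 1] - force[i])
        pts[0], pts[-1] = 0.0, 1.0  # pin the boundary points
    return pts
```

At equilibrium the bar forces balance, so element lengths become proportional to h at their midpoints, giving the smooth density gradient the abstract describes.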
Real time simulation of large systems on mini-computer
International Nuclear Information System (INIS)
Nakhle, Michel; Roux, Pierre.
1979-01-01
Most simulation languages will only accept an explicit formulation of differential equations, and logical variables hold no special status therein. The step size of the integration methods they offer is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step size is not limited by the time constants of the model. This, together with strong optimization of the time and memory resources of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1.
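The claim that an implicit method's step size need not be limited by the smallest time constant can be demonstrated on a stiff linear test equation, y' = -lam (y - cos t): backward Euler stays stable with steps hundreds of times larger than 1/lam, where an explicit method would diverge. This toy sketch is ours, not NEPTUNIX code:

```python
import math

def backward_euler(lam=1000.0, dt=0.1, nt=50):
    """Backward Euler on the stiff test problem y' = -lam * (y - cos t).

    Here dt = 0.1 is 100x the time constant 1/lam = 0.001; explicit Euler
    at this step would have amplification factor |1 - dt*lam| = 99.
    """
    y = 0.0
    out = []
    for n in range(1, nt + 1):
        t = n * dt
        # implicit update y_new = y + dt*(-lam)*(y_new - cos t),
        # solved in closed form for this linear problem
        y = (y + dt * lam * math.cos(t)) / (1.0 + dt * lam)
        out.append(y)
    return out
```

The solution tracks cos(t) closely despite the huge step, which is exactly the freedom a stiff integrator buys.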
Manufacturing Process Simulation of Large-Scale Cryotanks
Babai, Majid; Phillips, Steven; Griffin, Brian
2003-01-01
NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the various approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing
Context-Based Topology Control for Wireless Mesh Networks
Directory of Open Access Journals (Sweden)
Pragasen Mudali
2016-01-01
Topology Control has been shown to provide several benefits to wireless ad hoc and mesh networks. However, these benefits have largely been demonstrated using simulation-based evaluations. In this paper, we demonstrate the negative impact that the PlainTC Topology Control prototype has on topology stability. This instability is found to be caused by the large number of transceiver power adjustments undertaken by the prototype. A context-based solution is offered to reduce the number of transceiver power adjustments undertaken without sacrificing the cumulative transceiver power savings and spatial reuse advantages gained from employing Topology Control in an infrastructure wireless mesh network. We propose the context-based PlainTC+ prototype and show that incorporating context information in the transceiver power adjustment process significantly reduces topology instability. In addition, improvements to network performance arising from the improved topology stability are also observed. In future work, real-time context-awareness will be added to PlainTC+, and the scheme will be prototyped in a planned software-defined wireless mesh network test-bed.
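One simple way to suppress excess transceiver power adjustments, in the spirit of the context-based filtering described above, is a hysteresis margin: only apply a newly computed target power if it differs from the current setting by more than a threshold. The margin policy and names below are our own assumptions, not the PlainTC+ algorithm:

```python
# Hysteresis-based filtering of transceiver power adjustments: a new
# target power is applied only if it differs from the current setting by
# more than `margin`. This is a generic illustrative policy.

def adjust_with_hysteresis(targets, start=0.0, margin=2.0):
    """Return (final_power, number_of_adjustments) for a target sequence."""
    power, changes = start, 0
    for t in targets:
        if abs(t - power) > margin:
            power = t
            changes += 1
    return power, changes
```

With a nonzero margin, small oscillations in the computed target produce no adjustments at all, which is the stability benefit the paper attributes to context-aware filtering.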
Large eddy simulation of turbulent mixing in a T-junction
International Nuclear Information System (INIS)
Kim, Jung Woo
2010-12-01
In this report, large eddy simulation was performed in order to further improve our understanding of the physics of turbulent mixing in a T-junction, which has recently come to be regarded as one of the most important problems in nuclear thermal-hydraulics safety. The large eddy simulation technique and the other numerical methods used in this study are presented in Sec. 2, the numerical results obtained from the large eddy simulation are described in Sec. 3, and a summary is given in Sec. 4
Energy Technology Data Exchange (ETDEWEB)
Scherg-Kurmes, Harald; Hafez, Ahmad; Szyszka, Bernd [Technische Universitaet Berlin, Einsteinufer 25, 10587, Berlin (Germany); Siemers, Michael; Pflug, Andreas [Fraunhofer IST, Bienroder Weg 54E, 38108, Braunschweig (Germany); Schlatmann, Rutger [Helmholtz Zentrum Berlin, PVcomB, Schwarzschildstr. 3, 12489, Berlin (Germany); Rech, Bernd [Helmholtz Zentrum Berlin, Institute for Silicon Photovoltaics, Kekulestrasse 5, 12489, Berlin (Germany)
2016-09-15
Hydrogen-doped indium oxide (IOH) is a transparent conductive oxide offering great potential for optoelectronic applications because of its high mobility of over 100 cm{sup 2} V{sup -1}s{sup -1}. In films deposited statically by RF magnetron sputtering, a small area directly opposite the target center with a higher resistivity and lower crystallinity than the rest of the film has been found via Hall and XRD measurements, which we attribute to plasma damage. In order to investigate the distribution of particle energies during the sputtering process, we have simulated the RF sputtering deposition of IOH by particle-in-cell Monte Carlo (PICMC) simulation. At the surface of ceramic sputtering targets, negatively charged oxygen ions are created. These ions are accelerated toward the substrate in the plasma sheath, with energies up to 150 eV. They damage the growing film and reduce its crystallinity. The influence of a negatively biased mesh inside the sputtering chamber on particle energies and distributions has been simulated and investigated. We found that the mesh decreased the high-energy oxygen ion density at the substrate, thus enabling more homogeneous IOH film growth. The theoretical results have been verified by X-ray diffractometry (XRD), 4-point probe, and Hall measurements of statically deposited IOH films on glass. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Grid adaptation using chimera composite overlapping meshes
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communication through the boundary interfaces between the separated grids is carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
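The inter-grid communication described above reduces to trilinear interpolation of donor-cell corner values. A minimal sketch on a unit cell (variable names ours) shows the three successive 1-D linear interpolations:

```python
def trilinear(c, x, y, z):
    """Trilinear interpolation in a unit cube.

    c[i][j][k] holds the corner values at (i, j, k) with i, j, k in {0, 1};
    (x, y, z) are local coordinates in [0, 1]^3 within the donor cell.
    """
    # interpolate along x on the four x-aligned cube edges
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # then along y on the two remaining faces
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    # finally along z
    return c0 * (1 - z) + c1 * z
```

The scheme is exact for any field that is linear in each coordinate, which is why it is a natural choice for transferring smooth solution values between overset grids.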
Grid adaption using Chimera composite overlapping meshes
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. A newer technique, adaptive mesh refinement (AMR), allows local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the mesh, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
Modeling and Simulation Techniques for Large-Scale Communications Modeling
National Research Council Canada - National Science Library
Webb, Steve
1997-01-01
.... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the nonconforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. The method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
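The idea of decomposing a high-dimensional projection into successive lower-dimensional steps can be sketched for a tensor-product operator: applying A (x) B to the values U of a 2-D element as A @ U @ B^T avoids ever forming the large Kronecker matrix. The matrices here are small illustrative examples, not actual mortar projection operators:

```python
# Two-step application of a tensor-product projection A (x) B to element
# values U: project along one index, then the other, i.e. V = A @ U @ B^T.
# This avoids forming the (m*n) x (m*n) Kronecker matrix explicitly.

def matmul(A, B):
    """Plain dense matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def project_two_step(A, B, U):
    """Compute V with V[i][k] = sum_{j,l} A[i][j] * B[k][l] * U[j][l]."""
    T = matmul(A, U)                       # step 1: project the first index
    Bt = [list(col) for col in zip(*B)]    # transpose of B
    return matmul(T, Bt)                   # step 2: project the second index
```

For m x m and n x n factors, the two-step form costs O(mn(m + n)) operations instead of O(m^2 n^2), which is the same saving the intermediate-mortar construction exploits in 3-D.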
Entropic Lattice Boltzmann: an implicit Large-Eddy Simulation?
Tauzin, Guillaume; Biferale, Luca; Sbragaglia, Mauro; Gupta, Abhineet; Toschi, Federico; Ehrhardt, Matthias; Bartel, Andreas
2017-11-01
We study the modeling of turbulence implied by the unconditionally stable Entropic Lattice Boltzmann Method (ELBM). We first focus on 2D homogeneous turbulence, for which we conduct numerical simulations for a wide range of relaxation times τ. For these simulations, we analyze the effective viscosity obtained by numerically differentiating the kinetic energy and enstrophy balance equations averaged over sub-domains of the computational grid. We aim at understanding the behavior of the implied sub-grid scale model and verify a formulation previously derived using a Chapman-Enskog expansion. These ELBM benchmark simulations are thus useful for understanding the range of validity of ELBM as a turbulence model. Finally, we discuss an extension of the previously obtained results to the 3D case. Supported by the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sklodowska-Curie Grant Agreement No. 642069 and by the European Research Council under the ERC Grant Agreement No. 339032.
Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers
Wu, Xingfu
2011-01-01
Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.
MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.
Mao, Yuqing; Lu, Zhiyong
2017-04-17
MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are often not indexed until 2 or 3 months after they appear) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F1-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that MeSH Now is capable of processing PubMed-scale document collections within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/.
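The candidate-generation-plus-ranking pipeline described above can be sketched in a few lines. The terms, feature values, and weights below are entirely made up for illustration; a real system would learn the weights from indexed MEDLINE articles and use far richer features:

```python
# Hypothetical candidate MeSH terms with made-up feature vectors, e.g.
# [text-similarity score, k-nearest-neighbour vote, appears-in-title flag]
candidates = {
    "Humans":     [0.9, 0.8, 1.0],
    "Neoplasms":  [0.7, 0.1, 0.0],
    "Algorithms": [0.6, 0.9, 1.0],
    "Mice":       [0.1, 0.0, 0.0],
}
weights = [0.5, 0.3, 0.2]   # a learned linear ranker would supply these

def rank_terms(cands, w, k=3):
    """Score each candidate with a linear model and keep the top k."""
    score = lambda feats: sum(wi * fi for wi, fi in zip(w, feats))
    return sorted(cands, key=lambda term: -score(cands[term]))[:k]

print(rank_terms(candidates, weights))
```

The post-processing module in the paper then decides how many of the ranked terms to keep per article; a fixed `k` is the simplest stand-in for that step.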
A nonlinear equivalent circuit method for analysis of passive intermodulation of mesh reflectors
Directory of Open Access Journals (Sweden)
Jiang Jie
2014-08-01
Passive intermodulation (PIM) has gradually become a serious source of electromagnetic interference due to the development of high-power and high-sensitivity RF/microwave communication systems, especially large deployable mesh reflector antennas. This paper proposes a field-circuit coupling method to analyze the PIM level of mesh reflectors. With the existence of many metal–metal (MM) contacts in mesh reflectors, contact nonlinearity becomes the main source of PIM generation. To analyze these potential PIM sources, an equivalent circuit model including nonlinear components is constructed to model a single MM contact so that the transient current through the MM contact point induced by incident electromagnetic waves can be calculated. Taking the electric current as a new electromagnetic wave source, the far-field scattering can be obtained by the use of electromagnetic numerical methods or the communication link method. Finally, a comparison between simulation and experimental results is presented to verify the validity of the proposed method.
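The mechanism can be illustrated numerically: pass a two-tone excitation through a weakly nonlinear contact model and look for spectral lines at the intermodulation frequencies. The polynomial I-V curve and all tone parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 90.0, 100.0                 # two carrier tones (illustrative)
v = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Hypothetical weakly nonlinear contact: i = g1*v + g3*v**3
i = 1.0 * v + 0.05 * v ** 3

spec = np.abs(np.fft.rfft(i)) / t.size          # amplitude spectrum (1 Hz bins)

# Third-order intermodulation products appear at 2*f1 - f2 and 2*f2 - f1
for f in (2 * f1 - f2, 2 * f2 - f1):
    print(f"{f:.0f} Hz: {spec[int(round(f))]:.4f}")
```

A purely linear contact (`g3 = 0`) produces no lines at 80 Hz or 110 Hz; the cubic term alone creates them, which is why MM contact nonlinearity dominates the PIM budget.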
International Nuclear Information System (INIS)
Sonnendrucker, E.; Ambrosiano, J.; Brandon, S.
1993-01-01
The Darwin model for electromagnetic simulation is a reduced form of the Maxwell-Vlasov system that retains all essential physical processes except the propagation of light waves. It is useful in modeling systems for which the light-transit timescales are less important than Alfven wave propagation or quasistatic effects. The Darwin model is elliptic, rather than hyperbolic like the full set of Maxwell's equations, so appropriate boundary conditions must be chosen for the problems to be well-posed. We apply this method with finite element techniques on unstructured triangular meshes, which allow realistic device geometries to be modeled without the necessity of using a large number of mesh points. Analyzing the dispersion relation allows us to validate the code as well as the Darwin approximation.
Statistics of LES simulations of large wind farms
DEFF Research Database (Denmark)
Andersen, Søren Juhl; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming
2016-01-01
. The statistical moments appear to collapse and hence the turbulence inside large wind farms can potentially be scaled accordingly. The thrust coefficient is estimated by two different reference velocities and the generic CT expression by Frandsen. A reference velocity derived from the power production is shown...... to give very good agreement and furthermore enables the very good estimation of the thrust force using only the steady CT-curve, even for very short time samples. Finally, the effective turbulence inside large wind farms and the equivalent loads are examined....
Large Blast and Thermal Simulator Reflected Wave Eliminator Study
1990-03-01
it delays the passage of this wave through the test section until after the test is complete. The required length of extra duct depends on the strength...tube axis, which acts like an additional contraction effect since Se = Sj/[Cq sin(aj)]. The extra area is illustrated best by plotting (Se-Ae)/Ac versus..."Simulation de Choc et de Souffle. Compensateur d'Ondes de Detente de Bouche pour Tube a Choc de 2400 mm de Diametre de Veine. Description, Compte-Rendu
Directory of Open Access Journals (Sweden)
De-You Li
2016-06-01
For pump–turbines, most instabilities are coupled with high-level pressure fluctuations, which are harmful to pump–turbines and even to the whole units. In order to understand the causes of pressure fluctuations and reduce their amplitudes, proper numerical methods should be chosen to obtain accurate results. The method of large eddy simulation with the wall-adapting local eddy-viscosity model was chosen to predict the pressure fluctuations in pump mode of a pump–turbine, compared with the method of unsteady Reynolds-averaged Navier–Stokes with the two-equation shear stress transport k–ω turbulence model. A partial load operating point (0.91 QBEP) under 15-mm guide vane opening was selected to make a comparison of performance and frequency characteristics between large eddy simulation and unsteady Reynolds-averaged Navier–Stokes based on the experimental validation. Good agreement indicates that the method of large eddy simulation can be applied in the simulation of pump–turbines. Then, a detailed comparison of the variation of the peak-to-peak value in the whole passage is presented. Both methods show that the highest-level pressure fluctuations occur in the vaneless space. In addition, the propagation of the amplitudes of the blade pass frequency and its second and third harmonics in the circumferential and flow directions was investigated. Although differences exist between large eddy simulation and unsteady Reynolds-averaged Navier–Stokes, the trend of variation in different parts is almost the same. Based on the analysis, using the same mesh (8 million elements), large eddy simulation slightly underestimates the pressure characteristics and shows better agreement with the experiments, while unsteady Reynolds-averaged Navier–Stokes overestimates them.
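Amplitudes at the blade pass frequency (BPF) and its harmonics, as analyzed above, are typically read off a single-sided FFT of a pressure trace. The sketch below does this for a synthetic signal; the BPF value, harmonic amplitudes, and noise level are invented for illustration:

```python
import numpy as np

fs, T = 4096.0, 2.0
t = np.arange(0.0, T, 1.0 / fs)
f_bpf = 72.0                          # hypothetical blade pass frequency, Hz

# Synthetic vaneless-space pressure trace: BPF plus two harmonics plus noise
rng = np.random.default_rng(2)
p = (1.00 * np.sin(2 * np.pi * 1 * f_bpf * t)
     + 0.40 * np.sin(2 * np.pi * 2 * f_bpf * t)
     + 0.15 * np.sin(2 * np.pi * 3 * f_bpf * t)
     + 0.05 * rng.standard_normal(t.size))

spec = 2.0 * np.abs(np.fft.rfft(p)) / t.size    # single-sided amplitude spectrum
df = 1.0 / T                                    # frequency resolution, 0.5 Hz

for nharm in (1, 2, 3):
    k = int(round(nharm * f_bpf / df))          # bin index of the n-th harmonic
    print(f"{nharm}x BPF amplitude: {spec[k]:.3f}")
```

With the record length an integer multiple of the BPF period, the harmonics land on exact bins and the injected amplitudes are recovered directly; otherwise a window and peak interpolation would be needed.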
International Nuclear Information System (INIS)
Sheridan, Robert; VonLockette, Paris R; Roche, Juan; Lofland, Samuel E
2014-01-01
This work seeks to provide a framework for the numerical simulation of magneto-active elastomer (MAE) composite structures for use in origami engineering applications. The emerging field of origami engineering employs folding techniques, an array of crease patterns traditionally on a single flat sheet of paper, to produce structures and devices that perform useful engineering operations. Effective means of numerical simulation offer an efficient way to optimize the crease patterns while coupling to the performance and behavior of the active material. The MAE materials used herein are comprised of nominally 30% v/v, 325 mesh barium hexaferrite particles embedded in Dow HS II silicone elastomer compound. These particulate composites are cured in a magnetic field to produce magneto-elastic solids with anisotropic magnetization, e.g. they have a preferred magnetic axis parallel to the curing axis. The deformed shape and/or blocked force characteristics of these MAEs are examined in three geometries: a monolithic cantilever as well as two- and four-segment composite accordion structures. In the accordion structures, patches of MAE material are bonded to a Gelest OE41 unfilled silicone elastomer substrate. Two methods of simulation, one using the Maxwell stress tensor applied as a traction boundary condition and another employing a minimum energy kinematic (MEK) model, are investigated. Both methods capture actuation due to magnetic torque mechanisms that dominate MAE behavior. Comparison with experimental data shows good agreement with only a single adjustable parameter, either an effective constant magnetization of the MAE material in the finite element models (at small and moderate deformations) or an effective modulus in the minimum energy model. The four-segment finite element model was prone to numerical locking at large deformation. The effective magnetization and modulus values required are a fraction of the actual experimentally measured values which suggests a
Energy Technology Data Exchange (ETDEWEB)
Priere, C
2005-01-15
Nowadays, environmental and economic constraints require considerable research efforts from the gas turbine industry. The objectives are to lower pollutant emissions and fuel consumption. These efforts are of primary importance in satisfying the continued growth of energy production while complying with stringent environmental legislation. The progress recorded is linked to mixing enhancement in combustors running at lean premixed operating points. Indeed, industry pays close attention to mixing enhancement, and in recent years efforts have concentrated on the dilution of fresh and burned gases. The Jet In Cross Flow (JICF) constitutes a representative case for furthering this research effort. It has been widely studied both experimentally and numerically, and is particularly well suited for the evaluation of Large Eddy Simulation (LES). This approach, in which large-scale phenomena are naturally taken into account in the governing equations while the small scales are modelled, offers the means to predict such flows well. The main objective of this work is to gauge and to enhance the quality of LES predictions in JICF configurations by means of numerical tools developed in the compressible AVBP code. Physical and numerical parameters of the JICF modeling are considered, and strategies able to enhance the quality of LES results are proposed. The configurations studied in this work are the following: - Influence of the boundary conditions and jet injection system on a free JICF. - Study of a static mixing device in an industrial gas turbine chamber. - Study of a JICF configuration representing a dilution zone in low-emissions combustors. (author)
DEFF Research Database (Denmark)
Yang, Yang; Kær, Søren Knudsen
2012-01-01
The flow structure of one isothermal swirling case in the Sydney swirl flame database was studied using two numerical methods. Results from the Reynolds-averaged Navier-Stokes (RANS) approach and large eddy simulation (LES) were compared with experimental measurements. The simulations were applied...
Large Eddy Simulation of stratified flows over structures
Brechler J.; Fuka V.
2013-01-01
We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length, and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.
Large Eddy Simulation of stratified flows over structures
Directory of Open Access Journals (Sweden)
Brechler J.
2013-04-01
We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length, and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.
Large Eddy Simulation of stratified flows over structures
Fuka, V.; Brechler, J.
2013-04-01
We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length, and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.
A synthetic-eddy-method for generating inflow conditions for large-eddy simulations
International Nuclear Information System (INIS)
Jarrin, N.; Benhamadouche, S.; Laurence, D.; Prosser, R.
2006-01-01
The generation of inflow data for spatially developing turbulent flows is one of the challenges that must be addressed prior to the application of LES to industrial flows and complex geometries. A new method of generating synthetic turbulence, suitable for complex geometries and unstructured meshes, is presented herein. The method is based on the classical view of turbulence as a superposition of coherent structures. It is able to reproduce prescribed first- and second-order one-point statistics, characteristic length and time scales, and the shape of coherent structures. The ability of the method to produce realistic inflow conditions is demonstrated in the test cases of spatially decaying homogeneous isotropic turbulence and of a fully developed turbulent channel flow. The method is systematically compared to other methods of generating inflow conditions (precursor simulation, spectral methods and algebraic methods).
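A minimal one-dimensional sketch of the synthetic-eddy idea: superpose compact-support "eddies" with random positions and signs to build an uncorrelated unit-variance signal, then multiply by the Cholesky factor of a target Reynolds-stress tensor to impose the prescribed second-order statistics. All numbers (eddy count, length scale, stress tensor) are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, Ly = 2000, 0.1, 1.0        # eddy count, eddy length scale, plane extent
y = np.linspace(0.0, Ly, 64)         # inflow-plane coordinate

# Target Reynolds-stress tensor (illustrative) and its Cholesky factor
R = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.02, 0.00],
              [0.00, 0.00, 0.03]])
A = np.linalg.cholesky(R)

def shape(d):
    """Compact-support tent function, normalized so its square integrates to 1."""
    return np.where(np.abs(d) < 1.0, np.sqrt(1.5) * (1.0 - np.abs(d)), 0.0)

yk = rng.uniform(-sigma, Ly + sigma, N)       # random eddy centres
eps = rng.choice([-1.0, 1.0], size=(N, 3))    # random eddy intensities

u = np.zeros((y.size, 3))
for k in range(N):
    u += eps[k] * shape((y - yk[k]) / sigma)[:, None]
u *= np.sqrt((Ly + 2 * sigma) / sigma) / np.sqrt(N)  # unit variance per component
u = u @ A.T                                          # impose target Reynolds stresses
```

In the actual method the eddies are also convected through a 3-D box at each time step, which is what gives the inflow signal its prescribed time scale as well.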
Large-eddy simulation of the temporal mixing layer using the Clark model
Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.
1996-01-01
The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible,
LaWen Hollingsworth; James Menakis
2010-01-01
This project mapped wildland fire potential (WFP) for the conterminous United States using the large fire simulation system developed for the Fire Program Analysis (FPA) system. The large fire simulation system, referred to here as LFSim, consists of modules for weather generation, fire occurrence, fire suppression, and fire growth modeling. Weather was generated with...
Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing
Qiang Liu; Yi Qin; Guodong Li
2018-01-01
Computing speed is a significant issue in large-scale flood simulation for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computation necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...
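The core of such a solver is a Godunov-type finite volume update with approximate Riemann fluxes at cell interfaces. A minimal one-dimensional CPU sketch (the paper's scheme is 2-D, unstructured, and GPU-parallel) using the Rusanov flux on a dam-break problem:

```python
import numpy as np

g = 9.81
nx, L = 200, 10.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx

# Dam-break initial condition: deep water on the left, shallow on the right
h = np.where(x < L / 2, 2.0, 1.0)
hu = np.zeros(nx)

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov(hL, huL, hR, huR):
    """Local Lax-Friedrichs (Rusanov) approximate Riemann flux."""
    fL, fR = flux(hL, huL), flux(hR, huR)
    a = np.maximum(np.abs(huL / hL) + np.sqrt(g * hL),
                   np.abs(huR / hR) + np.sqrt(g * hR))
    return 0.5 * (fL + fR) - 0.5 * a * np.array([hR - hL, huR - huL])

t, T = 0.0, 0.5
while T - t > 1e-12:
    a = np.abs(hu / h) + np.sqrt(g * h)
    dt = min(0.4 * dx / a.max(), T - t)           # CFL-limited time step
    F = rusanov(h[:-1], hu[:-1], h[1:], hu[1:])   # fluxes at interior interfaces
    h[1:-1] -= dt / dx * (F[0, 1:] - F[0, :-1])
    hu[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
    t += dt
```

Every interface flux is independent of the others, which is exactly why this update parallelizes so well on a GPU.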
Mud pressure simulation on large horizontal directional drilling
Energy Technology Data Exchange (ETDEWEB)
Placido, Rafael R.; Avesani Neto, Jose O.; Martins, Pedro R.R.; Rocha, Ronaldo [Instituto de Pesquisas Tecnologicas do Estado de Sao Paulo (IPT), Sao Paulo, SP (Brazil)
2009-07-01
Horizontal Directional Drilling (HDD) is being extensively used in Brazil for the installation of oil and gas pipelines. This trenchless technology is currently used in crossings of water bodies, environmentally sensitive areas, densely populated areas, areas prone to mass movement, and anywhere the traditional technology is not suitable because of the risks. One of the unwanted effects of HDD is collapse of the soil surrounding the bore-hole, leading to loss of fluid. This can result in problems such as reduced drilling efficiency, ground heave, damage to structures, fluid infiltration and other environmental problems. This paper presents four simulations of down-hole fluid pressures, representing two different geometrical characteristics of the drilling and two different soils. The results showed that greater depths are needed in longer drillings to avoid ground rupture. Thus the end section of the drilling often represents the critical stage. (author)
Methodology for analysis and simulation of large multidisciplinary problems
Russell, William C.; Ikeda, Paul J.; Vos, Robert G.
1989-01-01
The Integrated Structural Modeling (ISM) program is being developed for the Air Force Weapons Laboratory and will be available for Air Force work. Its goal is to provide a design, analysis, and simulation tool intended primarily for directed energy weapons (DEW), kinetic energy weapons (KEW), and surveillance applications. The code is designed to run on DEC (VMS and UNIX), IRIS, Alliant, and Cray hosts. Several technical disciplines are included in ISM, namely structures, controls, optics, thermal, and dynamics. Four topics from the broad ISM goal are discussed. The first is project configuration management and includes two major areas: the software and database arrangement and the system model control. The second is interdisciplinary data transfer and refers to exchange of data between various disciplines such as structures and thermal. Third is a discussion of the integration of component models into one system model, i.e., multiple discipline model synthesis. Last is a presentation of work on a distributed processing computing environment.
Cyclic loading of simulated fault gouge to large strains
Jones, Lucile M.
1980-04-01
As part of a study of the mechanics of simulated fault gouge, deformation of Kayenta Sandstone (24% initial porosity) was observed in triaxial stress tests through several stress cycles. Between 50- and 300-MPa effective pressure the specimens deformed stably without stress drops and with deformation occurring throughout the sample. At 400-MPa effective pressure the specimens underwent strain softening with the deformation occurring along one plane. However, the difference in behavior seems to be due to the density variation at different pressures rather than to the difference in pressure. After peak stress was reached in each cycle, the samples dilated such that the volumetric strain and the linear strain maintained a constant ratio (approximately 0.1) at all pressures. The behavior was independent of the number of stress cycles to linear strains up to 90% and was in general agreement with laws of soil behavior derived from experiments conducted at low pressure (below 5 MPa).
Altitude simulation facility for testing large space motors
Katz, U.; Lustig, J.; Cohen, Y.; Malkin, I.
1993-02-01
This work describes the design of an altitude simulation facility for testing the AKM motor installed in the 'Ofeq' satellite launcher. The facility, which is controlled by a computer, consists of a diffuser and a single-stage ejector fed with preheated air. The calculations of performance and dimensions of the gas extraction system were conducted according to a one-dimensional analysis. Tests were carried out on a small-scale model of the facility in order to examine the design concept, then the full-scale facility was constructed and operated. There was good agreement among the results obtained from the small-scale facility, from the full-scale facility, and from calculations.
Numerical simulations of a large scale oxy-coal burner
Energy Technology Data Exchange (ETDEWEB)
Chae, Taeyoung [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group; Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Park, Sanghyun; Ryu, Changkook [Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Yang, Won [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group
2013-07-01
Oxy-coal combustion is one of the promising carbon dioxide capture and storage (CCS) technologies; it uses oxygen and recirculated CO{sub 2} as the oxidizer instead of air. Due to the differences in physical properties between CO{sub 2} and N{sub 2}, oxy-coal combustion requires development of the burner and boiler based on a fundamental understanding of the flame shape, temperature, radiation and heat flux. For the design of a new oxy-coal combustion system, computational fluid dynamics (CFD) is an essential tool to evaluate detailed combustion characteristics and supplement experimental results. In this study, CFD analysis was performed to understand the combustion characteristics inside a tangential vane swirl type 30 MW coal burner for air-mode and oxy-mode operations. In oxy-mode operations, various compositions of the primary and secondary oxidizers were assessed, depending on the recirculation ratio of the flue gas. For the simulations, devolatilization of coal and char burnout by O{sub 2}, CO{sub 2} and H{sub 2}O were predicted with a Lagrangian particle tracking method considering the size distribution of the pulverized coal and turbulent dispersion. The radiative heat transfer was solved by employing the discrete ordinate method with the weighted sum of gray gases model (WSGGM) optimized for oxy-coal combustion. In the simulation results for oxy-mode operation, the reduced swirl strength of the secondary oxidizer increased the flame length due to the lower specific volume of CO{sub 2} compared with N{sub 2}. The flame length was also sensitive to the flow rate of the primary oxidizer. Because the oxidizer contains no N{sub 2}, thermal NO{sub x} formation is suppressed, making NO{sub x} emissions lower in oxy-mode than in air-mode. The predicted results showed similar trends to the measured temperature profiles for various oxidizer compositions. Further numerical investigations are required to improve the burner design combined with more detailed experimental results.
Large eddy simulations of flow and mixing in jets and swirl flows: application to a gas turbine
Energy Technology Data Exchange (ETDEWEB)
Schluter, J.U.
2000-07-01
Large Eddy Simulations (LES) are an accepted tool in turbulence research. Most LES investigations deal with low Reynolds-number flows and use a fine spatial discretization, which results in high computational costs. To make LES applicable to industrial purposes, the ability of LES to deliver results at low computational cost for high Reynolds-number flows has to be investigated. As an example, the cold flow through the Siemens V64.3A.HR gas turbine burner is examined. It is a gas turbine burner of swirl type, where the fuel is injected on the surface of vanes perpendicular to the main air flow. The flow regime of an industrial gas turbine is governed by several flow phenomena. The most important are the fuel injection in the form of a jet in cross flow (JICF) and the swirl flow issuing into a combustion chamber. In order to prove the ability of LES to deal with these flow phenomena, two numerical investigations were made to reproduce the results of experimental studies. The first deals with the JICF. It is shown that the reproduction of three different JICFs is possible with LES on meshes with a low number of mesh points. The results are used to investigate the flow physics of the JICF, especially the merging of two adjacent JICFs. The second fundamental investigation deals with swirl flows. Here, the accuracy of an axisymmetric assumption is examined in detail by comparing it to full 3D LES computations and experimental data. Having demonstrated the ability of LES and the flow solver to deal with such complex flows at low computational effort, the LES approach is used to examine some details of the burner. First, the investigation of the fuel injection on a vane reveals that the vane flow tends to separate. Furthermore the tendency of the fuel jets to merge is shown. Second, the swirl flow in the combustion chamber is computed. For this investigation the vanes are removed from the burner and swirl is imposed as a boundary condition. As
Robotic removal of eroded vaginal mesh into the bladder.
Macedo, Francisco Igor B; O'Connor, Jeffrey; Mittal, Vijay K; Hurley, Patrick
2013-11-01
Vaginal mesh erosion into the bladder after midurethral sling procedure or cystocele repair is uncommon, with only a few cases having been reported in the literature. The ideal surgical management is still controversial. Current options for removal of eroded mesh include: endoscopic, transvaginal or abdominal (either open or laparoscopic) approaches. We, herein, present the first case of robotic removal of a large eroded vaginal mesh into the bladder and discuss potential benefits and limitations of the technique. © 2013 The Japanese Urological Association.
Large-Eddy Simulation Using Projection onto Local Basis Functions
Pope, S. B.
In the traditional approach to LES for inhomogeneous flows, the resolved fields are obtained by a filtering operation (with filter width Delta). The equations governing the resolved fields are then partial differential equations, which are solved numerically (on a grid of spacing h). For an LES computation of a given magnitude (i.e., given h), there are conflicting considerations in the choice of Delta: to resolve a large range of turbulent motions, Delta should be small; to solve the equations with numerical accuracy, Delta should be large. In the alternative approach advanced here, this conflict is avoided. The resolved fields are defined by projection onto local basis functions, so that the governing equations are ordinary differential equations for the evolution of the basis-function coefficients. There is no issue of numerical spatial discretization errors. A general methodology for modelling the effects of the residual motions is developed. The model is based directly on the basis-function coefficients, and its effect is to smooth the fields where their rates of change are not well resolved by the basis functions. Demonstration calculations are performed for Burgers' equation.
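The closing demonstration on Burgers' equation can be sketched compactly. Here global Fourier modes stand in for the paper's local basis functions; either way, the resolved field is defined by its basis coefficients and the PDE reduces to ODEs for those coefficients, advanced here with classical RK4 (parameters are illustrative, and no residual-motion model or dealiasing is included):

```python
import numpy as np

# Viscous Burgers equation u_t + u*u_x = nu*u_xx on [0, 2*pi], projected onto
# Fourier modes: the unknowns are basis coefficients a_k, and the PDE becomes
# a system of ODEs for them, so no spatial finite-difference error arises.
N, nu = 64, 0.05
ik = 1j * np.fft.fftfreq(N, 1.0 / N)       # i * wavenumber for each mode
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
a = np.fft.fft(np.sin(x))                  # coefficients of the initial condition

def rhs(a):
    """Time derivative of the coefficients (pseudo-spectral convection term)."""
    u = np.real(np.fft.ifft(a))
    ux = np.real(np.fft.ifft(ik * a))
    return -np.fft.fft(u * ux) + nu * ik ** 2 * a

dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):               # classical RK4 on the coefficient ODEs
    k1 = rhs(a)
    k2 = rhs(a + 0.5 * dt * k1)
    k3 = rhs(a + 0.5 * dt * k2)
    k4 = rhs(a + dt * k3)
    a = a + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.real(np.fft.ifft(a))                # resolved field reconstructed at t = T
```

The viscous maximum principle guarantees the peak amplitude decays from its initial value of 1; when the residual motions are under-resolved, the paper's model would additionally smooth the coefficients.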
Electrothermal Simulation of Large-Area Semiconductor Devices
Directory of Open Access Journals (Sweden)
C Kirsch
2017-06-01
The lateral charge transport in thin-film semiconductor devices is affected by the sheet resistance of the various layers. This may lead to a non-uniform current distribution across a large-area device, resulting in inhomogeneous luminance, for example, as observed in organic light-emitting diodes (Neyts et al., 2006). The resistive loss in electrical energy is converted into thermal energy via Joule heating, which results in a temperature increase inside the device. On the other hand, the charge transport properties of the device materials are also temperature-dependent, such that we are facing a two-way coupled electrothermal problem. It has been demonstrated that adding thermal effects to an electrical model significantly changes the results (Slawinski et al., 2011). We present a mathematical model for the steady-state distribution of the electric potential and of the temperature across one electrode of a large-area semiconductor device, as well as numerical solutions obtained using the finite element method.
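The two-way coupling can be resolved with a simple fixed-point iteration: solve the potential equation with temperature-dependent conductivity, compute the Joule heating, solve for temperature, and repeat until the fields stop changing. The 1-D finite-difference sketch below (the paper uses 2-D finite elements) relies on illustrative material parameters and a hypothetical conductivity law:

```python
import numpy as np

n, L = 51, 1.0                      # grid points, electrode length (illustrative)
x = np.linspace(0.0, L, n)
dxg = x[1] - x[0]
V0, Tamb, kth = 5.0, 300.0, 1.0     # drive voltage, ambient temp, thermal conductivity

def sigma(T):
    """Hypothetical temperature-dependent electrical conductivity."""
    return 1.0 / (1.0 + 0.004 * (T - Tamb))

T = np.full(n, Tamb)
for _ in range(50):
    # 1) electrical problem: d/dx( sigma(T) dV/dx ) = 0, V(0)=V0, V(L)=0
    s = sigma(0.5 * (T[:-1] + T[1:]))        # conductivity on cell faces
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = V0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = s[i - 1], -(s[i - 1] + s[i]), s[i]
    V = np.linalg.solve(A, b)

    # 2) thermal problem: -kth * T'' = q with Joule source q = sigma*(dV/dx)^2
    q = s * ((V[1:] - V[:-1]) / dxg) ** 2
    B = np.zeros((n, n)); c = np.zeros(n)
    B[0, 0] = B[-1, -1] = 1.0; c[0] = c[-1] = Tamb
    for i in range(1, n - 1):
        B[i, i - 1] = B[i, i + 1] = kth / dxg ** 2
        B[i, i] = -2.0 * kth / dxg ** 2
        c[i] = -0.5 * (q[i - 1] + q[i])      # face source averaged to the node
    Tnew = np.linalg.solve(B, c)

    converged = np.max(np.abs(Tnew - T)) < 1e-8
    T = Tnew
    if converged:
        break
```

Because the conductivity changes only by about a percent per degree here, the fixed point converges in a handful of iterations; stronger feedback would call for damping or a Newton scheme.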
Wind Data Analysis and Wind Flow Simulation Over Large Areas
Directory of Open Access Journals (Sweden)
Terziev Angel
2014-03-01
Increasing the share of renewable energy sources is one of the core policies of the European Union, because such energy is essential in reducing greenhouse gas emissions and securing energy supplies. Currently, the share of wind energy among all renewable energy sources is relatively low. The choice of location for a wind farm installation strongly depends on the wind potential, so accurate assessment of the wind potential is extremely important. In the present paper an analysis is made of the impact of the significant parameters on the determination of the wind energy potential for relatively large areas. The analysis considers the type of measurements (short- and long-term on-site measurements), the type of instrumentation, and the terrain roughness factor. A study of the impact of turbulence on the wind flow distribution over complex terrain is presented, based on real on-site data collected by tall meteorological towers installed in the northern part of Bulgaria. By means of CFD-based software a wind map is developed for relatively large areas. Different turbulence models were tested in the numerical calculations, and recommendations are given for the use of specific models when modeling flows over complex terrain. The role of each parameter in wind map development is assessed.
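One of the basic operations behind such wind-potential maps is extrapolating a measured speed to hub height through the neutral-stability logarithmic profile, in which the roughness length z0 encodes the terrain factor discussed above. The heights, speeds, and z0 values below are illustrative:

```python
import numpy as np

def extrapolate(U_ref, z_ref, z, z0):
    """Scale a speed measured at z_ref up to height z with the neutral log law
    U(z) = (u*/kappa) * ln(z/z0); the friction velocity cancels in the ratio."""
    return U_ref * np.log(z / z0) / np.log(z_ref / z0)

# Illustrative numbers: 7 m/s measured at 40 m, extrapolated to 60 m hub height
U60_smooth = extrapolate(7.0, 40.0, 60.0, z0=0.03)  # open farmland roughness
U60_rough = extrapolate(7.0, 40.0, 60.0, z0=0.5)    # forest/suburban roughness

print(U60_smooth, U60_rough)
```

Rougher terrain gives a larger relative speed-up with height, and since wind power density scales with the cube of the speed, errors in z0 are amplified roughly threefold in the energy estimate.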
Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers
Cheng, W.
2017-11-27
We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference discretization on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape with height ε, invariant in the spanwise direction. Based on the two parameters, the relative groove height ε/D and the Reynolds number Re_D = U∞D/ν, where U∞ is the free-stream velocity, D the diameter of the cylinder and ν the kinematic viscosity, two main sets of simulations are described. The first set varies ε/D while fixing Re_D. We study the deviation of the flow from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble L_B, the pressure coefficient C_p, the skin-friction coefficient C_f and the non-dimensional pressure-gradient parameter β. It is found that, with increasing ε/D at fixed Re_D, some properties of the mean flow behave somewhat similarly to changes seen in the smooth-cylinder flow when Re_D is increased, including a shrinking L_B and a nearly constant minimum pressure coefficient. In contrast, while β remains nearly constant over the front part of the smooth cylinder, it shows an oscillatory variation for the grooved-cylinder case. The second main set of LES varies Re_D with ε/D fixed. It is found that this range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data, including C_p and the drag coefficient C_D. The timewise variation of the lift and drag coefficients is also studied to elucidate the transition among the three regimes. Instantaneous images of the surface skin-friction vector field and of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow
Large eddy simulation and direct numerical simulation of high speed turbulent reacting flows
Adumitroaie, V.; Frankel, S. H.; Madnia, C. K.; Givi, P.
The objective of this research is to make use of Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) for the computational analysis of high speed reacting flows. Our efforts in the first phase of this research, conducted within the past three years, have been directed at several issues pertaining to the intricate physics of turbulent reacting flows. In our previous 5 semi-annual reports submitted to NASA LaRC, as well as several technical papers in archival journals, the results of our investigations have been fully described. In this progress report, which differs in format from our previous documents, we focus only on the issue of LES. The reason for doing so is that LES is the primary issue of interest to our Technical Monitor and that our other findings were needed to support the activities conducted under this prime issue. The outcomes of our related investigations, nevertheless, are included in the appendices accompanying this report, and the relevance of the materials in these appendices is therefore discussed only briefly within the body of the report. Here, results are presented of a priori and a posteriori analyses for validity assessments of assumed Probability Density Function (PDF) methods as potential subgrid scale (SGS) closures for LES of turbulent reacting flows. Simple non-premixed reacting systems involving an isothermal reaction of the type A + B → products under both chemical equilibrium and non-equilibrium conditions are considered. A priori analyses are conducted of a homogeneous box flow and a spatially developing planar mixing layer to investigate the performance of the Pearson family of PDFs as SGS models. A posteriori analyses are conducted of the mixing layer using a hybrid one-equation Smagorinsky/PDF SGS closure. The Smagorinsky closure, augmented by the solution of the subgrid turbulent kinetic energy (TKE) equation, is employed to account for hydrodynamic fluctuations, and the PDF is employed for modeling the
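As a concrete illustration of the assumed-PDF closures assessed above: the beta PDF, one member of the Pearson family, is commonly parameterized by the resolved mean and SGS variance of a conserved scalar Z, and the filtered value of a nonlinear function g(Z) is its integral against that PDF. A minimal sketch, not the report's code; function names and values are illustrative:

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_params(mean, var):
    """Shape parameters of a beta PDF matching a given mean and variance.
    Requires 0 < var < mean*(1-mean)."""
    f = mean * (1.0 - mean) / var - 1.0
    return mean * f, (1.0 - mean) * f

def filtered(g, mean, var, n=4001):
    """Filtered value of g(Z): integral of g against the assumed beta PDF."""
    a, b = beta_params(mean, var)
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    w = beta_dist.pdf(z, a, b)
    return float(np.sum(g(z) * w) / np.sum(w))  # normalized quadrature sum

# Filtered value of a nonlinear "reaction rate" g(Z) = Z*(1-Z);
# analytically E[Z(1-Z)] = mean - var - mean^2 = 0.19 for these inputs.
gbar = filtered(lambda z: z * (1.0 - z), mean=0.3, var=0.02)
print(round(gbar, 4))
```

The point of the closure is visible here: the filtered nonlinear term differs from g(mean) = 0.21 because the SGS fluctuations carry weight in the integral.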
On the Feasibility of Wireless Multimedia Sensor Networks over IEEE 802.15.5 Mesh Topologies.
Garcia-Sanchez, Antonio-Javier; Losilla, Fernando; Rodenas-Herraiz, David; Cruz-Martinez, Felipe; Garcia-Sanchez, Felipe
2016-05-05
Wireless Multimedia Sensor Networks (WMSNs) are a special type of Wireless Sensor Network (WSN) where large amounts of multimedia data are transmitted over networks composed of low power devices. Hierarchical routing protocols typically used in WSNs for multi-path communication tend to overload nodes located within radio communication range of the data collection unit or data sink. The battery life of these nodes is therefore reduced considerably, requiring frequent battery replacement work to extend the operational life of the WSN system. In a wireless sensor network with mesh topology, any node may act as a forwarder node, thereby enabling multiple routing paths toward any other node or collection unit. In addition, mesh topologies have proven advantages, such as data transmission reliability, network robustness against node failures, and potential reduction in energy consumption. This work studies the feasibility of implementing WMSNs in mesh topologies and their limitations by means of exhaustive computer simulation experiments. To this end, a module developed for the Synchronous Energy Saving (SES) mode of the IEEE 802.15.5 mesh standard has been integrated with multimedia tools to thoroughly test video sequences encoded using H.264 in mesh networks.
Energy Technology Data Exchange (ETDEWEB)
Burgwinkel, Paul; Vreydal, Daniel; Eltaliawi, Gamil; Vijayakumar, Nandhakumar [RWTH Aachen (DE). Inst. fuer Maschinentechnik der Rohstoffindustrie (IMR)
2010-09-15
For the first time, the co-simulation method was successfully used for the full representation of a large belt conveyor for an open cast mine in a simulation model at the Institute for Mechanical Engineering in the Raw Materials Industry (IMR) at RWTH Aachen. The aim of this project was the development of an electro-mechanical simulation model which represents all components of a large belt conveyor, from the drive motor to the conveyor belt, in one simulation model and thus makes the interactions between the individual assemblies verifiable by calculation. With the aid of the developed model it was possible to determine critical operating speeds of the represented large belt conveyor and to derive suitable measures against undesirable resonance states in the drive assembly. Furthermore, it was possible to demonstrate the advantage of the full numerical representation of an electro-mechanical drive system. (orig.)
MeshVoro: A Three-Dimensional Voronoi Mesh Building Tool for the TOUGH Family of Codes
Energy Technology Data Exchange (ETDEWEB)
Freeman, C. M.; Boyle, K. L.; Reagan, M.; Johnson, J.; Rycroft, C.; Moridis, G. J.
2013-09-30
Few tools exist for creating and visualizing complex three-dimensional simulation meshes, and those that do have limitations that restrict their application to particular geometries and circumstances. Mesh generation needs to trend toward ever more general applications. To that end, we have developed MeshVoro, a tool that is based on the Voro (Rycroft 2009) library and is capable of generating complex three-dimensional Voronoi tessellation-based (unstructured) meshes for the solution of problems of flow and transport in subsurface geologic media that are addressed by the TOUGH (Pruess et al. 1999) family of codes. MeshVoro, which includes built-in data visualization routines, is a particularly useful tool because it extends the applicability of the TOUGH family of codes by enabling the scientifically robust and relatively easy discretization of systems with challenging 3D geometries. We describe several applications of MeshVoro. We illustrate the ability of the tool to straightforwardly transform a complex geological grid into a simulation mesh that conforms to the specifications of the TOUGH family of codes. We demonstrate how MeshVoro can describe complex system geometries with a relatively small number of grid blocks, and we construct meshes for geometries that would have been practically intractable with a standard Cartesian grid approach. We also discuss the limitations and appropriate applications of this new technology.
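The kind of unstructured Voronoi discretization described above can be sketched with off-the-shelf tools. A minimal example using SciPy's Qhull-backed Voronoi wrapper (this is not MeshVoro; the point set is random and illustrative):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
points = rng.random((50, 3))      # 50 seed points in the unit cube

vor = Voronoi(points)             # 3D Voronoi tessellation of the seeds

# Each seed owns one Voronoi cell; cells touching the hull are unbounded
# and contain the sentinel vertex index -1 in their region list.
bounded = [r for r in vor.regions if r and -1 not in r]
print(len(vor.point_region), "cells,", len(bounded), "bounded")
```

In a TOUGH-style workflow each bounded cell would become a grid block, with interface areas and cell volumes computed from the shared Voronoi facets.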
Adaptive radial basis function mesh deformation using data reduction
Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.
2016-09-01
Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures that the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high-aspect-ratio cell is used to formulate an equation for the required correction radius, depending on the characteristics of the correction function used, the maximum aspect ratio, the minimum first cell height and the boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, the Reynolds number (and thus the first cell height and aspect ratio) and the boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited
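The greedy (data-reduction) selection loop described above can be sketched in a few lines: control points are added one at a time wherever the boundary error is largest, until a user-specified tolerance is met. A hypothetical 1D illustration, not the paper's implementation; the Wendland C2 basis, support radius and tolerance are assumptions:

```python
import numpy as np

def rbf(r, R=2.0):
    """Wendland C2 compactly supported basis: (1 - r/R)^4 (4 r/R + 1) for r < R."""
    x = np.clip(r / R, 0.0, 1.0)
    return (1.0 - x) ** 4 * (4.0 * x + 1.0)

def greedy_rbf(xb, db, tol=1e-3):
    """Greedy control-point selection: start from the largest displacement,
    then repeatedly add the worst-approximated boundary point until the
    maximum boundary error drops below tol. Returns (indices, weights)."""
    sel = [int(np.argmax(np.abs(db)))]
    while True:
        A = rbf(np.abs(xb[sel][None, :] - xb[sel][:, None]))  # interpolation matrix
        w = np.linalg.solve(A, db[sel])
        err = np.abs(rbf(np.abs(xb[:, None] - xb[sel][None, :])) @ w - db)
        if err.max() < tol or len(sel) == len(xb):
            return np.array(sel), w
        sel.append(int(np.argmax(err)))           # worst point becomes a control point

# Toy boundary: 20 points with a smooth prescribed displacement
xb = np.linspace(0.0, 1.0, 20)
db = 0.1 * np.sin(np.pi * xb)
sel, w = greedy_rbf(xb, db)
print(len(sel), "of", len(xb), "boundary points kept as control points")
```

Interior mesh points would then be displaced by evaluating the same RBF sum at their coordinates, which is where the data reduction pays off.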
Modeling a Large Data Acquisition Network in a Simulation Framework
AUTHOR|(INSPIRE)INSPIRE-00337030; The ATLAS collaboration; Froening, Holger; Garcia, Pedro Javier; Vandelli, Wainer
2015-01-01
The ATLAS detector at CERN records particle collision "events" delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathology...
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
Energy Technology Data Exchange (ETDEWEB)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu
2016-11-13
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a
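The rollback behavior that such instrumentation counts can be illustrated with a toy model: events should be processed in timestamp order, and any straggler arriving with a timestamp behind the local virtual time (LVT) forces a rollback. A deliberately simplified sketch, not ROSS or Time Warp itself:

```python
import random

def run_optimistic(arrival_order):
    """Toy optimistic processing element: events arrive slightly shuffled;
    a straggler (timestamp below the LVT) triggers a rollback of the
    speculative work past it. Returns (processed, rollbacks)."""
    lvt, rollbacks = float("-inf"), 0
    for ts in arrival_order:
        if ts < lvt:
            rollbacks += 1        # speculative events past ts must be undone
        lvt = max(lvt, ts)
    return len(arrival_order), rollbacks

random.seed(1)
timestamps = sorted(random.random() for _ in range(1000))
arrival = timestamps[:]
# Swap a few neighbours to mimic out-of-order delivery between PEs
for i in range(0, 990, 10):
    arrival[i], arrival[i + 5] = arrival[i + 5], arrival[i]

processed, rollbacks = run_optimistic(arrival)
print(processed, rollbacks)
```

A real Time Warp engine additionally saves state and sends anti-messages on rollback; the count above is only the metric a profiler like the one described would report.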
Large-Eddy Simulations of Motored Flow and Combustion in a Homogeneous-Charge Spark-Ignition Engine
Shekhawat, Yajuvendra Singh
Cycle-to-cycle variations (CCV) of flow and combustion in internal combustion engines (ICE) limit their fuel-efficiency and emissions potential. Large-eddy simulation (LES) is the most practical simulation tool for understanding the nature of these CCV. In this research, multi-cycle LES of a two-valve, four-stroke, spark-ignition optical engine has been performed for motored and fired operation. The LES mesh quality is assessed using a length-scale resolution parameter and an energy resolution parameter. For the motored operation, two 50-consecutive-cycle LES with different turbulence models (Smagorinsky model and dynamic structure model) are compared with the experiment. The pressure comparison shows that the LES is able to capture the wave dynamics in the intake and exhaust ports. The LES velocity fields are compared with particle-image velocimetry (PIV) measurements at three cutting planes. Based on the structure and magnitude indices, the dynamic structure model is somewhat better than the Smagorinsky model as far as the ensemble-averaged velocity fields are concerned. The CCV in the velocity fields is assessed by proper-orthogonal decomposition (POD). The POD analysis shows that LES is able to capture the level of CCV seen in the experiment. For the fired operation, two 60-cycle LES with different combustion models (thickened flame model and coherent flame model) are compared with the experiment. The in-cylinder pressure and apparent heat release rate comparison shows higher CCV for LES compared to the experiment, with the thickened flame model showing higher CCV than the coherent flame model. The correlation analysis for the LES using the thickened flame model shows that the CCV in combustion/pressure is correlated with: the tumble at intake valve closing, the resolved and subfilter-scale kinetic energy just before spark time, and the second POD mode (shear flow near the spark gap) of the velocity fields just before spark time.
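The snapshot POD used above to quantify CCV reduces to a singular value decomposition of mean-subtracted snapshot data, with each column holding one cycle's flattened velocity field. A minimal sketch with synthetic data; the field, dimensions and cycle count are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_cycles = 500, 60     # illustrative: grid points x engine cycles

# Synthetic "snapshots": a coherent structure plus cycle-to-cycle noise
base = np.sin(np.linspace(0.0, np.pi, n_points))
snapshots = base[:, None] + 0.1 * rng.standard_normal((n_points, n_cycles))

# Snapshot POD: subtract the ensemble mean, then SVD the fluctuations
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(fluct, full_matrices=False)   # columns of U: POD modes
energy = s**2 / np.sum(s**2)                          # modal energy fractions
print(energy[:3])
```

The leading modes and their energy fractions are what the correlation analysis above works with; a few dominant modes capturing most of the fluctuation energy is the typical signature of organized CCV.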
International Nuclear Information System (INIS)
Kajzer, A; Pozorski, J; Szewc, K
2014-01-01
In this paper we present large-eddy simulation (LES) results for the 3D Taylor–Green vortex obtained by three different computational approaches: Smoothed Particle Hydrodynamics (SPH), the Lattice Boltzmann Method (LBM) and the Finite Volume Method (FVM). The Smagorinsky model was chosen as the subgrid-scale closure in LES for all considered methods, and a selection of spatial resolutions has been investigated. The SPH and LBM computations have been carried out with in-house codes executed on GPUs and compared, for validation purposes, with FVM results obtained using the open-source CFD software OpenFOAM. A comparative study in terms of one-point statistics and turbulent energy spectra shows good agreement of the LES results for all methods. An analysis of GPU code efficiency and implementation difficulties has been made. It is shown that both SPH and LBM may offer a significant advantage over mesh-based CFD methods.
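The common starting point of all three solvers can be written down directly: the Taylor–Green initial velocity field on a periodic box, together with the Smagorinsky eddy viscosity ν_t = (C_s Δ)² |S| used as the closure. A minimal NumPy sketch; the grid size and Smagorinsky constant are illustrative, not taken from the paper:

```python
import numpy as np

N, Cs = 32, 0.17
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Taylor-Green initial condition (divergence-free by construction)
u = np.cos(X) * np.sin(Y) * np.cos(Z)
v = -np.sin(X) * np.cos(Y) * np.cos(Z)
w = np.zeros_like(u)

dx = 2.0 * np.pi / N
def ddx(f, axis):
    """Second-order central difference on the periodic grid."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)

# Strain-rate tensor S_ij = (d_i u_j + d_j u_i)/2 and magnitude |S| = sqrt(2 S_ij S_ij)
S11, S22, S33 = ddx(u, 0), ddx(v, 1), ddx(w, 2)
S12 = 0.5 * (ddx(u, 1) + ddx(v, 0))
S13 = 0.5 * (ddx(u, 2) + ddx(w, 0))
S23 = 0.5 * (ddx(v, 2) + ddx(w, 1))
Smag = np.sqrt(2.0 * (S11**2 + S22**2 + S33**2 + 2.0 * (S12**2 + S13**2 + S23**2)))

nu_t = (Cs * dx) ** 2 * Smag      # Smagorinsky subgrid viscosity
print(float(np.abs(u).max()))     # velocity field is O(1)
```

The discrete divergence of this field vanishes to machine precision, which is exactly the property each of the three numerical methods must preserve as the vortex breaks down.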
Cartesian anisotropic mesh adaptation for compressible flow
International Nuclear Information System (INIS)
Keats, W.A.; Lien, F.-S.
2004-01-01
Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This paper discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for compressible flow. This technique, developed for laminar flow by Ham, Lien and Strong, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this paper the method is applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. The convection scheme used is the Advection Upstream Splitting Method (AUSM+), and the refinement/coarsening criteria are based on work done by Ham et al. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code are presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant. (author)
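The direction-split refinement idea can be illustrated with a simple criterion: a cell is flagged for refinement in one cardinal direction when the solution's second difference in that direction alone is large. This is a hypothetical sketch of the principle (the actual criteria follow Ham et al.), shown on a discontinuity aligned with the y-axis:

```python
import numpy as np

def direction_flags(q, tol):
    """Anisotropic refinement flags: a cell is flagged in x (resp. y) when the
    second difference of q in that direction alone exceeds tol. Returns
    boolean masks over the interior cells."""
    d2x = np.abs(q[2:, 1:-1] - 2.0 * q[1:-1, 1:-1] + q[:-2, 1:-1])
    d2y = np.abs(q[1:-1, 2:] - 2.0 * q[1:-1, 1:-1] + q[1:-1, :-2])
    return d2x > tol, d2y > tol

# A jump across x = 0.5, constant in y: only x-refinement should trigger,
# so cells are split in x while staying coarse in y (the anisotropic payoff).
x = np.linspace(0.0, 1.0, 64)
q = (x[:, None] > 0.5).astype(float) * np.ones((64, 64))
fx, fy = direction_flags(q, 0.1)
print(fx.any(), fy.any())   # x flags fire, y flags do not
```

An isotropic criterion would have split these cells in both directions, which is precisely the overhead the directional test avoids.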
Hesford, Andrew J; Astheimer, Jeffrey P; Greengard, Leslie F; Waag, Robert C
2010-02-01
A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method.
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of highly sensitive centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure, since mesh structures of this size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m mirror is now being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed those of existing thermal-vacuum chambers used to verify the reflecting-surface accuracy of an SRT under the action of space environment factors, which is why numerical simulation becomes the basis for accepting the chosen designs. Such modeling should build on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (manufacturing and installation errors of the telescope; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors, taking into account deviation correction by the spacecraft orientation system. Modeling results are presented for two modes of SRT operation (orientation relative to the Sun).
A Reconfigurable Mesh-Ring Topology for Bluetooth Sensor Networks
Directory of Open Access Journals (Sweden)
Ben-Yi Wang
2018-05-01
Full Text Available In this paper, a Reconfigurable Mesh-Ring (RMR) algorithm is proposed for Bluetooth sensor networks. The algorithm is designed in three stages to determine the optimal configuration of the mesh-ring network. Firstly, a designated root advertises and discovers its neighboring nodes. Secondly, a scatternet criterion is built to compute the minimum number of piconets and distribute the connection information for the piconets and scatternet. Finally, a peak-search method is designed to determine the optimal mesh-ring configuration for various network sizes. To maximize the network capacity, the research problem is formulated as determining the best connectivity of the available mesh links. During the formation and maintenance phases, three possible configurations (piconet, scatternet, and hybrid) are examined to determine the optimal placement of mesh links. The peak-search method is a systematic approach implemented by three functional blocks: the topology formation block generates the mesh-ring topology, the routing efficiency block computes the routing performance, and the optimum decision block introduces a decision-making criterion to determine the optimum number of mesh links. Simulation results demonstrate that the optimal mesh-ring configuration can be determined and that the scatternet case achieves better overall performance than the other two configurations. The RMR topology also outperforms the conventional ring-based and cluster-based mesh methods in terms of throughput for Bluetooth configurable networks.
Directory of Open Access Journals (Sweden)
André Brandalise
2012-12-01
No prosthesis-related complications (stenosis or erosion) were observed. CONCLUSION: The use of the described polypropylene prosthesis model is safe, provided that the technical aspects of its implantation are observed. BACKGROUND: Minimally invasive surgery has rapidly gained an important role in the treatment of gastroesophageal reflux disease. However, the best method to treat large paraesophageal hernias (types III and IV) is still under discussion. The use of prostheses to reinforce the crural repair has been proposed by several authors in order to reduce the high relapse rates found in these patients. AIM: To demonstrate the technique and surgical results of using a purpose-designed polypropylene mesh to strengthen the cruroraphy in large hiatal hernias. METHODS: The polypropylene mesh was applied to reinforce the hiatal closure in large hernias (types II to IV in Hill's classification) with a primary or recurrent hiatal defect greater than 5 cm, in a series of 70 patients. The prosthesis was made by cutting a polypropylene mesh into a U-shape, adapted to the dimensions found intraoperatively, and coating the inner edge (which has direct contact with the esophagus) with a silicone catheter. This was achieved by removing a small longitudinal segment of the catheter, inserting the cut edge of the mesh, and fixing it with a running 5-0 nylon suture. RESULTS: From 1999 to 2012, this technique was used in 70 patients: 52 females and 18 males, aged 32-83 years (mean 63 years). In 48 (68.6%) patients the paraesophageal hernia was primary, and in 22 (31.4%) it was a relapse after antireflux surgery. The only death in this series (1.4%) occurred on the 22nd postoperative day in one patient (74 years) who had a laceration of the fundoplication sutures, causing a gastropleural fistula and death; there was no relationship with the use of the prosthesis. A follow-up of six months or more was achieved in 60 patients (85.7%), ranging from six to 146 months (mean 49
DEFF Research Database (Denmark)
Krag, Ludvig Ahm; Herrmann, Bent; Karlsen, Junita
Based on catch comparison data, it is demonstrated how detailed and quantitative information about species-specific and size-dependent escape behaviour in relation to a large mesh panel can be extracted. A new analytical model is developed, applied, and compared to the traditional modelling approach...
GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I
National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...
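A typical instance of the common mathematical routines referred to above is the iterative solution of a large sparse linear system. A CPU-side sketch using SciPy's conjugate-gradient solver on a 1D Poisson matrix; on a GPU the same kernel pattern maps onto vendor sparse libraries, which is not shown here:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 2000
# Tridiagonal 1D Poisson matrix: symmetric positive definite, CSR storage
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b, atol=0.0)      # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(info, residual < 1e-4)
```

The dominant cost per iteration is a sparse matrix-vector product, which is exactly the kernel that benefits from GPU acceleration.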
Energy Technology Data Exchange (ETDEWEB)
NONE
2003-03-01
Joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at the National Institute for Fusion Science, with the aim of promoting interdisciplinary collaboration in various fields of computer simulation. The meeting, attended by more than 40 people, consisted of 11 invited and 22 contributed papers, whose topics extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, and computer science. (author)
On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach
Liu, Zheng; Xue, Kaiping; Hong, Peilin
The peer-assisted streaming paradigm has recently been widely employed to distribute live video data on the Internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs a longer streaming delay, caused by the handshaking process of advertising buffer-map messages, sending request messages, and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy, a min-cost flow model is employed to derive the optimal scheduling for the push peer, and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation; the results show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.
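The min-cost flow view of sender-side block scheduling can be sketched on a toy instance: the sender supplies a batch of blocks, its edges to receiving peers carry upload capacities and costs, and the optimal push schedule is the min-cost flow. This is a generic illustration of the formulation, not the paper's model; all names, capacities and costs are invented:

```python
import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-4)   # 4 blocks to push this round
G.add_node("t", demand=4)    # sink absorbing all delivered blocks

# (peer, upload capacity in blocks, unit cost, e.g. expected delay)
for peer, cap, cost in [("p1", 2, 1), ("p2", 1, 2), ("p3", 3, 3)]:
    G.add_node(peer, demand=0)
    G.add_edge("s", peer, capacity=cap, weight=cost)
    G.add_edge(peer, "t", capacity=cap, weight=0)

flow = nx.min_cost_flow(G)
print({p: flow["s"][p] for p in ("p1", "p2", "p3")})
```

The solver fills the cheap peers to capacity first (2 blocks to p1, 1 to p2) and sends the remainder to the expensive one, mirroring how the push scheduler favors low-delay recipients under upload constraints.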
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
Implementation of a Large Eddy Simulation Method Applied to Recirculating Flow in a Ventilated Room
DEFF Research Database (Denmark)
Davidson, Lars
In the present work Large Eddy Simulations are presented. The flow in a ventilated enclosure is studied. We use an explicit, two-step time-advancement scheme where the pressure is solved from a Poisson equation.
Chantler, Tracey; Lwembe, Saumu; Saliba, Vanessa; Raj, Thara; Mays, Nicholas; Ramsay, Mary; Mounier-Jack, Sandra
2016-09-15
The English health system experienced a large-scale reorganisation in April 2013. A national tri-partite delivery framework involving the Department of Health, NHS England and Public Health England was agreed and a new local operational model applied. Evidence about how health system re-organisations affect constituent public health programmes is sparse and focused on low and middle income countries. We conducted an in-depth analysis of how the English immunisation programme adapted to the April 2013 health system reorganisation, and what facilitated or hindered the delivery of immunisation services in this context. A qualitative case study methodology involving interviews and observations at national and local level was applied. Three sites were selected to represent different localities, varying levels of immunisation coverage and a range of changes in governance. Study participants included 19 national decision-makers and 56 local implementers. Two rounds of interviews and observations (immunisation board/committee meetings) occurred between December 2014 and June 2015, and September and December 2015. Interviews were audio recorded and transcribed verbatim and written accounts of observed events compiled. Data was imported into NVIVO 10 and analysed thematically. The new immunisation programme in the new health system was described as fragmented, and significant effort was expended to regroup. National tripartite arrangements required joint working and accountability; a shift from the simpler hierarchical pre-reform structure, typical of many public health programmes. New local inter-organisational arrangements resulted in ambiguity about organisational responsibilities and hindered data-sharing. Whilst making immunisation managers responsible for larger areas supported equitable resource distribution and strengthened service commissioning, it also reduced their ability to apply clinical expertise, support and evaluate immunisation providers' performance
Hurtado, Eric A; Appell, Rodney A
2009-01-01
This case series' purpose is to review a referral center's experience with complications from mesh kits. A chart review of 12 patients who presented with complications associated with transvaginal mesh kit procedures was performed. All patients underwent complete surgical removal of the mesh to treat mesh exposure, pain, or vaginal bleeding/discharge followed by an anterior or posterior repair. The mean follow-up time after surgery was 3.4 months. Eight of 12 patients had mesh that had formed a fibrotic band. Six of 12 patients had complete resolution of pain. Of the nine patients with mesh exposure, all required significant resection of the vaginal wall. No further mesh exposure occurred. The use of transvaginal mesh kits may cause previously undescribed complications such as pelvic/vaginal pain or large extrusions requiring complete removal. Removal of all mesh except the arms may cure or significantly improve these problems.
Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team
2016-11-01
We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamics modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500-year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach, using a nested domain approach to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based engineering approach to investigating the resilience of infrastructure to extreme flood events in intricate field-scale riverine systems. This work was funded by a grant from the Minnesota Dept. of Transportation.
2018-02-15
conservation equations. The closure problem hinges on the evaluation of the filtered chemical production rates. In MRA/MSR, simultaneous large-eddy... simultaneous, constrained large-eddy simulations at three different mesh levels as a means of connecting reactive scalar information at different... functions of a locally normalized subgrid Damköhler number (a measure of the distribution of inverse chemical time scales in the neighborhood of a
Large-scale micromagnetics simulations with dipolar interaction using all-to-all communications
Directory of Open Access Journals (Sweden)
Hiroshi Tsukahara
2016-05-01
Full Text Available We implement on our micromagnetics simulator low-complexity parallel fast-Fourier-transform algorithms, which reduce the frequency of all-to-all communications from six to two times. Almost all of the computation time of a micromagnetics simulation is taken up by the calculation of the magnetostatic field, which can be calculated using the fast Fourier transform method. The results show that the simulation time decreases with good scalability, even when the micromagnetics simulation is performed using 8192 physical cores. This high parallelization efficiency enables large-scale micromagnetics simulations using over one billion cells to be performed. Because massively parallel computing is needed to simulate the magnetization dynamics of real permanent magnets composed of many micron-sized grains, it is expected that our simulator will reveal how magnetization dynamics influences the coercivity of the permanent magnet.
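The expensive magnetostatic-field step above is an FFT-accelerated convolution. The toy below illustrates the convolution-theorem speedup in one dimension with a hypothetical scalar kernel; the real simulator convolves a 3D tensor demagnetization kernel, so this is a sketch of the idea only.

```python
import numpy as np

def field_direct(m, kernel):
    """O(N^2) reference: periodic convolution of moments with a kernel."""
    n = len(m)
    return np.array([sum(kernel[(i - j) % n] * m[j] for j in range(n))
                     for i in range(n)])

def field_fft(m, kernel):
    """O(N log N): the same periodic convolution via the convolution
    theorem, which is what makes billion-cell field evaluations tractable."""
    return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(m)))
```

The two routines agree to machine precision; only the FFT variant scales to large meshes.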
SUPERIMPOSED MESH PLOTTING IN MCNP
Energy Technology Data Exchange (ETDEWEB)
J. HENDRICKS
2001-02-01
The capability to plot superimposed meshes has been added to MCNP™. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.
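A superimposed mesh decouples variance-reduction data from the physical cell geometry: each particle position only needs to be located within a rectangular grid. A minimal sketch of that lookup follows; the function and parameter names are hypothetical, not MCNP's.

```python
def mesh_index(point, origin, spacing, shape):
    """Locate the cell of a rectangular mesh superimposed over the physical
    geometry that contains `point`; returns None when the point lies outside
    the mesh. This is the basic lookup a superimposed weight-window mesh needs."""
    idx = []
    for x, x0, h, n in zip(point, origin, spacing, shape):
        i = int((x - x0) // h)          # cell index along this axis
        if not 0 <= i < n:
            return None                  # outside the superimposed mesh
        idx.append(i)
    return tuple(idx)
```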
Monitoring and evaluation of wire mesh forming life
Enemuoh, Emmanuel U.; Zhao, Ping; Kadlec, Alec
2018-03-01
Forming tables are used with stainless steel wire mesh conveyor belts to produce a variety of products. The forming tables typically run continuously for several days, with some hours of scheduled downtime for maintenance, cleaning and part replacement after several weeks of operation. The wire mesh conveyor belts show large variation in their remaining life due to associated variations in their nominal thicknesses. Currently the industry depends on seasoned operators to determine the replacement time for the wire mesh formers. The drawbacks of this approach are inconsistency in the judgements made by different operators and the lack of data that could be used to develop a decision-making system offering more consistent wire mesh life prediction and replacement timing. In this study, diagnostic measurements of the health of a wire mesh former are investigated and developed. The wire mesh quality characteristics considered are thermal measurement, tension, gage thickness, and wire mesh wear. The results show that real-time thermal sensor and wear measurements provide suitable data for the estimation of wire mesh failure and can therefore be used as diagnostic parameters for developing a structural health monitoring (SHM) system for stainless steel wire mesh formers.
SALOME PLATFORM and TetGen for Polyhedral Mesh Generation
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Yong; Park, Chan Eok; Kim, Shin Whan [KEPCO E and C Company, Inc., Daejeon (Korea, Republic of)
2014-05-15
SPACE and CUPID use unstructured meshes, and they also require a reliable mesh generation system. The combination of a CAD system and a mesh generation system is necessary to cope with a large number of cells and a complex fluid system with structural materials inside. In the past, the CAD system Pro/Engineer and the mesh generator Pointwise were evaluated for this application, but the cost of these commercial tools can be a great burden. Therefore, efforts have been made to set up a mesh generation system with open-source programs. TetGen has been evaluated with a focus on polyhedral mesh generation. In this paper, SALOME is evaluated in conjunction with TetGen. Section 2 reviews the CAD and mesh generation capability of SALOME. The SALOME and TetGen codes are being integrated to construct a robust polyhedral mesh generator. Edge removal on flat surfaces and vertex reattachment to the solid are two challenging tasks. It is worth pointing out that the Python scripting capability of SALOME should be fully utilized in future investigations.
arXiv Stochastic locality and master-field simulations of very large lattices
Lüscher, Martin
2018-01-01
In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.
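The master-field idea, one large configuration with expectation values estimated from translation averages and errors from the summed autocovariance within the correlation range, can be illustrated on a toy Gaussian field. This is a simplified sketch on synthetic data, not lattice QCD; the error recipe and names are assumptions for illustration.

```python
import numpy as np

def master_field_estimate(phi, r):
    """Estimate <phi> and its statistical error from a single 2D field.

    The estimate is the volume average over all translations; the squared
    error sums the sample autocovariance over separations up to r, beyond
    which correlations are taken as negligible (the mass-gap assumption)."""
    v = phi.size
    mean = float(phi.mean())
    d = phi - mean
    cov = 0.0
    for ax in range(-r, r + 1):
        for ay in range(-r, r + 1):
            cov += float(np.mean(d * np.roll(np.roll(d, ax, 0), ay, 1)))
    return mean, float(np.sqrt(max(cov, 0.0) / v))
```

On a field with short-range correlations, a single large "lattice" already pins down the mean to well within the quoted error, which is the point of the master-field strategy.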
Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation
International Nuclear Information System (INIS)
Hoshi, T; Fujiwara, T
2009-01-01
An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit
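The autocorrelation diagnostics mentioned above can be computed efficiently with the Wiener-Khinchin theorem. The following is a generic signal-processing sketch, not the authors' tooling; `integral_scale` is a hypothetical helper illustrating how the autocorrelation bounds the number of independent samples in a time record.

```python
import numpy as np

def autocorrelation(x):
    """Biased sample autocorrelation of a 1D signal, computed with a
    zero-padded FFT (Wiener-Khinchin) to avoid circular wrap-around."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                 # zero-pad to length 2n
    acf = np.fft.irfft(f * np.conj(f))[:n] / n
    return acf / acf[0]                       # normalize so acf[0] == 1

def integral_scale(acf):
    """Integral time scale: sum the acf up to its first zero crossing,
    a common (if rough) way to count independent samples in a record."""
    total = 0.5                               # half weight at lag zero
    for r in acf[1:]:
        if r <= 0.0:
            break
        total += float(r)
    return total
```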
Block-structured Adaptive Mesh Refinement - Theory, Implementation and Application
Directory of Open Access Journals (Sweden)
Deiterding Ralf
2011-12-01
Full Text Available Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed-memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included, and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
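The core SAMR loop flags cells where the solution varies sharply, buffers the flags, and clusters them into refinement blocks. A 1D toy of that flag-buffer-cluster step is sketched below; it illustrates the idea only, not the implementation described in the notes.

```python
def refinement_blocks(u, tol, buffer=1):
    """Flag cells whose jump to the right neighbor exceeds tol, grow the
    flags by a buffer, and cluster contiguous flags into block ranges,
    mimicking the flag-buffer-cluster step of block-structured AMR."""
    n = len(u)
    flags = [False] * n
    for i in range(n - 1):
        if abs(u[i + 1] - u[i]) > tol:
            flags[i] = flags[i + 1] = True
    # Grow flagged regions so the refined patch safely contains the feature.
    grown = list(flags)
    for i, f in enumerate(flags):
        if f:
            for j in range(max(0, i - buffer), min(n, i + buffer + 1)):
                grown[j] = True
    # Cluster contiguous flagged cells into (start, end) blocks, end exclusive.
    blocks, i = [], 0
    while i < n:
        if grown[i]:
            j = i
            while j < n and grown[j]:
                j += 1
            blocks.append((i, j))
            i = j
        else:
            i += 1
    return blocks
```

A step discontinuity thus yields a single refinement block centered on the jump, with one buffer cell on each side.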
Realizability conditions for the turbulent stress tensor in large-eddy simulation
Vreman, A.W.; Geurts, Bernardus J.; Kuerten, Johannes G.M.
1994-01-01
The turbulent stress tensor in large-eddy simulation is examined from a theoretical point of view. Realizability conditions for the components of this tensor are derived, which hold if and only if the filter function is positive. The spectral cut-off, one of the filters frequently used in large-eddy
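For a symmetric 3x3 stress tensor, realizability conditions of this type (non-negative diagonal components, generalized Cauchy-Schwarz inequalities for the off-diagonals, and a non-negative determinant) are together equivalent to positive semi-definiteness, so they can be checked directly. The sketch below assumes a plain NumPy array as input and is an illustration of the conditions, not code from the cited work.

```python
import numpy as np

def is_realizable(tau, tol=1e-12):
    """Check realizability of a symmetric 3x3 turbulent stress tensor:
    tau_aa >= 0, tau_ab^2 <= tau_aa * tau_bb, det(tau) >= 0.
    These are the non-negativity conditions on all principal minors,
    i.e. positive semi-definiteness of tau."""
    tau = np.asarray(tau, dtype=float)
    for a in range(3):
        if tau[a, a] < -tol:                      # 1x1 principal minors
            return False
        for b in range(a + 1, 3):
            if tau[a, b]**2 > tau[a, a] * tau[b, b] + tol:  # 2x2 minors
                return False
    return bool(np.linalg.det(tau) >= -tol)       # 3x3 minor
```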
Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow
Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.
2004-01-01
The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike
Directory of Open Access Journals (Sweden)
Guangtao Zhang
2015-01-01
Full Text Available In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large-disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large-disturbance hydroturbine models also have disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. With this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large-disturbance conditions. To validate the proposed method, both large-disturbance and small-disturbance simulations of a single hydrounit supplying a resistive, isolated load were conducted. The simulation results were shown to be consistent with those of field tests. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large-disturbance conditions.
International Nuclear Information System (INIS)
Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.
2015-01-01
Modern long-baseline neutrino experiments, like the NOvA experiment at Fermilab, require large-scale, compute-intensive simulations of their neutrino beam fluxes and backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing these high-statistics levels of simulation has been made possible for NOvA through the use of the Open Science Grid and through large-scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large-scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)
Prasad, K.
2017-12-01
Atmospheric transport is usually simulated with weather models, e.g. the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid-scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of several natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft- and tower-based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next-generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains. Simulated integrated methane enhancements will be presented and
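The one-way nudging coupling described above reduces, in its simplest form, to a relaxation formula: u_new = u + (dt/tau)(u_meso - u). The sketch below is a deliberately minimal illustration of that formula, with hypothetical names; it is not the actual FDS-WRF coupling code.

```python
def nudge(u_les, u_meso, dt, tau):
    """One-way nudging of a fine-scale (LES) velocity field toward a
    mesoscale value: relax each component with time scale tau, as in
    du/dt = (u_meso - u) / tau, discretized with a forward Euler step."""
    return [u + (dt / tau) * (um - u) for u, um in zip(u_les, u_meso)]
```

Each call pulls the LES field a fraction dt/tau of the way toward the mesoscale field, so the large scales track WRF while the LES supplies the building-resolving fluctuations.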
Multi-phase Volume Segmentation with Tetrahedral Mesh
DEFF Research Database (Denmark)
Nguyen Trung, Tuan; Dahl, Vedrana Andersen; Bærentzen, Jakob Andreas
Volume segmentation is efficient for reconstructing material structure, which is important for several analyses, e.g. simulation with the finite element method and measurement of quantitative information like surface area, surface curvature, volume, etc. We are concerned with the representations of the 3D volumes, which can be categorized into two groups: fixed voxel grids [1] and unstructured meshes [2]. Of these two representations, voxel grids are more popular, since manipulating a fixed grid is easier than manipulating an unstructured mesh, but they are less efficient for quantitative measurements. In many cases the voxel grids are converted to explicit meshes; however, the conversion may reduce the accuracy of the segmentations, and the effort for meshing is also not trivial. On the other side, methods using unstructured meshes have difficulty in handling topology changes. To reduce the complexity
Large scale statistics for computational verification of grain growth simulations with experiments
International Nuclear Information System (INIS)
Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.
2002-01-01
by curvature driven motion. This method utilizes gradient-weighted moving finite elements (GWMFE) combined with algorithms for performing topological reconnections on the evolving mesh. We have previously shown a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations obtained from the EBSD measurements. Using the same technique, we obtained 5170-grain data from a thin aluminum film with a columnar grain structure and compared the computational results with experiments.
A general coarse and fine mesh solution scheme for fluid flow modeling in VHTRs
International Nuclear Information System (INIS)
Clifford, I; Ivanov, K; Avramova, M.
2011-01-01
Coarse-mesh Computational Fluid Dynamics (CFD) methods offer several advantages over traditional coarse-mesh methods for the safety analysis of helium-cooled graphite-moderated Very High Temperature Reactors (VHTRs). This relatively new approach opens up the possibility for system-wide calculations to be carried out using a consistent set of field equations throughout the calculation, and subsequently the possibility for hybrid coarse/fine mesh or hierarchical multi-scale CFD simulations. To date, a consistent methodology for hierarchical multi-scale CFD has not been developed. This paper describes work carried out in the initial development of a multi-scale CFD solver intended to be used for the safety analysis of VHTRs. The VHTR is considered on any scale to consist of a homogenized two-phase mixture of fluid and stationary solid material of varying void fraction. A consistent set of conservation equations was selected such that they reduce to the single-phase conservation equations for the case where the void fraction is unity. The discretization of the conservation equations uses a new pressure interpolation scheme capable of capturing the discontinuity in pressure across relatively large changes in void fraction. Based on this, a test solver was developed which supports fully unstructured meshes for three-dimensional time-dependent compressible flow problems, including buoyancy effects. For typical VHTR flow phenomena the new solver shows promise as an effective candidate for predicting the flow behavior on multiple scales, as it is capable of modeling both fine-mesh single-phase flows and coarse-mesh flows in homogenized regions containing both fluid and solid materials. (author)
ROSA-IV Large Scale Test Facility (LSTF) system description for second simulated fuel assembly
International Nuclear Information System (INIS)
1990-10-01
The ROSA-IV Program's Large Scale Test Facility (LSTF) is a test facility for integral simulation of thermal-hydraulic response of a pressurized water reactor (PWR) during small break loss-of-coolant accidents (LOCAs) and transients. In this facility, the PWR core nuclear fuel rods are simulated using electric heater rods. The simulated fuel assembly which was installed during the facility construction was replaced with a new one in 1988. The first test with this second simulated fuel assembly was conducted in December 1988. This report describes the facility configuration and characteristics as of this date (December 1988) including the new simulated fuel assembly design and the facility changes which were made during the testing with the first assembly as well as during the renewal of the simulated fuel assembly. (author)
Fair packet scheduling in Wireless Mesh Networks
Nawab, Faisal
2014-02-01
In this paper we study the interactions of TCP and IEEE 802.11 MAC in Wireless Mesh Networks (WMNs). We use a Markov chain to capture the behavior of TCP sessions, particularly the impact on network throughput due to the effect of queue utilization and packet relaying. A closed form solution is derived to numerically determine the throughput. Based on the developed model, we propose a distributed MAC protocol called Timestamp-ordered MAC (TMAC), aiming to alleviate the unfairness problem in WMNs. TMAC extends CSMA/CA by scheduling data packets based on their age. Prior to transmitting a data packet, a transmitter broadcasts a request control message appended with a timestamp to a selected list of neighbors. It can proceed with the transmission only if it receives a sufficient number of grant control messages from these neighbors. A grant message indicates that the associated data packet has the lowest timestamp of all the packets pending transmission at the local transmit queue. We demonstrate that a loose ordering of timestamps among neighboring nodes is sufficient for enforcing local fairness, subsequently leading to flow rate fairness in a multi-hop WMN. We show that TMAC can be implemented using the control frames in IEEE 802.11, and thus can be easily integrated in existing 802.11-based WMNs. Our simulation results show that TMAC achieves excellent resource allocation fairness while maintaining over 90% of maximum link capacity across a large number of topologies.
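The grant rule at the heart of TMAC, where a neighbor grants only if the requested packet's timestamp is the earliest pending locally and the sender needs a sufficient number of grants, can be sketched as follows. Function names and the explicit quorum parameter are illustrative assumptions, not identifiers from the paper.

```python
def grant(request_ts, local_queue):
    """A neighbor grants a transmission request iff the requested packet's
    timestamp is no later than every packet pending in its own transmit
    queue, enforcing a (loose) timestamp order among neighbors."""
    return all(request_ts <= ts for ts in local_queue)

def may_transmit(request_ts, neighbor_queues, quorum):
    """The sender proceeds once it collects enough grant messages from
    the polled neighbors (a sufficient number, per the protocol sketch)."""
    grants = sum(grant(request_ts, q) for q in neighbor_queues)
    return grants >= quorum
```

Because ordering only needs to hold loosely among neighbors, a node blocked by one busy neighbor can still transmit once the quorum of grants is met.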
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocity both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. This model applies a semi-implicit, semi-Lagrangian (SISL) scheme in solving the dynamic wave equations, and with the assistance of the multi-mesh method it also adaptively chooses the dynamic wave equation only in areas of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.
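The CFL restriction mentioned above couples the stable explicit time step to mesh spacing and wave speed, dt <= C dx / |u|max. The minimal sketch below shows why fine inundation meshes with fast flood waves force small steps; the numbers are illustrative only.

```python
def cfl_timestep(dx, max_speed, courant=0.9):
    """Largest stable explicit time step under the CFL condition
    dt <= courant * dx / max_speed. Finer meshes (small dx) and faster
    flow (large max_speed) both shrink the admissible step."""
    if max_speed <= 0.0:
        raise ValueError("max_speed must be positive")
    return courant * dx / max_speed
```

For example, a 100 m mesh with a 2 m/s wave allows a step two orders of magnitude larger than a 1 m mesh with a 5 m/s wave, which is exactly the cost pressure that adaptive multi-resolution meshing relieves.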
Directory of Open Access Journals (Sweden)
Humin Lei
2017-01-01
Full Text Available An adaptive mesh iteration method based on Hermite pseudospectral collocation is described for trajectory optimization. The method uses the Legendre-Gauss-Lobatto points as interpolation points; the state equations are then approximated by Hermite interpolating polynomials. The method allows for changes in both the number of mesh points and the number of mesh intervals, and produces significantly smaller meshes for a given accuracy tolerance. The derived relative error estimate is then used to trade the number of mesh points against the number of mesh intervals. The adaptive mesh iteration method is applied successfully to trajectory optimization examples for a Maneuverable Reentry Research Vehicle, and the simulation results show that the adaptive mesh iteration method has many advantages.
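The Legendre-Gauss-Lobatto points used as interpolation points are the interval endpoints together with the roots of the derivative of the degree-n Legendre polynomial, P_n'(x). They can be computed with NumPy's Legendre-series utilities; this is a generic sketch, not the authors' code.

```python
import numpy as np

def lgl_points(n):
    """Legendre-Gauss-Lobatto points on [-1, 1] for polynomial degree n:
    the endpoints -1 and 1 together with the n-1 roots of P_n'(x)."""
    if n < 1:
        raise ValueError("need n >= 1")
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # P_n in the Legendre basis
    dc = np.polynomial.legendre.legder(c)         # coefficients of P_n'
    interior = np.polynomial.legendre.legroots(dc)
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))
```

For n = 4 this reproduces the classical nodes -1, -sqrt(3/7), 0, sqrt(3/7), 1.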
Energy mesh optimization for multi-level calculation schemes
International Nuclear Information System (INIS)
Mosca, P.; Taofiki, A.; Bellier, P.; Prevost, A.
2011-01-01
The industrial calculations of third generation nuclear reactors are based on sophisticated strategies of homogenization and collapsing at different spatial and energetic levels. An important issue to ensure the quality of these calculation models is the choice of the collapsing energy mesh. In this work, we show a new approach to generate optimized energy meshes starting from the SHEM 281-group library. The optimization model is applied on 1D cylindrical cells and consists of finding an energy mesh which minimizes the errors between two successive collision probability calculations. The former is realized over the fine SHEM mesh with Livolant-Jeanpierre self-shielded cross sections and the latter is performed with collapsed cross sections over the energy mesh being optimized. The optimization is done by the particle swarm algorithm implemented in the code AEMC, and multigroup flux solutions are obtained from standard APOLLO2 solvers. By this new approach, a set of new optimized meshes which range from 10 to 50 groups has been defined for PWR and BWR calculations. This set will allow users to adapt the energy detail of the solution to the complexity of the calculation (assembly, multi-assembly, two-dimensional whole core). Some preliminary verifications, in which the accuracy of the new meshes is measured against a direct 281-group calculation, show that the 30-group optimized mesh offers a good compromise between simulation time and accuracy for a standard 17 x 17 UO2 assembly with and without control rods. (author)
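The collapsing step that the energy-mesh optimization wraps around is flux-weighted condensation: each coarse-group cross section is the average of the fine-group cross sections weighted by the multigroup flux, which preserves reaction rates by construction. A minimal list-based sketch of that formula follows; it is an illustration, not APOLLO2's implementation.

```python
def collapse(sigma, flux, mesh):
    """Flux-weighted condensation of fine-group cross sections onto a
    coarse energy mesh: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g).
    `mesh` lists the fine-group index that starts each coarse group."""
    edges = list(mesh) + [len(sigma)]
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = sum(flux[lo:hi])
        out.append(sum(s * f for s, f in zip(sigma[lo:hi], flux[lo:hi])) / w)
    return out
```

Collapsing four fine groups onto two coarse groups this way leaves every coarse-group reaction rate equal to the sum of its fine-group reaction rates.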
Mesh erosion after abdominal sacrocolpopexy.
Kohli, N; Walsh, P M; Roat, T W; Karram, M M
1998-12-01
To report our experience with erosion of permanent suture or mesh material after abdominal sacrocolpopexy. A retrospective chart review was performed to identify patients who underwent sacrocolpopexy by the same surgeon over 8 years. Demographic data, operative notes, hospital records, and office charts were reviewed after sacrocolpopexy. Patients with erosion of either suture or mesh were treated initially with conservative therapy followed by surgical intervention as required. Fifty-seven patients underwent sacrocolpopexy using synthetic mesh during the study period. The mean (range) postoperative follow-up was 19.9 (1.3-50) months. Seven patients (12%) had erosions after abdominal sacrocolpopexy with two suture erosions and five mesh erosions. Patients with suture erosion were asymptomatic compared with patients with mesh erosion, who presented with vaginal bleeding or discharge. The mean (+/-standard deviation) time to erosion was 14.0+/-7.7 (range 4-24) months. Both patients with suture erosion were treated conservatively with estrogen cream. All five patients with mesh erosion required transvaginal removal of the mesh. Mesh erosion can follow abdominal sacrocolpopexy over a long time, and usually presents as vaginal bleeding or discharge. Although patients with suture erosion can be managed successfully with conservative treatment, patients with mesh erosion require surgical intervention. Transvaginal removal of the mesh with vaginal advancement appears to be an effective treatment in patients failing conservative management.
Shadowfax: Moving mesh hydrodynamical integration code
Vandenbroucke, Bert
2016-05-01
Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).
Initial condition effects on large scale structure in numerical simulations of plane mixing layers
McMullan, W. A.; Garrett, S. J.
2016-01-01
In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed: one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.
In vitro extracellular matrix model to evaluate stroma cell response to transvaginal mesh.
Wu, Ming-Ping; Huang, Kuan-Hui; Long, Cheng-Yu; Yang, Chau-Chen; Tong, Yat-Ching
2014-04-01
The use of surgical mesh for female pelvic floor reconstruction has increased in recent years. However, there is a paucity of information about the biological responses of host stroma cells to different meshes. This study aimed to establish an in vitro experimental model to study the micro-environment of extracellular matrix (ECM) with embedded mesh and the behaviors of stroma cells toward different synthetic meshes. A Matrigel multi-cellular co-culture system with embedded mesh was used to evaluate the interaction of stroma cells and synthetic mesh in a simulated ECM environment. Human umbilical vein endothelial cells (HUVEC) and NIH3T3 fibroblasts were inoculated in the system. The established multi-cellular Matrigel co-culture system was used to detect stroma cell recruitment and tube formation ability for different synthetic meshes. HUVEC and NIH3T3 cells were recruited into the mesh interstices and organized into tube-like structures in the type I mesh materials from Perigee, Marlex and Prolift 24 hr after cell inoculation. In contrast, there was little recruitment of HUVEC and NIH3T3 cells into the type III mesh of the intra-vaginal sling (IVS). The Matrigel multi-cellular co-culture system with embedded mesh offers a useful in vitro model to study the biological behaviors of stroma cells in response to different types of synthetic meshes. The system can help to select ideal mesh candidates before actual implantation into the human body. © 2013 Wiley Periodicals, Inc.
Notes on the Mesh Handler and Mesh Data Conversion
International Nuclear Information System (INIS)
Lee, Sang Yong; Park, Chan Eok
2009-01-01
At the outset of the development of the thermal-hydraulic code (THC), efforts were made to utilize recent computational fluid dynamics technology. Among these, the unstructured mesh approach was adopted to relax the restrictions of the grid handling system. As a natural consequence, a mesh handler (MH) was developed to manipulate the complex mesh data from the mesh generator. The mesh generator Gambit was chosen at the beginning of the code's development, but a new mesh generator, Pointwise, was later introduced to obtain more flexible mesh generation capability. An open-source code, Paraview, was chosen as a post-processor, since it can handle unstructured as well as structured mesh data. The overall data processing system for THC is shown in Figure-1. There are various file formats for saving mesh data to permanent storage; a couple of dozen file formats are found even in the above-mentioned programs. A competent mesh handler should be able to import or export mesh data in as many formats as possible. In reality, however, two aspects make this competence difficult to achieve. The first is the time and effort required to program the interface code. The second, even more difficult, aspect is that many mesh data file formats are proprietary. In this paper, some experience from the development of the format conversion programs is presented. The file formats involved are the Gambit neutral format, the Ansys-CFX grid file format, the VTK legacy file format, the Nastran format and CGNS.
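Of the formats listed, the VTK legacy ASCII format is simple enough to illustrate with a minimal writer. The sketch below is not from the paper; it follows the published VTK legacy layout (POINTS / CELLS / CELL_TYPES sections) and writes a single-tetrahedron unstructured grid:

```python
def write_vtk_legacy(path, points, cells):
    """Write an unstructured mesh in VTK legacy ASCII format.

    points: list of (x, y, z) tuples; cells: list of point-index tuples,
    here assumed to be tetrahedra (VTK cell type 10).
    """
    lines = ["# vtk DataFile Version 3.0", "mesh", "ASCII",
             "DATASET UNSTRUCTURED_GRID",
             f"POINTS {len(points)} float"]
    lines += [" ".join(f"{c:g}" for c in p) for p in points]
    # CELLS header needs the total size of the connectivity list
    size = sum(len(c) + 1 for c in cells)
    lines.append(f"CELLS {len(cells)} {size}")
    lines += [f"{len(c)} " + " ".join(map(str, c)) for c in cells]
    lines.append(f"CELL_TYPES {len(cells)}")
    lines += ["10"] * len(cells)   # 10 = VTK_TETRA
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# one tetrahedron
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
write_vtk_legacy("tet.vtk", pts, [(0, 1, 2, 3)])
```

The resulting file opens directly in Paraview; a real converter would add point and cell data sections in the same style.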
Canè, Federico; Verhegghe, Benedict; De Beule, Matthieu; Bertrand, Philippe B.; Van der Geest, Rob J.; Segers, Patrick; De Santis, Gianluca
2018-01-01
With cardiovascular disease (CVD) remaining the primary cause of death worldwide, early detection of CVDs becomes essential. The intracardiac flow is an important component of ventricular function, motion kinetics, wash-out of ventricular chambers, and ventricular energetics. Coupling between Computational Fluid Dynamics (CFD) simulations and medical images can play a fundamental role in terms of patient-specific diagnostic tools. From a technical perspective, CFD simulations with moving boun...
On the rejection-based algorithm for simulation and analysis of large-scale reaction networks
Energy Technology Data Exchange (ETDEWEB)
Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)
2015-06-28
Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing the propensity updates as much as possible. In this paper, we analyze the performance of this algorithm in detail and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and form trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure, when needed, is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
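The rejection idea can be illustrated with a toy thinning-style sampler. This is a simplified sketch, not the authors' implementation: propensity upper bounds are taken over an assumed ±10% fluctuation interval around the current state, candidates are drawn from the bounds, and the exact propensity is evaluated only in the acceptance test:

```python
import math
import random

def rssa(x, reactions, bounds_frac=0.1, t_end=1.0, seed=1):
    """Toy rejection-based SSA sketch.

    x: dict species -> count; reactions: list of (propensity_fn, stoich).
    Upper bounds on propensities are reused until the state leaves an
    assumed fluctuation interval, postponing exact propensity updates.
    """
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        # state interval and propensity upper bounds
        lo = {s: math.floor(n * (1 - bounds_frac)) for s, n in x.items()}
        hi = {s: math.ceil(n * (1 + bounds_frac)) for s, n in x.items()}
        a_hi = [a(hi) for a, _ in reactions]
        a0_hi = sum(a_hi)
        if a0_hi == 0:
            break
        # reuse the bounds until the state leaves [lo, hi]
        while all(lo[s] <= x[s] <= hi[s] for s in x) and t < t_end:
            t += rng.expovariate(a0_hi)
            # candidate reaction drawn proportionally to its upper bound
            r = rng.uniform(0.0, a0_hi)
            j = 0
            while r > a_hi[j]:
                r -= a_hi[j]
                j += 1
            a_fn, stoich = reactions[j]
            if rng.random() * a_hi[j] <= a_fn(x):  # rejection test
                for s, d in stoich.items():
                    x[s] += d
    return x

# toy network: A -> B at rate 0.5 * A
state = rssa({"A": 100, "B": 0},
             [(lambda s: 0.5 * s["A"], {"A": -1, "B": 1})],
             t_end=10.0)
```

Rejections waste a candidate draw but never require recomputing all propensities, which is the source of the speed-up for large networks.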
Performance of the hybrid wireless mesh protocol for wireless mesh networks
DEFF Research Database (Denmark)
Boye, Magnus; Staalhagen, Lars
2010-01-01
Wireless mesh networks offer a new way of providing end-user access and deploying network infrastructure. Though mesh networks offer a price competitive solution to wired networks, they also come with a set of new challenges such as optimal path selection, channel utilization, and load balancing....... and proactive. Two scenarios of different node density are considered for both path selection modes. The results presented in this paper are based on a simulation model of the HWMP specification in the IEEE 802.11s draft 4.0 implemented in OPNET Modeler....
Large-signal, dynamic simulation of the slowpoke-3 nuclear heating reactor
International Nuclear Information System (INIS)
Tseng, C.M.; Lepp, R.M.
1983-07-01
A 2 MWt nuclear reactor, called SLOWPOKE-3, is being developed at the Chalk River Nuclear Laboratories (CRNL). This reactor, which is cooled by natural circulation, is designed to produce hot water for commercial space heating and perhaps generate some electricity in remote locations where the costs of alternate forms of energy are high. A large-signal, dynamic simulation of this reactor, without closed-loop control, was developed and implemented on a hybrid computer, using the basic equations of conservation of mass, energy and momentum. The natural circulation of downcomer flow in the pool was simulated using a special filter, capable of modelling various flow conditions. The simulation was then used to study the intermediate and long-term transient response of SLOWPOKE-3 to large disturbances, such as loss of heat sink, loss of regulation, daily load following, and overcooling of the reactor coolant. Results of the simulation show that none of these disturbances produces hazardous transients.
Heavy-Ion Collimation at the Large Hadron Collider Simulations and Measurements
AUTHOR|(CDS)2083002; Wessels, Johannes Peter; Bruce, Roderik
The CERN Large Hadron Collider (LHC) stores and collides proton and $^{208}$Pb$^{82+}$ beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets can still scatter out of the collimation system. When they irradiate the superconducting LHC magnets, the latter risk quenching (losing their superconducting property). These secondary collimation losses can potentially impose a limitation on the stored heavy-ion beam energy. Therefore, their distribution in the LHC needs to be understood through sophisticated simulations. Such simulation tools must accurately simulate the particle motion of many different nuclides in the magnetic LHC lattice and simulate their interaction with t...
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed in a parametric study involving interturbine distances and hub heights of the turbines. Further insight into the eddies responsible for the power generation has been provided by the scaling analysis of two-dimensional premultiplied spectra of the MKE flux. The LES code is developed in a high-Reynolds-number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in large wind farms and to identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placements that take advantage of the large-scale structures contributing to wind turbine power.
Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.
1990-01-01
An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.
Establishment of DNS database in a turbulent channel flow by large-scale simulations
Abe, Hiroyuki; Kawamura, Hiroshi; 阿部 浩幸; 河村 洋
2008-01-01
In the present study, we establish statistical DNS (Direct Numerical Simulation) database in a turbulent channel flow with passive scalar transport at high Reynolds numbers and make the data available at our web site (http://murasun.me.noda.tus.ac.jp/turbulence/). The established database is reported together with the implementation of large-scale simulations, representative DNS results and results on turbulence model testing using the DNS data.
Hoffie, Andreas Frank
Large eddy simulation (LES) combined with the one-dimensional turbulence (ODT) model is used to simulate spatially developing turbulent reacting shear layers with high heat release and high Reynolds numbers. The LES-ODT results are compared to results from direct numerical simulations (DNS) for model development and validation purposes. The LES-ODT approach is based on LES solutions for momentum and pressure on a coarse grid and solutions for momentum and reactive scalars on a fine, one-dimensional, but three-dimensionally coupled ODT subgrid, which is embedded into the LES computational domain. Although one-dimensional, all three velocity components are transported along the ODT domain. The low-dimensional spatial and temporal resolution of the subgrid scales describes a new modeling paradigm, referred to as autonomous microstructure evolution (AME) models, which resolve the multiscale nature of turbulence down to the Kolmogorov scales. While this new concept aims to mimic the turbulent cascade and to reduce the number of input parameters, AME also enables regime-independent combustion modeling, capable of simulating multiphysics problems simultaneously. The LES as well as the one-dimensional transport equations are solved using an incompressible, low-Mach-number approximation; however, the effects of heat release are accounted for through a variable density computed from the ideal gas equation of state, based on temperature variations. The computations are carried out on a three-dimensional structured mesh, which is stretched in the transverse direction. While the LES momentum equation is integrated with a third-order Runge-Kutta time integration, the time integration at the ODT level is accomplished with an explicit forward-Euler method. Spatial finite-difference schemes of third (LES) and first (ODT) order are utilized, and a fully consistent fractional-step method at the LES level is used. Turbulence closure at the LES level is achieved by utilizing the Smagorinsky
Meshed split skin graft for extensive vitiligo
Directory of Open Access Journals (Sweden)
Srinivas C
2004-05-01
Full Text Available A 30-year-old female presented with generalized stable vitiligo involving large areas of the body. Since large areas were to be treated, it was decided to perform a meshed split skin graft. A phototoxic blister over the recipient site was induced by applying 8-MOP solution followed by exposure to UVA. The split skin graft was harvested from the donor area with a Padgett dermatome and meshed with an ampligreffe to increase the size of the graft four times. Significant pigmentation of the depigmented skin was seen after 5 months. This procedure helps to cover large recipient areas when pigmented donor skin is limited, with minimal risk of scarring. The phototoxic blister enables easy separation of the epidermis, thus saving the time required for dermabrasion of the recipient site.
Expected Transmission Energy Route Metric for Wireless Mesh Sensor Networks
Directory of Open Access Journals (Sweden)
YanLiang Jin
2011-01-01
Mesh is a network topology that achieves high throughput and stable intercommunication. With great potential, it is expected to be the key architecture of future networks. Wireless sensor networks are an active research area with numerous workshops and conferences arranged each year. The overall performance of a WSN depends strongly on the energy consumption of the network. This paper presents a new routing metric for wireless mesh sensor networks. Results from simulation experiments reveal that the new metric algorithm improves the energy balance of the whole network and extends the lifetime of wireless mesh sensor networks (WMSNs).
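The abstract does not spell out the metric, but an "expected transmission energy" link cost is commonly formed as the per-attempt transmission energy divided by the link delivery probability (the expected number of ARQ attempts is geometric), combined with shortest-path route selection. The cost form and all numbers below are illustrative assumptions, not the paper's definition:

```python
import heapq

def expected_tx_energy(e_tx, p_delivery):
    """Expected energy to deliver one packet over a lossy link with
    retransmissions: e_tx per attempt, expected attempts = 1/p."""
    return e_tx / p_delivery

def best_path(graph, src, dst):
    """Dijkstra over links weighted by expected transmission energy.
    graph: {node: [(neighbor, e_tx, p_delivery), ...]}"""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, e_tx, p in graph.get(u, []):
            nd = d + expected_tx_energy(e_tx, p)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the route
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# two reliable short hops beat one lossy direct hop (hypothetical numbers)
g = {"A": [("B", 1.0, 0.9), ("C", 1.0, 0.4)],
     "B": [("C", 1.0, 0.9)]}
route, cost = best_path(g, "A", "C")   # route == ["A", "B", "C"]
```

Under this cost, a lossy direct link (expected 2.5 units) loses to two reliable hops (expected 2.22 units), which is the energy-balancing behavior the metric aims for.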
VerHulst, Claire; Meneveau, Charles
2014-02-01
In this study, we address the question of how kinetic energy is entrained into large wind turbine arrays and, in particular, how large-scale flow structures contribute to such entrainment. Previous research has shown this entrainment to be an important limiting factor in the performance of very large arrays where the flow becomes fully developed and there is a balance between the forcing of the atmospheric boundary layer and the resistance of the wind turbines. Given the high Reynolds numbers and domain sizes on the order of kilometers, we rely on wall-modeled large eddy simulation (LES) to simulate turbulent flow within the wind farm. Three-dimensional proper orthogonal decomposition (POD) analysis is then used to identify the most energetic flow structures present in the LES data. We quantify the contribution of each POD mode to the kinetic energy entrainment and its dependence on the layout of the wind turbine array. The primary large-scale structures are found to be streamwise, counter-rotating vortices located above the height of the wind turbines. While the flow is periodic, the geometry is not invariant to all horizontal translations due to the presence of the wind turbines and thus POD modes need not be Fourier modes. Differences of the obtained modes with Fourier modes are documented. Some of the modes are responsible for a large fraction of the kinetic energy flux to the wind turbine region. Surprisingly, more flow structures (POD modes) are needed to capture at least 40% of the turbulent kinetic energy, for which the POD analysis is optimal, than are needed to capture at least 40% of the kinetic energy flux to the turbines. For comparison, we consider the cases of aligned and staggered wind turbine arrays in a neutral atmospheric boundary layer as well as a reference case without wind turbines. While the general characteristics of the flow structures are robust, the net kinetic energy entrainment to the turbines depends on the presence and relative
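Snapshot POD of simulation data of this kind is commonly computed through an SVD of the mean-subtracted snapshot matrix. The sketch below uses synthetic separable "structures" rather than wind-farm LES data, so the modes and energies are purely illustrative:

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD via SVD: columns of `snapshots` are flattened
    instantaneous fields. Returns spatial modes (columns of U),
    singular values, and temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean              # decompose the fluctuations
    modes, sigma, vt = np.linalg.svd(fluct, full_matrices=False)
    coeffs = np.diag(sigma) @ vt          # temporal coefficients
    return modes, sigma, coeffs

# synthetic snapshots: two separable structures plus weak noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 64)
times = np.linspace(0.0, 1.0, 40)
snaps = np.array([3.0 * np.sin(x) * np.cos(2 * np.pi * t)
                  + np.cos(2 * x) * np.sin(2 * np.pi * t)
                  + 0.01 * rng.standard_normal(64)
                  for t in times]).T      # shape (space, time)

modes, sigma, coeffs = snapshot_pod(snaps)
energy_fraction = sigma**2 / np.sum(sigma**2)  # modal energy content
```

The ranking by `energy_fraction` is optimal for kinetic energy capture; as the abstract notes, that ranking need not be optimal for capturing the kinetic energy *flux* to the turbines.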
Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues
Directory of Open Access Journals (Sweden)
Michele Farisco
2018-04-01
Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain's operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain, with the aim of overcoming the current fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, and Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.
Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition
International Nuclear Information System (INIS)
Farnudi, Bahman; Vvedensky, Dimitri D
2011-01-01
Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system sizes L observed for the onset of asymptotic scaling (L ≅ 2^12) explain the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
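The BD model itself is compact: a particle dropped onto a random column sticks at the height of its tallest neighbor, or one above its own column, whichever is higher. A minimal sketch with the interface width W, whose scaling with L and time defines the exponents α and β (parameters here are illustrative, far below the L ≅ 2^12 onset discussed above):

```python
import random

def ballistic_deposition(L, n_particles, seed=0):
    """1+1 dimensional ballistic deposition with periodic boundaries:
    a particle dropped on column i sticks at
    max(h[i-1], h[i] + 1, h[i+1])."""
    rng = random.Random(seed)
    h = [0] * L
    for _ in range(n_particles):
        i = rng.randrange(L)
        h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
    return h

def roughness(h):
    """Interface width W = sqrt(<(h - <h>)^2>)."""
    mean = sum(h) / len(h)
    return (sum((x - mean) ** 2 for x in h) / len(h)) ** 0.5

h = ballistic_deposition(256, 50_000)
w = roughness(h)
```

Measuring W over many realizations at saturation gives W ~ L^α, and before saturation W ~ t^β, the quantities whose exponents the abstract reports.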
Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions
Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William
2005-01-01
As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.
Visualization of the Flux Rope Generation Process Using Large Quantities of MHD Simulation Data
Directory of Open Access Journals (Sweden)
Y Kubota
2013-03-01
We present a new concept of analysis using visualization of large quantities of simulation data. The time development of 3D objects with high temporal resolution provides the opportunity for scientific discovery. We visualize large quantities of simulation data using the visualization application 'Virtual Aurora', based on AVS (Advanced Visual Systems), and the parallel distributed processing at the "Space Weather Cloud" at NICT, based on Gfarm technology. We introduce two results of high-temporal-resolution visualization: the magnetic flux rope generation process and dayside reconnection, using a system of magnetic field line tracing.
Water Penetration through a Superhydrophobic Mesh During a Drop Impact
Ryu, Seunggeol; Sen, Prosenjit; Nam, Youngsuk; Lee, Choongyeop
2017-01-01
When a water drop impacts a mesh having submillimeter pores, a part of the drop penetrates through the mesh if the impact velocity is sufficiently large. Here we show that different surface wettability, i.e., hydrophobicity and superhydrophobicity, leads to different water penetration dynamics on a mesh during drop impact. We show, despite the water repellence of a superhydrophobic surface, that water can penetrate a superhydrophobic mesh more easily (i.e., at a lower impact velocity) than a hydrophobic mesh, via a penetration mechanism unique to a superhydrophobic mesh. On a superhydrophobic mesh, the water penetration can occur during the drop recoil stage, which appears at a lower impact velocity than the critical impact velocity for water penetration right upon impact. We propose that this unique water penetration on a superhydrophobic mesh can be attributed to the combination of hydrodynamic focusing and the momentum transfer from the water drop when it is about to bounce off the surface, at which point the water drop recovers most of its kinetic energy due to the negligible friction on superhydrophobic surfaces.
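A common back-of-the-envelope criterion for penetration right upon impact compares the drop's dynamic pressure with the capillary entry pressure of a single pore. The sketch below is this textbook-style estimate with illustrative numbers, not the recoil-stage mechanism analyzed in the paper:

```python
import math

def critical_velocity(pore_d, sigma=0.072, rho=1000.0, theta_deg=120.0):
    """Toy estimate of the impact velocity at which the dynamic pressure
    rho*v^2/2 exceeds the capillary entry pressure of a pore,
    P_cap = -4*sigma*cos(theta)/d (positive for theta > 90 deg).

    sigma: surface tension of water [N/m]; rho: density [kg/m^3];
    theta_deg: assumed advancing contact angle.
    """
    p_cap = -4.0 * sigma * math.cos(math.radians(theta_deg)) / pore_d
    return math.sqrt(2.0 * p_cap / rho)

v_c = critical_velocity(100e-6)   # 100-micron pore, roughly 1.7 m/s
```

The paper's point is precisely that on a superhydrophobic mesh penetration can occur below this kind of impact-stage threshold, during recoil.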
Energy Technology Data Exchange (ETDEWEB)
Seeliger, Andreas; Vreydal, Daniel; Eltaliawi, Gamil; Vijayakumar, Nandhakumar [Technische Hochschule Aachen (Germany). Lehrstuhl und Inst. fuer Bergwerks- und Huettenmaschinenkunde
2009-04-28
The aim of the GrobaDyn research project is the complete modelling of a large conveyor system. With the aid of the model, a possible conversion of the existing constant-speed drives to variable-speed drives will be simulated ahead of the planning phase of this conversion; any resonance phenomena within the operating speed range will be analysed and, if necessary, counter-measures taken. (orig.)
Design and simulation of betavoltaic battery using large-grain polysilicon
International Nuclear Information System (INIS)
Yao, Shulin; Song, Zijun; Wang, Xiang; San, Haisheng; Yu, Yuxi
2012-01-01
In this paper, we present the design and simulation of a p–n junction betavoltaic battery based on large-grain polysilicon. By Monte Carlo simulation, the average penetration depth was obtained, according to which the optimal depletion region width was designed. The carrier transport model of large-grain polysilicon is used to determine the diffusion length of minority carriers. By optimizing the doping concentration, a maximum power conversion efficiency of 0.90% can be achieved with a 10 mCi/cm^2 Ni-63 source. - Highlights: ► Ni-63 is employed as the pure beta radioisotope source. ► The planar p–n junction betavoltaic battery is based on large-grain polysilicon. ► The carrier transport model of large-grain polysilicon is used to determine the diffusion length of minority carriers. ► The average penetration depth was obtained using the Monte Carlo method.
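The Monte Carlo step can be caricatured as sampling beta energies and converting each to a penetration range. Both the crude spectrum approximation and the power-law range relation below (Katz-Penfold-style, with an assumed coefficient) are illustrative stand-ins, not the paper's transport model:

```python
import random

def beta_range_um(E_keV):
    """Illustrative electron energy-range relation in silicon,
    R ~ a * E^1.75; the coefficient a = 0.01 um/keV^1.75 is assumed."""
    return 0.01 * E_keV ** 1.75

def mc_avg_penetration(n, seed=0):
    """Average penetration depth from n sampled beta particles.
    The Ni-63 spectrum (endpoint 66 keV) is crudely approximated by a
    distribution skewed toward low energies."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # 1 - max(u1, u2) is linearly skewed toward low energies
        E = 66.0 * (1.0 - max(rng.random(), rng.random()))
        total += beta_range_um(E)
    return total / n

avg_depth = mc_avg_penetration(50_000)   # on the order of a few microns
```

The average depth from such a calculation is what sets the depletion-region width the abstract refers to: junctions much deeper than it waste few carriers, much shallower ones lose collection efficiency.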
Directory of Open Access Journals (Sweden)
Zheng Yang
2013-01-01
The torsional spring-loaded antibacklash gear, which can improve transmission precision, is widely used in many precision transmission fields, so it is important to investigate its dynamic characteristics. In this paper, a detailed applied-force analysis is completed. Then, defining the starting point of double-gear meshing as the initial position, and according to the meshing characteristics of the antibacklash gear, the single- or double-tooth meshing states of the two gear pairs and the transformation relationship at any moment are determined. Based on this, a nonlinear model of the antibacklash gear with time-varying friction and meshing stiffness is proposed. The influences of friction and of variations in torsional spring stiffness, damping ratio and preload on the dynamic transmission error (DTE) are analyzed by numerical calculation and simulation. The results show that the antibacklash gear can increase the composite meshing stiffness; when the torsional spring stiffness is large enough, the oscillating components of the DTE (ODTE) and the RMS of the DTE (RDTE) tend to a constant value; and the variations of the ODTE and RDTE are not significant unless the preload exceeds a certain value.
Large-scale simulations of plastic neural networks on neuromorphic hardware
Directory of Open Access Journals (Sweden)
James Courtney Knight
2016-04-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
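The event-based strategy mentioned here amounts to updating each synaptic state variable lazily: its exponential decay between spikes has a closed form, so it is brought up to date only when a spike event arrives rather than every time-step. A minimal sketch of such a lazily decayed trace (the time constant is illustrative, and real BCPNN couples several such traces):

```python
import math

class DecayTrace:
    """Lazily updated exponential trace: the value decays as
    exp(-dt / tau), applied analytically only when the trace is read
    or a spike arrives, instead of every simulation time-step."""

    def __init__(self, tau):
        self.tau = tau
        self.value = 0.0
        self.t_last = 0.0

    def update(self, t):
        """Apply the analytic decay from the last event time to t."""
        self.value *= math.exp(-(t - self.t_last) / self.tau)
        self.t_last = t

    def add_spike(self, t, amount=1.0):
        self.update(t)
        self.value += amount

z = DecayTrace(tau=10.0)
z.add_spike(0.0)
z.add_spike(10.0)   # value = exp(-1) + 1
```

Between the two spikes no work is done at all, which is exactly the saving that makes multiple state variables per synapse affordable on a memory- and compute-limited core.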
International Nuclear Information System (INIS)
Colín, Pedro; Vazquez-Semadeni, Enrique; Avila-Reese, Vladimir; Valenzuela, Octavio; Ceverino, Daniel
2010-01-01
We present numerical simulations aimed at exploring the effects of varying the sub-grid physics parameters on the evolution and the properties of the galaxy formed in a low-mass dark matter halo (∼7 x 10^10 h^-1 M_sun at redshift z = 0). The simulations are run within a cosmological setting with a nominal resolution of 218 pc comoving and are stopped at z = 0.43. For simulations that cannot resolve individual molecular clouds, we propose the criterion that the threshold density for star formation, n_SF, should be chosen such that the column density of the star-forming cells equals the threshold value for molecule formation, N ∼ 10^21 cm^-2, or ∼8 M_sun pc^-2. In all of our simulations, an extended old/intermediate-age stellar halo and a more compact younger stellar disk are formed, and in most cases, the halo's specific angular momentum is slightly larger than that of the galaxy, and sensitive to the SF/feedback parameters. We found that a non-negligible fraction of the halo stars are formed in situ in a spheroidal distribution. Changes in the sub-grid physics parameters affect significantly and in a complex way the evolution and properties of the galaxy: (1) lower threshold densities n_SF produce larger stellar effective radii R_e, less peaked circular velocity curves V_c(R), and greater amounts of low-density and hot gas in the disk mid-plane; (2) when stellar feedback is modeled by temporarily switching off radiative cooling in the star-forming regions, R_e increases (by a factor of ∼2 in our particular model), the circular velocity curve becomes flatter, and a complex multi-phase gaseous disk structure develops; (3) a more efficient local conversion of gas mass to stars, measured by a stellar particle mass distribution biased toward larger values, increases the strength of the feedback energy injection, driving outflows and inducing burstier SF histories; (4) if feedback is too strong, gas loss by galactic outflows - which are easier to produce in low
Energy Technology Data Exchange (ETDEWEB)
Yuan, Haomin; Solberg, Jerome; Merzari, Elia; Kraus, Adam; Grindeanu, Iulian
2017-10-01
This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation
A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling
Directory of Open Access Journals (Sweden)
Roger V Hoang
2013-10-01
Full Text Available Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.
International Nuclear Information System (INIS)
Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.
2013-01-01
We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048^3 dark matter particles, 2048^3 gas cells, and 17 billion adaptive rays in an L = 100 Mpc h^-1 box, we show that the density and reionization redshift fields are highly correlated on large scales (≳1 Mpc h^-1). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳2 Gpc h^-1) in order to make mock observations and theoretical predictions
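The bias-filtering step described above can be sketched in Python: transform a density field to Fourier space, multiply by a scale-dependent bias, and transform back to obtain a spatially varying reionization-redshift field. The parametric form b(k) = b0/(1 + k/k0)^alpha, the parameter values, and the function name are illustrative assumptions, not the fitted values or code from the paper:

```python
import numpy as np

def reionization_redshift_field(delta, box_size, z_mean,
                                b0=0.6, k0=0.2, alpha=2.0):
    """Filter a 3D overdensity field with a scale-dependent linear bias
    (assumed form b(k) = b0 / (1 + k/k0)**alpha) to derive a spatially
    varying reionization-redshift field around a mean redshift z_mean."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    # Wavenumber magnitude grid matching the rfftn layout
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kr = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1, k1, kr, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    bias = b0 / (1.0 + kmag / k0) ** alpha
    # Bias-filtered fluctuations about the mean reionization redshift
    dz = np.fft.irfftn(delta_k * bias, s=delta.shape)
    return z_mean + (1.0 + z_mean) * dz
```

Because the bias acts multiplicatively in Fourier space, a mean-zero density field yields a redshift field whose mean is exactly z_mean.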
Large-scale agent-based social simulation : A study on epidemic prediction and control
Zhang, M.
2016-01-01
Large-scale agent-based social simulation is gradually proving to be a versatile methodological approach for studying human societies, which could make contributions from policy making in social science, to distributed artificial intelligence and agent technology in computer science, and to theory
Simulations of muon-induced neutron flux at large depths underground
International Nuclear Information System (INIS)
Kudryavtsev, V.A.; Spooner, N.J.C.; McMillan, J.E.
2003-01-01
The production of neutrons by cosmic-ray muons at large depths underground is discussed. The most recent versions of the muon propagation code MUSIC, and particle transport code FLUKA are used to evaluate muon and neutron fluxes. The results of simulations are compared with experimental data
Comparison of Large Eddy Simulations of a convective boundary layer with wind LIDAR measurements
DEFF Research Database (Denmark)
Pedersen, Jesper Grønnegaard; Kelly, Mark C.; Gryning, Sven-Erik
2012-01-01
Vertical profiles of the horizontal wind speed and of the standard deviation of vertical wind speed from Large Eddy Simulations of a convective atmospheric boundary layer are compared to wind LIDAR measurements up to 1400 m. Fair agreement regarding both types of profiles is observed only when...
Synthetic atmospheric turbulence and wind shear in large eddy simulations of wind turbine wakes
DEFF Research Database (Denmark)
Keck, Rolf-Erik; Mikkelsen, Robert Flemming; Troldborg, Niels
2014-01-01
, superimposed on top of a mean deterministic shear layer consistent with that used in the IEC standard for wind turbine load calculations. First, the method is evaluated by running a series of large-eddy simulations in an empty domain, where the imposed turbulence and wind shear is allowed to reach a fully...
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
Wind Energy-Related Atmospheric Boundary Layer Large-Eddy Simulation Using OpenFOAM: Preprint
Energy Technology Data Exchange (ETDEWEB)
Churchfield, M.J.; Vijayakumar, G.; Brasseur, J.G.; Moriarty, P.J.
2010-08-01
This paper develops and evaluates the performance of a large-eddy simulation (LES) solver in computing the atmospheric boundary layer (ABL) over flat terrain under a variety of stability conditions, ranging from shear driven (neutral stratification) to moderately convective (unstable stratification).
Hernandez Perez, F.E.
2011-01-01
Hydrogen (H2) enrichment of hydrocarbon fuels in lean premixed systems is desirable since it can lead to a progressive reduction in greenhouse-gas emissions, while paving the way towards pure hydrogen combustion. In recent decades, large-eddy simulation (LES) has emerged as a promising tool to
Vreman, A.W.; Oijen, van J.A.; Goey, de L.P.H.; Bastiaans, R.J.M.
2009-01-01
Large-eddy simulation (LES) of turbulent combustion with premixed flamelets is investigated in this paper. The approach solves the filtered Navier-Stokes equations supplemented with two transport equations, one for the mixture fraction and another for a progress variable. The LES premixed flamelet
Large-eddy simulation with accurate implicit subgrid-scale diffusion
B. Koren (Barry); C. Beets
1996-01-01
A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is
Investigation of wake interaction using full-scale lidar measurements and large eddy simulation
DEFF Research Database (Denmark)
Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels
2016-01-01
dynamics flow solver, using large eddy simulation and fully turbulent inflow. The rotors are modelled using the actuator disc technique. A mutual validation of the computational fluid dynamics model with the measurements is conducted for a selected dataset, where wake interaction occurs. This validation...
Large shear deformation of particle gels studied by Brownian Dynamics simulations
Rzepiela, A.A.; Opheusden, van J.H.J.; Vliet, van T.
2004-01-01
Brownian Dynamics (BD) simulations have been performed to study structure and rheology of particle gels under large shear deformation. The model incorporates soft spherical particles, and reversible flexible bond formation. Two different methods of shear deformation are discussed, namely affine and
DEFF Research Database (Denmark)
Gebhardt, Cristian; Veluri, Badrinath; Preidikman, Sergio
2010-01-01
In this work an aeroelastic model that describes the interaction between aerodynamics and drivetrain dynamics of a large horizontal–axis wind turbine is presented. Traditional designs for wind turbines are based on the output of specific aeroelastic simulation codes. The output of these codes giv...
A large-signal dynamic simulation for the series resonant converter
King, R. J.; Stuart, T. A.
1983-01-01
A simple nonlinear discrete-time dynamic model for the series resonant dc-dc converter is derived using approximations appropriate to most power converters. This model is useful for the dynamic simulation of a series resonant converter using only a desktop calculator. The model is compared with a laboratory converter for a large transient event.
Largenet2: an object-oriented programming library for simulating large adaptive networks.
Zschaler, Gerd; Gross, Thilo
2013-01-15
The largenet2 C++ library provides an infrastructure for the simulation of large dynamic and adaptive networks with discrete node and link states. The library is released as free software. It is available at http://biond.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License. gerd@biond.org
Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing
Directory of Open Access Journals (Sweden)
Qiang Liu
2018-05-01
Full Text Available Computing speed is a significant issue in large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.
Large-eddy simulation of ethanol spray combustion using a finite-rate combustion model
Energy Technology Data Exchange (ETDEWEB)
Li, K.; Zhou, L.X. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics; Chan, C.K. [Hong Kong Polytechnic Univ. (China). Dept. of Applied Mathematics
2013-07-01
Large-eddy simulation of spray combustion is developing rapidly, but the combustion models are seldom validated against detailed experimental data. In this paper, large-eddy simulation of ethanol-air spray combustion was carried out using an Eulerian-Lagrangian approach, a subgrid-scale kinetic energy stress model, and a finite-rate combustion model. The simulation results are validated in detail against experiments. The statistically averaged temperature obtained by LES is in agreement with the experimental results in most regions. The instantaneous LES results show the coherent structures of the shear region near the high-temperature flame zone and the fuel vapor concentration map, indicating that the droplets are concentrated in this shear region. The droplet sizes are found to be in the range of 20-100 μm. The instantaneous temperature map shows the close interaction between the coherent structures and the combustion reaction.
Coupled large-eddy simulation of thermal mixing in a T-junction
International Nuclear Information System (INIS)
Kloeren, D.; Laurien, E.
2011-01-01
Analyzing thermal fatigue due to thermal mixing in T-junctions is part of the safety assessment of nuclear power plants. Results of two large-eddy simulations of mixing flow in a T-junction, with coupled and adiabatic boundary conditions, are presented and compared. The temperature difference is set to 100 K, which leads to strong stratification of the flow. The main and the branch pipe intersect horizontally in this simulation. The flow is characterized by a steady wavy pattern of stratification and temperature distribution. The coupled solution approach shows greatly reduced temperature fluctuations in the near-wall region due to the thermal inertia of the wall. A conjugate heat transfer approach is necessary in order to simulate unsteady heat transfer accurately for large inlet temperature differences. (author)
Large-eddy simulation of a turbulent piloted methane/air diffusion flame (Sandia flame D)
International Nuclear Information System (INIS)
Pitsch, H.; Steiner, H.
2000-01-01
The Lagrangian Flamelet Model is formulated as a combustion model for large-eddy simulations of turbulent jet diffusion flames. The model is applied in a large-eddy simulation of a piloted partially premixed methane/air diffusion flame (Sandia flame D). The results of the simulation are compared to experimental data of the mean and RMS of the axial velocity and the mixture fraction and the unconditional and conditional averages of temperature and various species mass fractions, including CO and NO. All quantities are in good agreement with the experiments. The results indicate, in accordance with experimental findings, that regions of high strain appear in layer-like structures, which are directed inwards and tend to align with the reaction zone, where the turbulence is fully developed. The analysis of the conditional temperature and mass fractions reveals a strong influence of the partial premixing of the fuel. (c) 2000 American Institute of Physics
Large-scale particle simulations in a virtual-memory computer
International Nuclear Information System (INIS)
Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.
1982-08-01
Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
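The sorting strategy described above can be sketched in Python with NumPy: reorder the particle arrays so that particles sharing a grid cell sit adjacently in memory, which keeps charge accumulation and the particle push from touching widely scattered (and potentially paged-out) addresses. The 2D cell indexing and function name are illustrative assumptions, not the original code:

```python
import numpy as np

def sort_particles_by_cell(x, y, vx, vy, cell_size, nx):
    """Reorder particle arrays so that particles in the same grid cell
    are adjacent in memory, reducing random access to slow (paged)
    memory during charge accumulation and the particle push."""
    ix = np.minimum((x / cell_size).astype(int), nx - 1)
    iy = np.minimum((y / cell_size).astype(int), nx - 1)
    cell = ix * nx + iy                      # linear cell index
    order = np.argsort(cell, kind="stable")  # stable keeps intra-cell order
    return x[order], y[order], vx[order], vy[order]
```

Only a nominal amount of sorting is needed in practice, since particles drift between cells slowly relative to the sort interval.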
Tang, G.; Bartlein, P. J.
2012-01-01
Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduced development of the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p 0.46, p 0.52) with observed values over the years 1982-2006, respectively. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates better monthly stream flow in winter and early spring by incorporating effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land-covers into a DGVM for simulating large spatial scale land surface water balance. LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model
Directory of Open Access Journals (Sweden)
Xin Wang
2012-01-01
Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling of a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. The large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.
International Nuclear Information System (INIS)
Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.
2012-01-01
This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model accounting for the multicomponent - multi-scale nature of concrete to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phase. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
Directory of Open Access Journals (Sweden)
Nobuaki Kimura
2014-01-01
Full Text Available Severe rainstorms have occurred more frequently in Taiwan over the last decade. To understand the flood characteristics of a local region under climate change, a hydrological model simulation was conducted for the Tsengwen Reservoir watershed. The model employed was the Integrated Flood Analysis System (IFAS), which has a conceptual, distributed rainfall-runoff analysis module and a GIS data-input function. The high-resolution rainfall data for flood simulation was categorized into three terms: 1979 - 2003 (Present), 2015 - 2039 (Near-future), and 2075 - 2099 (Future), provided by the Meteorological Research Institute atmospheric general circulation model (MRI-AGCM). Ten extreme rainfall (top ten) events were selected for each term in descending order of total precipitation volume. Due to the small watershed area, the MRI-AGCM3.2S data was downsized into higher-resolution data using the Weather Research and Forecasting Model. The simulated discharges revealed that most of the Near-future and Future peaks caused by extreme rainfall increased compared to the Present peak. These ratios were 0.8 - 1.6 (Near-future/Present) and 0.9 - 2.2 (Future/Present), respectively. Additionally, we evaluated how these future discharges would affect the reservoir's flood control capacity, specifically the excess water volume required to be stored while maintaining dam releases up to the dam's spillway capacity or the discharge peak design for flood prevention. The results for the top ten events show that the excess water for the Future term exceeded the reservoir's flood control capacity and was approximately 79.6 - 87.5% of the total reservoir maximum capacity for the discharge peak design scenario.
Convergence study of global meshing on enamel-cement-bracket finite element model
Samshuri, S. F.; Daud, R.; Rojan, M. A.; Basaruddin, K. S.; Abdullah, A. B.; Ariffin, A. K.
2017-09-01
This paper presents a meshing convergence analysis of a finite element (FE) model used to simulate enamel-cement-bracket fracture. Three different materials are involved in the interface fracture studied here. The complex behavior of interface fracture due to stress concentration is the reason a well-constructed meshing strategy is needed. In FE analysis, mesh size is a critical factor that influences the accuracy and computational time of the analysis. The convergence study uses a meshing scheme involving a critical area (CA) and a non-critical area (NCA) to ensure that optimum mesh sizes are acquired for this FE model. For NCA meshing, the areas of interest are the back of the enamel, the bracket ligature groove and the bracket wing. For CA meshing, the areas of interest are the enamel area close to the cement layer, the cement layer and the bracket base. The constant NCA mesh sizes tested are 1 and 0.4; the constant CA mesh sizes tested are 0.4 and 0.1. The manipulated variables are randomly selected and must obey the rule that the NCA mesh size is larger than the CA mesh size. This study employs first principal stresses because of the brittle failure nature of the materials used. The best mesh sizes are selected according to a convergence error analysis. Results show that constant-CA meshes are more stable than constant-NCA meshes. A smaller constant CA mesh of 0.05 was then tested to assess the accuracy of finer meshing, but gave unpromising results as the errors increased. Thus, a constant CA mesh of 0.1 with NCA meshes of 0.15 to 0.3 is the most stable combination, as the errors in this region are lowest. A convergence test was conducted on three selected coarse, medium and fine meshes in the range of NCA mesh sizes 0.15 to 0.3, with the CA mesh size held constant at 0.1. The result shows that at the coarse mesh size of 0.3 the error is 0.0003%, compared with the 3% acceptable error. Hence, the global mesh converges at a CA mesh size of 0.1 and an NCA mesh size of 0.15 for this model.
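A minimal sketch of the convergence-error criterion used in such mesh studies, assuming the error is measured as the relative change in first principal stress between successive refinements and compared against an acceptance threshold (3% in the study); the function names and data are hypothetical:

```python
def convergence_errors(stresses):
    """Percent relative change in first principal stress between
    successive mesh refinements (coarsest first, finest last)."""
    return [abs(b - a) / abs(b) * 100.0
            for a, b in zip(stresses, stresses[1:])]

def is_converged(stresses, tol_percent=3.0):
    """The mesh is taken as converged when the change between the two
    finest refinements drops below the acceptance threshold."""
    errs = convergence_errors(stresses)
    return bool(errs) and errs[-1] <= tol_percent
```

For example, stresses of [10.0, 10.5, 10.501] MPa across three refinements give errors of about 4.8% and then 0.01%, so the finest pair satisfies the 3% criterion.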
3D unstructured mesh discontinuous finite element hydro
International Nuclear Information System (INIS)
Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.
1995-01-01
The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss details of how the Roe-averaged second-order convection is applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large-scale projects such as ICF3D